Patent 11943069
DETAILED DESCRIPTION To make the purposes, technical solutions and advantages of the present disclosure clearer, the present disclosure will be further described below in detail in combination with implementations and accompanying drawings. Herein, schematic implementations of the present disclosure and descriptions thereof are adopted to explain the present disclosure and are not intended to limit the present disclosure. The embodiments of the present disclosure provide a data transmission method and device, a base station, a terminal and a storage medium. The present disclosure will be described below in detail in combination with the accompanying drawings. FIG. 1 is a structure diagram of a communication system according to an exemplary embodiment. As shown in FIG. 1, the communication system includes a base station 101 and a terminal 102. The base station 101 may be connected with the terminal 102 through a communication network. In a communication process, the base station 101 may transmit communication data to the terminal 102, and the terminal 102 may receive the communication data, detect whether the communication data is correctly received or not, acquire feedback data based on a detection result and transmit the feedback data to the base station 101, such that the base station 101 can determine, based on the feedback data, whether to retransmit the communication data or continue to transmit a next piece of communication data.
In the embodiments of the present disclosure, to apply a process in which the terminal 102 transmits feedback data to an unlicensed channel and thereby increase frequency-domain resources for data transmission, the base station 101 may transmit a control instruction to the terminal 102, the control instruction carrying indication information and the indication information indicating multiple resource positions configured for the terminal 102 to transmit the feedback data in the unlicensed channel, and then the terminal 102 may transmit the feedback data through at least one resource position among the multiple resource positions in a case of the unlicensed channel being occupied by the terminal. The base station 101 may transmit the communication data to the terminal 102 through a licensed channel, or may transmit the communication data to the terminal 102 through an unlicensed channel after successfully occupying the unlicensed channel. The unlicensed channel adopted when the terminal 102 transmits feedback data may be the same as or different from a channel adopted when the base station 101 transmits communication data. FIG. 2 is a flow chart of a data transmission method according to an exemplary embodiment. The method is implemented by a base station. As shown in FIG. 2, the method includes the following operations. In Operation 201, a control instruction is transmitted to a terminal, the control instruction carrying indication information and the indication information indicating multiple resource positions configured for the terminal to transmit feedback data in an unlicensed channel. In Operation 202, the feedback data transmitted by the terminal is received, the feedback data being transmitted by the terminal through at least one resource position among the multiple resource positions in a case of the unlicensed channel being occupied by the terminal and the feedback data indicating whether the terminal has correctly received communication data from the base station or not.
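The base-station flow of Operations 201 and 202 can be sketched as follows. This is a minimal illustration only; the class and field names (`BaseStation`, `ControlInstruction`, `indication_information`) are assumptions for the sketch, not identifiers from the disclosure:

```python
# Illustrative sketch of the base-station side of FIG. 2 (Operations 201 and 202).
# All names and the (slot, offset) encoding of a resource position are assumptions.

class ControlInstruction:
    """Carries indication information: resource positions for feedback in an unlicensed channel."""
    def __init__(self, resource_positions):
        self.indication_information = list(resource_positions)

class BaseStation:
    def __init__(self, resource_positions):
        self.resource_positions = list(resource_positions)

    def transmit_control_instruction(self):
        # Operation 201: indicate multiple resource positions to the terminal.
        return ControlInstruction(self.resource_positions)

    def receive_feedback(self, feedback):
        # Operation 202: based on the feedback, decide whether to retransmit
        # the communication data or continue with the next piece.
        return "transmit_next" if feedback["ack"] else "retransmit"

bs = BaseStation(resource_positions=[(slot, 0) for slot in (4, 5, 6)])
instr = bs.transmit_control_instruction()
action = bs.receive_feedback({"ack": False})
```

Offering several positions in the instruction is what later gives the terminal more than one transmission opportunity.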
According to the method provided in the embodiment of the present disclosure, a base station may transmit a control instruction to a terminal, the control instruction carrying indication information and the indication information indicating multiple resource positions configured for the terminal to transmit feedback data in an unlicensed channel, so that multiple opportunities are provided for the terminal to transmit feedback data. Under the condition that the terminal occupies the unlicensed channel, the feedback data may be transmitted through at least one resource position among the multiple resource positions. Therefore, the situation that a terminal successfully occupies an unlicensed channel but misses resource positions and thus cannot transmit feedback data can be avoided, a success rate of transmitting the feedback data by the terminal through the unlicensed channel is increased, and furthermore, the reliability of data transmission is improved. In a possible implementation, the operation that the indication information is transmitted to the terminal may include that: a scheduling instruction is transmitted to the terminal. In another possible implementation, the indication information may be an index identifier. The method may further include that: an index table is transmitted to the terminal, the index table including multiple resource positions corresponding to each index identifier and the index table being configured to query corresponding multiple resource positions based on any index identifier. In another possible implementation, the communication data may include multiple data sets. The operation that the feedback data transmitted by the terminal is received may include that: the feedback data corresponding to the multiple data sets is received through the multiple resource positions.
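The index-table variant described above amounts to a lookup keyed by the index identifier. A minimal sketch, with table contents and the (slot, resource-block offset) encoding chosen purely for illustration:

```python
# Hypothetical index table: each index identifier maps to multiple resource
# positions in the unlicensed channel. Entries are illustrative, not from the patent.
index_table = {
    0: [(4, 0), (5, 0), (6, 0)],
    1: [(4, 8), (6, 8), (8, 8)],
}

def resolve_resource_positions(index_identifier):
    # Query the index table with the index identifier carried as indication information.
    return index_table[index_identifier]

positions = resolve_resource_positions(1)
```

Sending a short identifier instead of the full position list keeps the control instruction compact, which is the apparent motivation for this variant.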
In another possible implementation, the indication information may be in an information field at a fixed position in the control instruction or in a configurable information field of the control instruction. A bit length of the indication information may be fixed or may be determined based on a configuration. In another possible implementation, the indication information may include time-domain information and frequency-domain information of the multiple resource positions; or, the indication information may include time-domain information of the multiple resource positions, and frequency-domain information of the multiple resource positions may be determined according to a first preset rule; or, the indication information may include frequency-domain information of the multiple resource positions, and time-domain information of the multiple resource positions may be determined according to a second preset rule. FIG. 3 is a flow chart of a data transmission method according to an exemplary embodiment. The method is implemented by a terminal. As shown in FIG. 3, the data transmission method includes the following operations. In Operation 301, a control instruction transmitted by a base station is received, the control instruction carrying indication information and the indication information indicating multiple resource positions configured for the terminal to transmit feedback data in an unlicensed channel. In Operation 302, the multiple resource positions are determined based on the indication information. In Operation 303, the feedback data is transmitted through at least one resource position among the multiple resource positions in a case of the unlicensed channel being occupied by the terminal, the feedback data indicating whether the terminal has correctly received communication data from the base station or not.
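The terminal side of FIG. 3 (Operations 301 to 303) can be sketched as follows; function names, the dictionary encoding of the control instruction, and the use of a slot index as the first element of a position are illustrative assumptions:

```python
# Illustrative sketch of the terminal side of FIG. 3.

def determine_resource_positions(control_instruction):
    # Operation 302: read the indication information out of the control instruction.
    return control_instruction["indication_information"]

def transmit_feedback(resource_positions, channel_occupied, successful_occupation_time):
    # Operation 303: feedback can only go out while the terminal occupies the
    # unlicensed channel, and only through positions at or after the successful
    # occupation time (position[0] is taken to be a slot index here).
    if not channel_occupied:
        return []
    return [p for p in resource_positions if p[0] >= successful_occupation_time]

instruction = {"indication_information": [(4, 0), (6, 0), (8, 0)]}
positions = determine_resource_positions(instruction)
usable = transmit_feedback(positions, channel_occupied=True, successful_occupation_time=5)
```

Note how configuring three positions leaves two usable ones even though the first was missed while contending for the channel.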
According to the method provided in the embodiment of the present disclosure, a terminal may receive a control instruction from a base station, the control instruction carrying indication information and the indication information indicating multiple resource positions configured for the terminal to transmit feedback data in an unlicensed channel, so that multiple opportunities are provided for the terminal to transmit feedback data. Under the condition that the terminal occupies the unlicensed channel, the feedback data may be transmitted through at least one resource position among the multiple resource positions. Therefore, the situation that a terminal successfully occupies an unlicensed channel but misses resource positions and thus cannot transmit feedback data may be avoided. A success rate of transmitting feedback data by a terminal through an unlicensed channel is increased, and furthermore, the reliability of data transmission is improved. In a possible implementation, the operation that the control instruction transmitted by the base station is received may include that: a scheduling instruction transmitted by the base station is received. In another possible implementation, the indication information may be an index identifier. The operation that the multiple resource positions are determined based on the indication information may include that: an index table is queried based on the index identifier to determine the multiple resource positions corresponding to the index identifier, the index table including multiple resource positions corresponding to each index identifier.
In another possible implementation, the operation that the feedback data is transmitted through the at least one resource position among the multiple resource positions in the case of the unlicensed channel being occupied by the terminal may include that: the feedback data is transmitted through one resource position after a successful occupation time among the multiple resource positions in the case of the unlicensed channel being occupied by the terminal; or, the feedback data is transmitted through a preset number of resource positions after a successful occupation time among the multiple resource positions respectively; or, the feedback data is transmitted through each resource position after a successful occupation time among the multiple resource positions respectively. In another possible implementation, the communication data may include multiple data sets. The operation that the feedback data is transmitted through the at least one resource position among the multiple resource positions in the case of the unlicensed channel being occupied by the terminal may include that: the feedback data corresponding to the multiple data sets is transmitted through multiple resource positions after a successful occupation time in the unlicensed channel respectively in the case of the unlicensed channel being occupied by the terminal. In another possible implementation, the operation that the feedback data is transmitted through the at least one resource position among the multiple resource positions in the case of the unlicensed channel being occupied by the terminal may include that: in response to acquiring the feedback data, multiple resource positions within a time bucket after present time and an unlicensed channel where the multiple resource positions are located are determined; and the feedback data is transmitted through a resource position in the unlicensed channel in the case of the unlicensed channel being occupied by the terminal.
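The three transmission strategies above (one position, a preset number of positions, or every remaining position after the successful occupation time) can be sketched in one selector. The strategy names, the slot-index encoding, and the default preset number are assumptions for illustration:

```python
# Sketch of the three strategies for choosing resource positions after the
# successful occupation time; position[0] is taken to be a slot index.

def select_positions(positions, occupation_time, strategy, preset_number=2):
    available = [p for p in positions if p[0] >= occupation_time]
    if strategy == "one":
        return available[:1]              # one resource position after occupation
    if strategy == "preset_number":
        return available[:preset_number]  # a preset number of repeated copies
    return available                      # every remaining resource position

positions = [(3, 0), (5, 0), (7, 0), (9, 0)]
one = select_positions(positions, occupation_time=4, strategy="one")
some = select_positions(positions, occupation_time=4, strategy="preset_number")
every = select_positions(positions, occupation_time=4, strategy="all")
```

The trade-off is the one the text describes: a single position saves transmission time, while repeated copies guard against loss of the feedback data.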
In another possible implementation, the indication information may be in an information field at a fixed position in the control instruction or in a configurable information field of the control instruction. A bit length of the indication information may be fixed or may be determined based on a configuration. In another possible implementation, the indication information may include time-domain information and frequency-domain information of the multiple resource positions; or, the indication information may include time-domain information of the multiple resource positions, and frequency-domain information of the multiple resource positions may be determined according to a first preset rule; or, the indication information may include frequency-domain information of the multiple resource positions, and the time-domain information of the multiple resource positions may be determined according to a second preset rule. In related art, a Hybrid Automatic Repeat reQuest (HARQ) mechanism may be applied to data transmission implemented between a base station and a terminal through a licensed channel. However, with the gradual increase of service requirements, licensed channels already cannot meet transmission requirements. Data transmission carried out through an unlicensed channel is proposed. A base station or a terminal, when intending to perform data transmission through an unlicensed channel, is required to contend for the unlicensed channel. Namely, a state of the unlicensed channel is detected first. The unlicensed channel may be occupied when the unlicensed channel is in an idle state. Then, data may be transmitted through the unlicensed channel. In a case that the terminal fails to occupy the unlicensed channel, data cannot be transmitted. Therefore, the HARQ mechanism is not directly available for an unlicensed channel. In the embodiments of the present disclosure, the HARQ mechanism is applied to an unlicensed channel.
A base station may configure multiple resource positions in the unlicensed channel for a terminal to transmit feedback data, namely multiple opportunities are provided for the terminal to transmit the feedback data. In such a manner, even though the terminal at first fails to occupy the unlicensed channel, the feedback data still can be transmitted through a resource position after successful occupation of the unlicensed channel. Therefore, the success rate of transmitting feedback data is increased. For details of the specific process, refer to the following method embodiments. FIG. 4 is a flow chart of a data transmission method according to an exemplary embodiment. A base station and a terminal are interaction bodies. Descriptions are made in the embodiment taking, as an example, a case in which the indication information includes multiple resource positions. As shown in FIG. 4, the method includes the following operations. In Operation 401, the base station transmits a scheduling instruction to the terminal. In the embodiment of the present disclosure, the base station, when performing data transmission with the terminal, may transmit communication data to the terminal, and the terminal then may receive the communication data. In such a case, for ensuring the reliability of the data transmission, the terminal may detect whether the communication data is correctly received or not, acquire feedback data based on a detection result and transmit the feedback data to the base station. The base station may receive the feedback data and determine, based on the feedback data, whether to retransmit the communication data or continue to transmit a next piece of communication data. The feedback data may indicate whether the terminal has correctly received the communication data or not. In a case that the terminal does not correctly receive the communication data, the feedback data may also indicate a position of error data in the communication data, an error cause and the like.
For making it convenient for the terminal to transmit feedback data, the base station may transmit a scheduling instruction to the terminal, the scheduling instruction carrying indication information and the indication information indicating multiple resource positions configured for the terminal to transmit the feedback data in an unlicensed channel. The terminal, when receiving the indication information, may determine the multiple resource positions configured to transmit the feedback data based on the indication information. Each resource position in time domain may be a slot, a subframe, a frame or another time-domain unit. In addition, the scheduling instruction may also carry a resource position configured for the base station to transmit the communication data. In such a case, the base station subsequently transmits the communication data to the terminal through the resource position, and the terminal may determine the resource position based on the scheduling instruction and receive the communication data from the base station through the resource position. For example, the resource positions for transmitting the scheduling instruction, the communication data and the feedback data may be as shown in FIG. 5. It is to be noted that descriptions are made in the embodiment of the present disclosure by taking, as an example, the condition that the base station carries the indication information in the scheduling instruction for transmission. In another embodiment, the base station may transmit a control instruction to the terminal, the control instruction carrying the indication information. The control instruction may be the scheduling instruction, or may also be a control instruction of another type. In a possible implementation, the base station, when transmitting a control instruction, may transmit the control instruction through a licensed channel.
Or the base station may occupy an unlicensed channel and transmit the control instruction through the unlicensed channel. In another possible implementation, the base station, when transmitting the control instruction, may add the indication information to an information field at a fixed position in the control instruction, so that the indication information is in the information field at the fixed position in the control instruction. The terminal, when receiving the control instruction, can acquire the indication information more conveniently and quickly from the information field at the fixed position in the control instruction. A bit length of the indication information may be determined based on a configuration or may be fixed. In another possible implementation, the base station may flexibly configure an information field for the indication information. When transmitting the control instruction, the base station may add the indication information to the configured information field in the control instruction, so that the indication information is in the configurable information field of the control instruction. The terminal, when receiving the control instruction, can acquire the indication information from the presently configured information field. The bit length of the indication information may be fixed or may be determined based on the configuration. In such manners, an information field for the indication information may be extended in the control instruction to transmit the indication information and make it convenient for the base station to flexibly configure for the terminal the resource positions for transmitting feedback data. In Operation 402, the terminal receives the scheduling instruction and determines multiple resource positions based on indication information.
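The fixed-position variant above can be pictured as slicing a field out of the instruction's bit string. The field offset, the field width, and the bit values here are entirely made-up assumptions; the disclosure does not specify a layout:

```python
# Hypothetical layout: the indication information occupies a fixed-position
# field of the control instruction. Offset and width are illustrative only.

INDICATION_OFFSET = 4   # assumed bit offset of the fixed-position field
INDICATION_WIDTH = 3    # assumed fixed bit length of the indication information

def extract_indication(control_instruction_bits):
    # Slice the indication field out of the instruction's bit string.
    return control_instruction_bits[INDICATION_OFFSET:INDICATION_OFFSET + INDICATION_WIDTH]

bits = "1100" + "101" + "0"   # header, indication field, padding (all illustrative)
indication = extract_indication(bits)
index_identifier = int(indication, 2)
```

A fixed offset is what lets the terminal find the field "more conveniently and quickly": no per-instruction parsing of a configurable layout is needed.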
The terminal, after receiving the scheduling instruction, may acquire the indication information in the scheduling instruction and determine the multiple resource positions configured to transmit feedback data based on the indication information. In the embodiment of the present disclosure, the condition that the indication information includes the multiple resource positions is taken as an example. The terminal, after acquiring the indication information in the scheduling instruction, may directly determine the multiple resource positions. In a possible implementation, the indication information may include time-domain information and frequency-domain information of the multiple resource positions, and corresponding resource positions may be determined based on the time-domain information and the frequency-domain information. The time-domain information represents a time bucket where the resource position is. For example, the time-domain information may be a slot, symbol, subframe or frame, etc., where the resource position is. The frequency-domain information represents a frequency-domain range where the resource position is, and the unit of the frequency-domain range may be subcarrier, Resource Block (RB), etc. For the multiple resource positions, different resource positions may have the same or different time-domain information and the same or different frequency-domain information as long as it is ensured that either the time-domain information or the frequency-domain information is not completely the same. When the indication information includes information about multiple resource positions in the same unlicensed channel and the multiple resource positions have different time-domain information, the terminal may transmit the feedback data through resource positions in the unlicensed channel at multiple time points. 
Or, when the indication information includes information about multiple resource positions in the same time bucket and the multiple resource positions have different frequency-domain information and are in different unlicensed channels, the terminal, when intending to transmit feedback data at a certain time point, may select a resource position for transmitting the feedback data based on the unlicensed channels where the multiple resource positions are and a state of each of the unlicensed channels. In another possible implementation, the indication information may include the time-domain information of the multiple resource positions, and the frequency-domain information of the multiple resource positions may be determined according to a first preset rule. The first preset rule is a rule of determining frequency-domain information based on time-domain information of a resource position. The first preset rule may be predefined by the base station and the terminal, or may also be pre-configured for the terminal by the base station through signaling. The first preset rule may include preset frequency-domain information to specify an unlicensed channel configured for the terminal to transmit feedback data. Or, the first preset rule may include a fixed offset to instruct frequency-domain information of a resource position to be determined every time based on the offset. In such a case, the frequency-domain information of the resource position is increased or decreased by the fixed offset every time according to a sequence of the time-domain information of the multiple resource positions, thereby determining the frequency-domain information of each resource position. For example, the band of each resource position and the band of the next resource position differ by a fixed 5 MHz. Or, the first preset rule may also be calculating frequency-domain information of a resource position based on time-domain information of the resource position.
A calculation manner may be predefined by the base station and the terminal or configured for the terminal by the base station through signaling. For example, the first preset rule may include a corresponding relationship between time-domain information and frequency-domain information. Every time when time-domain information of a resource position is acquired, the corresponding relationship may be queried to acquire corresponding frequency-domain information. In another possible implementation, the indication information may include the frequency-domain information of the multiple resource positions, and the time-domain information of the multiple resource positions may be determined according to a second preset rule. The second preset rule is a rule of determining time-domain information based on frequency-domain information of a resource position. The second preset rule may be predefined by the base station and the terminal, or may also be pre-configured for the terminal by the base station through signaling. The second preset rule may include preset time-domain information to specify a time bucket configured for the terminal to transmit feedback data. Or, the second preset rule may include a fixed offset to instruct time-domain information of a resource position to be determined every time based on the offset. In such a case, the time-domain information of the resource position is increased or decreased by the fixed offset every time according to a sequence of the frequency-domain information of the multiple resource positions, thereby determining the time-domain information of each resource position. For example, the time bucket of each resource position and the time bucket of the next resource position differ by a fixed 0.1 seconds. Or, the second preset rule may also be calculating time-domain information of a resource position based on frequency-domain information of a resource position.
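The fixed-offset variants of the two preset rules can be sketched together. The offsets mirror the examples in the text (5 MHz between consecutive bands, 0.1 s between consecutive time buckets); the base frequency and base time are illustrative assumptions:

```python
# Fixed-offset variants of the first and second preset rules.

def freq_from_time_order(base_freq_mhz, n_positions, offset_mhz=5.0):
    # First preset rule: the band of each position steps by a fixed offset
    # following the sequence of the time-domain information.
    return [base_freq_mhz + i * offset_mhz for i in range(n_positions)]

def time_from_freq_order(base_time_s, n_positions, offset_s=0.1):
    # Second preset rule: the time bucket of each position steps by a fixed
    # offset following the sequence of the frequency-domain information.
    return [round(base_time_s + i * offset_s, 3) for i in range(n_positions)]

freqs = freq_from_time_order(5180.0, 3)   # base frequency is an assumed example
times = time_from_freq_order(0.0, 3)
```

Either rule halves what the indication information must carry: only one domain is signaled, and the other is derived.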
A calculation manner may be predefined by the base station and the terminal or configured for the terminal by the base station through signaling. For example, the second preset rule may include a corresponding relationship between frequency-domain information and time-domain information. Every time when frequency-domain information of a resource position is acquired, the corresponding relationship may be queried to acquire corresponding time-domain information. In Operation 403, the base station transmits communication data to the terminal. The resource position for the communication data may be carried in the scheduling instruction by the base station, and the communication data may be transmitted to the terminal by the base station through the resource position. The terminal may receive the communication data through the resource position. In addition, the resource position for the communication data may be carried in another control instruction by the base station, that control instruction may be transmitted by the base station to the terminal, and the communication data may be transmitted to the terminal through the resource position. In a possible implementation, the base station may divide the communication data into multiple data sets and transmit the multiple data sets to the terminal through one or more resource positions. When the communication data is divided into the multiple data sets, a sequence identifier may be added to each data set, the sequence identifier representing a sequence of a data set in the communication data. In Operation 404, the terminal receives the communication data, detects whether the communication data is correctly received or not and acquires feedback data based on a detection result. In a possible implementation, when the communication data includes the multiple data sets, the terminal may detect whether each data set in the communication data is correctly received or not, thereby obtaining feedback data of each data set.
In such a case, a sequence identifier matched with a data set may be added to each piece of feedback data, so that the acquired feedback data can clearly indicate whether each data set is successfully transmitted or not, and the base station, after acquiring the feedback data, may pertinently process each data set. In Operation 405, the feedback data is transmitted through at least one resource position among the multiple resource positions in a case of an unlicensed channel being occupied. The terminal, after acquiring the feedback data, may detect whether the unlicensed channel including the multiple resource positions configured to transmit the feedback data is in the idle state or not. When the unlicensed channel is in the idle state, the terminal may occupy the unlicensed channel and transmit the feedback data through at least one resource position among the multiple resource positions in the case of the unlicensed channel being occupied by the terminal. When the unlicensed channel is in a busy state, the terminal cannot occupy the unlicensed channel. In such a case, the terminal may keep waiting until the unlicensed channel is changed to the idle state and transmit the feedback data through at least one resource position among the multiple resource positions in the case of the unlicensed channel being occupied by the terminal. Operation 405 may include any one of the following Operations 4051 to 4055. In Operation 4051, the feedback data is transmitted through one resource position after a successful occupation time among the multiple resource positions in the case of the unlicensed channel being occupied by the terminal. The successful occupation time refers to a time when the terminal successfully occupies the unlicensed channel.
The base station may provide multiple resource positions for the terminal to ensure, as much as possible, that there are still some resource positions after the successful occupation time, and the terminal may transmit the feedback data through one of the resource positions after the successful occupation time. For example, the feedback data may be transmitted through a first resource position after the successful occupation time to save transmission time. In Operation 4052, the feedback data is transmitted through a preset number of resource positions after a successful occupation time among the multiple resource positions respectively. When there are multiple resource positions after the successful occupation time, the feedback data may be transmitted through the preset number of resource positions after the successful occupation time respectively, namely multiple copies of the feedback data are transmitted to the base station, to prevent loss of the feedback data and improve the reliability of data transmission. The preset number may be predefined by the base station and the terminal or may also be transmitted to the terminal in advance by the base station through signaling. In Operation 4053, the feedback data is transmitted through each resource position after a successful occupation time among the multiple resource positions respectively. When there are multiple resource positions after the successful occupation time, the feedback data may be transmitted through each resource position after the successful occupation time respectively, namely multiple copies of the feedback data are transmitted to the base station, to prevent loss of the feedback data and improve the reliability of data transmission. In Operation 4054, feedback data corresponding to multiple data sets is transmitted through multiple resource positions after a successful occupation time in the unlicensed channel in the case of the unlicensed channel being occupied by the terminal.
When the terminal acquires the feedback data corresponding to the multiple data sets in the communication data and there are multiple resource positions after the successful occupation time, multiple sets of feedback data may be distributed to the multiple resource positions for transmission. For example, when one set of feedback data is transmitted at a resource position, multiple sets of feedback data may be transmitted through resource positions with a quantity equal to that of the data sets respectively. Or, when one or more sets of feedback data are transmitted at a resource position, the multiple sets of feedback data may be transmitted through resource positions with a quantity smaller than that of the data sets. Moreover, the numbers of sets of feedback data transmitted at the resource positions may be the same or different. In addition, when the multiple sets of feedback data are transmitted, a sequence of the multiple sets of feedback data may be determined based on the sequence identifiers of the feedback data, and the feedback data may be allocated according to the sequence of the multiple sets of feedback data and a sequence of the multiple resource positions to ensure that the feedback data with smaller sequence identifiers may be transmitted through the earlier resource positions. As shown in FIG. 6, the communication data is divided into three data sets: a data set 1, a data set 2 and a data set 3, and there are three resource positions after the successful occupation time: a resource position 1, a resource position 2 and a resource position 3. In such a case, the terminal transmits feedback data of the data set 1 through the resource position 1, transmits feedback data of the data set 2 through the resource position 2 and transmits feedback data of the data set 3 through the resource position 3.
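The allocation just described (feedback ordered by sequence identifier, smaller identifiers taking earlier positions) can be sketched as a round-robin assignment. The dictionary-of-lists representation and the position labels are illustrative assumptions:

```python
# Sketch of the FIG. 6 style allocation: feedback sets, ordered by sequence
# identifier, are distributed across the resource positions in order.

def allocate_feedback(feedback_sets, resource_positions):
    # Sort by sequence identifier so smaller identifiers get earlier positions,
    # then distribute round-robin across the available positions.
    ordered = sorted(feedback_sets, key=lambda f: f["sequence_id"])
    allocation = {pos: [] for pos in resource_positions}
    for i, feedback in enumerate(ordered):
        pos = resource_positions[i % len(resource_positions)]
        allocation[pos].append(feedback["sequence_id"])
    return allocation

feedback_sets = [{"sequence_id": n} for n in (2, 1, 3)]   # received out of order
allocation = allocate_feedback(
    feedback_sets, ["position_1", "position_2", "position_3"]
)
```

With three sets and three positions this reproduces the one-to-one case of FIG. 6; with fewer positions than sets, the modulo step stacks several sets on one position, matching the "quantity smaller than that of the data sets" case.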
Or, the terminal transmits the feedback data of the data set 1 and the data set 2 through the resource position 1, transmits the feedback data of the data set 3 and the data set 1 through the resource position 2 and transmits the feedback data of the data set 2 and the data set 3 through the resource position 3. It is to be noted that, in Operations 4052 to 4054, after the terminal successfully occupies the unlicensed channel, the feedback data may be transmitted through the multiple resource positions respectively. In a transmission process, the terminal may keep occupying the unlicensed channel, and every time when transmitting feedback data through a resource position, is not required to detect a state of the unlicensed channel again. Or, in the transmission process, every time when transmitting feedback data through a resource position, the terminal may detect the state of the unlicensed channel, occupy the unlicensed channel when determining that the unlicensed channel is in the idle state and release the unlicensed channel after transmitting the feedback data through a resource position in the unlicensed channel. The unlicensed channel may subsequently be reoccupied to transmit feedback data again. In Operation 4055, in response to acquiring the feedback data, multiple resource positions within a time bucket after present time and an unlicensed channel where the multiple resource positions are located are determined, and the feedback data is transmitted through a resource position in the unlicensed channel in the case of the unlicensed channel being occupied by the terminal. The terminal, after acquiring the feedback data, may determine resource positions after the present time in the multiple resource positions provided by the base station to determine the multiple resource positions in a time bucket after the present time and the unlicensed channel where the multiple resource positions are located and try to occupy the unlicensed channel.
The feedback data may be transmitted through the resource position in the unlicensed channel in the case of the unlicensed channel being occupied by the terminal. In a possible implementation, the time bucket may be the first time bucket after the present time, and the resource positions are selected from the first time bucket, so that a transmission speed of the feedback data may be increased, and the time for data transmission may be saved. In another possible implementation, when the multiple resource positions in the time bucket are in multiple unlicensed channels, the terminal may try to occupy one of the unlicensed channels; if the attempt fails, the terminal may try to occupy the next unlicensed channel. When successfully occupying a certain unlicensed channel, the terminal may transmit the feedback data through a resource position in the unlicensed channel that is successfully occupied. The operation4055may also be combined with the operations4052to4054to form another possible implementation. For example, the terminal, after occupying the unlicensed channel and transmitting the feedback data through a resource position in the time bucket in the unlicensed channel in the operation4055, may further repeatedly transmit the feedback data through another resource position in the unlicensed channel. Or, the terminal, after occupying the unlicensed channel and transmitting the feedback data of one or more data sets through resource positions in the time bucket in the unlicensed channel in the operation4055, may further transmit the feedback data of another data set through another resource position in the unlicensed channel. In Operation406, the base station receives the feedback data from the terminal. In a possible implementation, when the terminal transmits the feedback data through one of the resource positions, the base station may sequentially detect the multiple resource positions provided for the terminal until receiving the feedback data at a certain resource position. 
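The sequential detection just described — the base station probing the configured resource positions one by one until feedback is found — can be sketched as follows. The names are illustrative, and `detect` stands in for the base station's attempt to decode feedback at a position; neither is part of the patent text.

```python
def receive_feedback(resource_positions, detect):
    """Sequentially probe the resource positions configured for the
    terminal until feedback is detected at one of them.

    detect(position) returns the decoded feedback payload at that
    position, or None if nothing was transmitted there (illustrative
    callable, not from the patent).
    Returns (position, feedback), or (None, None) if no position
    carried feedback.
    """
    for position in resource_positions:
        feedback = detect(position)
        if feedback is not None:
            # Found the position the terminal actually used; stop probing.
            return position, feedback
    return None, None
```

The same loop structure also covers the multi-position case: instead of returning on the first hit, the base station could collect feedback from every position that carries it.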
In another possible implementation, when the terminal transmits the feedback data through the multiple resource positions, the base station may receive the feedback data through one of the resource positions or may also receive the feedback data through the multiple resource positions. In another possible implementation, when the communication data includes the multiple data sets and the terminal transmits the feedback data corresponding to the multiple data sets through the multiple resource positions, the base station may receive the feedback data of each data set through one or more resource positions, provided that the feedback data of all the data sets is received. According to the method provided in the embodiments of the present disclosure, a base station may transmit a control instruction to a terminal, the control instruction carrying indication information and the indication information indicating multiple resource positions configured for the terminal to transmit feedback data in an unlicensed channel, so that multiple opportunities are provided for the terminal to transmit the feedback data, and under the condition that the terminal occupies the unlicensed channel, the feedback data may be transmitted through at least one resource position among the multiple resource positions. Therefore, the situation that the terminal successfully occupies the unlicensed channel but misses resource positions and thus cannot transmit the feedback data may be avoided, a success rate of transmitting the feedback data by the terminal through the unlicensed channel is increased, and furthermore, the reliability of data transmission is improved. FIG.7is a flow chart of a data transmission method according to an exemplary embodiment. A base station and a terminal are interaction bodies. Descriptions are made in the embodiment taking the condition that indication information is an index identifier as an example. 
As shown inFIG.7, the method includes the following operations. In Operation701, the base station transmits an index table to the terminal. In the embodiment of the present disclosure, the base station may create the index table based on multiple resource positions configured for the terminal to transmit data and transmit the index table to one or more terminals. The index table includes multiple resource positions corresponding to each index identifier. Different index identifiers correspond to different multiple resource positions. Each terminal is only required to determine an index identifier and may then query the index table to determine the multiple resource positions corresponding to the index identifier, thereby transmitting feedback data based on the determined multiple resource positions. For example, the index table may be as shown in the following Table 1. If the index identifier is 01, the corresponding multiple resource positions are 1 slot and 3 slots.

TABLE 1
Index identifier    Resource positions
00                  1 slot, 2 slots
01                  1 slot, 3 slots
10                  1 slot, 3 slots, 5 slots
11                  1 slot, 3 slots, 5 slots, 6 slots

In a possible implementation, the base station may transmit the index table to the terminal in multiple manners such as by high-layer signaling or physical-layer signaling. In another possible implementation, the base station may create one index table for all terminals and transmit the index table to each terminal. Each terminal may subsequently determine resource positions based on an index identifier. Or, the base station may also independently create an index table for each terminal and transmit the index table for each terminal to the corresponding terminal, and each terminal may subsequently determine the resource positions to be adopted based on an index identifier. Moreover, in the index tables for different terminals, the numbers of resource positions and the time-domain information and frequency-domain information of each resource position may be the same or different. 
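As an illustrative sketch (names and the dict encoding are not from the patent), Table 1 can be modeled as a lookup from index identifier to slot positions, together with the smallest bit length able to distinguish the configured identifiers:

```python
import math

# Index table mirroring Table 1: each index identifier (a bit string
# carried in the scheduling instruction) maps to the slot offsets of
# the configured resource positions.
INDEX_TABLE = {
    "00": [1, 2],
    "01": [1, 3],
    "10": [1, 3, 5],
    "11": [1, 3, 5, 6],
}

def resource_positions_for(index_identifier):
    """Return the resource positions (slot offsets) for an identifier."""
    return INDEX_TABLE[index_identifier]

def index_identifier_bits(num_identifiers):
    """Smallest bit length that can distinguish the configured index
    identifiers; with the 4 identifiers of Table 1 this is 2 bits."""
    return max(1, math.ceil(math.log2(num_identifiers)))
```

For example, `resource_positions_for("01")` yields the 1-slot and 3-slot positions, matching the row cited in the text, and `index_identifier_bits(len(INDEX_TABLE))` is 2, matching the bit-length example given for the control instruction.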
In Operation702, the terminal receives the index table and stores the index table. It is to be noted that the condition that a base station creates an index table and transmits the index table to a terminal is taken as an example only in the embodiment of the present disclosure. In practice, Operation701and Operation702are optional. In another embodiment, the index table may be predefined in an agreement reached by the base station and the terminal and is then not required to be transmitted to the terminal by the base station. In the embodiment of the present disclosure, signaling for transmitting the index table is extended, namely a set of the resource positions is transmitted to the terminal in advance, and the terminal may subsequently determine resource positions from the index table as required, so that the flexibility is improved. Moreover, every time a resource position is configured, the base station is only required to transmit an index identifier rather than the resource position itself, so that transmission resources are saved. In Operation703, the base station transmits a scheduling instruction to the terminal, the scheduling instruction carrying indication information and the indication information being an index identifier. Operation703is similar to Operation401, and the only difference is that: descriptions are made in the embodiment of the present disclosure taking the condition that the indication information is the index identifier as an example, and the terminal may subsequently determine the corresponding multiple resource positions based on the index identifier. When the base station transmits the same index table to multiple terminals, the base station may determine the same index identifier for different terminals to determine the same resource positions for different terminals, or may also determine different index identifiers for different terminals to determine different resource positions for different terminals. 
In a possible implementation, the base station, when transmitting a control instruction, may add the index identifier to an information field that is at a fixed position or configurable in the control instruction, and then the terminal, in response to receiving the control instruction, may acquire the index identifier from the information field at the fixed position in the control instruction and determine the multiple resource positions corresponding to the index identifier. Moreover, a bit length of the index identifier may be fixed or may be determined based on the number of resource positions configured in the index table. For example, when 11 resource positions and 4 corresponding index identifiers are set in the index table, the bit length of the index identifier in the control instruction is 2, namely the 4 different index identifiers are represented by 2 bits. In Operation704, the terminal receives the scheduling instruction and queries the index table based on the index identifier to determine multiple resource positions corresponding to the index identifier. Since the terminal has stored the index table, the terminal, after receiving the scheduling instruction, may query the index table based on the index identifier in the scheduling instruction to determine the corresponding multiple resource positions. In a possible implementation, the index table may include time-domain information and frequency-domain information of the multiple resource positions, or the time-domain information of the multiple resource positions, or the frequency-domain information of the multiple resource positions. A manner for determining the time-domain information and frequency-domain information of the resource positions is similar to that in Operation402and will not be elaborated herein. In Operation705, the base station transmits communication data to the terminal. 
In Operation706, the terminal receives the communication data, detects whether the communication data is correctly received or not and acquires feedback data based on a detection result. The feedback data is configured to indicate whether the terminal has correctly received the communication data from the base station or not. In Operation707, the feedback data is transmitted through at least one resource position among the multiple resource positions in a case of an unlicensed channel being occupied. In Operation708, the base station receives the feedback data from the terminal. Operations705to708are similar to Operations403to406and will not be elaborated herein. According to the method provided in the embodiments of the present disclosure, a base station may transmit a control instruction to a terminal, the control instruction carrying an index identifier, and the terminal may query an index table based on the index identifier to determine the corresponding multiple resource positions, so that multiple opportunities are provided for the terminal to transmit feedback data. Under the condition that the terminal occupies the unlicensed channel, the feedback data may be transmitted through at least one resource position among the multiple resource positions. Therefore, the situation that the terminal successfully occupies the unlicensed channel but misses resource positions and thus cannot transmit the feedback data may be avoided, a success rate of transmitting the feedback data by the terminal through the unlicensed channel is increased, and furthermore, the reliability of data transmission is improved. FIG.8is a block diagram of a data transmission device according to an exemplary embodiment. The device is implemented by a base station. Referring toFIG.8, the base station includes a first transmitting module801and a receiving module802. 
The first transmitting module801is configured to transmit a control instruction to a terminal, the control instruction carrying indication information and the indication information indicating multiple resource positions configured for the terminal to transmit feedback data in an unlicensed channel. The receiving module802is configured to receive the feedback data from the terminal, the feedback data being transmitted by the terminal through at least one resource position among the multiple resource positions in a case of the unlicensed channel being occupied by the terminal and the feedback data indicating whether the terminal has correctly received communication data from the base station or not. According to the device provided in the embodiment of the present disclosure, the base station may transmit a control instruction, the control instruction carrying indication information and the indication information indicating multiple resource positions configured for the terminal to transmit feedback data, so that multiple opportunities are provided for the terminal to transmit the feedback data. Under the condition that the terminal occupies the unlicensed channel, the feedback data may be transmitted through at least one resource position among the multiple resource positions. Therefore, the situation that the terminal successfully occupies the unlicensed channel but misses resource positions and thus cannot transmit the feedback data may be avoided, a success rate of transmitting the feedback data by the terminal through the unlicensed channel is increased, and furthermore, the reliability of data transmission is improved. In a possible implementation, the first transmitting module801may include:a transmission unit, configured to transmit a scheduling instruction to the terminal. 
In another possible implementation, the indication information is an index identifier; and the device may further include:a second transmitting module, configured to transmit an index table to the terminal, the index table including multiple resource positions corresponding to each index identifier and the index table being configured for querying the corresponding multiple resource positions based on any index identifier. In another possible implementation, the communication data may include multiple data sets; and the receiving module802may include:a receiving unit, configured to receive feedback data corresponding to the multiple data sets through the multiple resource positions. In another possible implementation, the indication information may be in an information field at a fixed position in the control instruction or in a configurable information field of the control instruction; anda bit length of the indication information may be fixed or may be determined based on a configuration. In another possible implementation, the indication information may include time-domain information and frequency-domain information of the multiple resource positions; or,the indication information may include the time-domain information of the multiple resource positions, and the frequency-domain information of the multiple resource positions may be determined according to a first preset rule; or,the indication information may include the frequency-domain information of the multiple resource positions, and the time-domain information of the multiple resource positions may be determined according to a second preset rule. 
It is to be noted that: during data transmission, the data transmission device provided in the embodiments is described only taking division of each abovementioned functional module as an example and, during a practical application, the abovementioned functions may be allocated to different functional modules for realization according to a requirement, that is, an internal structure of the base station is divided into different functional modules to realize all or part of the functions described above. In addition, the base station provided in the embodiments belongs to the same concept as the data transmission method embodiments; for details of a specific implementation process thereof, refer to the method embodiments, which will not be elaborated herein. FIG.9is a block diagram of a data transmission device according to an exemplary embodiment. The device is implemented by a terminal. Referring toFIG.9, the terminal includes a receiving module901, a first determining module902and a transmitting module903. The receiving module901is configured to receive a control instruction from a base station, the control instruction carrying indication information and the indication information indicating multiple resource positions configured for the terminal to transmit feedback data in an unlicensed channel. The first determining module902is configured to determine the multiple resource positions based on the indication information. The transmitting module903is configured to transmit the feedback data through at least one resource position among the multiple resource positions in the case of the unlicensed channel being occupied by the terminal, the feedback data indicating whether the terminal has correctly received communication data from the base station or not. 
According to the device provided in the embodiment of the present disclosure, the terminal may receive a control instruction from the base station, the control instruction carrying indication information and the indication information indicating multiple resource positions configured for the terminal to transmit feedback data, so that multiple opportunities are provided for the terminal to transmit the feedback data. Under the condition that the terminal occupies the unlicensed channel, the feedback data may be transmitted through at least one resource position among the multiple resource positions. Therefore, the situation that the terminal successfully occupies the unlicensed channel but misses resource positions and thus cannot transmit the feedback data may be avoided, a success rate of transmitting the feedback data by the terminal through the unlicensed channel is increased, and furthermore, the reliability of data transmission is improved. In a possible implementation, the receiving module901may include:a receiving unit, configured to receive a scheduling instruction transmitted by the base station. In another possible implementation, the indication information may be an index identifier; and the device may further include:a second determining module, configured to query an index table based on the index identifier to determine the multiple resource positions corresponding to the index identifier, the index table including multiple resource positions corresponding to each index identifier. 
In a possible implementation, the transmitting module903may include:a first transmission unit, configured to transmit the feedback data through one resource position after a successful occupation time among the multiple resource positions in the case of the unlicensed channel being occupied by the terminal; or,a second transmission unit, configured to transmit the feedback data through a preset number of resource positions after a successful occupation time among the multiple resource positions respectively; or,a third transmission unit, configured to transmit the feedback data through each resource position after a successful occupation time among the multiple resource positions respectively. In another possible implementation, the communication data includes multiple data sets; and the transmitting module903may include:a fourth transmission unit, configured to transmit feedback data corresponding to the multiple data sets through multiple resource positions after a successful occupation time in the unlicensed channel in the case of the unlicensed channel being occupied by the terminal. In another possible implementation, the transmitting module903is further configured to, in response to acquiring the feedback data, determine multiple resource positions within a time bucket after present time and an unlicensed channel where the multiple resource positions are located, and transmit the feedback data through a resource position in the unlicensed channel in the case of the unlicensed channel being occupied by the terminal. In another possible implementation, the indication information may be in an information field at a fixed position in the control instruction or in a configurable information field of the control instruction; anda bit length of the indication information may be fixed or may be determined based on a configuration. 
In another possible implementation, the indication information may include time-domain information and frequency-domain information of the multiple resource positions; or,the indication information may include time-domain information of the multiple resource positions, and frequency-domain information of the multiple resource positions may be determined according to a first preset rule; or,the indication information may include frequency-domain information of the multiple resource positions, and the time-domain information of the multiple resource positions may be determined according to a second preset rule. It is to be noted that: during data transmission, the data transmission device provided in the embodiment is described only taking division of each abovementioned functional module as an example and, during a practical application, the abovementioned functions may be allocated to different functional modules for realization according to a requirement, that is, an internal structure of the terminal is divided into different functional modules to realize all or part of the functions described above. In addition, the terminal provided in the embodiment belongs to the same concept as the data transmission method embodiments; for details of a specific implementation process thereof, refer to the method embodiments, which will not be elaborated herein. FIG.10is a block diagram of a base station according to an exemplary embodiment. Referring toFIG.10, the base station includes a processor1001, memory1002configured to store instructions executable by the processor, and a transceiver1003. 
The processor1001is configured to execute instructions for:transmitting a control instruction to a terminal, the control instruction carrying indication information and the indication information indicating multiple resource positions configured for the terminal to transmit feedback data in an unlicensed channel; andreceiving the feedback data from the terminal, the feedback data being transmitted by the terminal through at least one resource position among the multiple resource positions in the case of the unlicensed channel being occupied by the terminal and the feedback data indicating whether the terminal has correctly received communication data from the base station or not. There is also provided a computer-readable storage medium. An instruction in the computer-readable storage medium, when executed by a processor of a base station, enables the base station to execute the data transmission method in the abovementioned embodiments. FIG.11is a block diagram of a terminal1100according to an exemplary embodiment. For example, the terminal1100may be a mobile phone, a computer, a digital broadcast device, a messaging device, a gaming console, a tablet, a medical device, exercise equipment, a personal digital assistant and the like. Referring toFIG.11, the terminal1100may include one or more of the following components: a processing component1102, memory1104, a power component1106, a multimedia component1108, an audio component1110, an Input/Output (I/O) interface1112, a sensor component1114, and a communication component1116. The processing component1102typically controls overall operations of the terminal1100, such as the operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component1102may include one or more processors1120to execute instructions to perform all or part of the operations in the abovementioned method. 
Moreover, the processing component1102may include one or more modules which facilitate interaction between the processing component1102and the other components. For instance, the processing component1102may include a multimedia module to facilitate interaction between the multimedia component1108and the processing component1102. The memory1104is configured to store various types of data to support the operation of the terminal1100. Examples of such data include instructions for any application programs or methods operated on the terminal1100, contact data, phonebook data, messages, pictures, video, etc. The memory1104may be implemented by any type of volatile or non-volatile memory devices, or a combination thereof, such as a Static Random-Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic memory, a flash memory, and a magnetic or optical disk. The power component1106provides power for various components of the terminal1100. The power component1106may include a power management system, one or more power supplies, and other components associated with generation, management and distribution of power for the terminal1100. The multimedia component1108includes a screen providing an output interface between the terminal1100and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes the TP, the screen may be implemented as a touch screen to receive an input signal from the user. The touch panel includes one or more touch sensors to sense touches, swipes and gestures on the touch panel. The touch sensors may not only sense a boundary of a touch or swipe action but also detect a duration and pressure associated with the touch or swipe action. In some embodiments, the multimedia component1108includes a front camera and/or a rear camera. 
The front camera and/or the rear camera may receive external multimedia data when the terminal1100is in an operation mode, such as a photographing mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focusing and optical zooming capabilities. The audio component1110is configured to output and/or input an audio signal. For example, the audio component1110includes a Microphone (MIC), and the MIC is configured to receive an external audio signal when the terminal1100is in the operation mode, such as a call mode, a recording mode and a voice recognition mode. The received audio signal may further be stored in the memory1104or transmitted through the communication component1116. In some embodiments, the audio component1110may further include a speaker configured to output the audio signal. The I/O interface1112provides an interface between the processing component1102and a peripheral interface module, and the peripheral interface module may be a keyboard, a click wheel, a button and the like. The button may include, but is not limited to: a home button, a volume button, a starting button and a locking button. The sensor component1114includes one or more sensors configured to provide status assessment in various aspects for the terminal1100. For instance, the sensor component1114may detect an on/off status of the terminal1100and relative positioning of components, such as a display and small keyboard of the terminal1100, and the sensor component1114may further detect a change in a position of the terminal1100or a component of the terminal1100, presence or absence of contact between the user and the terminal1100, orientation or acceleration/deceleration of the terminal1100and a change in temperature of the terminal1100. The sensor component1114may include a proximity sensor configured to detect presence of an object nearby without any physical contact. 
The sensor component1114may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, configured for use in an imaging application. In some embodiments, the sensor component1114may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor. The communication component1116is configured to facilitate wired or wireless communication between the terminal1100and another device. The terminal1100may access a communication-standard-based wireless network, such as a Wireless Fidelity (Wi-Fi) network, a 2nd-Generation (2G) or 3rd-Generation (3G) network or a combination thereof. In an exemplary embodiment, the communication component1116receives a broadcast signal or broadcast associated information from an external broadcast management system through a broadcast channel. In an exemplary embodiment, the communication component1116may further include a Near Field Communication (NFC) module to facilitate short-range communication. In an exemplary embodiment, the terminal1100may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components, and is configured to execute the data transmission method. In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including an instruction, such as the memory1104including an instruction, and the instruction may be executed by the processor1120of the terminal1100to implement the abovementioned method. 
For example, the non-transitory computer-readable storage medium may be a ROM, a Random-Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disc, an optical data storage device and the like. There is also provided a computer-readable storage medium. An instruction in the computer-readable storage medium, when executed by a processor of a terminal, enables the terminal to execute the method in the abovementioned embodiments. The method includes that:a control instruction transmitted by a base station is received, the control instruction carrying indication information and the indication information indicating multiple resource positions configured for the terminal to transmit feedback data in an unlicensed channel;the multiple resource positions are determined based on the indication information; andthe feedback data is transmitted through at least one resource position among the multiple resource positions in the case of the unlicensed channel being occupied by the terminal, the feedback data indicating whether the terminal has correctly received communication data from the base station or not. Those of ordinary skill in the art should know that implementation of all or part of the operations of the abovementioned embodiments may be completed through hardware, or may be completed through related hardware instructed by a program. The program may be stored in a computer-readable storage medium. The storage medium may be a ROM, a magnetic disk, an optical disc or the like. The above describes only some optional embodiments of the present disclosure and is not intended to limit the present disclosure. Any modifications, equivalent replacements, improvements and the like made within the spirit and principle of the embodiments of the present disclosure shall fall within the scope of protection of the present disclosure.
DETAILED DESCRIPTION A user equipment (UE) and a base station may be configured to perform joint channel estimation by using demodulation reference signal (DMRS) bundling across uplink transmissions such as physical uplink control channel (PUCCH) repetitions. In some cases, DMRS bundling for uplink transmissions (e.g., PUCCH, physical uplink shared channel (PUSCH)) may be associated with one or more parameters (e.g., uplink continuity parameters) such as phase continuity, amplitude continuity, and timing advance continuity. When DMRS bundling is utilized, a base station may receive multiple DMRSs across multiple uplink transmissions (e.g., PUCCH repetitions) and bundle the multiple DMRSs as part of a joint channel estimation procedure. The base station may determine a quality of uplink continuity across the multiple uplink transmissions by measuring variations of the one or more parameters. In some examples, the amount by which the one or more parameters vary may be used to determine or classify a quality of the uplink continuity observed or experienced by the base station across the multiple uplink transmissions. For instance, high quality uplink continuity may correspond to small variations (e.g., variations below a threshold) of the one or more parameters. Conversely, low quality uplink continuity (e.g., or lack of uplink continuity) may be associated with large variations (e.g., variations above a threshold) of the one or more parameters. In some cases, uplink continuity may be leveraged by the base station based on a capability report transmitted by the UE. For example, the UE may indicate a capability to maintain a degree or classification of uplink continuity to the base station. 
However, the UE may not be able to track the quality of uplink continuity (e.g., phase continuity, amplitude continuity, timing advance continuity) during communications with the base station (e.g., in real time or near real time), which may result in an uplink continuity mismatch between the UE (e.g., what the UE is capable of maintaining) and the base station (e.g., what the base station observes). For example, uplink transmissions from the UE to the base station may initially correspond to a degree of uplink continuity, which may be supported or indicated by the UE to the base station. One or more uplink parameters associated with uplink continuity, however, may vary such that the uplink continuity as initially determined or indicated by the UE may be different from the uplink continuity observed by the base station. That is, there may be a mismatch between uplink continuity capability assumptions at the UE and the uplink continuity observed at the base station. Mismatches in uplink continuity at the base station and the UE may lead to inaccurate channel estimation, increased latency, and an overall decrease in network efficiency. Techniques to enable a base station to send a feedback message to a UE indicating a quality of uplink continuity are described in accordance with aspects of the present disclosure. The base station may perform one or more measurements, analyses, or both, to determine a quality of uplink continuity associated with the UE. For example, the base station may determine that a variation of one or more uplink parameters associated with uplink continuity (e.g., phase continuity, amplitude continuity, timing advance continuity) exceeds one or more thresholds, where the thresholds may correspond to classifications (e.g., degrees) of uplink continuity. 
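The threshold-based classification described above might be sketched as follows; the parameter names, units, and threshold values are hypothetical, since the disclosure leaves the thresholds unspecified.

```python
# Illustrative only: classify observed uplink continuity from parameter
# variations measured across bundled transmissions, using hypothetical
# thresholds (the disclosure does not fix numeric values or units).
def classify_continuity(phase_drift_deg, amplitude_var_db, ta_var_us,
                        thresholds=(5.0, 0.5, 0.1)):
    """Return 'high' when every variation stays below its threshold,
    otherwise 'low' (variations above a threshold indicate poor continuity)."""
    variations = (phase_drift_deg, amplitude_var_db, ta_var_us)
    if all(v < t for v, t in zip(variations, thresholds)):
        return "high"
    return "low"

print(classify_continuity(2.0, 0.1, 0.05))   # small variations -> high
print(classify_continuity(30.0, 0.1, 0.05))  # large phase drift -> low
```

More than two classes (degrees of uplink continuity) would simply use a ladder of thresholds per parameter instead of a single one.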
Based on the determinations at the base station, the base station may transmit a feedback message to the UE indicating a status of uplink continuity (e.g., a quality of uplink continuity as observed by the base station), one or more values of the variation(s) of one or more uplink parameters, an observed classification of uplink continuity, and the like. The feedback message may include other information, such as observed phase drift, observed amplitude variation, and observed timing advance variation. Additionally, or alternatively, the feedback message may include an indication of a presence of phase continuity, a presence of amplitude continuity, a length of time in which uplink continuity has been maintained, or any combination thereof. By utilizing techniques for indicating the quality of uplink continuity to the UE, the UE may have real-time information to be able to address mismatches in uplink continuity (e.g., due to phase drift). In other words, the UE may be informed when uplink continuity is not being maintained at an acceptable level, and the base station may cease DMRS bundling of uplink transmissions from the UE. The described techniques provide for informing the UE of the quality of uplink continuity being maintained or observed by the base station. By keeping the UE informed of uplink continuity, overall network efficiency may increase due to improved channel estimation. Aspects of the disclosure are initially described in the context of wireless communications systems. Aspects of the disclosure are further illustrated by and described with reference to a bitmap and a process flow. Aspects of the disclosure are further illustrated by and described with reference to apparatus diagrams, system diagrams, and flowcharts that relate to network feedback for uplink continuity.
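As a hedged illustration of how the indicated feedback fields (presence of phase continuity, amplitude continuity, timing advance continuity) could be packed into a bitmap, the following sketch assumes a bit layout of its own invention:

```python
# Illustrative bitmap packing of the feedback indications named above; the
# bit positions are invented for this sketch and not defined by the patent.
PHASE_CONTINUITY = 1 << 0      # bit 0: phase continuity present
AMPLITUDE_CONTINUITY = 1 << 1  # bit 1: amplitude continuity present
TA_CONTINUITY = 1 << 2         # bit 2: timing advance continuity present

def pack_feedback(phase_ok: bool, amp_ok: bool, ta_ok: bool) -> int:
    bitmap = 0
    if phase_ok:
        bitmap |= PHASE_CONTINUITY
    if amp_ok:
        bitmap |= AMPLITUDE_CONTINUITY
    if ta_ok:
        bitmap |= TA_CONTINUITY
    return bitmap

print(f"{pack_feedback(True, False, True):03b}")  # 101
```

Scalar fields such as observed phase drift or the length of time continuity has been maintained would be carried as separate quantized values alongside the bitmap.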
FIG. 1 illustrates an example of a wireless communications system 100 that supports network feedback for uplink continuity in accordance with aspects of the present disclosure. The wireless communications system 100 may include one or more base stations 105, one or more UEs 115, and a core network 130. In some examples, the wireless communications system 100 may be a Long Term Evolution (LTE) network, an LTE-Advanced (LTE-A) network, an LTE-A Pro network, or a New Radio (NR) network. In some examples, the wireless communications system 100 may support enhanced broadband communications, ultra-reliable communications, low latency communications, communications with low-cost and low-complexity devices, or any combination thereof. The base stations 105 may be dispersed throughout a geographic area to form the wireless communications system 100 and may be devices in different forms or having different capabilities. The base stations 105 and the UEs 115 may wirelessly communicate via one or more communication links 125. Each base station 105 may provide a coverage area 110 over which the UEs 115 and the base station 105 may establish one or more communication links 125. The coverage area 110 may be an example of a geographic area over which a base station 105 and a UE 115 may support the communication of signals according to one or more radio access technologies. The UEs 115 may be dispersed throughout a coverage area 110 of the wireless communications system 100, and each UE 115 may be stationary, or mobile, or both at different times. The UEs 115 may be devices in different forms or having different capabilities. Some example UEs 115 are illustrated in FIG. 1. The UEs 115 described herein may be able to communicate with various types of devices, such as other UEs 115, the base stations 105, or network equipment (e.g., core network nodes, relay devices, integrated access and backhaul (IAB) nodes, or other network equipment), as shown in FIG. 1.
The base stations 105 may communicate with the core network 130, or with one another, or both. For example, the base stations 105 may interface with the core network 130 through one or more backhaul links 120 (e.g., via an S1, N2, N3, or other interface). The base stations 105 may communicate with one another over the backhaul links 120 (e.g., via an X2, Xn, or other interface) either directly (e.g., directly between base stations 105), or indirectly (e.g., via core network 130), or both. In some examples, the backhaul links 120 may be or include one or more wireless links. One or more of the base stations 105 described herein may include or may be referred to by a person having ordinary skill in the art as a base transceiver station, a radio base station, an access point, a radio transceiver, a NodeB, an eNodeB (eNB), a next-generation NodeB or a giga-NodeB (either of which may be referred to as a gNB), a Home NodeB, a Home eNodeB, or other suitable terminology. A UE 115 may include or may be referred to as a mobile device, a wireless device, a remote device, a handheld device, or a subscriber device, or some other suitable terminology, where the “device” may also be referred to as a unit, a station, a terminal, or a client, among other examples. A UE 115 may also include or may be referred to as a personal electronic device such as a cellular phone, a personal digital assistant (PDA), a tablet computer, a laptop computer, or a personal computer. In some examples, a UE 115 may include or be referred to as a wireless local loop (WLL) station, an Internet of Things (IoT) device, an Internet of Everything (IoE) device, or a machine type communications (MTC) device, among other examples, which may be implemented in various objects such as appliances, or vehicles, meters, among other examples.
The UEs 115 described herein may be able to communicate with various types of devices, such as other UEs 115 that may sometimes act as relays as well as the base stations 105 and the network equipment including macro eNBs or gNBs, small cell eNBs or gNBs, or relay base stations, among other examples, as shown in FIG. 1. The UEs 115 and the base stations 105 may wirelessly communicate with one another via one or more communication links 125 over one or more carriers. The term “carrier” may refer to a set of radio frequency spectrum resources having a defined physical layer structure for supporting the communication links 125. For example, a carrier used for a communication link 125 may include a portion of a radio frequency spectrum band (e.g., a bandwidth part (BWP)) that is operated according to one or more physical layer channels for a given radio access technology (e.g., LTE, LTE-A, LTE-A Pro, NR). Each physical layer channel may carry acquisition signaling (e.g., synchronization signals, system information), control signaling that coordinates operation for the carrier, user data, or other signaling. The wireless communications system 100 may support communication with a UE 115 using carrier aggregation or multi-carrier operation. A UE 115 may be configured with multiple downlink component carriers and one or more uplink component carriers according to a carrier aggregation configuration. Carrier aggregation may be used with both frequency division duplexing (FDD) and time division duplexing (TDD) component carriers. In some examples (e.g., in a carrier aggregation configuration), a carrier may also have acquisition signaling or control signaling that coordinates operations for other carriers. A carrier may be associated with a frequency channel (e.g., an evolved universal mobile telecommunication system terrestrial radio access (E-UTRA) absolute radio frequency channel number (EARFCN)) and may be positioned according to a channel raster for discovery by the UEs 115.
A carrier may be operated in a standalone mode where initial acquisition and connection may be conducted by the UEs 115 via the carrier, or the carrier may be operated in a non-standalone mode where a connection is anchored using a different carrier (e.g., of the same or a different radio access technology). The communication links 125 shown in the wireless communications system 100 may include uplink transmissions from a UE 115 to a base station 105, or downlink transmissions from a base station 105 to a UE 115. Carriers may carry downlink or uplink communications (e.g., in an FDD mode) or may be configured to carry downlink and uplink communications (e.g., in a TDD mode). A carrier may be associated with a particular bandwidth of the radio frequency spectrum, and in some examples the carrier bandwidth may be referred to as a “system bandwidth” of the carrier or the wireless communications system 100. For example, the carrier bandwidth may be one of a number of determined bandwidths for carriers of a particular radio access technology (e.g., 1.4, 3, 5, 10, 15, 20, 40, or 80 megahertz (MHz)). Devices of the wireless communications system 100 (e.g., the base stations 105, the UEs 115, or both) may have hardware configurations that support communications over a particular carrier bandwidth or may be configurable to support communications over one of a set of carrier bandwidths. In some examples, the wireless communications system 100 may include base stations 105 or UEs 115 that support simultaneous communications via carriers associated with multiple carrier bandwidths. In some examples, each served UE 115 may be configured for operating over portions (e.g., a sub-band, a BWP) or all of a carrier bandwidth. Signal waveforms transmitted over a carrier may be made up of multiple subcarriers (e.g., using multi-carrier modulation (MCM) techniques such as orthogonal frequency division multiplexing (OFDM) or discrete Fourier transform spread OFDM (DFT-S-OFDM)).
In a system employing MCM techniques, a resource element may consist of one symbol period (e.g., a duration of one modulation symbol) and one subcarrier, where the symbol period and subcarrier spacing are inversely related. The number of bits carried by each resource element may depend on the modulation scheme (e.g., the order of the modulation scheme, the coding rate of the modulation scheme, or both). Thus, the more resource elements that a UE 115 receives and the higher the order of the modulation scheme, the higher the data rate may be for the UE 115. A wireless communications resource may refer to a combination of a radio frequency spectrum resource, a time resource, and a spatial resource (e.g., spatial layers or beams), and the use of multiple spatial layers may further increase the data rate or data integrity for communications with a UE 115. One or more numerologies for a carrier may be supported, where a numerology may include a subcarrier spacing (Δf) and a cyclic prefix. A carrier may be divided into one or more BWPs having the same or different numerologies. In some examples, a UE 115 may be configured with multiple BWPs. In some examples, a single BWP for a carrier may be active at a given time and communications for the UE 115 may be restricted to one or more active BWPs. The time intervals for the base stations 105 or the UEs 115 may be expressed in multiples of a basic time unit, which may, for example, refer to a sampling period of Ts = 1/(Δfmax · Nf) seconds, where Δfmax may represent the maximum supported subcarrier spacing, and Nf may represent the maximum supported discrete Fourier transform (DFT) size. Time intervals of a communications resource may be organized according to radio frames each having a specified duration (e.g., 10 milliseconds (ms)). Each radio frame may be identified by a system frame number (SFN) (e.g., ranging from 0 to 1023).
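As a worked example of the basic time unit Ts = 1/(Δfmax · Nf) defined above, assuming the NR values Δfmax = 480 kHz and Nf = 4096 (values not stated in this passage):

```python
# Worked example: basic time unit Ts = 1 / (delta_f_max * n_f), using the NR
# values delta_f_max = 480 kHz and n_f = 4096 (assumed here, not stated above).
delta_f_max = 480e3  # maximum supported subcarrier spacing, in Hz
n_f = 4096           # maximum supported DFT size
ts = 1.0 / (delta_f_max * n_f)
print(f"Ts = {ts * 1e9:.3f} ns")  # Ts = 0.509 ns
```

With these values, all other time intervals (symbols, slots, frames) come out as integer multiples of roughly half a nanosecond.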
Each frame may include multiple consecutively numbered subframes or slots, and each subframe or slot may have the same duration. In some examples, a frame may be divided (e.g., in the time domain) into subframes, and each subframe may be further divided into a number of slots. Alternatively, each frame may include a variable number of slots, and the number of slots may depend on subcarrier spacing. Each slot may include a number of symbol periods (e.g., depending on the length of the cyclic prefix prepended to each symbol period). In some wireless communications systems 100, a slot may further be divided into multiple mini-slots containing one or more symbols. Excluding the cyclic prefix, each symbol period may contain one or more (e.g., Nf) sampling periods. The duration of a symbol period may depend on the subcarrier spacing or frequency band of operation. A subframe, a slot, a mini-slot, or a symbol may be the smallest scheduling unit (e.g., in the time domain) of the wireless communications system 100 and may be referred to as a transmission time interval (TTI). In some examples, the TTI duration (e.g., the number of symbol periods in a TTI) may be variable. Additionally, or alternatively, the smallest scheduling unit of the wireless communications system 100 may be dynamically selected (e.g., in bursts of shortened TTIs (sTTIs)). Physical channels may be multiplexed on a carrier according to various techniques. A physical control channel and a physical data channel may be multiplexed on a downlink carrier, for example, using one or more of time division multiplexing (TDM) techniques, frequency division multiplexing (FDM) techniques, or hybrid TDM-FDM techniques. A control region (e.g., a control resource set (CORESET)) for a physical control channel may be defined by a number of symbol periods and may extend across the system bandwidth or a subset of the system bandwidth of the carrier.
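The dependence of the slot count on subcarrier spacing can be illustrated with NR-style numerologies, where the subcarrier spacing is 15 · 2^μ kHz and a 10 ms frame holds 10 · 2^μ slots; this mapping is an assumption of the sketch rather than a definition from the passage above.

```python
# NR-style numerology sketch (assumed mapping): SCS = 15 * 2**mu kHz, and a
# 10 ms radio frame contains 10 * 2**mu slots, so a higher subcarrier spacing
# means more (and shorter) slots per frame.
def slots_per_frame(scs_khz: int) -> int:
    mu = {15: 0, 30: 1, 60: 2, 120: 3}[scs_khz]
    return 10 * (2 ** mu)

for scs in (15, 30, 60, 120):
    print(f"{scs} kHz -> {slots_per_frame(scs)} slots per 10 ms frame")
```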
One or more control regions (e.g., CORESETs) may be configured for a set of the UEs 115. For example, one or more of the UEs 115 may monitor or search control regions for control information according to one or more search space sets, and each search space set may include one or multiple control channel candidates in one or more aggregation levels arranged in a cascaded manner. An aggregation level for a control channel candidate may refer to a number of control channel resources (e.g., control channel elements (CCEs)) associated with encoded information for a control information format having a given payload size. Search space sets may include common search space sets configured for sending control information to multiple UEs 115 and UE-specific search space sets for sending control information to a specific UE 115. Each base station 105 may provide communication coverage via one or more cells, for example a macro cell, a small cell, a hot spot, or other types of cells, or any combination thereof. The term “cell” may refer to a logical communication entity used for communication with a base station 105 (e.g., over a carrier) and may be associated with an identifier for distinguishing neighboring cells (e.g., a physical cell identifier (PCID), a virtual cell identifier (VCID), or others). In some examples, a cell may also refer to a geographic coverage area 110 or a portion of a geographic coverage area 110 (e.g., a sector) over which the logical communication entity operates. Such cells may range from smaller areas (e.g., a structure, a subset of a structure) to larger areas depending on various factors such as the capabilities of the base station 105. For example, a cell may be or include a building, a subset of a building, or exterior spaces between or overlapping with geographic coverage areas 110, among other examples.
A macro cell generally covers a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by the UEs 115 with service subscriptions with the network provider supporting the macro cell. A small cell may be associated with a lower-powered base station 105, as compared with a macro cell, and a small cell may operate in the same or different (e.g., licensed, unlicensed) frequency bands as macro cells. Small cells may provide unrestricted access to the UEs 115 with service subscriptions with the network provider or may provide restricted access to the UEs 115 having an association with the small cell (e.g., the UEs 115 in a closed subscriber group (CSG), the UEs 115 associated with users in a home or office). A base station 105 may support one or multiple cells and may also support communications over the one or more cells using one or multiple component carriers. In some examples, a carrier may support multiple cells, and different cells may be configured according to different protocol types (e.g., MTC, narrowband IoT (NB-IoT), enhanced mobile broadband (eMBB)) that may provide access for different types of devices. In some examples, a base station 105 may be movable and therefore provide communication coverage for a moving geographic coverage area 110. In some examples, different geographic coverage areas 110 associated with different technologies may overlap, but the different geographic coverage areas 110 may be supported by the same base station 105. In other examples, the overlapping geographic coverage areas 110 associated with different technologies may be supported by different base stations 105. The wireless communications system 100 may include, for example, a heterogeneous network in which different types of the base stations 105 provide coverage for various geographic coverage areas 110 using the same or different radio access technologies. The wireless communications system 100 may support synchronous or asynchronous operation.
For synchronous operation, the base stations 105 may have similar frame timings, and transmissions from different base stations 105 may be approximately aligned in time. For asynchronous operation, the base stations 105 may have different frame timings, and transmissions from different base stations 105 may, in some examples, not be aligned in time. The techniques described herein may be used for either synchronous or asynchronous operations. Some UEs 115, such as MTC or IoT devices, may be low cost or low complexity devices and may provide for automated communication between machines (e.g., via Machine-to-Machine (M2M) communication). M2M communication or MTC may refer to data communication technologies that allow devices to communicate with one another or a base station 105 without human intervention. In some examples, M2M communication or MTC may include communications from devices that integrate sensors or meters to measure or capture information and relay such information to a central server or application program that makes use of the information or presents the information to humans interacting with the application program. Some UEs 115 may be designed to collect information or enable automated behavior of machines or other devices. Examples of applications for MTC devices include smart metering, inventory monitoring, water level monitoring, equipment monitoring, healthcare monitoring, wildlife monitoring, weather and geological event monitoring, fleet management and tracking, remote security sensing, physical access control, and transaction-based business charging. Some UEs 115 may be configured to employ operating modes that reduce power consumption, such as half-duplex communications (e.g., a mode that supports one-way communication via transmission or reception, but not transmission and reception simultaneously). In some examples, half-duplex communications may be performed at a reduced peak rate.
Other power conservation techniques for the UEs 115 include entering a power saving deep sleep mode when not engaging in active communications, operating over a limited bandwidth (e.g., according to narrowband communications), or a combination of these techniques. For example, some UEs 115 may be configured for operation using a narrowband protocol type that is associated with a defined portion or range (e.g., set of subcarriers or resource blocks (RBs)) within a carrier, within a guard-band of a carrier, or outside of a carrier. The wireless communications system 100 may be configured to support ultra-reliable communications or low-latency communications, or various combinations thereof. For example, the wireless communications system 100 may be configured to support ultra-reliable low-latency communications (URLLC). The UEs 115 may be designed to support ultra-reliable, low-latency, or critical functions. Ultra-reliable communications may include private communication or group communication and may be supported by one or more services such as push-to-talk, video, or data. Support for ultra-reliable, low-latency functions may include prioritization of services, and such services may be used for public safety or general commercial applications. The terms ultra-reliable, low-latency, and ultra-reliable low-latency may be used interchangeably herein. In some examples, a UE 115 may also be able to communicate directly with other UEs 115 over a device-to-device (D2D) communication link 135 (e.g., using a peer-to-peer (P2P) or D2D protocol). One or more UEs 115 utilizing D2D communications may be within the geographic coverage area 110 of a base station 105. Other UEs 115 in such a group may be outside the geographic coverage area 110 of a base station 105 or be otherwise unable to receive transmissions from a base station 105.
In some examples, groups of the UEs 115 communicating via D2D communications may utilize a one-to-many (1:M) system in which each UE 115 transmits to every other UE 115 in the group. In some examples, a base station 105 facilitates the scheduling of resources for D2D communications. In other cases, D2D communications are carried out between the UEs 115 without the involvement of a base station 105. In some systems, the D2D communication link 135 may be an example of a communication channel, such as a sidelink communication channel, between vehicles (e.g., UEs 115). In some examples, vehicles may communicate using vehicle-to-everything (V2X) communications, vehicle-to-vehicle (V2V) communications, or some combination of these. A vehicle may signal information related to traffic conditions, signal scheduling, weather, safety, emergencies, or any other information relevant to a V2X system. In some examples, vehicles in a V2X system may communicate with roadside infrastructure, such as roadside units, or with the network via one or more network nodes (e.g., base stations 105) using vehicle-to-network (V2N) communications, or with both. The core network 130 may provide user authentication, access authorization, tracking, Internet Protocol (IP) connectivity, and other access, routing, or mobility functions. The core network 130 may be an evolved packet core (EPC) or 5G core (5GC), which may include at least one control plane entity that manages access and mobility (e.g., a mobility management entity (MME), an access and mobility management function (AMF)) and at least one user plane entity that routes packets or interconnects to external networks (e.g., a serving gateway (S-GW), a Packet Data Network (PDN) gateway (P-GW), or a user plane function (UPF)). The control plane entity may manage non-access stratum (NAS) functions such as mobility, authentication, and bearer management for the UEs 115 served by the base stations 105 associated with the core network 130.
User IP packets may be transferred through the user plane entity, which may provide IP address allocation as well as other functions. The user plane entity may be connected to IP services 150 for one or more network operators. The IP services 150 may include access to the Internet, Intranet(s), an IP Multimedia Subsystem (IMS), or a Packet-Switched Streaming Service. Some of the network devices, such as a base station 105, may include subcomponents such as an access network entity 140, which may be an example of an access node controller (ANC). Each access network entity 140 may communicate with the UEs 115 through one or more other access network transmission entities 145, which may be referred to as radio heads, smart radio heads, or transmission/reception points (TRPs). Each access network transmission entity 145 may include one or more antenna panels. In some configurations, various functions of each access network entity 140 or base station 105 may be distributed across various network devices (e.g., radio heads and ANCs) or consolidated into a single network device (e.g., a base station 105). The wireless communications system 100 may operate using one or more frequency bands, typically in the range of 300 megahertz (MHz) to 300 gigahertz (GHz). Generally, the region from 300 MHz to 3 GHz is known as the ultra-high frequency (UHF) region or decimeter band because the wavelengths range from approximately one decimeter to one meter in length. The UHF waves may be blocked or redirected by buildings and environmental features, but the waves may penetrate structures sufficiently for a macro cell to provide service to the UEs 115 located indoors. The transmission of UHF waves may be associated with smaller antennas and shorter ranges (e.g., less than 100 kilometers) compared to transmission using the smaller frequencies and longer waves of the high frequency (HF) or very high frequency (VHF) portion of the spectrum below 300 MHz.
The wireless communications system 100 may also operate in a super high frequency (SHF) region using frequency bands from 3 GHz to 30 GHz, also known as the centimeter band, or in an extremely high frequency (EHF) region of the spectrum (e.g., from 30 GHz to 300 GHz), also known as the millimeter band. In some examples, the wireless communications system 100 may support millimeter wave (mmW) communications between the UEs 115 and the base stations 105, and EHF antennas of the respective devices may be smaller and more closely spaced than UHF antennas. In some examples, this may facilitate use of antenna arrays within a device. The propagation of EHF transmissions, however, may be subject to even greater atmospheric attenuation and shorter range than SHF or UHF transmissions. The techniques disclosed herein may be employed across transmissions that use one or more different frequency regions, and designated use of bands across these frequency regions may differ by country or regulating body. The wireless communications system 100 may utilize both licensed and unlicensed radio frequency spectrum bands. For example, the wireless communications system 100 may employ License Assisted Access (LAA), LTE-Unlicensed (LTE-U) radio access technology, or NR technology in an unlicensed band such as the 5 GHz industrial, scientific, and medical (ISM) band. When operating in unlicensed radio frequency spectrum bands, devices such as the base stations 105 and the UEs 115 may employ carrier sensing for collision detection and avoidance. In some examples, operations in unlicensed bands may be based on a carrier aggregation configuration in conjunction with component carriers operating in a licensed band (e.g., LAA). Operations in unlicensed spectrum may include downlink transmissions, uplink transmissions, P2P transmissions, or D2D transmissions, among other examples.
A base station 105 or a UE 115 may be equipped with multiple antennas, which may be used to employ techniques such as transmit diversity, receive diversity, multiple-input multiple-output (MIMO) communications, or beamforming. The antennas of a base station 105 or a UE 115 may be located within one or more antenna arrays or antenna panels, which may support MIMO operations or transmit or receive beamforming. For example, one or more base station antennas or antenna arrays may be co-located at an antenna assembly, such as an antenna tower. In some examples, antennas or antenna arrays associated with a base station 105 may be located in diverse geographic locations. A base station 105 may have an antenna array with a number of rows and columns of antenna ports that the base station 105 may use to support beamforming of communications with a UE 115. Likewise, a UE 115 may have one or more antenna arrays that may support various MIMO or beamforming operations. Additionally, or alternatively, an antenna panel may support radio frequency beamforming for a signal transmitted via an antenna port. The base stations 105 or the UEs 115 may use MIMO communications to exploit multipath signal propagation and increase the spectral efficiency by transmitting or receiving multiple signals via different spatial layers. Such techniques may be referred to as spatial multiplexing. The multiple signals may, for example, be transmitted by the transmitting device via different antennas or different combinations of antennas. Likewise, the multiple signals may be received by the receiving device via different antennas or different combinations of antennas. Each of the multiple signals may be referred to as a separate spatial stream and may carry bits associated with the same data stream (e.g., the same codeword) or different data streams (e.g., different codewords). Different spatial layers may be associated with different antenna ports used for channel measurement and reporting.
MIMO techniques include single-user MIMO (SU-MIMO), where multiple spatial layers are transmitted to the same receiving device, and multiple-user MIMO (MU-MIMO), where multiple spatial layers are transmitted to multiple devices. Beamforming, which may also be referred to as spatial filtering, directional transmission, or directional reception, is a signal processing technique that may be used at a transmitting device or a receiving device (e.g., a base station 105, a UE 115) to shape or steer an antenna beam (e.g., a transmit beam, a receive beam) along a spatial path between the transmitting device and the receiving device. Beamforming may be achieved by combining the signals communicated via antenna elements of an antenna array such that some signals propagating at particular orientations with respect to an antenna array experience constructive interference while others experience destructive interference. The adjustment of signals communicated via the antenna elements may include a transmitting device or a receiving device applying amplitude offsets, phase offsets, or both to signals carried via the antenna elements associated with the device. The adjustments associated with each of the antenna elements may be defined by a beamforming weight set associated with a particular orientation (e.g., with respect to the antenna array of the transmitting device or receiving device, or with respect to some other orientation). A base station 105 or a UE 115 may use beam sweeping techniques as part of beamforming operations. For example, a base station 105 may use multiple antennas or antenna arrays (e.g., antenna panels) to conduct beamforming operations for directional communications with a UE 115. Some signals (e.g., synchronization signals, reference signals, beam selection signals, or other control signals) may be transmitted by a base station 105 multiple times in different directions.
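A beamforming weight set of the kind described above can be sketched as per-element phase offsets for a uniform linear array; the half-wavelength element spacing and the steering angle are illustrative assumptions, not values from the disclosure.

```python
# Illustrative beamforming weight set for a uniform linear array: per-element
# phase offsets steer the beam toward angle theta (half-wavelength spacing
# assumed). In the steered direction, the weights align the element phases,
# so the signals add constructively.
import cmath
import math

def steering_weights(n_elements, theta_deg, spacing_wavelengths=0.5):
    theta = math.radians(theta_deg)
    return [cmath.exp(-2j * math.pi * spacing_wavelengths * n * math.sin(theta))
            for n in range(n_elements)]

w = steering_weights(4, 30.0)
# Array response toward the steered direction: each weight cancels the
# per-element phase, so the coherent gain equals the element count (4).
gain = abs(sum(wk * cmath.exp(2j * math.pi * 0.5 * n * math.sin(math.radians(30.0)))
               for n, wk in enumerate(w)))
print(round(gain, 6))  # 4.0
```

Evaluating the same sum at a different angle gives a gain below 4, which is exactly the constructive/destructive interference pattern the paragraph describes.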
For example, the base station 105 may transmit a signal according to different beamforming weight sets associated with different directions of transmission. Transmissions in different beam directions may be used to identify (e.g., by a transmitting device, such as a base station 105, or by a receiving device, such as a UE 115) a beam direction for later transmission or reception by the base station 105. Some signals, such as data signals associated with a particular receiving device, may be transmitted by a base station 105 in a single beam direction (e.g., a direction associated with the receiving device, such as a UE 115). In some examples, the beam direction associated with transmissions along a single beam direction may be determined based on a signal that was transmitted in one or more beam directions. For example, a UE 115 may receive one or more of the signals transmitted by the base station 105 in different directions and may report to the base station 105 an indication of the signal that the UE 115 received with a highest signal quality or an otherwise acceptable signal quality. In some examples, transmissions by a device (e.g., by a base station 105 or a UE 115) may be performed using multiple beam directions, and the device may use a combination of digital precoding or radio frequency beamforming to generate a combined beam for transmission (e.g., from a base station 105 to a UE 115). The UE 115 may report feedback that indicates precoding weights for one or more beam directions, and the feedback may correspond to a configured number of beams across a system bandwidth or one or more sub-bands. The base station 105 may transmit a reference signal (e.g., a cell-specific reference signal (CRS), a channel state information reference signal (CSI-RS)), which may be precoded or unprecoded. 
The UE 115 may provide feedback for beam selection, which may be a precoding matrix indicator (PMI) or codebook-based feedback (e.g., a multi-panel type codebook, a linear combination type codebook, a port selection type codebook). Although these techniques are described with reference to signals transmitted in one or more directions by a base station 105, a UE 115 may employ similar techniques for transmitting signals multiple times in different directions (e.g., for identifying a beam direction for subsequent transmission or reception by the UE 115) or for transmitting a signal in a single direction (e.g., for transmitting data to a receiving device). A receiving device (e.g., a UE 115) may try multiple receive configurations (e.g., directional listening) when receiving various signals from the base station 105, such as synchronization signals, reference signals, beam selection signals, or other control signals. For example, a receiving device may try multiple receive directions by receiving via different antenna subarrays, by processing received signals according to different antenna subarrays, by receiving according to different receive beamforming weight sets (e.g., different directional listening weight sets) applied to signals received at multiple antenna elements of an antenna array, or by processing received signals according to different receive beamforming weight sets applied to signals received at multiple antenna elements of an antenna array, any of which may be referred to as “listening” according to different receive configurations or receive directions. In some examples, a receiving device may use a single receive configuration to receive along a single beam direction (e.g., when receiving a data signal). 
The single receive configuration may be aligned in a beam direction determined based on listening according to different receive configuration directions (e.g., a beam direction determined to have a highest signal strength, highest signal-to-noise ratio (SNR), or otherwise acceptable signal quality based on listening according to multiple beam directions). The wireless communications system 100 may be a packet-based network that operates according to a layered protocol stack. In the user plane, communications at the bearer or Packet Data Convergence Protocol (PDCP) layer may be IP-based. A Radio Link Control (RLC) layer may perform packet segmentation and reassembly to communicate over logical channels. A Medium Access Control (MAC) layer may perform priority handling and multiplexing of logical channels into transport channels. The MAC layer may also use error detection techniques, error correction techniques, or both to support retransmissions at the MAC layer to improve link efficiency. In the control plane, the Radio Resource Control (RRC) protocol layer may provide establishment, configuration, and maintenance of an RRC connection between a UE 115 and a base station 105 or a core network 130 supporting radio bearers for user plane data. At the physical layer, transport channels may be mapped to physical channels. The UEs 115 and the base stations 105 may support retransmissions of data to increase the likelihood that data is received successfully. Hybrid automatic repeat request (HARQ) feedback is one technique for increasing the likelihood that data is received correctly over a communication link 125. HARQ may include a combination of error detection (e.g., using a cyclic redundancy check (CRC)), forward error correction (FEC), and retransmission (e.g., automatic repeat request (ARQ)). HARQ may improve throughput at the MAC layer in poor radio conditions (e.g., low signal-to-noise conditions). 
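The error-detection and retransmission components of HARQ may be sketched as follows. This illustrative example shows only the CRC check and the ACK/NACK decision; FEC and soft combining of retransmissions, which HARQ also includes, are omitted, and the packet layout is assumed for the sake of the sketch.

```python
import zlib

def make_packet(payload: bytes) -> bytes:
    """Append a CRC-32 so the receiver can detect corruption."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def receive(packet: bytes):
    """Check the CRC; a failed check would trigger a NACK and a
    HARQ retransmission of the transport block."""
    payload, crc = packet[:-4], int.from_bytes(packet[-4:], "big")
    ok = zlib.crc32(payload) == crc
    return ok, payload

pkt = make_packet(b"transport block")
assert receive(pkt)[0]                                   # clean packet -> ACK
corrupted = bytes([pkt[0] ^ 0xFF]) + pkt[1:]
assert not receive(corrupted)[0]                         # detected error -> NACK
```

In same-slot HARQ feedback, as described below, the ACK/NACK produced by such a check would be reported within the same slot as the received data.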
In some examples, a device may support same-slot HARQ feedback, where the device may provide HARQ feedback in a specific slot for data received in a previous symbol in the slot. In other cases, the device may provide HARQ feedback in a subsequent slot, or according to some other time interval. In some instances, the base station 105 may perform joint channel estimation utilizing DMRS bundling and uplink continuity across multiple uplink transmissions from a UE 115. In some cases, DMRS bundling may be performed under the assumption of one or more uplink continuity classifications. For example, the base station 105 may perform DMRS bundling and joint channel estimation if the UE 115 indicates an uplink continuity classification conducive to performing DMRS bundling of uplink communications. Specifically, the UE 115 may indicate (e.g., via a capability message) a capability to maintain a given classification of uplink continuity (e.g., small variations in phase continuity, amplitude continuity, constant timing advance). Based on the capability message, the base station 105 may perform joint channel estimation using DMRS bundling corresponding to uplink transmissions from the UE 115. In some examples, the base station 105 may observe a quality associated with the UE 115. That is, the base station 105 may monitor variations in one or more uplink parameters. For example, the base station 105 may monitor variations in phase continuity (e.g., phase drift). Additionally, or alternatively, the base station 105 may monitor uplink transmissions from the UE 115 for variations in amplitude continuity, timing advance, or both. In some cases, the base station 105 may send a feedback message to the UE 115 indicating a quality of uplink continuity, an indication of variations in one or more uplink parameters, or the like. 
In some instances, the base station 105 may determine that variations of the one or more uplink parameters associated with uplink continuity exceed one or more thresholds, where the thresholds may correspond to classifications (e.g., quality) of uplink continuity. Based on the determinations at the base station 105, the base station 105 may transmit the feedback message to the UE indicating a status of uplink continuity, the variation of one or more uplink parameters, one or more classifications, and the like. In some cases, the feedback message may be transmitted over an RRC message (e.g., encoded in a bitmap). In other cases, the base station 105 may send the feedback message over MAC-CE messaging, or indicate feedback information over downlink control information (DCI), or both (e.g., partial feedback via MAC-CE and remaining feedback via DCI). The feedback message may include other quantities, such as an indication of a presence of phase continuity, a presence of amplitude continuity, a length of time in which uplink continuity has been maintained, or any combination thereof. By utilizing techniques for indicating the quality of uplink continuity to the UE 115, the UE 115 may have real-time information to be able to address mismatches in uplink continuity (e.g., due to phase drift). In other words, the UE 115 may be informed when uplink continuity is not being maintained at a previously indicated (e.g., or assumed) level. Accordingly, the UE 115 may change future capability indications based on the feedback message from the base station 105. Additionally, the base station 105 may update assumptions corresponding to the capability of the UE 115 based on the feedback message. The described techniques provide for updating the UE 115 of the quality of uplink continuity being maintained. By keeping the UE 115 informed of uplink continuity, overall network efficiency may increase. 
FIG. 2 illustrates an example of a wireless communications system 200 that supports network feedback for uplink continuity in accordance with aspects of the present disclosure. The wireless communications system 200 may implement or be implemented by one or more aspects of the wireless communications system 100. For example, the wireless communications system 200 may include a UE 115-a and a base station 105-a, which may be examples of the UE 115 and the base station 105 as described with reference to FIG. 1. It should be noted that, while examples are discussed below, any number of devices and device types may be used for implementations described in the present disclosure. The base station 105-a and the UE 115-a may communicate over one or more communication links. For example, the base station 105-a may communicate with the UE 115-a over a downlink 235 and the UE 115-a may communicate with the base station 105-a over an uplink 205. In some examples, the UE 115-a may indicate a capability of uplink continuity supported by the UE 115-a, which may indicate the ability of the UE 115-a to support a given classification (e.g., variation of one or more parameters relative to a threshold) for uplink transmissions, which the base station 105-a may use to determine whether to perform DMRS bundling of uplink transmissions for joint channel estimation at the base station 105-a. That is, the UE 115-a may perform uplink transmissions such that the base station 105-a may collectively utilize DMRS from different times or messages (e.g., using DMRS from different TTIs or uplink messages from the UE 115-a) to estimate a channel (e.g., to perform joint channel estimation). Collectively utilizing DMRS across multiple TTIs or uplink messages may be referred to as DMRS bundling. In some cases, DMRS bundling may be used by the base station 105-a across multiple uplink transmissions or repetitions (e.g., PUCCH repetitions, PUSCH repetitions). 
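Joint channel estimation over bundled DMRS may be sketched as follows. This is an illustrative simplification, not the disclosure's method: a flat, static channel, least-squares per-TTI estimation, and simple averaging across the bundle are all assumed, and the pilot and channel values are made up for the sake of the sketch.

```python
def ls_estimates(received, pilots):
    """Per-subcarrier least-squares channel estimate from one TTI's DMRS."""
    return [r / p for r, p in zip(received, pilots)]

def bundled_estimate(per_tti):
    """DMRS bundling: average the per-TTI estimates across the bundle.
    Only meaningful if phase/amplitude continuity holds across the TTIs;
    otherwise the estimates being averaged describe different channels."""
    n_tti = len(per_tti)
    return [sum(tti[k] for tti in per_tti) / n_tti
            for k in range(len(per_tti[0]))]

pilots = [1 + 0j] * 4
h = 0.8 + 0.0j                       # true (flat, static) channel
rx_a = [h + 0.1 for _ in pilots]     # TTI a: estimate biased high by noise
rx_b = [h - 0.1 for _ in pilots]     # TTI b: estimate biased low by noise
joint = bundled_estimate([ls_estimates(rx_a, pilots),
                          ls_estimates(rx_b, pilots)])
assert abs(joint[0] - h) < 1e-12     # noise averages out across the bundle
```

The sketch shows why continuity matters: averaging only suppresses noise when the underlying channel (including the UE's transmit phase and amplitude) stays the same across the bundled TTIs.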
Uplink continuity may be associated with one or more parameters such as phase continuity, amplitude continuity, and timing advance continuity, which, if a UE 115-a is capable of maintaining a certain level, classification, or degree of uplink continuity, may enable the base station 105-a to use DMRS bundling (e.g., combining DMRS received from different TTIs or uplink messages) and perform joint channel estimation. The quality of uplink continuity may correspond to the degree of variation of one or more parameters, and depending on the quality of uplink continuity, the base station 105-a may determine whether to perform DMRS bundling as part of joint channel estimation. For instance, if the UE 115-a is unable to maintain a given class or level of uplink continuity (e.g., a given (e.g., low) quality of uplink continuity), the base station 105-a may determine not to perform DMRS bundling for joint channel estimation. If the UE 115-a maintains a given class or level of uplink continuity (e.g., a given (e.g., high) quality of uplink continuity), the base station 105-a may use DMRS bundling for joint channel estimation across multiple uplink transmissions from the UE 115-a. The amount that the one or more parameters vary may be used to classify a quality of uplink continuity, where the quality of uplink continuity may have a direct correspondence to the ability to perform DMRS bundling at the base station 105-a. For example, high quality uplink continuity may correspond to small variations of the one or more parameters (e.g., when a variation of a parameter is below a threshold variation). Small variations of the one or more parameters may translate to small variations in uplink transmission parameters over a number of TTIs, which may result in more efficient DMRS bundling. That is, small variations of the one or more parameters may correspond to more consistent uplink transmissions, which may enable more efficient DMRS bundling and, therefore, joint channel estimation. 
Conversely, low quality uplink continuity (e.g., lack of uplink continuity) may be associated with large variations of the one or more parameters (e.g., when a variation of a parameter is above a threshold variation). Large variations may result in one or more discontinuities in DMRS between TTIs, which may hinder DMRS bundling and, therefore, joint channel estimation at the base station 105-a. In some cases, the UE 115-a may initially indicate a quality of uplink continuity to the base station 105-a based on a capability to maintain small variations in the one or more parameters. For example, the UE 115-a may initially determine that variations corresponding to the one or more parameters are small. Based on the variations of the one or more parameters being small, the UE 115-a may indicate a capability to maintain high quality uplink continuity to the base station 105-a. In some cases, the UE 115-a may indicate a classification associated with the quality of uplink continuity. For example, the UE 115-a may indicate an uplink continuity classification of “high” based at least in part on the capability to maintain high uplink continuity and one or more thresholds (e.g., less than 1 dB of amplitude variation across TTIs). Based on the indicated capability, the base station 105-a may perform joint channel estimation of the uplink 205 by utilizing high quality uplink continuity to perform DMRS bundling. In some cases, however, uplink continuity (e.g., variations of the one or more parameters) at the UE 115-a may differ from observed uplink continuity at the base station 105-a. For example, a transmitted uplink message 210-a transmitted at the UE 115-a may correspond to one or more TTIs (e.g., TTI 215-a and TTI 215-b), where the UE 115-a may determine that there are small or no variations of the one or more parameters (e.g., phase continuity, amplitude continuity, constant timing advance) across the TTI 215-a and the TTI 215-b. 
Small or no variations of the one or more parameters may correspond to uplink continuity across both the TTI 215-a and the TTI 215-b, as illustrated in FIG. 2. However, due to one or more phenomena, such as channel conditions (e.g., fading, attenuation, blocking), there may be an uplink continuity mismatch between the transmitted uplink message 210-a transmitted at the UE 115-a, and a received uplink message 210-b received at the base station 105-a. That is, the uplink continuity assumed at the UE 115-a (e.g., based on small variations of the one or more parameters corresponding to the transmitted uplink message 210-a) may differ from uplink continuity observed at the base station 105-a (e.g., based on observed variations of the one or more parameters corresponding to the received uplink message 210-b). Likewise, the classification of uplink continuity as indicated by the UE 115-a may differ from the observed classification of uplink continuity at the base station 105-a. For example, the base station 105-a may receive the uplink message 210-b, which may span a TTI 220-a and a TTI 220-b. In some cases, the base station 105-a may determine that a parameter of the one or more parameters corresponding to uplink continuity exhibits large variations between the TTI 220-a and the TTI 220-b. For example, the base station 105-a may determine that an amplitude associated with the received uplink message 210-b undergoes an amplitude variation 225 across the TTI 220-a and the TTI 220-b, as illustrated in FIG. 2. Additionally, or alternatively, the base station 105-a may determine that a phase associated with the received uplink message 210-b undergoes a phase variation 230 (e.g., due to phase drift) across the TTI 220-a and the TTI 220-b. In other examples, the base station 105-a may determine that timing advances associated with the received uplink message 210-b undergo variation (e.g., do not remain constant across the TTI 220-a and the TTI 220-b). 
Additionally, or alternatively, the base station 105-a may perform one or more additional measurements to classify uplink continuity. For example, the base station 105-a may determine a maximum length of time (e.g., a number of TTIs, a duration of time) where a classification of uplink continuity (e.g., high uplink continuity) was observed. Additionally, or alternatively, the base station 105-a may determine a presence or absence of phase continuity, amplitude continuity, or both, corresponding to the received uplink message 210-b. Based on the observations of the variations of the one or more parameters, or additional measurements performed, at the base station 105-a, the base station 105-a may determine to reclassify the ability of the UE 115-a to maintain uplink continuity. For example, while the UE 115-a may have indicated high uplink continuity (e.g., low variation of the one or more parameters and one or more thresholds), the base station 105-a may reclassify the uplink continuity as “low,” based on large variations observed at the base station 105-a. That is, there may exist a mismatch between assumptions of uplink continuity at the UE 115-a and observed uplink continuity at the base station 105-a. In some examples, the base station 105-a may transmit a feedback message 240 to the UE 115-a indicating the classification of uplink continuity (e.g., low uplink continuity) observed by the base station 105-a. In some cases, the feedback message 240 may be transmitted as part of higher layer signaling (e.g., RRC). In some examples, the base station 105-a may partially send the feedback message 240 over MAC-CE. In other examples, the base station 105-a may utilize one or more bitfields in DCI to implicitly or explicitly indicate information about the quality of uplink continuity to the UE 115-a. 
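One way a base station might measure the phase variation 230 between two TTIs is by correlating the channel observed on consecutive DMRS occasions. The following is an illustrative sketch under stated assumptions, not the disclosure's method: the known pilot symbols are assumed already divided out (so each sample is a raw channel observation), and the DMRS length and drift value are made up.

```python
import cmath
import math

def phase_variation_deg(dmrs_tti_a, dmrs_tti_b):
    """Estimate the phase rotation between two TTIs by correlating
    the channel observations from TTI b against those from TTI a.
    The angle of the accumulated correlation is the common drift."""
    corr = sum(b * a.conjugate() for a, b in zip(dmrs_tti_a, dmrs_tti_b))
    return abs(math.degrees(cmath.phase(corr)))

tti_a = [1 + 0j] * 12                                  # 12 DMRS observations in TTI a
tti_b = [cmath.exp(1j * math.radians(40.0))] * 12      # same channel, 40-degree drift
drift = phase_variation_deg(tti_a, tti_b)
assert drift > 30.0  # exceeds an illustrative class 1 phase threshold -> reclassify as "low"
```

A measured drift exceeding the threshold associated with the UE's indicated class is the kind of observation that could trigger the reclassification and the feedback message 240 described above.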
Additionally, or alternatively, the base station 105-a may update assumptions made regarding the capability of the UE 115-a to maintain uplink continuity based on the feedback message 240 that the base station 105-a transmits to the UE 115-a. In some cases, the base station 105-a may change related procedures (e.g., whether to apply DMRS bundling for uplink reception or not) accordingly. For example, if the base station 105-a determines uplink continuity is low, the base station 105-a may cease DMRS bundling for uplink reception. In some cases, the UE 115-a may change future capability indications based on the feedback message 240 from the base station 105-a. FIG. 3 illustrates an example of a bitmap 300 that supports network feedback for uplink continuity in accordance with aspects of the present disclosure. The bitmap 300 may be utilized by one or more aspects of the wireless communications system 100 and the wireless communications system 200. For example, the bitmap 300 may correspond to communications between a UE 115-b and a base station 105-b, which may be examples of the UE 115 and the base station 105 as described with reference to FIG. 1. It should be noted that, while examples are discussed below, any number of devices and device types may be used to accomplish implementations described in the present disclosure. The base station 105-b and the UE 115-b may communicate over one or more communication links. For example, the base station 105-b may communicate with the UE 115-b over a downlink 320 and the UE 115-b may communicate with the base station 105-b over an uplink 305. In some examples, the UE 115-b may indicate an uplink continuity capability (e.g., whether the UE 115-b is capable of maintaining a class or level of uplink continuity that supports DMRS bundling of uplink transmissions for joint channel estimation at the base station 105-b) via a capability message 310. 
The capability message 310 may include one or more bitfields corresponding to one or more parameters associated with a quality of uplink continuity. For example, the capability message 310 may include a set of bits indicating a phase drift value, an amplitude variation value, a timing advance variation value, a length of time of uplink continuity, or any combination thereof. For example, a bit corresponding to phase drift may be set to 0, indicating a lack of phase drift and, therefore, the existence of phase continuity over a time period (e.g., one or more TTIs or uplink messages). Additionally, or alternatively, the capability message 310 may include an indication of a presence or absence of phase continuity, amplitude continuity, or both. For example, if a bit corresponding to a presence or absence of amplitude continuity is 0, amplitude continuity may exist over one or more TTIs or uplink messages. In some examples, the capability message 310 may include one or more bits corresponding to an uplink continuity classification 315-a. The uplink continuity classification 315-a may be based on the one or more parameters corresponding to uplink continuity and one or more thresholds. For example, the UE 115-b may indicate, via the capability message 310, that the UE 115-b may support a class 1 (e.g., high level) uplink continuity, which may correspond to one or more parameters being below respective thresholds. For instance, class 1 uplink continuity may correspond to a phase variation being below a phase variation threshold (e.g., 30 degrees), an amplitude variation being below an amplitude variation threshold (e.g., 1 dB), or a timing advance variation being below a timing advance threshold (e.g., 1 ms). 
Class 2 uplink continuity (e.g., low level) may correspond to a phase variation being in a phase range (e.g., between phase variation thresholds, such as between 30 and 45 degrees), an amplitude variation being in an amplitude range (e.g., between amplitude variation thresholds, such as between 1 dB and 2 dB), or a timing advance variation being in a timing advance range (e.g., between timing advance thresholds, such as between 1 ms and 2 ms). Based on the one or more parameters, the UE 115-b may classify the quality of uplink continuity as high or low. Other levels (e.g., medium, medium-high) or classes (e.g., class 3, class 4) may be considered and may correspond to different thresholds or ranges for one or more parameters. The classification may be indicated using one or more bitfields, such as within an uplink continuity classification 315-a. In some examples, such as those illustrated in FIG. 3, a classification of high may correspond to a first bit in the uplink continuity classification 315-a being equal to zero, and a second bit in the uplink continuity classification 315-a being equal to 1. However, different values, numbers of bits, and structures of bitmaps may be utilized to accomplish techniques described herein. Additionally, or alternatively, the uplink continuity classification 315-a may be determined based on variations of one or more parameters with respect to one or more thresholds. For example, the UE 115-b may determine that a phase corresponding to uplink communications varies by plus or minus 5 degrees, that amplitude varies by plus or minus 1 dB, and that timing advance varies by 0 to 1 ms. Additionally, or alternatively, the UE 115-b may determine that the variations satisfy one or more thresholds. For example, a high uplink continuity classification may be based on whether observed phase variations are below 7 degrees, observed amplitude variations are below 2 dB, and observed timing advance variations are below 2 ms. 
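The threshold-based classification described above may be sketched as follows, using the illustrative class 1 and class 2 values from the text (30/45 degrees, 1/2 dB, 1/2 ms). The function names and the decision to return no class when neither range is satisfied are assumptions made for the sketch.

```python
# Illustrative upper bounds per class, taken from the example values in the text:
# class 1 (high): each variation below the first threshold;
# class 2 (low): each variation below the second threshold (i.e., within range).
CLASS_THRESHOLDS = {
    1: {"phase_deg": 30.0, "amp_db": 1.0, "ta_ms": 1.0},
    2: {"phase_deg": 45.0, "amp_db": 2.0, "ta_ms": 2.0},
}

def classify(phase_deg, amp_db, ta_ms):
    """Return the lowest-numbered class whose thresholds all hold,
    or None if no supported class applies (DMRS bundling not advisable)."""
    for cls in (1, 2):
        t = CLASS_THRESHOLDS[cls]
        if (phase_deg < t["phase_deg"]
                and amp_db < t["amp_db"]
                and ta_ms < t["ta_ms"]):
            return cls
    return None

assert classify(5.0, 0.5, 0.5) == 1     # small variations -> class 1 (high)
assert classify(40.0, 1.5, 1.5) == 2    # larger variations -> class 2 (low)
assert classify(60.0, 3.0, 2.5) is None # beyond all ranges -> no class
```

Both the UE (when forming the capability message 310) and the base station (when reclassifying from observed variations) could apply the same mapping, which is what makes a mismatch between the two classifications detectable.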
Based on the variations and the thresholds, the UE 115-b may classify the quality of uplink continuity as high. In some cases, however, there may exist an uplink continuity mismatch between the capability message 310 and uplink continuity observed at the base station 105-b. For example, the base station 105-b may observe larger or smaller variations (e.g., a different classification) corresponding to the one or more parameters than that which is indicated by the UE 115-b. That is, while the UE 115-b may indicate a classification of high quality uplink continuity in the uplink continuity classification 315-a, the base station 105-b may observe different variations that may correspond to a different uplink continuity classification. For example, the base station 105-b may observe a different, larger phase variation (e.g., a phase variation of plus or minus 15 degrees) that may not satisfy one or more thresholds. Additionally, the base station 105-b may determine that there is an absence of phase continuity corresponding to uplink transmissions from the UE 115-b. In such cases, the base station 105-b may determine a different quality of uplink continuity (e.g., low, very low). Based on the determination of the quality of uplink continuity being low, the base station 105-b may generate a feedback message 325 and transmit the feedback message 325 to the UE 115-b. In some examples, the feedback message 325 may include one or more bitfields indicating one or more parameters corresponding to the quality of uplink continuity. For instance, the base station may determine that there is a mismatch between the phase variation as indicated by the UE 115-b and the phase variation observed at the base station 105-b. As such, the base station 105-b may indicate a larger phase variation to the UE 115-b within the feedback message 325. For example, a bitfield 330 within the feedback message 325 may indicate a presence of large phase variation (e.g., by being equal to 1). 
The base station 105-b may further indicate small variations in amplitude via a bitfield 335 (e.g., by being equal to 0). Additionally, the base station 105-b may indicate the absence of phase continuity via a bitfield 340 (e.g., if the bitfield 340 is equal to 1, there may be an absence of phase continuity). The base station 105-b may also indicate the presence of amplitude continuity via a bitfield 345 (e.g., if the bitfield 345 is equal to 0, amplitude continuity may be present). Additionally, or alternatively, the base station 105-b may indicate other parameters within the feedback message 325, such as timing advance variation, how long amplitude continuity has been observed, how long phase continuity has been observed, and the like. It should be noted that, while particular examples (e.g., bitmaps, bitfields) are discussed within the present disclosure, any arrangement of bits, bitmaps, bitfield mappings, and the like may be used in any combination, and is not limited to the examples discussed herein. Based on the observed quality of uplink continuity at the base station 105-b and the one or more parameters indicated within the feedback message 325, the base station 105-b may also indicate an uplink continuity classification 315-b in the feedback message 325. In some examples, such as the previous example, the uplink continuity classification 315-b may differ from the uplink continuity classification 315-a as indicated by the UE 115-b. In the present example, the base station 105-b may determine a classification of low corresponding to uplink continuity. The base station may indicate a low classification of uplink continuity via the uplink continuity classification 315-b (e.g., by a first bitfield of the uplink continuity classification 315-b being equal to 1, and a second bitfield being equal to 0). 
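The feedback bitmap described above may be sketched as a few bit operations. The bit positions and the two-bit class encoding below are hypothetical, chosen only to mirror the example bitfields (330, 335, 340, 345) and the classification 315-b from the text; as the text notes, any bitfield arrangement could be used.

```python
# Hypothetical bit positions mirroring the bitfields in the example:
LARGE_PHASE_VAR = 0   # 1 -> large phase variation observed (cf. bitfield 330)
LARGE_AMP_VAR   = 1   # 1 -> large amplitude variation observed (cf. bitfield 335)
NO_PHASE_CONT   = 2   # 1 -> absence of phase continuity (cf. bitfield 340)
NO_AMP_CONT     = 3   # 1 -> absence of amplitude continuity (cf. bitfield 345)
CLASS_SHIFT     = 4   # two-bit uplink continuity classification (cf. 315-b)

def encode_feedback(large_phase, large_amp, no_phase_cont, no_amp_cont, cls):
    """Pack the flags and the two-bit classification into one word."""
    word = (large_phase << LARGE_PHASE_VAR) | (large_amp << LARGE_AMP_VAR)
    word |= (no_phase_cont << NO_PHASE_CONT) | (no_amp_cont << NO_AMP_CONT)
    word |= (cls & 0b11) << CLASS_SHIFT
    return word

def decode_class(word):
    """Recover the two-bit classification at the UE side."""
    return (word >> CLASS_SHIFT) & 0b11

# The worked example from the text: large phase variation, small amplitude
# variation, absent phase continuity, present amplitude continuity, class
# "low" encoded as first bit 1, second bit 0 (0b10 here).
msg = encode_feedback(1, 0, 1, 0, 0b10)
assert msg & (1 << LARGE_PHASE_VAR)
assert not msg & (1 << LARGE_AMP_VAR)
assert decode_class(msg) == 0b10
```

Such a word could be carried in an RRC bitmap, split across MAC-CE and DCI bitfields, or mapped in any other arrangement, consistent with the transport options described for the feedback message.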
Additionally, or alternatively, the base station 105-b may select an uplink enhancement procedure for communicating with the UE 115-b based on the quality of uplink continuity observed at the base station 105-b. The base station 105-b may transmit the feedback message 325 to the UE 115-b (e.g., via RRC, MAC-CE, DCI, or a combination thereof), enabling real-time resolution of uplink continuity quality mismatches between the UE 115-b and the base station 105-b. For example, based on the feedback message 325, the UE 115-b may perform one or more actions (e.g., modifications) to future capability messages to the base station 105-b. For example, the UE 115-b may determine that the quality of uplink continuity as observed by the base station 105-b is lower than previously indicated in the capability message 310. Based on the differing uplink continuity classifications, the UE 115-b may modify one or more aspects corresponding to a capability to maintain uplink continuity at the UE 115-b. Additionally, or alternatively, the UE 115-b may transmit a second capability message indicating a modified capability to maintain uplink continuity to the base station 105-b. FIG. 4 illustrates an example of a process flow 400 that supports network feedback for uplink continuity in accordance with aspects of the present disclosure. The process flow 400 may be implemented by one or more aspects of the wireless communications system 100 and the wireless communications system 200. For example, the process flow 400 may correspond to communications between a UE 115-c and a base station 105-c, which may be examples of the UE 115 and the base station 105 as described with reference to FIG. 1. It should be noted that, while examples are discussed below, any number of devices and device types may be used to accomplish implementations described in the present disclosure. In the following description of the process flow 400, operations between the UE 115-c and the base station 105-c may occur in a different order or at different times than as shown. 
Some operations may also be omitted from the process flow 400, and other operations may be added to the process flow 400. At 405, the UE 115-c may transmit a capability message to the base station 105-c indicating a quality of uplink continuity. For example, the UE 115-c may indicate an uplink continuity classification of high (e.g., low phase variation, low amplitude variation, constant timing advance), corresponding to high uplink continuity. The base station 105-c may determine, based on the capability message at 405, to perform DMRS bundling by leveraging high uplink continuity. At 410, the UE 115-c may send uplink transmissions to the base station 105-c, where the base station 105-c may perform DMRS bundling over a set of TTIs corresponding to the uplink transmissions. At 415, the base station 105-c may determine uplink continuity based on observing variations in one or more parameters corresponding to uplink continuity (e.g., phase continuity, amplitude continuity, timing advance). In some cases, the base station 105-c may further identify a presence or absence of phase continuity, a presence or absence of amplitude continuity, a length of time that uplink continuity has been maintained, and the like. At 420, the base station 105-c may determine an uplink continuity classification based on observed variations of one or more parameters corresponding to uplink continuity. Additionally, or alternatively, the base station 105-c may determine the uplink continuity classification based on the presence or absence of phase continuity, the presence or absence of amplitude continuity, a length of time (e.g., multiple TTIs) uplink continuity has been maintained, or a combination thereof. At 425, the base station 105-c may transmit a feedback message indicating the one or more parameters corresponding to uplink continuity, the uplink continuity classification, or a combination thereof. 
In some cases, the uplink continuity classification may differ from an uplink continuity classification indicated by the UE 115-c (e.g., via the capability message at 405). Based on the feedback message, the UE 115-c may modify future capability messages (e.g., to prevent further mismatching between indicated uplink continuity and uplink continuity observed at the base station 105-c). As such, at 435, the UE 115-c may transmit a capability message to the base station 105-c indicating a modified uplink continuity classification, modified one or more parameters, or a combination thereof, based on the feedback message received at 425. FIG. 5 shows a block diagram 500 of a device 505 that supports network feedback for uplink continuity in accordance with aspects of the present disclosure. The device 505 may be an example of aspects of a UE 115 as described herein. The device 505 may include a receiver 510, a transmitter 515, and a communications manager 520. The device 505 may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver 510 may provide a means for receiving information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to network feedback for uplink continuity). Information may be passed on to other components of the device 505. The receiver 510 may utilize a single antenna or a set of multiple antennas. The transmitter 515 may provide a means for transmitting signals generated by other components of the device 505. For example, the transmitter 515 may transmit information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to network feedback for uplink continuity). 
In some examples, the transmitter 515 may be co-located with a receiver 510 in a transceiver module. The transmitter 515 may utilize a single antenna or a set of multiple antennas.

The communications manager 520, the receiver 510, the transmitter 515, or various combinations thereof or various components thereof may be examples of means for performing various aspects of network feedback for uplink continuity as described herein. For example, the communications manager 520, the receiver 510, the transmitter 515, or various combinations or components thereof may support a method for performing one or more of the functions described herein.

In some examples, the communications manager 520, the receiver 510, the transmitter 515, or various combinations or components thereof may be implemented in hardware (e.g., in communications management circuitry). The hardware may include a processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic, discrete hardware components, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure. In some examples, a processor and memory coupled with the processor may be configured to perform one or more of the functions described herein (e.g., by executing, by the processor, instructions stored in the memory). Additionally, or alternatively, in some examples, the communications manager 520, the receiver 510, the transmitter 515, or various combinations or components thereof may be implemented in code (e.g., as communications management software or firmware) executed by a processor.
If implemented in code executed by a processor, the functions of the communications manager 520, the receiver 510, the transmitter 515, or various combinations or components thereof may be performed by a general-purpose processor, a DSP, a central processing unit (CPU), an ASIC, an FPGA, or any combination of these or other programmable logic devices (e.g., configured as or otherwise supporting a means for performing the functions described in the present disclosure).

In some examples, the communications manager 520 may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the receiver 510, the transmitter 515, or both. For example, the communications manager 520 may receive information from the receiver 510, send information to the transmitter 515, or be integrated in combination with the receiver 510, the transmitter 515, or both to receive information, transmit information, or perform various other operations as described herein.

The communications manager 520 may support wireless communications at a UE in accordance with examples as disclosed herein. For example, the communications manager 520 may be configured as or otherwise support a means for transmitting a set of multiple uplink messages to a base station over a set of TTIs. The communications manager 520 may be configured as or otherwise support a means for receiving, from the base station, a feedback message indicating a quality of uplink continuity of the UE based on the set of multiple uplink messages, where the uplink continuity corresponds to a variation of one or more uplink parameters associated with the set of multiple uplink messages over the set of TTIs. The communications manager 520 may be configured as or otherwise support a means for communicating with the base station based on the feedback message.
By including or configuring the communications manager 520 in accordance with examples as described herein, the device 505 (e.g., a processor controlling or otherwise coupled to the receiver 510, the transmitter 515, the communications manager 520, or a combination thereof) may support techniques for reduced processing and power consumption by reducing inaccurate channel estimation via network feedback of uplink continuity. Further, the described techniques support more efficient communications resource utilization and an overall reduction in signaling overhead by avoiding prolonged mismatches between indicated uplink continuity and observed uplink continuity, which may prevent additional signaling due to inaccurate channel estimation.

FIG. 6 shows a block diagram 600 of a device 605 that supports network feedback for uplink continuity in accordance with aspects of the present disclosure. The device 605 may be an example of aspects of a device 505 or a UE 115 as described herein. The device 605 may include a receiver 610, a transmitter 615, and a communications manager 620. The device 605 may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses).

The receiver 610 may provide a means for receiving information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to network feedback for uplink continuity). Information may be passed on to other components of the device 605. The receiver 610 may utilize a single antenna or a set of multiple antennas.

The transmitter 615 may provide a means for transmitting signals generated by other components of the device 605.
For example, the transmitter 615 may transmit information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to network feedback for uplink continuity). In some examples, the transmitter 615 may be co-located with a receiver 610 in a transceiver module. The transmitter 615 may utilize a single antenna or a set of multiple antennas.

The device 605, or various components thereof, may be an example of means for performing various aspects of network feedback for uplink continuity as described herein. For example, the communications manager 620 may include an uplink message transmitter 625, a feedback message receiver 630, a communication component 635, or any combination thereof. The communications manager 620 may be an example of aspects of a communications manager 520 as described herein. In some examples, the communications manager 620, or various components thereof, may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the receiver 610, the transmitter 615, or both. For example, the communications manager 620 may receive information from the receiver 610, send information to the transmitter 615, or be integrated in combination with the receiver 610, the transmitter 615, or both to receive information, transmit information, or perform various other operations as described herein.

The communications manager 620 may support wireless communications at a UE in accordance with examples as disclosed herein. The uplink message transmitter 625 may be configured as or otherwise support a means for transmitting a set of multiple uplink messages to a base station over a set of TTIs.
The feedback message receiver 630 may be configured as or otherwise support a means for receiving, from the base station, a feedback message indicating a quality of uplink continuity of the UE based on the set of multiple uplink messages, where the uplink continuity corresponds to a variation of one or more uplink parameters associated with the set of multiple uplink messages over the set of TTIs. The communication component 635 may be configured as or otherwise support a means for communicating with the base station based on the feedback message.

FIG. 7 shows a block diagram 700 of a communications manager 720 that supports network feedback for uplink continuity in accordance with aspects of the present disclosure. The communications manager 720 may be an example of aspects of a communications manager 520, a communications manager 620, or both, as described herein. The communications manager 720, or various components thereof, may be an example of means for performing various aspects of network feedback for uplink continuity as described herein. For example, the communications manager 720 may include an uplink message transmitter 725, a feedback message receiver 730, a communication component 735, an uplink continuity indication component 740, a capability message transmitter 745, a capability modification component 750, a feedback message receiver 755, an uplink continuity component 760, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses).

The communications manager 720 may support wireless communications at a UE in accordance with examples as disclosed herein. The uplink message transmitter 725 may be configured as or otherwise support a means for transmitting a set of multiple uplink messages to a base station over a set of TTIs.
The feedback message receiver 730 may be configured as or otherwise support a means for receiving, from the base station, a feedback message indicating a quality of uplink continuity of the UE based on the set of multiple uplink messages, where the uplink continuity corresponds to a variation of one or more uplink parameters associated with the set of multiple uplink messages over the set of TTIs. The communication component 735 may be configured as or otherwise support a means for communicating with the base station based on the feedback message.

In some examples, to support receiving the feedback message, the uplink continuity indication component 740 may be configured as or otherwise support a means for receiving an indication of a phase drift, an amplitude variation, a timing advance variation, or a combination thereof observed by the base station over the set of TTIs based on the set of multiple uplink messages. In some examples, to support receiving the feedback message, the uplink continuity indication component 740 may be configured as or otherwise support a means for receiving an indication of a time length associated with uplink continuity of the UE observed by the base station over the set of TTIs based on the set of multiple uplink messages. In some examples, to support receiving the feedback message, the uplink continuity indication component 740 may be configured as or otherwise support a means for receiving an indication of presence or absence of phase continuity, amplitude continuity, or both observed by the base station over the set of TTIs based on the set of multiple uplink messages.
In some examples, to support receiving the feedback message, the uplink continuity indication component 740 may be configured as or otherwise support a means for receiving downlink control information including a set of bits indicating a phase drift value, an amplitude variation value, a timing advance variation value, a length of time of uplink continuity, or any combination thereof.

In some examples, the capability message transmitter 745 may be configured as or otherwise support a means for transmitting a UE capability message to the base station, the UE capability message indicating a capability of the UE to maintain uplink continuity for uplink transmissions. In some examples, the capability of the UE corresponds to an uplink continuity classification of a set of uplink continuity classifications associated with uplink continuity for the UE.

In some examples, to support communicating with the base station, the capability modification component 750 may be configured as or otherwise support a means for modifying the capability of the UE to maintain uplink continuity for uplink transmissions based on the feedback message from the base station. In some examples, to support communicating with the base station, the capability message transmitter 745 may be configured as or otherwise support a means for transmitting a UE capability message indicating the modified capability of the UE to maintain uplink continuity for uplink transmissions.

In some examples, to support receiving the feedback message, the feedback message receiver 755 may be configured as or otherwise support a means for receiving the feedback message via RRC signaling, a MAC-CE, DCI, or a combination thereof. In some examples, the one or more uplink parameters includes a phase, an amplitude, a timing advance, or any combination thereof associated with the set of multiple uplink messages.
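The "set of bits" carried in downlink control information can be illustrated with a simple pack/unpack sketch. The field names, their 2-bit widths, and their ordering are all assumptions for illustration only; the disclosure specifies that the bits indicate quantized phase drift, amplitude variation, timing advance variation, and continuity duration, but not how they are laid out.

```python
# Hypothetical DCI field layout: 2 bits per field, most-significant first.
# The patent says only that "a set of bits" carries these quantized values.
FIELDS = ("phase_drift", "amplitude_variation",
          "timing_advance_variation", "continuity_duration")
WIDTH = 2  # bits per field (assumed)

def pack_feedback(values):
    """Pack quantized indices (0-3 each) into a single DCI payload integer."""
    payload = 0
    for name in FIELDS:
        payload = (payload << WIDTH) | (values[name] & ((1 << WIDTH) - 1))
    return payload

def unpack_feedback(payload):
    """Recover the quantized indices from the DCI payload integer."""
    values = {}
    for name in reversed(FIELDS):  # last-packed field sits in the low bits
        values[name] = payload & ((1 << WIDTH) - 1)
        payload >>= WIDTH
    return values
```

With this layout the whole feedback fits in 8 bits, which is consistent with the document's emphasis on low signaling overhead, though the actual bit budget is not specified.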
FIG. 8 shows a diagram of a system 800 including a device 805 that supports network feedback for uplink continuity in accordance with aspects of the present disclosure. The device 805 may be an example of or include the components of a device 505, a device 605, or a UE 115 as described herein. The device 805 may communicate wirelessly with one or more base stations 105, UEs 115, or any combination thereof. The device 805 may include components for bi-directional voice and data communications including components for transmitting and receiving communications, such as a communications manager 820, an input/output (I/O) controller 810, a transceiver 815, an antenna 825, a memory 830, code 835, and a processor 840. These components may be in electronic communication or otherwise coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more buses (e.g., a bus 845).

The I/O controller 810 may manage input and output signals for the device 805. The I/O controller 810 may also manage peripherals not integrated into the device 805. In some cases, the I/O controller 810 may represent a physical connection or port to an external peripheral. In some cases, the I/O controller 810 may utilize an operating system such as iOS®, ANDROID®, MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, LINUX®, or another known operating system. Additionally, or alternatively, the I/O controller 810 may represent or interact with a modem, a keyboard, a mouse, a touchscreen, or a similar device. In some cases, the I/O controller 810 may be implemented as part of a processor, such as the processor 840. In some cases, a user may interact with the device 805 via the I/O controller 810 or via hardware components controlled by the I/O controller 810.

In some cases, the device 805 may include a single antenna 825. However, in some other cases, the device 805 may have more than one antenna 825, which may be capable of concurrently transmitting or receiving multiple wireless transmissions.
The transceiver 815 may communicate bi-directionally, via the one or more antennas 825, wired, or wireless links as described herein. For example, the transceiver 815 may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver. The transceiver 815 may also include a modem to modulate the packets, to provide the modulated packets to one or more antennas 825 for transmission, and to demodulate packets received from the one or more antennas 825. The transceiver 815, or the transceiver 815 and one or more antennas 825, may be an example of a transmitter 515, a transmitter 615, a receiver 510, a receiver 610, or any combination thereof or component thereof, as described herein.

The memory 830 may include random access memory (RAM) and read-only memory (ROM). The memory 830 may store computer-readable, computer-executable code 835 including instructions that, when executed by the processor 840, cause the device 805 to perform various functions described herein. The code 835 may be stored in a non-transitory computer-readable medium such as system memory or another type of memory. In some cases, the code 835 may not be directly executable by the processor 840 but may cause a computer (e.g., when compiled and executed) to perform functions described herein. In some cases, the memory 830 may contain, among other things, a basic I/O system (BIOS) which may control basic hardware or software operation such as the interaction with peripheral components or devices.

The processor 840 may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, the processor 840 may be configured to operate a memory array using a memory controller. In some other cases, a memory controller may be integrated into the processor 840.
The processor 840 may be configured to execute computer-readable instructions stored in a memory (e.g., the memory 830) to cause the device 805 to perform various functions (e.g., functions or tasks supporting network feedback for uplink continuity). For example, the device 805 or a component of the device 805 may include a processor 840 and memory 830 coupled to the processor 840, the processor 840 and memory 830 configured to perform various functions described herein.

The communications manager 820 may support wireless communications at a UE in accordance with examples as disclosed herein. For example, the communications manager 820 may be configured as or otherwise support a means for transmitting a set of multiple uplink messages to a base station over a set of TTIs. The communications manager 820 may be configured as or otherwise support a means for receiving, from the base station, a feedback message indicating a quality of uplink continuity of the UE based on the set of multiple uplink messages, where the uplink continuity corresponds to a variation of one or more uplink parameters associated with the set of multiple uplink messages over the set of TTIs. The communications manager 820 may be configured as or otherwise support a means for communicating with the base station based on the feedback message.

By including or configuring the communications manager 820 in accordance with examples as described herein, the device 805 may support techniques for reducing latency, reducing signaling overhead, and increasing network efficiency by reducing occurrences of inaccurate channel estimation. Specifically, a feedback message may correct mismatches between an indicated uplink continuity and observed uplink continuity. By deconflicting a quality of uplink continuity between two or more wireless devices, joint channel estimation may be performed, which may result in more efficient network operations, more efficient utilization of power and network resources, and reduced latency.
In some examples, the communications manager 820 may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the transceiver 815, the one or more antennas 825, or any combination thereof. Although the communications manager 820 is illustrated as a separate component, in some examples, one or more functions described with reference to the communications manager 820 may be supported by or performed by the processor 840, the memory 830, the code 835, or any combination thereof. For example, the code 835 may include instructions executable by the processor 840 to cause the device 805 to perform various aspects of network feedback for uplink continuity as described herein, or the processor 840 and the memory 830 may be otherwise configured to perform or support such operations.

FIG. 9 shows a block diagram 900 of a device 905 that supports network feedback for uplink continuity in accordance with aspects of the present disclosure. The device 905 may be an example of aspects of a base station 105 as described herein. The device 905 may include a receiver 910, a transmitter 915, and a communications manager 920. The device 905 may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses).

The receiver 910 may provide a means for receiving information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to network feedback for uplink continuity). Information may be passed on to other components of the device 905. The receiver 910 may utilize a single antenna or a set of multiple antennas.

The transmitter 915 may provide a means for transmitting signals generated by other components of the device 905.
For example, the transmitter 915 may transmit information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to network feedback for uplink continuity). In some examples, the transmitter 915 may be co-located with a receiver 910 in a transceiver module. The transmitter 915 may utilize a single antenna or a set of multiple antennas.

The communications manager 920, the receiver 910, the transmitter 915, or various combinations thereof or various components thereof may be examples of means for performing various aspects of network feedback for uplink continuity as described herein. For example, the communications manager 920, the receiver 910, the transmitter 915, or various combinations or components thereof may support a method for performing one or more of the functions described herein.

In some examples, the communications manager 920, the receiver 910, the transmitter 915, or various combinations or components thereof may be implemented in hardware (e.g., in communications management circuitry). The hardware may include a processor, a DSP, an ASIC, an FPGA or other programmable logic device, a discrete gate or transistor logic, discrete hardware components, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure. In some examples, a processor and memory coupled with the processor may be configured to perform one or more of the functions described herein (e.g., by executing, by the processor, instructions stored in the memory). Additionally, or alternatively, in some examples, the communications manager 920, the receiver 910, the transmitter 915, or various combinations or components thereof may be implemented in code (e.g., as communications management software or firmware) executed by a processor.
If implemented in code executed by a processor, the functions of the communications manager 920, the receiver 910, the transmitter 915, or various combinations or components thereof may be performed by a general-purpose processor, a DSP, a CPU, an ASIC, an FPGA, or any combination of these or other programmable logic devices (e.g., configured as or otherwise supporting a means for performing the functions described in the present disclosure).

In some examples, the communications manager 920 may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the receiver 910, the transmitter 915, or both. For example, the communications manager 920 may receive information from the receiver 910, send information to the transmitter 915, or be integrated in combination with the receiver 910, the transmitter 915, or both to receive information, transmit information, or perform various other operations as described herein.

The communications manager 920 may support wireless communications at a base station in accordance with examples as disclosed herein. For example, the communications manager 920 may be configured as or otherwise support a means for receiving, from a UE, a set of multiple uplink messages over a set of TTIs. The communications manager 920 may be configured as or otherwise support a means for transmitting, to the UE, a feedback message indicating a quality of uplink continuity of the UE based on the set of multiple uplink messages, where the uplink continuity corresponds to a variation of one or more uplink parameters associated with the set of multiple uplink messages over the set of TTIs. The communications manager 920 may be configured as or otherwise support a means for communicating with the UE based on the feedback message.
By including or configuring the communications manager 920 in accordance with examples as described herein, the device 905 (e.g., a processor controlling or otherwise coupled to the receiver 910, the transmitter 915, the communications manager 920, or a combination thereof) may support techniques for reduced processing and power consumption by reducing inaccurate channel estimation via network feedback of uplink continuity. Further, the described techniques support more efficient communications resource utilization and an overall reduction in signaling overhead by avoiding prolonged mismatches between indicated uplink continuity and observed uplink continuity, which may prevent additional signaling due to inaccurate channel estimation.

FIG. 10 shows a block diagram 1000 of a device 1005 that supports network feedback for uplink continuity in accordance with aspects of the present disclosure. The device 1005 may be an example of aspects of a device 905 or a base station 105 as described herein. The device 1005 may include a receiver 1010, a transmitter 1015, and a communications manager 1020. The device 1005 may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses).

The receiver 1010 may provide a means for receiving information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to network feedback for uplink continuity). Information may be passed on to other components of the device 1005. The receiver 1010 may utilize a single antenna or a set of multiple antennas.

The transmitter 1015 may provide a means for transmitting signals generated by other components of the device 1005.
For example, the transmitter 1015 may transmit information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to network feedback for uplink continuity). In some examples, the transmitter 1015 may be co-located with a receiver 1010 in a transceiver module. The transmitter 1015 may utilize a single antenna or a set of multiple antennas.

The device 1005, or various components thereof, may be an example of means for performing various aspects of network feedback for uplink continuity as described herein. For example, the communications manager 1020 may include an uplink message receiver 1025, a feedback message transmitter 1030, a communication component 1035, or any combination thereof. The communications manager 1020 may be an example of aspects of a communications manager 920 as described herein. In some examples, the communications manager 1020, or various components thereof, may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the receiver 1010, the transmitter 1015, or both. For example, the communications manager 1020 may receive information from the receiver 1010, send information to the transmitter 1015, or be integrated in combination with the receiver 1010, the transmitter 1015, or both to receive information, transmit information, or perform various other operations as described herein.

The communications manager 1020 may support wireless communications at a base station in accordance with examples as disclosed herein. The uplink message receiver 1025 may be configured as or otherwise support a means for receiving, from a UE, a set of multiple uplink messages over a set of TTIs.
The feedback message transmitter 1030 may be configured as or otherwise support a means for transmitting, to the UE, a feedback message indicating a quality of uplink continuity of the UE based on the set of multiple uplink messages, where the uplink continuity corresponds to a variation of one or more uplink parameters associated with the set of multiple uplink messages over the set of TTIs. The communication component 1035 may be configured as or otherwise support a means for communicating with the UE based on the feedback message.

FIG. 11 shows a block diagram 1100 of a communications manager 1120 that supports network feedback for uplink continuity in accordance with aspects of the present disclosure. The communications manager 1120 may be an example of aspects of a communications manager 920, a communications manager 1020, or both, as described herein. The communications manager 1120, or various components thereof, may be an example of means for performing various aspects of network feedback for uplink continuity as described herein. For example, the communications manager 1120 may include an uplink message receiver 1125, a feedback message transmitter 1130, a communication component 1135, an uplink continuity indication transmitter 1140, an uplink enhancement procedure component 1145, a capability message receiver 1150, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses).

The communications manager 1120 may support wireless communications at a base station in accordance with examples as disclosed herein. The uplink message receiver 1125 may be configured as or otherwise support a means for receiving, from a UE, a set of multiple uplink messages over a set of TTIs.
The feedback message transmitter 1130 may be configured as or otherwise support a means for transmitting, to the UE, a feedback message indicating a quality of uplink continuity of the UE based on the set of multiple uplink messages, where the uplink continuity corresponds to a variation of one or more uplink parameters associated with the set of multiple uplink messages over the set of TTIs. The communication component 1135 may be configured as or otherwise support a means for communicating with the UE based on the feedback message.

In some examples, to support transmitting the feedback message, the uplink continuity indication transmitter 1140 may be configured as or otherwise support a means for transmitting an indication of a phase drift, an amplitude variation, a timing advance variation, or a combination thereof observed by the base station over the set of TTIs based on the set of multiple uplink messages. In some examples, to support transmitting the feedback message, the uplink continuity indication transmitter 1140 may be configured as or otherwise support a means for transmitting an indication of a time length associated with uplink continuity of the UE observed by the base station over the set of TTIs based on the set of multiple uplink messages. In some examples, to support transmitting the feedback message, the uplink continuity indication transmitter 1140 may be configured as or otherwise support a means for transmitting an indication of presence or absence of phase continuity, amplitude continuity, or both observed by the base station over the set of TTIs based on the set of multiple uplink messages.
In some examples, to support transmitting the feedback message, the uplink continuity indication transmitter 1140 may be configured as or otherwise support a means for transmitting downlink control information including a set of bits indicating a phase drift value, an amplitude variation value, a timing advance variation value, a length of time of uplink continuity, or any combination thereof.

In some examples, to support communicating with the UE, the uplink enhancement procedure component 1145 may be configured as or otherwise support a means for selecting an uplink enhancement procedure for communicating with the UE based on the quality of the uplink continuity of the UE. In some examples, the uplink enhancement procedure includes bundling of reference signals across one or more uplink transmissions.

In some examples, the capability message receiver 1150 may be configured as or otherwise support a means for receiving a UE capability message from the UE, the UE capability message indicating a capability of the UE to maintain uplink continuity for uplink transmissions. In some examples, the capability of the UE corresponds to an uplink continuity classification of a set of uplink continuity classifications associated with uplink continuity for the UE. In some examples, the uplink continuity indication transmitter 1140 may be configured as or otherwise support a means for transmitting an indication of a second uplink continuity classification of the set of uplink continuity classifications based on the feedback message, where the second uplink continuity classification is different from the uplink continuity classification. In some examples, the capability message receiver 1150 may be configured as or otherwise support a means for receiving a second UE capability message indicating a modified capability of the UE to maintain uplink continuity for uplink transmissions based on the feedback message, the modified capability of the UE based on the variation.
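The selection of an uplink enhancement procedure from the reported quality of uplink continuity can be sketched as a simple lookup. The function name, the class labels, and the window lengths are illustrative assumptions; the disclosure names reference-signal (e.g., DMRS) bundling as one such procedure but does not fix bundling window sizes or the number of classifications.

```python
# Hypothetical mapping from uplink continuity classification to a DMRS
# bundling window; class names and TTI counts are assumptions, not from
# the patent, which leaves both unspecified.
def select_uplink_enhancement(continuity_class):
    """Choose a reference-signal bundling window (in TTIs) from the
    quality of uplink continuity; a window of 1 disables bundling."""
    windows = {"high": 8, "medium": 4, "low": 1}
    # Unknown or unreported classifications conservatively disable bundling.
    return windows.get(continuity_class, 1)
```

A base station following this sketch would bundle DMRS over eight TTIs for a UE reporting (and observed to maintain) high uplink continuity, and fall back to per-TTI channel estimation otherwise.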
In some examples, to support transmitting the feedback message, the feedback message transmitter 1130 may be configured as or otherwise support a means for transmitting the feedback message via RRC signaling, a MAC-CE, DCI, or a combination thereof. In some examples, the one or more uplink parameters includes a phase, an amplitude, a timing advance, or any combination thereof associated with the set of multiple uplink messages. FIG. 12 shows a diagram of a system 1200 including a device 1205 that supports network feedback for uplink continuity in accordance with aspects of the present disclosure. The device 1205 may be an example of or include the components of a device 905, a device 1005, or a base station 105 as described herein. The device 1205 may communicate wirelessly with one or more base stations 105, UEs 115, or any combination thereof. The device 1205 may include components for bi-directional voice and data communications including components for transmitting and receiving communications, such as a communications manager 1220, a network communications manager 1210, a transceiver 1215, an antenna 1225, a memory 1230, code 1235, a processor 1240, and an inter-station communications manager 1245. These components may be in electronic communication or otherwise coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more buses (e.g., a bus 1250). The network communications manager 1210 may manage communications with a core network 130 (e.g., via one or more wired backhaul links). For example, the network communications manager 1210 may manage the transfer of data communications for client devices, such as one or more UEs 115. In some cases, the device 1205 may include a single antenna 1225. However, in some other cases the device 1205 may have more than one antenna 1225, which may be capable of concurrently transmitting or receiving multiple wireless transmissions.
The transceiver 1215 may communicate bi-directionally, via the one or more antennas 1225, wired, or wireless links as described herein. For example, the transceiver 1215 may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver. The transceiver 1215 may also include a modem to modulate the packets, to provide the modulated packets to one or more antennas 1225 for transmission, and to demodulate packets received from the one or more antennas 1225. The transceiver 1215, or the transceiver 1215 and one or more antennas 1225, may be an example of a transmitter 915, a transmitter 1015, a receiver 910, a receiver 1010, or any combination thereof or component thereof, as described herein. The memory 1230 may include RAM and ROM. The memory 1230 may store computer-readable, computer-executable code 1235 including instructions that, when executed by the processor 1240, cause the device 1205 to perform various functions described herein. The code 1235 may be stored in a non-transitory computer-readable medium such as system memory or another type of memory. In some cases, the code 1235 may not be directly executable by the processor 1240 but may cause a computer (e.g., when compiled and executed) to perform functions described herein. In some cases, the memory 1230 may contain, among other things, a BIOS which may control basic hardware or software operation such as the interaction with peripheral components or devices. The processor 1240 may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, the processor 1240 may be configured to operate a memory array using a memory controller. In some other cases, a memory controller may be integrated into the processor 1240.
The processor 1240 may be configured to execute computer-readable instructions stored in a memory (e.g., the memory 1230) to cause the device 1205 to perform various functions (e.g., functions or tasks supporting network feedback for uplink continuity). For example, the device 1205 or a component of the device 1205 may include a processor 1240 and memory 1230 coupled to the processor 1240, the processor 1240 and memory 1230 configured to perform various functions described herein. The inter-station communications manager 1245 may manage communications with other base stations 105, and may include a controller or scheduler for controlling communications with UEs 115 in cooperation with other base stations 105. For example, the inter-station communications manager 1245 may coordinate scheduling for transmissions to UEs 115 for various interference mitigation techniques such as beamforming or joint transmission. In some examples, the inter-station communications manager 1245 may provide an X2 interface within an LTE/LTE-A wireless communications network technology to provide communication between base stations 105. The communications manager 1220 may support wireless communications at a base station in accordance with examples as disclosed herein. For example, the communications manager 1220 may be configured as or otherwise support a means for receiving, from a UE, a set of multiple uplink messages over a set of TTIs. The communications manager 1220 may be configured as or otherwise support a means for transmitting, to the UE, a feedback message indicating a quality of uplink continuity of the UE based on the set of multiple uplink messages, where the uplink continuity corresponds to a variation of one or more uplink parameters associated with the set of multiple uplink messages over the set of TTIs. The communications manager 1220 may be configured as or otherwise support a means for communicating with the UE based on the feedback message.
By including or configuring the communications manager 1220 in accordance with examples as described herein, the device 1205 may support techniques for reducing latency, reducing signaling overhead, and increasing network efficiency by reducing occurrences of inaccurate channel estimation. Specifically, a feedback message may correct mismatches between an indicated uplink continuity and observed uplink continuity. By deconflicting a quality of uplink continuity between two or more wireless devices, joint channel estimation may be performed, which may result in more efficient network operations, more efficient utilization of power and network resources, and reduced latency. In some examples, the communications manager 1220 may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the transceiver 1215, the one or more antennas 1225, or any combination thereof. Although the communications manager 1220 is illustrated as a separate component, in some examples, one or more functions described with reference to the communications manager 1220 may be supported by or performed by the processor 1240, the memory 1230, the code 1235, or any combination thereof. For example, the code 1235 may include instructions executable by the processor 1240 to cause the device 1205 to perform various aspects of network feedback for uplink continuity as described herein, or the processor 1240 and the memory 1230 may be otherwise configured to perform or support such operations. FIG. 13 shows a flowchart illustrating a method 1300 that supports network feedback for uplink continuity in accordance with aspects of the present disclosure. The operations of the method 1300 may be implemented by a UE or its components as described herein. For example, the operations of the method 1300 may be performed by a UE 115 as described with reference to FIGS. 1 through 8.
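The joint channel estimation benefit mentioned above can be sketched as follows: once the feedback confirms uplink continuity, per-TTI reference-signal channel estimates may be combined (here, simply averaged) rather than used one at a time, reducing estimation noise. The averaging rule and the single-estimate fallback are illustrative assumptions, not the disclosed procedure.

```python
def bundled_channel_estimate(per_tti_estimates, continuity_confirmed):
    """Bundled (joint) channel estimation sketch. When the network
    feedback confirms uplink continuity, average the per-TTI channel
    estimates; otherwise trust only the most recent estimate, since
    phase/amplitude may have drifted between TTIs."""
    if continuity_confirmed:
        return sum(per_tti_estimates) / len(per_tti_estimates)
    return per_tti_estimates[-1]
```

Averaging only makes sense when the underlying channel (and the transmitter's phase and amplitude) stayed coherent across the TTIs, which is exactly the condition the feedback message helps the two ends agree on.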
In some examples, a UE may execute a set of instructions to control the functional elements of the UE to perform the described functions. Additionally, or alternatively, the UE may perform aspects of the described functions using special-purpose hardware. At 1305, the method may include transmitting a set of multiple uplink messages to a base station over a set of TTIs. The operations of 1305 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1305 may be performed by an uplink message transmitter 725 as described with reference to FIG. 7. At 1310, the method may include receiving, from the base station, a feedback message indicating a quality of uplink continuity of the UE based on the set of multiple uplink messages, where the uplink continuity corresponds to a variation of one or more uplink parameters associated with the set of multiple uplink messages over the set of TTIs. The operations of 1310 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1310 may be performed by a feedback message receiver 730 as described with reference to FIG. 7. At 1315, the method may include communicating with the base station based on the feedback message. The operations of 1315 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1315 may be performed by a communication component 735 as described with reference to FIG. 7. FIG. 14 shows a flowchart illustrating a method 1400 that supports network feedback for uplink continuity in accordance with aspects of the present disclosure. The operations of the method 1400 may be implemented by a UE or its components as described herein. For example, the operations of the method 1400 may be performed by a UE 115 as described with reference to FIGS. 1 through 8. In some examples, a UE may execute a set of instructions to control the functional elements of the UE to perform the described functions.
Additionally, or alternatively, the UE may perform aspects of the described functions using special-purpose hardware. At 1405, the method may include transmitting a set of multiple uplink messages to a base station over a set of TTIs. The operations of 1405 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1405 may be performed by an uplink message transmitter 725 as described with reference to FIG. 7. At 1410, the method may include receiving, from the base station, a feedback message indicating a quality of uplink continuity of the UE based on the set of multiple uplink messages, where the uplink continuity corresponds to a variation of one or more uplink parameters associated with the set of multiple uplink messages over the set of TTIs. The operations of 1410 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1410 may be performed by a feedback message receiver 730 as described with reference to FIG. 7. At 1415, the method may include receiving an indication of a phase drift, an amplitude variation, a timing advance variation, or a combination thereof observed by the base station over the set of TTIs based on the set of multiple uplink messages. The operations of 1415 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1415 may be performed by an uplink continuity indication component 740 as described with reference to FIG. 7. At 1420, the method may include communicating with the base station based on the feedback message. The operations of 1420 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1420 may be performed by a communication component 735 as described with reference to FIG. 7. FIG. 15 shows a flowchart illustrating a method 1500 that supports network feedback for uplink continuity in accordance with aspects of the present disclosure.
The operations of the method 1500 may be implemented by a base station or its components as described herein. For example, the operations of the method 1500 may be performed by a base station 105 as described with reference to FIGS. 1 through 4 and 9 through 12. In some examples, a base station may execute a set of instructions to control the functional elements of the base station to perform the described functions. Additionally, or alternatively, the base station may perform aspects of the described functions using special-purpose hardware. At 1505, the method may include receiving, from a UE, a set of multiple uplink messages over a set of TTIs. The operations of 1505 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1505 may be performed by an uplink message receiver 1125 as described with reference to FIG. 11. At 1510, the method may include transmitting, to the UE, a feedback message indicating a quality of uplink continuity of the UE based on the set of multiple uplink messages, where the uplink continuity corresponds to a variation of one or more uplink parameters associated with the set of multiple uplink messages over the set of TTIs. The operations of 1510 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1510 may be performed by a feedback message transmitter 1130 as described with reference to FIG. 11. At 1515, the method may include communicating with the UE based on the feedback message. The operations of 1515 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1515 may be performed by a communication component 1135 as described with reference to FIG. 11. FIG. 16 shows a flowchart illustrating a method 1600 that supports network feedback for uplink continuity in accordance with aspects of the present disclosure.
The operations of the method 1600 may be implemented by a base station or its components as described herein. For example, the operations of the method 1600 may be performed by a base station 105 as described with reference to FIGS. 1 through 4 and 9 through 12. In some examples, a base station may execute a set of instructions to control the functional elements of the base station to perform the described functions. Additionally, or alternatively, the base station may perform aspects of the described functions using special-purpose hardware. At 1605, the method may include receiving, from a UE, a set of multiple uplink messages over a set of TTIs. The operations of 1605 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1605 may be performed by an uplink message receiver 1125 as described with reference to FIG. 11. At 1610, the method may include transmitting, to the UE, a feedback message indicating a quality of uplink continuity of the UE based on the set of multiple uplink messages, where the uplink continuity corresponds to a variation of one or more uplink parameters associated with the set of multiple uplink messages over the set of TTIs. The operations of 1610 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1610 may be performed by a feedback message transmitter 1130 as described with reference to FIG. 11. At 1615, the method may include communicating with the UE based on the feedback message. The operations of 1615 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1615 may be performed by a communication component 1135 as described with reference to FIG. 11. At 1620, the method may include receiving a UE capability message from the UE, the UE capability message indicating a capability of the UE to maintain uplink continuity for uplink transmissions.
The operations of 1620 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1620 may be performed by a capability message receiver 1150 as described with reference to FIG. 11. At 1625, the method may include receiving a second UE capability message indicating a modified capability of the UE to maintain uplink continuity for uplink transmissions based on the feedback message, the modified capability of the UE based on the variation. The operations of 1625 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1625 may be performed by a capability message receiver 1150 as described with reference to FIG. 11. The following provides an overview of aspects of the present disclosure:

Aspect 1: A method for wireless communications at a UE, comprising: transmitting a plurality of uplink messages to a base station over a set of transmission time intervals; receiving, from the base station, a feedback message indicating a quality of uplink continuity of the UE based at least in part on the plurality of uplink messages, wherein the uplink continuity corresponds to a variation of one or more uplink parameters associated with the plurality of uplink messages over the set of transmission time intervals; and communicating with the base station based at least in part on the feedback message.

Aspect 2: The method of aspect 1, wherein receiving the feedback message comprises: receiving an indication of a phase drift, an amplitude variation, a timing advance variation, or a combination thereof observed by the base station over the set of transmission time intervals based at least in part on the plurality of uplink messages.

Aspect 3: The method of any of aspects 1 through 2, wherein receiving the feedback message comprises: receiving an indication of a time length associated with uplink continuity of the UE observed by the base station over the set of transmission time intervals based at least in part on the plurality of uplink messages.

Aspect 4: The method of any of aspects 1 through 3, wherein receiving the feedback message comprises: receiving an indication of presence or absence of phase continuity, amplitude continuity, or both observed by the base station over the set of transmission time intervals based at least in part on the plurality of uplink messages.

Aspect 5: The method of any of aspects 1 through 4, wherein receiving the feedback message comprises: receiving downlink control information comprising a set of bits indicating a phase drift value, an amplitude variation value, a timing advance variation value, a length of time of uplink continuity, or any combination thereof.

Aspect 6: The method of any of aspects 1 through 5, further comprising: transmitting a UE capability message to the base station, the UE capability message indicating a capability of the UE to maintain uplink continuity for uplink transmissions.

Aspect 7: The method of aspect 6, wherein the capability of the UE corresponds to an uplink continuity classification of a set of uplink continuity classifications associated with uplink continuity for the UE.

Aspect 8: The method of any of aspects 1 through 7, wherein communicating with the base station comprises: modifying the capability of the UE to maintain uplink continuity for uplink transmissions based at least in part on the feedback message from the base station; and transmitting a UE capability message indicating the modified capability of the UE to maintain uplink continuity for uplink transmissions.

Aspect 9: The method of any of aspects 1 through 8, wherein receiving the feedback message comprises: receiving the feedback message via RRC signaling, a MAC-CE, DCI, or a combination thereof.
Aspect 10: The method of any of aspects 1 through 9, wherein the one or more uplink parameters comprises a phase, an amplitude, a timing advance, or any combination thereof associated with the plurality of uplink messages.

Aspect 11: A method for wireless communications at a base station, comprising: receiving, from a UE, a plurality of uplink messages over a set of transmission time intervals; transmitting, to the UE, a feedback message indicating a quality of uplink continuity of the UE based at least in part on the plurality of uplink messages, wherein the uplink continuity corresponds to a variation of one or more uplink parameters associated with the plurality of uplink messages over the set of transmission time intervals; and communicating with the UE based at least in part on the feedback message.

Aspect 12: The method of aspect 11, wherein transmitting the feedback message comprises: transmitting an indication of a phase drift, an amplitude variation, a timing advance variation, or a combination thereof observed by the base station over the set of transmission time intervals based at least in part on the plurality of uplink messages.

Aspect 13: The method of any of aspects 11 through 12, wherein transmitting the feedback message comprises: transmitting an indication of a time length associated with uplink continuity of the UE observed by the base station over the set of transmission time intervals based at least in part on the plurality of uplink messages.

Aspect 14: The method of any of aspects 11 through 13, wherein transmitting the feedback message comprises: transmitting an indication of presence or absence of phase continuity, amplitude continuity, or both observed by the base station over the set of transmission time intervals based at least in part on the plurality of uplink messages.

Aspect 15: The method of any of aspects 11 through 14, wherein transmitting the feedback message comprises: transmitting downlink control information comprising a set of bits indicating a phase drift value, an amplitude variation value, a timing advance variation value, a length of time of uplink continuity, or any combination thereof.

Aspect 16: The method of any of aspects 11 through 15, wherein communicating with the UE comprises: selecting an uplink enhancement procedure for communicating with the UE based at least in part on the quality of the uplink continuity of the UE.

Aspect 17: The method of aspect 16, wherein the uplink enhancement procedure comprises bundling of reference signals across one or more uplink transmissions.

Aspect 18: The method of any of aspects 11 through 17, further comprising: receiving a UE capability message from the UE, the UE capability message indicating a capability of the UE to maintain uplink continuity for uplink transmissions.

Aspect 19: The method of aspect 18, wherein the capability of the UE corresponds to an uplink continuity classification of a set of uplink continuity classifications associated with uplink continuity for the UE.

Aspect 20: The method of aspect 19, further comprising: transmitting an indication of a second uplink continuity classification of the set of uplink continuity classifications based at least in part on the feedback message, wherein the second uplink continuity classification is different from the uplink continuity classification.

Aspect 21: The method of any of aspects 18 through 20, further comprising: receiving a second UE capability message indicating a modified capability of the UE to maintain uplink continuity for uplink transmissions based at least in part on the feedback message, the modified capability of the UE based at least in part on the variation.

Aspect 22: The method of any of aspects 11 through 21, wherein transmitting the feedback message comprises: transmitting the feedback message via RRC signaling, a MAC-CE, DCI, or a combination thereof.
Aspect 23: The method of any of aspects 11 through 22, wherein the one or more uplink parameters comprises a phase, an amplitude, a timing advance, or any combination thereof associated with the plurality of uplink messages.

Aspect 24: An apparatus for wireless communications at a UE, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform a method of any of aspects 1 through 10.

Aspect 25: An apparatus for wireless communications at a UE, comprising at least one means for performing a method of any of aspects 1 through 10.

Aspect 26: A non-transitory computer-readable medium storing code for wireless communications at a UE, the code comprising instructions executable by a processor to perform a method of any of aspects 1 through 10.

Aspect 27: An apparatus for wireless communications at a base station, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform a method of any of aspects 11 through 23.

Aspect 28: An apparatus for wireless communications at a base station, comprising at least one means for performing a method of any of aspects 11 through 23.

Aspect 29: A non-transitory computer-readable medium storing code for wireless communications at a base station, the code comprising instructions executable by a processor to perform a method of any of aspects 11 through 23.

It should be noted that the methods described herein describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Further, aspects from two or more of the methods may be combined.
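Aspects 18 through 21 describe reconciling the continuity classification the UE reports in its capability message with what the base station actually observes. A minimal sketch of such a reclassification check might look like the following; the class thresholds, the use of phase drift as the sole criterion, and the function names are all hypothetical illustrations, not the disclosed classification scheme.

```python
THRESHOLDS = (0.05, 0.2, 0.5)  # radians of phase drift; illustrative class boundaries

def classify_continuity(observed_phase_drift):
    """Map an observed phase drift onto an ordered continuity class,
    where class 0 denotes the tightest continuity."""
    for cls, limit in enumerate(THRESHOLDS):
        if observed_phase_drift <= limit:
            return cls
    return len(THRESHOLDS)

def needs_reclassification(reported_class, observed_phase_drift):
    """True when the class the base station observes disagrees with the
    class the UE reported, i.e., when feedback indicating a second
    (different) continuity classification would be warranted."""
    return classify_continuity(observed_phase_drift) != reported_class
```

When the check fires, the base station could signal the second classification (Aspect 20), or the UE could respond with an updated capability message (Aspect 21); either way, both ends converge on a consistent view of the UE's continuity.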
Although aspects of an LTE, LTE-A, LTE-A Pro, or NR system may be described for purposes of example, and LTE, LTE-A, LTE-A Pro, or NR terminology may be used in much of the description, the techniques described herein are applicable beyond LTE, LTE-A, LTE-A Pro, or NR networks. For example, the described techniques may be applicable to various other wireless communications systems such as Ultra Mobile Broadband (UMB), Institute of Electrical and Electronics Engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDM, as well as other systems and radio technologies not explicitly mentioned herein. Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. The various illustrative blocks and components described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, a CPU, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. 
If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described herein may be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that may be accessed by a general-purpose or special-purpose computer. By way of example, and not limitation, non-transitory computer-readable media may include RAM, ROM, electrically erasable programmable ROM (EEPROM), flash memory, compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that may be used to carry or store desired program code means in the form of instructions or data structures and that may be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. 
For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of computer-readable medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media. As used herein, including in the claims, “or” as used in a list of items (e.g., a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an example step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” The term “determine” or “determining” encompasses a wide variety of actions and, therefore, “determining” can include calculating, computing, processing, deriving, investigating, looking up (such as via looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” can include receiving (such as receiving information), accessing (such as accessing data in a memory) and the like. 
Also, “determining” can include resolving, selecting, choosing, establishing and other such similar actions. In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label, or other subsequent reference label. The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “example” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details for the purpose of providing an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples. The description herein is provided to enable a person having ordinary skill in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to a person having ordinary skill in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
11943071
DETAILED DESCRIPTION Example embodiments are provided so that this disclosure will be thorough, and will fully convey the scope to those who are skilled in the art. Numerous specific details are set forth such as examples of specific components, devices, and methods, to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that specific details need not be employed, that example embodiments may be embodied in many different forms and that neither should be construed to limit the scope of the disclosure. In some example embodiments, well-known processes, well-known device structures, and well-known technologies are not described in detail. The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms “comprises,” “comprising,” “including,” and “having,” are inclusive and therefore specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order discussed or illustrated, unless specifically identified as an order of performance. It is also to be understood that additional or alternative steps may be employed. Although the terms first, second, third, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. 
These terms may be only used to distinguish one element, component, region, layer or section from another region, layer or section. Terms such as “first,” “second,” and other numerical terms when used herein do not imply a sequence or order unless clearly indicated by the context. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the example embodiments. Spatially relative terms, such as “inner,” “outer,” “beneath,” “below,” “lower,” “above,” “upper,” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. Spatially relative terms may be intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below” or “beneath” other elements or features would then be oriented “above” the other elements or features. Thus, the example term “below” can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly. Example embodiments will now be described more fully with reference to the accompanying drawings. In some cases, personal information (e.g., an individual calendar, email, contacts, etc.) on a mobile computing device that a user brings into a videoconferencing room may be different than information that is available to videoconferencing components in the room (e.g., a videoconferencing codec, etc.). 
This may create a burdensome situation for the user in setting up a conference call based on the user's personal information, because the user may have to locate conference call information on the mobile computing device, manually enter conference call information in the videoconferencing codec, etc. In some embodiments of the present disclosure, a videoconferencing system may allow for joining a scheduled conference call using simple user input from a mobile computing device, and the same user interface may be used for multiple videoconferencing rooms, videoconferencing components, etc. For example, the user may open a control application on the mobile device and initiate a scheduled conference call by clicking a button, etc. Videoconferencing call control, information, etc. may be handled in the background with respect to the user. The user may initiate the scheduled conference call without having to locate a dialer for a meeting, without having to dial a phone number for the conferencing call or input an ID or PIN for the meeting, etc. A videoconferencing system according to one example embodiment of the present disclosure is illustrated inFIG.1, and indicated generally by reference number100. As shown inFIG.1, the system100includes a server102, a videoconferencing codec104, and a system controller106in communication with the videoconferencing codec104via one or more wired and/or wireless communication networks112to control the videoconferencing codec104. The system controller106is also in communication with the server102via the one or more wired and/or wireless communication networks112. The system100further includes a mobile computing device108, such as a smartphone, a tablet computer, a laptop computer, etc. AlthoughFIG.1illustrates the mobile computing device108as a smartphone, other mobile computing devices can be used similarly. The mobile computing device108includes a user interface (e.g., a keypad, a touchscreen, etc.) 
to receive input from a user110, and the mobile computing device108is adapted to communicate with the server102via the one or more wired and/or wireless communication networks112. The mobile computing device108is configured to parse information (e.g., calendar events, emails, etc.) stored on the mobile computing device108according to a parsing protocol to obtain information regarding a scheduled conference call, and to transmit at least a portion of the information regarding the scheduled conference call to the server102in response to user input. The server102is configured to transmit the information regarding the scheduled conference call to the system controller106, and the system controller106is configured to initiate the scheduled conference call via the videoconferencing codec104. The mobile computing device108may include a control application such as a smartphone app, a computer program, etc., which may be stored in a memory of the mobile computing device108. AlthoughFIG.1illustrates a smartphone108including a smartphone app, other control applications can be used similarly. One example smartphone app is ROOMREADY CONNECT distributed by Zdi, Inc. of Normal, IL. The smartphone app may use one or more libraries to pull information (e.g., calendar event data, emails, etc.) from the smartphone108, and parse the information to obtain details regarding a scheduled conference call. For example, calendar invites, email invites, etc. from videoconferencing services typically have a standardized format that may easily be gathered and processed by the smartphone app. For example, calendar events, emails, etc., stored on the smartphone108may be parsed by the smartphone app to search for indicators of a scheduled conference call. In some embodiments, the smartphone app may search multiple stored calendar events for one or more text and/or numeric strings indicative of the scheduled conference call. 
For example, a calendar event that includes a session initiation protocol (SIP) uniform resource identifier (URI), a videoconferencing phone number, a videoconferencing meeting number, a personal identification (PIN) number, a passcode, etc., may be indicative of a scheduled conference call. The smartphone app may search multiple stored calendar events, emails, etc. to identify text and/or numeric strings including a SIP URI, videoconferencing phone number, meeting number, PIN number, passcode, etc., to identify an upcoming scheduled conference call. In some cases, videoconferencing providers may use a standard textual format for invites to conference calls, and the smartphone app may search the multiple calendar events to identify formatted text that corresponds to a specific videoconferencing provider invite, which indicates an upcoming scheduled conference call. An example calendar invite216is illustrated inFIG.2. The calendar invite includes a body218including details of a scheduled conference call. The details may be provided in a text format from an application programming interface (API) of a calendar application (e.g., MICROSOFT Office 365, GOOGLE calendar, etc.). The calendar invite216may be parsed by the smartphone app to search for specified profiles, formats, etc. indicative of scheduled conference calls. InFIG.2, a keyword220and222in the body218indicate a WEBEX conference call is scheduled for a “video conferencing system.” In response to matching the keywords220and222, the smartphone app may search for “sip:[email protected]” at224, to determine information needed to initiate a call at the videoconferencing codec104. The smartphone app may also search for an access code at226, and transmit the obtained access code to the videoconferencing codec104via the server102to initiate the conference call. 
AlthoughFIG.2illustrates a WEBEX example, the smartphone app may similarly search calendar invites for conference calls scheduled by other videoconferencing providers, in other invite scheduling formats, etc. The smartphone app may parse calendar events within a defined time period to search for scheduled conference calls. For example, the smartphone app could search calendar events within an hour from a current time, within a day, within a week, etc. Referring back toFIG.1, once the smartphone app obtains the information regarding the scheduled conference call, the smartphone108transmits (e.g., in response to user input) at least a portion of the obtained information regarding the scheduled conference call to the server102through the one or more wired and/or wireless communication networks112. For example, the communication network(s)112may include a WiFi network, a cellular network, a public communications network such as the Internet, a private proprietary network, network cables, etc. The server102may communicate with the system controller106and the smartphone108on the same network. The smartphone108may transmit the information regarding the scheduled conference call to the server102using a secure encrypted transport layer security (TLS) channel. For example, the smartphone app may organize the information regarding the scheduled conference call into a specified format (e.g., JavaScript Object Notation (JSON) format, etc.) for transmission to the server102. The transmitted information may not include personal data of the smartphone108, so personal data of the smartphone108may not be stored on the server102or the system controller106to provide privacy to a user of the smartphone108. Similarly, local videoconferencing room data of the system controller106may not be stored on the smartphone108, to protect privacy of local videoconferencing room data. 
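The JSON packaging step might look like the sketch below. The field names and the allow-list approach to excluding personal data are illustrative assumptions; the source specifies only that the payload is JSON-formatted and that personal data is not transmitted.

```python
import json

# Assumed field names; the allow-list keeps personal data on the phone,
# mirroring the privacy behavior described in the text.
ALLOWED_FIELDS = ("sip_uri", "access_code", "start_time")

def build_call_payload(call_info, room_id):
    """Serialize only call-routing fields for transmission to the server."""
    payload = {k: call_info[k] for k in ALLOWED_FIELDS if k in call_info}
    payload["room_id"] = room_id
    return json.dumps(payload, sort_keys=True)

payload = build_call_payload(
    {"sip_uri": "sip:[email protected]",
     "access_code": "123456789",
     "organizer_email": "[email protected]"},  # personal field, dropped
    room_id="room-12",
)
```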
Once the server102receives the information regarding the scheduled conference call from the smartphone108, the server may send some or all of the information to the system controller106. The server may redirect the information without any changes, or may revise, augment or update the information before sending to the system controller106. For example, the server102may redirect the information regarding the scheduled conference call received from the smartphone108to the system controller106without translating the information received from the smartphone108. AlthoughFIG.1illustrates the server102in communication between the smartphone108and the system controller106, in other embodiments the smartphone108could communicate directly with the system controller106without the server102. For example, the smartphone108could communicate with the system controller106via the one or more wired and/or wireless communication networks112to transmit information regarding a scheduled conference call directly to the system controller106to initiate the scheduled conference call. The server102may be a remote server based in a cloud (e.g., an AMAZON Web Services (AWS) cloud, etc.) at a separate location from the system controller106, such as a corporate data center server. Alternatively, or in addition, a server102may be located on the same premises (e.g., on-site) as the system controller106, etc. For example, the server102could be a gateway server that acts as a proxy to direct the scheduled conference call information to the correct system controller106. The system controller106controls the videoconferencing codec104to initiate the scheduled conference call according to the information received from the server102. For example, the system controller106may contain appropriate application programming interfaces (APIs) to control the videoconferencing codec104. One example system controller is the ROOMREADY GO controller distributed by Zdi, Inc. of Normal, IL. 
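The gateway-proxy role described above, directing an unmodified payload to the correct system controller, can be sketched as a simple routing function. The registry contents and controller URLs here are hypothetical.

```python
# Hypothetical mapping from room identifiers to system-controller
# endpoints; a real gateway would populate this from registration data.
CONTROLLER_REGISTRY = {
    "room-12": "https://10.0.0.12:8443",
    "room-30": "https://10.0.0.30:8443",
}

def route_payload(room_id, payload):
    """Return (controller_url, payload) without translating the payload."""
    if room_id not in CONTROLLER_REGISTRY:
        raise LookupError("no system controller registered for " + room_id)
    return CONTROLLER_REGISTRY[room_id], payload
```

Because the payload is passed through untouched, the gateway stays agnostic to the videoconferencing provider, matching the redirect-without-translation behavior described in the text.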
The system controller106may call a conference phone number according to the scheduled conference call information received from the server102, according to a session initiation protocol (SIP) uniform resource identifier (URI) associated with the scheduled conference call information, a dial-in phone number associated with the scheduled conference call information, etc. For example, the system controller106may transmit commands to the videoconferencing codec104using a local secure shell (SSH) internet protocol (IP) connection from the system controller106to the videoconferencing codec104, using an API of the videoconferencing codec104, etc. In some embodiments, the smartphone108may display a join button328on the user interface330according to the obtained information regarding the scheduled conference call, as shown inFIG.3. The smartphone108may then transmit at least a portion of the obtained information regarding the scheduled conference call to the server102in response to activation of the join button328by the user110. In some embodiments, the smartphone app determines a start time of the scheduled conference call according to the obtained information, and displays, highlights, etc. the join button328on the user interface330a defined time period before the determined start time of the scheduled conference call. As shown inFIG.3, the join button328may be highlighted five minutes prior to the scheduled conference call associated with the join button328, while another join button332is not highlighted because a current time is more than five minutes prior to the scheduled conference call associated with the join button332. For example, the smartphone app could display the join button328on the user interface330at least one minute before the scheduled conference call, at least five minutes before the scheduled conference call, at least one hour before the scheduled conference call, etc. 
Alternatively, or in addition, the smartphone app may display the join button328on the user interface330a defined time period after the determined start time of the scheduled conference call. For example, the smartphone app could display the join button328on the user interface330at least one minute after the scheduled conference call, at least five minutes after the scheduled conference call, at least one hour after the scheduled conference call, etc. The join button328may include any suitable icon, toggle, switch, field, etc., such as the highlighted green square on the user interface330inFIG.3. The join button328may be displayed on a touchscreen of the smartphone108, etc. In other embodiments, the smartphone108may not display a join button and the user may initiate the scheduled conference call by providing input other than activation of a join button. As described above, the system100may allow for joining a conference call using a simple (e.g., one-button) connection from the smartphone108. For example, as shown inFIG.3, the user110may simply enter a videoconferencing room, open the smartphone app on the smartphone108, and then press the join button328on the user interface330of the smartphone108to start a scheduled conference call. Videoconferencing call control, information, etc. may be handled in the background with respect to the user110. For example, a user may not have to locate a dialer for a meeting, may not have to dial a phone number for the conferencing service (e.g., using WEBEX, GOTOMEETING, ZOOM, BLUE JEAN NETWORKS, SKYPE, CISCO, etc.), may not have to input an ID or PIN for the meeting, etc. Referring back toFIG.1, the videoconferencing codec104may include a room-based videoconferencing codec, a mobile cart-based videoconferencing codec, a mobile phone-based videoconferencing codec, a mobile tablet-based videoconferencing codec, etc. 
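The before/after display windows described above can be sketched as a small state function. The window lengths mirror the example values in the text (five minutes before, one hour after) and are illustrative, not fixed requirements.

```python
from datetime import datetime, timedelta

# Sketch of the join-button display logic; the default windows follow the
# text's examples and would be configurable in practice.
def join_button_state(now, start, before=timedelta(minutes=5),
                      after=timedelta(hours=1)):
    """Return 'highlighted', 'shown', or 'hidden' for one scheduled call."""
    if start - before <= now <= start + after:
        return "highlighted"    # within the active join window
    if now < start:
        return "shown"          # upcoming call, not yet highlighted
    return "hidden"             # the join window has passed

start = datetime(2024, 1, 8, 10, 0)  # hypothetical 10:00 call
```

This matches the behavior inFIG.3, where one join button is highlighted within five minutes of its call while a later call's button remains unhighlighted.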
The videoconferencing codec includes a coder/decoder for audio and for video, and may be used for a conference call including both audio and video, for a call involving only audio if desired, etc. Because the smartphone app parses information (e.g., calendar events, emails, etc.) stored on the smartphone108to obtain information regarding the scheduled conference call, the system100is not limited to any specific proprietary hardware and software products and/or services (e.g., CISCO, etc.). Therefore, the system100may include any suitable videoconferencing codec104, may operate with any suitable videoconferencing service provider, etc. A public communications network such as the Internet may be used instead of, or in addition to, a proprietary cloud. Preferably, there is no need to register the smartphone app of the smartphone108with the same cloud-based service (e.g., CISCO SPARK) as the videoconferencing codec104. Therefore, room-based conferencing components may not have to be registered to a proprietary cloud-based solution, and may not have to use the same proprietary hardware and software. This may allow the smartphone108to operate with a variety of different videoconferencing codecs, a variety of different videoconferencing service providers, etc. Obtaining scheduled conference call information from a personal smartphone108as the “source-of-truth” for initiating the conferencing call may eliminate the need to store scheduled conference call information in a central management server, etc. Therefore, in some embodiments there may not be a need to connect or integrate with calendars, an account, etc., outside of the smartphone108. Information that is required for the conference call such as a conference pin, login credentials, passcode, meeting number, etc., may be passed from the smartphone108to the server102, to the system controller106, to the videoconferencing codec104. 
In some embodiments, there is no requirement to schedule a videoconferencing room through an infrastructure (e.g., Telepresence Management Suite (TMS), etc.) that is connected to a scheduling system (e.g., OFFICE 365, EXCHANGE, GOOGLE Calendar, etc.). Therefore, no integration may be required between a TMS software scheduling system and the videoconferencing codec104. As shown inFIG.1, the system100includes an optional videoconferencing component114. The system controller106is in communication with the videoconferencing component114via the one or more wired and/or wireless communication networks112to control the videoconferencing component114. The videoconferencing component114may include any suitable device for a videoconferencing system, such as a television, a monitor, an integrated computer display, a camera, an audio digital signal processor (DSP), a speaker, an audio-visual (AV) switch, and a cable box, etc. AlthoughFIG.1illustrates a single videoconferencing component114, other embodiments may include more than one videoconferencing component114, no videoconferencing components114other than the videoconferencing codec104, etc. In some embodiments, the smartphone108may only display the join button328, may only transmit the scheduled conferencing call information to the server102, etc. when the smartphone108is within a vicinity of the system controller106. For example, the smartphone app may require the smartphone108to be within a specified proximity threshold distance of the system controller106before displaying the join button328, before transmitting the scheduled conferencing call information to the server102, etc. The smartphone108may use any suitable technology to determine whether the system controller106is within a vicinity of the smartphone108. 
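One way the proximity-threshold requirement above might be evaluated is a beacon signal-strength check. This sketch uses a standard log-distance path-loss model; the calibration constants (RSSI at 1 m, path-loss exponent) and the room threshold are assumptions for illustration.

```python
# Log-distance path-loss model: estimate range from received signal
# strength (RSSI, in dBm) relative to a calibrated 1 m reference power.
def estimated_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Rough distance in meters from a beacon's received signal strength."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def in_room(rssi_dbm, threshold_m=8.0):
    """True when the room beacon's signal places the phone inside the room."""
    return estimated_distance_m(rssi_dbm) <= threshold_m
```

In practice such estimates are noisy, which is why the text also mentions GPS comparison and wireless access point heat maps as alternative proximity signals.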
For example, the smartphone108may determine signal strengths of beacons that each have an identifier associated with a unique system controller106or videoconferencing room housing the system controller106at a location of the beacon, the smartphone108may compare its global positioning system (GPS) location with a GPS location of the videoconferencing room, the smartphone108may compare received signal strengths of wireless access points with a predetermined heat map of wireless access point channels, etc. As shown inFIG.3, a "Leave Room" button334may be displayed on the user interface330of the smartphone108to disconnect the initiated conference call, or the initiated conference call may be automatically disconnected when the smartphone108leaves the physical proximity of the conferencing room (e.g., leaves a proximity distance threshold of a beacon associated with the system controller106, etc.). The system controller106may disconnect communication with the smartphone108via the server102and shut down when the conferencing room is determined to be vacant. The system controllers, mobile computing devices, servers, etc. described herein may be configured to perform operations using any suitable combination of hardware and software. For example, the system controllers, mobile computing devices, servers, etc., may include any suitable circuitry, logic gates, microprocessor(s), computer-executable instructions stored in memory, etc. operable to cause the system controllers, mobile computing devices, servers, etc., to perform actions described herein. According to another example embodiment, a mobile computing device is adapted to communicate with a server via the one or more wired and/or wireless communication networks, and the mobile computing device includes a user interface. 
The mobile device is configured to parse information stored on the mobile computing device according to a parsing protocol to obtain information regarding a scheduled conference call, and to transmit at least a portion of the information regarding the scheduled conference call to the server in response to user input, to transmit the information regarding the scheduled conference call from the server to a system controller to initiate the scheduled conference call via a videoconferencing codec. According to a further example embodiment, a method of initiating a videoconferencing call is disclosed. The method includes parsing information stored on a mobile computing device according to a parsing protocol to obtain information regarding a scheduled conference call, and transmitting at least a portion of the information regarding the scheduled conference call to a server in response to user input, to transmit the information regarding the scheduled conference call from the server to a system controller to initiate the scheduled conference call via a videoconferencing codec. According to yet another example embodiment, a system controller is configured to receive information regarding a scheduled conference call from a mobile computing device via a server, the mobile computing device parsing information stored on the mobile computing device according to a parsing protocol to obtain the information regarding the scheduled conference call. The system controller is configured to control a videoconferencing codec to initiate the scheduled conference call according to the received information. According to a further example embodiment, a videoconferencing system includes a server, a videoconferencing codec, and a system controller in communication with the videoconferencing codec via one or more wired and/or wireless communication networks to control the videoconferencing codec. 
The system controller is in communication with the server via the one or more wired and/or wireless communication networks. The system also includes a mobile computing device adapted to communicate with the server via the one or more wired and/or wireless communication networks. The mobile computing device includes a user interface. The mobile device is configured to receive information regarding a conference call from user input, and to transmit at least a portion of the information regarding the conference call to the server. The server is configured to transmit the information regarding the conference call to the system controller, and the system controller is configured to initiate the scheduled conference call via the videoconferencing codec. The mobile device may receive and transmit the information regarding the conference call without parsing information stored on the mobile computing device. The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.
11943072
DETAILED DESCRIPTION A virtual conferencing system provides for the reception and transmission of audio and video data between devices, for communication between device users in real-time. A virtual conferencing system allows a user to design or select a virtual space with multiple rooms for real-time communication. Participants may switch between the different rooms of the virtual space, for example, to engage in different conversations, events, seminars, and the like. In some cases, participants within one room may wish to preview another room in real-time, without being required to switch to the other room. The disclosed embodiments provide for a room preview element for previewing a room. In particular, the room preview element displays a live preview of a second room as a window or frame within a first room. In addition to displaying the live preview, the virtual conferencing system provides audio output associated with the second room, at a reduced audio level relative to audio output associated with the first room. The live video-audio feed of the room preview element allows participants within the first room to preview or otherwise observe the second room, without requiring participants in the first room to navigate to the second room. FIG.1is a block diagram showing an example virtual conferencing system100for exchanging data over a network. The virtual conferencing system100includes multiple instances of a client device102, each of which hosts a number of applications, including a virtual conference client104and other application(s)106. Each virtual conference client104is communicatively coupled to other instances of the virtual conference client104(e.g., hosted on respective other client devices102), a virtual conference server system108and third-party servers110via a network112(e.g., the Internet). A virtual conference client104can also communicate with locally-hosted applications106using Applications Program Interfaces (APIs). 
The virtual conferencing system100provides for the reception and transmission of audio, video, image, text and/or other signals by user devices (e.g., at different locations), for communication between users in real-time. In some cases, two users may utilize virtual conferencing to communicate with each other in one-to-one communication at their respective devices. In other cases, multiway virtual conferencing may be utilized by more than two users to participate in a real-time, group conversation. Thus, multiple client devices102may participate in virtual conferencing, for example, with the client devices102participating in a group conversation in which audio-video content streams and/or message content (e.g., text, images) are transmitted between the participant devices. A virtual conference client104is able to communicate and exchange data with other virtual conference clients104and with the virtual conference server system108via the network112. The data exchanged between virtual conference clients104, and between a virtual conference client104and the virtual conference server system108, includes functions (e.g., commands to invoke functions) as well as payload data (e.g., video, audio, other multimedia data, text). The virtual conference server system108provides server-side functionality via the network112to a particular virtual conference client104. For example, with respect to transmitting audio and/or video streams, the virtual conference client104(e.g., installed on a first client device102) may facilitate in transmitting streaming content to the virtual conference server system108for subsequent receipt by other participant devices (e.g., one or more second client devices102) running respective instances of the virtual conference client104. 
The streaming content can correspond to audio and/or video content captured by sensors (e.g., microphones, video cameras) on the client devices102, for example, corresponding to real-time video and/or audio capture of the users (e.g., faces) and/or other sights and sounds captured by the respective device. The streaming content may be supplemented with other audio/visual data (e.g., animations, overlays, emoticons and the like) and/or message content (e.g., text, stickers, emojis, other image/video data), for example, in conjunction with extension applications and/or widgets associated with the virtual conference client104. While certain functions of the virtual conferencing system100are described herein as being performed by either a virtual conference client104or by the virtual conference server system108, the location of certain functionality either within the virtual conference client104or the virtual conference server system108may be a design choice. For example, it may be technically preferable to initially deploy certain technology and functionality within the virtual conference server system108but to later migrate this technology and functionality to the virtual conference client104where a client device102has sufficient processing capacity. The virtual conference server system108supports various services and operations that are provided to the virtual conference client104. Such operations include transmitting data to, receiving data from, and processing data generated by the virtual conference client104. This data may include the above-mentioned streaming content and/or message content, client device information, and social network information, as examples. Data exchanges within the virtual conferencing system100are invoked and controlled through functions available via user interfaces (UIs) of the virtual conference client104. 
Turning now specifically to the virtual conference server system108, an Application Program Interface (API) server114is coupled to, and provides a programmatic interface to, application servers118. The application servers118are communicatively coupled to a database server124, which facilitates access to a database126that stores data associated with virtual conference content processed by the application servers118. Similarly, a web server116is coupled to the application servers118, and provides web-based interfaces to the application servers118. To this end, the web server116processes incoming network requests over the Hypertext Transfer Protocol (HTTP) and several other related protocols. The Application Program Interface (API) server114receives and transmits virtual conference data (e.g., commands, audio/video payloads) between the client device102and the application servers118. Specifically, the Application Program Interface (API) server114provides a set of interfaces (e.g., routines and protocols) that can be called or queried by the virtual conference client104in order to invoke functionality of the application servers118. The Application Program Interface (API) server114exposes various functions supported by the application servers118, including account registration, login functionality, the streaming of audio and/or video content, and/or the sending and retrieval of message content, via the application servers118, from a particular virtual conference client104to another virtual conference client104, the retrieval of a list of contacts of a user of a client device102, the addition and deletion of users (e.g., contacts) to a user graph (e.g., a social graph), and opening an application event (e.g., relating to the virtual conference client104). The application servers118host a number of server applications and subsystems, including for example a virtual conference server120and a social network server122. 
The virtual conference server120implements a number of virtual conference processing technologies and functions, particularly related to the aggregation and other processing of content (e.g., streaming content) included in audio-video feeds received from multiple instances of the virtual conference client104. Other processor and memory intensive processing of data may also be performed server-side by the virtual conference server120, in view of the hardware requirements for such processing. The social network server122supports various social networking functions and services and makes these functions and services available to the virtual conference server120. To this end, the social network server122maintains and accesses a user graph304(as shown inFIG.3) within the database126. Examples of functions and services supported by the social network server122include the identification of other users of the virtual conferencing system100with which a particular user has relationships (e.g., contacts such as friends, colleagues, teachers, students, and the like). In one or more embodiments, a user interacting via the virtual conference client104running on a first client device102may select and invite participant(s) to a virtual conference. For example, the participants may be selected from contacts maintained by the social network server122. In another example, the participants may be selected from contacts included within a contact address book stored in association with the first client device102(e.g., in local memory or in a cloud-based user account). In another example, the participants may be selected by the user manually entering email addresses and/or phone numbers of the participants. 
The user at the first client device102may initiate the virtual conference by selecting an appropriate user interface element provided by the virtual conference client104, thereby prompting the invited participants, at their respective devices (e.g., one or more second client devices102), to accept or decline participation in the virtual conference. When the participant(s) have accepted the invitation (e.g., via the prompt), the virtual conference server system108may perform an initialization procedure in which session information is published between the participant client devices102, including the user who provided the invite. Each of the participant client devices102may provide respective session information to the virtual conference server system108, which in turn publishes the session information to the other participant client devices102. The session information for each client device102may include content stream(s) and/or message content that is made available by the client device102, together with respective identifiers for the content stream(s) and/or message content. As described below with respect toFIG.2, the virtual conference may correspond to a virtual space which includes one or more rooms (e.g., virtual rooms). The virtual space and its corresponding rooms may have been created at least in part by the inviting user and/or by other users. In this manner, an end user may act as an administrator, who creates their own virtual spaces with rooms, and/or designs a virtual space based on preset available rooms. FIG.2is a block diagram illustrating further details regarding the virtual conferencing system100, according to some examples. Specifically, the virtual conferencing system100is shown to comprise the virtual conference client104and the application servers118. 
The virtual conferencing system100embodies a number of subsystems, which are supported on the client-side by the virtual conference client104and on the server-side by the application servers118. These subsystems include, for example, a virtual space creation system202which implements a virtual space design interface204, and a virtual space participation system206which implements a virtual space navigation interface208. The virtual space creation system202provides for a user to design one or more virtual space(s) in which participants may engage in virtual conferencing. In one or more embodiments, a virtual space corresponds to an environment with one or more rooms configured to accommodate virtual conferencing. The virtual space may be created and/or selected (e.g., from among a set of predefined virtual spaces with rooms) by an end user who wishes to invite other users for virtual conferencing. In addition, the individual rooms of a virtual space may be newly-created and/or selected (e.g., from among a set of predefined rooms) by the end user. In one or more embodiments, the virtual space creation system202includes a virtual space design interface204, which is usable by the end user to design a virtual space, including creating and/or selecting rooms for including in the virtual space. As discussed below with respect toFIG.4, the virtual space design interface204enables an end user (e.g., acting as an administrator) to select and/or position multiple elements within a room. 
Examples of elements include, but are not limited to, participant video elements (e.g., for displaying the respective video feeds of participants), chat interfaces (e.g., for participants to provide text-based messages, stickers and/or reactions within a room), breakout buttons (e.g., for shuffling from a first room to one or more second rooms), and/or other user-definable elements for performing certain actions (e.g., speaking into a virtual microphone, querying an administrator via a button, and the like). The virtual space participation system206is configured to perform virtual conferencing among participants within a virtual space. The participants may include the end user (e.g., administrator) who created the virtual space, as well as those users who were invited to participate in virtual conferencing with respect to the virtual space created/selected by the end user. The virtual space participation system206includes a virtual space navigation interface208(e.g., discussed below with respect toFIG.5) that allows participants to navigate between the rooms of a virtual space, and to participate in virtual conferencing with respect to the rooms. In one or more embodiments, the virtual space creation system202and the virtual space participation system206provide for an end user (e.g., an administrator) to create different types of environments (e.g., virtual spaces with rooms) for virtual conferencing, and for participants to engage in virtual conferencing within such environments. Examples of such virtual conferencing include, but are not limited to: business meetings, seminars, presentations, classroom lectures, teacher office hours, concerts, reunions, virtual dinners, escape rooms, and the like. FIG.3is a schematic diagram illustrating data structures300, which may be stored in the database126of the virtual conference server system108, according to certain examples. 
While the content of the database126is shown to comprise a number of tables, it will be appreciated that the data could be stored in other types of data structures (e.g., as an object-oriented database). The database126includes profile data302, a user graph304and a user table306relating to the users (participants) of the virtual conferencing system100. The user table306stores user data, and is linked (e.g., referentially) to the user graph304and the profile data302. Each user of the virtual conferencing system100is associated with a unique identifier (email address, telephone number, social network identifier, etc.). The user graph304stores (e.g., in conjunction with the social network server122) information regarding relationships and associations between users. Such relationships may be social, professional (e.g., work at a common corporation or organization), interest-based or activity-based, merely for example. As noted above, the user graph304may be maintained and accessed at least in part by the social network server122. The profile data302stores multiple types of profile data about a particular user. The profile data302may be selectively used and presented to other users of the virtual conferencing system100, based on privacy settings specified by a particular user. The profile data302includes, for example, a user name, telephone number, email address, and/or settings (e.g., notification and privacy settings), as well as a user-selected avatar representation. The database126further includes a virtual spaces table308. As noted above, a virtual space corresponds to an environment with one or more rooms configured to accommodate virtual conferencing. A virtual space may be newly-created by a user, or may be included within one or more sets of public virtual spaces made available (e.g., by other users, system administrators, and the like) for virtual conferencing. 
The virtual spaces table308stores information representing the one or more sets of public virtual spaces, as well as any private virtual space(s) created by a user (e.g., in a case where the particular user did not make such virtual space(s) public). In one or more embodiments, the virtual spaces table308stores associations between its virtual spaces and users (e.g., within the user table306) who selected those virtual spaces. In this manner, it is possible for a particular user to have one or more virtual spaces associated therewith. Moreover, the database126includes a rooms table310which may be associated with the virtual spaces within the virtual spaces table308. As noted above, a room may be newly-created by a user, or may be included within one or more sets (e.g., galleries) of public rooms made available for user selection. The rooms table310stores information representing the one or more sets of rooms, as well as any private rooms(s) created by the user (e.g., in a case where the particular user did not make such room(s) public). The stored information is usable by the virtual conferencing system100to create the corresponding rooms for use in a virtual space. In one or more embodiments, the stored information may further include recordings (e.g., audio and/or video recordings) of a particular virtual conference, for subsequent playback by corresponding participants. FIG.4illustrates a virtual space design interface204with interface elements for designing a virtual space, in accordance with some example embodiments. Designing the virtual space may include creation and/or selection of rooms for including in the virtual space. The virtual space design interface204includes a menu interface402, a room elements interface404, an element properties interface406, a controls interface408, a room list interface410, a room canvas interface412, and an administrator name414. 
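The relational layout described above, in which users are linked referentially to the user graph304and profile data302, and in which virtual spaces are associated with rooms and with the users who selected them, may be sketched with simple data structures. The field names below are assumptions for illustration only and do not reflect the actual schema of the database126.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the tables described for the database:
# a user record (user table + profile data + user graph edges),
# rooms, and virtual spaces that reference both.
@dataclass
class UserRecord:
    user_id: str                                # unique identifier, e.g., an email address
    profile: dict                               # profile data: name, settings, avatar, etc.
    contacts: set = field(default_factory=set)  # relationships from the user graph

@dataclass
class Room:
    room_id: str
    is_public: bool = False                     # public gallery room vs. private room

@dataclass
class VirtualSpace:
    space_id: str
    owner_id: str                               # association back to the user table
    rooms: list = field(default_factory=list)   # rooms included in this space

alice = UserRecord("alice@example.com", {"name": "Alice"})
alice.contacts.add("bob@example.com")
space = VirtualSpace("space-1", alice.user_id,
                     rooms=[Room("lobby", is_public=True)])
print(len(space.rooms))
```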
It is noted that elements402-414correspond to an example of interface elements for the virtual space design interface204, and that additional, fewer and/or different interface elements may be used. An administrator (e.g., corresponding to administrator name414) may use the various interface elements to design a virtual space. In one or more embodiments, the menu interface402includes user-selectable categories (e.g., menu headings) relating to a virtual space (e.g., “workspace”), rooms within the virtual space, and/or elements within a room. For example, the workspace category is user-selectable for presenting options (e.g., via a drop-down list) to manage settings for the virtual space, manage invites for the virtual space, manage versions of a virtual space, publish the virtual space (e.g., for future use by users), manage virtual space publications, and/or to start/manage recordings (e.g., audio and/or video recordings) with respect to the virtual space. The room category of the menu interface402is user-selectable for presenting options (e.g., via a drop-down list) to manage settings for a room within the virtual space, set a room background, set an order for the rooms listed in the room list interface410, create a new room, import a room from a set of available rooms, remove a room, publish a room, manage room publications, and/or to start/manage recordings with respect to a room. In addition, the element category is user-selectable for presenting options (e.g., via a drop-down list) to insert elements into a room, insert shapes into a room, foreground/background elements, arrange/position elements, and/or group elements. 
Examples of elements include, but are not limited to: an action button, analog clock, audience question board, backpack item, breakout button, chat, closed caption display, closed caption input, countdown, clock, digital clock, doorbell, double-sided image, feedback, image, multiuser video chat, music, participant audio mixer, participant count, participant video, picture strip, poll, random source, room preview, scheduled time, sound effect, stopwatch, take picture, text, timer, user search, video, waiting list, web media, website. Examples of shapes include, but are not limited to, a circle, rectangle and triangle. The users category of the menu interface402is user-selectable for presenting options (e.g., via a drop-down list) to manage users/participants of the virtual space (e.g., adding tags for participants, so as to distinguish between roles such as an administrator or an attendee/participant). In addition, the edit category is user-selectable for performing edit operations (e.g., undo, redo, cut, copy, paste), and the help category is user-selectable for performing help operations (e.g., getting started, discord, live help, submitting feedback). In one or more embodiments, the room elements interface404includes user-selectable icons for inserting elements (e.g., corresponding to a subset of those available via the above-mentioned element category) into a current room. For example, the elements may be added and/or positioned within the current room by selecting the element and dragging the selected element onto the room canvas interface412, which represents the layout of the current room. 
In one or more embodiments, the room elements interface404includes icons including but not limited to: a text icon for adding text to a room; a participant video icon for adding a single participant video element (e.g., an interface element which is selectable by a single participant for displaying that participant's video feed) to a room; a multiuser video icon for adding a multiple participant video element (e.g., an interface element which is selectable by one or more participants for displaying the video feeds for those participants) to a room; a chat icon for adding a chat interface (e.g., for messaging using text, stickers, emojis, etc.) to a room; a video playback icon for adding a video playback element (e.g., screen) to a room for playback of a selected video; a background icon for selecting a background color/gradient, image or video to a room; an action icon for adding an action element (e.g., button) to a room for performing a user-defined action (e.g., speaking into a virtual microphone, querying an administrator via a button, and the like); and/or a breakout button for adding a breakout element (e.g., button) for shuffling selected participants between the current room and one or more other rooms. In one or more embodiments, the element properties interface406includes various fields for setting configuration properties for above-described room elements. For example, with respect to elements in general (e.g., text, single participant video element, multi participant video element, chat interface, video element, background image, action element, breakout button), the element properties interface406includes fields for setting the element title/name, opacity, gradient, style, layout, borders/corners, shadows, interaction (e.g., to what extent participant(s) may delete, modify, resize the element), filtering, full screen status, conditions, accessibility and actions for the element. 
For the single participant video element, the element properties interface406includes further fields for setting the manner in which users are placed into the single participant video element during virtual conferencing (e.g., automatically, manually by the participant and/or the administrator end user). In addition, for the chat interface, the element properties interface406includes further properties for setting who (e.g., administrator and/or participants) can provide chat input, and/or which types of input (e.g., text, stickers, emojis, etc.) are available. For the action element, the element properties interface406includes further properties for setting what type of action is to be performed in response to user selection of the action element (e.g., button). Moreover, for the breakout element, the element properties interface406includes further properties for selecting participants and/or breakout rooms. In one or more embodiments, the element properties interface406further includes fields for setting configuration properties for the room canvas interface412. For example, the element properties interface406includes fields for selecting a number of fake participants (e.g., simulated video feeds) in order to visualize multiple users, selecting music (e.g., background music), and/or selecting reaction buttons for participants to indicate real-time reactions with respect to virtual conferencing within a room. In one or more embodiments, the controls interface408includes user-selectable icons corresponding to controls (e.g., administrative controls) for the virtual space. 
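The configuration model described above, in which all elements share generic fields (title, opacity, style, and so on) while certain element types carry additional type-specific fields, may be sketched as follows. The property names and the helper function are hypothetical and serve only to illustrate the layering of generic and type-specific properties.

```python
# Hypothetical sketch of element configuration: generic defaults
# shared by all elements, overlaid with type-specific fields and
# any per-element overrides set via the properties interface.
DEFAULT_PROPS = {
    "title": "",
    "opacity": 1.0,
    "style": "default",
    "full_screen": False,
}

TYPE_SPECIFIC = {
    "chat": {"allowed_input": ["text", "stickers", "emojis"]},
    "action": {"on_select": None},          # action performed on selection
    "breakout": {"participants": [], "breakout_rooms": []},
}

def make_element(element_type, **overrides):
    props = dict(DEFAULT_PROPS)                       # generic fields
    props.update(TYPE_SPECIFIC.get(element_type, {})) # type-specific fields
    props.update(overrides)                           # per-element settings
    props["type"] = element_type
    return props

chat = make_element("chat", title="Room chat", opacity=0.9)
print(chat["allowed_input"])
```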
For example, the controls interface408includes icons including but not limited to: a director mode icon for toggling between a director mode for designing a room and a user mode for viewing the room within the virtual space design interface204(e.g., with the director mode including the room elements interface404and the element properties interface406while the user mode does not); a view icon for viewing the room within the virtual space navigation interface208; a share screen icon (e.g., for collaborative design with other user(s) such as co-administrators); a microphone icon for enabling or disabling the microphone; a help icon (e.g., getting started, discord, live help, submitting feedback); an invite icon (e.g., for displaying an invite link for sending to participants to visit the virtual space); a settings icon (e.g., for selecting the end user's video and audio devices for the virtual conferencing, and for selecting a user avatar); and/or an exit icon for exiting the virtual space design interface204. In one or more embodiments, the room list interface410displays the list of rooms for the virtual space. Each listed room is user selectable to switch to edit (e.g., in director mode) and/or view (e.g., in user mode) the selected room. As noted above, the list of rooms may be modified (e.g., by adding, importing and/or removing rooms) via the options within the room category of the menu interface402. FIG.5illustrates a virtual space navigation interface208with interface elements to navigate between the rooms of a virtual space and to participate in virtual conferencing with respect to the rooms, in accordance with some example embodiments. The virtual space navigation interface208includes a controls interface502, a room list interface504, a current room interface506, a participant video element508and a participant video element510. 
It is noted that elements502-512correspond to an example of interface elements for the virtual space navigation interface208, and that additional, fewer and/or different interface elements may be used. In one or more embodiments, the controls interface502includes user-selectable icons corresponding to controls (e.g., administrative controls) for the virtual space. For example, the controls interface502includes icons including but not limited to: an edit icon for redirecting to the virtual space design interface204to edit the current room; a volume icon for adjusting a volume level for the current room; a share screen icon (e.g., for allowing others to view the room without necessarily joining the room); a microphone icon for muting and unmuting the microphone; a help icon (e.g., getting started, discord, live help, submitting feedback); an invite icon (e.g., for displaying an invite link for participants to visit the virtual space); a settings icon (e.g., for selecting the end user's video and audio devices for the virtual conferencing, and for selecting a user avatar); and/or an exit icon for exiting the virtual space navigation interface208. In one or more embodiments, the room list interface504displays the list of rooms for the virtual space. Each listed room is user selectable to switch to the selected room (e.g., for virtual conferencing). The selected room is presented as a current room within the current room interface506. In this manner, a participant may navigate among the multiple rooms available within the virtual space. Alternatively or in addition, navigation between rooms is possible via a virtual space map interface (not shown) which depicts a map view of the virtual space (e.g., a floor plan) and its corresponding rooms, with each room being user selectable to navigate thereto. 
Alternatively or in addition, navigation between rooms is further possible by positioning a navigation button (not shown) within a room, where user selection of the button results in navigating to another room (e.g., a predefined room). As noted above, the virtual space design interface204allows for the design of a virtual space and its corresponding rooms. As such, navigation between rooms is based at least in part on the design of the virtual space (e.g., a virtual space may include one or more of the above-mentioned room list interface504, the virtual space map/floor plan interface and/or the navigation button). With respect to the current room interface506, each participant is represented as a respective participant video element. As noted above, a participant video element corresponds to an interface element (e.g., a box) which is selectable by a single participant for displaying that participant's video feed. The example ofFIG.5includes a first participant associated with the participant video element508and a second participant associated with the participant video element510. In one or more embodiments, with respect to the perspective of the first participant, the participant video element510showing the feed of the second participant may include participant button(s)512. For example, the participant button(s)512are selectable by the first participant so as to perform a predefined action (e.g., initiate a side conversation, designate the second participant to follow the first participant when the first participant moves rooms) with respect to the second participant. While the example ofFIG.5illustrates two participants, it is possible for the current room interface506to accommodate additional participants for virtual conferencing. The additional participants may be positioned (e.g., automatically and/or manually by dragging) based on the positioning of participant video elements (e.g., boxes) as designed by the virtual space design interface204. 
In one or more embodiments, the virtual space navigation interface208may vary based on whether a given participant is an administrator or another participant (e.g., an attendee). For example, some participant video elements may be designated (e.g., via the virtual space design interface204) for administrators, while other participant video elements are designated for other participants. The virtual conference server system108is configured to distinguish between these administrator or other participant roles, for example, based on the above-described tags assigned to participants via the users category of the menu interface402provided by the virtual space design interface204. FIG.6is an interaction diagram illustrating a process600for providing a room preview within a virtual conferencing system, in accordance with some example embodiments. For explanatory purposes, the process600is described herein with reference to a first client device602, one or more second client device(s)604, and the virtual conference server system108. Each of the first client device602and the second client device(s)604may correspond to a respective client device102. The process600is not limited to the first client device602, the second client device(s)604and the virtual conference server system108. Moreover, one or more blocks (or operations) of the process600may be performed by one or more other components of the first client device602, the second client device(s)604or the virtual conference server system108, and/or by other suitable devices. Further for explanatory purposes, the blocks (or operations) of the process600are described herein as occurring in serial, or linearly. However, multiple blocks (or operations) of the process600may occur in parallel or concurrently. In addition, the blocks (or operations) of the process600need not be performed in the order shown and/or one or more blocks (or operations) of the process600need not be performed and/or can be replaced by other operations. 
The process600may be terminated when its operations are completed. In addition, the process600may correspond to a method, a procedure, an algorithm, etc. Each of the first client device602and the second client device(s)604have instances of the virtual conference client104installed thereon. In the example ofFIG.6, the first client device602and the one or more second client device(s)604are associated with a respective first participant and respective one or more second participant(s) of the virtual conference server system108. For example, the first participant may be associated with a first user account of the virtual conference server system108, and the second participant(s) may be associated with second user account(s) of the virtual conference server system108. As noted above, the first participant and second participant(s) are identifiable by the virtual conference server system108based on unique identifiers (e.g., email addresses, telephone numbers) associated with respective user accounts for the first participant and second participant(s). In one or more embodiments, the virtual conference server system108implements and/or works in conjunction with a social network server122which is configured to identify contacts with which a particular user has relationships. For example, the first participant and second participant(s) may be contacts with respect to the virtual conference server system108. As described herein, the virtual conferencing system100provides for presenting a live preview to another room, for participants within a current room of a virtual space. At operations606-608, the virtual conference server system108provides for real-time communication between the participant devices, which include the first client device602and the one or more second client device(s)604. The real-time communication may occur within a first room (e.g., a current room) of plural rooms included within a virtual space. 
Each participant within the first room may be represented by a respective participant video element (e.g., elements508-510inFIG.5, corresponding to respective video feeds). In one or more embodiments, the virtual conference server system108provides for transmitting audio to the first client device602and the second client device(s)604via a first audio channel. For example, the first audio channel corresponds to a primary communication stream between the plural participants within the first room. The communication may be one way (e.g., from a presenter to the remaining participants) or multi-way (e.g., between two or more of the participants), based on the configuration/design of the first room (e.g., as set via the virtual space design interface204). At operations610-612, the virtual conference server system108provides, to the first client device602and the second client device(s)604, display of a room preview element within the first room. The room preview element corresponds to a live video-audio feed of a second room which is separate from the first room. In one or more embodiments, the virtual conference server system108causes the room preview element to be displayed at a predefined position (e.g., as a window or frame) in the first room. For example, a user (e.g., administrator) may have configured the room preview element during design, via the virtual space design interface204. As noted above, the menu interface402of the virtual space design interface204includes an elements category for selecting which elements to include in a room, where the available list of elements includes a room preview element. Thus, the user may have included the room preview element in the first room. In addition, the user may have specified the position/size of the room preview element, and which room (e.g., the second room, specified by an identifier for the second room) is associated with the room preview element. 
During virtual conferencing within the first room (e.g., as presented via the virtual space navigation interface208), the second room may be occupied by other participants (e.g., third participants, not shown) who are engaging in a virtual conference in the second room. The room preview element positioned within the first room allows the first participant (e.g., at the first client device602) and the second participant(s) (e.g., at the second client device(s)604) to observe the separate virtual conference within the second room, without requiring the first and second participants to navigate to the second room. In one or more embodiments, the room preview element may correspond to a viewport (e.g., based on configuration properties set by the administrator via the virtual space design interface204). The viewport may correspond to a live feed for a portion of the second room. In this manner, the preview of the second room provided by the room preview element may be smaller than the entire view of the second room when switching thereto. In addition, the room preview element may include interface elements (e.g., arrow controls, zoom controls, and the like) for adjusting the viewport to show the live feed with respect to another portion of the second room. Alternatively or in addition, the virtual conference server system108may configure the viewport to reposition and/or resize the live feed with respect to the second room via a predefined gesture (e.g., a drag gesture, a pinch gesture, and the like). At operations614-616, the virtual conference server system108provides a second audio channel, associated with audio output of the second room, to the first client device602and the second client device(s)604. For example, the second audio channel corresponds to a primary communication stream between the participants (e.g., the third participants) within the second room. 
The communication may be one way (e.g., from a presenter to the remaining third participants) or multi-way (e.g., between two or more of the third participants), based on the configuration/design of the second room. In one or more embodiments, the virtual conference server system108provides for adjusting audio levels of the first audio channel relative to the second audio channel within the first room. For example, the virtual conference server system108accesses a user-specified parameter corresponding to an audio mix for the first room. The parameter may have been specified by a user (e.g., an administrator) with respect to designing the room via the virtual space design interface204. In one example, the parameter may indicate to maintain the audio level for the first audio channel (e.g., at 100%) and to set the audio level for the second audio channel to be comparatively lower (e.g., at 10%). In one or more embodiments, the virtual conference server system108provides for increasing the audio level for the second audio channel in response to a predefined gesture (e.g., a hover gesture). For example, in response to a participant performing a hover gesture over the room preview element, the virtual conference server system108provides for adjusting the audio level from a default level (e.g., 10% as noted above) to an increased level (e.g., 50%) for that participant's device. In this manner, the participant is able to further observe the virtual conference in the second room, without being required to navigate from the first room to the second room. In one or more embodiments, the room preview element is selectable by the user to navigate from the first room to the second room. As noted above, based on design via the virtual space design interface204, the virtual space navigation interface208provides for navigating between rooms in different manners (e.g., via the room list interface504, the virtual space map/floor plan interface, and/or a navigation button). 
In this regard, the room preview element (e.g., the window or frame itself) may be configured as a navigation button which is selectable to navigate from the first room to the second room. In the example ofFIG.6, the first participant at the first client device602performs a predefined gesture (e.g., press) with respect to the room preview element, the predefined gesture for navigating to the second room (block618). The first client device602sends an indication of the user selection to the virtual conference server system108(operation620). In response, the virtual conference server system108provides for switching the first participant from the first room to the second room of the virtual space (operation622). In doing so, the virtual conference server system108also updates the participant video element for the first participant (operation624). For example, the participant video element of the first participant is removed from the first room. Further, a participant video element for the first participant is added to the second room. In this manner, the room preview element displayed within the first room may show the first participant as being in the second room. Although the example ofFIG.6is described with respect to a single room preview element, the virtual conferencing system100is not limited to such. As discussed below with respect toFIG.7, the first room may include multiple room preview elements (e.g., based on the design of the first room via the virtual space design interface204), each of which provides a live preview to a respective room within the virtual space. Audio output from each of the respective rooms may be mixed in, at reduced level, with an audio channel corresponding to the first room. Each of the rooms may be user selectable to switch a participant from the first room to the selected room. 
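The room switch described above (removing the participant's video element from the first room and adding it to the second) can be illustrated with a minimal Python sketch; the `rooms` mapping and the `switch_room` helper are assumptions for illustration only, not the disclosed server's API.

```python
def switch_room(rooms: dict, participant: str, src: str, dst: str) -> None:
    """Move a participant's video element from one room to another."""
    rooms[src].remove(participant)   # video element removed from the first room
    rooms[dst].append(participant)   # video element added to the second room

rooms = {"first": ["alice", "bob"], "second": ["carol"]}
switch_room(rooms, "alice", "first", "second")
print(rooms)  # {'first': ['bob'], 'second': ['carol', 'alice']}
```

Because the preview element carries a live feed of the second room, a preview displayed in the first room would now show the moved participant among the second room's occupants.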
Thus, the virtual conferencing system100as described herein provides for room preview element(s) with live video-audio feeds to preview or otherwise observe other rooms. Without such room preview elements, participants may otherwise be required to manually switch from a current room to another room in order to observe the other room. The virtual conferencing system100reduces such additional efforts by the participants, thereby saving time for the participants, and reducing computational resources/processing power for the virtual conferencing system100. FIG.7illustrates a current room interface700with room preview elements for previewing multiple rooms, in accordance with some example embodiments. As noted above, the current room interface700may be included within a virtual space navigation interface (e.g., perFIG.5). In the example ofFIG.7, the current room interface700corresponds to a room with a single participant whose video feed is included in the participant video element702. While not shown in the example ofFIG.7, it is possible for the room to include more than one participant. In addition, the current room interface700depicts three room preview elements704-708. Each of the room preview elements704-708corresponds to a live video-audio feed (e.g., including participants therein) of a respective room within a virtual space which is separate from the current room. One or more of the room preview elements704-708may correspond to a viewport which shows only a portion of its respective room. Alternatively, one or more of the room preview elements704-708may show an entirety of its respective room (e.g., similar to the view of navigating to that room). Audio output from each of the respective rooms corresponding to the room preview elements704-708may be mixed in, at reduced level, with an audio channel corresponding to the current room. 
In response to a predefined gesture (e.g., a hover gesture over one of the room preview elements704-708), audio output from the corresponding room may be increased. Moreover, in response to user selection of one of the room preview elements704-708(e.g., by the participant corresponding to the participant video element702), the virtual conference server system108provides for switching the participant from the current room to the room associated with the selection. FIG.8is a flowchart illustrating a process800for providing a room preview within a virtual conferencing system, in accordance with some example embodiments. For explanatory purposes, the process800is primarily described herein with reference to the first client device602, the second client device(s)604and the virtual conference server system108ofFIG.1andFIG.2. However, one or more blocks (or operations) of the process800may be performed by one or more other components, and/or by other suitable devices. Further for explanatory purposes, the blocks (or operations) of the process800are described herein as occurring in serial, or linearly. However, multiple blocks (or operations) of the process800may occur in parallel or concurrently. In addition, the blocks (or operations) of the process800need not be performed in the order shown and/or one or more blocks (or operations) of the process800need not be performed and/or can be replaced by other operations. The process800may be terminated when its operations are completed. In addition, the process800may correspond to a method, a procedure, an algorithm, etc. The virtual conference server system108provides for virtual conferencing between plural participants within a first room of plural rooms, the plural rooms being included within a virtual space for virtual conferencing (block802). 
The virtual conference server system108may provide, for each of the plural participants, display of a participant video element which corresponds to the participant and which includes a video feed for the participant. The virtual conference server system108provides, for each of the plural participants, a first audio channel associated with audio output of the first room (block804). The first audio channel may correspond to multi-way communication between the plural participants. The virtual conference server system108provides, for each of the plural participants, display of a room preview element within the first room, the room preview element corresponding to a live preview of a second room of the plural rooms (block806). The room preview element may include a viewport with a live feed for a portion of the second room. The room preview element may include a user-selectable interface element for adjusting the viewport to show the live feed for another portion of the second room. The virtual conference server system108provides, for each of the plural participants, a second audio channel associated with audio output of the second room (block808). The second audio channel may correspond to multi-way communication between second plural participants within the second room. The virtual conference server system108may access a user-specified parameter corresponding to an audio mix for the first room, and adjust, based on the user-specified parameter, respective audio levels for the first audio channel and the second audio channel. The virtual conference server system108may receive indication of a predefined gesture (e.g., a hover gesture), by a first participant of the plural participants, with respect to the room preview element, and provide, in response to receiving the indication, for increasing the audio level of the second audio channel. The room preview element may be user-selectable for navigating from the first room to the second room. 
The virtual conference server system108may receive indication of a user selection, by a first participant of the plural participants, of the room preview element, and provide, in response to receiving the indication, for navigating the first participant from the first room to the second room. The virtual conference server system108may provide, for each of the plural participants, display of a second room preview element within the first room, the second room preview element corresponding to a live preview of a third room of the plural rooms. The virtual conference server system108may provide, for each of the plural participants, a third audio channel corresponding to audio output of the third room. FIG.9is a diagrammatic representation of the machine900within which instructions910(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine900to perform any one or more of the methodologies discussed herein may be executed. For example, the instructions910may cause the machine900to execute any one or more of the methods described herein. The instructions910transform the general, non-programmed machine900into a particular machine900programmed to carry out the described and illustrated functions in the manner described. The machine900may operate as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine900may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. 
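The blocks of process 800 can be summarized as a linear sketch. The `RecordingServer` stub and its method names below are hypothetical stand-ins for the virtual conference server system's operations, used only to make the block ordering concrete.

```python
class RecordingServer:
    """Minimal stand-in for the virtual conference server (hypothetical API)."""
    def __init__(self):
        self.calls = []
    def show_video_element(self, p, room):
        self.calls.append(("video", p, room))
    def open_audio_channel(self, p, room):
        self.calls.append(("audio", p, room))
    def show_room_preview(self, p, room, previewed):
        self.calls.append(("preview", p, room, previewed))

def provide_room_preview(server, first_room, second_room, participants):
    """Linear sketch of process 800's blocks; names are illustrative."""
    for p in participants:
        server.show_video_element(p, first_room)              # block 802
        server.open_audio_channel(p, first_room)              # block 804
        server.show_room_preview(p, first_room, second_room)  # block 806
        server.open_audio_channel(p, second_room)             # block 808

srv = RecordingServer()
provide_room_preview(srv, "first", "second", ["alice", "bob"])
print(len(srv.calls))  # 8 — four operations per participant
```

As the text notes, the serial ordering here is for explanation only; in practice the blocks may occur in parallel, in a different order, or be replaced by other operations.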
The machine900may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smartphone, a mobile device, a wearable device (e.g., a smartwatch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions910, sequentially or otherwise, that specify actions to be taken by the machine900. Further, while only a single machine900is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions910to perform any one or more of the methodologies discussed herein. The machine900, for example, may comprise the client device102or any one of a number of server devices forming part of the virtual conference server system108. In some examples, the machine900may also comprise both client and server systems, with certain operations of a particular method or algorithm being performed on the server-side and with certain operations of the particular method or algorithm being performed on the client-side. The machine900may include processors904, memory906, and input/output (I/O) components902, which may be configured to communicate with each other via a bus940. In an example, the processors904(e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) Processor, a Complex Instruction Set Computing (CISC) Processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor908and a processor912that execute the instructions910. 
The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. AlthoughFIG.9shows multiple processors904, the machine900may include a single processor with a single-core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. The memory906includes a main memory914, a static memory916, and a storage unit918, all accessible to the processors904via the bus940. The main memory914, the static memory916, and storage unit918store the instructions910embodying any one or more of the methodologies or functions described herein. The instructions910may also reside, completely or partially, within the main memory914, within the static memory916, within machine-readable medium920within the storage unit918, within at least one of the processors904(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine900. The I/O components902may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components902that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones may include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components902may include many other components that are not shown inFIG.9. In various examples, the I/O components902may include user output components926and user input components928. 
The user output components926may include visual components (e.g., a display such as a plasma display panel (PDP), a light-emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The user input components928may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In further examples, the I/O components902may include biometric components930, motion components932, environmental components934, or position components936, among a wide array of other components. For example, the biometric components930include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye-tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components932include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope). 
The environmental components934include, for example, one or more cameras (with still image/photograph and video capabilities), illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. With respect to cameras, the client device102may have a camera system comprising, for example, front cameras on a front surface of the client device102and rear cameras on a rear surface of the client device102. The front cameras may, for example, be used to capture still images and video of a user of the client device102(e.g., “selfies”), which may then be augmented with augmentation data (e.g., filters) described above. The rear cameras may, for example, be used to capture still images and videos in a more traditional camera mode, with these images similarly being augmented with augmentation data. In addition to front and rear cameras, the client device102may also include a 360° camera for capturing 360° photographs and videos. Further, the camera system of a client device102may include dual rear cameras (e.g., a primary camera as well as a depth-sensing camera), or even triple, quad or penta rear camera configurations on the front and rear sides of the client device102. These multiple camera systems may include a wide camera, an ultra-wide camera, a telephoto camera, a macro camera and a depth sensor, for example. 
The position components936include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies. The I/O components902further include communication components938operable to couple the machine900to a network922or devices924via respective coupling or connections. For example, the communication components938may include a network interface component or another suitable device to interface with the network922. In further examples, the communication components938may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices924may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB). Moreover, the communication components938may detect identifiers or include components operable to detect identifiers. For example, the communication components938may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). 
In addition, a variety of information may be derived via the communication components938, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth. The various memories (e.g., main memory914, static memory916, and memory of the processors904) and storage unit918may store one or more sets of instructions and data structures (e.g., software) embodying or used by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions910), when executed by processors904, cause various operations to implement the disclosed examples. The instructions910may be transmitted or received over the network922, using a transmission medium, via a network interface device (e.g., a network interface component included in the communication components938) and using any one of several well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions910may be transmitted or received using a transmission medium via a coupling (e.g., a peer-to-peer coupling) to the devices924. FIG.10is a block diagram1000illustrating a software architecture1004, which can be installed on any one or more of the devices described herein. The software architecture1004is supported by hardware such as a machine1002that includes processors1020, memory1026, and I/O components1038. In this example, the software architecture1004can be conceptualized as a stack of layers, where each layer provides a particular functionality. The software architecture1004includes layers such as an operating system1012, libraries1010, frameworks1008, and applications1006. Operationally, the applications1006invoke API calls1050through the software stack and receive messages1052in response to the API calls1050. The operating system1012manages hardware resources and provides common services. 
The operating system1012includes, for example, a kernel1014, services1016, and drivers1022. The kernel1014acts as an abstraction layer between the hardware and the other software layers. For example, the kernel1014provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services1016can provide other common services for the other software layers. The drivers1022are responsible for controlling or interfacing with the underlying hardware. For instance, the drivers1022can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., USB drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth. The libraries1010provide a common low-level infrastructure used by the applications1006. The libraries1010can include system libraries1018(e.g., C standard library) that provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. In addition, the libraries1010can include API libraries1024such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render in two dimensions (2D) and three dimensions (3D) in a graphic content on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries1010can also include a wide variety of other libraries1028to provide many other APIs to the applications1006. 
The frameworks1008provide a common high-level infrastructure that is used by the applications1006. For example, the frameworks1008provide various graphical user interface (GUI) functions, high-level resource management, and high-level location services. The frameworks1008can provide a broad spectrum of other APIs that can be used by the applications1006, some of which may be specific to a particular operating system or platform. In an example, the applications1006may include a home application1036, a contacts application1030, a browser application1032, a book reader application1034, a location application1042, a media application1044, a messaging application1046, a game application1048, and a broad assortment of other applications such as a third-party application1040. The applications1006are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications1006, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, the third-party application1040(e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application1040can invoke the API calls1050provided by the operating system1012to facilitate functionality described herein. Glossary “Carrier signal” refers to any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such instructions. 
Instructions may be transmitted or received over a network using a transmission medium via a network interface device. “Client device” refers to any machine that interfaces to a communications network to obtain resources from one or more server systems or other client devices. A client device may be, but is not limited to, a mobile phone, desktop computer, laptop, portable digital assistants (PDAs), smartphones, tablets, ultrabooks, netbooks, multi-processor systems, microprocessor-based or programmable consumer electronics, game consoles, set-top boxes, or any other communication device that a user may use to access a network. “Communication network” refers to one or more portions of a network that may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, a network or a portion of a network may include a wireless or cellular network and the coupling may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or other types of cellular or wireless coupling. 
In this example, the coupling may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology. “Component” refers to a device, physical entity, or logic having boundaries defined by function or subroutine calls, branch points, APIs, or other technologies that provide for the partitioning or modularization of particular processing or control functions. Components may be combined via their interfaces with other components to carry out a machine process. A component may be a packaged functional hardware unit designed for use with other components and a part of a program that usually performs a particular function of related functions. Components may constitute either software components (e.g., code embodied on a machine-readable medium) or hardware components. A “hardware component” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner. In various examples, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware components of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware component that operates to perform certain operations as described herein. 
A hardware component may also be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware component may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware component may be a special-purpose processor, such as a field-programmable gate array (FPGA) or an application specific integrated circuit (ASIC). A hardware component may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware component may include software executed by a general-purpose processor or other programmable processor. Once configured by such software, hardware components become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. It will be appreciated that the decision to implement a hardware component mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software), may be driven by cost and time considerations. Accordingly, the phrase “hardware component” (or “hardware-implemented component”) should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering examples in which hardware components are temporarily configured (e.g., programmed), each of the hardware components need not be configured or instantiated at any one instance in time. 
For example, where a hardware component comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware components) at different times. Software accordingly configures a particular processor or processors, for example, to constitute a particular hardware component at one instance of time and to constitute a different hardware component at a different instance of time. Hardware components can provide information to, and receive information from, other hardware components. Accordingly, the described hardware components may be regarded as being communicatively coupled. Where multiple hardware components exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware components. In examples in which multiple hardware components are configured or instantiated at different times, communications between such hardware components may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware components have access. For example, one hardware component may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware component may then, at a later time, access the memory device to retrieve and process the stored output. Hardware components may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. 
Whether temporarily or permanently configured, such processors may constitute processor-implemented components that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented component” refers to a hardware component implemented using one or more processors. Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented components. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an API). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some examples, the processors or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other examples, the processors or processor-implemented components may be distributed across a number of geographic locations. “Computer-readable storage medium” refers to both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals. The terms “machine-readable medium,” “computer-readable medium” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure. 
“Machine storage medium” refers to a single or multiple storage devices and media (e.g., a centralized or distributed database, and associated caches and servers) that store executable instructions, routines and data. The term shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. Specific examples of machine-storage media, computer-storage media and device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage medium,” “device-storage medium,” and “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium.” “Non-transitory computer-readable storage medium” refers to a tangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine. “Signal medium” refers to any intangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine and includes digital or analog communications signals or other intangible media to facilitate communication of software or data. The term “signal medium” shall be taken to include any form of a modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure.
11943073
DETAILED DESCRIPTION OF EMBODIMENTS Aspects of the present disclosure provide techniques for signaling of multiple grouping of an immersive video with overlays for teleconferencing and/or telepresence. In some examples, a teleconference can be a video conference, and the participants in the video conference can communicate via media streams that can include video and audio. Telepresence is a type of video conferencing technology that provides a higher level of audio and video quality to resemble face-to-face interactions as much as possible. In the descriptions of the disclosure, video conference is used to include teleconference and telepresence, and techniques disclosed in the present disclosure can be used in teleconference and telepresence. FIG. 1 and FIG. 2 show some video conference systems in some examples. FIG. 1 shows a video conference system (100). The video conference system (100) includes a sub system (110) and a plurality of user devices, such as user devices (120) and (130). The sub system (110) is installed at a location, such as conference room A. Generally, the sub system (110) is configured to have a relatively higher bandwidth than the user devices (120) and (130) and can provide a host service of a video conference session (also referred to as a video conference call). The sub system (110) can enable users or participants in the conference room A to participate in the video conference session, and can enable some remote users, such as user B of the user device (120) and user C of the user device (130), to participate in the video conference session from remote locations. In some examples, the sub system (110) and the user devices (120) and (130) are referred to as terminals in the video conference session. In some embodiments, the sub system (110) includes various audio, video and control components that are suitable for a conference room.
The various audio, video and control components can be integrated into a device, or can be distributed components that are coupled together via suitable communication technologies. In some examples, the sub system (110) includes a wide-angle camera (111), such as a fisheye camera, an omnidirectional camera and the like that has a relatively wide field of view. For example, an omnidirectional camera can be configured to have a field of view that covers approximately an entire sphere, and a video taken by the omnidirectional camera can be referred to as omnidirectional video or 360-degree video. Further, in some examples, the sub system (110) includes a microphone (112), such as an omnidirectional (also referred to as non-directional) microphone that can capture sound waves from approximately any direction. The sub system (110) can include a display screen (114), a speaker device, and the like to enable users in the conference room A to play multimedia corresponding to video and audio of users at locations other than the conference room A. In an example, the speaker device can be integrated with the microphone (112) or can be a separate component (not shown). In some examples, the sub system (110) includes a controller (113). While a laptop computing device is shown in FIG. 1 as the controller (113), other suitable devices, such as a desktop computer, a tablet computer, and the like, can be used as the controller (113). It is also noted that, in an example, the controller (113) can be integrated together with other components in the sub system (110). The controller (113) can be configured to perform various control functions of the sub system (110). For example, the controller (113) can be used to initiate a video conference session, and manage communications between the sub system (110) and the user devices (120) and (130).
In an example, the controller (113) can encode video and/or audio captured at the conference room A (e.g., captured by the camera (111) and the microphone (112)) to generate a media stream to carry the video and/or audio, and can cause the media stream to be transmitted to the user devices (120) and (130). Further, in some examples, the controller (113) can receive, from each of the user devices in the video conference system (100) (e.g., user devices (120) and (130)), media streams that carry audio and/or video captured at the respective user devices. The controller (113) can address and transmit received media streams to the other user devices in the video conference system (100). For example, the controller (113) can receive a media stream from the user device (120), address and transmit the media stream to the user device (130), and can receive another media stream from the user device (130), address and transmit the other media stream to the user device (120). Further, in some examples, the controller (113) can determine suitable video conference parameters, such as audio, video mixing parameters and the like, and transmit the video conference parameters to the user devices (120) and (130). In some examples, the controller (113) can cause a display of a user interface on a screen, such as the display screen (114), a screen of the laptop computing device, and the like to facilitate user inputs in the conference room A. 
Each of the user devices (120) and (130) can be any suitable video conference enabled equipment, such as a desktop computer, a laptop computer, a tablet computer, a wearable device, a handheld device, a smart phone, a mobile-type device, an embedded-type device, a game console, a gaming device, a personal data assistant (PDA), a telecommunication device, a global positioning system (“GPS”) device, a virtual reality (“VR”) device, an augmented reality (AR) device, an implanted computing device, an automotive computer, a network-enabled television, an Internet of Things (“IoT”) device, a work station, a media player, a personal video recorder (PVR), a set-top box, a camera, an integrated component (e.g., a peripheral device) for inclusion in a computing device, an appliance, or any other sort of computing device. In the FIG. 1 example, the user device (120) includes wearable multimedia components to allow a user, such as the user B, to participate in a video conference session. For example, the user device (120) includes a head mounted display (HMD) that can be worn on the head of the user B. The HMD can include display optics in front of one or two eyes of the user B to play video. In another example, the user device (120) includes a headset (not shown) that can be worn by the user B. The headset can include a microphone to capture the user's voice and one or two earpieces for outputting audio sound. The user device (120) also includes suitable communication components (not shown) that can transmit and/or receive media streams. In the FIG. 1 example, the user device (130) can be a mobile device, such as a smart phone and the like, that integrates communication components, imaging components, audio components, and the like together to allow a user, such as the user C, to participate in a video conference session.
In the FIG. 1 example, the sub system (110), the user device (120) and the user device (130) include suitable communication components (not shown) that can interface with a network (101). The communication components may include one or more network interface controllers (NICs) or other types of transceiver circuitry to send and receive communications and/or data over a network, such as the network (101), and the like. The network (101) may include, for example, public networks such as the Internet, private networks such as an institutional and/or personal intranet, or some combination of private and public networks. The network (101) may also include any type of wired and/or wireless network, including but not limited to local area networks (“LANs”), wide area networks (“WANs”), satellite networks, cable networks, Wi-Fi networks, WiMax networks, mobile communications networks (e.g., 3G, 4G, 5G and so forth) or any combination thereof. The network (101) may utilize communications protocols, including packet-based and/or datagram-based protocols such as Internet protocol (“IP”), transmission control protocol (“TCP”), user datagram protocol (“UDP”), or other types of protocols. Moreover, the network (101) may also include a number of devices that facilitate network communications and/or form a hardware basis for the networks, such as switches, routers, gateways, access points, firewalls, base stations, repeaters, backbone devices, and the like. In some examples, the network (101) may further include devices that enable connection to a wireless network, such as a wireless access point (“WAP”). In the FIG. 1 example, the sub system (110) can host a video conference session using peer to peer technologies.
For example, after the user device (120) joins the video conference session, the user device (120) can suitably address packets (e.g., using the IP address of the sub system (110)) and transmit the packets to the sub system (110), and the sub system (110) can suitably address packets (e.g., using the IP address of the user device (120)) and transmit the packets to the user device (120). The packets can carry various information and data, such as media streams, acknowledgements, control parameters, and the like. In some examples, the video conference system (100) can provide video conference sessions of immersive video conferencing. For example, during a video conference session, the sub system (110) is configured to generate immersive media, such as an omnidirectional video/audio using an omnidirectional camera and/or an omnidirectional microphone. In an example, the HMD in the user device (120) can detect head movements of the user B, and determine a viewport orientation of the user B based on the head movements. The user device (120) can send the viewport orientation of the user B to the sub system (110), and the sub system (110) can, in turn, send a viewport dependent stream, such as a video stream that is tailored based on the viewport orientation of the user B (a media stream carrying a video that is tailored based on the viewport orientation of the user B), an audio stream that is tailored based on the viewport orientation of the user B (a media stream carrying an audio that is tailored based on the viewport orientation of the user B), and the like to the user device (120) for playing at the user device (120). In another example, the user C can use the user device (130) to enter a viewport orientation of the user C (e.g., using the touch screen of the smart phone).
The user device (130) can send the viewport orientation of the user C to the sub system (110), and the sub system (110) can, in turn, send a viewport dependent stream, such as a video stream that is tailored based on the viewport orientation of the user C (a media stream carrying a video that is tailored based on the viewport orientation of the user C), an audio stream that is tailored based on the viewport orientation of the user C (a media stream carrying an audio that is tailored based on the viewport orientation of the user C), and the like to the user device (130) for playing at the user device (130). It is noted that, during a video conference session, the viewport orientations of the user B and/or user C may vary. The change of the viewport orientations can be informed to the sub system (110), and the sub system (110) can adjust the viewport orientation in respective viewport dependent streams that are respectively sent to the user device (120) and the user device (130). For ease of description, immersive media is used to refer to the wide angle media, such as an omnidirectional video, an omnidirectional audio, and to the viewport dependent media that is generated based on the wide angle media. It is noted that, although in the present disclosure 360-degree media, such as 360-degree video, 360-degree audio and the like, is used to illustrate techniques for video conference, the techniques of video conference can be used on immersive media of less than 360 degrees. FIG. 2 shows another video conference system (200) according to some examples of the disclosure. The video conference system (200) includes a network based media processing server (240), multiple sub systems, such as sub systems (210A)-(210Z) that are respectively installed at conference room A to conference room Z, and a plurality of user devices, such as user devices (220) and (230).
The network based media processing server (240) can set up a video conference session and enable the sub systems (210A)-(210Z) and user devices, such as the user devices (220) and (230), to join the video conference session, so that users, such as users in the conference rooms A-Z, user B of the user device (220) and user C of the user device (230), can participate in the video conference session. In some examples, the sub systems (210A)-(210Z) and the user devices (220) and (230) are referred to as terminals in the video conference session and the network based media processing server (240) can bridge the terminals in the video conference session. In some examples, the network based media processing server (240) is referred to as a media-aware networking element. The network based media processing server (240) can perform media resource functions (MRF) and can perform media control functions as a media control unit (MCU). In some examples, a terminal in the video conference system (200) that transmits immersive media to the network based media processing server (240) in a video conference session is referred to as an immersive teleconferencing and telepresence for remote terminals (ITT4RT) transmitter (Tx) client, or remote transmitter, or transmitter client; and a terminal in the video conference system (200) that receives a media stream related to an immersive media (e.g., a media stream generated based on the immersive media) from the network based media processing server (240) in a video conference session is referred to as an immersive teleconferencing and telepresence for remote terminals (ITT4RT) receiver (Rx) client, or remote receiver, or receiver client. It is noted that a terminal may be an ITT4RT-Rx client and/or an ITT4RT-Tx client in a video conference session. In some embodiments, each of the sub systems (210A)-(210Z) operates similarly to the sub system (110) described above.
Further, each of the sub systems (210A)-(210Z) utilizes certain components that are identical or equivalent to those used in the sub system (110); the description of these components has been provided above and will be omitted here for clarity purposes. It is noted that the sub systems (210A)-(210Z) can be configured differently from each other. The user devices (220) and (230) are configured similarly to the user devices (120) and (130) described above, and the network (201) is configured similarly to the network (101). The description of these components has been provided above and will be omitted here for clarity purposes. In some examples, the network based media processing server (240) can initiate a video conference session. For example, one of the sub systems (210A)-(210Z) and the user devices (220) and (230) can access the network based media processing server (240) to initiate a video conference session. The sub systems (210A)-(210Z) and the user devices (220) and (230) can join the video conference session. Further, the network based media processing server (240) is configured to provide media related functions for bridging terminals in the video conference session. For example, the sub systems (210A)-(210Z) can respectively address packets that carry respective media information, such as video and audio, and transmit the packets to the network based media processing server (240). It is noted that, in some examples, the media information sent to the network based media processing server (240) is viewport independent. For example, the sub systems (210A)-(210Z) can send respective videos, such as the entire 360-degree videos, to the network based media processing server (240). Further, the network based media processing server (240) can receive viewport orientations from the user devices (220) and (230), perform media processing to tailor the media, and send the tailored media information to the respective user devices.
In an example, after the user device (220) joins the video conference session, the user device (220) can address packets and transmit the packets to the network based media processing server (240), and the network based media processing server (240) can address packets and transmit the packets to the user device (220). The packets can include any suitable information/data, such as media streams, control parameters and the like. In an example, the user B can use the user device (220) to select a conference room in order to view the video from a sub system in the conference room. For example, the user B can use the user device (220) to select the conference room A in order to view the captured video from the sub system (210A) that is installed in the conference room A. Further, the HMD in the user device (220) can detect head movements of the user B, and determine a viewport orientation of the user B based on the head movements. The user device (220) can send the selection of the conference room A and the viewport orientation of the user B to the network based media processing server (240), and the network based media processing server (240) can process the media sent from the sub system (210A) and send a viewport dependent stream, such as a video stream that is tailored based on the viewport orientation of the user B, an audio stream that is tailored based on the viewport orientation of the user B, and the like to the user device (220) for playing at the user device (220). In some examples, when the user device (220) selects the conference room A, the user device (220), the sub system (210A) and the network based media processing server (240) can communicate with each other based on session description protocol (SDP).
In another example, after the user device (230) joins the video conference session, the user device (230) can address packets and transmit the packets to the network based media processing server (240), and the network based media processing server (240) can address packets and transmit the packets to the user device (230). The packets can include any suitable information/data, such as media streams, control parameters and the like. In some examples, the network based media processing server (240) can send tailored media information to the user device (230). For example, the user C can use the user device (230) to enter a selection of a conference room, such as conference room Z, and a viewport orientation of the user C (e.g., using the touch screen of the smart phone). The user device (230) can send the selection information of the conference room Z and the viewport orientation of the user C to the network based media processing server (240), and the network based media processing server (240) can process the media sent from the sub system (210Z) and send a viewport dependent stream, such as a video stream that is tailored based on the viewport orientation of the user C, an audio stream that is tailored based on the viewport orientation of the user C, and the like to the user device (230) for playing at the user device (230). In some examples, when the user device (230) selects the conference room Z, the user device (230), the sub system (210Z) and the network based media processing server (240) can communicate with each other based on session description protocol (SDP). It is noted that, during a video conference session, the viewport orientations of a user (e.g., the user B, user C) may vary.
For example, the change of the viewport orientation of the user B can be informed to the network based media processing server (240) by the user B, and the network based media processing server (240) can adjust the viewport orientation in the viewport dependent stream that is sent to the user device (220) accordingly. For ease of description, immersive media is used to refer to the wide angle media, such as an omnidirectional video, an omnidirectional audio, and to the viewport dependent media that is generated based on the wide angle media. It is noted that, although in the present disclosure 360-degree media, such as 360-degree video, 360-degree audio and the like, is used to illustrate techniques for video conference, the techniques of video conference can be used on immersive media of less than 360 degrees. It is noted that conference room selection can be changed during a video conference session. In an example, a user device, such as the user device (220), the user device (230) and the like, can trigger a switching from one conference room to another conference room based on an active speaker. For example, in response to the active speaker being in the conference room A, the user device (230) can determine to switch the selection of the conference room to the conference room A, and send the selection of the conference room A to the network based media processing server (240). Then, the network based media processing server (240) can process the media sent from the sub system (210A) and send a viewport dependent stream, such as a video stream that is tailored based on the viewport orientation of the user C, an audio stream that is tailored based on the viewport orientation of the user C, and the like to the user device (230) for playing at the user device (230). In some examples, the network based media processing server (240) can pause receiving a video stream from any conference room which does not have any active users.
For example, when the network based media processing server (240) determines that the conference room Z is without any active user, the network based media processing server (240) can pause receiving the video stream from the sub system (210Z). In some examples, the network based media processing server (240) can include distributed computing resources and can communicate with the sub systems (210A)-(210Z) and the user devices (220) and (230), via the network (201). In some examples, the network based media processing server (240) may be an independent system that is tasked with managing aspects of one or more video conference sessions. In various examples, the network based media processing server (240) may include one or more computing devices that operate in a cluster or other grouped configuration to share resources, balance load, increase performance, provide fail-over support or redundancy, or for other purposes. For instance, the network based media processing server (240) may belong to a variety of classes of devices such as traditional server-type devices, desktop computer-type devices, and/or mobile-type devices. Thus, although illustrated as a single type of device—a server-type device—the network based media processing server (240) may include a diverse variety of device types and is not limited to a particular type of device. The network based media processing server (240) may represent, but is not limited to, server computers, desktop computers, web-server computers, personal computers, mobile computers, laptop computers, tablet computers, or any other sort of computing device. According to an aspect of the disclosure, the network based media processing server (240) can perform certain media functions in order to alleviate processing burdens at the terminals, such as the user device (220), the user device (230) and the like.
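The pause behavior described above, in which the server stops receiving video from any conference room without active users, can be sketched as a small selection step. This is an illustrative sketch only; the helper name and the representation of room activity (a mapping from room names to active-user counts) are assumptions, not part of the disclosure.

```python
def rooms_to_pause(rooms, active_user_counts):
    """Return the conference rooms whose incoming video stream the
    server may pause receiving: those without any active users.

    `active_user_counts` maps a room name to its number of active
    users; a missing room is treated as having none.
    """
    return [room for room in rooms if active_user_counts.get(room, 0) == 0]
```

For instance, with rooms A and Z and active-user counts of 3 and 0 respectively, the sketch selects room Z for pausing, matching the behavior described for the sub system (210Z).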
For example, the user device (220) and/or the user device (230) may have limited media processing capacity or may have difficulty in encoding and rendering multiple video streams; in such cases, the network based media processing server (240) can perform the media processing, such as decoding/encoding audio and video streams and the like, to offload the media processing in the user devices (220) and (230). In some examples, the user devices (220) and (230) are battery powered devices, and when the media processing has been offloaded from the user devices (220) and (230) to the network based media processing server (240), the battery life of the user devices (220) and (230) can be increased. Media streams from different sources can be processed and mixed. In some examples, such as in international organization for standardization (ISO) 23090-2, overlay can be defined as a second media rendered over a first media. According to an aspect of the disclosure, for a video conference session of immersive video conferencing, additional media content (e.g., video and/or audio, still images, presentation slides and the like) can be overlaid on the immersive media content. The additional media (or media content) can be referred to as overlay media (or overlay media content, or overlay in short) to the immersive media (or immersive media content). For example, overlay content can be a piece of visual/audio media rendered over omnidirectional video. Referring to FIG. 2, in some examples, multiple or all sub systems (210A)-(210Z) can respectively send immersive media and overlay contents to the network based media processing server (240). Thus, there may be multiple immersive media and related overlay contents from different sources available at the network based media processing server (240).
In some embodiments, the network based media processing server (240) can inform the availability of the multiple immersive media and the related overlay contents to terminals that receive media content in a video conference session, for example, using session description protocol (SDP) messages. For example, the network based media processing server (240) can use an SDP offer message to inform availability of media contents. In some examples, SDP is used to provide a standard representation for providing session description metadata to participants of a multimedia session, such as a video conference session. An SDP message includes a session description of a session level for the multimedia session and may include a number of media descriptions at a media level for media in the multimedia session. Each media description can start with an “m=” field and is terminated by either the next “m=” field or by the end of the session description. The “m=” field can include several sub-fields, such as:
m=<media> <port> . . .
where <media> is a sub-field for the media type, such as “audio”, “video”, “text” and the like, and <port> is a sub-field for the transport port to which the media stream is sent. Further, the multimedia session and/or the media can have attributes defined using attribute fields. In some examples, an attribute field can have the following two forms:
a=<attribute>
a=<attribute>:<value>
The first form can be used when the attribute is a flag with a binary value, and the second form can be used to assign a non-binary value to the attribute. In some examples, the network based media processing server (240) receives multiple immersive media (e.g., multiple 360 videos), each from different sources, and can generate an SDP message to inform the availability of the multiple immersive media to participants, such as receiver clients.
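The “m=” field and the two attribute forms described above can be illustrated with a short Python sketch that assembles one media description. The helper name and its argument shapes are hypothetical, and transport sub-fields beyond the optional <port> are omitted for brevity.

```python
def media_description(media, attributes, port=None):
    """Assemble one SDP media description: an "m=" line followed by
    "a=" attribute lines.

    A bare string renders the flag form "a=<attribute>"; a
    (name, value) pair renders the valued form "a=<attribute>:<value>".
    """
    lines = [f"m={media}" if port is None else f"m={media} {port}"]
    for attr in attributes:
        if isinstance(attr, tuple):
            name, value = attr
            lines.append(f"a={name}:{value}")
        else:
            lines.append(f"a={attr}")
    return "\n".join(lines)
```

For example, media_description("video", ["3gpp_360video", ("content", "main")]) reproduces the default conference room description discussed in this disclosure.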
In some examples, one of the sources, for example, a conference room, is a default conference room, and other conference rooms can be alternative conference rooms. In an example, the SDP message can include multiple “m=” fields respectively for the multiple immersive media, and include attribute fields for the multiple immersive media. In an example, the SDP message can include the following “m=” field and attribute fields for a 360 video from a default conference room:
m=video
a=3gpp_360video
a=content:main
In another example, the SDP message can include the following “m=” field and attribute fields for a 360 video from an alternative conference room:
m=video
a=3gpp_360video
a=content:alt
In another example, the SDP message can include the following “m=” field and attribute field for a 360 video from other remote participants (e.g., user devices), without the ‘content’ attribute:
m=video
a=3gpp_360video
According to an aspect of the disclosure, overlays can be included in an SDP message using a group attribute. In some embodiments, an attribute that is referred to as the itt4rt_group attribute is used in SDP to provide overlay information to immersive media. The itt4rt_group attribute includes a group of media, for example, in the form of a list of media identifiers (mids) corresponding to the media. For example, the list of mids in the itt4rt_group attribute can include at least one mid associated with an immersive media (e.g., 360-degree media) and at least one mid associated with an overlay as defined by the mid attribute in the corresponding media description. According to an aspect of the disclosure, when the network based media processing server (240) receives multiple immersive media (e.g., multiple 360 videos), each from different sources, the network based media processing server (240) can generate an SDP message (also referred to as an SDP offer) to inform a remote participant, and the SDP message can include multiple groups defined using the itt4rt_group attribute.
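A minimal sketch of rendering the itt4rt_group attribute from a list of mids follows. The space-separated list layout mirrors the general SDP grouping convention (RFC 5888) and is an assumption here rather than a normative ITT4RT syntax; the function name is likewise hypothetical.

```python
def itt4rt_group_attribute(mids):
    """Render an itt4rt_group attribute listing media identifiers
    (mids), e.g. one mid for a 360-degree video plus the mids of
    its overlays, as a single space-separated attribute line."""
    return "a=itt4rt_group:" + " ".join(mids)
```

For instance, itt4rt_group_attribute(["1", "2", "3"]) yields a single attribute line grouping the media description with mid 1 (say, the immersive media) together with overlay media descriptions having mids 2 and 3.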
In some examples, each group can include one mid associated with an immersive media (e.g., 360-degree media), and one or more other mids associated with 2D media (e.g., 2D video media) as overlays. In an example, the 2D media can be provided from sources other than the source of the immersive media specified inside the group. In another example, the 2D media can be provided from the same source as the immersive media source specified inside the group. Some aspects of the disclosure provide control techniques for supporting multiple groups of immersive media (e.g., 360 videos) for Immersive Teleconferencing and Telepresence for Remote Terminals (ITT4RT). In some embodiments, the control techniques are used to inform multiple groups that include different immersive media (e.g., different 360-videos) and overlays, for example, from the network based media processing server (240) with MRF/MCU functions to a remote receiver or an ITT4RT-Rx client. According to an aspect of the disclosure, a grouping control signal can be provided from a source device of an immersive media, such as an ITT4RT-Tx client of an immersive media, the sub systems (210A)-(210Z), the user devices (220) and (230), and the like, to a network based media processing server, such as the network based media processing server (240), to indicate whether overlays from other sources can be grouped with the immersive media. In some examples, the grouping control signal can be provided as an attribute parameter in an SDP message. In some examples, an attribute parameter “a=other_overlays” is used in SDP from a transmitter client for grouping of media streams in a single or multiple itt4rt_group at a network based media processing server (e.g., MRF/MCU) that provides media service to the remote receiver clients.
In some examples, an attribute parameter "a=no_other_overlays" is used in SDP from a transmitter client for grouping of media streams in a single or multiple itt4rt_group at a network based media processing server (e.g., MRF/MCU) that provides media service to the remote receiver clients. In some examples, an attribute parameter "a=allowed_content" is used in SDP from a transmitter client for grouping of media streams in a single or multiple itt4rt_group at a network based media processing server (e.g., MRF/MCU) that provides media service to the remote receiver clients. In some embodiments, when multiple conference rooms with omnidirectional cameras are in a teleconference, such as shown in FIG. 2, the network based media processing server (240) with functions of MRF/MCU can be the central point for redistribution of the media contents to the ITT4RT-Rx clients. When an ITT4RT-Tx client signals media streams inclusive of the 360-video and overlays, the ITT4RT-Tx client includes a grouping control signal, such as "a=other_overlays", "a=no_other_overlays", or "a=allowed_content", in an SDP message to the network based media processing server (240). This grouping control signal indicates whether the network based media processing server (240) is allowed to group the media streams from that ITT4RT-Tx client with media streams from other ITT4RT-Tx clients. The network based media processing server (240) receives grouping control signals and other suitable media information from the ITT4RT-Tx clients, and forms groups, such as itt4rt_group groups, based on the grouping control signals. In an embodiment, when all ITT4RT-Tx clients allow overlays from other ITT4RT-Tx clients, the network based media processing server (240) may group all media from the ITT4RT-Tx clients into a single itt4rt_group group, and inform an ITT4RT-Rx client of the single itt4rt_group group.
In an embodiment, when one or more ITT4RT-Tx clients do not allow overlays from other ITT4RT-Tx clients, the network based media processing server (240) may form separate itt4rt_group groups respectively for media from the one or more ITT4RT-Tx clients. Further, the network based media processing server (240) may group all media of ITT4RT-Tx clients that allow overlays from other ITT4RT-Tx clients in a single itt4rt_group group. In an example, when "a=other_overlays" is in an SDP from an ITT4RT-Tx client, overlays from other ITT4RT-Tx clients are allowed; when "a=other_overlays" is not in the SDP from the ITT4RT-Tx client, overlays from other ITT4RT-Tx clients are not allowed. In another example, when "a=no_other_overlays" is in an SDP from an ITT4RT-Tx client, overlays from other ITT4RT-Tx clients are not allowed (or disallowed); when "a=no_other_overlays" is not in the SDP from the ITT4RT-Tx client, overlays from other ITT4RT-Tx clients are allowed. In some examples, for ITT4RT-Tx clients who send 2D videos (e.g., not immersive video), overlays to the 2D videos are allowed, and there is no need to send grouping control signals. In some examples, the grouping control signal is signaled when an ITT4RT-Tx client is sending immersive video (e.g., 360-video). FIG. 3 shows a video conference system (300) to illustrate some specific examples according to some aspects of the disclosure. The video conference system (300) includes a network based media processing server (340), sub systems (310A)-(310C) that are respectively installed at conference room A to conference room C, and a user device (320). The network based media processing server (340) operates similarly to the network based media processing server (240), and utilizes certain components that are identical or equivalent to those used in the network based media processing server (240). The user device (320) operates similarly to the user device (220).
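The presence/absence semantics of the two grouping control attributes described above can be sketched as a small parsing helper. This is an illustrative sketch only; the function and the "convention" parameter are hypothetical and not part of any SDP library, and both signaling conventions from the text are covered.

```python
# Hypothetical sketch of how an MRF/MCU might interpret the grouping
# control attributes: under the "allow" convention, overlays from other
# clients are allowed only when "a=other_overlays" is present; under the
# "disallow" convention, they are allowed unless "a=no_other_overlays"
# is present.

def other_overlays_allowed(sdp_text, convention="allow"):
    attrs = {line.strip() for line in sdp_text.splitlines()}
    if convention == "allow":
        return "a=other_overlays" in attrs
    # "disallow" convention: allowed unless explicitly disallowed
    return "a=no_other_overlays" not in attrs

sdp_a = "m=video\na=3gpp_360video\na=other_overlays"
sdp_b = "m=video\na=3gpp_360video\na=no_other_overlays"
print(other_overlays_allowed(sdp_a, "allow"))      # True
print(other_overlays_allowed(sdp_b, "disallow"))   # False
```

Note that the two conventions give opposite defaults when neither attribute appears, which is why the text presents them as alternative examples rather than a combined scheme.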
In the FIG. 3 example, the sub system (310A) includes an omnidirectional camera that generates a 360 video referred to as 360A (media identifier is mid-360A). The sub system (310A) also generates overlays that are referred to as OA1 (media identifier is mid-OA1) and OA2 (media identifier is mid-OA2). The sub system (310B) includes an omnidirectional camera that generates a 360 video referred to as 360B (media identifier is mid-360B). The sub system (310B) also generates overlays that are referred to as OB1 (media identifier is mid-OB1) and OB2 (media identifier is mid-OB2). The sub system (310C) includes an omnidirectional camera that generates a 360 video referred to as 360C (media identifier is mid-360C). The sub system (310C) also generates overlays that are referred to as OC1 (media identifier is mid-OC1) and OC2 (media identifier is mid-OC2). In some examples, the sub system (310A) sends the 360 video 360A and the overlays OA1 and OA2 to the network based media processing server (340); the sub system (310B) sends the 360 video 360B and the overlays OB1 and OB2 to the network based media processing server (340); the sub system (310C) sends the 360 video 360C and the overlays OC1 and OC2 to the network based media processing server (340). The sub systems (310A), (310B) and (310C) also provide grouping control signals to the network based media processing server (340). The grouping control signal from the sub system (310A) indicates whether overlays from the sub system (310B) and the sub system (310C) can be grouped with the 360 video 360A. The grouping control signal from the sub system (310B) indicates whether overlays from the sub system (310A) and the sub system (310C) can be grouped with the 360 video 360B. The grouping control signal from the sub system (310C) indicates whether overlays from the sub system (310A) and the sub system (310B) can be grouped with the 360 video 360C.
In some examples, when the grouping control signal from the sub system (310A) indicates an allowance of grouping of overlays from other sources, such as the sub system (310B) and the sub system (310C), with the 360 video 360A; the grouping control signal from the sub system (310B) indicates an allowance of grouping of overlays from other sources, such as the sub system (310A) and the sub system (310C), with the 360 video 360B; and the grouping control signal from the sub system (310C) indicates an allowance of grouping of overlays from other sources, such as the sub system (310A) and the sub system (310B), with the 360 video 360C, then the network based media processing server (340) can mix 360 videos and overlays from different sub systems. In an example, the network based media processing server (340) can form a single itt4rt_group, and include an attribute field in an SDP message (also referred to as an SDP offer) to an ITT4RT-Rx client (e.g., the user device (320)). For example, the attribute field is shown as follows:

a=itt4rt_group:<mid-360A><mid-OA1><mid-OA2>;<mid-360B><mid-OB1><mid-OB2>;<mid-360C><mid-OC1><mid-OC2>

In an example, the network based media processing server (340) can form one or more itt4rt_group groups that mix 360 videos and overlays from different sub systems. In an example, the network based media processing server (340) includes attribute fields corresponding to the one or more itt4rt_group groups in an SDP message (also referred to as an SDP offer) to an ITT4RT-Rx client (e.g., the user device (320)). FIGS. 4A-4E show some examples of attribute field(s) corresponding to the one or more itt4rt_group groups. In some examples, one or more sub systems do not allow grouping of overlays from other sources. For example, a presenter in a conference room with a sub system does not want overlays from other sources to disturb his/her presentation, and thus can configure the sub system to disallow overlays from other sources.
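The single-group attribute field above can be assembled from per-source mid lists. The following is a minimal sketch under the assumption that each source contributes one ordered list of mids and that sources are ";"-separated within the attribute value, mirroring the example in the text; the helper name is illustrative.

```python
# Hypothetical sketch: building the server-side itt4rt_group attribute
# field from per-source mid lists. Mids within a source are concatenated
# in angle brackets; sources are separated by ';', as in the example above.

def itt4rt_group_attr(groups):
    """groups: list of mid lists, one inner list per source."""
    parts = ["".join(f"<{mid}>" for mid in mids) for mids in groups]
    return "a=itt4rt_group:" + ";".join(parts)

attr = itt4rt_group_attr([
    ["mid-360A", "mid-OA1", "mid-OA2"],
    ["mid-360B", "mid-OB1", "mid-OB2"],
    ["mid-360C", "mid-OC1", "mid-OC2"],
])
print(attr)
```

The same helper can emit several attribute fields, one per itt4rt_group group, when the server offers multiple groups.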
In some examples, when the grouping control signal from the sub system (310A) indicates a disallowance of grouping of overlays from other sources, such as the sub system (310B) and the sub system (310C), with the 360 video 360A; the grouping control signal from the sub system (310B) indicates an allowance of grouping of overlays from other sources, such as the sub system (310A) and the sub system (310C), with the 360 video 360B; and the grouping control signal from the sub system (310C) indicates an allowance of grouping of overlays from other sources, such as the sub system (310A) and the sub system (310B), with the 360 video 360C, then the network based media processing server (340) can mix 360 videos and overlays from the sub systems that allow mixing. In an example, the network based media processing server (340) can form a separate itt4rt_group group to include the 360 video and overlays from the sub system (310A), and another itt4rt_group group to include the other 360 videos and overlays that can be mixed. For example, the network based media processing server (340) includes attribute fields corresponding to the itt4rt_group groups in an SDP message (also referred to as an SDP offer) to an ITT4RT-Rx client (e.g., the user device (320)). For example, the attribute fields are shown as follows:

a=itt4rt_group:<mid-360A><mid-OA1><mid-OA2>
a=itt4rt_group:<mid-360B><mid-OB1><mid-OB2>;<mid-360C><mid-OC1><mid-OC2>

In another example, the network based media processing server (340) can form a separate itt4rt_group group to include the 360 video and overlays from the sub system (310A), and one or more itt4rt_group groups to include the other 360 videos and overlays that can be mixed. For example, the network based media processing server (340) includes attribute fields corresponding to the itt4rt_group groups in an SDP message (also referred to as an SDP offer) to an ITT4RT-Rx client (e.g., the user device (320)).
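The grouping decision described above (isolating sources that disallow foreign overlays, merging the rest) can be sketched as a small partitioning function. The data shapes are assumed for illustration: each source is a pair of its mid list and a boolean allow flag.

```python
# Hedged sketch of the server's grouping decision: sources that disallow
# overlays from others each get their own itt4rt_group, while all sources
# that allow mixing are merged into a single shared group.

def form_groups(sources):
    """sources: list of (mids, allows_other_overlays) tuples."""
    groups, mixed = [], []
    for mids, allows in sources:
        if allows:
            mixed.extend(mids)          # mergeable with other allowing sources
        else:
            groups.append(list(mids))   # isolated group for this source
    if mixed:
        groups.append(mixed)
    return groups

groups = form_groups([
    (["mid-360A", "mid-OA1", "mid-OA2"], False),  # room A disallows
    (["mid-360B", "mid-OB1", "mid-OB2"], True),
    (["mid-360C", "mid-OC1", "mid-OC2"], True),
])
print(groups)
```

With room A disallowing, this yields one group holding A's media and a second group merging B's and C's media, matching the two-attribute-field example in the text.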
FIGS. 5A-5D show some examples of attribute field(s) corresponding to the one or more itt4rt_group groups. According to an aspect of the disclosure, when an ITT4RT-Rx client, such as the user device (320), receives the SDP offer, the user device (320) can send an SDP message (also referred to as an SDP answer) to the network based media processing server (340), where the SDP answer indicates one itt4rt_group group selected from the itt4rt_group groups in the SDP offer. According to an aspect of the disclosure, the grouping control signal can be content based. In some embodiments, an attribute field "a=allowed_content" can be included in an SDP message from an ITT4RT-Tx client (e.g., one of the sub systems (310A), (310B) and (310C) in FIG. 3) to a network based media processing server (e.g., the network based media processing server (340)). The attribute field is assigned a value that indicates the content that is allowed to be mixed. In an example, the value indicates 2D video. In another example, the value indicates image. In another example, the value indicates slides. For example, the sub system (310A) sends the 360 video 360A and the overlays OA1 (2D video) and OA2 (image) to the network based media processing server (340); the sub system (310B) sends the 360 video 360B and the overlays OB1 (2D video) and OB2 (slides) to the network based media processing server (340); the sub system (310C) sends the 360 video 360C and the overlays OC1 (image) and OC2 (slides) to the network based media processing server (340). The sub systems (310A), (310B) and (310C) also provide grouping control signals to the network based media processing server (340). The grouping control signal from the sub system (310A) indicates whether overlays from the sub system (310B) and the sub system (310C) can be grouped with the 360 video 360A, and the overlay content that may be allowed for grouping.
The grouping control signal from the sub system (310B) indicates whether overlays from the sub system (310A) and the sub system (310C) can be grouped with the 360 video 360B, and the overlay content that may be allowed for grouping. The grouping control signal from the sub system (310C) indicates whether overlays from the sub system (310A) and the sub system (310B) can be grouped with the 360 video 360C, and the overlay content that may be allowed for grouping. In some examples, the grouping control signals from the sub system (310A) indicate an allowance of grouping of overlays from other sources, such as the sub system (310B) and the sub system (310C), with the 360 video 360A, and the allowed overlay content is video, for example using the following attribute fields in an SDP message from the sub system (310A) to the network based media processing server (340):

a=other_overlays
a=allowed_content:video

The grouping control signals from the sub system (310B) indicate an allowance of grouping of overlays from other sources, such as the sub system (310A) and the sub system (310C), with the 360 video 360B, and the allowed overlay content is image, for example using the following attribute fields in an SDP message from the sub system (310B) to the network based media processing server (340):

a=other_overlays
a=allowed_content:image

The grouping control signals from the sub system (310C) indicate an allowance of grouping of overlays from other sources, such as the sub system (310A) and the sub system (310B), with the 360 video 360C, and the allowed overlay content is slides, for example using the following attribute fields in an SDP message from the sub system (310C) to the network based media processing server (340):

a=other_overlays
a=allowed_content:slides

Then, the network based media processing server (340) can mix 360 videos and overlays from different sub systems according to the allowed overlay content to form one or more itt4rt_group groups.
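The content-based mixing described above can be sketched as a filter over foreign overlays: each source declares the overlay media type it allows ("video", "image", or "slides"), and only foreign overlays of that type join the source's group. The dict-based data model and function name below are assumed for illustration only.

```python
# Illustrative sketch of content-based grouping per "a=allowed_content":
# a source's group starts with its own 360 video and overlays, then adds
# overlays from other sources whose content type matches the allowed type.

def group_for(source, others):
    """Group a source's own media with allowed foreign overlays."""
    group = [source["video_mid"]] + [m for m, _ in source["overlays"]]
    for other in others:
        for mid, content in other["overlays"]:
            if content == source["allowed_content"]:
                group.append(mid)   # foreign overlay of an allowed type
    return group

room_a = {"video_mid": "mid-360A",
          "overlays": [("mid-OA1", "video"), ("mid-OA2", "image")],
          "allowed_content": "video"}
room_b = {"video_mid": "mid-360B",
          "overlays": [("mid-OB1", "video"), ("mid-OB2", "slides")],
          "allowed_content": "image"}
print(group_for(room_a, [room_b]))   # room A admits B's video overlay OB1
```

Room A (allowing video) picks up B's video overlay but not B's slides; room B (allowing image) would symmetrically pick up only A's image overlay.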
For example, the network based media processing server (340) includes attribute fields corresponding to the one or more itt4rt_group groups in an SDP message (also referred to as an SDP offer) to an ITT4RT-Rx client (e.g., the user device (320)). FIGS. 6A-6E show some examples of attribute field(s) corresponding to the one or more itt4rt_group groups. According to an aspect of the disclosure, when an ITT4RT-Rx client, such as the user device (320), receives the SDP offer, the user device (320) can send an SDP message (also referred to as an SDP answer) to the network based media processing server (340), where the SDP answer indicates one itt4rt_group group selected from the one or more itt4rt_group groups in the SDP offer. FIG. 7 shows a flow chart outlining a process (700) according to an embodiment of the disclosure. In some embodiments, the process (700) can be executed by processing circuitry in a first client device that provides immersive media content in a video conference, such as the processing circuitry in the sub systems (110), (210A)-(210Z), (310A)-(310C), and the like. In some embodiments, the process (700) is implemented in software instructions, such that when the processing circuitry executes the software instructions, the processing circuitry performs the process (700). The process starts at (S701) and proceeds to (S710). At (S710), the first client device determines a grouping control that limits a grouping of an overlay media from a second client device with the immersive media of the first client device. In an example, the first client device determines the grouping control based on a user input. In another example, the first client device determines the grouping control based on a configuration at the first client device. The immersive media can be any suitable immersive media, such as an omnidirectional video, a 360 degree video, a wide angle video, and the like.
At (S720), the first client device transmits a grouping control signal indicative of the grouping control to inform a media control device. In some examples, the first client device transmits a session description protocol (SDP) message with an attribute field indicative of the grouping control. In an example, the first client device transmits the SDP message with the attribute field indicative of a disallowance of a grouping of the overlay media from the second client device with the immersive media of the first client device. For example, the attribute field is in the form of "a=no_other_overlays". In another example, the first client device transmits the SDP message with the attribute field indicative of an allowance of a grouping of the overlay media from the second client device with the immersive media of the first client device. For example, the attribute field is in the form of "a=other_overlays". In another example, the first client device transmits the SDP message with the attribute field indicative of a media type that is allowed to be grouped with the immersive media of the first client device. For example, the attribute field is in the form of "a=allowed_content". The media type can be 2D video, image, or slides. In some examples, the first client device can transmit the SDP message with a first attribute field indicative of an allowance of a grouping of the overlay media from the second client device with the immersive media of the first client device, and a second attribute field indicative of an allowed media type. At (S730), the first client device provides one or more media including the immersive media to the media control device. The one or more media can also include overlay media from the first client device. Then, the process proceeds to (S799) and terminates. The process (700) can be suitably adapted. Step(s) in the process (700) can be modified and/or omitted. Additional step(s) can be added. Any suitable order of implementation can be used.
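The client-side steps (S710)-(S720) can be sketched as a small SDP builder: choose a grouping control, then encode it as the attribute fields named in the text. This is a minimal sketch; the function name and parameters are hypothetical, and only the attribute forms from the text ("a=other_overlays", "a=no_other_overlays", "a=allowed_content") are assumed.

```python
# Hypothetical sketch of an ITT4RT-Tx client encoding its grouping control
# into SDP attribute fields: an allow/disallow attribute, optionally
# followed by an allowed_content restriction when mixing is allowed.

def build_tx_sdp(allow_other_overlays, allowed_content=None):
    lines = ["m=video", "a=3gpp_360video"]
    lines.append("a=other_overlays" if allow_other_overlays
                 else "a=no_other_overlays")
    if allow_other_overlays and allowed_content:
        # optionally restrict which media types may be mixed in
        lines.append(f"a=allowed_content:{allowed_content}")
    return "\r\n".join(lines)

print(build_tx_sdp(True, "slides"))   # allow mixing, slides only
print(build_tx_sdp(False))            # disallow foreign overlays
```

The built message is then transmitted to the media control device at (S720), after which the client supplies its media at (S730).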
FIG. 8 shows a flow chart outlining a process (800) according to an embodiment of the disclosure. In various embodiments, the process (800) can be executed by processing circuitry in a media control device of network based media processing for a video conference, such as the network based media processing server (240), the network based media processing server (340), and the like. In some embodiments, the process (800) is implemented in software instructions, such that when the processing circuitry executes the software instructions, the processing circuitry performs the process (800). The process starts at (S801) and proceeds to (S810). At (S810), the media control device receives a grouping control signal from a first client device that provides an immersive media to the media control device, the grouping control signal being indicative of a grouping control that limits a grouping of an overlay media from a second client device with the immersive media of the first client device. The immersive media can be any suitable immersive media, such as an omnidirectional video, a 360 degree video, a wide angle video, and the like. In some examples, the media control device receives a session description protocol (SDP) message with an attribute field indicative of the grouping control. In an example, the media control device receives the SDP message with the attribute field indicative of a disallowance of a grouping of the overlay media from the second client device with the immersive media of the first client device. For example, the attribute field is in the form of "a=no_other_overlays". In another example, the media control device receives the SDP message with the attribute field indicative of an allowance of a grouping of the overlay media from the second client device with the immersive media of the first client device. For example, the attribute field is in the form of "a=other_overlays".
In another example, the media control device receives the SDP message with the attribute field indicative of a media type that is allowed to be grouped with the immersive media of the first client device. For example, the attribute field is in the form of "a=allowed_content". The media type can be 2D video, image, or slides. In some examples, the media control device receives the SDP message with a first attribute field indicative of an allowance of a grouping of the overlay media from the second client device with the immersive media of the first client device, and a second attribute field indicative of an allowed media type. At (S820), the media control device determines a group including the immersive media from the first client device with/without the overlay media from the second client device based on the grouping control signal. In an example, when the grouping control signal indicates a disallowance of a grouping of the overlay media from the second client device with the immersive media of the first client device, the media control device forms a group that includes the immersive media from the first client device; the group may include overlays from the first client device, but the group does not include overlays from other client devices. In an example, when the grouping control signal indicates an allowance of a grouping of the overlay media from the second client device with the immersive media of the first client device, the media control device can form a group that includes the immersive media from the first client device and the overlay media from the second client device. In an example, the grouping control signal indicates an allowed media type to be grouped with the immersive media. Then, when the overlay media from the second client device is of the allowed media type, the media control device can form a group including the immersive media of the first client device and the overlay media from the second client device.
When the overlay media from the second client device is not of the allowed media type, the overlay media from the second client device cannot be grouped with the immersive media from the first client device. At (S830), the media control device offers the group to one or more participants of the video conference. In an example, the media control device can send an SDP offer with attribute fields that define one or more groups, such as the itt4rt_group groups, and the like, to one or more participants (e.g., the user device (320), the user device (220), the user device (230), some of the sub systems (210A)-(210Z), some of the sub systems (310A)-(310C), and the like). Then, the process proceeds to (S899) and terminates. The process (800) can be suitably adapted. Step(s) in the process (800) can be modified and/or omitted. Additional step(s) can be added. Any suitable order of implementation can be used. The techniques described above can be implemented as computer software using computer-readable instructions and physically stored in one or more computer-readable media. For example, FIG. 9 shows a computer system (900) suitable for implementing certain embodiments of the disclosed subject matter. The computer software can be coded using any suitable machine code or computer language that may be subject to assembly, compilation, linking, or like mechanisms to create code comprising instructions that can be executed directly, or through interpretation, micro-code execution, and the like, by one or more computer central processing units (CPUs), Graphics Processing Units (GPUs), and the like. The instructions can be executed on various types of computers or components thereof, including, for example, personal computers, tablet computers, servers, smartphones, gaming devices, internet of things devices, and the like.
The components shown in FIG. 9 for computer system (900) are exemplary in nature and are not intended to suggest any limitation as to the scope of use or functionality of the computer software implementing embodiments of the present disclosure. Neither should the configuration of components be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary embodiment of a computer system (900). Computer system (900) may include certain human interface input devices. Such a human interface input device may be responsive to input by one or more human users through, for example, tactile input (such as: keystrokes, swipes, data glove movements), audio input (such as: voice, clapping), visual input (such as: gestures), or olfactory input (not depicted). The human interface devices can also be used to capture certain media not necessarily directly related to conscious input by a human, such as audio (such as: speech, music, ambient sound), images (such as: scanned images, photographic images obtained from a still image camera), and video (such as two-dimensional video, three-dimensional video including stereoscopic video). Input human interface devices may include one or more of (only one of each depicted): keyboard (901), mouse (902), trackpad (903), touch screen (910), data-glove (not shown), joystick (905), microphone (906), scanner (907), and camera (908). Computer system (900) may also include certain human interface output devices. Such human interface output devices may stimulate the senses of one or more human users through, for example, tactile output, sound, light, and smell/taste.
Such human interface output devices may include tactile output devices (for example tactile feedback by the touch-screen (910), data-glove (not shown), or joystick (905), but there can also be tactile feedback devices that do not serve as input devices), audio output devices (such as: speakers (909), headphones (not depicted)), visual output devices (such as screens (910) to include CRT screens, LCD screens, plasma screens, and OLED screens, each with or without touch-screen input capability, each with or without tactile feedback capability, some of which may be capable of outputting two-dimensional visual output or more than three-dimensional output through means such as stereographic output; virtual-reality glasses (not depicted), holographic displays and smoke tanks (not depicted)), and printers (not depicted). Computer system (900) can also include human accessible storage devices and their associated media, such as optical media including CD/DVD ROM/RW (920) with CD/DVD or the like media (921), thumb-drive (922), removable hard drive or solid state drive (923), legacy magnetic media such as tape and floppy disc (not depicted), specialized ROM/ASIC/PLD based devices such as security dongles (not depicted), and the like. Those skilled in the art should also understand that the term "computer readable media" as used in connection with the presently disclosed subject matter does not encompass transmission media, carrier waves, or other transitory signals. Computer system (900) can also include an interface (954) to one or more communication networks (955). Networks can for example be wireless, wireline, or optical. Networks can further be local, wide-area, metropolitan, vehicular and industrial, real-time, delay-tolerant, and so on.
Examples of networks include local area networks such as Ethernet and wireless LANs, cellular networks to include GSM, 3G, 4G, 5G, LTE and the like, TV wireline or wireless wide area digital networks to include cable TV, satellite TV, and terrestrial broadcast TV, and vehicular and industrial networks to include CANBus, and so forth. Certain networks commonly require external network interface adapters that attach to certain general purpose data ports or peripheral buses (949) (such as, for example, USB ports of the computer system (900)); others are commonly integrated into the core of the computer system (900) by attachment to a system bus as described below (for example, an Ethernet interface into a PC computer system or a cellular network interface into a smartphone computer system). Using any of these networks, computer system (900) can communicate with other entities. Such communication can be uni-directional, receive only (for example, broadcast TV), uni-directional send-only (for example CANbus to certain CANbus devices), or bi-directional, for example to other computer systems using local or wide area digital networks. Certain protocols and protocol stacks can be used on each of those networks and network interfaces as described above. The aforementioned human interface devices, human-accessible storage devices, and network interfaces can be attached to a core (940) of the computer system (900). The core (940) can include one or more Central Processing Units (CPU) (941), Graphics Processing Units (GPU) (942), specialized programmable processing units in the form of Field Programmable Gate Arrays (FPGA) (943), hardware accelerators for certain tasks (944), graphics adapters (950), and so forth. These devices, along with Read-only memory (ROM) (945), Random-access memory (RAM) (946), and internal mass storage such as internal non-user accessible hard drives, SSDs, and the like (947), may be connected through a system bus (948).
In some computer systems, the system bus (948) can be accessible in the form of one or more physical plugs to enable extensions by additional CPUs, GPUs, and the like. The peripheral devices can be attached either directly to the core's system bus (948), or through a peripheral bus (949). In an example, the screen (910) can be connected to the graphics adapter (950). Architectures for a peripheral bus include PCI, USB, and the like. CPUs (941), GPUs (942), FPGAs (943), and accelerators (944) can execute certain instructions that, in combination, can make up the aforementioned computer code. That computer code can be stored in ROM (945) or RAM (946). Transitional data can also be stored in RAM (946), whereas permanent data can be stored, for example, in the internal mass storage (947). Fast storage and retrieval from any of the memory devices can be enabled through the use of cache memory that can be closely associated with one or more CPU (941), GPU (942), mass storage (947), ROM (945), RAM (946), and the like. The computer readable media can have computer code thereon for performing various computer-implemented operations. The media and computer code can be those specially designed and constructed for the purposes of the present disclosure, or they can be of the kind well known and available to those having skill in the computer software arts. As an example and not by way of limitation, the computer system having architecture (900), and specifically the core (940), can provide functionality as a result of processor(s) (including CPUs, GPUs, FPGAs, accelerators, and the like) executing software embodied in one or more tangible, computer-readable media. Such computer-readable media can be media associated with user-accessible mass storage as introduced above, as well as certain storage of the core (940) that is of a non-transitory nature, such as core-internal mass storage (947) or ROM (945).
The software implementing various embodiments of the present disclosure can be stored in such devices and executed by core (940). A computer-readable medium can include one or more memory devices or chips, according to particular needs. The software can cause the core (940) and specifically the processors therein (including CPU, GPU, FPGA, and the like) to execute particular processes or particular parts of particular processes described herein, including defining data structures stored in RAM (946) and modifying such data structures according to the processes defined by the software. In addition or as an alternative, the computer system can provide functionality as a result of logic hardwired or otherwise embodied in a circuit (for example: accelerator (944)), which can operate in place of or together with software to execute particular processes or particular parts of particular processes described herein. Reference to software can encompass logic, and vice versa, where appropriate. Reference to a computer-readable medium can encompass a circuit (such as an integrated circuit (IC)) storing software for execution, a circuit embodying logic for execution, or both, where appropriate. The present disclosure encompasses any suitable combination of hardware and software. While this disclosure has described several exemplary embodiments, there are alterations, permutations, and various substitute equivalents, which fall within the scope of the disclosure. It will thus be appreciated that those skilled in the art will be able to devise numerous systems and methods which, although not explicitly shown or described herein, embody the principles of the disclosure and are thus within the spirit and scope thereof.
DETAILED DESCRIPTION OF THE DRAWINGS During a real-time communication, such as over a video conference between two or more people, one of the participants may at some point be considered a speaker based on him or her giving a presentation (e.g., leading a discussion, lecturing, or monologuing). The other participants may be considered audience participants. The ability of the speaker to maintain audience engagement is important, as a disengaged audience is less likely to pay attention to the speaker or otherwise care about what the speaker is saying. The speaker, however, is focused on his or her conversation points or presentation, and may in some cases not be able to discern the audience reaction to the presentation and a level of engagement therewith. For example, during a video conference, some audience participants may not be visible to the speaker in a gallery view, such as based on a maximum number of user tiles representing the participants that may be presented in a graphical user interface of the video conferencing software at a given time. The speaker is thus unable to perceive reactions from participants whose user tiles are not visible to him or her without cumbersomely scrolling through multiple user interfaces. Even where all participants are visible to the speaker, attempting to gauge audience interest may detract from the presentation such as by distracting the speaker, which may lead to greater levels of audience disengagement. Disengagement can occur for a number of reasons. For example, the presentation ability, style, or behavior (referred to herein as presentation behavior) of the speaker may be monotonic, dull, and/or uninspiring. In another example, the specific topic being discussed may not be of interest to one or more audience members. In yet another example, the audience may be confused by something the speaker said or the way in which the speaker said it, which in turn can lead to audience disengagement.
The disengagement can occur during one or more portions of the real-time communication. A speaker may benefit greatly from understanding when audience disengagement occurs, especially where it is difficult or impossible for the speaker to perceive the disengagement on his or her own (e.g., where user tiles for some video conference participants are not visible without scrolling through multiple user interfaces), so that the speaker can adjust some aspect of his or her presentation. However, while solutions exist for evaluating audience sentiment based on audible reactions from the audience, conventional conferencing software services do not have mechanisms for evaluating audience sentiment based on video data obtained from devices used by the audience to connect to a video conference. Conventional approaches therefore do not contemplate the video modality, which is often more complicated than audio alone for evaluating audience sentiment given the variance in visible behaviors and gestures across people. As such, there is currently no solution for producing real-time recommendations for a speaker to alter his or her presentation behavior, pause for a question, change topics, or maintain a topic discussion based on an audience engagement level determined over video. Implementations of this disclosure address problems such as these using audience engagement services which provide real-time evaluation of audience sentiment using video data obtained from devices used by the audience to connect to a video conference. During a video conference, for example, sentiment types of audience participants reacting to a speaker participant are determined based on reaction detection from the video data of the audience participants. The reactions of the audience participants can be determined using facial recognition and movement detection on the video data, audio analysis from an audio stream, and keyword detection from a real-time transcription of the conference. 
This can include, for example, analysis of audience participants not visible without scrolling through multiple user interfaces. An engagement level of the audience participants is determined from the sentiment types. For example, the sentiment types can be aggregated to determine a consensus engagement level of the audience participants. A recommendation, a real-time recommendation, or real-time suggestive output (referred to herein as a real-time recommendation) is presented to the speaker participant based on the engagement level. For example, a real-time recommendation is one that is near in time to a reaction detection. The real-time recommendation output can include an engagement level indicator, a suggestive action, and/or combinations thereof. For example, the engagement level indicator can use or show a color-based format for different engagement levels, a numeric format, and/or a text-based format. The suggestive action, for example, can be to maintain a topic, maintain a speaker presentation behavior, change to a suggested topic, and/or change the speaker presentation behavior. The suggestive action is based on an analysis of the video data and the conference with respect to the speaker participant. To describe some implementations in greater detail, reference is first made to examples of hardware and software structures used to implement a system for real-time video-based audience reaction sentiment analysis.FIG.1is a block diagram of an example of an electronic computing and communications system100, which can be or include a distributed computing system (e.g., a client-server computing system), a cloud computing system, a clustered computing system, or the like.
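By way of a non-limiting illustration, the aggregation described above, in which per-participant sentiment types are combined into a consensus engagement level, might be sketched as follows. The sentiment labels, score weights, and bucket thresholds here are assumptions for illustration only and are not values specified by this disclosure.

```python
# Illustrative mapping from a detected sentiment type to an engagement
# score. The labels and weights are hypothetical.
SENTIMENT_SCORES = {
    "attentive": 1.0,
    "amused": 0.8,
    "neutral": 0.5,
    "confused": 0.3,
    "distracted": 0.0,
}

def consensus_engagement(sentiments):
    """Aggregate the audience's sentiment types into one engagement level."""
    if not sentiments:
        return "unknown"
    # Average the per-participant scores.
    avg = sum(SENTIMENT_SCORES[s] for s in sentiments) / len(sentiments)
    # Bucket the average into a coarse consensus level (thresholds assumed).
    if avg >= 0.7:
        return "high"
    if avg >= 0.4:
        return "medium"
    return "low"
```

A real system would feed this with labels produced per participant by the reaction detection; here the aggregation is a simple mean, though a weighted or majority-vote scheme would fit the same interface.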
The system100includes one or more customers, such as customers102A through102B, which may each be a public entity, private entity, or another corporate entity or individual that purchases or otherwise uses software services, such as of a unified communications as a service (UCaaS) platform or other software platform. Enterprise entities rely upon several modes of communication to support their operations, including telephone, email, internal messaging, and the like. These separate modes of communication have historically been implemented by service providers whose services are not integrated with one another. The disconnect between these services, in at least some cases, requires information to be manually passed by users from one service to the next. Furthermore, some services, such as telephony services, are traditionally delivered via on-premises systems, meaning that remote workers and those who are generally increasingly mobile may be unable to rely upon them. One type of system which addresses problems such as these includes a UCaaS platform, which includes several communications services integrated over a network, such as the Internet, to deliver a complete communication experience regardless of physical location. Each customer can include one or more clients. For example, as shown and without limitation, the customer102A can include clients104A through104B, and the customer102B can include clients104C through104D. A customer can include a customer network or domain. For example, and without limitation, the clients104A through104B can be associated or communicate with a customer network or domain for the customer102A and the clients104C through104D can be associated or communicate with a customer network or domain for the customer102B. A client, such as one of the clients104A through104D, may be or otherwise refer to one or both of a client device or a client application. 
Where a client is or refers to a client device, the client can comprise a computing system, which can include one or more computing devices, such as a mobile phone, a tablet computer, a laptop computer, a notebook computer, a desktop computer, or another suitable computing device or combination of computing devices. Where a client instead is or refers to a client application, the client can be an instance of software running on a customer device (e.g., a client device or another device). In some implementations, a client can be implemented as a single physical unit or as a combination of physical units. In some implementations, a single physical unit can include multiple clients. The system100can include a number of customers and/or clients or can have a configuration of customers or clients different from that generally illustrated inFIG.1. For example, and without limitation, the system100can include hundreds or thousands of customers, and at least some of the customers can include or be associated with a number of clients. The system100includes a datacenter106, which may include one or more servers. The datacenter106can represent a geographic location, which can include a facility, where the one or more servers are located. The system100can include a number of datacenters and servers or can include a configuration of datacenters and servers different from that generally illustrated inFIG.1. For example, and without limitation, the system100can include tens of datacenters, and at least some of the datacenters can include hundreds or another suitable number of servers. In some implementations, the datacenter106can be associated or communicate with one or more datacenter networks or domains, which can include domains other than the customer domains for the customers102A through102B. The datacenter106includes servers used for implementing software services of a UCaaS platform. 
The datacenter106as generally illustrated includes an application server108, a database server110, and a telephony server112. The servers108through112can each be a computing system, which can include one or more computing devices, such as a desktop computer, a server computer, or another computer capable of operating as a server, or a combination thereof. A suitable number of each of the servers108through112can be implemented at the datacenter106. The UCaaS platform uses a multi-tenant architecture in which installations or instantiations of the servers108through112are shared amongst the customers102A through102B. In some implementations, one or more of the servers108through112can be a non-hardware server implemented on a physical device, such as a hardware server. In some implementations, a combination of two or more of the application server108, the database server110, and the telephony server112can be implemented as a single hardware server or as a single non-hardware server implemented on a single hardware server. In some implementations, the datacenter106can include servers other than or in addition to the servers108through112, for example, a media server, a proxy server, or a web server. The application server108runs web-based software services deliverable to a client, such as one of the clients104A through104D. As described above, the software services may be of a UCaaS platform. For example, the application server108can implement all or a portion of a UCaaS platform, including conferencing software, messaging software, and/or other intra-party or inter-party communications software. The application server108may, for example, be or include a unitary Java Virtual Machine (JVM). In some implementations, the application server108can include an application node, which can be a process executed on the application server108.
For example, and without limitation, the application node can be executed in order to deliver software services to a client, such as one of the clients104A through104D, as part of a software application. The application node can be implemented using processing threads, virtual machine instantiations, or other computing features of the application server108. In some such implementations, the application server108can include a suitable number of application nodes, depending upon a system load or other characteristics associated with the application server108. For example, and without limitation, the application server108can include two or more nodes forming a node cluster. In some such implementations, the application nodes implemented on a single application server108can run on different hardware servers. The database server110stores, manages, or otherwise provides data for delivering software services of the application server108to a client, such as one of the clients104A through104D. In particular, the database server110may implement one or more databases, tables, or other information sources suitable for use with a software application implemented using the application server108. The database server110may include a data storage unit accessible by software executed on the application server108. A database implemented by the database server110may be a relational database management system (RDBMS), an object database, an XML database, a configuration management database (CMDB), a management information base (MIB), one or more flat files, other suitable non-transient storage mechanisms, or a combination thereof. The system100can include one or more database servers, in which each database server can include one, two, three, or another suitable number of databases configured as or comprising a suitable database type or combination thereof. 
In some implementations, one or more databases, tables, other suitable information sources, or portions or combinations thereof may be stored, managed, or otherwise provided by one or more of the elements of the system100other than the database server110, for example, the client104or the application server108. The telephony server112enables network-based telephony and web communications from and to clients of a customer, such as the clients104A through104B for the customer102A or the clients104C through104D for the customer102B. Some or all of the clients104A through104D may be voice over Internet protocol (VOIP)-enabled devices configured to send and receive calls over a network114. In particular, the telephony server112includes a session initiation protocol (SIP) zone and a web zone. The SIP zone enables a client of a customer, such as the customer102A or102B, to send and receive calls over the network114using SIP requests and responses. The web zone integrates telephony data with the application server108to enable telephony-based traffic access to software services run by the application server108. Given the combined functionality of the SIP zone and the web zone, the telephony server112may be or include a cloud-based private branch exchange (PBX) system. The SIP zone receives telephony traffic from a client of a customer and directs same to a destination device. The SIP zone may include one or more call switches for routing the telephony traffic. For example, to route a VOIP call from a first VOIP-enabled client of a customer to a second VOIP-enabled client of the same customer, the telephony server112may initiate a SIP transaction between a first client and the second client using a PBX for the customer. 
However, in another example, to route a VOIP call from a VOIP-enabled client of a customer to a client or non-client device (e.g., a desktop phone which is not configured for VOIP communication) which is not VOIP-enabled, the telephony server112may initiate a SIP transaction via a VOIP gateway that transmits the SIP signal to a public switched telephone network (PSTN) system for outbound communication to the non-VOIP-enabled client or non-client phone. Hence, the telephony server112may include a PSTN system and may in some cases access an external PSTN system. The telephony server112includes one or more session border controllers (SBCs) for interfacing the SIP zone with one or more aspects external to the telephony server112. In particular, an SBC can act as an intermediary to transmit and receive SIP requests and responses between clients or non-client devices of a given customer with clients or non-client devices external to that customer. When incoming telephony traffic for delivery to a client of a customer, such as one of the clients104A through104D, originating from outside the telephony server112is received, an SBC receives the traffic and forwards it to a call switch for routing to the client. In some implementations, the telephony server112, via the SIP zone, may enable one or more forms of peering to a carrier or customer premise. For example, Internet peering to a customer premise may be enabled to ease the migration of the customer from a legacy provider to a service provider operating the telephony server112. In another example, private peering to a customer premise may be enabled to leverage a private connection terminating at one end at the telephony server112and at the other end at a computing aspect of the customer environment. In yet another example, carrier peering may be enabled to leverage a connection of a peered carrier to the telephony server112.
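As an illustrative, non-limiting sketch of the routing decision described above, the choice between a SIP transaction via the customer's PBX and hand-off through a VOIP gateway to the PSTN could be expressed as follows. The function name, the `voip_enabled` field, and the path labels are hypothetical and are used only to make the branch explicit.

```python
def route_call(callee):
    """Return the (hypothetical) routing path chosen for a call to `callee`.

    A VOIP-enabled destination is reached through a SIP transaction via
    the customer's PBX; a non-VOIP destination is handed to a VOIP gateway
    that transmits the SIP signal to the PSTN for outbound delivery.
    """
    if callee.get("voip_enabled"):
        # Both endpoints speak SIP: route within the SIP zone via the PBX.
        return ["sip_zone", "pbx", callee["id"]]
    # Non-VOIP destination (e.g., a legacy desktop phone): go out via
    # the VOIP gateway to the PSTN.
    return ["sip_zone", "voip_gateway", "pstn", callee["id"]]
```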
In some such implementations, an SBC or telephony gateway within the customer environment may operate as an intermediary between the SBC of the telephony server112and a PSTN for a peered carrier. When an external SBC is first registered with the telephony server112, a call from a client can be routed through the SBC to a load balancer of the SIP zone, which directs the traffic to a call switch of the telephony server112. Thereafter, the SBC may be configured to communicate directly with the call switch. The web zone receives telephony traffic from a client of a customer, via the SIP zone, and directs same to the application server108via one or more Domain Name System (DNS) resolutions. For example, a first DNS within the web zone may process a request received via the SIP zone and then deliver the processed request to a web service which connects to a second DNS at or otherwise associated with the application server108. Once the second DNS resolves the request, it is delivered to the destination service at the application server108. The web zone may also include a database for authenticating access to a software application for telephony traffic processed within the SIP zone, for example, a softphone. The clients104A through104D communicate with the servers108through112of the datacenter106via the network114. The network114can be or include, for example, the Internet, a local area network (LAN), a wide area network (WAN), a virtual private network (VPN), or another public or private means of electronic computer communication capable of transferring data between a client and one or more servers. In some implementations, a client can connect to the network114via a communal connection point, link, or path, or using a distinct connection point, link, or path. For example, a connection point, link, or path can be wired, wireless, use other communications technologies, or a combination thereof.
The network114, the datacenter106, or another element, or combination of elements, of the system100can include network hardware such as routers, switches, other network devices, or combinations thereof. For example, the datacenter106can include a load balancer116for routing traffic from the network114to various servers associated with the datacenter106. The load balancer116can route, or direct, computing communications traffic, such as signals or messages, to respective elements of the datacenter106. For example, the load balancer116can operate as a proxy, or reverse proxy, for a service, such as a service provided to one or more remote clients, such as one or more of the clients104A through104D, by the application server108, the telephony server112, and/or another server. Routing functions of the load balancer116can be configured directly or via a DNS. The load balancer116can coordinate requests from remote clients and can simplify client access by masking the internal configuration of the datacenter106from the remote clients. In some implementations, the load balancer116can operate as a firewall, allowing or preventing communications based on configuration settings. Although the load balancer116is depicted inFIG.1as being within the datacenter106, in some implementations, the load balancer116can instead be located outside of the datacenter106, for example, when providing global routing for multiple datacenters. In some implementations, load balancers can be included both within and outside of the datacenter106. In some implementations, the load balancer116can be omitted. FIG.2is a block diagram of an example internal configuration of a computing device200of an electronic computing and communications system. In one configuration, the computing device200may implement one or more of the client104, the application server108, the database server110, or the telephony server112of the system100shown inFIG.1. 
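For example, and without limitation, the behavior attributed to the load balancer116above, routing traffic to backend servers while masking the internal configuration of the datacenter and optionally allowing or preventing communications like a firewall, could be sketched as a simple round-robin reverse proxy. The class, server names, and allow-list mechanism below are illustrative assumptions, not the disclosed implementation.

```python
import itertools

class LoadBalancer:
    """Hypothetical round-robin reverse proxy with a firewall-style allow list."""

    def __init__(self, servers, allowed_clients=None):
        # Cycle through backends so clients never see the internal topology.
        self._cycle = itertools.cycle(servers)
        self._allowed = allowed_clients  # None means allow every client

    def route(self, client_id):
        """Return the next backend for an allowed client, or None if blocked."""
        if self._allowed is not None and client_id not in self._allowed:
            return None  # firewall-like behavior: communication prevented
        return next(self._cycle)

lb = LoadBalancer(["app-1", "app-2"], allowed_clients={"104A", "104B"})
```

Round-robin is only one possible policy; least-connections or DNS-configured routing, as the passage notes, would fit behind the same `route` interface.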
The computing device200includes components or units, such as a processor202, a memory204, a bus206, a power source208, peripherals210, a user interface212, a network interface214, other suitable components, or a combination thereof. One or more of the memory204, the power source208, the peripherals210, the user interface212, or the network interface214can communicate with the processor202via the bus206. The processor202is a central processing unit, such as a microprocessor, and can include single or multiple processors having single or multiple processing cores. Alternatively, the processor202can include another type of device, or multiple devices, configured for manipulating or processing information. For example, the processor202can include multiple processors interconnected in one or more manners, including hardwired or networked. The operations of the processor202can be distributed across multiple devices or units that can be coupled directly or across a local area or other suitable type of network. The processor202can include a cache, or cache memory, for local storage of operating data or instructions. The memory204includes one or more memory components, which may each be volatile memory or non-volatile memory. For example, the volatile memory can be random access memory (RAM) (e.g., a DRAM module, such as DDR SDRAM). In another example, the non-volatile memory of the memory204can be a disk drive, a solid state drive, flash memory, or phase-change memory. In some implementations, the memory204can be distributed across multiple devices. For example, the memory204can include network-based memory or memory in multiple clients or servers performing the operations of those multiple devices. The memory204can include data for immediate access by the processor202. For example, the memory204can include executable instructions216, application data218, and an operating system220. 
The executable instructions216can include one or more application programs, which can be loaded or copied, in whole or in part, from non-volatile memory to volatile memory to be executed by the processor202. For example, the executable instructions216can include instructions for performing some or all of the techniques of this disclosure. The application data218can include user data, database data (e.g., database catalogs or dictionaries), or the like. In some implementations, the application data218can include functional programs, such as a web browser, a web server, a database server, another program, or a combination thereof. The operating system220can be, for example, Microsoft Windows®, Mac OS X®, or Linux®; an operating system for a mobile device, such as a smartphone or tablet device; or an operating system for a non-mobile device, such as a mainframe computer. The power source208provides power to the computing device200. For example, the power source208can be an interface to an external power distribution system. In another example, the power source208can be a battery, such as where the computing device200is a mobile device or is otherwise configured to operate independently of an external power distribution system. In some implementations, the computing device200may include or otherwise use multiple power sources. In some such implementations, the power source208can be a backup battery. The peripherals210include one or more sensors, detectors, or other devices configured for monitoring the computing device200or the environment around the computing device200. For example, the peripherals210can include a geolocation component, such as a global positioning system location unit. In another example, the peripherals can include a temperature sensor for measuring temperatures of components of the computing device200, such as the processor202. In some implementations, the computing device200can omit the peripherals210.
The user interface212includes one or more input interfaces and/or output interfaces. An input interface may, for example, be a positional input device, such as a mouse, touchpad, touchscreen, or the like; a keyboard; or another suitable human or machine interface device. An output interface may, for example, be a display, such as a liquid crystal display, a cathode-ray tube, a light emitting diode display, or other suitable display. The network interface214provides a connection or link to a network (e.g., the network114shown inFIG.1). The network interface214can be a wired network interface or a wireless network interface. The computing device200can communicate with other devices via the network interface214using one or more network protocols, such as using Ethernet, transmission control protocol (TCP), internet protocol (IP), power line communication, an IEEE 802.X protocol (e.g., Wi-Fi, Bluetooth, or ZigBee), infrared, visible light, general packet radio service (GPRS), global system for mobile communications (GSM), code-division multiple access (CDMA), Z-Wave, another protocol, or a combination thereof. FIG.3is a block diagram of an example of a software platform300implemented by an electronic computing and communications system, for example, the system100shown inFIG.1. The software platform300is a UCaaS platform accessible by clients of a customer of a UCaaS platform provider, for example, the clients104A through104B of the customer102A or the clients104C through104D of the customer102B shown inFIG.1. The software platform300may be a multi-tenant platform instantiated using one or more servers at one or more datacenters including, for example, the application server108, the database server110, and the telephony server112of the datacenter106shown inFIG.1. The software platform300includes software services accessible using one or more clients. 
For example, a customer302as shown includes four clients—a desk phone304, a computer306, a mobile device308, and a shared device310. The desk phone304is a desktop unit configured to at least send and receive calls and includes an input device for receiving a telephone number or extension to dial to and an output device for outputting audio and/or video for a call in progress. The computer306is a desktop, laptop, or tablet computer including an input device for receiving some form of user input and an output device for outputting information in an audio and/or visual format. The mobile device308is a smartphone, wearable device, or other mobile computing aspect including an input device for receiving some form of user input and an output device for outputting information in an audio and/or visual format. The desk phone304, the computer306, and the mobile device308may generally be considered personal devices configured for use by a single user. The shared device310is a desk phone, a computer, a mobile device, or a different device which may instead be configured for use by multiple specified or unspecified users. Each of the clients304through310includes or runs on a computing device configured to access at least a portion of the software platform300. In some implementations, the customer302may include additional clients not shown. For example, the customer302may include multiple clients of one or more client types (e.g., multiple desk phones or multiple computers) and/or one or more clients of a client type not shown inFIG.3(e.g., wearable devices or televisions other than as shared devices). For example, the customer302may have tens or hundreds of desk phones, computers, mobile devices, and/or shared devices. The software services of the software platform300generally relate to communications tools, but are in no way limited in scope. 
As shown, the software services of the software platform300include telephony software312, conferencing software314, messaging software316, and other software318. Some or all of the software312through318uses customer configurations320specific to the customer302. The customer configurations320may, for example, be data stored within a database or other data store at a database server, such as the database server110shown inFIG.1. The telephony software312enables telephony traffic between ones of the clients304through310and other telephony-enabled devices, which may be other ones of the clients304through310, other VOIP-enabled clients of the customer302, non-VOIP-enabled devices of the customer302, VOIP-enabled clients of another customer, non-VOIP-enabled devices of another customer, or other VOIP-enabled clients or non-VOIP-enabled devices. Calls sent or received using the telephony software312may, for example, be sent or received using the desk phone304, a softphone running on the computer306, a mobile application running on the mobile device308, or using the shared device310that includes telephony features. The telephony software312further enables phones that do not include a client application to connect to other software services of the software platform300. For example, the telephony software312may receive and process calls from phones not associated with the customer302to route that telephony traffic to one or more of the conferencing software314, the messaging software316, or the other software318. The conferencing software314enables audio, video, and/or other forms of conferences between multiple participants, such as to facilitate a conference between those participants. In some cases, the participants may all be physically present within a single location, for example, a conference room, in which the conferencing software314may facilitate a conference between only those participants and using one or more clients within the conference room. 
In some cases, one or more participants may be physically present within a single location and one or more other participants may be remote, in which the conferencing software314may facilitate a conference between all of those participants using one or more clients within the conference room and one or more remote clients. In some cases, the participants may all be remote, in which the conferencing software314may facilitate a conference between the participants using different clients for the participants. The conferencing software314can include functionality for hosting, presenting, scheduling, joining, or otherwise participating in a conference. The conferencing software314may further include functionality for recording some or all of a conference and/or documenting a transcript for the conference. The messaging software316enables instant messaging, unified messaging, and other types of messaging communications between multiple devices, such as to facilitate a chat or other virtual conversation between users of those devices. The unified messaging functionality of the messaging software316may, for example, refer to email messaging which includes a voicemail transcription service delivered in email format. The other software318enables other functionality of the software platform300. Examples of the other software318include, but are not limited to, device management software, resource provisioning and deployment software, administrative software, third party integration software, and the like. In one particular example, the other software318can include audience engagement software for determining, during a video conference, sentiment types of audience participants reacting to a speaker participant based on reaction detection from video data of the audience participants, determining an engagement level based on the sentiment types, and presenting a real-time recommendation output to the speaker participant based on the engagement level.
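As a non-limiting sketch of the audience engagement behavior just described, a determined engagement level might be turned into a real-time recommendation output comprising an engagement level indicator (in color-based, numeric, and text-based formats) together with a suggestive action. The particular colors, numbers, labels, and action strings below are assumptions for illustration, not values from this disclosure.

```python
# Hypothetical indicator formats per engagement level: color-based,
# numeric, and text-based, as the description contemplates.
INDICATORS = {
    "high":   {"color": "green",  "numeric": 3, "text": "engaged"},
    "medium": {"color": "yellow", "numeric": 2, "text": "mixed"},
    "low":    {"color": "red",    "numeric": 1, "text": "disengaged"},
}

def recommend(engagement_level):
    """Build a real-time recommendation output for the speaker participant."""
    indicator = INDICATORS[engagement_level]
    if engagement_level == "high":
        # The audience is engaged: keep doing what is working.
        action = "maintain topic and presentation behavior"
    elif engagement_level == "medium":
        # Mixed engagement: give the audience a chance to re-engage.
        action = "pause for a question"
    else:
        # Low engagement: suggest a change.
        action = "change topic or presentation behavior"
    return {"indicator": indicator, "action": action}
```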
The software312through318may be implemented using one or more servers, for example, of a datacenter such as the datacenter106shown inFIG.1. For example, one or more of the software312through318may be implemented using an application server, a database server, and/or a telephony server, such as the servers108through112shown inFIG.1. In another example, one or more of the software312through318may be implemented using servers not shown inFIG.1, for example, a meeting server, a web server, or another server. In yet another example, one or more of the software312through318may be implemented using one or more of the servers108through112and one or more other servers. The software312through318may be implemented by different servers or by the same server. Features of the software services of the software platform300may be integrated with one another to provide a unified experience for users. For example, the messaging software316may include a user interface element configured to initiate a call with another user of the customer302. In another example, the telephony software312may include functionality for elevating a telephone call to a conference. In yet another example, the conferencing software314may include functionality for sending and receiving instant messages between participants and/or other users of the customer302. In yet another example, the conferencing software314may include functionality for file sharing between participants and/or other users of the customer302. In some implementations, some or all of the software312through318may be combined into a single software application run on clients of the customer, such as one or more of the clients304through310. FIG.4is a block diagram of an example of a system400for real-time video-based audience reaction sentiment analysis. The system400includes a server402which runs conferencing software404and audience engagement software406. 
The conferencing software404implements a conference between operators of multiple devices and may, for example, be the conferencing software314shown inFIG.3. As shown, the conferencing software404implements a conference between an operator of a speaker device414and operators of one or more audience devices416. Each of the speaker device414and the audience devices416may be a client device such as one of the clients304through310shown inFIG.3or a non-client device which accesses the conferencing software404other than by using a client application. The audience engagement software406interfaces with the conferencing software404to provide real-time recommendations to an operator of the speaker device414(e.g., a speaker) based on sentiment analysis of reactions of operators of the one or more audience devices416(e.g., audience participants) during a conference implemented by the conferencing software404. The audience engagement software406can include reaction recognition software408and sentiment analysis software410. The reaction recognition software408determines a reaction of an audience participant in response to speaker actions or presentations. In an example, the reaction recognition software408monitors video of the audience participants during the conference implemented by the conferencing software404to determine video data-based reactions of audience participants in response to speaker participant actions or presentations. The video can be from, for example, tiles or similar video windows, which show videos of the audience participants in the conference. For example, the reaction recognition software408can use facial recognition and movement detection to determine facial expressions, gestures, head positions, and movement with respect to an audience device of the audience participant. 
In another example, the reaction recognition software408may supplement and/or confirm the video-based determinations by using a real-time transcription of the conference, to detect audible or verbal reactions of audience participants in response to speaker actions or presentations. For example, a contextual machine learning model can use words spoken by the audience participant temporally at or near the video-based determinations to identify words indicative of understanding, questioning, or other expressive terms. The reaction determination can be for audience participants who are either perceptible or imperceptible to the speaker participant during the conference. For example, depending on a view used during the conference implemented by the conferencing software404and/or a number of audience members participating in the conference, user tiles for some of the audience participants may not be visible to the speaker participant. In an implementation, audience participants may be on mute for the conference but can provide audio feedback that is not heard by other audience participants, which can be used for the audience engagement or sentiment analysis. The sentiment analysis software410determines a context of the speaker presentation or conference by evaluating content of a real-time transcription using a contextual machine learning model. The context can refer to a purpose of the conference or a setting or environment in or for which the speaker presentation is being made. For example, the contextual machine learning model can identify, based on the words, whether the speaker presentation is for a classroom, training, education, or a customer service call. The context is used by the sentiment analysis software410to determine a sentiment type of the determined reaction. The determined reaction can have multiple meanings depending on the context. 
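The temporal alignment described above, in which words spoken at or near the time of a video-based determination are used to confirm a reaction, can be sketched as follows. This is a minimal illustration rather than the patented implementation; the TranscriptWord structure, the five-second window, and the confirming-term list are assumptions made for the sketch:

```python
from dataclasses import dataclass

@dataclass
class TranscriptWord:
    text: str
    t: float  # seconds from conference start (assumed timestamp format)

def words_near_reaction(transcript, reaction_time, window=5.0):
    """Return words spoken within +/- `window` seconds of a video-detected reaction."""
    return [w.text for w in transcript if abs(w.t - reaction_time) <= window]

# Illustrative stand-in for a contextual model's vocabulary of confirming terms.
CONFIRMING_TERMS = {"understood", "got", "okay"}

def confirms_reaction(transcript, reaction_time, window=5.0):
    """True when a temporally nearby word suggests an audible reaction as well."""
    nearby = {w.lower().strip(".,!?") for w in words_near_reaction(transcript, reaction_time, window)}
    return bool(nearby & CONFIRMING_TERMS)
```

In practice a contextual machine learning model would evaluate the nearby words rather than a fixed term set, but the windowed lookup is the same.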
For example, an audience participant nodding his or her head may have multiple meanings depending on the context of the speaker presentation. For example, if the context is a sales presentation, then the nodding can indicate a positive reaction. In another example, if the context is a customer service conversation, then the nodding can indicate a negative reaction. The sentiment types can include, but are not limited to, a positive reaction, negative reaction, questioning reaction, surprised reaction, neutral, or blank face reaction. The sentiment analysis software410aggregates the sentiment types to determine an engagement level or type. For example, engagement levels can include, but are not limited to, highly engaged, somewhat engaged, somewhat disengaged, not engaged, positive, or negative. A real-time recommendation is presented to the speaker participant based on the engagement level. For example, the real-time recommendation output can be to maintain the current topic. In another example, the real-time recommendation output can be to change the current topic. The real-time recommendation output can include a different topic identified by evaluating content of a real-time transcription of the conference using a contextual machine learning model. In an example, the audience engagement software406can determine, at or near a time of the reaction detection, a performance behavior of the speaker participant based on analyzing video data obtained from a device of the speaker and/or some or all of a real-time transcription of a presentation of the speaker using one or more contextual machine learning models. For example, the performance behaviors can include, but are not limited to, monotonic speaking patterns, waving arms, no eye contact, and talking too fast. For example, the real-time recommendation output can be to maintain a current speaker participant behavior when the engagement level is positive. 
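The context-dependent meaning of a reaction described above can be sketched with a simple lookup in place of the contextual machine learning model. The contexts, reaction names, and sentiment labels below are illustrative assumptions, not values from the source:

```python
# Assumed (context, reaction) -> sentiment mapping standing in for the
# contextual machine learning model; a nod is positive in a sales
# presentation but negative in a customer service conversation.
SENTIMENT_BY_CONTEXT = {
    ("sales_presentation", "nodding"): "positive",
    ("customer_service", "nodding"): "negative",
    ("classroom", "raised_hand"): "questioning",
}

def sentiment_type(context, reaction):
    """Map a detected reaction to a sentiment type, given the conference context."""
    return SENTIMENT_BY_CONTEXT.get((context, reaction), "neutral")
```

A learned model would generalize beyond an enumerated table, but the interface, a reaction interpreted through a context, is the same.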
In another example, the real-time recommendation output can be to change a current speaker participant behavior when the engagement level is negative. In yet another example, the real-time recommendation output can be to maintain the current topic and presentation behavior. In a further example, the real-time recommendation output can be to change the current topic and the presentation behavior. In some implementations, the audience engagement software406can receive electronic reactions (e.g., the “thumbs up,” “hands clapping,” and other emojis available during a video conference) from the audience participants. The audience engagement software406can aggregate the electronic reactions to determine an engagement level or type. For example, engagement levels can include, but are not limited to, highly engaged, somewhat engaged, somewhat disengaged, not engaged, positive, or negative. A real-time recommendation is presented to the speaker participant based on the engagement level. For example, the real-time recommendation output can be to maintain the current topic. In another example, the real-time recommendation output can be to change the current topic. The real-time recommendation output can include a different topic identified by evaluating content of a real-time transcription of the conference using a contextual machine learning model. In an example, the aggregated electronic reactions can be used to validate the video-based audience reactions and other audience reactions as described herein. In some implementations, the audience engagement software406can include post-presentation analytics software412. The post-presentation analytics software412can aggregate conference sessions including, but not limited to, reaction detections, sentiment types, engagement levels, real-time recommendation outputs, and associated timestamps. 
The aggregated conference sessions can indicate which real-time recommendation outputs were effective, what topics were interesting based on the engagement levels, what presentation behaviors were effective, trends, and the impact of real-time recommendation outputs. In an example, the post-presentation analytics software412can analyze historical conference sessions using the audience engagement software406as described herein. The audience engagement software406can provide recommendation outputs for reaction detections in the historical conference sessions. These recommendation outputs can be used for training and education purposes. In yet another example, the recommendation outputs can be used to train a machine learning model specific to a speaker participant. The trained machine learning model can then be used to provide real-time recommendations to the speaker participant when presenting during conferences, webinars, and other conferencing arrangements. In still another example, the post-presentation analytics software412can analyze the reaction detections, the sentiment types, the engagement levels, and the real-time recommendation outputs to determine effectiveness of speakers with respect to one or more presentations. This can identify strengths and weaknesses of the speakers with respect to the presentations. This, in turn, can be used for training or education purposes. For example, the output from the post-presentation analytics software412can be used for cross-speaker analysis, training videos and in focus groups. In some implementations, the audience engagement software406can include post-presentation analytics software412. The post-presentation analytics software412can aggregate conference sessions including, but not limited to, reaction detections, sentiment types, engagement levels, real-time recommendation outputs, and associated timestamps. The conference sessions can be of the speaker and other speakers. 
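The aggregation described above, which judges whether a real-time recommendation output was effective from timestamped session data, can be sketched as follows. The comparison rule (average engagement after the recommendation versus before it), the sample format, and the function name are all assumptions for illustration:

```python
def recommendation_effectiveness(engagement_samples, recommendation_time):
    """Judge a recommendation effective when average engagement rises after it.

    engagement_samples: list of (timestamp_seconds, engagement_score) tuples
    aggregated from a conference session, as described for the
    post-presentation analytics. Returns True/False, or None when there is
    no data on one side of the recommendation timestamp.
    """
    before = [score for t, score in engagement_samples if t < recommendation_time]
    after = [score for t, score in engagement_samples if t >= recommendation_time]
    if not before or not after:
        return None  # not enough data to compare
    return sum(after) / len(after) > sum(before) / len(before)
```

A production analysis would control for topic changes and other confounders; this sketch only shows how the stored timestamps make the before/after comparison possible.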
The aggregated conference sessions can indicate which real-time recommendation outputs were effective, what topics were interesting based on the engagement levels, what presentation behaviors were effective, trends, the impact of real-time recommendation outputs, and audience reaction to different speaker actions or behaviors. This can identify patterns with respect to audience reactions and different speaker actions or behaviors. This, in turn, can be used for training or education purposes. For example, the output from the post-presentation analytics software412can be used for cross-speaker analysis, training videos, and focus groups. In some implementations, the system400may be used when the speaker participant is presenting in a live audience scenario such as in a lecture hall or at a stadium. In this example, the audience devices416can be cameras and other audio-visual devices which can capture and feed video data to the conferencing software404for real-time video-based audience reaction sentiment analysis by the audience engagement software406. The real-time recommendation outputs can be presented at the speaker device414or at secondary devices as described with respect toFIG.9. In some implementations, the system400may be used when the conferencing software404is used for running a webinar, which typically has a large audience and focuses on the speaker participant or panelists. The webinars include the ability to provide polling, answer questions live or via text, and bring a view-only attendee live on video to ask a question or contribute. In this example, the audience devices416can represent one or more webinar participants, which feed video data to the conferencing software404for real-time video-based audience reaction sentiment analysis by the audience engagement software406. The real-time recommendation outputs can be presented at the speaker device414or at secondary devices as described with respect toFIG.9. 
In some implementations, the post-presentation analytics software412can be used for training, preparing, or assisting contact center agents for future customer interactions. The post-presentation analytics software412can process one or more video data recordings for a contact center agent using the real-time video-based audience reaction sentiment analysis provided by the audience engagement software406. The audience engagement software406can provide recommendation outputs for reaction detections in the one or more video data recordings. The recommendation outputs can be used to train a machine learning model specific to the contact center agent. The trained machine learning model can then be used to provide real-time recommendations to the contact center agent during future customer interactions. The machine learning model specific to the contact center agent can be updated by the post-presentation analytics software412based on video recordings of further customer interactions. For example, the post-presentation analytics software412can aggregate a contact center agent's customer interactions including, but not limited to, reaction detections, sentiment types, engagement levels, real-time recommendation outputs, and associated timestamps. The aggregated customer interactions can indicate which real-time recommendation outputs were effective, what topics were interesting based on the engagement levels, what presentation behaviors were effective, trends, and the impact of real-time recommendation outputs. The machine learning model specific to the contact center agent can be updated accordingly. In an example, the post-presentation analytics software412can analyze historical conference sessions from the speaker and other speakers using the audience engagement software406as described herein. The audience engagement software406can provide recommendation outputs for reaction detections in the historical conference sessions. 
These recommendation outputs can be used for training and education purposes. In yet another example, the recommendation outputs can be used to train a machine learning model specific to a speaker participant. The trained machine learning model can then be used to provide real-time recommendations to the speaker participant when presenting during conferences, webinars, and other conferencing arrangements. The audience engagement software406, for example, may be the audience engagement software referred to above with respect to the other software318shown inFIG.3. In some implementations, the conferencing software404may include the audience engagement software406. In some implementations, the conferencing software404and the audience engagement software406may be wholly or partially run on different servers. FIG.5is a block diagram of example functionality of reaction recognition software500, which may, for example, be the reaction recognition software408shown inFIG.4. The reaction recognition software500includes tools, such as programs, subprograms, functions, routines, subroutines, operations, and/or the like for detecting and determining reactions of audience participants to a presentation by a speaker participant during a conference. As shown, the reaction recognition software500includes a facial recognition tool502, a movement detection tool504, a keyword reaction detection tool506, and a sound reaction detection tool508. The facial recognition tool502determines from video data of an audience participant a reaction to a temporally associated portion of a presentation by the speaker participant. The facial recognition tool502can detect and determine a generic shape of certain facial features including, for example, eyebrows, eyes, and mouth on a face of the audience participant. The facial recognition tool502may use the output of a learning model trained for reaction or expression determination processing to identify a reaction. 
For example, the identified reactions can include, but are not limited to, surprise, blank expression, neutral, happy, smiling, frowning, puzzled, curious, or questioning. The movement detection tool504determines from video data of an audience participant a reaction to a temporally associated portion of a presentation by the speaker participant. The movement detection tool504can detect, for example, whether an audience participant is leaving a room or premise during a presentation by the speaker participant or raising a hand. For example, the movement detection tool504can detect movement by identifying an object in the video data and tracking the object across multiple frames in the video data. In another example, the movement detection tool504can detect, for example, whether an audience participant is gesturing during a presentation by the speaker participant. For example, the gesturing can include rubbing their eyes as an indication of boredom or tiredness, raising their hand for a question, or scratching their head in confusion. For example, the movement detection tool504can detect movement or gestures by identifying an object in the video data and tracking the object across multiple frames in the video data. In some implementations, the movement detection tool504can detect, for example, whether an audience participant is making vernacularized gestures during a presentation by the speaker participant. In this example, the vernacularized gestures are gestures which can have regional or region-based meanings. For example, the movement detection tool504may, using one or more learning models trained for gesture recognition on a regional basis, understand that a person located in one region who is nodding their head may be communicating the same reaction as a person located in another region who is wobbling their head. 
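The cross-frame tracking described above can be approximated, at its simplest, by comparing consecutive frames. The sketch below substitutes frame differencing for object tracking as an assumed simplification; frames are nested lists of grayscale pixel intensities, and the threshold value is illustrative:

```python
def mean_abs_diff(frame_a, frame_b):
    """Mean absolute per-pixel difference between two equally sized frames."""
    total = count = 0
    for row_a, row_b in zip(frame_a, frame_b):
        for a, b in zip(row_a, row_b):
            total += abs(a - b)
            count += 1
    return total / count

def detect_movement(frames, threshold=10.0):
    """Return indices of frames whose change from the prior frame exceeds the threshold."""
    return [i for i in range(1, len(frames))
            if mean_abs_diff(frames[i - 1], frames[i]) > threshold]
```

Actual gesture recognition would identify an object (a hand, a head) and follow it across frames, as the description states; frame differencing only shows where in the video such tracking would be triggered.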
The movement detection tool504can detect movement or gestures by identifying an object in the video data and tracking the object across multiple frames in the video data. The keyword reaction detection tool506detects keyword reactions of an audience participant to a temporally associated portion of a presentation by the speaker participant based on a real-time transcription of the conference. The keyword reaction detection tool506may obtain and use the real-time transcription of the conference, which may be generated by the reaction recognition software500, the audience engagement software, the conferencing software, or other software, to detect the keywords which are associated with reactions. For example, timestamps from the real-time transcript can be compared against timestamps from the conference to determine the audience video data which aligns with the presentation portions spoken by the speaker. Alternatively, the keyword reaction detection tool506may use output of a learning model trained for keyword reaction processing to detect keywords. For example, the learning model may evaluate content of the real-time transcription to produce the output. For example, keyword reactions can include, but are not limited to, yes, no, got it, huh, okay, and question. The sound reaction detection tool508determines from audio of an audience participant a reaction to a temporally associated portion of a presentation by the speaker participant. The sound reaction detection tool508can detect and identify sounds such as, but not limited to, gasps, exclamations, and yawning from the audience participant. The sound reaction detection tool508may use the output of a learning model trained for sounds to identify a reaction. For example, the identified reactions can include, but are not limited to, surprise and boredom. 
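Keyword reaction detection over a timestamped transcript, as described above, can be sketched as follows. The entry format (timestamp, speaker, text) is an assumption; the keyword set uses the examples named in the description ("yes", "no", "got it", "huh", "okay", "question"):

```python
REACTION_KEYWORDS = {"yes", "no", "got it", "huh", "okay", "question"}

def detect_keyword_reactions(transcript_entries):
    """Return (timestamp, speaker, keyword) tuples for matched reaction keywords.

    transcript_entries: list of (timestamp_seconds, speaker_id, text) tuples,
    an assumed shape for the real-time transcription.
    """
    detections = []
    for ts, speaker, text in transcript_entries:
        # Pad with spaces so whole words and multi-word phrases match cleanly.
        normalized = " " + text.lower().strip(" .,!?") + " "
        for keyword in sorted(REACTION_KEYWORDS):  # sorted for deterministic order
            if f" {keyword} " in normalized:
                detections.append((ts, speaker, keyword))
    return detections
```

The returned timestamps are what the tool would compare against conference timestamps to align each keyword reaction with the presentation portion that prompted it.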
Although the tools502through508are shown as functionality of the reaction recognition software500as a single piece of software, in some implementations, some or all of the tools502through508may exist outside of the reaction recognition software500and/or the software platform may exclude the reaction recognition software500while still including some or all of the tools502through508in some form elsewhere. FIG.6is a block diagram of example functionality of sentiment analysis software600. The sentiment analysis software600includes tools, such as programs, subprograms, functions, routines, subroutines, operations, and/or the like for determining a sentiment type for a detected reaction, determining an engagement level based on the sentiment types, and presenting a real-time recommendation output to the speaker participant based on the engagement level. As shown, the sentiment analysis software600includes a context determination tool602, a sentiment determination tool604, an engagement status determination tool606, a recommendation determination tool608, and a recommendation outputting tool610. The context determination tool602determines a context of the conference at or near the time of the reaction detection by evaluating content of a real-time transcription of the conference using a contextual machine learning model. In some implementations, the context determination tool602determines a context associated with the speaker participant at a time of the reaction detection based on a real-time transcription of the conference or portions of the conference associated with the speaker presentation. In some examples, the context determined by the context determination tool602can indicate that the conference is a sales conference, a classroom presentation, or a seminar. 
The context determination tool602may obtain and use the real-time transcription of the conference, which may be generated by the sentiment analysis software600, the audience engagement software, the conferencing software, or other software, to determine the context. For example, the words from the real-time transcription of the conference are input to a contextual machine learning model. The contextual machine learning model can identify the context, i.e., a setting or environment suggested by the words. Alternatively, the context determination tool602may use output of a learning model trained for contextual content processing to determine the context. For example, the learning model, which may be a contextual machine learning model, may evaluate content of the real-time transcription to produce output. The output may, for example, be a context of the conference. The sentiment determination tool604determines a sentiment type for a detected reaction using the determined context. A sentiment type for a detected reaction may be, but is not limited to, a positive reaction, negative reaction, questioning reaction, surprised reaction, neutral, or blank face reaction. The sentiment determination tool604may use output of a learning model trained for sentiment processing to determine the sentiment type. For example, the learning model, which may be a contextual machine learning model, may evaluate the detected reaction in view of the determined context to produce output. The output may, for example, be a sentiment type for the detected reaction. The sentiment determination tool604maintains a count of outputted sentiment types. For example, sentiment types can be data maintained as bins in a histogram. The engagement status determination tool606determines an engagement level or type based on the aggregated sentiment types. In an example, the engagement level can be determined from a most frequent bin in the histogram. 
In another example, sentiment types can be assigned a numerical value such as 10 for smiling and 0 for bored. The assigned values can change depending on the determined context. The engagement level can then be determined by averaging the numbers. A high value can indicate high engagement and a low value can indicate low engagement. Alternatively, the engagement status determination tool606may use output of a learning model trained for contextual content processing to determine the engagement level. For example, the learning model, which may be a contextual machine learning model, may evaluate the sentiment types and a quantitative value for each sentiment type to produce output. The output may, for example, be the engagement level. The recommendation determination tool608determines a real-time recommendation output based on the engagement level. The recommendation determination tool608can provide, for example, maintain, change, or pause real-time recommendations based on the engagement level. In an example, a change type recommendation output can include a recommended topic. For example, the recommendation determination tool608can evaluate content of a real-time transcription of the conference using a contextual machine learning model to identify change recommendation topics. In another example, a change type recommendation output can include a recommended speaker presentation behavior. For example, when a current behavior is determined to be monotonic, the recommendation determination tool608can recommend voice modulation changes to the speaker. In another example, when the speaker is mumbling, the recommendation determination tool608can recommend to the speaker to speak more clearly and/or loudly. The recommendation determination tool608can provide, for example, a combination of the topic and presentation behavior recommendations based on the engagement level. 
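The two aggregation strategies described above, the most frequent histogram bin and the averaged numeric score, can be sketched as follows, along with a minimal mapping from engagement level to a recommendation. The score table, the level thresholds, and the recommendation strings are illustrative assumptions:

```python
from collections import Counter

# Assumed numeric scores per sentiment type (the description gives 10 for
# smiling and 0 for bored as examples; these values would shift with context).
SENTIMENT_SCORES = {"positive": 10, "questioning": 5, "neutral": 4, "negative": 0}

def most_frequent_sentiment(sentiment_types):
    """Engagement by histogram: return the most frequent sentiment bin."""
    return Counter(sentiment_types).most_common(1)[0][0]

def engagement_level(sentiment_types):
    """Engagement by averaging the assumed numeric scores."""
    avg = sum(SENTIMENT_SCORES[s] for s in sentiment_types) / len(sentiment_types)
    if avg >= 7:
        return "highly engaged"
    if avg >= 4:
        return "somewhat engaged"
    return "not engaged"

# Assumed engagement-level-to-recommendation mapping.
RECOMMENDATION_BY_LEVEL = {
    "highly engaged": "maintain the current topic",
    "somewhat engaged": "maintain the current topic",
    "not engaged": "change the current topic",
}

def real_time_recommendation(sentiment_types):
    return RECOMMENDATION_BY_LEVEL[engagement_level(sentiment_types)]
```

As described, a learning model could replace both the fixed score table and the threshold rules while preserving the same inputs and outputs.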
The recommendation outputting tool610causes a presentation of the real-time recommended output to the speaker participant in accordance with the determination by the recommendation determination tool608. Generally, the recommendation outputting tool610outputs instructions, commands, or other information configured to cause the device of the speaker participant to output the recommended output to the speaker participant. For example, the real-time recommendation output can be provided as a prompt to the speaker, a green-yellow-red or color-based graphic, a number from 1 to 10, text, and/or combinations thereof. In some cases, the recommendation outputting tool610outputs those instructions, commands, or other information to a secondary device associated with the speaker participant. For example, if conferencing software running at the device associated with the speaker is outputting user tiles of conference participants in a gallery view arrangement, then the presentation of the real-time recommendation output can be on a secondary device such as a tablet or a lectern device. Thus, in such a case, the gallery view arrangement output at the device may remain uninterrupted while the real-time recommendation output is presented at the secondary device. In some examples, the device associated with the speaker may be considered a primary device and the secondary device may be designated in a companion mode for use with the primary device. Although the tools602through610are shown as functionality of the sentiment analysis software600as a single piece of software, in some implementations, some or all of the tools602through610may exist outside of the sentiment analysis software600and/or the software platform may exclude the sentiment analysis software600while still including some or all of the tools602through610in some form elsewhere. FIG.7is a block diagram of an example of reaction detection based on input received from devices connected to conferencing software. 
As shown, a server700runs reaction detection software702which includes a reaction recognition tool704. For example, the reaction recognition tool704includes the facial recognition tool502, the movement detection tool504, the keyword reaction detection tool506, and the sound reaction detection tool508ofFIG.5. The reaction recognition tool704detects reactions of conference participants during a conference based on input706received from one or more devices708connected to the conference. For example, the server700, the reaction detection software702, and the one or more devices708may respectively be the server402, part of or integrated with audience engagement software406, and the one or more audience devices416shown inFIG.4. In particular, the one or more devices708include or otherwise refer to devices such as audience participant devices (e.g., the audience devices416shown inFIG.4). As such, the input706received from the one or more devices708is received from all devices in the conference. For example, the input706is audio information captured over one or more audio channels between the devices708and conferencing software which implements the conference (e.g., the conferencing software404shown inFIG.4) and video information captured over one or more video channels between the devices708and conferencing software which implements the conference. The audio information of the input706is processed using transcription software710to generate a real-time transcription of the conference. In particular, the real-time transcription is generated in real-time concurrently with the conference based on real-time presentations, conversations, and the like occurring within the conference. Thus, the real-time transcription may not be considered fully generated until after a final presentation, conversation, or the like during the conference has ended. 
Accordingly, generating the real-time transcription includes or refers to generating a portion of the real-time transcription corresponding to a current conversation occurring at a given time during the conference. The transcription software710may, for example, be or refer to an automated speech recognition engine configured to access audio data of the conference, such as via the conferencing software. The real-time transcription generated by the transcription software710is next processed using a learning model712to determine keywords and sound emanations associated with a participant of the conference. The learning model712may be or include a neural network (e.g., a convolutional neural network, recurrent neural network, or other neural network), decision tree, vector machine, Bayesian network, genetic algorithm, deep learning system separate from a neural network, or another machine learning model. The learning model712is trained to recognize content and context of conversations. For example, the learning model712may be a contextual learning model which is trained to evaluate the content of the real-time transcription generated by the transcription software710, to identify keywords spoken and sounds emanated in reaction to a speaker presentation. In particular, to identify keywords, the learning model712evaluates instances of words within the real-time transcription based on a context thereof to determine when such an instance is associated as a reaction of a participant. For example, a participant saying “understood” after an explanation by the speaker can be a keyword reaction detection. In another example, a participant saying “over my head” after an explanation by the speaker can be a keyword reaction detection. The learning model712evaluates keywords and/or related content within the real-time transcription against historical communication records714to determine when such keywords and/or related content correspond to reaction detections. 
Separately, to identify sound utterances, the learning model 712 evaluates instances of sound utterances within the audio stream to determine when such an instance is associated as a reaction of a participant. For example, a participant uttering “huh” after an explanation by the speaker can be a sound reaction detection. In another example, a participant eliciting a “gasp” after an explanation by the speaker can be a sound reaction detection. The learning model 712 evaluates the sound utterances against historical communication records 714 to determine when such sound utterances correspond to reaction detections. In some implementations, one or both of the transcription software 710 or the learning model 712 may be included in the reaction detection software 702. In some implementations, the historical communication records 714 may be located other than on the server 700 on which the reaction detection software 702 is partially or wholly run. FIG. 8 is a block diagram of an example of a sentiment determination based on input received from reaction recognition software. As shown, a server 800 runs sentiment analysis software 802 which includes a sentiment-based recommendation tool 804. For example, the sentiment-based recommendation tool 804 includes one or more of the context determination tool 602, the sentiment determination tool 604, the engagement status determination tool 606, the recommendation determination tool 608, and the recommendation outputting tool 610 of FIG. 6. The sentiment-based recommendation tool 804 determines sentiment types, engagement levels, and real-time recommendations for a speaker from detected reactions of participants during a conference. For example, the server 800 and the sentiment analysis software 802 may respectively be the server 402 and part of or integrated with the audience engagement software 406 shown in FIG. 4.
In particular, the sentiment-based recommendation tool 804 determines the context of the conference at or near the time of the detected reaction by evaluating content of a real-time transcription of the conference using a contextual machine learning model. The determined context and the recognized reactions from reaction detection software, such as the reaction detection software 702 shown in FIG. 7, are processed as input using a learning model 806 to determine a sentiment type associated with the participant of the conference. The learning model 806 may be or include a neural network (e.g., a convolutional neural network, recurrent neural network, or other neural network), decision tree, vector machine, Bayesian network, genetic algorithm, deep learning system separate from a neural network, or another machine learning model. The learning model 806 is trained to recognize context and reaction patterns. For example, the learning model 806 may be a contextual learning model which is trained to evaluate the recognized reaction in view of the determined context. For example, if the recognized reaction is a frown after an explanation by the speaker, then the sentiment type can be one of confusion. In another example, if the recognized reaction is a “yay” after a sales presentation, then the sentiment type can be one of elation. The learning model 806 evaluates the context and reaction against historical communication records 808 to determine when and which context and reaction pairs correspond to different sentiment types. In some implementations, the learning model 806 may be included in the sentiment analysis software 802. In some implementations, the historical communication records 808 may be located other than on the server 800 on which the sentiment analysis software 802 is partially or wholly run. FIG. 9 is a block diagram of an example of recommendation outputting in connection with audience engagement analysis during a conference.
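The mapping from a context and reaction pair to a sentiment type can be sketched minimally as follows. The table-lookup approach here is a stand-in assumption for the trained learning model 806; the pairs shown are the examples from the text, and the "comprehension" entry and "neutral" fallback are illustrative additions.

```python
# Stand-in lookup table; the disclosure uses learning model 806 trained
# against historical communication records 808, not a fixed mapping.
CONTEXT_REACTION_SENTIMENT = {
    ("explanation", "frown"): "confusion",
    ("sales presentation", "yay"): "elation",
    ("explanation", "understood"): "comprehension",  # illustrative entry
}

def sentiment_type(context, reaction):
    """Return the sentiment type for a (context, reaction) pair,
    falling back to a neutral sentiment for unknown pairs."""
    return CONTEXT_REACTION_SENTIMENT.get((context, reaction), "neutral")

print(sentiment_type("explanation", "frown"))       # confusion
print(sentiment_type("sales presentation", "yay"))  # elation
```

The same reaction yields different sentiment types in different contexts, which is why the context determination precedes the sentiment determination.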
As shown, a server 900 runs audience engagement software 902 which includes a reaction recognition tool 904, a sentiment analysis tool 906, a recommendation determination tool 908, and a recommendation outputting tool 910. The reaction recognition tool 904 detects and recognizes reactions from audience participants. The sentiment analysis tool 906 determines a context of a presentation during the conference, determines a sentiment type of a recognized reaction using the determined context, and determines an engagement level based on aggregated sentiments. The recommendation determination tool 908 determines a real-time recommendation output based on the engagement level, which can include suggestions to maintain, change, or pause a current presentation by a speaker. The recommendation outputting tool 910 outputs the recommendation output. For example, the server 900 may be the server 402 shown in FIG. 4, include the server 700 shown in FIG. 7, and include the server 800 shown in FIG. 8, to the extent different. In another example, the audience engagement software 902 may be the audience engagement software 406 shown in FIG. 4, include the reaction recognition software 500 shown in FIG. 5, or include the sentiment analysis software 600 shown in FIG. 6, to the extent different. In yet another example, the reaction recognition tool 904, the sentiment analysis tool 906, the recommendation determination tool 908, and the recommendation outputting tool 910 may respectively be the reaction recognition tool 704 shown in FIG. 7, the sentiment determination tool 604 and the engagement status determination tool 606 shown in FIG. 6, the recommendation determination tool 608 shown in FIG. 6, and the recommendation outputting tool 610 shown in FIG. 6, to the extent different. The reaction recognition tool 904 receives content from input components 914 of audience devices 912 during a conference. For example, the input components 914 may be image capturing devices, cameras, audio input devices, and video input devices.
The reaction recognition tool 904 processes the content. The processing can include one or more of detecting and recognizing participant reactions to a speaker presentation during the conference. The sentiment analysis tool 906 determines the context of the speaker presentation during the conference. The context is used by the sentiment analysis tool 906 to determine a sentiment type for recognized reactions. The determined sentiment types are aggregated from each of the audience devices 912. The sentiment analysis tool 906 analyzes the aggregated sentiment types to determine an engagement level or status representative of the collective audience devices 912. The recommendation determination tool 908 determines the real-time recommendation output based on the engagement level. In an example, the recommendation determination tool 908 can also use a speaker presentation behavior as determined by the audience engagement software 902. The audience engagement software 902 can use input from input components 920 of a speaker device 916 to determine the speaker presentation behavior. The real-time recommendation output can be provided in one or more formats including, for example, a numeric value, a textual recommendation, and/or combinations thereof. In an example, the real-time recommendation output can include suggestive language including, for example, maintain a current presentation by a speaker, change topics, maintain speaker presentation behavior, change speaker presentation behavior, and/or pause the presentation or conference for a question. The recommendation outputting tool 910 then causes a presentation of output to the speaker participant according to the determinations made by the recommendation determination tool 908. In particular, the recommendation outputting tool 910 transmits instructions, commands, or other information configured to output the real-time recommendation output to one or more output components 918 on the speaker device 916.
The output components 918 may, for example, include a display and/or an audio output device associated with the speaker device 916. In some implementations, the recommendation outputting tool 910 may transmit instructions, commands, or other information configured to output the real-time recommendation output to one or more output components 924 of a secondary device 922 associated with the speaker participant. For example, the secondary device 922 may be another device through which the speaker participant has accessed conferencing software used to implement the conference. In another example, the secondary device 922 may be another device registered to an account of the speaker participant. In yet another example, the secondary device 922 may be another device detected on a same network to which the speaker device 916 is connected. The output components 924 may, for example, include a display and/or an audio output device associated with the secondary device 922. The secondary device 922 may be a mobile device, such as a laptop, tablet, or mobile phone, or it may be a wearable device, such as a network-connected wristband, ring, or watch. To further describe some implementations in greater detail, reference is next made to examples of techniques which may be performed by or using a system for real-time video-based audience reaction sentiment analysis. FIG. 10 is a flowchart of an example of a technique 1000 for a conference. The technique 1000 can be executed using computing devices, such as the systems, hardware, and software described with respect to FIGS. 1-9. The technique 1000 can be performed, for example, by executing a machine-readable program or other computer-executable instructions, such as routines, instructions, programs, or other code.
The steps, or operations, of the technique 1000 or another technique, method, process, or algorithm described in connection with the implementations disclosed herein can be implemented directly in hardware, firmware, software executed by hardware, circuitry, or a combination thereof. For simplicity of explanation, the technique 1000 is depicted and described herein as a series of steps or operations. However, the steps or operations in accordance with this disclosure can occur in various orders and/or concurrently. Additionally, other steps or operations not presented and described herein may be used. Furthermore, not all illustrated steps or operations may be required to implement a technique in accordance with the disclosed subject matter. At 1002, content including audience participation reactions of audience participants to a speaker participant during a conference is obtained. The content also includes the speaker presentation associated with the audience participation reactions. The content may include audio and/or video. Both aspects of the content, the audience participation reactions and the associated speaker presentation, can be timestamped so as to provide context to the audience participation reactions as described herein. The content is obtained from audience participants who are perceptible and from those who are imperceptible with respect to a view arrangement used at a speaker participant device. Accordingly, real-time recommendation outputs, as described herein, can account for audience participation reactions which are missed by the speaker participant. The audience participation reactions are processed using reaction recognition software. A real-time transcription of the audio content is obtained. Obtaining the real-time transcription may include generating the real-time transcription. Alternatively, obtaining the real-time transcription may include obtaining the real-time transcription from a software aspect which generates it.
The real-time transcription is generated in real time with a conversation occurring within a conference call attended by multiple participants including a speaker participant. The transcribed content, including the audience participation reactions, is evaluated using a contextual machine learning model to recognize reactions. The audio content is also evaluated using an audio-based contextual machine learning model to recognize sound reactions. The video content is processed using facial recognition and movement detection techniques to recognize visual reactions. Sentiment types are determined for each recognized reaction. The sentiment type processing includes determining a context of the speaker presentation associated with the audience participation reaction. A contextual machine learning model can be used to determine the context. The recognized reactions are evaluated in view of the context using, for example, a contextual machine learning model to identify sentiment types. At 1004, an engagement level is determined based on the sentiment types. The sentiment types for the audience participation reactions for the audience participants are aggregated, accumulated, counted, or tracked. For example, a histogram can be maintained to track the quantity of each sentiment type. In another example, each sentiment type is associated with a value in a range of values. For example, positive sentiment types can be assigned values at a higher end of the range of values and negative sentiment types can be assigned values at a lower end of the range of values. Counters can be maintained for each sentiment type, for example. The engagement level is determined from the aggregated sentiment types. An engagement level can be determined by numerically analyzing the aggregated sentiment types in a histogram, for example. For example, an engagement level is assigned based on a most frequently occurring sentiment type in the histogram.
In another example, the sentiment type with the highest value is used to determine the engagement level. Other techniques can be used to determine the engagement level. A real-time recommendation output is then determined based on the engagement level. In addition, the recommendation output processing can account for speaker participant presentation behavior as described herein. The real-time recommendation output can provide suggestions including, for example, maintain a present presentation topic and behavior due to a positive engagement level, change a present presentation topic due to a negative engagement level, change a present presentation behavior due to a negative engagement level, change a present presentation topic and a presentation behavior due to a negative engagement level, and/or pause a presentation due to a question. The real-time recommendation output can be presented in one or more formats including, for example, a numeric format and/or a text-based format. At 1006, the real-time recommendation output is presented to provide feedback or reinforcement with respect to audience participant engagement during the conference call. Configuration information associated with a speaker device of the speaker participant is obtained. The configuration information associated with the speaker device may refer to audio output device settings, video output device settings, view selection, usage, and/or environment settings. The configuration information is used to determine how the real-time recommendation output is presented and where the real-time recommendation output is presented, as described herein. In some implementations, presenting the real-time recommendation output to the speaker participant may include causing a secondary device associated with the speaker participant to present the output. In some such implementations, the secondary device may be identified as part of the process for presenting the real-time recommendation output.
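The histogram-based aggregation and the mapping from engagement level to recommendation output can be sketched as follows. The sentiment labels, numeric values, and recommendation strings here are illustrative assumptions; the disclosure leaves the specific labels, value ranges, and wording open.

```python
from collections import Counter

# Illustrative sentiment values in a range; positive sentiment types sit
# at the higher end and negative types at the lower end, as described.
SENTIMENT_VALUES = {
    "elation": 2, "agreement": 1, "neutral": 0,
    "confusion": -1, "frustration": -2,
}

def engagement_level(sentiment_types):
    """Aggregate per-participant sentiment types into a histogram and
    derive an engagement level from the most frequent type."""
    histogram = Counter(sentiment_types)
    most_common, _ = histogram.most_common(1)[0]
    value = SENTIMENT_VALUES[most_common]
    if value > 0:
        return "positive"
    if value < 0:
        return "negative"
    return "neutral"

def recommend(level):
    """Map an engagement level to a real-time recommendation output."""
    return {
        "positive": "Maintain current topic and presentation behavior.",
        "negative": "Change topic or presentation behavior.",
        "neutral": "Pause the presentation for questions.",
    }[level]

level = engagement_level(["confusion", "confusion", "agreement", "neutral"])
print(level, "->", recommend(level))  # negative -> Change topic or presentation behavior.
```

Other aggregation rules described above, such as summing values per sentiment type and selecting the highest total, would slot in by replacing the `most_common` selection.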
The secondary device may be a mobile device or a wearable device. Some implementations may include a method that includes determining, during a conference, sentiment types of audience participants reacting to a speaker participant based on reaction detections from video data of the audience participants. An engagement level can be determined based on the sentiment types. Real-time recommendation outputs can be presented by a client device associated with the speaker participant based on the engagement level. In one or more implementations, the sentiment types of the audience participants are determined by aggregating the sentiment types for the audience participants which are perceptible and imperceptible to the speaker participant. In one or more implementations, determining the sentiment types of the audience participants comprises determining a context associated with the speaker participant of the reaction detection based on a real-time transcription of the conference and determining the sentiment type based on the context. In one or more implementations, the method may include evaluating content of a real-time transcription of the conference using a contextual machine learning model to identify the real-time recommendation output. In one or more implementations, the method may include determining a performance characterization of the speaker participant corresponding to the reaction detection, wherein the real-time recommendation output indicates, when the engagement level is positive, to continue discussing a current topic and to continue a speaker participant behavior. In one or more implementations, the method may include determining a performance characterization of the speaker participant corresponding to the reaction detection, wherein the real-time recommendation output indicates, when the engagement level is negative, to change at least one of a topic or a speaker participant behavior. 
In one or more implementations, the method may include maintaining engagement levels over a course of the conference to determine trends and determining an impact of real-time recommendation outputs on the engagement levels over the course of the conference. In one or more implementations, the real-time recommendation output indicates, when the engagement level is positive, to continue discussing a current topic. In one or more implementations, the real-time recommendation output indicates, when the engagement level is negative, to change to a new topic determined by a contextual machine learning model. In one or more implementations, the real-time recommendation output indicates, when the engagement level is neutral, to pause the conference for questions. Some implementations may include an apparatus that includes a memory and a processor configured to execute instructions stored in the memory to determine sentiment types of audience participants reacting to a speaker participant during a conference, the sentiment types based on reaction detections from video data of the audience participants, determine an engagement level based on the sentiment types, and present a real-time recommendation output by a client device associated with the speaker participant based on the engagement level. In one or more implementations, the processor is configured to execute the instructions to measure a number of occurrences of each sentiment type. In one or more implementations, the processor is configured to execute the instructions to determine the reaction detections based on facial recognition and movement detection on the video data of the audience participants. In one or more implementations, the processor is configured to execute the instructions to determine the engagement level based on the most frequently occurring sentiment type.
In one or more implementations, the processor is configured to execute the instructions to use output of a contextual machine learning model that evaluates content of a real-time transcription of the conference to determine a context for the reaction detections. Some implementations may include a non-transitory computer readable medium storing instructions operable to cause one or more processors to perform operations comprising determining, during a conference, sentiment types of audience participants reacting to a speaker participant based on reaction detections from video data of the audience participants, determining an engagement level based on the sentiment types, and presenting a real-time recommendation output by a client device associated with the speaker participant based on the engagement level. In one or more implementations, the operations include generating a histogram with bins for different sentiment types and determining the engagement level based on the most populated bin in the histogram. In one or more implementations, the operations include assigning a numeric value from a range of values to each sentiment type and determining the engagement level based on the sentiment type with the highest total value. In one or more implementations, the operations include evaluating content of a real-time transcription of the conference to determine keywords as the reaction detections. In one or more implementations, the operations include detecting the reaction detections from sound utterances present in audio content. The implementations of this disclosure can be described in terms of functional block components and various processing operations. Such functional block components can be realized by a number of hardware or software components that perform the specified functions.
For example, the disclosed implementations can employ various integrated circuit components (e.g., memory elements, processing elements, logic elements, look-up tables, and the like), which can carry out a variety of functions under the control of one or more microprocessors or other control devices. Similarly, where the elements of the disclosed implementations are implemented using software programming or software elements, the systems and techniques can be implemented with a programming or scripting language, such as C, C++, Java, JavaScript, assembler, or the like, with the various algorithms being implemented with a combination of data structures, objects, processes, routines, or other programming elements. Functional aspects can be implemented in algorithms that execute on one or more processors. Furthermore, the implementations of the systems and techniques disclosed herein could employ a number of conventional techniques for electronics configuration, signal processing or control, data processing, and the like. The words “mechanism” and “component” are used broadly and are not limited to mechanical or physical implementations, but can include software routines in conjunction with processors, etc. Likewise, the terms “system” or “tool” as used herein and in the figures, but in any event based on their context, may be understood as corresponding to a functional unit implemented using software, hardware (e.g., an integrated circuit, such as an ASIC), or a combination of software and hardware. In certain contexts, such systems or mechanisms may be understood to be a processor-implemented software system or processor-implemented software mechanism that is part of or callable by an executable program, which may itself be wholly or partly composed of such linked systems or mechanisms. 
Implementations or portions of implementations of the above disclosure can take the form of a computer program product accessible from, for example, a computer-usable or computer-readable medium. A computer-usable or computer-readable medium can be a device that can, for example, tangibly contain, store, communicate, or transport a program or data structure for use by or in connection with a processor. The medium can be, for example, an electronic, magnetic, optical, electromagnetic, or semiconductor device. Other suitable mediums are also available. Such computer-usable or computer-readable media can be referred to as non-transitory memory or media, and can include volatile memory or non-volatile memory that can change over time. The quality of memory or media being non-transitory refers to such memory or media storing data for some period of time or otherwise based on device power or a device power cycle. A memory of an apparatus described herein, unless otherwise specified, does not have to be physically contained by the apparatus, but is one that can be accessed remotely by the apparatus, and does not have to be contiguous with other memory that might be physically contained by the apparatus. While the disclosure has been described in connection with certain implementations, it is to be understood that the disclosure is not to be limited to the disclosed implementations but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures as is permitted under the law.
DETAILED DESCRIPTION FIG. 1A and FIG. 1B illustrate view 100 and view 140, respectively, of an assistant server device 104 adapting to changes in reliability, and/or patterns of reliability, of a third party server device 102 when reporting status updates for a particular client device 108 (e.g., a thermostat). The client device 108 can be connected to a local area network that is provided via a wireless network source 116, such as a Wi-Fi router. Other client devices can also be connected to the local area network, such as a tablet device and/or any other computing device (i.e., a client device 110) and an assistant device 114, which can be in communication with the assistant server device 104. The client device 110 can be controlled by a user 106 and can include a third party application 120, through which the user 106 can control the client device 108. The user 106 can interact with the third party application 120 via an interface 118 of the third party application 120 in order to control the client device 108. The interface 118 can be presented at a display panel 128 of the client device 110, and can present the user with controls for modifying an operational status of the client device 108. For instance, the third party application 120 can present a first control interface 124 and a second control interface 126. The first control interface 124 can include various graphical control elements through which the user 106 can modify one or more operations and/or settings of the client device 108. The second control interface 126 can include one or more graphical control elements from which the user 106 can control one or more operations and/or settings of the client device 108. For example, the interface 118 can provide a graphical control element from which the user 106 can change a temperature setting of the client device 108, when the client device 108 is a thermostat. The interface 118 can also include a current temperature reading (e.g., 65 degrees), which can present a current temperature in a room 112 in which the user is located.
A current temperature of the room 112 can be 65 degrees, and the user can provide an input to a graphical control element of the interface 118 in order to adjust the room 112 temperature to be 72 degrees. In response to the user 106 providing the input, or otherwise interacting with the interface 118, the third party application 120 can transmit a device request 136 over the local area network to the third party server device 102. The device request 136 can cause an operating status of the client device 108 to change, in order to effectuate the settings change requested by the user 106 via the third party application 120. The third party server device 102 can be tasked with updating server devices and/or client devices with changes to the operational status of the client device 108. For example, the user 106 can access the assistant device 114 in order to determine a status of the client device 108. The status of the client device 108 can be presented at a display panel 134 of the assistant device 114, according to status data that is available to the assistant device 114 and/or the assistant server device 104. However, if the third party server device 102 is not proactive about providing status updates to the assistant device 114, the assistant device 114 may not have accurate status data to present to the user 106. In other words, when a metric corresponding to the third party server device 102 indicates that the third party server device 102 is not reliable, the assistant device 114 can elect to query the third party server device 102 for an updated status, at least in response to a user input, such as a request for a status of the client device 108. Alternatively, when the metric corresponding to the third party server device 102 indicates that the third party server device 102 is reliable, the assistant device 114 can elect to rely on locally stored status data, and bypass querying the third party server device 102 for an updated status.
In this way, the metric would mitigate the need to query the third party server device 102 every time the user 106 requested a status update for a particular client device. However, in some implementations, the assistant server device 104 can determine a latency of the third party server device 102 when reporting status changes exhibited by the client device 108. One or more metrics can be generated based on the latency of the third party server device 102, and the one or more metrics can thereafter be used to determine whether to: proactively query the third party server device 102 for status updates regarding the client device 108, and/or bypass providing such queries, at least in furtherance of relying on the third party server device 102 to proactively communicate status data to the assistant server device 104. For example, in response to the user 106 sliding a graphical control element 138 to change a setting of the client device 108 via the interface 118, a device request 136 can be transmitted to the third party server device 102. As a result, some amount of latency can be exhibited by the third party server device 102 when providing a corresponding status update to the assistant server device 104 and/or the assistant device 114. For example, latency can be, but is not limited to, a time period between the user 106 changing a setting of the client device 108, and the assistant server device 104 receiving updated status data. As illustrated in FIG. 1B, the third party server device 102 can receive a status request 138 from the assistant server device 104 in response to the assistant device 114 and/or the assistant server device 104 not receiving a status update proactively from the third party server device 102.
In other words, based on a metric generated from the latency of the third party server device 102, the assistant server device 104 can proactively determine whether to issue the status request 138 to the third party server device 102, or wait for the third party server device 102 to proactively send a status update to the assistant server device 104. In some implementations, a metric can be generated based on a latency corresponding to a time period between a device setting of the client device 108 being changed via the third party application 120, and a time when the assistant server device 104 and/or the assistant device 114 has received status data from the third party server device 102. Alternatively, or additionally, the metric can be generated based on a latency between a time that the third party server device 102 transmitted a most recent status update to the assistant server device 104, and a time when the third party server device 102 received a request from the third party application 120 and/or the user 106. In some implementations, a change in status of the client device 108 can be performed via the client device 110 and/or directly at an interface of the client device 108. For example, the client device 108 can include one or more interfaces that are controllable via a touch input, gesture input, spoken input, and/or any other input that can be provided by the user 106. In some implementations, the latency from which the metric is generated can be based on a period between a time when the client device 108 received a request from the user 106 at an interface that is physically integral to the client device 108, and a time when the assistant server device 104 and/or the assistant device 114 received a status update from the third party server device 102, subsequent to the request from the user 106.
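Each latency definition above amounts to a timestamp difference between a triggering event and receipt of the corresponding status update. A minimal sketch, with illustrative timestamps (the event names and values are assumptions, not part of the disclosure):

```python
from datetime import datetime, timedelta

def reporting_latency(triggering_event_at, status_received_at):
    """Latency in seconds between a triggering event (e.g., a device
    setting of the client device being changed via the third party
    application, or a request at the device's own interface) and the
    assistant server device receiving the corresponding status update."""
    return (status_received_at - triggering_event_at).total_seconds()

setting_changed = datetime(2023, 1, 1, 12, 0, 0)   # illustrative timestamp
status_received = setting_changed + timedelta(seconds=3, milliseconds=500)
print(reporting_latency(setting_changed, status_received))  # 3.5
```

The same helper applies to each variant by substituting which pair of events is timestamped.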
FIG. 2A and FIG. 2B illustrate a view 200 and a view 240, respectively, of an assistant server device 204 adapting to reliability of a third party server device 202 with respect to reporting status changes at a client device 208. A user 206 can control the client device 208 using one or more spoken utterances 238, which can be provided to an automated assistant interface of an assistant device 214. The assistant device 214 can be in communication with the client device 208 via at least a local area network, such as a Wi-Fi network provided by a Wi-Fi router 216, and/or a wide area network, such as the internet. In response to receiving the spoken utterance 238, the assistant device 214 can generate audio data, which can characterize the spoken utterance 238, and provide the audio data to the assistant server device 204 for processing. Alternatively, or additionally, the assistant device 214 can locally process the audio data. Processing the audio data can include, for example, converting the audio data to text (or another representation), generating at least natural language understanding (NLU) data from the converted representation, and identifying, based on the NLU data, one or more intent requests that are reflected by the spoken utterance 238. Based on processing of the audio data by the assistant server device 204 and/or the assistant device 214, the assistant server device 204 can transmit a device request 236 to a third party server device 202. In response to receiving the device request 236, the third party server device 202 can communicate with the client device 208, in furtherance of fulfilling one or more requests from the user 206. For example, the spoken utterance 238 can include natural language content such as, “Assistant, please set my thermostat to 72°.” Based on this natural language content, the device request 236 can be generated to identify an intent for modifying a setting of the client device 208, and, specifically, modifying a temperature setting of the client device 208 (e.g., a thermostat).
The user206can provide the spoken utterance238after coming home, entering a room212of their home, and seeing a current temperature reading presented at a display panel234of the assistant device214. The assistant device214can present an interface230of an assistant application, which can provide one or more graphical control elements232for controlling one or more settings of the client device208. Furthermore, the interface230can present status data corresponding to the client device208. For example, prior to the user206providing the spoken utterance238, the interface230of the assistant application can indicate that a setting of the client device208is set to 65 degrees. This status data can be based on data provided by the third party server device202prior to the user providing the spoken utterance238. The assistant server device204, and/or the assistant device214, can determine one or more metrics characterizing a reliability of a third party server device202and/or the client device208with respect to providing such status data updates. Based on these one or more metrics, the assistant server device204and/or the assistant device214can determine whether to proactively request status data from the third party server device202, and/or wait for the third party server device202and/or the client device208to provide status updates. In some implementations, one or more metrics can be based on a latency exhibited by the third party server device202when reporting a status update after a user has provided a spoken utterance238. The spoken utterance can be provided in furtherance of controlling the third party client device208that is in communication with the third party server device202. For example, an amount of latency can be based on an amount of time delay between the assistant device214receiving the spoken utterance238from the user206and a time when the assistant device214receives a status update corresponding to the spoken utterance238.
Alternatively, or additionally, an amount of latency can be based on a time delay between when the assistant server device204provides the device request236to the third party server device202, and when the assistant server device204receives status data242from the third party server device202. In some implementations, one or more metrics characterizing a reliability of a third party server device202and/or the client device208, can be based on one or more measurements of latency, and/or a comparison of latency to one or more reference values. For example, an automated assistant, and/or the assistant server device204, can interact with a variety of different third party server devices202corresponding to a variety of different third party entities. Each third party entity can cause some amount of latency at their respective third party server device and/or their respective client device. A reference value, in some implementations, can characterize an average amount of latency caused by one or more third party entities. Therefore, a metric value can indicate a higher reliability for a particular third party server device that exhibits less latency than the average latency for one or more third party entities. Moreover, another metric value can indicate a lower reliability for a particular third party server device that exhibits more latency than the average latency for one or more third party entities. In some implementations, a reference value can be based on multiple different types of latency properties (e.g., each latency property can characterize one or more different types of interactions involving a third party server device and/or a third party client device, characterize a type(s) of devices, characterize a time of day), or a single type of latency measurement.
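The comparison described above, in which a metric indicates higher or lower reliability relative to an average latency across third party entities, can be sketched as follows; the names and the choice of a plain arithmetic mean are assumptions made only for this sketch:

```python
# Illustrative comparison of one third party server's latency against a
# reference value defined as the average latency of peer third party
# entities. Names and the averaging choice are assumptions.

def reliability_relative_to_peers(entity_latency: float,
                                  peer_latencies: list[float]) -> str:
    """Return "higher" when the entity exhibits less latency than the
    average across peers (higher reliability), else "lower"."""
    reference_value = sum(peer_latencies) / len(peer_latencies)
    return "higher" if entity_latency < reference_value else "lower"

# Average peer latency below is 1.0 s: 0.3 s is faster, 2.0 s is slower.
print(reliability_relative_to_peers(0.3, [0.5, 1.0, 1.5]))  # higher
print(reliability_relative_to_peers(2.0, [0.5, 1.0, 1.5]))  # lower
```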
When the third party server device202corresponds to a metric indicating lower reliability relative to other third party entities, the assistant server device204can provide a status request244to the third party server device202subsequent to providing the device request236. Alternatively, or additionally, when the third party server device202corresponds to a different metric indicating a high reliability relative to other third party entities, the assistant server device204can bypass providing the status request244to the third party server device202subsequent to providing the device request236. In response to receiving the status request244, the third party server device202can generate updated status data, and provide the status data to the assistant server device204and/or the assistant device214. In response to receiving the updated status data, the assistant server device204and/or the assistant device214can cause the display panel234of the assistant device214to render an interface230that characterizes the updated status data242. For instance, because the user206previously requested that the temperature setting of the client device208be set to 72 degrees, the interface230can characterize this most recent setting based on the updated status data received from the third party server device202. In this way, using such metrics allows the assistant device214, and/or the assistant server device204, to determine whether to be proactive about retrieving updated status data, or rely on the third party server device202to proactively provide the updated status data. By operating the assistant server device204and/or the assistant device214according to one or more metrics characterizing reliability of different third party devices and/or entities, computational resources can be preserved. 
For example, network bandwidth can be preserved when an assistant server device204acknowledges a reliability of a third party server device, and therefore bypasses transmitting frequent status update requests. Furthermore, for those third party entities that may be indicated as unreliable, the assistant server device204and/or the assistant device214will have more accurate data regarding other client devices, at least by proactively requesting such data. As a result, the assistant server device204and/or the assistant device214will be able to make more informed decisions about controlling certain client devices. For example, a user may rely on inaccurate data about a status of a client device, and issue a request to the automated assistant in order to modify the status of the client device. As a result, the request may cause the assistant device214to implement instructions that cause a particular client device to operate inefficiently, perform redundant operations, and/or otherwise waste power and/or other computational resources. Such wasteful operations can be eliminated by employing the implementations provided herein. FIG.3illustrates a system300for characterizing reliability of various different third party servers, at least when reporting third party device statuses to an automated assistant304, as well as adapting protocols for device ecosystems affected by such reliability. The automated assistant304can operate as part of an assistant application that is provided at one or more computing devices, such as a client device318and/or a server device302. A user can interact with the automated assistant304via an assistant interface, which can be a microphone, a camera, a touch screen display, a user interface, and/or any other apparatus capable of providing an interface between a user and an application. 
For instance, a user can initialize the automated assistant304by providing a verbal, textual, and/or a graphical input to an assistant interface320to cause the automated assistant304to perform a function (e.g., provide data, control a peripheral device, access an agent, generate an input and/or an output, etc.). The client device318can include a display device, which can be a display panel that includes a touch interface for receiving touch inputs and/or gestures for allowing a user to control applications of the client device318via the touch interface. In some implementations, the client device318can lack a display device, thereby providing an audible user interface output, without providing a graphical user interface output. Furthermore, the client device318can provide a user interface, such as a microphone, for receiving spoken natural language inputs from a user. In some implementations, the client device318can include a touch interface and can be void of a camera, but can optionally include one or more other sensors. The client device318and/or other third party client devices318can be in communication with the server device302over a network340, such as the internet. Additionally, the client device318and the other computing devices434can be in communication with each other over a local area network (LAN), such as a Wi-Fi network. The client device318can offload computational tasks to the server device302in order to conserve computational resources at the client device318. For instance, the server device302can host the automated assistant304, and client device318can transmit inputs received at one or more assistant interfaces320to the server device302. However, in some implementations, the automated assistant304can be hosted at the client device318as a client automated assistant322. In various implementations, all or less than all aspects of the automated assistant304can be implemented on the client device318. 
In some of those implementations, aspects of the automated assistant304are implemented via the client automated assistant322of the client device318and can interface with the server device302, which can implement other aspects of the automated assistant304. The server device302can optionally serve a plurality of users and their associated assistant applications via multiple threads. In implementations where all or less than all aspects of the automated assistant304are implemented via a client automated assistant322at the client device318, the client automated assistant322can be an application that is separate from an operating system of the client device318(e.g., installed “on top” of the operating system)—or can alternatively be implemented directly by the operating system of the client device318(e.g., considered an application of, but integral with, the operating system). In some implementations, the automated assistant304and/or the client automated assistant322can include an input processing engine306, which can employ multiple different modules for processing inputs and/or outputs for the client device318and/or the server device302. For instance, the input processing engine306can include a speech processing engine308, which can process audio data received at an assistant interface320to identify the text embodied in the audio data. The audio data can be transmitted from, for example, the client device318to the server device302in order to preserve computational resources at the client device318. The process for converting the audio data to text can include a speech recognition algorithm, which can employ neural networks, and/or statistical models for identifying groups of audio data corresponding to words or phrases. 
The text converted from the audio data can be parsed by a data parsing engine310and made available to the automated assistant as textual data that can be used to generate and/or identify command phrase(s), intent(s), action(s), slot value(s), and/or any other content specified by the user. In some implementations, output data provided by the data parsing engine310can be provided to a parameter module312to determine whether the user provided an input that corresponds to a particular intent, action, and/or routine capable of being performed by the automated assistant304and/or an application or agent that is capable of being accessed via the automated assistant304. For example, assistant data316can be stored at the server device302and/or the client device318as assistant device data316, and can include data that defines one or more actions capable of being performed by the automated assistant304and/or client automated assistant322, as well as parameters necessary to perform the actions. In some implementations, the server device302, the automated assistant304, and/or the client device318can track reliability of a third party server device350and/or the third party client device336. Specifically, reliability of a third party server device350and/or the third party client device336can be tracked with respect to latency when reporting status updates, accuracy of particular status updates, and/or any other characteristic that can be indicative of reliability of a server device and/or a client device. For example, a user can provide an input to an assistant interface320, which can cause the automated assistant304to provide an output via an output generating engine314. The output can be based on data provided by the parameter engine312, which can process the input from the user. The output can be provided to the third party server device350in order to influence an operation and/or a setting of a third party client device336.
For example, the third party client device336can be one of multiple third party client devices connected within a home of the user, or otherwise associated with the user. The user can control third party client devices336via the automated assistant304, by permitting the automated assistant304to communicate with the third party client devices336and/or the third party server device350. The output from the output generating engine314can be provided to the third party server device350. For example, the output from the server device302can characterize an intent requested by the user, such as an alarm system control intent. An input request engine354can receive the output from the server device302, and determine that the output has been provided by the server device302, and that the output is based on an input from a particular user interacting with their respective automated assistant. Furthermore, a client command engine356can generate one or more client commands based on the output of the server device302. Based on one or more processes of the input request engine354and/or the client command engine356, the third party server device350can transmit one or more commands to the third party client device336, which the user intended to control with their input. When the third party client device336receives the one or more commands, a command engine342of the third party client device336can process the one or more commands. Command data can be generated by the command engine342and used by a setting engine344to determine one or more settings, and/or one or more operations, to modify according to the one or more commands. Any changes to settings, operations, and/or any other feature of the third party client device336can be characterized in status data that is stored as local device data346.
When the status data has been generated at the third party client device336, the third party client device336can confirm completion of execution of the one or more commands for the third party server device350. In some implementations, when the third party server device350transmits one or more commands to the third party client device336based on the input from the user to the automated assistant, the third party server device350can also transmit confirmation to the server device302and/or the client device318. The confirmation can indicate to the server device302and/or client device318that the third party server device350has acknowledged the input from the user and has acted on the input from the user. Thereafter, the third party server device350can determine that the third party client device336has executed the one or more commands successfully and, in response to determining that the third party client device336has executed the one or more commands successfully, the third party server device350can update any client device data358, in order to characterize an updated operating status of the third party client device336. Furthermore, depending on the third party entity that is responsible for the third party server device350, the third party server device350may send status data to the server device302. However, different third party server devices may provide such status updates according to a variety of different protocols. In some implementations, a third party server device350can operate according to a protocol in which a client status engine352of the third party server device350provides requests for confirmation that certain commands were executed by a third party client device336. Based on such confirmations from the third party client device336, the client status engine352can generate various sets of status data, which can be stored as client device data358at the third party server device350.
However, a third party server device350may operate according to, or fail to operate according to, another protocol for providing status updates to other interested parties, such as the server device302, the automated assistant304, and/or the client device318. A third party interaction engine326of the server device302can collect various data about various different third party server devices350in furtherance of eliminating status inaccuracies and/or delays being exhibited at the client device318. For example, temporal data corresponding to interactions between the server device302and the third party server devices350can be collected by the third party interaction engine326. The third party interaction engine326can identify specific times when a request is transmitted from the server device302, when a request is received at a client device318from a user, when a request is received at the server device302from the client device318, when the third party server device350receives instructions from the server device302, when the third party server device350sends command data to one or more third party client devices336, when one or more third party client devices336confirm receipt of the command data to the third party server device350, when the third party server device350provides a confirmation of receipt of instructions from the server device302, when a third party server device350requests a status update from a third party client device336, when a third party server device350receives status data from a third party client device336, when the server device302receives client status data from the third party server device350, and/or any other temporal feature that can characterize a transaction occurring with respect to at least one device.
The server device302can include a reliability engine324that can use temporal data generated by the third party interaction engine326in order to generate one or more metrics328based on one or more temporal features of any transaction discussed herein. The one or more metrics can characterize a reliability of one or more third party server devices350, at least with respect to proactively providing status updates to the automated assistant304, the server device302, and/or the client device318. Furthermore, the third party interaction engine326can use one or more metrics328to make decisions about whether and/or how to interact with a particular third party server device350. For example, in response to determining that the user provided a request to change an operating setting of a third party client device336, the server device302can determine whether to immediately query the third party server device350after communicating the user request to the third party server device350, or wait for the third party server device350to provide updated status data corresponding to the third party client device336. In some implementations, the reliability engine324can determine and/or generate one or more reference values360, with which the third party interaction engine326can compare the metrics328. The reference value360can be based on one or more transactions corresponding to a third party server device350and/or a third party client device336. Furthermore, a reference value360can indicate a reliability of a particular third party entity, at least with respect to how reliable a third party entity is in employing services that proactively report status changes of server devices and/or client devices. 
In some implementations, a reference value and/or a metric can characterize a reliability of multiple different third party entities, the reliability of multiple different third party servers, a reliability of multiple different third party client devices, a reliability of certain types of client devices, a reliability of certain types of third party entities, and/or any other third party entities and/or devices capable of interacting with an automated assistant. As an example, and in some implementations, a reference value can characterize an average amount of latency exhibited by one or more different third party server devices when reporting a client device status update subsequent to fulfilling a user request to an automated assistant. In other words, at least one reference value of the reference values360can be, for instance, 1 hour, when an average latency of multiple different third party server devices350is 1 hour. Therefore, when an automated assistant is interacting with a third party server device350associated with a third party entity, the third party interaction engine326can access a metric corresponding to that third party entity and compare the metric to the reference value. When the metric corresponds to an amount of latency that is less than the reference value or average value, the third party interaction engine326can bypass proactively querying the third party server device350for a client device status update. However, when the metric corresponds to an amount of latency that is greater than the reference value or average value, a third party interaction engine326can proactively query the third party server device350for a client device status update. It should be noted that the reference value can correspond to an amount of time that is less than or greater than any number of microseconds, milliseconds, seconds, hours, days, and/or any other units of time.
For example, a reference value can be, but is not limited to, 500 milliseconds, and therefore comparison of metrics to this particular reference value can provide a basis from which to query a third party server device350for a status update, or bypass providing a query to the third party server device350for the status update. FIG.4illustrates a method400for determining whether to query a third party server for a status update, corresponding to a particular third party client device, based on one or more metrics. The method400can be performed by one or more computing devices, applications, and/or any other apparatus or module capable of interacting with a third party server device and/or a third party client device. The method400can include an operation402of determining whether updated device status data has been received for a particular device. When updated device status data has been received for a particular device, the method400can proceed from the operation402to the operation404. The operation404can include determining a time delay or the amount of time since a latest request for modifying an operation status of a particular device. In other words, the operation404can include determining the time period between a time when the updated device status data was received for that particular device and a time when a latest request for modifying the operating status of the particular device was provided to a third party server device, a particular client device, and/or any other associated device. The method400can further include an optional operation406of determining an accuracy of a stored operating status relative to the updated status data. The optional operation406can be determined periodically by a server device and/or a client device that is in communication with the third party server device and/or the third party client device.
In some implementations, the optional operation406can be determined randomly, and/or in response to a request for device statuses from a user, in order to gauge how reliable a particular third party server device is with respect to being proactive about providing status updates. For example, a user may have modified and/or initialized an operation of a third party client device using an interface of the third party client device, or a third party application corresponding to the third party client device. Depending on how proactive the third party entity is with respect to providing status updates, an automated assistant or assistant related device may or may not receive any status update regarding the actions of the user. Therefore, if the server device checks the status of the third party client device, and the status indicated by the third party server device is different than the status stored at the server device, the server device can generate data to characterize this inaccuracy of device status. Furthermore, if the server device checks the status of the third party client device, and the status indicated by the third party server device is the same as the status stored at the server device, the server device can capture this accuracy also using data. In this way, the server device and/or the client device, can characterize a probability that status data stored by the server device and/or the client device for a particular third party client device is accurate at any given time. Therefore, if the server device determines that there is a high probability that status data for a particular third party client device is accurate, and a user requests that the third party client device operate according to the same status data, the server device can bypass requesting the third party server device to modify an operation of the third party client device.
As a result, this can save network bandwidth and computational resources, as many inconsequential or otherwise unnecessary transactions between clients and servers would be avoided. However, if the server device determines that there is a low probability that status data for a particular third party client device is accurate, and the server device receives a request from a user to operate the third party client device according to the same particular status data, then, despite the server device being under the impression that the third party client device is already operating as the user has requested, the server device can, regardless, communicate the request to the third party server device. In response, the third party server device can process the request and indicate that the third party client device is already operating according to the same particular status data. Alternatively, in response, when the third party client device is not already operating according to the same particular status data stored by the server device, the third party server device can process the request and cause the third party client device to operate in furtherance of fulfilling the request from the user. The method400can further include an operation408of generating and/or modifying one or more metrics corresponding to third party server device reliability. The one or more metrics can be based on the time delay and/or time period determined at the operation404, and/or the accuracy determined at the optional operation406. In other words, in some implementations, the one or more metrics can be based on an amount of latency exhibited by one or more third party server devices and/or an accuracy determined for a particular third party server device. In some implementations, a metric can be one or more different values selected from a range of different values. Alternatively, or additionally, a metric can be a discrete value, such as a 1 or a 0. 
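The bypass logic described above can be sketched as follows; the probability threshold, the string statuses, and all names are assumptions chosen only to illustrate the high-probability versus low-probability branches:

```python
# Illustrative bypass decision: skip forwarding a request when the stored
# status is likely accurate and already matches what the user asked for.
# The 0.9 threshold and all names are assumptions for this sketch.

def should_forward_request(accuracy_probability: float,
                           requested_status: str,
                           stored_status: str,
                           threshold: float = 0.9) -> bool:
    """Return False (bypass) when the device is probably already in the
    requested state; otherwise forward the request to the third party
    server device, even if the stored status appears to match."""
    if requested_status == stored_status and accuracy_probability >= threshold:
        return False  # high probability of accuracy: bypass the request
    return True       # low probability or differing status: forward it

print(should_forward_request(0.95, "72 degrees", "72 degrees"))  # bypass
print(should_forward_request(0.40, "72 degrees", "72 degrees"))  # forward
```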
For example, the metric can be defined as 1 when the time delay is greater than, equal to, or less than a particular reference value. Furthermore, the metric can be defined as 0 when the time delay is less than, equal to, or greater than the particular reference value or values. In one non-limiting example, the metric can be assigned 1 when the time delay is 2 seconds, and the reference value is 5 seconds (i.e., when the time delay is less than the reference value). Alternatively, in another non-limiting example, the metric can be assigned as 0 when the time delay is 7 seconds and the reference value is 5 seconds (i.e., when the time delay is more than the reference value). In some implementations, the one or more metrics can be based on data generated at operation406and/or operation404, and/or any other data that can indicate reliability of a third party server device. The one or more metrics can be selected from a range of different values, and can optionally be based on one or more reference values. In some implementations, the one or more metrics can be discrete or binary values, such as a 1 or a 0, indicating that a third party server device, that is responsible for providing status data regarding one or more third party client devices, is reliable or not reliable with respect to proactively providing status data. In some implementations, the one or more metrics can be used to determine whether to query a particular third party server device for updated status data. The method400can proceed from the operation408to the operation402, in order to determine whether any further updated device status data has been received for a particular device. When, at the operation402, no updated status data has been received for a particular device at a given time, the method400can proceed from the operation402to the operation410.
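The non-limiting examples above (2-second and 7-second delays against a 5-second reference value) can be reproduced with a short sketch; this encodes just one of the possible conventions described, with illustrative names:

```python
# Illustrative binary metric: 1 when the time delay is below the
# reference value, 0 otherwise, matching the non-limiting examples.

def binary_metric(time_delay: float, reference_value: float) -> int:
    """Discrete reliability metric under one convention from the text."""
    return 1 if time_delay < reference_value else 0

print(binary_metric(2.0, 5.0))  # 2 s delay vs. 5 s reference -> 1
print(binary_metric(7.0, 5.0))  # 7 s delay vs. 5 s reference -> 0
```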
The operation410can include determining whether one or more device metrics provide a basis to query a third party server device for status data regarding the particular device. For example, one or more metrics can be compared to one or more reference values in order to determine whether there is a basis for providing a query to the third party server device for status data. In some implementations, one or more metrics can be processed to determine whether the one or more metrics satisfy a particular threshold, and/or whether the one or more metrics are greater than, equal to, and/or less than one or more particular reference values. When the one or more metrics provide no basis from which to query the third party server device for status data, the method400can proceed from the operation410to the operation402. Alternatively, when the one or more metrics do provide a basis for providing a query to the third party server device for status data, the method400can proceed from the operation410to the operation412. By completing the method400, at least operations402,410, and412, an assistant server and/or an assistant device can determine whether and/or when to query a third party server device for updated operational status data for a client device, at least when an operational status update has not been received for a particular period of time. The operation412can include an operation of accessing status data for the particular device via the third party server device. In order to access the status data, the server device can transmit a request to the third party server device, and in response, the third party server device can provide status data back to the requesting server device and/or a corresponding client device. The method400can further include an operation414of determining whether an automated assistant has received a request to modify an operating status of the particular device.
When the automated assistant has not received a request to modify an operating status of the particular device, the method400can proceed from the operation414to the operation402. However, when the assistant has received a request to modify an operating status of the particular device, the method400can proceed from the operation414to the operation416. The operation416can include causing the particular device to have a modified operating status according to the request received by the automated assistant. Furthermore, the method400can proceed from the operation416to the operation402, in which a determination is made as to whether updated status data has been received for the particular device affected by the request from the operation414and/or the operation416. The reliability of the third party server device may be determined to vary over time, for example by exhibiting reliability patterns corresponding to the time of day or day of the week. For example, a time delay associated with a particular third party server may be determined, using the techniques described herein, to be larger during evenings/mornings and/or weekends than at other times of day (or larger for other times of the day than other times such as evenings/mornings and/or weekends). On the basis of these determinations, a first party server device can modify its behavior with regard to proactively seeking operational status updates for particular third party client devices associated with the third party server. At times of the day/days of the week at which the third party server is determined to be satisfactorily reliable in proactively providing status updates, the first party server may reduce (or eliminate) its proactivity in requesting status updates from the third party server. 
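One assumed reading of the branching in method 400 (operations 402, 404-408, 410, and 412) can be sketched as a single decision step; the return strings and branch conditions are illustrative only and do not reflect the claimed implementation:

```python
# Illustrative single pass through the decision loop of method 400.

def method400_step(updated_status_received: bool,
                   metrics_warrant_query: bool) -> str:
    if updated_status_received:
        # Operations 404-408: measure the delay, optionally check
        # accuracy, then generate/modify the reliability metrics.
        return "update metrics"
    if metrics_warrant_query:
        # Operation 412: access status data via the third party server.
        return "query third party server"
    # No update and no basis to query: loop back to operation 402.
    return "wait"

print(method400_step(True, False))   # update metrics
print(method400_step(False, True))   # query third party server
print(method400_step(False, False))  # wait
```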
Correspondingly, at times of the day/days of the week at which the third party server is determined to be less than satisfactorily reliable in proactively providing status updates, the first party server may increase its proactivity in requesting status updates from the third party server. This may contribute to the first party server being efficient in terms of its usage of network resources and computing resources at the first and third party servers, whilst also maintaining accurate information at the first party server with regard to internal operational states of one or more third party client devices associated with the third party server. FIG.5is a block diagram of an example computer system510. Computer system510typically includes at least one processor514which communicates with a number of peripheral devices via bus subsystem512. These peripheral devices may include a storage subsystem524, including, for example, a memory525and a file storage subsystem526, user interface output devices520, user interface input devices522, and a network interface subsystem516. The input and output devices allow user interaction with computer system510. Network interface subsystem516provides an interface to outside networks and is coupled to corresponding interface devices in other computer systems. User interface input devices522may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a touchscreen incorporated into the display, audio input devices such as voice recognition systems, microphones, and/or other types of input devices. In general, use of the term “input device” is intended to include all possible types of devices and ways to input information into computer system510or onto a communication network. User interface output devices520may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. 
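The time-of-day/day-of-week adaptation described above can be sketched as a small scheduling table. The bucket scheme, threshold, and interval values here are assumptions for illustration, not the patented method.

```python
from datetime import datetime
from typing import Optional

# Illustrative sketch: a first party server keeps per-(weekday, hour)
# reliability estimates for a third party server and scales its proactive
# polling accordingly. Names and constants are assumptions.

BASE_POLL_SECONDS = 300.0
RELIABILITY_THRESHOLD = 0.9  # at or above this, proactive polling is skipped

def poll_interval(reliability_by_bucket: dict, when: datetime) -> Optional[float]:
    """Return the proactive polling interval for the given time, or None
    when the third party server is deemed reliable enough at that time
    that proactive polling can be reduced to nothing."""
    bucket = (when.weekday(), when.hour)
    reliability = reliability_by_bucket.get(bucket, 0.5)
    if reliability >= RELIABILITY_THRESHOLD:
        return None  # server pushes updates reliably in this bucket
    # Less reliable buckets are polled more often (shorter interval).
    return BASE_POLL_SECONDS * reliability

# A weekend evening observed to be unreliable gets a short interval,
# while a reliable weekday morning is not proactively polled at all.
table = {(5, 20): 0.5, (1, 10): 0.95}
print(poll_interval(table, datetime(2023, 1, 7, 20)))  # Saturday 8 pm -> 150.0
print(poll_interval(table, datetime(2023, 1, 3, 10)))  # Tuesday 10 am -> None
```

This keeps network and compute usage proportional to the observed unreliability of each time bucket.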
The display subsystem may include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem may also provide non-visual display such as via audio output devices. In general, use of the term “output device” is intended to include all possible types of devices and ways to output information from computer system510to the user or to another machine or computer system. Storage subsystem524stores programming and data constructs that provide the functionality of some or all of the modules described herein. For example, the storage subsystem524may include the logic to perform selected aspects of method400, and/or to implement one or more of third party server device102, assistant server device104, third party application120, assistant device114, client device110, third party server device202, assistant server device204, assistant device214, server device302, client device318, third party server device350, and/or third party client device336. These software modules are generally executed by processor514alone or in combination with other processors. Memory525used in the storage subsystem524can include a number of memories including a main random access memory (RAM)530for storage of instructions and data during program execution and a read only memory (ROM)532in which fixed instructions are stored. A file storage subsystem526can provide persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The modules implementing the functionality of certain implementations may be stored by file storage subsystem526in the storage subsystem524, or in other machines accessible by the processor(s)514. 
Bus subsystem512provides a mechanism for letting the various components and subsystems of computer system510communicate with each other as intended. Although bus subsystem512is shown schematically as a single bus, alternative implementations of the bus subsystem may use multiple busses. Computer system510can be of varying types including a workstation, server, computing cluster, blade server, server farm, or any other data processing system or computing device. Due to the ever-changing nature of computers and networks, the description of computer system510depicted inFIG.5is intended only as a specific example for purposes of illustrating some implementations. Many other configurations of computer system510are possible having more or fewer components than the computer system depicted inFIG.5. In situations in which the systems described herein collect personal information about users (or as often referred to herein, “participants”), or may make use of personal information, the users may be provided with an opportunity to control whether programs or features collect user information (e.g., information about a user's social network, social actions or activities, profession, a user's preferences, or a user's current geographic location), or to control whether and/or how to receive content from the content server that may be more relevant to the user. Also, certain data may be treated in one or more ways before it is stored or used, so that personal identifiable information is removed. For example, a user's identity may be treated so that no personal identifiable information can be determined for the user, or a user's geographic location may be generalized where geographic location information is obtained (such as to a city, ZIP code, or state level), so that a particular geographic location of a user cannot be determined. Thus, the user may have control over how information is collected about the user and/or used. 
In some implementations, a method implemented by one or more processors is set forth as including operations such as receiving, from a third party server device, first status data indicating a status of a third party client device that is controllable via an assistant device, wherein the assistant device includes an automated assistant interface via which the user interacts with an automated assistant in order to control the third party client device. The method can further include determining, based on receiving the first status data, a metric that is based on an amount of time delay between receiving the first status data and a previous time associated with a request for modifying an operating status of the third party client device. The method can further include, subsequent to determining the metric that is based on the amount of time delay: determining whether to provide a status request to the third party server device for retrieving second status data indicating a current status of the third party client device. The method can further include, when the metric indicates at least some basis to not query the third party server device for the current operating status of the third party client device: bypassing, based on the metric, providing the status request to the third party server device. The method can further include, when the metric indicates at least some basis to query the third party server device for the current operating status of the third party client device: providing, based on the metric, the status request to the third party server device, and receiving, in response to the third party server device receiving the status request, the second status data, which characterizes the current operating status of the third party client device. 
In some implementations, the method can further include, subsequent to determining the metric that is based on the amount of time delay: determining that a user has provided a gesture to an interface of the assistant device, or another client device, in furtherance of controlling the third party client device, wherein determining whether to provide the status request is based on determining that the user provided the gesture to the interface in furtherance of controlling the third party client device. In some implementations, the gesture is a tactile gesture and/or a spoken utterance, and the interface includes a touch display panel and/or a microphone. In some implementations, the method can further include generating, in response to receiving the second status data, output data representative of the current operating status of the third party client device; and causing the output data to be rendered via the assistant device, or another client device. In some implementations, determining the metric that is based on the amount of time delay includes: modifying a previously generated metric based on the amount of time delay between receiving the first status data and the previous time associated with the status of the third party client device. In some implementations, modifying the previously generated metric based on the amount of time delay includes adapting the previously generated metric to limit the basis from which to query the third party server device, for the current operating status of the third party client device, when the amount of time delay is less than a reference amount of time delay. In some implementations, modifying the previously generated metric based on the amount of time delay includes adapting the previously generated metric to expand the basis from which to query the third party server device, for the current operating status of the third party client device, when the amount of time delay is greater than the reference amount of time delay. 
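The adaptation of a previously generated metric against a reference delay can be sketched as a simple smoothing update. The function name, the smoothing rule, and the "metric above 1.0 means a basis to query" convention are assumptions for illustration only.

```python
# Hedged sketch: adapt a previously generated metric based on the observed
# time delay. Delays shorter than the reference shrink the metric (limiting
# the basis for querying); longer delays expand it. Names and the blending
# rule are assumptions, not the patented method.

SMOOTHING = 0.5

def adapt_metric(previous_metric: float, observed_delay: float,
                 reference_delay: float) -> float:
    """Blend the previous metric toward the ratio of observed to reference
    delay; larger values indicate a stronger basis to query."""
    ratio = observed_delay / reference_delay
    return (1 - SMOOTHING) * previous_metric + SMOOTHING * ratio

metric = 1.0
metric = adapt_metric(metric, observed_delay=30.0, reference_delay=60.0)
print(metric)  # 0.75 -> delay below reference limits the query basis
metric = adapt_metric(metric, observed_delay=180.0, reference_delay=60.0)
print(metric)  # 1.875 -> delay above reference expands the query basis
```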
In some implementations, the reference value is based on time values corresponding to other requests, from one or more other users, to other third party server devices in furtherance of modifying operating statuses of other third party client devices. In some implementations, determining the metric includes: accessing metric data that characterizes the metric and other metrics, and the assistant device is in communication with multiple other third party client devices, and one or more client devices of the multiple other third party client devices are correlated to at least one other metric of the other metrics. In some implementations, the request is based on a spoken utterance provided by a user to the automated assistant interface, and natural language content of the spoken utterance identifies an automated assistant routine corresponding to multiple different actions, and the multiple different actions include an action for causing the status of the third party client device to transition into the current operating status. In some implementations, determining the metric includes accessing another metric that is based on whether the third party server device has previously provided a status update to the automated assistant in response to the user requesting a status change of the third party client device via a third party hardware interface, and/or a third party application, that is in communication with the third party client device. In some implementations, the other metric is further based on another amount of time between the user interacting with the third party hardware interface, and/or the third party application, and the third party server device providing the status update to the automated assistant and/or the assistant device. 
In some implementations, determining whether to query the third party server device for the indication of the current operating status of the third party client device is further based on the other metric, and the third party server device is at least partially controlled by a third party entity that is different from an entity that at least partially controls the automated assistant. In some implementations, the metric and/or the other metric is based on interactions between one or more other users and one or more other third party client devices that are in communication with the third party server device. In some implementations, the one or more other third party client devices are: different from the third party client device, and connected to separate networks from a network to which the third party client device is connected. In some implementations, the request is received by the third party server device in response to the user interacting with a third party application that controls the third party client device, and/or the third party server device. In some implementations, the method can further include generating, in response to receiving the second status data, a graphical user interface element representative of the current operating status of the third party client device; and causing a graphical user interface of the assistant device, or another client device, to render the graphical user interface element. In some implementations, the metric is based on a particular time delay between a first request, originating at the third party server device, to change the operating status of the client device, and a second request from the automated assistant for a current status. In some implementations, the request identifies at least one action for the automated assistant, and the at least one action includes an action for causing the status of the third party client device to transition into the current operating status. 
In some implementations, a method implemented by one or more processors is set forth as including operations such as determining, based on processing audio data corresponding to a spoken utterance, that a user has requested that an automated assistant cause a modification to an operational status of a client device. In some implementations, the method can further include providing, in response to determining that the user has requested that the automated assistant cause the modification to the operational status of the client device, a request to a server device and/or the client device to effectuate the modification to the operational status of the client device. In some implementations, the method can further include receiving, based on providing the request to the server device and/or the client device, first status data from the server device and/or the client device, wherein the first status data characterizes a modified operational status of the client device. In some implementations, the method can further include determining, based on receiving the first status data, an amount of time delay that characterizes a time period between determining that the user has requested that the automated assistant cause the modification to the operational status of the client device, and receiving the first status data from the server device and/or the client device. In some implementations, the method can further include, subsequent to determining the amount of time delay: determining that the user has subsequently requested the automated assistant to cause a particular modification to a current operational status of the client device. 
In some implementations, the method can further include providing, in response to determining that the user has subsequently requested the automated assistant to cause the particular modification to the current operational status of the client device, another request to the server device and/or the client device to effectuate the particular modification to the current operational status of the client device. In some implementations, the method can further include determining, based on the amount of time delay and in response to determining that the user has requested that the automated assistant cause the particular modification to the current operational status of the client device, whether the amount of time delay indicates at least some basis to query the server device and/or the client device for second status data. In some implementations, the method can further include, when the amount of time delay indicates at least some basis to not query the server device and/or the client device for the second status data: bypassing, based on the amount of time delay, providing a status request to the server device and/or the client device. In some implementations, the method can further include, when the amount of time delay indicates at least some basis to query the server device and/or the client device for the second status data: providing, based on the amount of time delay, the status request to the server device and/or the client device, and receiving, in response to the server device and/or the client device receiving the status request, the second status data, which characterizes an operational status update for the client device. In some implementations, the method can further include, subsequent to determining the amount of time delay: causing a user interface of a separate client device to provide content that characterizes the current operational status of the client device. 
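The delay measurement underlying this method can be sketched as a small tracker that timestamps the request and the first status data, then uses the resulting delay on the next request. Class and method names, and the two-second reference, are illustrative assumptions.

```python
import time

# Minimal sketch: measure the time delay between a user's assistant request
# and the first status data arriving, then use that delay to decide whether
# to proactively query on a subsequent request. Names are assumptions.

DELAY_REFERENCE_SECONDS = 2.0

class DelayTracker:
    def __init__(self):
        self._request_time = None
        self.last_delay = None

    def on_request(self):
        """Record when the assistant request was made."""
        self._request_time = time.monotonic()

    def on_first_status(self):
        """Record the delay when first status data arrives."""
        if self._request_time is not None:
            self.last_delay = time.monotonic() - self._request_time

    def should_query(self) -> bool:
        """A basis to query exists when the last observed delay exceeded
        the reference, or when no delay has been observed yet."""
        return self.last_delay is None or self.last_delay > DELAY_REFERENCE_SECONDS

tracker = DelayTracker()
tracker.on_request()
tracker.on_first_status()      # status data arrives almost immediately here
print(tracker.should_query())  # False: the server responded quickly
```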
In some implementations, the user interface of the separate client device is a display panel and provides graphical content that characterizes the current operational status of the client device. In some implementations, the method can further include, when the amount of time delay indicates at least some basis to query the server device and/or the client device for the second status data: causing the display panel to render other graphical content that characterizes the operational status update for the client device. In some implementations, the amount of time delay characterizes a particular time period between: providing the request to the server device and/or the client device to effectuate the modification to the operational status of the client device, and receiving the first status data from the server device and/or the client device. In some implementations, a method implemented by one or more processors is set forth as including operations such as causing a client device to render content that characterizes a first status of another client device, wherein the client device and the other client device are connected to a common local area network, and the other client device is controlled using at least an automated assistant that is accessible via the client device. In some implementations, the method can further include determining, subsequent to causing the client device to render the content, that the client device has received a command for causing the other client device to operate according to a second status. In some implementations, the method can further include providing, in response to determining that the client device has received the command to cause the other client device to operate according to the second status, a request to a third party server device to cause the other client device to operate according to the second status. 
In some implementations, the method can further include receiving, subsequent to the request transmitted to the third party server device, status data from the third party server device, wherein the status data characterizes an operational status of the other client device. In some implementations, the method can further include, when the status data indicates that the other client device is operating according to the second status: modifying, or bypassing modifying, one or more metrics to characterize the server device as being reliable with respect to providing accurate status data, at least in response to the server device causing the other client device to exhibit any modified status relative to any previous status of the other client device. In some implementations, the method can further include, when the status data indicates that the other client device is not operating according to the second status: modifying, or generating, the one or more metrics to indicate that the server device is unreliable with respect to providing the accurate status data, at least in response to the server device causing the other client device to exhibit any modified status relative to any previous status of the other client device. In some implementations, modifying, or generating, the one or more metrics to indicate that the server device is unreliable with respect to providing the accurate status data includes: modifying, or generating, the one or more metrics to indicate a basis for pro-actively querying the server device in response to determining that a subsequent command is received for causing the other client device to operate according to a particular status. 
In some implementations, modifying, or bypassing modifying, the one or more metrics to characterize the server device as being reliable with respect to providing accurate status data includes: modifying, or bypassing modifying, the one or more metrics to indicate a basis for bypassing pro-actively querying the server device in response to determining that a subsequent command is received for causing the other client device to operate according to a particular status. In some implementations, causing the client device to render content that characterizes the first status of the other client device includes causing a graphical user interface of the client device to render one or more graphical elements at the graphical user interface. In some implementations, the method can further include determining a type of device that characterizes the other client device, wherein modifying or generating the one or more metrics is based on the type of device that characterizes the other client device. In some implementations, the method can further include determining a particular time that the request was provided to the third party server device, wherein modifying or generating the one or more metrics is based on the particular time that the request was provided to the third party server device. In some implementations, determining that the client device has received the command includes determining a type of command for causing the other client device to operate according to the second status, wherein modifying or generating the one or more metrics is based on the type of command. In some implementations, determining the type of command includes determining whether the command was initialized via the automated assistant, a particular interface of the other client device, or a separate interface of a peripheral device that is associated with the other client device. 
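The reliability bookkeeping described above can be sketched as a per-server score that is nudged up when the reported status matches the commanded status and marked down when it does not. The names and the +0.1/-0.25 increments are assumptions for illustration.

```python
# Illustrative sketch: after commanding a device into a second status via
# the third party server, compare the returned status data against the
# commanded status and adjust a per-server reliability metric. Names and
# increments are assumptions, not the patented method.

def update_reliability(metrics: dict, server_id: str,
                       commanded_status: str, reported_status: str) -> None:
    score = metrics.get(server_id, 1.0)
    if reported_status == commanded_status:
        # Server reliable: keep or nudge the score upward (capped at 1.0).
        metrics[server_id] = min(1.0, score + 0.1)
    else:
        # Mismatch: mark the server down so subsequent commands trigger
        # proactive status queries.
        metrics[server_id] = max(0.0, score - 0.25)

metrics = {}
update_reliability(metrics, "srv-1", commanded_status="on", reported_status="on")
print(metrics["srv-1"])  # 1.0
update_reliability(metrics, "srv-1", commanded_status="off", reported_status="on")
print(metrics["srv-1"])  # 0.75
```

A low score corresponds to "a basis for pro-actively querying the server device" on the next command; a high score corresponds to bypassing that query.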
In some implementations, a method implemented by one or more processors is set forth as including operations such as causing a client device to store status data that characterizes a status of another client device, wherein the client device and the other client device are connected to a common local area network, and the other client device is controlled using at least an automated assistant that is accessible via the client device. The method can further include providing, subsequent to storing the status data at the client device, a server request to a server device to determine a current operating status of the other client device. In some implementations, the method can further include providing, subsequent to storing the status data at the client device, a client request to the other client device to determine the current operating status of the other client device. In some implementations, the method can further include receiving, based on the server request transmitted to the server device, server status data from the server device, wherein the server status data characterizes a particular status of the other client device as indicated by the server device. In some implementations, the method can further include receiving, based on the client request transmitted to the other client device, client status data from the other client device, wherein the client status data characterizes another particular status of the other client device as indicated by the other client device. In some implementations, the method can further include, when the server status data and the client status data indicate a common operating status for the other client device: causing the client device to store updated status data characterizing the common operating status for the other client device. 
In some implementations, the method can further include, when the server status data and the client status data indicate the common operating status for the other client device: modifying, or bypassing modifying, one or more metrics to characterize the server device as being reliable with respect to providing accurate status data. In some implementations, the method can further include, when the server status data and the client status data fail to indicate the common operating status for the other client device: causing the client device to store other updated status data characterizing the other particular status of the other client device, as indicated by the other client device. In some implementations, the method can further include, when the server status data and the client status data fail to indicate the common operating status for the other client device: modifying, or generating, the one or more metrics to indicate that the server device is unreliable with respect to providing the accurate status data. In some implementations, the method can further include determining a type of device that characterizes the other client device, wherein modifying or generating the one or more metrics is based on the type of device that characterizes the other client device. In some implementations, the method can further include determining a particular time that the server request was provided to the server device, wherein modifying or generating the one or more metrics is based on the particular time that the server request was provided to the server device. In some implementations, the method can further include determining an amount of time between a first time stamp corresponding to the status of the other client device, and a second time stamp corresponding to the server status data received from the server device. 
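The reconciliation of server-reported and device-reported status can be sketched as follows. Function names, data shapes, and the "trust the device itself on disagreement" rule are assumptions used only to illustrate the comparison.

```python
# Hedged sketch: reconcile server status data against the other client
# device's own status data. On agreement, store the common operating
# status and treat the server as reliable; on disagreement, store the
# device's own report and mark the server unreliable. Names are
# illustrative assumptions.

def reconcile(server_status: str, client_status: str,
              store: dict, device_id: str, metrics: dict) -> None:
    if server_status == client_status:
        store[device_id] = server_status           # common operating status
        metrics.setdefault(device_id, "reliable")  # bypass modifying if set
    else:
        store[device_id] = client_status           # trust the device itself
        metrics[device_id] = "unreliable"

store, metrics = {}, {}
reconcile("on", "on", store, "lamp", metrics)
print(store["lamp"], metrics["lamp"])   # on reliable
reconcile("on", "off", store, "lamp", metrics)
print(store["lamp"], metrics["lamp"])   # off unreliable
```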
While several implementations have been described and illustrated herein, a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein may be utilized, and each of such variations and/or modifications is deemed to be within the scope of the implementations described herein. More generally, all parameters, dimensions, materials, and configurations described herein are meant to be exemplary, and the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific implementations described herein. It is, therefore, to be understood that the foregoing implementations are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, implementations may be practiced otherwise than as specifically described and claimed. Implementations of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.
11943076
In the figures, identical or functionally identical elements are provided with the same reference numerals if not indicated otherwise. DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS FIG.1shows, as an example, a bus system1, which is designed in particular to be the foundation of a CAN bus system, a CAN FD bus system, a CAN XL bus system, and/or modifications thereof, as described hereinafter. However, the bus system may alternatively be designed for FlexRay or any other serial bus system. InFIG.1, bus system1includes a plurality of user stations10,20,30, which are each connected to a bus40including a first bus wire41and a second bus wire42. Bus wires41,42may also be called CAN_H and CAN_L or CAN XL_H and CAN XL_L in a CAN-based bus system and are used for electrical signal transfer after coupling in the particular levels for a signal in the send mode. In a CAN-based bus system, dominant levels are coupled in or recessive levels are generated at least temporarily to send a signal in the send mode via bus40. Messages45,46are serially transferable in the form of signals between individual user stations10,20,30via bus40. If an error occurs during the communication on bus40, as shown by the jagged black block arrow inFIG.1, an error frame47(error flag) may optionally be sent. Bus system1may be used in a vehicle, in particular in a motor vehicle, an aircraft, etc., or in a hospital, etc. User stations10,20,30are, for example, control units, sensors, display devices, etc. of a motor vehicle. As shown inFIG.1, user station10includes an application11and a communication module15. Communication module15includes a communication control unit16, a transceiver unit17, and a filter module18. Filter module18is provided in communication control unit16. User station20includes an application21and a communication module25. Communication module25includes a communication control unit26, a transceiver unit27, and optionally a filter module28. Filter module28is provided in transceiver unit27. 
User station30includes an application31and a communication module35. Communication module35includes a communication control unit36, a transceiver unit37, and a filter module38. Filter module38is provided separately from communication control unit36and transceiver unit37in communication module35. Transceiver units17,27,37of user stations10,20,30are each directly connected to bus40, even if this is not illustrated inFIG.1. Application11may be software which is executable by a microcontroller of user station10. Application11may include at least one computer program. If user station10is, for example, a control unit for a windshield wiper, application11may be software for controlling the movement of the windshield wiper. The software may use, for example, the data of a rain sensor and/or a velocity sensor to automatically control the movement of the windshield wiper. The rain sensor and/or the velocity sensor may be user stations of bus system1and may send their data via bus40. Very generally, application11carries out a task13, in particular a control task, which is to be carried out by user station10. Task13may induce a movement of at least one part, which may be mechanically and/or electrically and/or pneumatically and/or hydraulically driven. Alternatively or additionally, task13may include a calculation and/or comparison and/or a detection of data. Moreover, application21may be software which is executable by a microcontroller of user station20. Application21carries out a task23which is to be carried out by user station20. Moreover, application31may be software which is executable by a microcontroller of user station30. Application31carries out a task33which is to be carried out by user station30. The same applies for applications21,31as described above for application11. 
Communication control units16,26,36are each used for controlling a communication of particular user station10,20,30via bus40with at least one other user station of user stations10,20,30, which are connected to bus40. Messages45,46are exchanged between user stations10,20,30, in particular their applications11,21,31, via bus40. Communication control units16,36create and read first messages45, which are, for example, modified CAN messages45. Modified CAN messages45are constructed on the foundation of a CAN XL format, which is described in greater detail in reference toFIG.2. Communication control units16,36may moreover be designed to provide a CAN XL message45or a CAN FD message46for transceiver units17,37or receive these messages therefrom. Communication control units16,36thus create and read a first message45or second message46, first and second messages45,46differing due to their data transfer standard, namely in this case CAN XL or CAN FD. Communication control unit26may be designed like a conventional CAN controller according to ISO 11898-1:2015, i.e., like a CAN FD-tolerant classical CAN controller or a CAN FD controller. Communication control unit26creates and reads second messages46, for example, CAN FD messages46. CAN FD messages46may include a number of 0 to 64 data bytes, which are moreover transferred at a significantly faster data rate than in the case of a classical CAN message. In particular, communication control unit26is designed like a conventional CAN FD controller. Transceiver unit27may be designed like a conventional CAN transceiver according to ISO 11898-1:2015 or a CAN FD transceiver. Transceiver units17,37may be designed to provide messages45according to the CAN XL format or messages46according to the present CAN FD format for associated communication control unit16,36or receive these messages therefrom as needed. 
Forming and then transferring messages45using the CAN XL format and receiving such messages45is implementable using the two user stations10,30. FIG.2shows, for message45, a frame450, as is received serially from bus40by transceiver unit17over time t and may be provided to communication control unit16. Frame450may be a CAN or CAN FD frame or a CAN XL frame in a CAN-based bus system. The CAN XL frame may be transferred in the bus system at an increased net data transfer rate in comparison to CAN or CAN FD, in particular at approximately 10 Mb per second. Moreover, the size of the payload per CAN XL frame may be approximately 2 kB or have any other value, for example, between 1 byte and 2 kB. In this case, communication control unit16creates a CAN XL frame in the present exemplary embodiment, for example, as compatible with CAN FD. The same also applies similarly to communication control unit36and transceiver unit37of user station30. According toFIG.2, serially received frame450is divided over time t into different segments SG1, SG2, SG3, SG4to SGn−1, SGn. Segments SG1to SGn may be associated with different communication phases451,452, namely in the case of CAN an arbitration phase451and a following data phase452. Segment SG1is received first, then segment SG2, and so on. In the example ofFIG.2, segment SG1forms an arbitration field including an identifier (ID). Segment SG2is a control field of frame450. Segments SG3, SG4each form a part of the data field of frame450. The data field ends in segment SGn−1. Alternatively, for example, segments SG1, SG2form an arbitration field having an identifier (ID). Segments SG3, SG4form a control field. A data field of frame450begins subsequently, thus with a segment SG5, and ends in segment SGn−1. Other classifications of segments SG1to SGn are possible. In segment SGn, a checksum field and a frame termination field including a field for switching over between communication phases452,451and a frame end field may be provided. 
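The segmentation just described can be illustrated with a short sketch. The helper below is not from the source; the function name and the 4-byte (32-bit) segment width are assumptions, the width matching the example segment size used later in this description:

```python
def split_into_segments(frame: bytes, segment_len: int = 4) -> list[bytes]:
    """Split a serially received frame into equal-length segments SG1..SGn.

    segment_len is in bytes (4 bytes = 32 bits). The last segment may be
    shorter if the frame length is not a multiple of the segment length.
    """
    return [frame[i:i + segment_len] for i in range(0, len(frame), segment_len)]

# A 12-byte frame yields three 32-bit segments SG1, SG2, SG3.
segments = split_into_segments(bytes(range(12)))
```

In a real controller the segments would be taken from the bit stream as it arrives, so that filtering can already begin while the rest of the frame is still being received.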
Segments SG1to SGn are each of equal length in the present exemplary embodiment, as shown in the example ofFIG.2. In arbitration phase451, it is bitwise negotiated between user stations10,20,30with the aid of the identifier (ID) which user station10,20,30wishes to send message45,46having the highest priority and therefore receives exclusive access to bus40of bus system1for the upcoming time for sending in subsequent data phase452. In arbitration phase451, a physical layer is used as in CAN and CAN FD. The physical layer corresponds to the bit transfer layer or layer 1 of the conventional OSI model (Open Systems Interconnection Model). An important point during phase451in a CAN-based bus system is that the conventional CSMA/CR method is used, which permits simultaneous access of user stations10,20,30to bus40, without higher priority message45,46being destroyed. Further bus user stations10,20,30may thus be added to bus system1relatively easily, which is very advantageous. The CSMA/CR method has the result that there must be so-called recessive states on bus40, which may be overridden by other user stations10,20,30having dominant states on bus40. In the recessive state, high-resistance conditions prevail at individual user stations10,20,30, which, in combination with the parasitic elements of the bus circuitry, results in longer time constants. If CAN XL and CAN FD are compatible, the frame formats of CAN FD and CAN XL are identical up to the res bit or XLF bit. The res bit from CAN FD, which is called the XLF bit hereinafter, is used for switching over from the CAN FD format to the CAN XL format. A receiver first recognizes at the res bit in which format frame450is sent. A CAN XL user station, thus user stations10,30here, also supports CAN FD. In data phase452, in addition to a part of the control field, the payload of CAN XL frame450or message45from the data field and a checksum field are sent. 
A field then follows which is used to switch over from data phase452back to arbitration phase451, as already mentioned above. A sender of message45begins a sending of bits of data phase452on bus40only when user station10has won the arbitration as the sender and user station10as the sender thus has exclusive access to bus40of bus system1for sending. As shown inFIG.3, filter module18of user station10evaluates frame450. Filter module18uses above-described segments SG1to SGn of frame450. Filter module18may establish the length of individual segments SG1to SGn. Moreover, filter module18may establish how many of segments SG1to SGn are evaluated jointly. Moreover, filter module18may establish which of segments SG1to SGn are evaluated jointly. It is possible that the jointly evaluated segments of segments SG1to SGn follow one another directly in frame450or that at least one other segment of segments SG1to SGn is provided between jointly evaluated segments SG1to SGn. In addition, it is possible that at least one segment of segments SG1to SGn is evaluated multiple times to verify the segment. In such a case, one segment is compared in succession to more than one reference value and evaluated. Therefore, in particular one segment could be evaluated using four reference values for verification. Alternatively or additionally, four segments are each evaluated using one reference value for verification. In both mentioned special examples, in total up to four verifications are carried out. Of course, other examples for repeated evaluation of at least one segment are possible to verify the segment. First segments SG1to SGn of frame450are available to filter module18for the filtering. As described above, these are, for example, in a CAN frame450the header, which contains the identifier and the message format, and the beginning of the data field. 
According toFIG.3, filter module18includes a first memory block181, a second memory block182, a checking block183, a selection block184, and a result block185. A plurality of filter instructions1811to1815is stored in first memory block181. A plurality of reference data1821to1822is stored in second memory block182. Reference data1821to1822are provided in separate lists. Each of these lists may be used by each filter instruction1811to1815. Reference data1821to1822may each include, for example, a 32-bit reference value and a 32-bit mask. Reference data1821to1822may thus each be in two parts, as shown by dashed lines inFIG.3. The reference value has the same bit number as the mask. Filter instructions1811to1815define which pieces of information of a message45or its frame450checking block183has to compare to which reference data1821to1822. Additionally or alternatively, filter instructions1811to1815define how the comparison results ascertained by checking block183are logically linked or are to be linked by result block185to form an overall result. Selection block184selects the plurality of reference data1821to1822for the checking block on the basis of filter instructions1811to1815. The filter method of filter module18separates filter instructions1811to1815from reference data1821to1822, in particular their reference values. As shown inFIG.3, each of filter instructions1811to1815has two instruction parts, as shown by dashed lines inFIG.3. A first instruction part of particular filter instruction1811to1815contains specifications on the acceptance check of frame450, which is to be carried out by the checking block. A second instruction part of particular filter instruction1811to1815contains specifications for the processing of an accepted frame450. In general: A filter instruction, for example filter instruction1811, may instruct in its first part checking block183to check multiple segments SG1to SGn of frame450or message45. 
A filter instruction, for example filter instruction1812, defines in its first part which segments of segments SG1to SGn are to be compared in checking block183to which of reference data1821,1822and verified. A filter instruction, for example filter instruction1813, may instruct in its first part that multiple such comparisons, which checking block183has carried out, are logically combined with one another. The combination may also be carried out in checking block183. A filter instruction contains in its second part instructions which are carried out when frame450or message45has been accepted. For example, filter instruction1811may contain in its second part an instruction that checked frame450or message45is to be stored in a predetermined memory area1851or1852in result block185. This instruction may apply for an accepted and/or a non-accepted frame450or message45. According to another example, filter instruction1812may include in its second part an instruction to mark checked frame450or message45. The marking may in particular indicate or mean a high priority of checked frame450or message45. The marking may in particular indicate that checked frame450or message45is a frame450or a message45which is listed on a blocking list, which is also called a blacklist. Filter instructions1811to1815therefore also define what is to take place with an accepted frame450or message45. Possibilities for processing of accepted frame450are, for example, storing and/or marking frame450. FIG.4shows a filtering method carried out by filter module18. After beginning the method, checking block183checks in a first step S1whether a frame450or a message45is received. If the answer is YES in step S1, the sequence continues to a step S2. In step S2, checking block183checks whether the first 4 segments SG1to SG4were received during the reception of a frame450, for example. In general, checking block183may check at least two segments SG1, SG2. 
If the answer is YES in step S2, the sequence continues to a step S3. In step S3, checking block183carries out an acceptance filtering using memory blocks181,182and selection block184. For this purpose, checking block183checks the list of filter instructions1811to1815in memory block181one after another, until a filter instruction is found among filter instructions1811to1815which accepts segments SG1to SG4. If checking block183has not found a matching filter instruction1811to1815after processing the complete list, checking block183may thus optionally check and carry out an alternative instruction. For the checking of segments SG1to SG4in step S3, reference is made to particular reference data1821,1822, which are provided in the separate lists for reference data1821,1822of memory block182. Each reference datum of reference data1821,1822has a reference value, which may also be called a setpoint value, and a mask, both having the same bit width as the bit width of one of segments SG1to SGn. The term “bit width” means here that all segments SG1to SGn have the same number of bits and the length of individual segments SG1to SGn is equal. Each of segments SG1to SGn has at least two bits. For example, each of segments SG1to SGn has a number of 32 bits. During the checking of each segment of segments SG1to SGn, checking block183first carries out a bitwise comparison between particular segment SG1to SGn and the reference value or setpoint value. Checking block183then bitwise masks the result in result block185. For example, checking block183first carries out a bitwise comparison between the first bit of segment SG1and the associated reference value. Subsequently, checking block183carries out a bitwise comparison between the second bit of segment SG1and the associated reference value. Such a comparison is also carried out for all other bits of segment SG1up to a comparison between the last bit of segment SG1and the associated reference value. 
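The bitwise comparison and masking just described amounts to one XOR and one AND per segment. A minimal Python sketch follows (the function name is illustrative; a mask bit of 1 marks a don't-care position, matching the masking step described above):

```python
def segment_matches(segment: int, setpoint: int, mask: int) -> bool:
    """Check one 32-bit segment against a reference (setpoint) value.

    XOR marks the differing bits; AND NOT mask drops the masked
    (don't-care) bit positions. The segment conforms if nothing remains.
    """
    return ((segment ^ setpoint) & ~mask & 0xFFFFFFFF) == 0

# The low 16 bits differ, but they are masked out, so the segment conforms.
match = segment_matches(0x1234ABCD, 0x12340000, mask=0x0000FFFF)
```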
The individual results, in particular conformance ü or no conformance, are logically interpreted in step S3by filter instruction1811to1815, for example as follows:
an individual result is accepted if there is conformance between segment and reference value or setpoint value;
an individual result is accepted if there is no conformance between segment and reference value or setpoint value;
an individual result is accepted if checking is not required.
A filter instruction1811to1815checked by checking block183accepts a frame450or a message45when all individual results are accepted. The filter process is then also completed. As a result, no further filter instruction1811to1815is processed for present frame450or message45. The sequence then continues to a step S4. The filtering of filter module18in step S3may be expressed as follows as pseudocode:
for n = 1 to 4 do
    if ((Segment[n] XOR setpoint value[s]) AND (NOT mask[s])) = 0 then
        // conformance
    end if
end for
In step S4, it is checked whether frame450or message45is accepted. Optionally, an interrupt, in particular of the communication on bus40and/or the execution of application11, may be triggered if filter instruction1811to1815requires this. If frame450or message45is accepted, the sequence thus continues to a step S5. If frame450or message45is not accepted, the sequence thus continues to a step S7. In step S5, accepted frame450or message45is stored, in particular in result block185. Frame450or message45is supplemented here in result block185with further pieces of information455, namely, for example:
455_1) Where frame450or message45is to be stored, a selection of multiple memories possibly being available. At least one of the memories may be a storage queue according to the storage principle in which those elements which were first stored are also first taken out of the memory again (First In First Out (FIFO)).
455_2) Which filter instruction1811to1815frame450or message45has fulfilled or has passed. In other words, a filter index is created. 
455_3) Marking of frame450or message45controlled by fulfilled filter instruction1811to1815, in particular priority (high or middle or low) or hazardous (blacklist).
The sequence then continues to a step S6. In step S6, result block185passes on frame450or message45to application11. Application11may thus, optionally without time delay or immediately, use the payload contained in frame450or message45. The sequence then returns to step S1. In step S7, non-accepted frame450or message45is discarded. Result block185thus does not pass on frame450or message45to application11. The sequence then returns to step S1. The method is ended when user station10or filter module18is switched off. The above-described method may be designed using a pseudocode, for example, as follows:
Begin program acceptance filtering
    wait for the reception of a message
    wait until segment 4 has been received
    repeat for all filter instructions
        read filter instruction from the memory
        repeat for all segments 1..4    // sequential checking of the segments
            if checking of this segment is required by the filter instruction
                read corresponding reference data from the memory
                x = Segment[s] XOR reference data.setpoint value    // bitwise comparison
                x = x AND (NOT reference data.mask)    // bitwise masking
                ü[n] = TRUE if x = 0 otherwise FALSE    // ü = conformance
                if filter instruction requires the inversion of the result
                    ü[n] = NOT ü[n]
            otherwise ü[n] = TRUE
        ü = ü[1] AND ü[2] AND ü[3] AND ü[4]    // ascertain conformance over all segments
        if ü = FALSE then check next filter instruction
    if ü = FALSE    // no matching filter instruction found
        read alternative filter instruction from memory    // no checking of the segments, only instructions
    // execute instructions
    trigger interrupt if filter instruction requires this
    supplement items of information in the message according to filter instruction (FilterIndex, Blacklist)
    pass on message if filter instruction requires this
End program acceptance filtering
According to a second exemplary embodiment, segments SG1to SGn are 
not all equal in length, in contrast to the example ofFIG.2. Instead, in the present exemplary embodiment, at least one segment of segments SG1to SGn has a different length than the other segments of segments SG1to SGn. In this case, for example, only a certain section of a segment may be compared to the entire other segment, or in particular to a reference value. Alternatively, only a predetermined section of a segment may be compared to a predetermined section of another segment, or in particular to a reference value. In particular, the beginning of segment SG1in the arbitration field may be compared to segment SG3of the data field and/or the end of segment SG1may be compared to segment SG4of the data field. Arbitrary other variants for a comparison of sections of segments SG1to SGn are possible. Otherwise, the functionality of bus system1in the second exemplary embodiment is identical to the functionality as described above for the first exemplary embodiment. All above-described embodiments of user stations10,20,30, of bus system1, and the method carried out therein may be used individually or in all possible combinations. In particular, all features of the above-described exemplary embodiments and/or their modifications may be combined arbitrarily. Additionally or alternatively, the following modifications are possible in particular. Although the present invention is described above on the basis of the example of the CAN bus system, the present invention may be used in any communication network and/or communication method in which acceptance filtering is carried out for data from bus40. In particular, the present invention is usable in developments of other serial communication networks, such as Ethernet and/or 100BASE-T1 Ethernet, fieldbus systems, etc. In particular, bus system1according to the exemplary embodiments may be a communication network in which data are transferable serially at two different bit rates. 
It is advantageous, but not a necessary condition, that in bus system1, an exclusive, collision-free access of a user station10,20,30to a shared channel is ensured at least for certain time spans. The number and arrangement of user stations10,20,30in bus system1of the exemplary embodiment is arbitrary. In particular, user station20in bus system1may be omitted. It is possible that one or multiple of user stations10or30are provided in bus system1. It is possible that all user stations in bus system1are designed identically, thus only user station10or only user station30is provided.
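As a compact restatement of the acceptance filtering ofFIG.4, the following Python sketch keeps filter instructions and reference data in separate structures, mirroring memory blocks181and182; all names and the dict-based instruction layout are illustrative assumptions, not the patent's implementation:

```python
def acceptance_filter(segments, instructions, references, alternative=None):
    """Return the actions of the first filter instruction that accepts
    the segments, or the alternative instruction if none matches.

    segments: four integers (SG1..SG4); references: list of
    (setpoint, mask) pairs; each instruction maps a segment index to a
    (reference index, invert) pair and carries its second-part actions.
    """
    for instr in instructions:
        accepted = True
        for n in range(4):                       # sequential checking of SG1..SG4
            check = instr["checks"].get(n)
            if check is None:                    # checking not required: accepted
                continue
            ref_idx, invert = check
            setpoint, mask = references[ref_idx]
            x = (segments[n] ^ setpoint) & ~mask & 0xFFFFFFFF  # compare, then mask
            conform = (x == 0)
            accepted = accepted and (not conform if invert else conform)
        if accepted:
            return instr["actions"]              # filter process ends here
    return alternative                           # no matching filter instruction found

references = [(0x100, 0x0FF)]                    # one setpoint/mask pair
instructions = [{"checks": {0: (0, False)}, "actions": {"store": "FIFO-0"}}]
result = acceptance_filter([0x1AB, 0, 0, 0], instructions, references)
```

Because instructions and reference data are separate, several instructions can reuse the same reference list, which is the point of providing them in separate memory blocks.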
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS The present specification discloses a multidrop network system and a network device of the multidrop network system. The multidrop network system and network device support instant transmission and allow a new node (i.e., a new network device) to join the multidrop network system. The background knowledge of a multidrop network system is found in the IEEE 802.3 standard and the US patent application publication of publication number US2019/0230705A1. FIG.1shows an embodiment of the multidrop network system of the present disclosure. The configuration of the multidrop network system100inFIG.1is based on the IEEE 802.3 standard and includes N network devices (i.e., N nodes), wherein the N is an integer equal to or greater than two. The N network devices exchange data through a physical transmission medium102(e.g., twisted pair cable or optical cable), and include a master device110and (N−1) slave device(s)120. Each of the N network devices has its own identification code (IDC) as an identification in the multidrop network system100and thus the N network devices have N IDCs (e.g., 0, 1, 2, . . . , (N−2), and (N−1)) in total, but the implementation of the present invention is not limited to the above features. The N network devices obtain their respective transmission opportunities in turn according to the N IDCs in each round of data transmission of the multidrop network system100. In regard to the embodiment ofFIG.1, each of the N network devices performs a predetermined operation (e.g., an incremental count operation counting from 0 to (N−1), or a decremental count operation counting from (N−1) to 0, or a known/self-developed polling operation) to generate a current value, and the N network devices have the same current value. 
When the IDC of a Kthdevice of the N network devices is the same as the current value, the Kthdevice earns its transmission opportunity (hereinafter referred to as the Kth-device transmission opportunity) of the aforementioned transmission opportunities, wherein the K is a positive integer not greater than the N. For better understanding of the present embodiment, in the following description the predetermined operation is an incremental count operation and the current value is a current count value. After the Kthdevice earns the Kth-device transmission opportunity, there are several circumstances as follows:(1) If the Kthdevice has no need to transmit, the Kthdevice is silent in a duration of a predetermined time slot; if no cut-in signal is observed in a front duration of the predetermined time slot, each of the N network devices increases its current count value by one (or resets its current count value to a default value when the current count value is the maximum count value) to update the current count value after the predetermined time slot elapses; if the increased current count value is equal to the IDC of a (K+1)thdevice (or the reset current count value is equal to the IDC of a 1stdevice), the (K+1)thdevice (or the 1stdevice) earns its transmission opportunity.The duration of the predetermined time slot is determined according to the demand for implementation, and is not specified in the IEEE 802.3cg standard; for example, the length of the predetermined time slot is longer than the length of any predetermined time slot specified by the IEEE 802.3cg standard. 
However, the above-mentioned features are not limitations on the implementation of the present invention.(2) If the Kthdevice has no need to transmit and a cut-in signal of another network device is observed in the front duration of the predetermined time slot, the another network device takes over the Kth-device transmission opportunity, and each network device increases its current count value by one (or resets its current count value) to update the current count value after the another network device finishes its transmission; if the updated current count value is equal to the IDC of the (K+1)thdevice (or the 1stdevice), the (K+1)thdevice (or the 1stdevice) earns its transmission opportunity. It is noted that the total transmission time of the another network device is determined according to the another network device's demand and/or the requirements of the multidrop network system100, and is not limited to the length of the predetermined time slot.The another network device can be one of the N network devices (e.g., the slave device120marked with “cut in” inFIG.1), or be a network device asking to join the multidrop network system100(e.g., the network device marked with “new” inFIG.1). A nonrestrictive example of the front duration of the predetermined time slot is the first one-Xth of the predetermined time slot, wherein the X is a number greater than one (e.g., X=2 or X=3).It is noted that if multiple devices of the N network devices except the Kthdevice want to cut in, one of the multiple devices will be assigned the Kth-device transmission opportunity according to the order of the IDCs of the multiple devices and/or other orders (e.g., the priority of the multiple devices' data to be transmitted). 
Once one of the multiple network devices wins the Kth-device transmission opportunity in the duration of the predetermined time slot of the Kthdevice, the other network device(s) cannot try to cut in until the start of the duration of a predetermined time slot of the (K+1)thdevice (or the 1stdevice). During the transmission time of any of the N network devices, none of the other network device(s) is allowed to transmit in order to prevent packet collision.According to the demand for implementation, the front duration of the predetermined time slot can be divided into multiple parts (e.g., the front duration210of the predetermined time slot200inFIG.2) including a first front duration, a second front duration, and so on and so forth. In the first front duration, the network device(s) having the highest transmission priority (or the highest demand for transmission) is/are allowed to contend for a cut-in opportunity; if no devices contend for the cut-in opportunity, the network device(s) having the second-highest transmission priority is/are allowed to contend for the cut-in opportunity; and so on.(3) If the Kthdevice needs to transmit and no cut-in signal is observed by the Kthdevice in the front duration of the predetermined time slot, the Kthdevice starts its transmission operation after the end of the front duration; after the Kthdevice finishes its transmission operation, each of the N network devices increases its current count value by one (or resets its current count value) to update the current count value; and if the updated current count value is equal to the IDC of the (K+1)thdevice (or the 1stdevice), the (K+1)thdevice (or the 1stdevice) earns its transmission opportunity. 
It is noted that the total transmission time of the Kthdevice is determined according to the Kthdevice's demand and/or the requirements of the multidrop network system100, and is not limited to the length of the predetermined time slot.(4) If the Kthdevice needs to transmit but receives a cut-in signal from the another network device in the front duration of the predetermined time slot, the Kthdevice abandons or defers its right to start transmitting in a back duration (e.g., the back duration220of the predetermined time slot200inFIG.2) of the predetermined time slot. In an exemplary implementation, after receiving the cut-in signal, the Kthdevice abandons its right to transmit and the another network device takes over the right; after the another network device finishes its transmission, each network device increases its current count value by one (or resets its current count value) to update the current count value; and if the updated current count value is equal to the IDC of the (K+1)thdevice (or the 1stdevice), the (K+1)thdevice (or the 1stdevice) earns its transmission opportunity. 
In an exemplary implementation, the cut-in signal is a predetermined signal (e.g., a signal with a predetermined pattern) and recognizable to the N network devices; after receiving the cut-in signal, the device(s) including the Kthdevice stop(s) counting to allow the another network device to transmit; after the another network device finishes its transmission, the device(s) resume(s) counting and thus the Kthdevice merely defers its right to transmit rather than abandons the right.It is noted that the back duration of the predetermined time slot can follow the front duration of the predetermined time slot, but the implementation of the present invention is not limited thereto.(5) If the Kthdevice needs instant transmission, the Kthdevice can start transmitting in the front duration of the predetermined time slot or determine whether to start transmitting in the front duration according to the rank of the Kthdevice's IDC among all the ranks of the N IDCs and/or according to other predetermined orders (e.g., the priority of the Kthdevice's data to be transmitted); after the Kthdevice finishes its transmission, each network device increases its current count value by one (or resets its current count value) to update the current count value; and if the updated current count value is equal to the IDC of the (K+1)thdevice (or the 1stdevice), the (K+1)thdevice (or the 1stdevice) earns its transmission opportunity. It is noted that if the predetermined operation is another kind of operation (e.g., a decremental count operation), those having ordinary skill in the art can derive the corresponding circumstances from the above description. The incremental/decremental count operation can be realized with known/self-developed technologies (e.g., counter/timer). In regard to the embodiment ofFIG.1and the description in the preceding paragraphs, the Kthdevice is the master device110or one of the (N−1) slave device(s)120. 
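The shared count that drives circumstances (1) to (5) can be sketched as follows; the names are illustrative, and the wrap-around models resetting the current count value to the minimum count value after the maximum count value is reached:

```python
def next_count(current: int, max_count: int) -> int:
    """Advance the shared current count value by one, wrapping back to 0
    (the minimum count value) after the maximum count value is reached."""
    return 0 if current == max_count else current + 1

def opportunity_order(n_devices: int, rounds: int = 1) -> list[int]:
    """Order in which devices with IDCs 0..N-1 earn their transmission
    opportunities over `rounds` rounds (illustrative sketch)."""
    order, count = [], 0
    for _ in range(rounds * n_devices):
        order.append(count)                  # device whose IDC == count transmits
        count = next_count(count, n_devices - 1)
    return order

# With N = 3, IDCs 0, 1, 2 take turns and the count wraps each round.
order = opportunity_order(3, rounds=2)       # [0, 1, 2, 0, 1, 2]
```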
The cut-in signal is a data signal or a request signal from the another network device asking to join the multidrop network system100. When the cut-in signal is a data signal, the another network device is one of the N network devices, and can be the master device110or one of the (N−1) slave device(s)120(e.g., the slave device120marked with “cut in” inFIG.2, wherein this slave device120has an IDC “M” being a positive integer not greater than the N and not equal to the K). When the cut-in signal is the request signal, the another network device is a new device and its IDC is not yet determined (e.g., the device130marked with “new” and “IDC=?” inFIG.2), and the new device is not any of the N network devices; and after the new device joins the multidrop network system100, the new device acts as a slave device. On the basis of the above, if the cut-in signal is the request signal, the count range of the current count value should be adjusted to reflect the affiliation of the another network device. Normally, the count range is defined by the difference between a maximum count value (e.g., N−1) and a minimum count value (e.g., 0). The difference could be equal to the number of effective nodes (i.e., devices that can earn transmission opportunities in turn in the multidrop network system100) in the multidrop network system100, but the present invention is not limited thereto. For example, if the N network devices receive the request signal in a round of data transmission of the multidrop network system100, the master device110adjusts the count range of the current count value (e.g., increases the maximum count value by one) after this round of data transmission finishes (i.e., when or before a next round of data transmission begins). 
For another example, if the N network devices receive the request signal in a round of data transmission of the multidrop network system100, the (N−1) slave device(s)120is/are silent after receiving the request signal; and after receiving the request signal, the master device110adjusts the count range of the current count value (e.g., increasing the maximum count value by one) before the end of the round of data transmission and then sends out a beacon to notify all the slave device(s)120of the start of the next round of data transmission. In regard to the embodiment ofFIG.1, the IDC of the master device110is the minimum count value, and after the current count value reaches the maximum count value in a current round of data transmission, each of the N network devices resets its current count value to the minimum count value in order to start the next round of data transmission. However, the implementation of the present invention is not limited to the above features; more specifically, provided that the N network devices of the multidrop network system100adopt the same predetermined operation (e.g., known/self-developed polling operation) to obtain their respective transmission opportunities in sequence, the present invention can play a role through the aforementioned utilization of the predetermined time slot. In regard to the embodiment ofFIG.1, the duration of each round of data transmission of the N network devices is defined according to two successive beacons, the another network device ascertains the value of the N according to the length of the predetermined time slot and the transmission statuses of the N network devices within the interval between the two successive beacons, and afterward the another network device uses a new IDC (e.g., N) as its identification in the multidrop network system according to the value of N, wherein the new IDC is different from any of the N IDCs owned by the N network devices. 
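One possible reading of this count-range adjustment is the sketch below. It is an assumption for illustration: the patent does not fix how the new IDC is chosen, so appending the joining device after the current maximum count value is purely illustrative:

```python
def admit_new_device(max_count: int) -> tuple[int, int]:
    """On receiving a request signal from a joining device, the master
    grows the count range by one and hands out the next free IDC."""
    new_idc = max_count + 1      # new device appended after existing IDCs
    new_max = max_count + 1      # count range now covers the new device
    return new_idc, new_max

# With devices 0..2 (maximum count value 2), a joining device gets IDC 3
# and the maximum count value becomes 3.
idc, max_count = admit_new_device(2)
```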
In an exemplary implementation, if none of the N network devices transmits in the interval between the two successive beacons, each of the N network devices merely consumes the duration of the predetermined time slot during the interval and thus the another network device can estimate the value of the N according to the following equation:
N ≅ (interval between the two successive beacons)/(duration of the predetermined time slot)
In another exemplary implementation, if an Mthnetwork device transmits in the interval between the two successive beacons (i.e., a first beacon and a second beacon) while the other (N−1) network device(s) do(es) not transmit in the interval, the another network device can ascertain the start and end of the transmission of the Mthnetwork device according to the Mthnetwork device's transmitting data or status, and then the another network device can estimate the value of the N according to the following equation:
N ≅ (time point of the Mth network device starting transmitting − time point of the first beacon)/(duration of the predetermined time slot) + 1 + (time point of the second beacon − time point of the Mth network device finishing transmitting)/(duration of the predetermined time slot)
Other ways to estimate the value of the N can be derived from the above description. It is noted that the aforementioned new IDC of the another network device can be determined by other means without the value of the N. For example, when a physical layer (not shown) of the master device110receives the request signal from the another network device, the physical layer can forward the request signal or the information thereof to an upper layer (e.g., application layer) (not shown) of the master device110, and then the upper layer can determine the new IDC accordingly; afterward, the master device110provides the new IDC for the another network device through the physical layer, and adjusts the count range of the current count value. 
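Under one plausible reading of the two estimation equations (variable names and time units are assumptions), the joining device's estimate of N can be computed as:

```python
def estimate_n_idle(beacon_interval: float, slot: float) -> int:
    """N when no device transmits between the beacons: every node
    merely consumes one predetermined time slot."""
    return round(beacon_interval / slot)

def estimate_n_one_talker(t_beacon1: float, t_start: float,
                          t_end: float, t_beacon2: float, slot: float) -> int:
    """N when only the Mth device transmits between the two beacons:
    silent slots before its transmission, plus the talker itself,
    plus silent slots after it."""
    before = (t_start - t_beacon1) / slot    # devices silent before the Mth
    after = (t_beacon2 - t_end) / slot       # devices silent after the Mth
    return round(before + 1 + after)

# Five idle slots between beacons -> N is estimated as 5; likewise when
# one device talks from t=2 to t=4 within a [0, 6] beacon interval.
n_idle = estimate_n_idle(beacon_interval=5.0, slot=1.0)
n_talk = estimate_n_one_talker(0.0, 2.0, 4.0, 6.0, slot=1.0)
```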
Each network device of the multidrop network system100inFIG.1can be implemented independently as a cooperative network device incorporating the inventive features of the present disclosure. Since people having ordinary skill in the art can refer to the description of the multidrop network system100inFIG.1to appreciate the detail and modification of each network device of the multidrop network system100, repeated and redundant description is omitted here. It is noted that all effective nodes in the multidrop network system100are supposed to include the present invention to normally operate under the same protocol. It should be noted that people of ordinary skill in the art can selectively use some or all of the features of any embodiment in this specification or selectively use some or all of the features of multiple embodiments in this specification to implement the present invention as long as such implementation is practicable; in other words, the present invention can be carried out flexibly in accordance with the present disclosure. To sum up, the present invention supports instant transmission (e.g., instant messaging) and allows the affiliation of a new node without significantly changing the configuration of a multidrop network system. The aforementioned descriptions represent merely the preferred embodiments of the present invention, without any intention to limit the scope of the present invention thereto. Various equivalent changes, alterations, or modifications based on the claims of the present invention are all consequently viewed as being embraced by the scope of the present invention.
11943078
DESCRIPTION OF EXAMPLE EMBODIMENTS Overview This disclosure describes method(s) for using virtual network instance (VNI) labels (e.g., a multiprotocol label switching (MPLS) label, a virtual extensible local area network (VXLAN) label, a generic network virtualization encapsulation (GENEVE) label, etc.) in a hub and spoke overlay network to advertise network routing configurations, embed spoke addresses as metadata associated with spoke routes, and route network data. The method includes receiving, at a first hub node of a hub and spoke network and from a first spoke of the hub and spoke network, a network advertisement including a first border gateway protocol (BGP) large community associated with a first tenant node of the first spoke, wherein the first spoke is associated with the first hub node. Additionally, or alternatively, the method includes decoding, from a first portion of the first BGP large community, a first indication of a first virtual network instance (VNI) type associated with the first spoke. Additionally, or alternatively, the method includes decoding, from a second portion of the first BGP large community, a first address of a first tunnel endpoint associated with the first spoke. Additionally, or alternatively, the method includes storing, in a database associated with the first hub node, a mapping between the first VNI type and the first address of the first tunnel endpoint. Additionally, or alternatively, the method includes sending, from the first hub node and to one or more second hub nodes of the hub and spoke network, the first BGP advertisement packet, wherein the one or more second hub nodes are associated with one or more respective spokes of the hub and spoke network. 
Additionally, or alternatively, the method includes receiving, at a first hub node of a hub and spoke network and from a second hub node associated with the hub and spoke network, a network advertisement including a first border gateway protocol (BGP) large community associated with a first tenant node of a first spoke of the hub and spoke network, wherein the first hub node is remote from the second hub node and the first spoke is associated with the second hub node. Additionally, or alternatively, the method includes decoding, from a first portion of the first BGP large community, a first indication of a first virtual network instance (VNI) type associated with the first spoke. Additionally, or alternatively, the method includes decoding, from a second portion of the first BGP large community, a first address of a first tunnel endpoint associated with the first spoke. Additionally, or alternatively, the method includes storing, in a database associated with the first hub node, a first mapping between the first VNI type and the first address of the first tunnel endpoint. Additionally, or alternatively, the method includes receiving, at the first hub node and from a second spoke of the hub and spoke network, a data packet to be transmitted to the first tenant node, wherein the second spoke is associated with the first hub. Additionally, or alternatively, the method includes sending, based at least in part on the first mapping, the data packet from the first hub node and to the first spoke. Additionally, the techniques described herein may be performed by a system and/or device having non-transitory computer-readable media storing computer-executable instructions that, when executed by one or more processors, performs the method described above. EXAMPLE EMBODIMENTS Overlay networks may utilize various techniques to encapsulate data packets and allow for multi-tenant configurations of an overlay network, such as, for example, a hub and spoke overlay network. 
Hub nodes of a hub and spoke overlay network may be configured to directly communicate network traffic with other hub nodes, while spokes may be unaware of the network topology and may be configured to communicate with a particular hub node that is local to a particular spoke (e.g., a hub node in the same network site as a spoke) to facilitate the transmission of the network traffic. As such, for any spoke-to-spoke communication in the hub and spoke network, a data packet may be transmitted from a first spoke (e.g., a source spoke), deployed in a first network site, to at least a first hub node in the first network site. In some instances, where a source spoke and a destination spoke associated with a spoke-to-spoke communication are deployed in the same site (e.g., the first network site), the first hub node may transmit the data packet directly to a second spoke (e.g., the destination spoke). However, in examples where a source spoke and a destination spoke associated with a spoke-to-spoke communication are deployed in separate sites (e.g., the first spoke being in the first site and the second spoke being in a second site), the data transmission may require an additional hop through the network. For example, the first hub node may be configured to transmit the data packet to a second hub node deployed in the second site, where the second hub node may transmit the data packet to the second spoke in the second site. The return traffic may be communicated through the hub and spoke network in a similar manner, from the second spoke to the second hub, from the second hub to the first hub, and from the first hub to the first spoke. 
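The baseline forwarding behavior described above (one local-hub hop for same-site traffic, an extra remote-hub hop for cross-site traffic) can be sketched as follows; the node names and site layout are purely illustrative assumptions.

```python
# Illustrative sketch of the baseline hub-and-spoke forwarding path
# described above (before the optimization this disclosure introduces).
# All node and site names are assumptions for the example.

def baseline_path(src_spoke, dst_spoke, site_of, hub_of):
    # Return the sequence of nodes a packet traverses from the source
    # spoke to the destination spoke in a plain hub-and-spoke overlay.
    src_site, dst_site = site_of[src_spoke], site_of[dst_spoke]
    path = [src_spoke, hub_of[src_site]]      # first hop: the local hub
    if dst_site != src_site:
        path.append(hub_of[dst_site])         # extra hop: the remote hub
    path.append(dst_spoke)
    return path

site_of = {"spoke1": "siteA", "spoke2": "siteA", "spoke3": "siteB"}
hub_of = {"siteA": "hubA", "siteB": "hubB"}

# Same-site: one hub in the path.
print(baseline_path("spoke1", "spoke2", site_of, hub_of))
# Cross-site: two hubs in the path (the extra hop the techniques below
# aim to eliminate).
print(baseline_path("spoke3", "spoke1", site_of, hub_of))
```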
This disclosure describes techniques for a hub node, provisioned in a network site of a multi-site computing resource network, such as, for example, a hub and spoke overlay network, to receive, store, and/or forward network routing information associated with a spoke, and send network communications directly to spoke(s) that are remote from the given hub node. In some examples, a first hub node of a hub and spoke network, provisioned in a first network site and configured to transmit network traffic associated with a first spoke provisioned in the first network site, may receive a network advertisement from the first spoke. The first hub node may be configured to decode network routing information from a border gateway protocol (BGP) large community string associated with the network advertisement and may store the network routing information in a database associated with the first hub node. In some examples, the first hub node may be configured to further advertise the network routing information to additional hub nodes in the hub and spoke network. For example, the first hub node may send the network routing information and/or the BGP large community string to a second hub node provisioned in a second site of the hub and spoke network, where the second hub node may store the network routing information in a database associated with the second hub node. Additionally, or alternatively, the second hub node may receive a packet from a source tenant node, provisioned in a second spoke associated with the second network site, to be transmitted to a destination tenant node provisioned in the first spoke. In some examples, the packet may include an inner header including next hop information for a destination tenant node provisioned in a spoke network and/or an outer header including next hop information for a given spoke node of the hub and spoke overlay network. 
The second hub node may receive the data packet from the second spoke and identify the network routing information associated with the first spoke that was previously stored in the database. The second hub node may be configured to leverage the network routing information associated with the first spoke and send the data packet directly to the first spoke, without intervention by the first hub node (e.g., without routing the data packet via an extra hop from the second hub node to the first hub node). The network routing information may be received as a network advertisement. In some examples, the network advertisement may be configured as a data packet, a network advertisement packet, such as, for example, a border gateway protocol (BGP) advertisement packet, and/or an application programming interface (API) call, such as, for example, a representational state transfer (REST) API call including details for configuring a given spoke network with the computing resource network. Additionally, or alternatively, the network advertisement may comprise a BGP large community string encoded with various network routing information. The BGP large community string may comprise various portions encoded with the information, such as, for example, a first 4-byte portion encoded with a tenant virtual network instance (VNI) associated with a tenant node provisioned in a spoke network, a second 4-byte portion encoded with an indication of a VNI type associated with a spoke network, and/or a third 4-byte portion encoded with an address associated with the spoke network (e.g., an internet protocol (IP) address associated with the spoke network and/or a tunnel endpoint associated with the spoke network). 
In some examples, when a spoke and/or a tenant node provisioned in a spoke connects to a site of the hub and spoke network (e.g., comes online), the spoke may transmit network routing information associated with the spoke and/or the tenant node to a hub node provisioned in the same site of the hub and spoke network (e.g., the hub node local to the spoke). For example, one or more routing device(s) associated with a given spoke network may receive a tenant VNI for a given tenant node in the spoke network and may encode the first 4-byte portion of the BGP large community string with the tenant VNI. In some examples, a tenant VNI may be received from a hub node provisioned locally to a given spoke and/or from a database storing tenant VNIs. The tenant VNI may be configured as a VNI label used for tenant traffic, such as, for example, MPLS, VXLAN, GENEVE, and/or any other network overlay protocol. Additionally, or alternatively, the one or more routing device(s) may encode the second 4-byte portion of the BGP large community string with an indication of the VNI type of the given spoke, such as, for example, MPLS, VXLAN, GENEVE, and/or any other network overlay protocol. Additionally, or alternatively, the one or more routing device(s) may encode the third 4-byte portion of the BGP large community string with an IP version 4 (IPv4) address and/or an identifier of an IP version 6 (IPv6) address of a network tunnel endpoint associated with the given spoke (or the one or more routing devices provisioned in the given spoke network). In examples where the network tunnel endpoint is an IPv6 address, the IPv6 address may be mapped to a 4-byte identifier. In such examples, a given hub node may be configured to perform a lookup using the 4-byte identifier to determine the IPv6 address of the tunnel endpoint based on the VNI type included in the second portion of the BGP large community string indicating that the VNI type of a given spoke is a VNI configured for IPv6 routing. 
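The three-portion encoding described above can be sketched with Python's `struct` module; a BGP large community carries three 4-byte values, used here for the tenant VNI, the VNI type, and the IPv4 tunnel endpoint. The VNI-type code points below are assumptions for the sketch, as the disclosure does not fix concrete values.

```python
import struct
import ipaddress

# Illustrative encoder for the three 4-byte portions described above.
# The VNI-type code points are assumptions for this sketch.
VNI_TYPE = {"MPLS": 1, "VXLAN": 2, "GENEVE": 3}

def encode_community(tenant_vni, vni_type, endpoint_v4):
    # Pack tenant VNI, VNI type, and the IPv4 tunnel endpoint into the
    # 12-byte (3 x 4-byte) value of a BGP large community.
    addr = int(ipaddress.IPv4Address(endpoint_v4))
    return struct.pack("!III", tenant_vni, VNI_TYPE[vni_type], addr)

community = encode_community(5001, "VXLAN", "192.0.2.10")
a, b, c = struct.unpack("!III", community)
# Conventional "a:b:c" large-community text rendering:
print(f"{a}:{b}:{c}")  # -> 5001:2:3221225994
```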
A hub node may be configured to decode the network routing information from the various portions of the BGP large community string. In some examples, while decoding the network routing information, a hub node may be configured to assume that the IP address included in the third portion of the BGP large community string is an IPv4 address, unless an indication is otherwise provided. A hub node may be configured to determine such an indication by decoding the second portion of the BGP large community, and determining, based at least in part on the VNI type, that the tunnel endpoint associated with a given spoke is an IPv6 address. In such examples, a hub node may then perform a lookup, based on a 4-byte identifier encoded in the third portion of the BGP large community string (in place of the IPv4 address) to determine the IPv6 address associated with the tunnel endpoint of the given spoke network. Additionally, or alternatively, a hub node may be configured to store the network routing information associated with a given spoke and/or a given tenant of the given spoke as a mapping between the tenant VNI, the indication of the VNI type of the given spoke, and/or the address associated with the spoke network. In some examples, the mapping may include only the indication of the VNI type of the given spoke and the address associated with the spoke network. By storing the network routing information in this way, a hub node may be configured to send a data packet from a source node provisioned in a first spoke that is local to the hub node (e.g., provisioned in the same network site) directly to a second spoke that is remote from the hub node and hosting a destination node associated with the packet. 
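The decode rule above (assume IPv4 unless the VNI type marks the spoke as IPv6-routed, then resolve the 4-byte identifier through a lookup) can be sketched as follows; the type codes and the identifier table are illustrative assumptions.

```python
import struct
import ipaddress

# Sketch of the decode rule above: treat the third portion as an IPv4
# address unless the VNI type marks the spoke as IPv6-routed, in which
# case the 4 bytes are an identifier resolved through a lookup table.
# The type codes and the table contents are illustrative assumptions.
VNI_TYPE_VXLAN_V4 = 2
VNI_TYPE_VXLAN_V6 = 12
IPV6_ENDPOINTS = {7: "2001:db8::7"}  # hypothetical id -> IPv6 endpoint

def decode_community(community):
    tenant_vni, vni_type, third = struct.unpack("!III", community)
    if vni_type == VNI_TYPE_VXLAN_V6:
        endpoint = IPV6_ENDPOINTS[third]              # lookup by identifier
    else:
        endpoint = str(ipaddress.IPv4Address(third))  # default: IPv4
    return tenant_vni, vni_type, endpoint

v4 = struct.pack("!III", 5001, VNI_TYPE_VXLAN_V4,
                 int(ipaddress.IPv4Address("192.0.2.10")))
print(decode_community(v4))  # -> (5001, 2, '192.0.2.10')

v6 = struct.pack("!III", 5001, VNI_TYPE_VXLAN_V6, 7)
print(decode_community(v6))  # -> (5001, 12, '2001:db8::7')
```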
For example, the hub node may identify, based on the mapping, the tenant VNI associated with the destination node, the VNI type associated with the second spoke, and the address associated with the spoke network, and utilize the network routing information to send the data packet directly to the second spoke, thus eliminating an additional network hop by foregoing sending the data packet to a remote hub node associated with the second spoke. As previously described, hub nodes associated with the hub and spoke network may be configured to advertise the network routing information received from a given spoke to each of the additional hub nodes in the hub and spoke network. By configuring each of the hub nodes to forward advertisements to each of the hub nodes in the hub and spoke network, each hub in the network may be configured to read and/or decode the BGP large community string of each route and learn the VNI type and/or IP address of a given spoke for the prefix of the route. As described herein, a computing-based, cloud-based solution, hub node, and/or spoke network can generally include any type of resources implemented by virtualization techniques, such as containers, virtual machines, virtual storage, and so forth. Further, although the techniques are described as being implemented in data centers and/or a cloud computing network, the techniques are generally applicable for any network of devices managed by any entity where virtual resources are provisioned. In some instances, the techniques may be performed by a scheduler or orchestrator, and in other examples, various components may be used in a system to perform the techniques described herein. The devices and components by which the techniques are performed herein are a matter of implementation, and the techniques described are not limited to any specific architecture or implementation. 
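The per-hub route database implied above can be sketched as a small mapping from a destination prefix to the remote spoke's tenant VNI, VNI type, and tunnel endpoint, which is what lets a hub tunnel a cross-site packet straight to the remote spoke. All class, prefix, and endpoint names here are illustrative assumptions.

```python
# Sketch of the per-hub route database implied above: each hub stores a
# mapping from a destination prefix to the remote spoke's tenant VNI, VNI
# type, and tunnel endpoint, so a cross-site packet can be tunneled
# straight to the remote spoke. All names and values are illustrative.

class HubRouteDB:
    def __init__(self):
        self.routes = {}  # prefix -> (tenant_vni, vni_type, endpoint)

    def learn(self, prefix, tenant_vni, vni_type, endpoint):
        # Store what was decoded from a received large community.
        self.routes[prefix] = (tenant_vni, vni_type, endpoint)

    def next_hop(self, prefix):
        # Direct tunnel target for the destination spoke, if known.
        tenant_vni, vni_type, endpoint = self.routes[prefix]
        return {"tunnel_to": endpoint, "vni": tenant_vni, "type": vni_type}

hub_b = HubRouteDB()
# Learned via hub A's re-advertisement of spoke 1's route:
hub_b.learn("10.1.0.0/16", 5001, "VXLAN", "192.0.2.10")

# A packet from a local spoke toward 10.1.0.0/16 is sent straight to the
# remote spoke's tunnel endpoint, skipping the remote hub entirely:
print(hub_b.next_hop("10.1.0.0/16"))
```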
The techniques described herein provide various improvements and efficiencies with respect to using BGP large community strings to encode network routing information and route network data more efficiently in a multi-site network with a multi-tenant overlay. For instance, the techniques described herein may allow for hub nodes in a hub and spoke overlay network to share network routing information associated with spokes and/or tenants provisioned locally to a given hub node (e.g., provisioned in the same site of the hub and spoke overlay network) with other hub nodes that are remote from the spokes and/or tenants. By encoding network routing information specific to a given spoke and/or tenant node into a BGP large community string associated with a packet, a local hub node may decode the network routing information, store the network routing information locally, and forward the network routing information to remote hub nodes in the hub and spoke network, allowing for the remote hub nodes to read the BGP large community string of each route and learn network specific information required to route packets directly to a given remote spoke (e.g., a VNI associated with the remote spoke, and/or an address of the remote spoke and/or a tunnel endpoint associated with the remote spoke). Additionally, by advertising network routing information associated with local spokes to remote hub nodes, a hub node, provisioned in a network site of a hub and spoke overlay network that is remote from a given spoke, may be configured to send packets directly to the given spoke without intervention by the remote hub node associated with the given spoke (e.g., without sending the packet to the remote hub node). As such, by directly sending data packets from a hub node in a first site to a spoke in a second site that is remote from the hub node, an extra hop in the routing of the packet may be eliminated, and computing and/or network resources may be preserved. 
Certain implementations and embodiments of the disclosure will now be described more fully below with reference to the accompanying figures, in which various aspects are shown. However, the various aspects may be implemented in many different forms and should not be construed as limited to the implementations set forth herein. The disclosure encompasses variations of the embodiments, as described herein. Like numbers refer to like elements throughout. FIGS.1A and1Billustrate a system architecture diagram100of an example flow for a multi-site computing resource network102including one or more network sites104including a respective hub node106, respective spokes108, and/or a respective datastore110, where the hub nodes106may be configured to advertise BGP advertisement packet(s) including network routing information and received from respective spokes108local to a given hub node106to other hub nodes106in the network102, where the network routing information may be stored in a respective datastore110and utilized to send a packet from a given hub node106directly to a spoke108that is remote from the given hub node106. FIG.1Aillustrates an example system-architecture diagram100and an example flow for a computing resource network102(e.g., a hub and spoke overlay network) including at least a first hub node106(A) provisioned in a first site104(A) to receive a border gateway protocol (BGP) advertisement packet from a local spoke108(3) and forward the BGP advertisement packet to a second hub node106(B) provisioned in a second site104(B) that is remote from the first site104(A). The computing resource network102may comprise one or more data centers that include various networking components, such as, a network controller and/or network switches (also referred to as nodes) operating on physical servers. In some examples, physical server(s) may host one or more virtual machines. 
Each virtual machine may be configured to execute one of various operations and act as one or more virtual components for the computing resource network102, such as, for example, computing-based resources. In some examples, the physical server(s) may host any number of virtual machines. In some examples, physical server(s) in the computing resource network102may host the various network components of the computing resource network102, such as, for example, a hub node106, a spoke108(1)-(N), a spoke network112(1)-(N), and/or one or more tenant node(s)114(1)-(N), where N may be any integer greater than 1. Each of the spokes108may provide a hub node106of a given site104access to a spoke network112hosting one or more tenant node(s)114. For example, a first hub node106(A) provisioned in a first site104(A) of the computing resource network102may be configured to route network traffic associated with a first spoke108(1) and/or a second spoke108(2) provisioned in the first site104(A), such as, for example, network traffic originating from and/or destined to one or more tenant nodes114(1),114(2) hosted in one or more spoke networks112(1),112(2). Additionally, or alternatively, a second hub node106(B) provisioned in a second site104(B) of the computing resource network102may be configured to route network traffic associated with a third spoke108(3) and/or a fourth spoke108(4) provisioned in the second site104(B), such as, for example, network traffic originating from and/or destined to one or more tenant nodes114(3),114(4) hosted in one or more spoke networks112(3),112(4). Additionally, or alternatively, the first hub node106(A), the second hub node106(B), and/or one or more third hub node(s)106(N) may be communicatively coupled, such that they are configured to send network traffic to a given hub node106configured to route the network traffic to a destination spoke108provisioned in the same network site of the given hub node106. 
Additionally, or alternatively, each of the sites104may offer one or more services116implemented by way of a hub node106provisioned in a given site104. In some examples, the one or more services116may comprise a deep packet inspection (DPI) service, a cloud-delivered firewall (CDFW) service, a network address translation (NAT) service, a secure web gateway (SWG) service, a domain name service (DNS) layer security service, and/or a cloud access security broker (CASB) service. Take, for example, the computing resource network102configured as a hub and spoke overlay multiprotocol label switching (MPLS) network. As previously described, hub node A106(A) may be configured to route network traffic associated with one or more spoke(s)108(1),108(2) provisioned in network site A104(A) and/or hub node B106(B) may be configured to route network traffic associated with one or more spoke(s)108(3),108(4) provisioned in network site B104(B). That is, network traffic, destined for one or more tenant node(s)114(1),114(2) hosted in respective spoke network(s)112(1),112(2) connected to hub node A106(A) by way of the spoke(s)108(1),108(2) provisioned in site A104(A) and from one or more tenant nodes114(3)-(N) hosted in respective spoke network(s)112(3)-(N) and connected to respective hub node(s)106(B),106(N) by way of the spoke(s)108(3)-(N), is received at hub node A106(A) and routed to the spoke108connected to the spoke network112hosting the destination tenant node114. In other words, network traffic traversing the hub and spoke overlay network originating in site B104(B) and destined for site A104(A), is routed based on a first hop from the originating spoke108(3),108(4) to hub node B106(B), a second hop from hub node B106(B) to hub node A106(A), and a third hop from hub node A106(A) to the destination spoke108(1),108(2). 
However, by configuring a networking device associated with a spoke108to encode a border gateway protocol (BGP) large community string with network routing information associated with the spoke108and/or a tenant node114, a hub node106may utilize the network routing information to send packets directly to a remote spoke (e.g., from hub node A106(A) to spoke108(3)), thus eliminating the need for 3 network hops to route network traffic across network sites of the hub and spoke overlay network. For example, hub node A106(A) provisioned in network site A104(A) and configured to transmit network traffic associated with a first spoke108(1), may receive a network advertisement from the first spoke108(1). In some examples, the network advertisement may be configured as a data packet, a BGP advertisement packet, and/or an application programming interface (API) call, such as, for example, a representational state transfer (REST) API call including details for configuring a given spoke network112with the computing resource network102. Additionally, or alternatively, the network advertisement may comprise a BGP large community string. Additionally, or alternatively, the packet may be configured as a data packet including a BGP large community string. The hub node A106(A) may be configured to decode network routing information from the BGP large community string associated with the packet and may store the network routing information in the site A database110(A) provisioned in site A104(A). In some examples, hub node A106(A) may be configured to further advertise the network routing information to additional hub nodes (e.g., hub node B106(B) and/or one or more hub node(s) N106(N)) in the hub and spoke network. 
For example, hub node A106(A) may send the network routing information and/or the BGP large community string to hub node B106(B) provisioned in site B104(B) of the hub and spoke network, where hub node B106(B) may decode and/or store the network routing information in the site B database110(B) provisioned in site B104(B). Additionally, or alternatively, hub node B106(B) may receive a packet from a source tenant node114(3) hosted in a spoke network112(3) connected to a second spoke108(3) provisioned in site B104(B), to be transmitted to a destination tenant node114(1) hosted in a spoke network112(1) connected to the first spoke108(1). In some examples, the packet may include an inner header including next hop information for a destination tenant node114(1) provisioned in a spoke network112(1) and/or an outer header including next hop information for a given spoke node108(1) of the hub and spoke overlay network102. Hub node B106(B) may receive the data packet from the second spoke108(3) and identify the network routing information associated with the first spoke108(1) that was previously stored in the site B database110(B). Hub node B106(B) may be configured to leverage the network routing information associated with the first spoke108(1) and send the data packet directly to the first spoke108(1), without intervention by hub node A106(A) (e.g., without routing the data packet via an extra hop from the hub node B106(B) to hub node A106(A)). The network routing information may be received as a data packet and/or a network advertisement packet, such as, for example, a border gateway protocol (BGP) advertisement packet comprising a BGP large community string encoded with various network routing information. 
The BGP large community string may comprise various portions encoded with the information, such as, for example, a first 4-byte portion encoded with a tenant VNI associated with a tenant node114provisioned in a spoke network112, a second 4-byte portion encoded with an indication of a virtual network instance (VNI) type associated with a spoke108and/or a spoke network112, and/or a third 4-byte portion encoded with an address associated with the spoke108and/or the spoke network112(e.g., an internet protocol (IP) address associated with the spoke network and/or a tunnel endpoint associated with the spoke network). In some examples, when a spoke108and/or a tenant node114provisioned in a spoke network112connects to a site104of the hub and spoke network (e.g., comes online), the spoke108may transmit network routing information associated with the spoke108and/or the tenant node114to a hub node106provisioned in the same site104of the hub and spoke network (e.g., hub node A106(A) and spoke108(1)). For example, one or more routing device(s) associated with a given spoke network112may receive a tenant VNI for a given tenant node114in the spoke network and may encode the first 4-byte portion of the BGP large community string with the tenant VNI. In some examples, a tenant VNI may be received from a hub node106provisioned locally to a given spoke108and/or from a database storing tenant VNIs. The tenant VNI may be configured as a VNI label used for tenant traffic, such as, for example, MPLS, VXLAN, GENEVE, and/or any other network overlay protocol. Additionally, or alternatively, the one or more routing device(s) may encode the second 4-byte portion of the BGP large community string with an indication of the VNI type of the given spoke108, such as, for example, MPLS, VXLAN, GENEVE, and/or any other network overlay protocol. 
Additionally, or alternatively, the one or more routing device(s) may encode the third 4-byte portion of the BGP large community string with an IP version 4 (IPv4) address and/or an identifier of an IP version 6 (IPv6) address of a network tunnel endpoint associated with the given spoke108(or the one or more routing devices provisioned in the given spoke network112). In examples where the network tunnel endpoint is an IPv6 address, the IPv6 address may be mapped to a 4-byte identifier. In such examples, a given hub node106may be configured to perform a lookup using the 4-byte identifier to determine the IPv6 address of the tunnel endpoint based on the VNI type included in the second portion of the BGP large community string indicating that the VNI type of a given spoke108is a VNI configured for IPv6 routing. A hub node106may be configured to decode the network routing information from the various portions of the BGP large community string. In some examples, while decoding the network routing information, a hub node106may be configured to assume that the IP address included in the third portion of the BGP large community string is an IPv4 address, unless an indication is otherwise provided. A hub node106may be configured to determine such an indication by decoding the second portion of the BGP large community, and determining, based at least in part on the VNI type, that the tunnel endpoint associated with a given spoke108is an IPv6 address. In such examples, a hub node106may then perform a lookup, based on a 4-byte identifier encoded in the third portion of the BGP large community string (in place of the IPv4 address) to determine the IPv6 address associated with the tunnel endpoint of the given spoke network112. 
Additionally, or alternatively, a hub node106may be configured to store the network routing information associated with a given spoke108and/or a given tenant114of the given spoke108as a mapping between the tenant VNI, the indication of the VNI type of the given spoke108, and/or the address associated with the spoke108. Additionally, or alternatively, the mapping may include only the indication of the VNI type of the given spoke108and the address associated with the spoke108. By storing the network routing information in this way, a hub node106(A) may be configured to send a data packet, from a source node114(1) provisioned in a first spoke108(1), directly to a second spoke108(3) that is remote from the hub node106(A) and hosting a destination node114(3) associated with the packet. For example, the hub node106(A) may identify, based on the mapping, the tenant VNI associated with the destination node114(3), the VNI type associated with the second spoke108(3), and the address associated with the second spoke108(3), and utilize the network routing information to send the data packet directly to the second spoke108(3), thus eliminating an additional network hop by foregoing sending the data packet to the hub node106(B) associated with the second spoke108(3). At “1,” hub node A106(A) provisioned in network site A104(A) and configured to transmit network traffic associated with a first spoke108(1), may receive a network advertisement from the first spoke108(1). In some examples, the network advertisement may be configured as a data packet, a BGP advertisement packet, and/or an application programming interface (API) call, such as, for example, a representational state transfer (REST) API call including details for configuring a given spoke network112with the computing resource network102. Additionally, or alternatively, the network advertisement may include a BGP large community string. 
The BGP large community string may comprise various portions encoded with the information, such as, for example, a first 4-byte portion encoded with a tenant VNI associated with a tenant node114(1) provisioned in the first spoke network112(1), a second 4-byte portion encoded with an indication of a virtual network instance (VNI) type associated with the first spoke108(1) and/or the first spoke network112(1), and/or a third 4-byte portion encoded with an address associated with the first spoke108(1) and/or the first spoke network112(1) (e.g., an internet protocol (IP) address associated with the first spoke network112(1) and/or a tunnel endpoint associated with the first spoke network112(1)). At “2,” hub node A106(A) may decode the network routing information from various portions of the BGP large community string. In some examples, while decoding the network routing information, hub node A106(A) may be configured to assume that the IP address included in the third portion of the BGP large community string is an IPv4 address, unless an indication is otherwise provided. Hub node A106(A) may be configured to determine such an indication by decoding the second portion of the BGP large community, and determining, based at least in part on the VNI type, that the tunnel endpoint associated with the first spoke108(1) is an IPv6 address. In such examples, hub node A106(A) may then perform a lookup, based on a 4-byte identifier encoded in the third portion of the BGP large community string (in place of the IPv4 address) to determine the IPv6 address associated with the tunnel endpoint of the first spoke network112(1). At “3,” hub node A106(A) may be configured to store the network routing information associated with the first spoke108(1) and/or the tenant node114(1) provisioned in the first spoke108(1) as a mapping between the tenant VNI, the indication of the VNI type of the first spoke108(1), and/or the address associated with the first spoke108(1). 
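The three-portion layout described above matches the 12-byte wire form of a BGP large community (three 4-byte values, per RFC 8092) and can be sketched with fixed-width packing. The function name and sample values below are assumptions for illustration only.

```python
import ipaddress
import struct

def encode_large_community(tenant_vni, vni_type, spoke_ip):
    """Pack the three 4-byte portions into a 12-byte BGP large
    community: tenant VNI, VNI type, and the spoke's IPv4 address
    (or, for IPv6 endpoints, a 4-byte identifier in its place)."""
    addr = int(ipaddress.IPv4Address(spoke_ip))
    return struct.pack("!III", tenant_vni, vni_type, addr)

# Hypothetical values: tenant VNI 5001, VNI type 1, spoke address.
community = encode_large_community(5001, 1, "198.51.100.7")
```

A receiving hub would unpack the same 12 bytes with `struct.unpack("!III", community)` to recover the three portions.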
Additionally, or alternatively, the mapping may include only the indication of the VNI type of the first spoke108(1) and the address associated with the first spoke108(1). At “4,” hub node A106(A) may be configured to further advertise the network routing information to additional hub nodes (e.g., hub node B106(B) and/or one or more hub node(s) N106(N)) in the hub and spoke network. For example, hub node A106(A) may send the network routing information and/or the BGP large community string to hub node B106(B) provisioned in site B104(B) of the hub and spoke network. In some examples, the example flow may continue from step “4” ofFIG.1Ato step “5” ofFIG.1B. Additionally, or alternatively, the example flow fromFIG.1Amay be executed simultaneously with the example flow ofFIG.1B. FIG.1Billustrates an example system-architecture diagram100and an example flow for a computing resource network102(e.g., a hub and spoke overlay network) including at least a first hub node106(A) provisioned in a first site104(A) and a second hub node106(B) provisioned in a second site104(B) having local spokes108(3),108(4) connected thereto to send data packets from a source node114(3),114(4) of one of the local spoke(s)108(3),108(4) directly to a destination node114(1),114(2) of a remote spoke108(1),108(2) connected to the first hub node106(A) in the first site104(A). The computing resource network102may comprise one or more data centers that include various networking components, such as, a network controller and/or network switches (also referred to as nodes) operating on physical servers. In some examples, physical server(s) may host one or more virtual machines. Each virtual machine may be configured to execute one of various operations and act as one or more virtual components for the computing resource network102, such as, for example, computing-based resources. In some examples, the physical server(s) may host any number of virtual machines. 
In some examples, physical server(s) in the computing resource network102may host the various network components of the computing resource network102, such as, for example, a hub node106, a spoke108(1)-(N), a spoke network112(1)-(N), and/or one or more tenant node(s)114(1)-(N), where N may be any integer greater than 1. Each of the spokes108may provide a hub node106of a given site104access to a spoke network112hosting one or more tenant node(s)114. For example, a first hub node106(A) provisioned in a first site104(A) of the computing resource network102may be configured to route network traffic associated with a first spoke108(1) and/or a second spoke108(2) provisioned in the first site104(A), such as, for example, network traffic originating from and/or destined to one or more tenant nodes114(1),114(2) hosted in one or more spoke networks112(1),112(2). Additionally, or alternatively, a second hub node106(B) provisioned in a second site104(B) of the computing resource network102may be configured to route network traffic associated with a third spoke108(3) and/or a fourth spoke108(4) provisioned in the second site104(B), such as, for example, network traffic originating from and/or destined to one or more tenant nodes114(3),114(4) hosted in one or more spoke networks112(3),112(4). Additionally, or alternatively, the first hub node106(A), the second hub node106(B), and/or one or more third hub node(s)106(N) may be communicatively coupled, such that they are configured to send network traffic to a given hub node106configured to route the network traffic to a destination spoke108provisioned in the same network site of the given hub node106. Additionally, or alternatively, each of the sites104may offer one or more services116implemented by way of a hub node106provisioned in a given site104. 
In some examples, the one or more services116may comprise a deep packet inspection (DPI) service, a cloud-delivered firewall (CDFW) service, a network address translation (NAT) service, a secure web gateway (SWG) service, a domain name service (DNS) layer security service, and/or a cloud access security broker (CASB) service. Take, for example, the computing resource network102configured as a hub and spoke overlay multiprotocol label switching (MPLS) network. As previously described, hub node A106(A) may be configured to route network traffic associated with one or more spoke(s)108(1),108(2) provisioned in network site A104(A) and/or hub node B106(B) may be configured to route network traffic associated with one or more spoke(s)108(3),108(4) provisioned in network site B104(B). That is, network traffic, destined for one or more tenant node(s)114(1),114(2) hosted in respective spoke network(s)112(1),112(2) connected to hub node A106(A) by way of the spoke(s)108(1),108(2) provisioned in site A104(A) and from one or more tenant nodes114(3)-(N) hosted in respective spoke network(s)112(3)-(N) and connected to respective hub node(s)106(B),106(N) by way of the spoke(s)108(3)-(N), is received at hub node A106(A) and routed to the spoke108connected to the spoke network112hosting the destination tenant node114. In other words, network traffic traversing the hub and spoke overlay network originating in site B104(B) and destined for site A104(A), is routed based on a first hop from the originating spoke108(3),108(4) to hub node B106(B), a second hop from hub node B106(B) to hub node A106(A), and a third hop from hub node A106(A) to the destination spoke108(1),108(2). 
However, by configuring a networking device associated with a spoke108to encode a border gateway protocol (BGP) large community string with network routing information associated with the spoke108and/or a tenant node114, a hub node106may utilize the network routing information to send packets directly to a remote spoke (e.g., from hub node A106(A) to spoke108(3)), thus eliminating the need for 3 network hops to route network traffic across network sites of the hub and spoke overlay network. At “5,” hub node B106(B) may receive a network advertisement from hub node A106(A). In some examples, the network advertisement may be configured as a data packet, a BGP advertisement packet, and/or an application programming interface (API) call, such as, for example, a representational state transfer (REST) API call including details for configuring a given spoke network112with the computing resource network102. Additionally, or alternatively, the network advertisement may include a BGP large community string. The BGP large community string may comprise various portions encoded with the information, such as, for example, a first 4-byte portion encoded with a tenant VNI associated with a tenant node114(1) provisioned in the first spoke network112(1), a second 4-byte portion encoded with an indication of a virtual network instance (VNI) type associated with the first spoke108(1) and/or the first spoke network112(1), and/or a third 4-byte portion encoded with an address associated with the first spoke108(1) and/or the first spoke network112(1) (e.g., an internet protocol (IP) address associated with the first spoke network112(1) and/or a tunnel endpoint associated with the first spoke network112(1)). At “6,” hub node B106(B) may decode the network routing information from various portions of the BGP large community string. 
In some examples, while decoding the network routing information, hub node B106(B) may be configured to assume that the IP address included in the third portion of the BGP large community string is an IPv4 address, unless an indication is otherwise provided. Hub node B106(B) may be configured to determine such an indication by decoding the second portion of the BGP large community, and determining, based at least in part on the VNI type, that the tunnel endpoint associated with the first spoke108(1) is an IPv6 address. In such examples, hub node B106(B) may then perform a lookup, based on a 4-byte identifier encoded in the third portion of the BGP large community string (in place of the IPv4 address) to determine the IPv6 address associated with the tunnel endpoint of the first spoke network112(1). At “7,” hub node B106(B) may be configured to store the network routing information associated with the first spoke108(1) and/or the tenant node114(1) provisioned in the first spoke108(1) as a mapping between the tenant VNI, the indication of the VNI type of the first spoke108(1), and/or the address associated with the first spoke108(1). Additionally, or alternatively, the mapping may include only the indication of the VNI type of the first spoke108(1) and the address associated with the first spoke108(1). At “8,” hub node B106(B) may receive a packet from a source tenant node114(4) hosted in a spoke network112(4) connected to a second spoke108(4) provisioned in site B104(B), to be transmitted to a destination tenant node114(1) hosted in a spoke network112(1) connected to the first spoke108(1). In some examples, the packet may include an inner header including next hop information for a destination tenant node114(1) provisioned in a spoke network112(1) and/or an outer header including next hop information for a given spoke108(1) of the hub and spoke overlay network102. 
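The inner/outer header arrangement described above can be sketched as a simple two-level structure. The field names and a dict standing in for a real packet are illustrative assumptions.

```python
# Hypothetical two-level encapsulation: the outer header carries the
# next hop for the remote spoke's tunnel endpoint (routing across the
# overlay), while the inner header carries the next hop for the
# destination tenant node (routing within the spoke network).
def encapsulate(payload, tenant_next_hop, spoke_endpoint):
    return {
        "outer": {"next_hop": spoke_endpoint},
        "inner": {"next_hop": tenant_next_hop},
        "payload": payload,
    }

packet = encapsulate(b"app-data", tenant_next_hop="10.0.0.5",
                     spoke_endpoint="198.51.100.7")
```

Only the outer header is consulted by hub nodes; the inner header becomes relevant once the packet reaches the destination spoke network.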
Hub node B106(B) may receive the data packet from the second spoke108(4) and identify the network routing information associated with the first spoke108(1) that was previously stored in the site B database110(B). At “9,” hub node B106(B) may be configured to leverage the network routing information associated with the first spoke108(1) and send the data packet directly to the first spoke108(1), without intervention by hub node A106(A) (e.g., without routing the data packet via an extra hop from the hub node B106(B) to hub node A106(A)). For example, hub node B106(B) may identify, based on the mapping stored in the Site B datastore110(B), the tenant VNI associated with the destination node114(1), the VNI type associated with the first spoke108(1), and/or the address associated with the first spoke108(1), and utilize the network routing information to send the data packet directly to the first spoke108(1), thus eliminating an additional network hop by foregoing sending the data packet to hub node A106(A) associated with the first spoke108(1). FIG.2illustrates an example border gateway protocol (BGP) large community string200including a first 4-byte portion202, a second 4-byte portion204, and/or a third 4-byte portion206. In some examples, the portions202,204, and/or206of the BGP large community string200may indicate an encoded tenant VNI208associated with a tenant node provisioned in a spoke network, an encoded VNI type and/or VNI associated with a spoke210, and/or an encoded internet protocol (IP) address212associated with a spoke. As described with respect toFIGS.1A and1B, one or more routing device(s) associated with a given spoke network may receive a tenant VNI for a given tenant node in the spoke network and may encode the first 4-byte portion of the BGP large community string with the tenant VNI. The tenant VNI may be configured as a VNI label used for tenant traffic, such as, for example, MPLS, VXLAN, GENEVE, and/or any other network overlay protocol. 
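The forwarding decision at step "9" can be sketched as a lookup in a mapping keyed by tenant VNI. The table layout, names, and fallback behavior below are hypothetical, not the disclosed implementation.

```python
# Hypothetical datastore mirroring steps "7"-"9": routes learned from
# advertisements are keyed by tenant VNI and record the remote spoke's
# VNI type and tunnel endpoint, so packets can bypass the remote hub.
ROUTE_MAP = {
    5001: {"vni_type": "MPLS", "endpoint": "198.51.100.7"},
}

def next_hop_for(tenant_vni, via_hub):
    """Return the remote spoke's endpoint directly when a mapping
    exists; otherwise fall back to the remote hub (the extra hop)."""
    entry = ROUTE_MAP.get(tenant_vni)
    return entry["endpoint"] if entry else via_hub
```

When the mapping is populated, the packet is sent straight to the spoke's tunnel endpoint; an unknown tenant VNI still traverses the intermediate hub.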
Additionally, or alternatively, the first 4-byte portion may be configured as a reserved 4-byte portion of the BGP large community string. Additionally, or alternatively, the one or more routing device(s) may encode the second 4-byte portion of the BGP large community string with an indication of the VNI type of the given spoke, such as, for example, MPLS, VXLAN, GENEVE, and/or any other network overlay protocol. Additionally, or alternatively, the one or more routing device(s) may encode the third 4-byte portion of the BGP large community string with an IP version 4 (IPv4) address and/or an identifier of an IP version 6 (IPv6) address of a network tunnel endpoint associated with the given spoke (or the one or more routing devices provisioned in the given spoke network). In examples where the network tunnel endpoint is an IPv6 address, the IPv6 address may be mapped to a 4-byte identifier. By encoding the network routing information into a BGP large community string in this way, a given hub node may be configured to perform a lookup using the 4-byte identifier to determine the IPv6 address of the tunnel endpoint based on the VNI type included in the second portion of the BGP large community string indicating that the VNI type of a given spoke is a VNI configured for IPv6 routing. FIGS.3and4illustrate flow diagrams of example methods300and400that illustrate aspects of the functions performed at least partly by the computing resource network102, the hub node(s)106, the spoke(s)108, the site database(s)110, and/or the spoke network(s)112as described inFIGS.1A and1B. The logical operations described herein with respect toFIGS.3and4may be implemented (1) as a sequence of computer-implemented acts or program modules running on a computing system and/or (2) as interconnected machine logic circuits or circuit modules within the computing system. 
In some examples, the method(s)300and/or400may be performed by a system comprising one or more processors and one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform the method(s)300and/or400. The implementation of the various components described herein is a matter of choice dependent on the performance and other requirements of the computing system. Accordingly, the logical operations described herein are referred to variously as operations, structural devices, acts, or modules. These operations, structural devices, acts, and modules can be implemented in software, in firmware, in special purpose digital logic, and any combination thereof. It should also be appreciated that more or fewer operations might be performed than shown inFIGS.3and4and described herein. These operations can also be performed in parallel, or in a different order than those described herein. Some or all of these operations can also be performed by components other than those specifically identified. Although the techniques in this disclosure are described with reference to specific components, in other examples, the techniques may be implemented by fewer components, more components, different components, or any configuration of components. FIG.3illustrates a flow diagram of an example method300for a first hub node provisioned in a first site of a computing resource network (e.g., a hub and spoke overlay network) to receive a border gateway protocol (BGP) advertisement packet from a local spoke and forward the BGP advertisement packet to a second hub node provisioned in a second site that is remote from the first site. 
In some examples, the computing resource network, the first site, the second site, the first hub node, the second hub node, and/or the local spoke may be configured as the computing resource network102, site A104(A), site B104(B), hub node A106(A), hub node B106(B), and/or the first spoke108(1) as described with respect toFIG.1A, respectively. At302, the method300includes receiving, at a first hub node of a hub and spoke network and from a first spoke of the hub and spoke network, a first network advertisement including a first border gateway protocol (BGP) large community associated with a first tenant node of the first spoke. In some examples, the first spoke may be associated with the first hub node, such that the first hub node is configured to route network traffic to and/or from the first spoke. Additionally, or alternatively, the BGP large community may be configured as the BGP large community string200as described with respect toFIG.2. At304, the method300includes decoding, from a first portion of the first BGP large community, a first indication of a first virtual network instance (VNI) type associated with the first spoke. In some examples, the first hub node may be configured to decode the first indication of the first VNI type associated with the first spoke from the first BGP large community. Additionally, or alternatively, the first portion and/or the first indication of the first VNI type may correspond to the second portion204and/or the encoded VNI of spoke210as described with respect toFIG.2, respectively. At306, the method300includes decoding, from a second portion of the first BGP large community, a first address of a first tunnel endpoint associated with the first spoke. In some examples, the first hub node may be configured to decode the first address of the first tunnel endpoint associated with the first spoke from the first BGP large community. 
Additionally, or alternatively, the second portion and/or the first address of the first tunnel endpoint may correspond to the third portion206and/or the encoded IP address of the spoke212as described with respect toFIG.2, respectively. At308, the method300includes storing a mapping between the first VNI type and/or the first address of the first tunnel endpoint. In some examples, the mapping may be stored in a database associated with the first hub node. At310, the method300includes sending, from the first hub node and to one or more second hub nodes of the hub and spoke network, the first network advertisement. In some examples, the one or more second hub nodes may be associated with one or more respective spokes of the hub and spoke network. In some examples, the first address of the first tunnel endpoint may be an Internet Protocol version 4 (IPv4) address. In some examples, the first address of the first tunnel endpoint may include an indication of an Internet Protocol version 6 (IPv6) address. In some examples, the IPv6 address may be mapped to a 4-byte identifier (e.g., the indication). Additionally, or alternatively, the first hub node may be configured to perform a lookup using the 4-byte identifier to determine the IPv6 address of the first tunnel endpoint based on the VNI type included in the second portion of the BGP large community indicating that the VNI type of the first spoke is a VNI configured for IPv6 routing. In examples where the network tunnel endpoint is an IPv6 address, a given hub node106may be configured to perform a lookup using the 4-byte identifier to determine the IPv6 address of the tunnel endpoint based on the VNI type included in the second portion of the BGP large community string indicating that the VNI type of a given spoke108is a VNI configured for IPv6 routing. 
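Because the third portion of a BGP large community holds only 4 bytes, a 16-byte IPv6 tunnel endpoint must be replaced by an identifier that both sides can resolve. A hypothetical allocator is sketched below; the disclosure does not specify how identifiers are assigned, so the counter-based scheme and all names here are assumptions.

```python
import ipaddress
import itertools

# Hypothetical identifier allocator: each distinct IPv6 tunnel
# endpoint is assigned a compact 4-byte identifier, and both
# directions of the mapping are retained for later lookups.
_counter = itertools.count(1)
IDENT_BY_IPV6 = {}
IPV6_BY_IDENT = {}

def identifier_for(endpoint):
    """Return a stable 4-byte identifier for an IPv6 endpoint,
    allocating a new one on first use."""
    addr = ipaddress.IPv6Address(endpoint)
    if addr not in IDENT_BY_IPV6:
        ident = next(_counter)
        IDENT_BY_IPV6[addr] = ident
        IPV6_BY_IDENT[ident] = addr
    return IDENT_BY_IPV6[addr]
```

The identifier, rather than the full address, is what would be carried in the third portion of the community; the receiving hub resolves it back through the reverse table.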
Additionally, or alternatively, the method300includes receiving, at the first hub node and from a second hub node of the one or more second hub nodes, a second BGP advertisement packet including a second BGP large community associated with a second tenant node of a second spoke of the hub and spoke network. In some examples, the second spoke may be associated with the second hub node, such that the second hub node may be configured to route network traffic to and/or from the second spoke. Additionally, or alternatively, the method300includes decoding, from a first portion of the second BGP large community, a second indication of a second VNI type associated with the second spoke. Additionally, or alternatively, the method300includes decoding, from a second portion of the second BGP large community, a second address of a second tunnel endpoint associated with the second spoke. Additionally, or alternatively, the method300includes storing, in the database associated with the first hub node, a second mapping between the second VNI type and the second address of the second tunnel endpoint. Additionally, or alternatively, the method300includes receiving, at the first hub node and from the first spoke, a data packet to be transmitted to the second tenant node. Additionally, or alternatively, the method300includes sending, based at least in part on the second mapping, the data packet from the first hub node and to the second spoke. Additionally, or alternatively, the method300includes receiving the data packet at the second spoke and from the first hub node. In some examples, the second hub node may be configured to transmit network traffic associated with the second spoke and/or the data packet may be sent from the first hub node directly to the second spoke. 
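The receive-decode-store-forward sequence of method300can be sketched end to end. This is one plausible layout only: keying the stored mapping by tenant VNI, and representing peer hubs as lists, are illustrative assumptions.

```python
import struct

def handle_advertisement(community, database, peers):
    """Sketch of steps 302-310: unpack the tenant VNI, VNI type, and
    tunnel-endpoint portions of a 12-byte BGP large community, store
    the mapping, and re-advertise to the other hub nodes."""
    tenant_vni, vni_type, endpoint = struct.unpack("!III", community)
    database[tenant_vni] = (vni_type, endpoint)  # decode + store (304-308)
    for peer in peers:                           # re-advertise (310)
        peer.append(community)

db, hub_b, hub_n = {}, [], []
# 0xC6336407 is 198.51.100.7 as a 32-bit integer.
handle_advertisement(struct.pack("!III", 5001, 1, 0xC6336407),
                     db, [hub_b, hub_n])
```

After one advertisement, the local database holds the (VNI type, endpoint) pair for tenant VNI 5001, and both peer hubs have received an identical copy of the community to process the same way.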
FIG.4illustrates a flow diagram of an example method400for a first hub node provisioned in a first site of a computing resource network (e.g., a hub and spoke overlay network) having local spokes connected thereto to receive a border gateway protocol (BGP) advertisement packet from a second hub node provisioned in a second site of the network, and send data packets from a source node of one of the local spoke(s) directly to a destination node of a remote spoke connected to the second hub node in the second site. In some examples, the first hub node and/or the first site may be configured as hub node B106(B) and/or site B104(B) as described with respect toFIG.1B, respectively. Additionally, or alternatively, the second hub node and/or the second site may be configured as hub node A106(A) and/or site A104(A) as described with respect toFIG.1B, respectively. Additionally, or alternatively, the computing resource network and/or the local spoke(s) may be configured as the computing resource network102and/or the spokes108(3),108(4) as described with respect toFIG.1B, respectively. At402, the method400includes receiving, at a first hub node of a hub and spoke network and from a second hub node associated with the hub and spoke network, a first border gateway protocol (BGP) advertisement packet. In some examples, the first BGP advertisement packet may include a first BGP large community associated with a first tenant node of a first spoke of the hub and spoke network. Additionally, or alternatively, the first hub node may be remote from the second hub node and/or the first spoke may be associated with the second hub node. Additionally, or alternatively, the BGP large community may be configured as the BGP large community string200as described with respect toFIG.2. At404, the method400includes decoding, from a first portion of the first BGP large community, a first indication of a first VNI type associated with the first spoke. 
In some examples, the first hub node may be configured to decode the first indication of the first VNI type associated with the first spoke from the first BGP large community. Additionally, or alternatively, the first portion and/or the first indication of the first VNI type may correspond to the second portion204and/or the encoded VNI of spoke210as described with respect toFIG.2, respectively. At406, the method400includes decoding, from a second portion of the first BGP large community, a first address of a first tunnel endpoint associated with the first spoke. In some examples, the first hub node may be configured to decode the first address of the first tunnel endpoint associated with the first spoke from the first BGP large community. Additionally, or alternatively, the second portion and/or the first address of the first tunnel endpoint may correspond to the third portion206and/or the encoded IP address of the spoke212as described with respect toFIG.2, respectively. At408, the method400includes storing a first mapping between the first VNI type and/or the first address of the first tunnel endpoint. In some examples, the mapping may be stored in a database associated with the first hub node. At410, the method400includes receiving, at the first hub node and from a second spoke of the hub and spoke network, a data packet to be transmitted to the first tenant node. In some examples, the second spoke may be associated with the first hub, such that the first hub may be configured to route network traffic to and/or from the second spoke. In some examples, the packet may include an inner header including next hop information for a destination tenant node provisioned in a spoke network and/or an outer header including next hop information for a given spoke node of the hub and spoke overlay network. At412, the method400includes sending the data packet from the first hub node and to the first spoke. 
In some examples, sending the data packet from the first hub node and to the first spoke may be based at least in part on the first mapping. In some examples, the first address of the first tunnel endpoint may be an Internet Protocol version 4 (IPv4) address. In some examples, the first address of the first tunnel endpoint may include an indication of an Internet Protocol version 6 (IPv6) address. In some examples, the IPv6 address may be mapped to a 4-byte identifier (e.g., the indication). Additionally, or alternatively, the first hub node may be configured to perform a lookup using the 4-byte identifier to determine the IPv6 address of the first tunnel endpoint based on the VNI type included in the second portion of the BGP large community indicating that the VNI type of the first spoke is a VNI configured for IPv6 routing. In some examples, the second hub node may be configured to transmit network traffic associated with the first spoke and/or the data packet may be sent from the first hub node directly to the first spoke. In some examples, the first hub node may be configured to transmit network traffic associated with the second spoke. Additionally, or alternatively, the method400includes receiving, at the first hub node and from the second spoke, a second BGP advertisement packet including a second BGP large community associated with the second tenant node. Additionally, or alternatively, the method400includes decoding, from a first portion of the second BGP large community, a second indication of a second VNI type associated with the second spoke. Additionally, or alternatively, the method400includes decoding, from a second portion of the second BGP large community, a second address of a second tunnel endpoint associated with the second spoke. Additionally, or alternatively, the method400includes storing, in the database associated with the first hub node, a second mapping between the second VNI type and the second address of the second tunnel endpoint. 
Additionally, or alternatively, the method400includes sending, from the first hub node and to the second hub node, the second BGP advertisement packet. In some examples, the data packet may be a first data packet. Additionally, or alternatively, the method400may include receiving a second data packet at the second spoke and from the second hub node. Additionally, or alternatively, the method400may include sending the second BGP advertisement packet from the first hub node and to one or more third hub nodes associated with the hub and spoke network. In some examples, the hub and spoke network may be an overlay network. FIG.5is a computing system diagram illustrating a configuration for a data center500that can be utilized to implement aspects of the technologies disclosed herein. The example data center500shown inFIG.5includes several server computers502A-502E (which might be referred to herein singularly as “a server computer502” or in the plural as “the server computers502”) for providing computing resources. In some examples, the server computers502may include, or correspond to, the servers associated with the data center104described herein with respect toFIG.1. The server computers502can be standard tower, rack-mount, or blade server computers configured appropriately for providing the computing resources described herein. As mentioned above, the computing resources provided by the computing resource network102can be data processing resources such as VM instances or hardware computing systems, database clusters, computing clusters, storage clusters, data storage resources, database resources, networking resources, and others. Some of the servers502can also be configured to execute a resource manager capable of instantiating and/or managing the computing resources. In the case of VM instances, for example, the resource manager can be a hypervisor or another type of program configured to enable the execution of multiple VM instances on a single server computer502. 
Server computers502in the data center500can also be configured to provide network services and other types of services. In the example data center500shown inFIG.5, an appropriate LAN508is also utilized to interconnect the server computers502A-502E. It should be appreciated that the configuration and network topology described herein have been greatly simplified and that many more computing systems, software components, networks, and networking devices can be utilized to interconnect the various computing systems disclosed herein and to provide the functionality described above. Appropriate load balancing devices or other types of network infrastructure components can also be utilized for balancing a load between data centers500, between each of the server computers502A-502E in each data center500, and, potentially, between computing resources in each of the server computers502. It should be appreciated that the configuration of the data center500described with reference toFIG.5is merely illustrative and that other implementations can be utilized. In some examples, the server computers502may each execute a hub node106, one or more spoke(s)108, one or more datastore(s)110, and/or one or more services116. In some instances, the computing resource network102may provide computing resources, like application containers, VM instances, and storage, on a permanent or an as-needed basis. Among other types of functionality, the computing resources provided by the computing resource network102may be utilized to implement the various services described above. The computing resources provided by the computing resource network102can include various types of computing resources, such as data processing resources like application containers and VM instances, data storage resources, networking resources, data communication resources, network services, and the like. 
Each type of computing resource provided by the computing resource network102can be general-purpose or can be available in a number of specific configurations. For example, data processing resources can be available as physical computers or VM instances in a number of different configurations. The VM instances can be configured to execute applications, including web servers, application servers, media servers, database servers, some or all of the network services described above, and/or other types of programs. Data storage resources can include file storage devices, block storage devices, and the like. The computing resource network102can also be configured to provide other types of computing resources not mentioned specifically herein. The computing resources provided by the computing resource network102may be enabled in one embodiment by one or more data centers500(which might be referred to herein singularly as “a data center500” or in the plural as “the data centers500”). The data centers500are facilities utilized to house and operate computer systems and associated components. The data centers500typically include redundant and backup power, communications, cooling, and security systems. The data centers500can also be located in geographically disparate locations. One illustrative embodiment for a data center500that can be utilized to implement the technologies disclosed herein will be described below with regard toFIG.6. FIG.6shows an example computer architecture for a computing device (or network routing device)502capable of executing program components for implementing the functionality described above. The computer architecture shown inFIG.6illustrates a conventional server computer, workstation, desktop computer, laptop, tablet, network appliance, e-reader, smartphone, or other computing device, and can be utilized to execute any of the software components presented herein. 
The computing device502may, in some examples, correspond to a physical server of a data center described herein with respect toFIGS.1Aand/orFIG.1B. The computing device502includes a baseboard602, or “motherboard,” which is a printed circuit board to which a multitude of components or devices can be connected by way of a system bus or other electrical communication paths. In one illustrative configuration, one or more central processing units (“CPUs”)604operate in conjunction with a chipset606. The CPUs604can be standard programmable processors that perform arithmetic and logical operations necessary for the operation of the computing device502. The CPUs604perform operations by transitioning from one discrete, physical state to the next through the manipulation of switching elements that differentiate between and change these states. Switching elements generally include electronic circuits that maintain one of two binary states, such as flip-flops, and electronic circuits that provide an output state based on the logical combination of the states of one or more other switching elements, such as logic gates. These basic switching elements can be combined to create more complex logic circuits, including registers, adders-subtractors, arithmetic logic units, floating-point units, and the like. The chipset606provides an interface between the CPUs604and the remainder of the components and devices on the baseboard602. The chipset606can provide an interface to a RAM608, used as the main memory in the computing device502. The chipset606can further provide an interface to a computer-readable storage medium such as a read-only memory (“ROM”)610or non-volatile RAM (“NVRAM”) for storing basic routines that help to start up the computing device502and to transfer information between the various components and devices. 
The ROM610or NVRAM can also store other software components necessary for the operation of the computing device502in accordance with the configurations described herein. The computing device502can operate in a networked environment using logical connections to remote computing devices and computer systems through a network, such as the network626. The chipset606can include functionality for providing network connectivity through a NIC612, such as a gigabit Ethernet adapter. The NIC612is capable of connecting the computing device502to other computing devices over the network626. It should be appreciated that multiple NICs612can be present in the computing device502, connecting the computer to other types of networks and remote computer systems. The computing device502can be connected to a storage device618that provides non-volatile storage for the computing device502. The storage device618can store an operating system620, programs622, and data, which have been described in greater detail herein. The storage device618can be connected to the computing device502through a storage controller614connected to the chipset606. The storage device618can consist of one or more physical storage units. The storage controller614can interface with the physical storage units through a serial attached SCSI (“SAS”) interface, a serial advanced technology attachment (“SATA”) interface, a fiber channel (“FC”) interface, or other type of interface for physically connecting and transferring data between computers and physical storage units. The computing device502can store data on the storage device618by transforming the physical state of the physical storage units to reflect the information being stored. The specific transformation of physical state can depend on various factors, in different embodiments of this description. 
Examples of such factors can include, but are not limited to, the technology used to implement the physical storage units, whether the storage device618is characterized as primary or secondary storage, and the like. For example, the computing device502can store information to the storage device618by issuing instructions through the storage controller614to alter the magnetic characteristics of a particular location within a magnetic disk drive unit, the reflective or refractive characteristics of a particular location in an optical storage unit, or the electrical characteristics of a particular capacitor, transistor, or other discrete component in a solid-state storage unit. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this description. The computing device502can further read information from the storage device618by detecting the physical states or characteristics of one or more particular locations within the physical storage units. In addition to the mass storage device618described above, the computing device502can have access to other computer-readable storage media to store and retrieve information, such as program modules, data structures, or other data. It should be appreciated by those skilled in the art that computer-readable storage media is any available media that provides for the non-transitory storage of data and that can be accessed by the computing device502. In some examples, the operations performed by the computing resource network102, and/or any components included therein, may be supported by one or more devices similar to computing device502. Stated otherwise, some or all of the operations performed by the computing resource network102, and/or any components included therein, may be performed by one or more computing devices502operating in a cloud-based arrangement. 
By way of example, and not limitation, computer-readable storage media can include volatile and non-volatile, removable and non-removable media implemented in any method or technology. Computer-readable storage media includes, but is not limited to, RAM, ROM, erasable programmable ROM (“EPROM”), electrically-erasable programmable ROM (“EEPROM”), flash memory or other solid-state memory technology, compact disc ROM (“CD-ROM”), digital versatile disk (“DVD”), high definition DVD (“HD-DVD”), BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information in a non-transitory fashion. As mentioned briefly above, the storage device618can store an operating system620utilized to control the operation of the computing device502. According to one embodiment, the operating system comprises the LINUX operating system. According to another embodiment, the operating system comprises the WINDOWS® SERVER operating system from MICROSOFT Corporation of Redmond, Washington. According to further embodiments, the operating system can comprise the UNIX operating system or one of its variants. It should be appreciated that other operating systems can also be utilized. The storage device618can store other system or application programs and data utilized by the computing device502. In one embodiment, the storage device618or other computer-readable storage media is encoded with computer-executable instructions which, when loaded into the computing device502, transform the computer from a general-purpose computing system into a special-purpose computer capable of implementing the embodiments described herein. These computer-executable instructions transform the computing device502by specifying how the CPUs604transition between states, as described above. 
According to one embodiment, the computing device502has access to computer-readable storage media storing computer-executable instructions which, when executed by the computing device502, perform the various processes described above with regard toFIGS.1A,1B,3, and/or4. The computing device502can also include computer-readable storage media having instructions stored thereupon for performing any of the other computer-implemented operations described herein. The computing device502can also include one or more input/output controllers616for receiving and processing input from a number of input devices, such as a keyboard, a mouse, a touchpad, a touch screen, an electronic stylus, or other type of input device. Similarly, an input/output controller616can provide output to a display, such as a computer monitor, a flat-panel display, a digital projector, a printer, or other type of output device. It will be appreciated that the computing device502might not include all of the components shown inFIG.6, can include other components that are not explicitly shown inFIG.6, or might utilize an architecture completely different than that shown inFIG.6. The server computer502may support a virtualization layer624, such as one or more components associated with the computing resource network102, such as, for example, the hub node106, the service(s)116, the spoke(s)108, and/or the data store110. At “1,” a first hub node106may receive a BGP large community advertisement packet and/or a packet including a BGP large community string from a first spoke local to the first hub node. The first hub node106may decode network routing information from various portions of the BGP large community string and/or store the network routing information. At “2,” the first hub node may forward the packet, including the BGP large community string, to a second hub node106that is remote from the first hub node106(e.g., provisioned in a network site separate from the first hub node106). 
The second hub node106may decode network routing information from various portions of the BGP large community string and/or store the network routing information in a datastore110associated with the second hub node106. At “3,” the second hub node106may receive a data packet from a second spoke108local to the second hub node106, and the second hub node106may send the data packet from the second hub node106directly to the first spoke108that is remote from the second hub node106, based on the network routing information associated with the first spoke stored in the datastore110associated with the second hub node106. While the invention is described with respect to the specific examples, it is to be understood that the scope of the invention is not limited to these specific examples. Since other modifications and changes varied to fit particular operating requirements and environments will be apparent to those skilled in the art, the invention is not considered limited to the example chosen for purposes of disclosure, and covers all changes and modifications which do not constitute departures from the true spirit and scope of this invention. Although the application describes embodiments having specific structural features and/or methodological acts, it is to be understood that the claims are not necessarily limited to the specific features or acts described. Rather, the specific features and acts are merely illustrative of some embodiments that fall within the scope of the claims of the application.
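The large-community decoding in the flow above can be sketched as follows. A BGP large community (RFC 8092) is a triple of three 32-bit unsigned integers, written textually as "A:B:C". The patent does not spell out which routing fields map to which position, so the field names below are placeholders chosen for the sketch.

```python
def decode_large_community(community: str) -> dict:
    """Decode a BGP large community string "A:B:C" (RFC 8092) into its
    three 32-bit unsigned integer fields.

    The dictionary keys follow RFC 8092's generic field names; how a hub
    node would map them onto spoke routing information is an assumption
    of this sketch, not something the patent specifies.
    """
    parts = community.split(":")
    if len(parts) != 3:
        raise ValueError("a large community has exactly three fields")
    values = [int(p) for p in parts]
    for v in values:
        if not 0 <= v <= 0xFFFFFFFF:
            raise ValueError("each field is a 32-bit unsigned integer")
    return {
        "global_admin": values[0],   # conventionally the AS number
        "local_data_1": values[1],
        "local_data_2": values[2],
    }
```

A receiving hub node could then store the decoded fields in its datastore keyed by spoke, and later consult that entry to send a data packet directly to a remote spoke.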
With respect to the description of the drawings, the same or similar reference signs may be used for the same or similar elements. MODE FOR CARRYING OUT THE INVENTION Hereinafter, certain embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. However, it should be understood that the present disclosure is not limited to specific embodiments, but rather includes various modifications, equivalents and/or alternatives of certain embodiments of the present disclosure. FIG.1is a block diagram illustrating an electronic device101in a network environment100according to certain embodiments. Referring toFIG.1, the electronic device101in the network environment100may communicate with an electronic device102via a first network198(e.g., a short-range wireless communication network), or an electronic device104or a server108via a second network199(e.g., a long-range wireless communication network). According to an embodiment, the electronic device101may communicate with the electronic device104via the server108. According to an embodiment, the electronic device101may include a processor120, memory130, an input device150, a sound output device155, a display device160, an audio module170, a sensor module176, an interface177, a haptic module179, a camera module180, a power management module188, a battery189, a communication module190, a subscriber identification module(SIM)196, or an antenna module197. In some embodiments, at least one (e.g., the display device160or the camera module180) of the components may be omitted from the electronic device101, or one or more other components may be added in the electronic device101. In some embodiments, some of the components may be implemented as single integrated circuitry. For example, the sensor module176(e.g., a fingerprint sensor, an iris sensor, or an illuminance sensor) may be implemented as embedded in the display device160(e.g., a display). 
The processor120may execute, for example, software (e.g., a program140) to control at least one other component (e.g., a hardware or software component) of the electronic device101coupled with the processor120, and may perform various data processing or computation. According to one embodiment, as at least part of the data processing or computation, the processor120may load a command or data received from another component (e.g., the sensor module176or the communication module190) in volatile memory132, process the command or the data stored in the volatile memory132, and store resulting data in non-volatile memory134. According to an embodiment, the processor120may include a main processor121(e.g., a central processing unit (CPU) or an application processor (AP)), and an auxiliary processor123(e.g., a graphics processing unit (GPU), an image signal processor (ISP), a sensor hub processor, or a communication processor (CP)) that is operable independently from, or in conjunction with, the main processor121. Additionally or alternatively, the auxiliary processor123may be adapted to consume less power than the main processor121, or to be specific to a specified function. The auxiliary processor123may be implemented as separate from, or as part of the main processor121. The auxiliary processor123may control at least some of functions or states related to at least one component (e.g., the display device160, the sensor module176, or the communication module190) among the components of the electronic device101, instead of the main processor121while the main processor121is in an inactive (e.g., sleep) state, or together with the main processor121while the main processor121is in an active state (e.g., executing an application). 
According to an embodiment, the auxiliary processor123(e.g., an image signal processor or a communication processor) may be implemented as part of another component (e.g., the camera module180or the communication module190) functionally related to the auxiliary processor123. The memory130may store various data used by at least one component (e.g., the processor120or the sensor module176) of the electronic device101. The various data may include, for example, software (e.g., the program140) and input data or output data for a command related thereto. The memory130may include the volatile memory132or the non-volatile memory134. The program140may be stored in the memory130as software, and may include, for example, an operating system (OS)142, middleware144, or an application146. The input device150may receive a command or data to be used by another component (e.g., the processor120) of the electronic device101, from the outside (e.g., a user) of the electronic device101. The input device150may include, for example, a microphone, a mouse, a keyboard, or a digital pen (e.g., a stylus pen). The sound output device155may output sound signals to the outside of the electronic device101. The sound output device155may include, for example, a speaker or a receiver. The speaker may be used for general purposes, such as playing multimedia or playing a recording, and the receiver may be used for an incoming call. According to an embodiment, the receiver may be implemented as separate from, or as part of, the speaker. The display device160may visually provide information to the outside (e.g., a user) of the electronic device101. The display device160may include, for example, a display, a hologram device, or a projector and control circuitry to control a corresponding one of the display, hologram device, and projector. 
According to an embodiment, the display device160may include touch circuitry adapted to detect a touch, or sensor circuitry (e.g., a pressure sensor) adapted to measure the intensity of force incurred by the touch. The audio module170may convert a sound into an electrical signal and vice versa. According to an embodiment, the audio module170may obtain the sound via the input device150, or output the sound via the sound output device155or a headphone of an external electronic device (e.g., an electronic device102) directly (e.g., wiredly) or wirelessly coupled with the electronic device101. The sensor module176may detect an operational state (e.g., power or temperature) of the electronic device101or an environmental state (e.g., a state of a user) external to the electronic device101, and then generate an electrical signal or data value corresponding to the detected state. According to an embodiment, the sensor module176may include, for example, a gesture sensor, a gyro sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor. The interface177may support one or more specified protocols to be used for the electronic device101to be coupled with the external electronic device (e.g., the electronic device102) directly (e.g., wiredly) or wirelessly. According to an embodiment, the interface177may include, for example, a high-definition multimedia interface (HDMI), a universal serial bus (USB) interface, a secure digital (SD) card interface, or an audio interface. A connecting terminal178may include a connector via which the electronic device101may be physically connected with the external electronic device (e.g., the electronic device102). 
According to an embodiment, the connecting terminal178may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (e.g., a headphone connector). The haptic module179may convert an electrical signal into a mechanical stimulus (e.g., a vibration or a movement) or electrical stimulus which may be recognized by a user via their tactile sensation or kinesthetic sensation. According to an embodiment, the haptic module179may include, for example, a motor, a piezoelectric element, or an electric stimulator. The camera module180may capture a still image or moving images. According to an embodiment, the camera module180may include one or more lenses, image sensors, image signal processors, or flashes. The power management module188may manage power supplied to the electronic device101. According to one embodiment, the power management module188may be implemented as at least part of, for example, a power management integrated circuit (PMIC). The battery189may supply power to at least one component of the electronic device101. According to an embodiment, the battery189may include, for example, a primary cell which is not rechargeable, a secondary cell which is rechargeable, or a fuel cell. The communication module190may support establishing a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device101and the external electronic device (e.g., the electronic device102, the electronic device104, or the server108) and performing communication via the established communication channel. The communication module190may include one or more communication processors that are operable independently from the processor120(e.g., the application processor (AP)) and support a direct (e.g., wired) communication or a wireless communication. 
According to an embodiment, the communication module190may include a wireless communication module192(e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module194(e.g., a local area network (LAN) communication module or a power line communication (PLC) module). A corresponding one of these communication modules may communicate with the external electronic device via the first network198(e.g., a short-range communication network, such as Bluetooth™, wireless-fidelity (Wi-Fi) direct, or infrared data association (IrDA)) or the second network199(e.g., a long-range communication network, such as a cellular network, the Internet, or a computer network (e.g., LAN or wide area network (WAN))). These various types of communication modules may be implemented as a single component (e.g., a single chip), or may be implemented as multiple components (e.g., multiple chips) separate from each other. The wireless communication module192may identify and authenticate the electronic device101in a communication network, such as the first network198or the second network199, using subscriber information (e.g., international mobile subscriber identity (IMSI)) stored in the subscriber identification module196. The antenna module197may transmit or receive a signal or power to or from the outside (e.g., the external electronic device) of the electronic device101. According to an embodiment, the antenna module197may include an antenna including a radiating element implemented using a conductive material or a conductive pattern formed in or on a substrate (e.g., PCB). According to an embodiment, the antenna module197may include a plurality of antennas. 
In such a case, at least one antenna appropriate for a communication scheme used in the communication network, such as the first network198or the second network199, may be selected, for example, by the communication module190(e.g., the wireless communication module192) from the plurality of antennas. The signal or the power may then be transmitted or received between the communication module190and the external electronic device via the selected at least one antenna. According to an embodiment, another component (e.g., a radio frequency integrated circuit (RFIC)) other than the radiating element may be additionally formed as part of the antenna module197. At least some of the above-described components may be coupled mutually and communicate signals (e.g., commands or data) therebetween via an inter-peripheral communication scheme (e.g., a bus, general purpose input and output (GPIO), serial peripheral interface (SPI), or mobile industry processor interface (MIPI)). According to an embodiment, commands or data may be transmitted or received between the electronic device101and the external electronic device104via the server108coupled with the second network199. Each of the electronic devices102and104may be a device of a same type as, or a different type, from the electronic device101. According to an embodiment, all or some of operations to be executed at the electronic device101may be executed at one or more of the external electronic devices102,104, or108. For example, if the electronic device101should perform a function or a service automatically, or in response to a request from a user or another device, the electronic device101, instead of, or in addition to, executing the function or the service, may request the one or more external electronic devices to perform at least part of the function or the service. 
The one or more external electronic devices receiving the request may perform the at least part of the function or the service requested, or an additional function or an additional service related to the request, and transfer an outcome of the performing to the electronic device101. The electronic device101may provide the outcome, with or without further processing of the outcome, as at least part of a reply to the request. To that end, a cloud computing, distributed computing, or client-server computing technology may be used, for example. FIG.2illustrates a topology200of a Bluetooth network environment according to certain embodiments. Referring toFIG.2, the devices (e.g.,201,202-1,202-2, and202-3) included in the topology200may include components, at least some of which are the same as or similar to those of the electronic device101illustrated inFIG.1, and may execute functions, at least some of which are the same as or similar to those of the electronic device101. For example, the devices (e.g.,201,202-1,202-2, and202-3) may perform wireless communication at a short range according to a Bluetooth network defined by the Bluetooth SIG. The Bluetooth network may include, for example, a Bluetooth legacy network and a Bluetooth low energy (BLE) network. According to an embodiment, the devices (e.g.,201,202-1,202-2, and202-3) may perform wireless communication via one or both of the Bluetooth legacy network and the BLE network. According to an embodiment, the user device201may operate as a data source device. For example, the user device201may be an electronic device, which transmits media data such as music to another electronic device connected to the user device201. According to an embodiment, the first device202-1, the second device202-2, and the third device202-3may be configured to output media data (e.g., audio signal). 
For example, the user device201may include a user terminal such as a smartphone, a tablet computer, a desktop computer, or a laptop computer, and the other devices202-1,202-2, and202-3may include an accessory device such as an earphone, a headset, a speaker, a mouse, a keyboard, or a display device. According to an embodiment, the user device201may be connected to one or more external devices (e.g., the first device202-1, the second device202-2, and/or the third device202-3) based on a Bluetooth communication protocol. For example, the user device201may be connected to an external device by exchanging information with the external device according to a specified sequence. When connecting to a plurality of external devices, the user device201may perform a connection operation with each of the plurality of external devices. For example, the user device201may recognize the first device202-1using wireless communication (e.g., Bluetooth (e.g., Bluetooth legacy and/or BLE)). The user device201may recognize the first device202-1by receiving an advertising signal broadcast or multicast by the first device202-1. For example, the first device202-1may transmit an advertising signal including information related to a connection or a connection request. For example, the advertising signal may include at least one of identification information about the first device202-1, user's account information, information indicating whether another device is currently paired, information about a device capable of being simultaneously paired, or information about transmission power, sensing area, or remaining battery capacity. The user device201may perform a connection operation with the recognized first device202-1based on a Bluetooth or BLE protocol. For example, the user device201may recognize the first device202-1based on the Bluetooth and/or BLE protocol and may perform a connection operation with the first device202-1based on the Bluetooth or BLE protocol. 
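An advertising signal like the one described above carries its fields as standard BLE AD structures: one length byte, one AD-type byte, then length-1 data bytes (Bluetooth Core Specification, Vol. 3, Part C). A minimal parser over a raw advertising payload, assuming the payload is available as a byte string:

```python
def parse_ad_structures(payload: bytes) -> list:
    """Split a BLE advertising payload into (ad_type, data) pairs.

    Each AD structure is: [length][AD type][data...], where the length
    byte counts the AD-type byte plus the data bytes. A zero-length
    structure terminates the significant part of the payload.
    """
    structures = []
    i = 0
    while i < len(payload):
        length = payload[i]
        if length == 0:
            break  # early-terminator: the rest is non-significant padding
        ad_type = payload[i + 1]
        data = payload[i + 2 : i + 1 + length]
        structures.append((ad_type, data))
        i += 1 + length
    return structures
```

For example, a payload containing a Flags structure (AD type 0x01) followed by a Complete Local Name structure (AD type 0x09) parses into two pairs, from which a user device could read the advertising device's name for its connection user interface. Which of the patent's fields (account information, pairing state, remaining battery capacity) would travel in which AD type is not specified, so this only illustrates the container format.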
According to an embodiment, upon receiving the advertising signal, the user device201may output, to a display, a user interface for connecting to the first device202-1. For example, the user interface may include identification information about an identified device (e.g., the first device202-1). For example, the user interface may include an indicator indicating whether the identified device has been paired with the user device201in the past. For example, the user interface may include information about a user account associated with the user device201and/or the first device202-1. For example, upon receiving a user input (e.g., Bluetooth function activation input), the user device201may identify another electronic device and may provide identified electronic device information to a user. The user device201may perform connection with an external electronic device (e.g., the first device202-1) based on a user input (e.g., input for selecting a connection device) to at least one of identified electronic devices. For example, the user device201may transmit a connection request to the selected first device202-1and may connect to the first device202-1upon receiving an acceptance response from the first device202-1. According to an embodiment, when connected to the first device202-1, the user device201may transmit information associated with the first device202-1to an external server203(e.g., Internet-of-things (IoT) server or Samsung Connect™ server). For example, when the user device201is connected to, among other external electronic devices linked with an account of the user device201, the first device202-1for the first time, the user device201may register information about the connected first device202-1in the external server203. For another example, if the user device201has been connected to the first device202-1in the past, the user device201may transmit state information associated with the first device202-1and/or first link205to the external server203. 
For example, the user device201may manage other electronic devices based on an account (e.g., user account associated with the user device201) using information of the external server203. According to an embodiment, the user device201may store, in a memory, information about a connected device (e.g., the first device202-1) and/or link (e.g., the first link205). For example, the user device201may store information of the first device202-1in a memory or update the same. According to an embodiment, the user device201may receive information of another electronic device from the external server203. For example, the second device202-2may be connected to the first device202-1via a third link215, and may transmit information of the first device202-1to the external server203. The user device201may receive, from the external server203, the information of the first device202-1transmitted from the second device202-2. For example, the user device201, the first device202-1, and the second device202-2may be electronic devices associated with the same account in the external server203. According to an embodiment, the user device201may transmit data to a connected external device. For example, the user device201may transmit data to the first device202-1via the first link205. The user device201or the first device202-1may be configured as a master device during a connection procedure. For example, in the case of legacy Bluetooth communication, the user device201may transmit data using a channel resource (e.g., frequency hopping channel) generated based on a clock of a master device of the first link205. For another example, in the case of BLE communication, the user device201may transmit data to the first device202-1in each connection interval. According to an embodiment, the first device202-1may receive data from the user device201via the first link205. 
For example, the first device202-1may transmit, to the user device201, acknowledgement (ACK)/negative ACK (NACK) at least partially based on the data received via the first link205. The first device202-1may transmit ACK/NACK to the user device201via the first link205. According to an embodiment, the user device201may form the first link205with the first device202-1, and may transmit a data packet via the first link205. For example, at least one other electronic device (e.g., the second device202-2and/or the third device202-3) may monitor the first link205in order to receive (e.g., listen for) a data packet transmitted from the user device201to the first device202-1via the first link205. In this case, the user device201unicasts data to the first device202-1, but the data transmitted from the user device201may be received by not only the first device202-1but also another electronic device (e.g., the second device202-2and/or the third device202-3). Hereinafter, the second device202-2is described as monitoring the first link205, but embodiments of the present disclosure are not limited thereto. Another electronic device (e.g., the third device202-3) may also monitor the first link205in a manner similar to that in which the second device202-2monitors the first link205. According to an embodiment, the user device201may be connected to the second device202-2based on a Bluetooth protocol. For example, the user device201may be connected to the second device202-2via a second link210. According to an embodiment, the user device201may transmit first link information to the second device202-2via the second link210. The second device202-2, for example, upon receiving the first link information, may transmit ACK/NACK for the first link information to the user device201via the second link210. For example, the first link information may include information of the first link205for the second device202-2to receive data transferred via the first link205. 
For example, the first link information may include address information (e.g., the Bluetooth address of the master device of the first link205, the Bluetooth address of the user device201, and/or the Bluetooth address of the first device202-1), piconet clock information (e.g. clock native (CLKN) of the master device of the first link205), logical transport (LT) address information (e.g., information allocated by the master device of the first link205), used channel map information, link key information, service discovery protocol (SDP) information (e.g., service and/or profile information associated with the first link205), and/or supported feature information. The first link information may further include, for example, an extended inquiry response (EIR) packet. The EIR packet may include resource control information about the first link205and/or information about a manufacturer. According to an embodiment, the second device202-2may receive (e.g., listen on) communication between the user device201and the first device202-1via the first link205using the first link information. For example, the second device202-2may determine a first link205hopping channel (or frequency hopping channel) using the first link information, and may decrypt encrypted data packets through key information. According to an embodiment, the first device202-1may transmit a response message to the user device201in order to notify that a data packet has been received normally. The response message may include a positive acknowledgement (ACK) message indicating that the data packet has been received normally or a negative acknowledgement (NACK) message indicating that the data packet has not been received normally. The response message may include, for example, 1-bit information. 
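As a rough sketch, the first link information enumerated above could be modeled as a record. All field names here are illustrative assumptions; the description does not fix a packet layout:

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class FirstLinkInfo:
    """Illustrative container for the first link information shared with
    the second device; field names are assumptions, not a defined format."""
    master_bd_addr: bytes             # Bluetooth address of the first link's master
    piconet_clkn: int                 # native clock (CLKN) of the master
    lt_addr: int                      # logical transport address allocated by the master
    channel_map: int                  # 79-bit bitmap of used channels
    link_key: Optional[bytes] = None  # needed to decrypt monitored packets
    sdp_info: List[str] = field(default_factory=list)  # service/profile identifiers

def can_decrypt_monitored_traffic(info: FirstLinkInfo) -> bool:
    """Monitoring uses the master address and clock to follow the hopping
    sequence; decrypting encrypted packets additionally needs the link key."""
    return info.link_key is not None
```

A second device holding such a record can follow the first link's hopping channel, and, when the link key is present, decrypt the packets it overhears.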
If an ACK message is received after transmitting a data packet, the user device201may transmit a next data packet, and if a NACK message is received or a response message is not received within a specified time, the user device201may retransmit the same data packet. According to an embodiment, the first device202-1may transmit the response message to the user device201based on ACK/NACK from the second device202-2. For example, the second device202-2may monitor a data packet transmitted from the user device201via the first link205, and may transmit a reception result of the transmitted data packet to the first device202-1via the third link215. For example, the second device202-2may transmit ACK to the first device202-1via the third link215if the data packet is successfully received from the user device201, and may transmit NACK to the first device202-1via the third link215if the data packet is not successfully received. When NACK is received from the second device202-2, the first device202-1may transmit a response message including NACK to the user device201via the first link205. When data is not successfully received from the user device201, the first device202-1may transmit a response message including NACK to the user device201via the first link205. The first device202-1may transmit a response message including ACK to the user device201via the first link205only when data is successfully received from the user device201and ACK for the data is received from the second device202-2. According to an embodiment, the second device202-2may transmit a response message to the user device201based on ACK/NACK from the first device202-1. For example, the first device202-1may transmit a reception result of a data packet transmitted via the first link205to the second device202-2via the third link215. 
For example, the first device202-1may transmit ACK to the second device202-2via the third link215if the data packet is successfully received from the user device201, and may transmit NACK to the second device202-2via the third link215if the data packet is not successfully received. For another example, the first device202-1may not transmit ACK to the second device202-2if the packet is not successfully received. When NACK is received or ACK is not received from the first device202-1, the second device202-2may transmit a response message including NACK to the user device201via the second link210. When data is not successfully received from the user device201, the second device202-2may transmit a response message including NACK to the user device201via the second link210. The second device202-2may transmit a response message including ACK to the user device201via the second link210only when data is successfully received from the user device201and ACK for the data is received from the first device202-1. According to an embodiment, the first device202-1and the second device202-2may transmit ACK/NACK at different timings. For example, the first device202-1and the second device202-2may be configured to transmit ACK/NACK for data from the user device201at different timings within one time interval (e.g., slot or time interval). The first device202-1and the second device202-2may transmit ACK/NACK to the user device201using different sub-slots among a plurality of sub-slots constituting one slot. The first device202-1and the second device202-2may transmit ACK/NACK at non-overlapping timings within one time interval (e.g., time interval set for data transmission). For example, the first device202-1and the second device202-2may transmit ACK/NACK using the first link205or the second link210. According to an embodiment, the second device202-2may transmit NACK using a time margin interval (e.g., Rx cycle) set between slots. 
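The response rule described above, where a device returns ACK to the user device201only after both sinks have the data, can be sketched as a small decision function. The string labels are illustrative:

```python
from typing import Optional

def response_to_user_device(own_rx_ok: bool, peer_report: Optional[str]) -> str:
    """Response a sink device (first or second device) returns to the user device.

    own_rx_ok:   whether this device itself decoded the data packet.
    peer_report: 'ACK' or 'NACK' relayed by the other sink over the third
                 link, or None if the peer stayed silent.

    ACK is returned only when this device received the data AND the peer
    reported ACK; any local failure, peer NACK, or peer silence yields NACK.
    """
    if own_rx_ok and peer_report == "ACK":
        return "ACK"
    return "NACK"
```

The same function covers both directions, since the first and second devices apply the rule symmetrically.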
For example, if reception of data from the user device201succeeds, the second device202-2may transmit neither ACK nor NACK within the time margin interval. In this case, the first device202-1may transmit ACK to the user device201via the first link205after elapse of the time margin interval. For another example, if reception of data from the user device201fails, the second device202-2may transmit NACK to the user device201within the time margin interval. Therefore, the second device202-2may transmit NACK to the user device201prior to the first device202-1when data is not successfully received from the user device201. According to an embodiment, the types of response messages associated with the first link205and the second link210may be different from each other. For example, if data is successfully received from the user device201, the first device202-1may transmit ACK to the second device202-2. In this case, the second device202-2may transmit a message (e.g., message corresponding to ACK) to the user device201via the second link210if ACK is received from the first device202-1and data is successfully received from the user device201. For another example, the second device202-2may transmit no message to the user device201if ACK is received from the first device202-1but reception of data from the user device201fails. For another example, the first device202-1may transmit a message (e.g., message corresponding to NACK) to the user device201via the first link205if reception of data from the user device201fails. In this case, the second device202-2may transmit no message to the user device201. Therefore, the user device201may receive a message associated with ACK via the second link210, and may receive a message associated with NACK via the first link205. The user device201may recognize a case corresponding to NACK even if there is no response. According to an embodiment, the first device202-1and the second device202-2may alternately transmit ACK/NACK. 
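The time-margin (Rx cycle) behavior described above can be sketched as follows; the sender and timing labels are illustrative:

```python
from typing import Tuple

def margin_scheme_response(first_rx_ok: bool, second_rx_ok: bool) -> Tuple[str, str, str]:
    """Return (sender, message, timing) under the sketched time-margin scheme.

    If the second device fails to receive the data, it pre-empts with NACK
    inside the margin interval, reaching the user device before the first
    device would respond. Otherwise it stays silent and the first device
    answers after the margin interval elapses.
    """
    if not second_rx_ok:
        return ("second_device", "NACK", "within_margin")
    if first_rx_ok:
        return ("first_device", "ACK", "after_margin")
    return ("first_device", "NACK", "after_margin")
```

Because only failures are signaled inside the margin, silence during the margin interval implicitly tells the user device201that the second device received the data.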
For example, ACK/NACK for first data from the user device201may be transmitted by the first device202-1, and ACK/NACK for subsequent second data may be transmitted by the second device202-2. For example, a device (e.g., the first device202-1or the second device202-2) configured to transmit ACK/NACK may collect ACK/NACK from another device (e.g., the second device202-2or the first device202-1) and transmit the collected ACK/NACK to the user device201. According to an embodiment, the first device202-1and the second device202-2may transmit ACK/NACK with different powers using the same link (e.g., the first link205). For example, the first device202-1may transmit ACK/NACK to the user device201with higher transmission power than that of the second device202-2. The user device201may be unable to receive ACK/NACK of the second device202-2. For another example, the second device202-2may transmit ACK/NACK to the user device201with higher transmission power than that of the first device202-1. The user device201may be unable to receive ACK/NACK of the first device202-1. For example, the lower of the transmission powers of the first device202-1and the second device202-2may be a transmission power that the user device201is unable to receive. For another example, the lower of the transmission powers of the first device202-1and the second device202-2may be a transmission power that the user device201is unable to receive but that another device (e.g., the first device202-1or the second device202-2) is able to receive. According to certain embodiments, when the first device202-1and/or the second device202-2fail to receive data from the user device201, the data may be retransmitted. According to an embodiment, the user device201may retransmit data in response to reception of NACK or non-reception of ACK. According to an embodiment, the first device202-1or the second device202-2may perform retransmission. 
For example, when the first device202-1has received data from the user device201but the second device202-2fails to receive data, the first device202-1may retransmit the data to the second device202-2via the third link215. For another example, when the second device202-2has received data from the user device201but the first device202-1fails to receive data, the second device202-2may retransmit the data to the first device202-1via the third link215. For example, the user device201or the second device202-2may disconnect the second link210after transmitting the first link information. For another example, the user device201or the second device202-2may maintain a connection to the second link210even after transmitting the first link information. In the above-mentioned examples, the first link information may be transmitted from the user device201to the second device202-2. A device having the first link information may monitor (e.g., listen on) data transmitted/received via the first link205. Therefore, in order to prevent an unwanted device from eavesdropping on the first link205, the user device201may prudently determine whether to share the first link information. Hereinafter, methods for sharing first link information according to certain embodiments will be described. As described above, the methods for sharing first link information described below may be similarly applied to other electronic devices (e.g., the third device202-3). Furthermore, the first device202-1and the second device202-2are distinguished from each other for convenience, and embodiments of the present disclosure are not limited thereto. For example, the first device202-1may perform the embodiments of the present disclosure related to the second device202-2. The second device202-2may perform the embodiments of the present disclosure related to the first device202-1. 
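The retransmission cases described above can be summarized in a single routing function; the device and link labels are illustrative:

```python
from typing import Optional, Tuple

def retransmission_route(first_rx_ok: bool, second_rx_ok: bool) -> Optional[Tuple[str, str]]:
    """Return (retransmitter, link) for a failed packet, or None when both
    sinks already have it.

    A sink that received the data repairs its peer over the third link;
    when neither sink received it, the user device retransmits on the
    first link in response to NACK (or the absence of ACK).
    """
    if first_rx_ok and second_rx_ok:
        return None                              # both received; nothing to do
    if first_rx_ok:
        return ("first_device", "third_link")    # first repairs second
    if second_rx_ok:
        return ("second_device", "third_link")   # second repairs first
    return ("user_device", "first_link")         # source retransmits
```

Routing repairs between the two sinks over the third link215keeps the first link205free for fresh data from the user device201.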
FIG.3is a signal flowchart300illustrating a link information sharing method based on determination of a user device according to an embodiment. According to an embodiment, in operation305, the first device202-1and the user device201may establish a first link (e.g., the first link205ofFIG.2) for communication. The descriptions provided above with reference toFIG.2apply to a procedure for creating the first link205. According to an embodiment, in operation310, the user device201and the second device202-2may establish a second link (e.g., the second link210ofFIG.2) for communication. For example, a creation procedure of the second link210may be similar to the creation procedure of the first link205. For another example, the second link210may be configured in a different manner from that in which the first link205is configured. The user device201and the second device202-2may create the second link210based on a communication protocol that is different from that of the first link205. Although operation310is performed after operation305inFIG.3, embodiments of the present disclosure are not limited thereto. For example, the first link205may be created after the second link210is created. For another example, the first link205and the second link210may be created substantially simultaneously. According to certain embodiments, in operation315, the user device201may determine whether to share first link information with the second device202-2. The user device201may determine whether to share the first link information with the second device202-2at least partially based on a name, address, supported feature, manufacturer, sharing request, setting, sharing history, and/or account. For example, the user device201may obtain the foregoing pieces of information while creating the first link205and the second link210. 
According to an embodiment, the user device201may determine whether to share the first link information at least partially based on identification information (e.g., name) of the first device202-1and the second device202-2. For example, the user device201may determine whether to share the first link information at least partially based on similarity and/or regularity of the identification information between the first device202-1and the second device202-2. For example, if the first device202-1has a name of AAA_left and the second device202-2has a name of AAA_Right, the user device201may recognize that the first device202-1and the second device202-2are electronic devices supporting the same service based on the identification information. The user device201may determine that the first device202-1and the second device202-2are electronic devices providing the same service based on the similarity (e.g., AAA) and/or regularity (e.g., left and right) of the names between the first device202-1and the second device202-2. If the first device202-1and the second device202-2are determined to be electronic devices providing the same service, the user device201may determine to share the first link information with the second device202-2. According to an embodiment, the user device201may determine whether to share the first link information at least partially based on addresses (e.g., Bluetooth address) of the first device202-1and the second device202-2. For example, a Bluetooth address may be configured with 6 bytes. The Bluetooth address may be configured with a 2-byte non-significant address part (NAP), a 1-byte upper address part (UAP), and a 3-byte lower address part (LAP). For example, the NAP and UAP may include manufacturer identification information (e.g., organizationally unique identifier (OUI)). For example, the LAP may be a value specified by a manufacturer. 
The user device201may determine to share the first link information with the second device202-2if the first device202-1and the second device202-2have the same manufacturer identification information and similar LAP values. For example, the user device201may share the first link information with the second device202-2if the addresses of the first device202-1and the second device202-2indicate the same manufacturer and a difference of the LAPs falls within a preset range. According to an embodiment, the user device201may determine whether to share the first link information at least partially based on at least one of supported features or manufacturers of the first device202-1and the second device202-2. For example, the user device201may identify the first device202-1and the second device202-2as electronic devices supporting the same service by using supported feature information of each of the first device202-1and the second device202-2, and may determine to share the first link information with the second device202-2that supports the same service. For example, the user device201may identify the first device202-1and the second device202-2as electronic devices supporting the same service by using information about manufacturers of the first device202-1and the second device202-2, and may determine to share the first link information with the second device202-2that supports the same service. For another example, the user device201may identify the first device202-1and the second device202-2as electronic devices supporting the same service by using the supported feature information and manufacturer information of the first device202-1and the second device202-2, and may determine to share the first link information with the second device202-2that supports the same service. 
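The name-similarity and address-proximity checks described above might look like the following sketch. The left/right suffix pattern and the LAP window size are assumptions, since the description leaves the exact rules open:

```python
import re

def same_service_by_name(name_a: str, name_b: str) -> bool:
    """True when two names share a base and differ only in a left/right
    suffix (e.g., 'AAA_left' and 'AAA_Right')."""
    pat = re.compile(r"^(?P<base>.+)[_\- ](?P<side>left|right)$", re.IGNORECASE)
    a, b = pat.match(name_a), pat.match(name_b)
    return bool(a and b
                and a.group("base").lower() == b.group("base").lower()
                and a.group("side").lower() != b.group("side").lower())

def split_bd_addr(addr: bytes):
    """Split a 6-byte Bluetooth address into NAP (2 bytes), UAP (1 byte),
    and LAP (3 bytes), following the layout in the description."""
    assert len(addr) == 6
    return (int.from_bytes(addr[:2], "big"), addr[2],
            int.from_bytes(addr[3:], "big"))

def same_service_by_address(addr_a: bytes, addr_b: bytes, lap_window: int = 16) -> bool:
    """True when both addresses carry the same manufacturer identifier
    (NAP + UAP, i.e., the OUI) and the LAPs differ by no more than the
    preset window (the window value here is an assumption)."""
    nap_a, uap_a, lap_a = split_bd_addr(addr_a)
    nap_b, uap_b, lap_b = split_bd_addr(addr_b)
    return (nap_a, uap_a) == (nap_b, uap_b) and abs(lap_a - lap_b) <= lap_window
```

Either heuristic on its own, or the two combined, lets the user device201flag the first device202-1and the second device202-2as a matched pair before sharing link information.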
According to an embodiment, the user device201may determine whether to share the first link information based on at least the identification information, addresses, supported features, and/or manufacturers of the first device202-1and the second device202-2. For example, the user device201may determine whether to share the first link information using at least one of the identification information, addresses, supported features, and/or manufacturers. For another example, the user device201may determine whether to share the first link information using all of the identification information, addresses, supported features, and manufacturers. According to an embodiment, the user device201may determine whether to share the first link information at least partially based on a request of the first device202-1or the second device202-2. For example, the user device201may transmit the first link information to the second device202-2upon receiving a first link information sharing request to the second device202-2from the first device202-1. For another example, the user device201may transmit the first link information to the second device202-2upon receiving the first link information sharing request from the second device202-2. According to an embodiment, if the first device202-1and the second device202-2are connected via an additional link (e.g., the third link215ofFIG.2), the first device202-1or the second device202-2may transmit the first link information sharing request to the user device201. According to an embodiment, even if the first device202-1and the second device202-2are not additionally connected, the first device202-1or the second device202-2may transmit the first link information sharing request to the user device201at an arbitrary time after connecting to the user device201. According to an embodiment, the user device201may determine to share a link between the first device202-1and the second device202-2at least partially based on a user setting. 
For example, the user device201may provide a list of connected and/or paired electronic devices to the user. The user device201, for example, may receive an input related to link sharing between the first device202-1and the second device202-2through a user input. If the first device202-1and the second device202-2are connected to the user device201, the user device201may determine whether to share a link based on setting information. For example, when the second device202-2is connected after the first link205with the first device202-1is created, the user device201may transmit the first link information to the second device202-2based on the setting information. For another example, when the first link205with the first device202-1is created after the second link210with the second device202-2is created, the user device201may transmit second link information of the second link210to the first device202-1based on the setting information. According to an embodiment, the user device201may determine to share a link between the first device202-1and the second device202-2at least partially based on a sharing history. For example, if the first device202-1and the second device202-2have shared a link before, the user device201may transmit the first link information to the second device202-2. According to an embodiment, the user device201may determine to share a link at least partially based on account information associated with the first device202-1and the second device202-2. For example, if the first device202-1and the second device202-2are associated with the same account (e.g., the same account of the external server203), the user device201may determine to share a link. For another example, if a first account of the first device202-1and a second account of the second device202-2are associated accounts (e.g., accounts belonging to the same group), the user device201may determine to share a link. 
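Taken together, the criteria in this passage (name, address, supported features or manufacturer, an explicit sharing request, a user setting, sharing history, and account association) could feed one decision. Treating any single satisfied criterion as sufficient is an assumption; the description does not fix a combination policy:

```python
def should_share_link_info(**criteria: bool) -> bool:
    """criteria: boolean flags such as name_match, addr_match,
    feature_match, sharing_requested, user_setting, shared_before,
    same_account. In this sketch, any one satisfied criterion is
    enough to trigger sharing."""
    return any(criteria.values())
```

For example, `should_share_link_info(name_match=False, same_account=True)` evaluates to `True`, mirroring the case where two devices are linked only through a common account.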
According to an embodiment, the user device201may obtain account information of the first device202-1and/or the second device202-2based on account information stored in a memory of the user device201or account information of the external server203. When the first link information is determined to be shared, the user device201may transmit the first link information to the second device202-2in operation320. For example, the user device201may transmit the first link information to the second device202-2via the second link210. For example, the first link information may include address information, piconet clock information, logical transport address information, used channel map information, link key information, SDP information, and/or supported feature information. The first link information may further include, for example, an EIR packet. 
According to an embodiment, the user device201may transmit information about link sharing to an external server (e.g., the external server203ofFIG.2). For example, if the first link information is determined to be shared, the user device201may transmit the information about link sharing to the external server203. According to an embodiment, the user device201may communicate with the first device202-1via the first link205in operation325. For example, the user device201may transmit (e.g., unicast) a data (e.g., data including media) packet to the first device202-1via the first link205. In operation330, the second device202-2may monitor (e.g., listen on) the first link205. According to an embodiment, the second device202-2may monitor the first link205using the first link information. For example, the second device202-2may monitor the first link205upon receiving an additional request from the user device201after continuously performing page scanning by using the Bluetooth address of the first device202-1. According to another embodiment, the second device202-2may monitor the first link205upon receiving the first link information. The second device202-2may identify a hopping channel of the first link205using clock information and a Bluetooth address of a master device of the first link205. The second device202-2may monitor the first link205using the identified hopping channel. For example, the second device202-2may open a radio frequency (RF) reception unit of the second device202-2in synchronization with a clock time of the master device of the first link205, and may receive packets between the first device202-1and the user device201connected via the first link205by using the identified hopping channel. FIG.4is a signal flowchart400illustrating a link information sharing method based on determination of the first device202-1according to an embodiment. 
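The channel identification in operation330can be illustrated with a toy stand-in for the hop-selection kernel. The real Bluetooth kernel is considerably more involved, and the mixing function below is purely an illustrative choice; the point is only that the master's clock and address deterministically select the channel, which is why the first link information suffices for monitoring:

```python
def toy_hop_channel(master_clkn: int, master_lap: int, channel_map: int) -> int:
    """Derive a channel index (0-78) from the master clock and address,
    restricted to the channels marked used in the channel map."""
    used = [ch for ch in range(79) if (channel_map >> ch) & 1]
    if not used:
        raise ValueError("channel map marks no channel as used")
    # hops advance every two clock ticks in legacy Bluetooth, hence CLKN >> 1;
    # the multiplicative mix below is an arbitrary illustrative choice
    x = ((master_clkn >> 1) ^ master_lap) * 2654435761 & 0xFFFFFFFF
    return used[x % len(used)]
```

Because the result is deterministic in (clock, address, channel map), a second device holding these values computes the same channel as the first link's participants for every slot and can open its receiver there.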
In operation405, the first device202-1and the user device201may establish a first link (e.g., the first link205ofFIG.2) for communication. In operation410, the user device201and the second device202-2may establish a second link (e.g., the second link210ofFIG.2) for communication. The above descriptions of operation305and operation310ofFIG.3may be referenced for operation405and operation410. According to certain embodiments, in operation415, the user device201may transmit information of the second device202-2to the first device202-1. For example, if the user device201is connected to the second device202-2, the user device201may transmit information regarding the second device202-2to the first device202-1. For example, the user device201may transmit, to the first device202-1, information of a plurality of electronic devices connected to the user device201. For example, the information of the second device202-2may include at least one of the address of the second device202-2, the identification information of the second device202-2, and/or information about the manufacturer of the second device202-2. According to certain embodiments, in operation420, the first device202-1may determine whether to share first link information to the second device202-2. The first device202-1may determine to share the first link information at least partially based on a name, address, supported feature, manufacturer, and/or setting. For example, the first device202-1may obtain the foregoing pieces of information from the user device201. According to an embodiment, the first device202-1may determine whether to share the first link information at least partially based on identification information (e.g., name) of the first device202-1and the second device202-2. 
For example, the first device202-1may determine whether to share the first link information at least partially based on similarity and/or regularity of the identification information between the first device202-1and the second device202-2. For example, if the first device202-1has a name of AAA_left and the second device202-2has a name of AAA_Right, the first device202-1may identify that the first device202-1and the second device202-2are electronic devices supporting the same service based on the identification information. The first device202-1may determine that the first device202-1and the second device202-2are electronic devices providing the same service based on the similarity (e.g., AAA) and/or regularity (e.g., left and right) of the names between the first device202-1and the second device202-2. If the first device202-1and the second device202-2are determined to be electronic devices providing the same service, the first device202-1may determine to share the first link information with the second device202-2. According to an embodiment, the first device202-1may determine whether to share the first link information at least partially based on addresses (e.g., Bluetooth address) of the first device202-1and the second device202-2. For example, a Bluetooth address may be configured with 6 bytes. The first device202-1may determine to share the first link information with the second device202-2if the first device202-1and the second device202-2have the same manufacturer identification information and similar LAP values. For example, the first device202-1may determine to share the first link information with the second device202-2if the addresses of the first device202-1and the second device202-2indicate the same manufacturer and a difference of the LAPs falls within a preset range. 
According to an embodiment, the first device202-1may determine whether to share the first link information at least partially based on at least one of supported features or manufacturers of the first device202-1and the second device202-2. For example, the first device202-1may identify the first device202-1and the second device202-2as electronic devices supporting the same service by using supported feature information of each of the first device202-1and the second device202-2, and may determine to share the first link information with the second device202-2that supports the same service. For example, the first device202-1may identify the first device202-1and the second device202-2as electronic devices supporting the same service by using information about manufacturers of the first device202-1and the second device202-2, and may determine to share the first link information with the second device202-2that supports the same service. For another example, the first device202-1may identify the first device202-1and the second device202-2as electronic devices supporting the same service by using the supported feature information and manufacturer information of the first device202-1and the second device202-2, and may determine to share the first link information with the second device202-2that supports the same service. According to an embodiment, the first device202-1may determine whether to share the first link information based on at least the identification information, addresses, supported features, and/or manufacturers of the first device202-1and the second device202-2. For example, the first device202-1may determine whether to share the first link information using at least one of the identification information, addresses, supported features, and/or manufacturers. For another example, the first device202-1may determine whether to share the first link information using all of the identification information, addresses, supported features, and manufacturers. 
According to an embodiment, the first device 202-1 may determine to share the first link information at least partially based on a setting. For example, the second device 202-2 may be an electronic device configured to share a link with the first device 202-1. For example, the first device 202-1 may obtain a sharing setting from an external server (e.g., the external server 203), the user device 201, or a user input. When it is determined to share the first link information, the first device 202-1 may transmit a sharing request to the user device 201 in operation 425. For example, the first device 202-1 may transmit the sharing request via the first link 205 (e.g., to request sharing of the first link information with the second device 202-2). According to an embodiment, in operation 430, the user device 201 may transmit the first link information to the second device 202-2 upon receiving the sharing request. For example, the user device 201 may transmit the first link information to the second device 202-2 via the second link 210. For example, the first link information may include address information, piconet clock information, logical transport address information, used channel map information, link key information, SDP information, and/or supported feature information. The first link information may further include, for example, an EIR packet. According to an embodiment, the user device 201 may communicate with the first device 202-1 via the first link 205 in operation 435. In operation 440, the second device 202-2 may monitor (e.g., listen on) the first link 205. The above descriptions of operations 325 and 330 of FIG. 3 may be referenced for operation 435 and operation 440. FIG. 5 is a signal flowchart 500 illustrating a link information sharing method based on a determination by the second device 202-2 according to an embodiment. In operation 505, the first device 202-1 and the user device 201 may establish a first link (e.g., the first link 205 of FIG. 2). 
In operation 510, the user device 201 and the second device 202-2 may establish a second link (e.g., the second link 210 of FIG. 2). The above descriptions of operation 305 and operation 310 of FIG. 3 may be referenced for operation 505 and operation 510. According to certain embodiments, in operation 515, the second device 202-2 may transmit a connected electronic device information request to the user device 201. For example, the second device 202-2 may transmit the connected electronic device information request when the second link 210 is created. For another example, the second device 202-2 may transmit the connected electronic device information request if the second link 210 and a third link (e.g., the third link 215 of FIG. 2) are created. The connected electronic device information request, for example, may be a message requesting information of electronic devices connected to the user device 201. According to certain embodiments, in operation 520, the user device 201 may transmit, to the second device 202-2, information of at least one electronic device connected to the user device 201 (i.e., "connected electronic device information"). For example, the connected electronic device information may include information of the first device 202-1. For example, the information of the first device 202-1 may include at least one of the address of the first device 202-1, the identification information of the first device 202-1, and/or the manufacturer of the first device 202-1. According to certain embodiments, in operation 525, the second device 202-2 may determine whether to request sharing of the first link information. The second device 202-2 may determine whether to generate the request for sharing the first link information at least partially based on the name, address, supported feature, manufacturer, and/or setting of the first device 202-1 received from the user device 201. 
According to an embodiment, the second device 202-2 may determine whether to request to share the first link information at least partially based on identification information (e.g., names) of the first device 202-1 and the second device 202-2. For example, the second device 202-2 may determine whether to request to share the first link information at least partially based on similarity and/or regularity of the identification information between the first device 202-1 and the second device 202-2. If the first device 202-1 and the second device 202-2 are determined to be electronic devices providing the same service, the second device 202-2 may determine to request to share the first link information. According to an embodiment, the second device 202-2 may determine whether to request to share the first link information at least partially based on addresses (e.g., Bluetooth addresses) of the first device 202-1 and the second device 202-2. For example, the second device 202-2 may determine to request to share the first link information if the addresses of the first device 202-1 and the second device 202-2 indicate the same manufacturer and a difference of the LAPs falls within a preset range. According to an embodiment, the second device 202-2 may determine whether to request to share the first link information at least partially based on at least one of supported features or manufacturers of the first device 202-1 and the second device 202-2. For example, the second device 202-2 may determine to request to share the first link information if the first device 202-1 and the second device 202-2 are identified as electronic devices supporting the same service by using the supported feature information and/or manufacturer information of the first device 202-1 and the second device 202-2. 
According to an embodiment, the second device 202-2 may determine whether to request to share the first link information based on at least the identification information, addresses, supported features, and/or manufacturers of the first device 202-1 and the second device 202-2. According to an embodiment, the second device 202-2 may determine whether to request to share the first link information at least partially based on a setting. For example, the second device 202-2 may be an electronic device configured to share a link with the first device 202-1. For example, the second device 202-2 may obtain a sharing setting from an external server (e.g., the external server 203), the user device 201, or a user input. When it is determined to request to share the first link information, the second device 202-2 may transmit a first link information request to the user device 201 in operation 535. For example, the second device 202-2 may transmit the first link information request to the user device 201 via the second link 210. According to an embodiment, in operation 540, the user device 201 may transmit the first link information to the second device 202-2 upon receiving the first link information request. For example, the user device 201 may transmit the first link information to the second device 202-2 via the second link 210. For example, the first link information may include address information, piconet clock information, logical transport address information, used channel map information, link key information, SDP information, and/or supported feature information. The first link information may further include, for example, an EIR packet. According to an embodiment, the user device 201 may communicate with the first device 202-1 via the first link 205 in operation 545. In operation 550, the second device 202-2 may monitor (e.g., listen on) the first link 205. The above descriptions of operations 325 and 330 of FIG. 3 may be referenced for operation 545 and operation 550. 
FIG. 6 is a flowchart 600 illustrating a link information sharing method of a user device according to an embodiment. According to certain embodiments, a user device (e.g., the user device 201 of FIG. 2) may include a wireless communication circuit (e.g., the communication module 190 of FIG. 1), a memory (e.g., the memory 130 of FIG. 1), and a processor (e.g., an application processor and/or a communication processor). For example, the processor may be operatively connected to the wireless communication circuit and the memory. The memory may store one or more instructions that, when executed, cause the processor to perform the operations of the user device 201 described below. According to certain embodiments, in operation 605, the user device 201 may establish a first link (e.g., the first link 205 of FIG. 2) for communication with a first device (e.g., the first device 202-1 of FIG. 2) through the processor and the wireless communication circuit. For example, the above descriptions of operation 305 of FIG. 3 may be referenced for operation 605. According to certain embodiments, in operation 610, the user device 201 may establish a second link (e.g., the second link 210 of FIG. 2) for communication with a second device (e.g., the second device 202-2 of FIG. 2) through the processor and the wireless communication circuit. For example, the above descriptions of operation 310 of FIG. 3 may be referenced for operation 610. According to certain embodiments, in operation 615, the user device 201 may determine whether to share first link information with the second device 202-2 through the processor. For example, the above descriptions of operation 315 of FIG. 3 may be referenced for operation 615. If it is determined to share the first link information, the user device 201 may transmit the first link information to the second device 202-2 in operation 620. For example, the above descriptions of operation 320 of FIG. 3 may be referenced for operation 620. 
FIG. 7 is a flowchart 700 illustrating a link information sharing method of a first device according to an embodiment. According to certain embodiments, a first device (e.g., the first device 202-1 of FIG. 2) may include a wireless communication circuit (e.g., the communication module 190 of FIG. 1), a memory (e.g., the memory 130 of FIG. 1), and a processor (e.g., an application processor and/or a communication processor). For example, the processor may be operatively connected to the wireless communication circuit and the memory. The memory may store one or more instructions that, when executed, cause the processor to perform the operations of the first device 202-1 described below. According to certain embodiments, in operation 705, the first device 202-1 may establish a first link (e.g., the first link 205 of FIG. 2) for communication with a user device (e.g., the user device 201 of FIG. 2) through the processor and the wireless communication circuit. For example, the above descriptions of operation 405 of FIG. 4 may be referenced for operation 705. According to certain embodiments, in operation 710, the first device 202-1 may receive information of a second device (e.g., the second device 202-2 of FIG. 2) from the user device 201 through the processor and the wireless communication circuit. For example, the first device 202-1 may receive the second device information via the first link 205. The descriptions of operation 415 provided above with reference to FIG. 4 apply to the second device information. According to an embodiment, the first device 202-1 may generate a request for the user device 201 to provide the information of the second device 202-2, and may receive the second device information in response to the second device information request. According to certain embodiments, in operation 715, the first device 202-1 may determine whether to share first link information with other devices through the processor. The above descriptions of operation 420 of FIG. 4 may be referenced for operation 715. 
According to certain embodiments, in operation 720, the first device 202-1 may transmit a sharing request to the user device 201 through the wireless communication circuit when determining that the first link information is to be shared. For example, the above descriptions of operation 425 of FIG. 4 may be referenced for operation 720. FIG. 8 is a flowchart 800 illustrating a link information sharing method of a second device according to an embodiment. According to certain embodiments, a second device (e.g., the second device 202-2 of FIG. 2) may include a wireless communication circuit (e.g., the communication module 190 of FIG. 1), a memory (e.g., the memory 130 of FIG. 1), and a processor (e.g., an application processor and/or a communication processor). For example, the processor may be operatively connected to the wireless communication circuit and the memory. The memory may store one or more instructions that, when executed, cause the processor to perform the operations of the second device 202-2 described below. According to certain embodiments, in operation 805, the second device 202-2 may establish a second link (e.g., the second link 210 of FIG. 2) for communication with a user device (e.g., the user device 201 of FIG. 2) through the processor and the wireless communication circuit. The above descriptions of operation 510 of FIG. 5 may be referenced for operation 805. According to certain embodiments, in operation 810, the second device 202-2 may transmit an information request regarding a connected electronic device to the user device 201 through the processor and the wireless communication circuit. For example, the above descriptions of operation 515 of FIG. 5 may be referenced for operation 810. According to certain embodiments, in operation 815, the second device 202-2 may determine, through the processor, whether connected electronic device information has been received. 
For example, the second device 202-2 may determine whether the connected electronic device information is received from the user device 201 within a specified time after transmitting the connected electronic device information request. If the connected electronic device information is not received from the user device 201 within the specified time, or a rejection response is received from the user device 201, link information sharing may be ended. According to certain embodiments, if the connected electronic device information is received, the second device 202-2 may determine whether to request sharing of the first link information through the processor in operation 820. For example, the above descriptions of operation 525 of FIG. 5 may be referenced for operation 820. According to certain embodiments, if it is determined to request sharing of the first link information, the second device 202-2 may generate and transmit a request to share first link information to the user device 201 through the processor and the wireless communication circuit in operation 825. For example, the above descriptions of operation 535 of FIG. 5 may be referenced for operation 825. According to certain embodiments, in operation 830, the second device 202-2 may receive, through the processor and the wireless communication circuit, the first link information transmitted from the user device 201 in response to the request. For example, the above descriptions of operation 540 of FIG. 5 may be referenced for operation 830. The second device 202-2 may monitor the first link 205 using the received first link information. FIG. 9 is a flowchart illustrating a link information sharing method 900 of a user device according to an embodiment. According to certain embodiments, a user device (e.g., the user device 201 of FIG. 2) may include a wireless communication circuit (e.g., the communication module 190 of FIG. 1), a memory (e.g., the memory 130 of FIG. 1), and a processor (e.g., an application processor and/or a communication processor). 
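The bounded wait in operation 815 can be sketched as below. This is an illustrative model only: the `queue`-based inbox, the message keys, and the default timeout are assumptions standing in for the device's actual receive path.

```python
import queue

# Illustrative sketch: the second device waits a bounded time for the
# connected electronic device information. Message field names and the
# timeout value are assumptions.
def await_connected_info(inbox: "queue.Queue", timeout_s: float = 2.0):
    """Return the connected-device list if it arrives within timeout_s;
    return None (ending link information sharing) on timeout or rejection."""
    try:
        message = inbox.get(timeout=timeout_s)
    except queue.Empty:
        return None  # specified time elapsed with no response
    if message.get("type") == "rejection":
        return None  # a rejection response also ends sharing
    return message.get("devices")
```

The caller proceeds to the sharing-request determination (operation 820) only when a non-None device list is returned.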
For example, the processor may be operatively connected to the wireless communication circuit and the memory. The memory may store one or more instructions that, when executed, cause the processor to perform the operations of the user device 201 described below. According to certain embodiments, in operation 905, the user device 201 may create a first link (e.g., the first link 205 of FIG. 2) with a first device (e.g., the first device 202-1 of FIG. 2) through the processor and the wireless communication circuit. For example, the above descriptions of operation 505 of FIG. 5 may be referenced for operation 905. According to certain embodiments, in operation 910, the user device 201 may create a second link (e.g., the second link 210 of FIG. 2) with a second device (e.g., the second device 202-2 of FIG. 2) through the processor and the wireless communication circuit. For example, the above descriptions of operation 510 of FIG. 5 may be referenced for operation 910. According to certain embodiments, in operation 915, the user device 201 may receive a connected electronic device information request from the second device 202-2 through the processor and the wireless communication circuit. The above descriptions of operation 515 of FIG. 5 may be referenced for operation 915. According to certain embodiments, in operation 920, the user device 201 may transmit connected electronic device information to the second device 202-2 through the processor and the wireless communication circuit. The above descriptions of operation 520 of FIG. 5 may be referenced for operation 920. According to an embodiment, operation 915 may be skipped. For example, even if the connected electronic device information request is not received, the user device 201 may transmit the connected electronic device information to the second device 202-2 if the second link 210 is created. According to certain embodiments, in operation 925, the user device 201 may determine, through the processor, whether a link information request is received. 
For example, the user device 201 may determine whether a signal that requests information about a link (e.g., first link information) with another electronic device, and that is based on the connected electronic device information transmitted in operation 920, is received via the second link 210 from the second device 202-2. According to certain embodiments, in operation 930, the user device 201 may determine whether to share first link information through the processor. For example, as described above in relation to operation 315 of FIG. 3, the user device 201 may determine whether to share the first link information. According to an embodiment, operation 930 may be skipped. In this case, the user device 201 may perform operation 935 if the link information request is received. According to certain embodiments, in operation 935, the user device 201 may transmit the first link information through the processor and the wireless communication circuit. For example, the user device 201 may transmit the first link information to the second device 202-2 if the first link is determined to be shared. For example, the above descriptions of operation 540 of FIG. 5 may be referenced for operation 935. According to an embodiment, in operation 925 or operation 930, the user device 201 may reject a link information sharing request received from the second device 202-2. For example, the user device 201 may reject the link information sharing request at least partially based on the name, address, supported feature, manufacturer, sharing request, setting, sharing history, and/or account of each of the second device 202-2 and the first device 202-1. FIG. 10 is a signal flowchart 1000 of a link monitoring method of the first device 202-1 according to an embodiment. It has been described with reference to FIGS. 2 and 9 that the user device 201 and the first device 202-1 create the first link 205, and the second device 202-2 monitors a first link (e.g., the first link 205 of FIG. 2) using the first link information. 
However, embodiments of the present disclosure are not limited thereto. According to an embodiment, the user device 201 may transmit a data packet to the second device 202-2 via a second link (e.g., the second link 210 of FIG. 2), and the first device 202-1 may receive the data packet by monitoring the second link 210. According to certain embodiments, in operation 1005, the user device 201 and the second device 202-2 may establish the second link 210. For example, the user device 201 and the second device 202-2 may create the second link 210 according to the method described above with reference to FIG. 3. According to certain embodiments, in operation 1010, the user device 201 and the first device 202-1 may establish the first link 205. For example, the user device 201 and the first device 202-1 may create the first link 205 according to the method described above with reference to FIG. 3. In the example of FIG. 10, the user device 201 may transmit data to the second device 202-2, which was connected earlier among the first device 202-1 and the second device 202-2, and the first device 202-1 may monitor the data. According to certain embodiments, in operation 1015, the user device 201 may transmit second device information to the first device 202-1. For example, if the first link 205 is established after the second link 210 is established, the user device 201 may transmit the second device information to the first device 202-1 based on information of the second device 202-2 and/or the first device 202-1. The second device information may include, for example, at least one of the address of the second device 202-2, the identification information of the second device 202-2, and/or the manufacturer of the second device 202-2. According to certain embodiments, in operation 1020, the first device 202-1 may determine whether to monitor the second link. 
For example, the first device 202-1 may determine whether to monitor at least partially based on the identification information, address, supported feature, manufacturer, and/or setting of each of the first device 202-1 and the second device 202-2. If the second link 210 is determined to be monitored, the first device 202-1 may transmit a second link information request to the user device 201 via the first link 205 in operation 1025. In operation 1030, the user device 201 may transmit the second link information to the first device 202-1 in response to the second link information request. For example, the second link information may include address information (e.g., Bluetooth address information) of the user device 201, piconet clock information (e.g., the clock native (CLKN) of the master device of the second link 210), logical transport address information (e.g., information of a slave device of the second link 210), used channel map information, link key information, service discovery protocol (SDP) information (e.g., service and/or profile information associated with the second link 210), and/or supported feature information. The second link information may further include, for example, an extended inquiry response (EIR) packet. The EIR packet may include resource control information about the second link 210. According to an embodiment, the user device 201 may communicate with the second device 202-2 via the second link 210 in operation 1035. For example, the user device 201 may transmit (e.g., unicast) a data packet (e.g., a packet including media) to the second device 202-2 via the second link 210. In operation 1040, the first device 202-1 may monitor (e.g., listen on) the second link 210. According to an embodiment, the first device 202-1 may monitor the second link 210 using the second link information. 
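The contents of the link information record listed above can be modeled as a simple data structure. This is only a sketch of the described fields: the class and field names, types, and defaults are assumptions; the 10-byte channel map mirrors the conventional 79-channel Bluetooth map.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative model of the link information shared for monitoring a link.
# Field names and defaults are assumptions based on the listed contents.
@dataclass
class LinkInfo:
    master_address: bytes                # Bluetooth address of the link's master
    piconet_clock: int                   # master clock native (CLKN)
    lt_addr: int                         # logical transport address of the slave
    channel_map: bytes = b"\xff" * 10    # used channel map (10 bytes for 79 channels)
    link_key: Optional[bytes] = None     # link key for the monitored link
    sdp_records: list = field(default_factory=list)  # services/profiles on the link
    supported_features: int = 0          # supported feature bitmask
```

A monitoring device would combine the master address and piconet clock to follow the link's hopping sequence, and the logical transport address to identify traffic addressed to the monitored slave.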
For example, the first device 202-1 may monitor the second link 210 upon receiving an additional request from the user device 201 after continuously performing page scanning by using the Bluetooth address of the second device 202-2. For another example, the first device 202-1 may monitor the second link 210 upon receiving the second link information. The first device 202-1 may identify a hopping channel of the second link 210 using the clock information and the Bluetooth address of the master device of the second link 210. The first device 202-1 may monitor the second link 210 using the identified hopping channel. In the example of FIG. 10, the second device 202-2 may transmit, to the first device 202-1, a response message for the data packet received from the user device 201 via the second link 210. For example, the second device 202-2 may transmit ACK/NACK to the first device 202-1 via a third link (e.g., the third link 215 of FIG. 2). According to an embodiment, the first device 202-1 may transmit a response message to the user device 201 at least partially based on the ACK/NACK received from the second device 202-2. For example, if the first device 202-1 successfully monitors (e.g., listens for) data transferred via the second link 210 and receives an ACK from the second device 202-2, the first device 202-1 may transmit a response message indicating ACK to the user device 201 via the first link 205. If the first device 202-1 fails to monitor the second link 210 or receives a NACK from the second device 202-2, the first device 202-1 may transmit a response message indicating NACK to the user device 201. According to certain embodiments, an electronic device (e.g., the user device 201 of FIG. 2) may include a wireless communication circuit (e.g., the communication module 190 of FIG. 1), a processor (e.g., the processor 120 of FIG. 1) operatively connected to the wireless communication circuit, and a memory (e.g., the memory 130 of FIG. 1) operatively connected to the processor. 
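The combined ACK/NACK rule above reduces to a small decision function: the monitoring device reports ACK only when both its own reception and the peer's reception succeeded. The sketch below is illustrative; the function name and the plain "ACK"/"NACK" strings are assumptions.

```python
# Illustrative sketch of the combined response rule: the first device
# reports ACK to the user device only if its own monitoring of the link
# succeeded AND the second device relayed an ACK over the third link.
def relay_response(monitor_ok: bool, peer_response: str) -> str:
    """Return the response message to send to the user device."""
    if monitor_ok and peer_response == "ACK":
        return "ACK"
    return "NACK"  # either local monitoring failed or the peer sent NACK
```

This rule lets the user device retransmit whenever either device missed the packet, which is what allows a single unicast transmission to reliably serve both devices.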
According to an embodiment, the memory may store one or more instructions that, when executed, cause the processor to: create a first link (e.g., the first link 205 of FIG. 2) with a first external electronic device (e.g., the first device 202-1 of FIG. 2) using the wireless communication circuit; create a second link (e.g., the second link 210 of FIG. 2) with a second external electronic device (e.g., the second device 202-2 of FIG. 2) through the wireless communication circuit; determine whether to share, with the second external electronic device, first link information for estimating a channel of the first link at least partially based on information of the second external electronic device and information of the first external electronic device; transmit the first link information to the second external electronic device via the second link based on the determination; and transmit a data packet to the first external electronic device via the first link. According to an embodiment, the one or more instructions, when executed, may cause the processor to determine whether to share the first link information at least partially based on similarity of identification information between the first external electronic device and the second external electronic device. According to an embodiment, the one or more instructions, when executed, may cause the processor to determine whether to share the first link information at least partially based on addresses of the first external electronic device and the second external electronic device. According to an embodiment, the one or more instructions, when executed, may cause the processor to determine to share the first link information when the addresses of the first external electronic device and the second external electronic device have the same organizationally unique identifier (OUI) and a difference of the lower address parts (LAPs) between the addresses falls within a preset range. 
According to an embodiment, the one or more instructions, when executed, may cause the processor to determine to share the first link information based on manufacturers and supported service information of the first external electronic device and the second external electronic device. According to an embodiment, the first link information may include at least one of address information, piconet clock information, logical transport address information, used channel map information, link key information, service discovery protocol (SDP) information, or supported feature information of at least one electronic device associated with the first link. According to an embodiment, the electronic device may transmit the first link information so as to allow the second external electronic device to monitor the data packet using the first link information. For example, the one or more instructions, when executed, may cause the processor to determine whether to retransmit the data packet based on a response message for the data packet from the first external electronic device. According to an embodiment, the one or more instructions, when executed, may cause the processor to transmit a subsequent data packet of the data packet when the response message received from the first external electronic device indicates positive acknowledgement. For example, the positive acknowledgement may indicate that the first external electronic device and the second external electronic device have successfully received the data packet. According to certain embodiments, an electronic device (e.g., the first device 202-1 of FIG. 2) may include a wireless communication circuit (e.g., the communication module 190 of FIG. 1), a processor (e.g., the processor 120 of FIG. 1) operatively connected to the wireless communication circuit, and a memory (e.g., the memory 130 of FIG. 1) operatively connected to the processor. 
According to an embodiment, the memory may store one or more instructions that, when executed, cause the processor to: create a first link (e.g., the first link 205 of FIG. 2) with a user device (e.g., the user device 201 of FIG. 2) using the wireless communication circuit; receive information of an external device (e.g., the second device 202-2 of FIG. 2) connected to the user device from the user device using the wireless communication circuit; determine whether to share, with the external device, first link information for estimating a channel of the first link at least partially based on the information of the external device and information of the electronic device; and transmit, based on the determination, a sharing request instructing sharing of the first link information with the external device to the user device via the first link. According to an embodiment, the one or more instructions, when executed, may cause the processor to determine whether to share the first link information at least partially based on similarity of identification information between the electronic device and the external device. According to an embodiment, the one or more instructions, when executed, may cause the processor to determine whether to share the first link information at least partially based on addresses of the electronic device and the external device. According to an embodiment, the one or more instructions, when executed, may cause the processor to determine to share the first link information when the addresses of the electronic device and the external device have the same organizationally unique identifier (OUI) and a difference of the lower address parts (LAPs) between the addresses falls within a preset range. 
According to an embodiment, the one or more instructions, when executed, may cause the processor to determine to share the first link information based on manufacturers and supported service information of the electronic device and the external device. For example, the first link information may include at least one of address information, piconet clock information, logical transport address information, used channel map information, link key information, service discovery protocol (SDP) information, or supported feature information of at least one electronic device associated with the first link. According to an embodiment, the one or more instructions, when executed, may cause the processor to: create a second link with the external device using the wireless communication circuit; receive a first message indicating whether a first data packet has been received from the external device via the second link; transmit a second message indicating positive acknowledgement for the first data packet to the user device via the first link when the first message indicates successful data packet reception and the first data packet is successfully received via the first link; and transmit the second message indicating negative acknowledgement for the first data packet to the user device via the first link when the first message indicates data packet reception failure or reception of the first data packet via the first link fails. According to certain embodiments, an electronic device (e.g., the second device 202-2 of FIG. 2) may include a wireless communication circuit (e.g., the communication module 190 of FIG. 1), a processor (e.g., the processor 120 of FIG. 1) operatively connected to the wireless communication circuit, and a memory (e.g., the memory 130 of FIG. 1) operatively connected to the processor. 
The memory may store one or more instructions that, when executed, cause the processor to: create a second link (e.g., the second link 210 of FIG. 2) with a user device (e.g., the user device 201 of FIG. 2) using the wireless communication circuit; transmit, to the user device via the second link, a signal requesting information of an electronic device connected to the user device using the wireless communication circuit; receive information of an external device (e.g., the first device 202-1 of FIG. 2) connected to the user device via a first link (e.g., the first link 205 of FIG. 2) from the user device using the wireless communication circuit; determine whether to request to share, with the external electronic device, first link information for estimating a channel of the first link at least partially based on the information of the external electronic device and information of the electronic device; transmit, based on the determination, a signal requesting to share the first link information to the user device via the second link; and, upon receiving the first link information from the user device, monitor a data packet transmitted from the user device to the external device via the first link using the first link information. According to an embodiment, the one or more instructions, when executed, may cause the processor to determine to share the first link information when the addresses of the electronic device and the external electronic device have the same organizationally unique identifier (OUI) and a difference of lower address part (LAP) between the addresses falls within a preset range. According to an embodiment, the one or more instructions, when executed, may cause the processor to determine to share the first link information based on manufacturers and supported service information of the electronic device and the external device.
For example, the first link information may include at least one of address information, piconet clock information, logical transport address information, used channel map information, link key information, service discovery protocol (SDP) information, or supported feature information of at least one electronic device associated with the first link. According to an embodiment, the one or more instructions, when executed, may cause the processor to: create a third link (e.g., the third link 215 of FIG. 2) with the external device using the wireless communication circuit; receive a data packet transmitted to the external device by monitoring the first link; and transmit, via the third link, a message indicating a result of receiving the data packet. The electronic device according to certain embodiments may be one of various types of electronic devices. The electronic devices may include, for example, a portable communication device (e.g., a smartphone), a computer device, a portable multimedia device, a portable medical device, a camera, a wearable device, or a home appliance. According to an embodiment of the disclosure, the electronic devices are not limited to those described above. It should be appreciated that certain embodiments of the present disclosure and the terms used therein are not intended to limit the technological features set forth herein to particular embodiments and include various changes, equivalents, or replacements for a corresponding embodiment. With regard to the description of the drawings, similar reference numerals may be used to refer to similar or related elements. It is to be understood that a singular form of a noun corresponding to an item may include one or more of the things, unless the relevant context clearly indicates otherwise.
As used herein, each of such phrases as "A or B", "at least one of A and B", "at least one of A or B", "A, B, or C", "at least one of A, B, and C", and "at least one of A, B, or C" may include any one of, or all possible combinations of, the items enumerated together in a corresponding one of the phrases. As used herein, such terms as "1st" and "2nd", or "first" and "second", may be used to simply distinguish a corresponding component from another, and do not limit the components in other aspects (e.g., importance or order). It is to be understood that if an element (e.g., a first element) is referred to, with or without the term "operatively" or "communicatively", as "coupled with", "coupled to", "connected with", or "connected to" another element (e.g., a second element), it means that the element may be coupled with the other element directly (e.g., wiredly), wirelessly, or via a third element. As used herein, the term "module" may include a unit implemented in hardware, software, or firmware, and may interchangeably be used with other terms, for example, "logic", "logic block", "part", or "circuitry". A module may be a single integral component, or a minimum unit or part thereof, adapted to perform one or more functions. For example, according to an embodiment, the module may be implemented in the form of an application-specific integrated circuit (ASIC). Certain embodiments as set forth herein may be implemented as software (e.g., the program 140) including one or more instructions that are stored in a storage medium (e.g., internal memory 136 or external memory 138) that is readable by a machine (e.g., the electronic device 101). For example, a processor (e.g., the processor 120) of the machine (e.g., the electronic device 101) may invoke at least one of the one or more instructions stored in the storage medium, and execute it, with or without using one or more other components under the control of the processor.
This allows the machine to be operated to perform at least one function according to the at least one instruction invoked. The one or more instructions may include a code generated by a compiler or a code executable by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Here, the term "non-transitory" simply means that the storage medium is a tangible device and does not include a signal (e.g., an electromagnetic wave), but this term does not differentiate between where data is semi-permanently stored in the storage medium and where the data is temporarily stored in the storage medium. According to an embodiment, a method according to certain embodiments of the disclosure may be included and provided in a computer program product. The computer program product may be traded as a product between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., compact disc read only memory (CD-ROM)), or be distributed (e.g., downloaded or uploaded) online via an application store (e.g., PlayStore™), or between two user devices (e.g., smart phones) directly. If distributed online, at least part of the computer program product may be temporarily generated or at least temporarily stored in the machine-readable storage medium, such as memory of the manufacturer's server, a server of the application store, or a relay server. According to certain embodiments, each component (e.g., a module or a program) of the above-described components may include a single entity or multiple entities. According to certain embodiments, one or more of the above-described components may be omitted, or one or more other components may be added. Alternatively or additionally, a plurality of components (e.g., modules or programs) may be integrated into a single component.
In such a case, according to certain embodiments, the integrated component may still perform one or more functions of each of the plurality of components in the same or similar manner as they are performed by a corresponding one of the plurality of components before the integration. According to certain embodiments, operations performed by the module, the program, or another component may be carried out sequentially, in parallel, repeatedly, or heuristically, or one or more of the operations may be executed in a different order or omitted, or one or more other operations may be added.
11943080
DESCRIPTION OF THE EMBODIMENTS The present disclosure is further elucidated below with reference to the accompanying drawings and a specific embodiment. In this specific embodiment, the multipolarized antenna is a distributed Electromagnetic Vector Antenna (EMVA) with the most comprehensive polarization. It should be understood that, the embodiment is merely used to describe the present disclosure, but is not intended to limit the scope of the present disclosure. Various equivalent modifications made to the present disclosure by those skilled in the art after reviewing the present disclosure shall all fall within the scope limited by the appended claims of the present application. The term “plurality” means two or more than two. Contents not described in detail in the embodiment of the present disclosure belong to the prior art. The present disclosure discloses a method for estimating dense multipath parameters by means of multipolarized broadband extended array responses, which can effectively estimate multipath parameters in a dense multipath environment, including but not limited to, estimation of parameters related to sub-paths of which the number is larger than the number of array elements in a time resolvable path (which is briefly referred to as a “path” below). In the method, multiple different transmitted signal sequences are first transmitted via a multipolarized antenna array, and received data in multiple snapshots is processed according to the known transmitted signals, to obtain channel responses of multi-polarization components at all frequency points in a frequency band. The obtained channel responses at the multiple frequency points in each snapshot are vectorized into a column vector, and the channel responses in different snapshots are arranged into a two-dimensional matrix. Then, an estimation of the delay parameter is acquired according to the frequency-domain channel response of a reference array element pair. 
The delay parameter can be acquired by using a subspace method such as MUSIC or ESPRIT, and then an estimation of the two-dimensional angle information of the receiver and transmitter is obtained by means of the delay parameter and frequency-domain smoothing. Herein, two-dimensional angular spectrum peaks are searched for by means of MUSIC, and an array response is built by using the estimated delay and angle parameters; and then a matrix containing the cross-polarization ratio, the amplitude in each snapshot, and the initial phase of each polarization combination is obtained by using the least square method. The argument of this matrix is calculated, to obtain initial phases of horizontal and vertical polarization combinations of all sub-paths; then, the cross-polarization ratio is estimated by using a relationship between each column of the obtained matrix and the cross-polarization ratio; and finally, amplitude information of each sub-path in different snapshots is estimated by using the least square method. The extraction principle of specific parameters in the present disclosure falls within the scope of the conventional subspace algorithms. FIG. 1 is a flowchart of an estimation method in an embodiment of the present disclosure, where H denotes a channel response matrix of all frequency points, Hsm denotes a channel response matrix subjected to smoothing and dimensionality reduction, P denotes the number of smoothing operations, N denotes the number of snapshots, and L denotes the number of paths resolvable in the time domain. The corresponding specific implementation steps are as follows: (1) Multiple different transmitted signal sequences are transmitted via a multipolarized antenna array, and received data in multiple snapshots is processed according to the known transmitted signal sequences, to obtain channel responses of multi-polarization components at all frequency points in a frequency band.
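The vectorize-and-stack construction described above (each snapshot's multi-frequency responses vectorized by columns and arranged into one two-dimensional matrix) can be sketched with toy dimensions; the real matrix has 36·Mt·Mr·Ns rows, which are stand-ins here.

```python
import numpy as np

# Sketch of the stacking step: arrange per-snapshot, per-frequency polarized
# channel responses into a two-dimensional matrix H. Dimensions below are
# illustrative stand-ins, not the 36*Mt*Mr*Ns rows of the full EMVA system.
N, Ns, rows, cols = 4, 3, 2, 2   # snapshots, frequency points, response size
rng = np.random.default_rng(1)
# H_resp[n][i] plays the role of the multi-polarization response H_{n, f_i}
H_resp = rng.standard_normal((N, Ns, rows, cols)) \
    + 1j * rng.standard_normal((N, Ns, rows, cols))

def vec(M):
    return M.reshape(-1, order="F")  # vectorize by columns, as in the text

# Column n of H stacks vec(H_{n,f_1}), ..., vec(H_{n,f_Ns}) on top of each other
H = np.column_stack([
    np.concatenate([vec(H_resp[n, i]) for i in range(Ns)]) for n in range(N)
])
print(H.shape)  # (rows*cols*Ns, N) = (12, 4)
```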
The following sub-steps are included: 1) Structural Design of the Transmitted Signal Sequence. FIG. 2 is a schematic structural diagram of the transmitted signals required in the present disclosure, where Mt denotes the number of transmitting EMVAs, and (mt, x) denotes the x-th component of the mt-th distributed EMVA. s_{mt,x,lseq} in the figure indicates the lseq-th subsequence signal transmitted by the (mt, x)-th polarized antenna component. T is the symbol width for compact expression. In this embodiment, the transmitted signal sequence has 6Mt sets of different transmitted signals in total, and each transmitted signal set is composed of Lseq subsequences, where Lseq ≥ 6Mt; and each subsequence has a length equal to that of Ns code element symbols, namely, the number of DFT points. 2) Selection of a Channel Transmission Model. A transmission model used in the present disclosure is shown in FIG. 3, where each sub-path is denoted by a pair of numerals in the parentheses. For example, (1, k1) denotes the k1-th sub-path of the first path. The number of sub-paths in the l-th path is denoted by Kl, and it is assumed herein that there are K = Σ_{l=1}^{L} Kl sub-paths in total in the propagation environment. Mr in the figure denotes the number of receiving EMVAs.
3) Establishment of an Expression Regarding Received Signals

$$y_{n,n_s,l_{seq}}(\tau,\Theta_r,\Theta_t,\Xi)=\sum_{l=1}^{L}\sum_{k=1}^{K_l}\alpha_{n,l_k}\,\mathbf{a}_{r,f}(\Theta_{r,l_k})\otimes\left[\mathbf{g}_r^{T}(\Theta_{r,l_k})\oplus\mathbf{A}_{pdr,f}^{T}(\Theta_{r,l_k})\right]^{T}\mathbf{T}(\Xi_{l_k})\cdot\mathbf{a}_{t,f}^{T}(\Theta_{t,l_k})\otimes\left[\mathbf{g}_t^{T}(\Theta_{t,l_k})\oplus\mathbf{A}_{pdt,f}^{T}(\Theta_{t,l_k})\right]s_{l_{seq},n_s}(n_sT-\tau_l)+z_n\quad(\text{Formula 1})$$

The foregoing formula is an expression of the ns-th symbol of the received lseq-th subsequence in the n-th snapshot, where the operator ⊗ denotes the Kronecker product operation, and the operator ⊕ denotes the Kronecker product by columns; the subscript f denotes the frequency and the superscript T denotes the transpose of a matrix; z_n denotes the noise of the n-th snapshot; g_t and g_r respectively denote the pattern gains of the transmitter and receiver EMVAs in the Θ_{t,lk} and Θ_{r,lk} directions; the variable T denotes the code element width of the transmitted sequence, and τ_l denotes the delay parameter of the l-th path; and α_{n,lk} denotes the amplitude of the k_l-th sub-path in the n-th snapshot. The involved parameters τ, Θr, Θt, and Ξ respectively denote the path delay, the two-dimensional arrival angles of all sub-paths, the two-dimensional departure angles, and a polarization parameter set, and some parameters are defined as follows:

$$\Theta_r\triangleq[\varphi_r,\theta_r],\qquad\Theta_t\triangleq[\varphi_t,\theta_t],\qquad\Xi\triangleq\{\kappa,\omega^{hh},\omega^{hv},\omega^{vh},\omega^{vv}\}\quad(\text{Formula 2})$$

φ_r denotes the azimuth angle of arrival (AAoA), φ_t denotes the azimuth angle of departure (AAoD), θ_r denotes the elevation angle of arrival (EAoA), and θ_t denotes the elevation angle of departure (EAoD); κ denotes the cross-polarization ratio; ω^{hh}, ω^{hv}, ω^{vh}, and ω^{vv} are the initial phases of the four horizontal/vertical polarization combinations; and a_t(f, Θ_{t,lk}) and A_{pdt}(f, Θ_{t,lk}) (namely, a_{t,f}(Θ_{t,lk}) and A_{pdt,f}(Θ_{t,lk}) in Formula 1, where f is put into the parentheses to reflect the dependence on frequency) respectively denote the steering vector in the spatial frequency domain of the EMVA array of the transmitter and a joint steering matrix in the spatial domain and polarization domain of the distributed EMVA array element of the transmitter.
For the transmitter, the expressions of a_t(f, Θ_{t,lk}) and A_{pdt}(f, Θ_{t,lk}) may be respectively written as follows:

$$\mathbf{a}_t(f,\Theta_{t,l_k})=\left[e^{-j2\pi f\,\mathbf{r}_1\mathbf{u}_{t,l_k}/c},\ldots,e^{-j2\pi f\,\mathbf{r}_{M_t}\mathbf{u}_{t,l_k}/c}\right]^{T},\qquad\mathbf{A}_{pdt}(f,\Theta_{t,l_k})=\mathbf{D}_t(f,\Theta_{t,l_k})\,\Omega_t(\Theta_{t,l_k})\quad(\text{Formula 3})$$

r_{mt} and u_{t,lk} respectively denote the coordinates of the mt-th EMVA and the direction cosine of the departure angle of the k_l-th sub-path. The coordinates here refer to a three-dimensional row vector in a spatial Cartesian coordinate system, and the direction cosine is a unit three-dimensional column vector in this direction. c denotes the propagation speed of electromagnetic waves in free space. D_t(f, Θ_{t,lk}) is expressed as follows:

$$\mathbf{D}_t(f,\Theta_{t,l_k})=\mathrm{diag}\left[\mathbf{d}_{t,l_k}(f,\Theta_{t,l_k})\right]\quad(\text{Formula 4})$$

d_{t,lk}(f, Θ_{t,lk}) denotes a spatial phase shift vector of the distributed EMVA of the transmitter, and Ω_t(Θ_{t,lk}) denotes a steering matrix in the polarization domain of the EMVA of the transmitter:

$$\mathbf{d}_{t,l_k}(f,\Theta_{t,l_k})=\left[e^{-j2\pi f\,\mathbf{r}_{ex}\mathbf{u}_{t,l_k}/c},e^{-j2\pi f\,\mathbf{r}_{ey}\mathbf{u}_{t,l_k}/c},e^{-j2\pi f\,\mathbf{r}_{ez}\mathbf{u}_{t,l_k}/c},e^{-j2\pi f\,\mathbf{r}_{hx}\mathbf{u}_{t,l_k}/c},e^{-j2\pi f\,\mathbf{r}_{hy}\mathbf{u}_{t,l_k}/c},e^{-j2\pi f\,\mathbf{r}_{hz}\mathbf{u}_{t,l_k}/c}\right]^{T}$$

$$\Omega_t(\Theta_{t,l_k})=\begin{bmatrix}\cos\varphi_{t,l_k}\cos\theta_{t,l_k}&-\sin\varphi_{t,l_k}\\\sin\varphi_{t,l_k}\cos\theta_{t,l_k}&\cos\varphi_{t,l_k}\\-\sin\theta_{t,l_k}&0\\-\sin\varphi_{t,l_k}&-\cos\varphi_{t,l_k}\cos\theta_{t,l_k}\\\cos\varphi_{t,l_k}&-\sin\varphi_{t,l_k}\cos\theta_{t,l_k}\\0&\sin\theta_{t,l_k}\end{bmatrix}\quad(\text{Formula 5})$$

r_ex, r_ey, r_ez, r_hx, r_hy, and r_hz denote the position coordinates of the multiple polarized component antennas with respect to the EMVA. For the most comprehensive EMVA, the number of polarization components is 6, or a subset may be selected for polarization. For the receiver steering matrix, the subscript 't' in the foregoing expressions is replaced with 'r'. T(Ξ_{lk}) denotes a polarization torsion matrix, which can be expressed as follows:

$$\mathbf{T}(\Xi_{l_k})=\begin{bmatrix}e^{j\omega_{l_k}^{vv}}&\kappa_{l_k}^{-1}e^{j\omega_{l_k}^{vh}}\\\kappa_{l_k}^{-1}e^{j\omega_{l_k}^{hv}}&e^{j\omega_{l_k}^{hh}}\end{bmatrix}\quad(\text{Formula 6})$$

where ω_{lk}^{hh}, ω_{lk}^{hv}, ω_{lk}^{vh}, and ω_{lk}^{vv} are the initial phases of the four horizontal/vertical polarization combinations (hh, hv, vh, vv) of the k_l-th sub-path.
4) Calculation of a Channel Response Corresponding to Each Subsequence. For the n-th snapshot, DFT transformation is performed on the received signals in different subsequences, and each subsequence can acquire a channel response at each frequency point. Then, channel responses at the same frequency points are extracted and grouped together. The same operation is performed for all the N snapshots. For example, for the n-th snapshot, the multi-polarization response of the i-th frequency point is expressed as H_{n,fi}, which has the following expression:

$$\mathbf{H}_{n,f_i}=\sum_{l=1}^{L}\sum_{k=1}^{K_l}\mathbf{a}_{t,f_i}(\Theta_{t,l_k})\otimes\left[\mathbf{g}_t^{T}(\Theta_{t,l_k})\oplus\mathbf{A}_{pdt,f_i}^{T}(\Theta_{t,l_k})\right]^{T}\otimes\mathbf{a}_{r,f_i}(\Theta_{r,l_k})\otimes\left[\mathbf{g}_r^{T}(\Theta_{r,l_k})\oplus\mathbf{A}_{pdr,f_i}^{T}(\Theta_{r,l_k})\right]^{T}\mathrm{vec}(\mathbf{T}_{l_k})\,\alpha_{n,l_k}\quad(\text{Formula 7})$$

H_{n,fi} contains multiple polarized channel responses. In the subscript f_i of the present disclosure, i = 1, …, Ns, which denotes the i-th frequency point. vec in the foregoing formula denotes the vectorization operation on the matrix by columns. (2) Through step (1), the obtained channel responses at the multiple frequency points in each snapshot are vectorized into a column vector, and the channel responses in different snapshots are arranged into a two-dimensional matrix H ∈ C^{36MtMrNs×N}, which is shown in the following form:

$$\mathbf{H}=\begin{bmatrix}\mathrm{vec}(\mathbf{H}_{1,f_1})&\mathrm{vec}(\mathbf{H}_{2,f_1})&\cdots&\mathrm{vec}(\mathbf{H}_{N,f_1})\\\mathrm{vec}(\mathbf{H}_{1,f_2})&\mathrm{vec}(\mathbf{H}_{2,f_2})&\cdots&\mathrm{vec}(\mathbf{H}_{N,f_2})\\\vdots&\vdots&\ddots&\vdots\\\mathrm{vec}(\mathbf{H}_{1,f_{N_s}})&\mathrm{vec}(\mathbf{H}_{2,f_{N_s}})&\cdots&\mathrm{vec}(\mathbf{H}_{N,f_{N_s}})\end{bmatrix}\quad(\text{Formula 8})$$

(3) An array element is selected as a reference point (the reference array elements at the receiver and the transmitter may be randomly selected, and their coordinates are set as the reference origins) separately at the receiver and the transmitter, and the delay parameter regarding multipath propagation is acquired according to the channel response of the reference array element. The delay parameter can be acquired by using a subspace method such as MUSIC or ESPRIT. Assuming that the channel response of the reference array element is H_t ∈ C^{Ns×N}, eigenvalue decomposition is performed for H_t to obtain a noise subspace.
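The reference-element delay search can be illustrated with a textbook MUSIC pseudospectrum over a synthetic two-path frequency-domain channel. The signal model, search grid, and peak picking below are simplifications for illustration, not the patented procedure.

```python
import numpy as np

def music_delay(H, freqs, tau_grid, n_paths):
    """Estimate path delays from a frequency-domain channel matrix H
    (frequency points x snapshots) via the MUSIC pseudospectrum."""
    R = H @ H.conj().T / H.shape[1]          # sample covariance
    _, eigvec = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = eigvec[:, :H.shape[0] - n_paths]    # noise subspace
    G = En @ En.conj().T                     # noise-subspace projector
    spec = np.array([
        1.0 / np.abs(np.exp(-2j * np.pi * freqs * tau).conj()
                     @ G @ np.exp(-2j * np.pi * freqs * tau))
        for tau in tau_grid])
    # keep the n_paths strongest local maxima of the pseudospectrum
    peaks = [i for i in range(1, len(spec) - 1)
             if spec[i] >= spec[i - 1] and spec[i] >= spec[i + 1]]
    top = sorted(peaks, key=lambda i: spec[i], reverse=True)[:n_paths]
    return np.sort(tau_grid[top])

# Synthetic two-path channel: 64 tones at 1 MHz spacing, 50 snapshots
rng = np.random.default_rng(0)
freqs = np.arange(64) * 1e6
true_taus = np.array([100e-9, 250e-9])
A = np.exp(-2j * np.pi * np.outer(freqs, true_taus))
amps = rng.standard_normal((2, 50)) + 1j * rng.standard_normal((2, 50))
H = A @ amps + 0.05 * (rng.standard_normal((64, 50))
                       + 1j * rng.standard_normal((64, 50)))
est = music_delay(H, freqs, np.linspace(0, 500e-9, 501), n_paths=2)
print(est)  # two peaks near 100 ns and 250 ns
```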
Then, the delay parameter is estimated by means of the conventional subspace algorithm (reference may be made to "Multiple emitter location and signal parameter estimation" by R. Schmidt in IEEE Transactions on Antennas and Propagation). An estimation result of the delay parameter of the path is given in FIG. 5, where in the current operation, the simulation signal-to-noise ratio is 10 dB, the signal bandwidth is 500 MHz, the carrier frequency is 28 GHz, and the segment number Lseq in the transmitted signal sequences is 300. Two delay-resolvable paths are considered, and each delay-resolvable path contains 14 sub-paths. The number of array elements is 12, which is less than the number of sub-paths in the delay-resolvable path. It can be seen from the figure that the two delays can be correctly estimated. (4) For the estimation of the two-dimensional departure angle, rows related to the receiver reference array element are selected from H to form a matrix Ht, and the following step (5) is performed. For the two-dimensional arrival angle, rows related to the transmitter reference array element are selected from H to form a matrix Hr, and then the following step (5) is performed, where the corresponding subscript 't' is replaced with the subscript 'r'. (5) The channel matrix dimensionality and the operational amount are reduced by means of frequency-domain smoothing; and the two-dimensional departure and arrival angles are estimated by using the channel matrix subjected to dimensionality reduction. Generally, the obtained Ht has high dimensionality and needs to be subjected to smoothing and dimensionality reduction, so as to reduce the operation cost and the noise impact. A basic smoothing principle is given in FIG. 4.
The channel response matrix Ht is divided into P subarrays by rows, where P is the number of subarrays obtained after division and also the number of smoothing operations, and should generally divide Ns; each subarray contains the channel responses of Ns/P frequency points. Because parameters such as ω^{hh}, ω^{hv}, ω^{vh}, and ω^{vv} are unknown, the two-dimensional arrival angle is estimated by means of the reduced-rank subspace algorithm (RD-MUSIC) (reference may be made to "Direction finding with an array of antennas having diverse polarizations" by E. Ferrara and T. Parks in IEEE Transactions on Antennas and Propagation). It should be noted in this step that the array manifold corresponding to Hsm, which is obtained by summing the channel responses of the P subarrays obtained after division, differs from the array manifold of each subarray by a phase shift factor. For example, for the transmitter, the phase shift factor is expressed as follows:

$$\Sigma_{t,P}=(\Phi_{t,0}+\Phi_{t,1}+\cdots+\Phi_{t,P-1})\quad(\text{Formula 9})$$

where Φ_{t,p} has the following expression:

$$\Phi_{t,p}=\begin{cases}\mathbf{1}_{6M_tN_s/P\times K},&p=0\\\left[\boldsymbol{\phi}_{t,11},\ldots,\boldsymbol{\phi}_{t,1K_1},\ldots,\boldsymbol{\phi}_{t,L1},\ldots,\boldsymbol{\phi}_{t,LK_L}\right],&p=1\\\Phi_{t,p-1}\odot\Phi_{t,1},&p>1\end{cases}\quad(\text{Formula 10})$$

The operator ⊙ denotes the Hadamard product operation, 1_{6MtNs/P×K} denotes the all-ones matrix with dimensionality 6MtNs/P × K, and φ_{t,lk} has the following expression:

$$\boldsymbol{\phi}_{t,l_k}=\Big[\underbrace{e^{-j2\pi\Delta f(\tau_l+\mathbf{r}_{t,1}\mathbf{u}_{t,l_k}/c)},\ldots,e^{-j2\pi\Delta f(\tau_l+\mathbf{r}_{t,1}\mathbf{u}_{t,l_k}/c)}}_{N_s/P},\ldots,\underbrace{e^{-j2\pi\Delta f(\tau_l+\mathbf{r}_{t,6M_t}\mathbf{u}_{t,l_k}/c)},\ldots,e^{-j2\pi\Delta f(\tau_l+\mathbf{r}_{t,6M_t}\mathbf{u}_{t,l_k}/c)}}_{N_s/P}\Big]^{T}\quad(\text{Formula 11})$$

In the foregoing formula, Δf denotes the interval between frequency points, and the coordinates r_{t,m} indicate the position coordinates of the m-th antenna unit. For the arrival angle, Ht in step (5) is replaced with Hr as described in step (4), and the estimation is performed through steps identical with those for the departure angle. The estimation results of the two-dimensional arrival and departure angles of the sub-paths are given in FIGS. 6 and 7.
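The row-wise subarray division and summation that produce Hsm can be sketched as follows; contiguous blocks of Ns/P frequency rows are assumed, which is one plausible reading of FIG. 4, not a statement of the patented layout.

```python
import numpy as np

def freq_smooth(Ht, P):
    """Frequency-domain smoothing sketch: split the Ns x N channel matrix
    into P row-subarrays of Ns/P frequency points each and sum them,
    reducing dimensionality before the subspace angle search."""
    Ns, N = Ht.shape
    assert Ns % P == 0, "P should divide the number of frequency points"
    sub = Ht.reshape(P, Ns // P, N)   # P contiguous subarrays, each Ns/P x N
    return sub.sum(axis=0)            # H_sm: (Ns/P) x N

rng = np.random.default_rng(2)
Ht = rng.standard_normal((64, 10)) + 1j * rng.standard_normal((64, 10))
Hsm = freq_smooth(Ht, P=8)
print(Hsm.shape)  # (8, 10)
```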
In the current operation, the simulation signal-to-noise ratio is 10 dB, the signal bandwidth is 500 MHz, the carrier frequency is 28 GHz, and the segment number Lseq in the transmitted signal sequences is 300. FIGS. 6 and 7 show the estimated MUSIC spectrums of sub-paths in one of the delay-resolvable paths, where the delay-resolvable path contains 14 sub-paths. The number of array elements is 12, which is less than the number of sub-paths in the delay-resolvable path. Clear sub-path spectrum peaks can be seen in the figures, which indicates that the sub-paths can be correctly resolved. (6) The estimated AoA and AoD are paired, and an array response at any frequency point is selected and defined as H_pair; then eigenvalue decomposition is performed for H_pair to obtain a noise subspace U_{pair,n}. An angle pair that minimizes the value of the following formula is found:

$$P_{pair}=\arg\min_{\hat{\Theta}_{t,l_k},\hat{\Theta}_{r,l_k}}\det\left\{\left|\mathbf{A}_{pair}^{H}(\hat{\Theta}_{r,l_k},\hat{\Theta}_{t,l_k})\,\mathbf{U}_{pair,n}\mathbf{U}_{pair,n}^{H}\,\mathbf{A}_{pair}(\hat{\Theta}_{r,l_k},\hat{\Theta}_{t,l_k})\right|\right\}\quad(\text{Formula 12})$$

where A_pair(Θ̂_{r,lk}, Θ̂_{t,lk}) can be expressed as follows:

$$\mathbf{A}_{pair}(\hat{\Theta}_{r,l_k},\hat{\Theta}_{t,l_k})=\mathbf{a}_{t,f_i}(\hat{\Theta}_{t,l_k})\otimes\left[\mathbf{g}_t^{T}(\hat{\Theta}_{t,l_k})\oplus\mathbf{A}_{pdt,f_i}^{T}(\hat{\Theta}_{t,l_k})\right]^{T}\otimes\mathbf{a}_{r,f_i}(\hat{\Theta}_{r,l_k})\otimes\left[\mathbf{g}_r^{T}(\hat{\Theta}_{r,l_k})\oplus\mathbf{A}_{pdr,f_i}^{T}(\hat{\Theta}_{r,l_k})\right]^{T}\quad(\text{Formula 13})$$

Θ̂_{r,lk} and Θ̂_{t,lk} in the foregoing formula denote the estimated two-dimensional arrival and departure angles. The hat symbol ^ in this embodiment indicates that the corresponding parameter is an estimate.
(7) The parameters such as the cross-polarization ratios, the initial phases, and the amplitudes of the sub-paths are estimated by using the paired angle parameters, thus finally realizing estimation of the multi-dimensional parameters including the delay, the two-dimensional departure angle, the two-dimensional arrival angle, the cross-polarization ratio, the initial phase, and the amplitude. The matrix H can be expressed as H = BΓ + Z, where B is a total steering matrix, Γ denotes the sub-path amplitude matrix in all snapshots, and Z is white Gaussian noise. Therefore, B can be written as B = AΨ, where A and Ψ are respectively expressed as follows:

$$\mathbf{A}=\begin{bmatrix}\mathbf{A}_{tr}^{f_1,11}e^{-j2\pi f_1\tau_1}&\cdots&\mathbf{A}_{tr}^{f_1,LK_L}e^{-j2\pi f_1\tau_L}\\\vdots&\ddots&\vdots\\\mathbf{A}_{tr}^{f_{N_s},11}e^{-j2\pi f_{N_s}\tau_1}&\cdots&\mathbf{A}_{tr}^{f_{N_s},LK_L}e^{-j2\pi f_{N_s}\tau_L}\end{bmatrix}\quad(\text{Formula 14})$$

$$\Psi=\mathbf{I}_K\oplus\left[\boldsymbol{\beta}_{11},\ldots,\boldsymbol{\beta}_{1K_1},\ldots,\boldsymbol{\beta}_{L1},\ldots,\boldsymbol{\beta}_{LK_L}\right]\quad(\text{Formula 15})$$

where

$$\mathbf{A}_{tr}^{f_i,l_k}=\mathbf{a}_t^{f_i,l_k}\otimes\left[\mathbf{g}_{t,l_k}^{T}\oplus\mathbf{A}_{pdt}^{f_i,l_k\,T}\right]^{T}\otimes\mathbf{a}_r^{f_i,l_k}\otimes\left[\mathbf{g}_{r,l_k}^{T}\oplus\mathbf{A}_{pdr}^{f_i,l_k\,T}\right]^{T}\quad(\text{Formula 16})$$

$$\boldsymbol{\beta}_{l_k}=\left[e^{j\omega_{l_k}^{vv}},\ \kappa_{l_k}^{-1}e^{j\omega_{l_k}^{hv}},\ \kappa_{l_k}^{-1}e^{j\omega_{l_k}^{vh}},\ e^{j\omega_{l_k}^{hh}}\right]^{T}\quad(\text{Formula 17})$$

a_t^{fi,lk}, g_{t,lk}, and A_pdt^{fi,lk} are short for a_t(f_i, Θ_{t,lk}), g_t(Θ_{t,lk}), and A_pdt(f_i, Θ_{t,lk}). A matrix Â can be built according to the estimated angle and delay parameters. Let Π = ΨΓ; then Π̂ = Â⁺H, where the superscript '+' denotes the pseudo-inverse of the matrix. A column vector v is obtained by means of summation of Π̂ by columns and is expressed as follows:

$$\mathbf{v}=\left[\mathbf{v}_{11}^{T},\ldots,\mathbf{v}_{LK_L}^{T}\right]^{T}\quad(\text{Formula 18})$$

An initial value of the phase can be estimated as ω̂ = arg{v}, where arg denotes the operation of calculating the argument. The estimation of the cross-polarization ratio is expressed as follows:

$$\tilde{\kappa}_{l_k}=\left(\frac{|\mathbf{v}_{l_k}(1)|+|\mathbf{v}_{l_k}(4)|}{|\mathbf{v}_{l_k}(2)|+|\mathbf{v}_{l_k}(3)|}\right)^{2}\quad(\text{Formula 19})$$

In the foregoing formula, the numerals (1, 2, 3, 4) in the parentheses denote the first, second, third, and fourth elements of the vector v_{lk}.
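For a single sub-path, the phase, cross-polarization-ratio, and amplitude recovery of this step can be checked numerically. In this sketch the κ^{-1} factor of Formula 17 is treated as an amplitude scaling of 1/√κ so that the squared ratio of Formula 19 returns κ; that normalization, and all numeric values, are assumptions of the example rather than the patent's convention.

```python
import numpy as np

# Step (7) for one sub-path: recover the initial phases, the
# cross-polarization ratio, and the per-snapshot amplitudes from the 4 x N
# block of Pi = Psi * Gamma (cf. Formulas 17-19). Values are made up.
kappa = 4.0
w = np.array([0.3, -1.1, 0.7, 2.0])            # [w_vv, w_hv, w_vh, w_hh]
beta = np.array([np.exp(1j * w[0]),
                 np.exp(1j * w[1]) / np.sqrt(kappa),   # assumed 1/sqrt(kappa)
                 np.exp(1j * w[2]) / np.sqrt(kappa),   # amplitude convention
                 np.exp(1j * w[3])])
amps = np.array([1.0, 0.8, 1.3, 0.9, 1.1])     # amplitudes per snapshot
Pi = np.outer(beta, amps)                      # noise-free 4 x N block

v = Pi.sum(axis=1)                             # summation by columns (Formula 18)
w_hat = np.angle(v)                            # initial-phase estimate
kappa_hat = ((abs(v[0]) + abs(v[3]))
             / (abs(v[1]) + abs(v[2]))) ** 2   # Formula 19
gamma_hat = np.linalg.pinv(beta.reshape(-1, 1)) @ Pi  # amplitudes via pseudo-inverse

print(np.round(w_hat, 3))   # recovers [0.3, -1.1, 0.7, 2.0]
print(round(kappa_hat, 3))  # recovers 4.0
```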
Ψ̂ can be built according to the estimated initial phase and cross-polarization ratio, and then the amplitudes of all sub-paths in all snapshots can be estimated as Γ̂ = Ψ̂⁺Π̂. FIG. 8 gives an estimation result of the initial phases according to the algorithm of the present disclosure, FIG. 9 gives an estimation result of the cross-polarization ratios according to the algorithm of the present disclosure, and FIG. 10 gives an estimation result of the amplitudes according to the algorithm of the present disclosure. The simulation parameters in FIGS. 8, 9, and 10 are consistent with those in FIGS. 5, 6, and 7. It can easily be seen from the estimation result diagrams that the initial phases, cross-polarization ratios, and amplitudes of the sub-paths are consistent with the corresponding set values. Therefore, these parameters can be accurately estimated by using the method of the present disclosure.
11943081
DETAILED DESCRIPTION Various embodiments of the present invention relate to wireless communication over a time-varying channel (which may also be referred to as a time-varying frequency-selective channel), and more particularly, a method of receiving a transmitted signal over a time-varying channel, a receiver thereof, and a system for wireless communication over a time-varying channel comprising the receiver. As described in the background, in wireless communication over a time-varying channel, the high time and frequency dispersiveness of the time-varying channel can significantly distort the transmitted signal, and thus efficient and accurate channel estimation and equalization techniques may be desired. In other words, channel estimation and symbol detection are challenging in a time-varying channel because of high Doppler spread and a large number of multipath components that distort the signal. Accordingly, various embodiments of the present invention provide a wireless communication method over a time-varying channel, including a method of receiving a transmitted signal over a time-varying channel, that seeks to overcome, or at least ameliorate, one or more of the deficiencies in conventional wireless communication methods over a time-varying channel, such as, but not limited to, improving efficiency and/or accuracy in channel estimation and symbol detection in relation to the transmitted signal, for example, improving the mean-square error (MSE) of channel estimation (CE) and the bit error rate (BER). FIG. 1 depicts a schematic flow diagram of a method 100 of receiving a transmitted signal over a time-varying channel, according to various embodiments of the present invention.
The method 100 comprises: obtaining (at 102) a received symbol signal in the frequency domain based on the transmitted signal; performing (at 104) a first channel estimation with respect to the time-varying channel based on the received symbol signal to obtain a plurality of first estimated basis expansion model (BEM) coefficients, the time-varying channel being modeled based on a complex-exponential basis expansion model (CX-BEM); performing (at 106) a first equalization based on the received symbol signal and the plurality of first estimated BEM coefficients to obtain a plurality of first detected source symbols; and performing (at 108) one or more rounds of a second channel estimation and a second equalization, each round comprising: performing the second channel estimation with respect to the time-varying channel based on the received symbol signal and a plurality of detected source symbols to obtain a plurality of second estimated BEM coefficients; performing interference removal based on the received symbol signal, the plurality of detected source symbols and the plurality of second estimated BEM coefficients to obtain an interference reduced symbol signal in the frequency domain; and performing the second equalization based on the interference reduced symbol signal and the plurality of second estimated BEM coefficients to obtain a plurality of second detected source symbols. Accordingly, various embodiments of the present invention advantageously provide a method 100 of receiving a transmitted signal over a time-varying channel with hybrid channel estimation and equalization, including interference removal in relation to the received symbol signal in the frequency domain, for improving efficiency and/or accuracy in channel estimation and symbol detection in relation to the transmitted signal, for example, improving the MSE of CE and the BER.
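The CX-BEM that models the time-varying channel in operation 104 represents each tap's variation within a block as a short weighted sum of complex exponentials, so channel estimation reduces to estimating a small coefficient matrix. A minimal sketch, with an assumed basis order Q and noise-free taps that lie exactly in the basis:

```python
import numpy as np

# Minimal complex-exponential BEM (CX-BEM) sketch: within a block of N
# samples, tap l is approximated as
#   h[n, l] ~= sum_q c[q, l] * exp(j*2*pi*(q - Q/2)*n / N),
# so the unknowns shrink from N*L tap values to (Q+1)*L BEM coefficients.
N, L, Q = 128, 4, 4
n = np.arange(N)
B = np.exp(1j * 2 * np.pi * np.outer(n, np.arange(Q + 1) - Q / 2) / N)  # N x (Q+1)

rng = np.random.default_rng(3)
c_true = rng.standard_normal((Q + 1, L)) + 1j * rng.standard_normal((Q + 1, L))
h = B @ c_true                               # time-varying taps, N x L

c_ls = np.linalg.lstsq(B, h, rcond=None)[0]  # least-squares BEM fit
print(np.allclose(c_ls, c_true))  # True: the taps lie exactly in the basis
```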
In particular, the method 100 comprises a first equalization and a second equalization, which are of different types from each other and are utilized in different stages of equalization, whereby performing the interference removal in relation to the received symbol signal enables the second equalization to be performed in relation to the received symbol signal, which enables improvement in efficiency and/or accuracy in channel estimation and symbol detection in relation to the transmitted signal. These advantages or technical effects will become more apparent to a person skilled in the art as the method 100 of receiving a transmitted signal is described in more detail according to various embodiments and various example embodiments of the present invention. In various embodiments, the method 100 further comprises determining a first estimated channel matrix based on the plurality of first BEM coefficients. In this regard, the first equalization is performed based on the received symbol signal and the first estimated channel matrix to obtain the plurality of first detected source symbols. In various embodiments, the above-mentioned performing interference removal comprises: determining a second estimated channel matrix based on the plurality of second BEM coefficients; and decomposing the second estimated channel matrix into an estimated banded diagonal channel matrix and a complementary channel matrix (e.g., which may also be referred to as a non-banded diagonal channel matrix) of the estimated banded diagonal channel matrix. In various embodiments, the above-mentioned performing interference removal further comprises removing interference in the received symbol signal based on the complementary channel matrix and the plurality of detected source symbols to obtain the interference reduced symbol signal. In various embodiments, the above-mentioned interference comprises inter-carrier interference associated with non-significant subcarriers in the received symbol signal.
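The banded-plus-complement decomposition and the interference-removal step described above can be sketched as follows. The band width, matrix sizes, and the non-cyclic band structure are assumptions of this illustration, not the patented parameters.

```python
import numpy as np

def remove_out_of_band_interference(y, H_est, s_detected, bandwidth):
    """Sketch of the interference-removal step: split the estimated
    frequency-domain channel matrix into a banded diagonal part B (the
    significant near-diagonal ICI terms) and its complement C, then cancel
    the complement's contribution with previously detected symbols:
    y_reduced = y - C @ s_detected, leaving approximately y_reduced ~= B @ s."""
    K = H_est.shape[0]
    idx = np.arange(K)
    band_mask = np.abs(idx[:, None] - idx[None, :]) <= bandwidth
    B = np.where(band_mask, H_est, 0)   # estimated banded diagonal channel matrix
    C = H_est - B                       # complementary (non-banded) channel matrix
    return y - C @ s_detected, B

rng = np.random.default_rng(4)
K = 16
H = np.eye(K) + 0.1 * rng.standard_normal((K, K))  # diagonally dominant ICI toy
s = rng.choice([-1.0, 1.0], size=K)                # BPSK source symbols
y = H @ s
y_red, B = remove_out_of_band_interference(y, H, s, bandwidth=2)
print(np.allclose(y_red, B @ s))  # True when the detected symbols are correct
```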
In various embodiments (e.g., in the case of the received symbol signal without error correction code), in a first round of the one or more rounds, the plurality of detected source symbols are the plurality of first detected source symbols. In various embodiments (e.g., in the case of the received symbol signal without error correction code), the above-mentioned one or more rounds is a plurality of rounds. In this regard, in each subsequent round of the plurality of rounds subsequent to the first round, the plurality of detected source symbols are the plurality of second detected source symbols obtained at the immediately preceding round of the plurality of rounds. In various embodiments, the received symbol signal is encoded based on an error correction code, and the plurality of first detected source symbols are a plurality of first detected coded source symbols. In various embodiments (e.g., in the case of the received symbol signal being encoded based on the error correction code), the method100further comprises: demapping and deinterleaving the plurality of first detected coded source symbols to obtain a plurality of first detected coded bits; soft decoding the plurality of first detected coded bits to obtain a plurality of first soft coded bits (i.e., soft coded bits obtained after the soft decoding); hard decoding the plurality of first detected coded bits to obtain a plurality of first hard coded bits (i.e., hard coded bits obtained after the hard decoding); interleaving and mapping the plurality of first soft coded bits to obtain a plurality of first detected soft decoded source symbols; and interleaving and mapping the plurality of first hard coded bits to obtain a plurality of first detected hard decoded source symbols. Hard decoding (which may also be referred to as hard decision decoding) and soft decoding (which may also be referred to as soft decision decoding) are known in the art and thus need not be described herein. 
For example, hard decoding may receive a stream or block of bits from a threshold detector and decode each bit into an exact or definite value, for example, either 0 or 1. In contrast, soft decoding may receive a stream or block of bits and decode each bit into a range of possible values (e.g., a measure of reliability or probability, such as in a range of 0 to 1). In various embodiments (e.g., in the case of the received symbol signal being encoded based on the error correction code), the plurality of second detected source symbols are a plurality of second detected coded source symbols. In this regard, the method100further comprises: demapping and deinterleaving the plurality of second detected coded source symbols to obtain a plurality of second detected coded bits; soft decoding the plurality of second detected coded bits to obtain a plurality of second soft coded bits (i.e., soft coded bits obtained after the soft decoding); hard decoding the plurality of second detected coded bits to obtain a plurality of second hard coded bits (i.e., hard coded bits obtained after the hard decoding); interleaving and mapping the plurality of second soft coded bits to obtain a plurality of second detected soft decoded source symbols; and interleaving and mapping the plurality of second hard coded bits to obtain a plurality of second detected hard decoded source symbols. In various embodiments (e.g., in the case of the received symbol signal being encoded based on the error correction code), in a first round of the one or more rounds, the plurality of detected source symbols are the plurality of first detected hard decoded source symbols. In various embodiments (e.g., in the case of the received symbol signal being encoded based on the error correction code), in the first round of the one or more rounds, the second equalization is performed further based on the plurality of first detected soft decoded source symbols. 
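The distinction between hard and soft decisions described above can be illustrated with a minimal sketch operating on per-bit log-likelihood ratios (LLRs). The LLR sign convention (LLR > 0 meaning bit 0 is more likely) is a common assumption, not something fixed by the description, and the function names are hypothetical.

```python
import numpy as np

def hard_decision(llrs):
    """Hard decoding: each bit becomes a definite 0 or 1, here decided
    by the sign of its LLR (LLR = log P(b=0)/P(b=1) convention)."""
    return (np.asarray(llrs, dtype=float) < 0).astype(int)

def soft_decision(llrs):
    """Soft decoding: each bit becomes a reliability in (0, 1), here
    the posterior probability that the bit equals 1 given its LLR."""
    return 1.0 / (1.0 + np.exp(np.asarray(llrs, dtype=float)))
```

Rounding the soft reliabilities recovers the hard decisions, which is why the hard decoded bits can drive data-aided CE while the soft coded bits feed the turbo-style second equalization.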
In various embodiments (e.g., in the case of the received symbol signal being encoded based on the error correction code), the one or more rounds is a plurality of rounds; and, in each subsequent round of the plurality of rounds subsequent to the first round, the plurality of detected source symbols are the plurality of second detected hard decoded source symbols obtained at the immediately preceding round of the plurality of rounds. In various embodiments (e.g., in the case of the received symbol signal being encoded based on the error correction code), in the above-mentioned each subsequent round of the plurality of rounds, the second equalization is performed further based on the plurality of second detected soft decoded source symbols. In various embodiments, the received symbol signal comprises a plurality of frames, and the method100further comprises performing, for each of the plurality of frames, inter-frame channel interpolation of estimated channel gains in the frame. In various embodiments, the above-mentioned inter-frame channel interpolation comprises: determining an inter-frame channel interpolation polynomial function based on estimated channel gains in an intermediate portion of the frame; and modifying estimated channel gains in at least one of a beginning portion and an end portion of the frame based on the inter-frame channel interpolation polynomial function. In various embodiments, the transmitted signal is transmitted over the time-varying channel based on orthogonal frequency division multiplexing (OFDM) transmission. In various embodiments, the first channel estimation is based on a pilot-aided channel estimation, the first equalization is based on a minimum mean-square error (MMSE) equalization, the second channel estimation is based on a data-aided channel estimation, and the second equalization is based on a maximum likelihood sequence estimation (MLSE) equalization. 
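The inter-frame channel interpolation described above (fit a polynomial to the estimated channel gains in the intermediate portion of a frame, then replace the gains in the beginning and end portions with the polynomial's values) can be sketched as follows. The edge width and polynomial degree are illustrative choices, and the names are hypothetical.

```python
import numpy as np

def interframe_interpolation(gains, edge=8, degree=2):
    """Fit a polynomial to the (more reliable) intermediate estimated
    channel gains, then overwrite the frame's edge gains with the
    polynomial's values.  Real and imaginary parts are fitted
    separately because numpy.polyfit is real-valued."""
    n = np.arange(len(gains))
    mid = slice(edge, len(gains) - edge)
    p_re = np.polyfit(n[mid], gains[mid].real, degree)
    p_im = np.polyfit(n[mid], gains[mid].imag, degree)
    poly = np.polyval(p_re, n) + 1j * np.polyval(p_im, n)
    out = gains.copy()
    out[:edge] = poly[:edge]
    out[-edge:] = poly[-edge:]
    return out
```

This directly targets the observation, made later in the description, that the MSE of CE is higher at the transmission block edges than at the center.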
In various embodiments, the above-mentioned obtaining the received symbol signal comprises performing a discrete Fourier transform (DFT) based on the transmitted signal. In various embodiments, there is provided a method of transmitting a signal over a time-varying channel. In various embodiments, the signal is transmitted over the time-varying channel based on OFDM transmission. In various embodiments, the signal has been encoded based on an error correction code. In various embodiments, there is provided a wireless communication method comprising the above-mentioned method of transmitting a signal over a time-varying channel and the above-mentioned method100of receiving the transmitted signal over the time-varying channel as described hereinbefore with reference toFIG.1according to various embodiments. FIG.2depicts a schematic drawing of a receiver200for receiving a transmitted signal over a time-varying channel, according to various embodiments of the present invention, corresponding to the method100of receiving a transmitted signal over a time-varying channel as described hereinbefore with reference toFIG.1according to various embodiments of the present invention. 
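The step of obtaining the received symbol signal in frequency domain by a DFT can be sketched as below. The unitary normalisation (dividing by the square root of the block size) is one common convention and is an assumption here, as is the premise that cyclic-prefix removal has already occurred.

```python
import numpy as np

def to_frequency_domain(r_time):
    """Obtain the frequency-domain received symbol signal by applying
    a DFT to one block of time-domain samples (unitary scaling)."""
    N = len(r_time)
    return np.fft.fft(r_time) / np.sqrt(N)
```

With the matching IDFT convention at the transmitter, the transform is an exact round trip, so the frequency-domain model y = H s + noise holds without scaling mismatches.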
The receiver200comprises a memory202and at least one processor204communicatively coupled to the memory202and configured to: obtain (corresponding to102) a received symbol signal in frequency domain based on the transmitted signal; perform (corresponding to104) a first channel estimation with respect to the time-varying channel based on the received symbol signal to obtain a plurality of first estimated basis expansion model (BEM) coefficients, the time-varying channel being modeled based on a complex-exponential basis expansion model (CX-BEM); perform (corresponding to106) a first equalization based on the received symbol signal and the plurality of first estimated BEM coefficients to obtain a plurality of first detected source symbols; and perform (corresponding to108) one or more rounds of a second channel estimation and a second equalization, each round comprising: performing the second channel estimation with respect to the time-varying channel based on the received symbol signal and a plurality of detected source symbols to obtain a plurality of second estimated BEM coefficients; performing interference removal based on the received symbol signal, the plurality of detected source symbols and the plurality of second estimated BEM coefficients to obtain an interference reduced symbol signal in frequency domain; and performing the second equalization based on the interference reduced symbol signal and the plurality of second estimated BEM coefficients to obtain a plurality of second detected source symbols. It will be appreciated by a person skilled in the art that the receiver200may be a receiver system, which may also be embodied as a receiver device or a receiver apparatus. It will be appreciated by a person skilled in the art that the at least one processor204may be configured to perform the required functions or operations through set(s) of instructions (e.g., software modules) executable by the at least one processor204to perform the required functions or operations. 
Accordingly, as shown inFIG.2, the receiver200may further comprise a symbol signal obtaining module (or a symbol signal obtaining circuit)206configured to obtain a received symbol signal in frequency domain based on the transmitted signal; a first channel estimating and equalizing module (or a first channel estimating and equalizing circuit)208configured to: perform a first channel estimation with respect to the time-varying channel based on the received symbol signal to obtain a plurality of first estimated BEM coefficients, the time-varying channel being modeled based on a CX-BEM; and perform a first equalization based on the received symbol signal and the plurality of first estimated BEM coefficients to obtain a plurality of first detected source symbols; and a second channel estimating and equalizing module (or a second channel estimating and equalizing circuit)210configured to perform one or more rounds of a second channel estimation and a second equalization, each round comprising: performing the second channel estimation with respect to the time-varying channel based on the received symbol signal and a plurality of detected source symbols to obtain a plurality of second estimated BEM coefficients; performing interference removal based on the received symbol signal, the plurality of detected source symbols and the plurality of second estimated BEM coefficients to obtain an interference reduced symbol signal in frequency domain; and performing the second equalization based on the interference reduced symbol signal and the plurality of second estimated BEM coefficients to obtain a plurality of second detected source symbols. It will be appreciated by a person skilled in the art that the above-mentioned modules are not necessarily separate modules, and two or more modules may be realized by or implemented as one functional module (e.g., a circuit or a software program) as desired or as appropriate without deviating from the scope of the present invention. 
For example, the symbol signal obtaining module206, the first channel estimating and equalizing module208and the second channel estimating and equalizing module210may be realized (e.g., compiled together) as one executable software program (e.g., software application or simply referred to as an “app”), which for example may be stored in the memory202and executable by the at least one processor204to perform the functions/operations as described herein according to various embodiments. In various embodiments, the receiver200corresponds to the method100as described hereinbefore with reference toFIG.1, therefore, various functions or operations configured to be performed by the at least one processor204may correspond to various steps of the method100described hereinbefore according to various embodiments, and thus need not be repeated with respect to the receiver200for clarity and conciseness. In other words, various embodiments described herein in context of the method100are analogously valid for the corresponding receiver200, and vice versa. For example, in various embodiments, the memory202may have stored therein the symbol signal obtaining module206, the first channel estimating and equalizing module208and/or the second channel estimating and equalizing module210, which respectively correspond to various steps of the method100as described hereinbefore according to various embodiments, which are executable by the at least one processor204to perform the corresponding functions or operations. FIG.3depicts a system300for wireless communication (or a wireless communication system) over a time-varying channel, according to various embodiments of the present invention. 
The system300comprises a transmitter250configured to transmit a signal80over the time-varying channel (e.g., corresponding to the method of transmitting a signal80over a time-varying channel as described hereinbefore according to various embodiments) and a receiver200configured to receive the transmitted signal80over the time-varying channel as described hereinbefore with reference toFIG.2according to various embodiments of the present invention. A computing system, a controller, a microcontroller or any other system providing a processing capability may be provided according to various embodiments in the present disclosure. Such a system may be taken to include one or more processors and one or more computer-readable storage mediums. For example, the transmitter250and the receiver200described hereinbefore may each include a processor (or controller) and a computer-readable storage medium (or memory) which are for example used in various processing carried out therein as described herein. A memory or computer-readable storage medium used in various embodiments may be a volatile memory, for example a DRAM (Dynamic Random Access Memory) or a non-volatile memory, for example a PROM (Programmable Read Only Memory), an EPROM (Erasable PROM), EEPROM (Electrically Erasable PROM), or a flash memory, e.g., a floating gate memory, a charge trapping memory, an MRAM (Magnetoresistive Random Access Memory) or a PCRAM (Phase Change Random Access Memory). In various embodiments, a “circuit” may be understood as any kind of a logic implementing entity, which may be special purpose circuitry or a processor executing software stored in a memory, firmware, or any combination thereof. Thus, in an embodiment, a “circuit” may be a hard-wired logic circuit or a programmable logic circuit such as a programmable processor, e.g., a microprocessor (e.g., a Complex Instruction Set Computer (CISC) processor or a Reduced Instruction Set Computer (RISC) processor). 
A “circuit” may also be a processor executing software, e.g., any kind of computer program, e.g., a computer program using a virtual machine code, e.g., Java. Any other kind of implementation of the respective functions which will be described in more detail below may also be understood as a “circuit” in accordance with various alternative embodiments. Similarly, a “module” may be a portion of a system according to various embodiments in the present invention and may encompass a “circuit” as above, or may be understood to be any kind of a logic-implementing entity therefrom. Some portions of the present disclosure are explicitly or implicitly presented in terms of algorithms and functional or symbolic representations of operations on data within a computer memory. These algorithmic descriptions and functional or symbolic representations are the means used by those skilled in the data processing arts to convey most effectively the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities, such as electrical, magnetic or optical signals capable of being stored, transferred, combined, compared, and otherwise manipulated. 
Unless specifically stated otherwise, and as apparent from the following, it will be appreciated that throughout the present specification, discussions utilizing terms such as “obtaining”, “performing”, “determining”, “decomposing”, “removing”, “interleaving”, “deinterleaving”, “decoding”, “encoding”, “estimating”, “modifying”, “transmitting”, “receiving” or the like, refer to the actions and processes of a computer system, or similar electronic device, that manipulates and transforms data represented as physical quantities within the computer system into other data similarly represented as physical quantities within the computer system or other information storage, transmission or display devices. The present specification also discloses a system (e.g., which may also be embodied as a device or an apparatus) for performing the operations/functions of the methods described herein. Such a system may be specially constructed for the required purposes, or may comprise a general purpose computer or other device selectively activated or reconfigured by a computer program stored in the computer. The algorithms presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose machines may be used with computer programs in accordance with the teachings herein. Alternatively, the construction of more specialized apparatus to perform the required method steps may be appropriate. In addition, the present specification also at least implicitly discloses a computer program or software/functional module, in that it would be apparent to the person skilled in the art that the individual steps of the methods described herein may be put into effect by computer code. The computer program is not intended to be limited to any particular programming language and implementation thereof. 
It will be appreciated that a variety of programming languages and coding thereof may be used to implement the teachings of the disclosure contained herein. Moreover, the computer program is not intended to be limited to any particular control flow. There are many other variants of the computer program, which can use different control flows without departing from the spirit or scope of the invention. It will be appreciated by a person skilled in the art that various modules described herein (e.g., the symbol signal obtaining module206, the first channel estimating and equalizing module208and/or the second channel estimating and equalizing module210) may be software module(s) realized by computer program(s) or set(s) of instructions executable by a computer processor to perform the required functions, or may be hardware module(s) being functional hardware unit(s) designed to perform the required functions. It will also be appreciated that a combination of hardware and software modules may be implemented. Furthermore, two or more of the steps of a computer program/module or method described herein may be performed in parallel rather than sequentially. Such a computer program may be stored on any computer readable medium. The computer readable medium may include storage devices such as magnetic or optical disks, memory chips, or other storage devices suitable for interfacing with a general purpose computer. The computer program when loaded and executed on such a general-purpose computer effectively results in an apparatus that implements the steps of the methods described herein. 
In various embodiments, there is provided a computer program product, embodied in one or more computer-readable storage mediums (non-transitory computer-readable storage medium), comprising instructions (e.g., the symbol signal obtaining module206, the first channel estimating and equalizing module208and/or the second channel estimating and equalizing module210) executable by one or more computer processors to perform a method100of receiving a transmitted signal over a time-varying channel, as described hereinbefore with reference toFIG.1. Accordingly, various computer programs or modules described herein may be stored in a computer program product receivable by a system therein, such as the receiver200as shown inFIG.2, for execution by at least one processor204of the receiver200to perform the required or desired functions. The software or functional modules described herein may also be implemented as hardware modules. More particularly, in the hardware sense, a module is a functional hardware unit designed for use with other components or modules. For example, a module may be implemented using discrete electronic components, or it can form a portion of an entire electronic circuit such as an Application Specific Integrated Circuit (ASIC). Numerous other possibilities exist. Those skilled in the art will appreciate that the software or functional module(s) described herein can also be implemented as a combination of hardware and software modules. In various embodiments, the receiver200may be realized by any computer system having communication functionality or capability (e.g., desktop or portable computer system, which may also be embodied as a computing device, such as a mobile communication device (e.g., a smartphone, a tablet computer, a wearable device) or a car navigation system), including at least one processor and a memory. 
By way of an example only and without limitation, an example mobile communication device400is schematically shown inFIG.4in which the receiver200may be implemented, and various methods/steps or functional modules (e.g., the symbol signal obtaining module206, the first channel estimating and equalizing module208and/or the second channel estimating and equalizing module210) may be implemented as software, such as a computer program being executed within the mobile communication device400, and instructing the mobile communication device400(in particular, at least one processor therein) to conduct the methods/functions of various embodiments described herein. In various embodiments, the mobile communication device400may comprise a processor module402, an input module such as a keypad404and an output module such as a display screen406. It can be appreciated by a person skilled in the art that the display screen406may be a touch-sensitive display screen, and thus may also constitute an input module in addition to, or instead of, the keypad404. That is, it can be appreciated by a person skilled in the art that the keypad404may be omitted from the mobile communication device400as desired or as appropriate. The processor module402is coupled to a first communication unit408for communication of signal90with a cellular network410. The first communication unit408can include but is not limited to a subscriber identity module (SIM) card loading bay. The cellular network410can, for example, be a 3G, 4G or 5G network. The processor module402may further be coupled to a second communication unit412for connection to a local area network414. For example, the connection can enable wired/wireless communication of signal90and/or access to, e.g., the Internet or other network systems such as Local Area Network (LAN), Wireless Personal Area Network (WPAN) or Wide Area Network (WAN). 
The second communication unit412may include but is not limited to a wireless network card or an Ethernet network cable port. The processor module402in the example includes a processor416, a Random Access Memory (RAM)418and a Read Only Memory (ROM)420. The processor module402also includes a number of Input/Output (I/O) interfaces, for example I/O interface422to the display screen406, and I/O interface424to the keypad404. The components of the processor module402typically communicate via an interconnected bus426and in a manner known to the person skilled in the relevant art. Various software or application programs (or may simply be referred to herein as “apps”) may be pre-installed in a memory of the mobile communication device400or may be transferred to a memory of the mobile communication device400by reading a memory card having stored therein the application programs or by downloading wirelessly from an application server (e.g., an online app store). It will be appreciated by a person skilled in the art that the terminology used herein is for the purpose of describing various embodiments only and is not intended to be limiting of the present invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Any reference to an element or a feature herein using a designation such as “first”, “second”, and so forth does not limit the quantity or order of such elements or features. 
For example, such designations are used herein as a convenient method of distinguishing between two or more elements or instances of an element. Thus, a reference to first and second elements does not mean that only two elements can be employed, or that the first element must precede the second element. In addition, a phrase referring to “at least one of” a list of items refers to any single item therein or any combination of two or more items therein. In order that the present invention may be readily understood and put into practical effect, various example embodiments of the present invention will be described hereinafter by way of examples only and not limitations. It will be appreciated by a person skilled in the art that the present invention may, however, be embodied in various different forms or configurations and should not be construed as limited to the example embodiments set forth hereinafter. Rather, these example embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the present invention to those skilled in the art. In particular, for better understanding of the present invention and without limitation or loss of generality, various example embodiments of the present invention will now be described with respect to wireless communication based on OFDM transmission over a time-varying frequency-selective channel (which may also be referred to as a doubly selective channel (DSC)), whereby the DSC is modeled as a complex-exponential basis expansion model (CX-BEM), and moreover, the first equalization and the second equalization are based on a MMSE equalization and a MLSE equalization, respectively. Various example embodiments provide methodologies for wireless transmission and reception over a DSC. As described in the background, channel estimation and symbol detection are challenging in a DSC because of high Doppler spread and a large number of multipath components that distort the transmitted signal. 
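As background context (not taken from the disclosure itself), a CX-BEM represents each channel tap's time variation over a block of N samples as a weighted sum of complex exponentials. A minimal sketch follows, assuming the common centred-frequency convention for the basis; the names and conventions are illustrative assumptions.

```python
import numpy as np

def cx_bem_channel(coeffs, N):
    """Build h[n, l] = sum_q c[q, l] * exp(j*2*pi*(q - Q/2)*n/N) from
    BEM coefficients `coeffs` of shape (Q+1, L+1), for a block of N
    samples.  Channel estimation then reduces to estimating the
    (Q+1)*(L+1) coefficients instead of N*(L+1) channel gains."""
    Qp1, Lp1 = coeffs.shape
    n = np.arange(N)
    q = np.arange(Qp1) - (Qp1 - 1) / 2.0          # centred BEM frequencies
    basis = np.exp(2j * np.pi * np.outer(n, q) / N)  # (N, Q+1) basis matrix
    return basis @ coeffs                          # (N, L+1) tap gains
```

With Q = 0 the model collapses to a time-invariant channel; larger Q captures faster Doppler-induced variation, which is exactly what makes the BEM attractive for DSCs.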
In various example embodiments, to seek to overcome, or at least ameliorate, this problem, a channel estimation and equalization scheme or method (e.g., a hybrid iterative channel estimation and equalization method) for OFDM transmission over a DSC is provided. To facilitate channel estimation of a time varying channel, the DSC is modeled as a CX-BEM. Based on the CX-BEM, various example embodiments provide a hybrid MMSE-maximum likelihood sequence estimation (MLSE) detector (e.g., corresponding to the method100of or the receiver200for receiving a transmitted signal over a time-varying channel as described hereinbefore according to various embodiments) that performs MMSE detection (which may also be referred to as MMSE equalization) in the first iteration (e.g., corresponding to the first channel estimation and the first equalization as described hereinbefore according to various embodiments, which may also be referred to as a round of the first channel estimation and the first equalization), followed by interference cancellation and MLSE detection (which may also be referred to as MLSE equalization) in the subsequent iteration(s) (e.g., corresponding to the one or more rounds of the second channel estimation and the second equalization as described hereinbefore according to various embodiments). Various example embodiments also provide optimal power distribution between the data symbols and the pilot symbols at the transmitter, and inter-frame channel interpolation at the receiver, as additional methods to further improve the system performance at high fading rates. 
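The description states that power is distributed between data symbols and pilot symbols at the transmitter but does not give a closed form here, so the following is only a sketch of the bookkeeping involved; the `pilot_fraction` knob and all names are illustrative assumptions.

```python
import numpy as np  # kept for consistency with the other sketches

def allocate_power(n_data, n_pilot, total_power, pilot_fraction=0.2):
    """Split a frame's total power budget between data and pilot
    symbols: a fraction goes to pilots (better CE), the rest to data
    (better detection SNR).  Returns per-symbol powers."""
    p_pilot = pilot_fraction * total_power / n_pilot
    p_data = (1 - pilot_fraction) * total_power / n_data
    return p_data, p_pilot
```

An optimised scheme would choose the split to balance channel-estimation error against data SNR at the operating fading rate; the sketch only enforces that the budget is conserved.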
Simulation results show that methods for wireless communication (e.g., corresponding to the method100of receiving a transmitted signal over a time-varying channel as described hereinbefore according to various embodiments) according to various example embodiments significantly improve the MSE of channel estimation (CE), as well as the BER as compared to conventional methods, under both without and with FEC scenarios. Moreover, the BER performance of the methods according to various example embodiments approach very close to the perfect CSI case, at both low and high fading rates. In mobile radio communication, radar sensing, and satellite communication, time-varying channels are often encountered in which the notorious Doppler shifts/spreads (frequency dispersiveness) are caused by moving transmitters, receivers, or signal reflectors. Moreover, multipath propagation leads to high frequency selectivity (time dispersiveness). Thus, in general, practical wireless channels may be characterized as DSCs. The high time and frequency dispersiveness of the DSC can significantly distort the transmitted signal, and thus efficient and accurate channel estimation and equalization techniques may be desired. Accordingly, various example embodiments provide channel estimation and equalization techniques for OFDM transmission over a DSC that seek to overcome, or at least ameliorate, the above-mentioned problems. In various example embodiments, firstly, a hybrid iterative channel estimation and equalization method (which may be herein referred to as MMSE&IC-MLSE, e.g., corresponding to the method100of receiving a transmitted signal over a time-varying channel as described hereinbefore according to various embodiments) is provided for channel estimation and symbol detection at the receiver end. 
In the first round of the detection (e.g., corresponding to the first channel estimation and the first equalization as described hereinbefore according to various embodiments), the hybrid iterative channel estimation and equalization method may use pilot symbols to perform pilot-aided CE (e.g., corresponding to the first channel estimation as described hereinbefore according to various embodiments) and a MMSE equalizer (e.g., corresponding to the first equalization as described hereinbefore according to various embodiments) for symbol detection. In the subsequent iteration(s) (e.g., corresponding to the one or more rounds of the second channel estimation and the second equalization as described hereinbefore according to various embodiments), the hybrid iterative channel estimation and equalization method may use the detected symbols (detected source symbols) obtained from the last iteration (immediately preceding iteration) to perform data-aided CE (e.g., corresponding to the second channel estimation as described hereinbefore according to various embodiments) and to perform MLSE equalization (e.g., corresponding to the second equalization as described hereinbefore according to various embodiments) plus interference cancellation (IC) (e.g., corresponding to the interference removal as described hereinbefore according to various embodiments) for symbol detection. In various example embodiments, the hybrid iterative channel estimation and equalization method may be configured for (or extended to) the scenario that an error correction code (e.g., a forward error correction (FEC) code) is used. 
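The first-round symbol detection uses a linear MMSE equalizer. A textbook MMSE filter, shown for illustration under the assumptions of unit-power symbols and a known (estimated) frequency-domain channel matrix, can be sketched as follows; the names are hypothetical.

```python
import numpy as np

def mmse_equalize(y, H_hat, noise_var):
    """First-round (linear MMSE) equalization: soft symbol estimates
    s_hat = (H^H H + sigma^2 I)^{-1} H^H y, assuming E[|s|^2] = 1."""
    N = H_hat.shape[0]
    G = np.linalg.solve(H_hat.conj().T @ H_hat + noise_var * np.eye(N),
                        H_hat.conj().T)
    return G @ y
```

At low noise the filter approaches the zero-forcing solution; the regularising noise term is what keeps it stable when the estimated channel matrix is ill-conditioned, which matters in the pilot-aided first round where the CE is least accurate.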
In such a scenario, for example, the channel estimation block associated with the MLSE equalizer may use the symbols (detected hard decoded source symbols) recovered from the hard decoded bits (hard coded bits obtained after hard decoding) to perform data-aided CE (e.g., corresponding to the second channel estimation in the case of the received symbol signal being encoded based on an error correction code as described hereinbefore according to various embodiments), and the MLSE equalizer (e.g., corresponding to the second equalization in the case of the received symbol signal being encoded based on the error correction code as described hereinbefore according to various embodiments) may use the soft outputs (soft coded bits obtained after the soft decoding) of the channel decoder to perform turbo equalization. Simulation results demonstrate that by using the hybrid iterative channel estimation and equalization method according to various example embodiments, the CE and BER performance can be significantly improved as compared to conventional iterative techniques (e.g., as described in Barhumi et al., "MLSE and MAP equalization for transmission over doubly selective channels," IEEE Trans. Veh. Technol., vol. 58, no. 8, pp. 4120-4128, October 2009, and Tang et al., "Pilot-assisted time-varying channel estimation for OFDM systems," IEEE Trans. Signal Process., vol. 55, no. 5, pp. 2226-2238, May 2007) in scenarios both with and without FEC. In particular, the BER performance of the hybrid iterative channel estimation and equalization method according to various example embodiments is demonstrated to advantageously approach that of the perfect CSI based equalization. In addition, various example embodiments note that the MSE of the CE is higher at the transmission block edges as compared to the center of the transmission block.
Furthermore, various example embodiments note that these errors are attributed to the block-based transmission with rectangular windowing, which may result in imprecise BEM modeling of the channel. In the literature, various conventional windowing techniques have been proposed at the receiver to reduce the modeling errors, such as described in Qu et al., "On the estimation of doubly-selective fading channels," IEEE Trans. Wireless Commun., vol. 9, no. 4, pp. 1261-1265, April 2010. In contrast, in various example embodiments, an inter-frame channel interpolation method (e.g., inter-frame channel interpolation based on a least-square (LS) fitting method) is provided to reduce the MSE of CE at the transmission block edges. In this regard, it can be observed from simulation results that the overall performance of CE and BER can be further improved by using the inter-frame channel interpolation method according to various example embodiments of the present invention. Furthermore, various example embodiments provide or use appropriate power allocation between the data symbols and the pilot symbols in the transmission frames to further improve the CE and BER performance of the system. Accordingly, various example embodiments advantageously provide one or more of the following: a hybrid iterative channel estimation and equalization method (MMSE&IC-MLSE) for OFDM transmission over a DSC. In various example embodiments, the hybrid iterative channel estimation and equalization method may be configured respectively for both scenarios of FEC being used and not being used.
It is found that the hybrid iterative channel estimation and equalization method significantly outperforms the conventional pilot-based iterative CE and equalization method for both the scenarios with and without FEC, especially when the Doppler spread of the channel is high. For continuous transmission of data blocks, an inter-frame channel interpolation method according to various example embodiments may be provided to reduce the MSE of CE at the transmission block edges; the overall performance of the CE and BER can be further improved by using the inter-frame channel interpolation method according to various example embodiments of the present invention. Various example embodiments may further use appropriate power allocation between the data symbols and the pilot symbols in the transmission frames to further improve the CE and BER performance of the hybrid iterative channel estimation and equalization method. The performance analysis of methods according to various example embodiments will also be discussed later below in terms of the MSE of the channel estimation, and the BER after equalization and/or decoding. For example, it will be demonstrated that methods according to various example embodiments are able to achieve the channel MSE lower bound in the mid to high SNR regions, and that after 4 iterations, the BER approaches that obtained under perfect CSI conditions. Extensive simulation results are also provided for various fading scenarios, showing the advantages of the methods according to various example embodiments over conventional methods.

System and Channel Model

System Model

Various example embodiments may consider transmission over a DSC with one antenna 70 each at the transmitter and the receiver. A data sequence x[n] of length N is transmitted at a rate of 1/T symbols/s over the DSC.
The discrete-time baseband equivalent of the received symbol at the n-th time instant may be written as:

y[n] = Σ_{l=0}^{L} h[n; l] x[n−l] + v[n]   (Equation 1)

where h[n; l] denotes the discrete-time equivalent baseband representation of the DSC, which subsumes the physical multipath channel together with the transmit and receive pulse shaping filters, l denotes the l-th multipath, L denotes the number of multipaths and is given as L = ⌊τ_max/T⌋ with τ_max being the maximum delay spread of the channel, and v[n] denotes the circularly symmetric complex additive white Gaussian noise (AWGN) with v[n] ~ CN(0, σ_v²). According to various example embodiments, an exemplary block transmission design may be adopted where the pilot symbols are multiplexed with the data symbols by periodically placing them in the transmission block, such as illustrated in FIG. 5. In particular, FIG. 5 depicts a schematic drawing of an example transmission block comprising three transmission sub-blocks, each transmission sub-block comprising a data sub-block (shown in black solid circles) and a pilot sub-block (shown in outlined circles, e.g., a time-domain Kronecker delta (TDKD) pilot cluster). Each pilot sub-block includes a pulse in the center, surrounded by Q zeros on both sides, where Q denotes the number of BEM coefficients, which will be described later below in further detail. In various example embodiments, the pilot positions may be optimized to further improve the CE performance.

Channel Model

In various example embodiments, the channel h[n; l] (e.g., corresponding to the time-varying channel as described hereinbefore according to various embodiments) may be modeled using the CX-BEM, where the l-th tap of the channel at the n-th time instant is expressed as a weighted combination of complex exponential basis functions, and may be expressed as:

h[n; l] = Σ_{q=0}^{Q} h̃_q(l) e^{jω_q n},   (Equation 2)

where ω_q = (2π/K)(q − ⌈Q/2⌉), q ∈ {0, 1, . . . , Q}, denotes the q-th BEM modeling frequency, and K ≥ N is the BEM resolution.
In various example embodiments, the modeling frequencies may be taken to be uniformly distributed between −(2π/K)⌈Q/2⌉ and (2π/K)⌈Q/2⌉. In various other example embodiments, the modeling frequencies may be taken to be non-uniformly spaced to further reduce the BEM modeling error. h̃_q(l) (q ∈ {0, 1, . . . , Q}) denotes the weight, or the q-th BEM coefficient, corresponding to the l-th path, and Q denotes the number of BEM coefficients. It has been mentioned in the art that Q should be given as Q = ⌈2 f_max N T⌉, where f_max is the channel maximum Doppler spread. Accordingly, substituting Equation (2) into Equation (1), it can be obtained that:

y[n] = Σ_{q=0}^{Q} Σ_{l=0}^{L} e^{jω_q n} h̃_q(l) x[n−l] + v[n]   (Equation 3)

Defining y = [y[0], . . . , y[N−1]]^T, at the block level the received vector y may be expressed as:

y = Σ_{q=0}^{Q} Σ_{l=0}^{L} h̃_q(l) D_q Z_l x + v   (Equation 4)

where D_q is the diagonal matrix with the q-th basis function components on its diagonal, i.e., D_q = diag{[1, e^{jω_q}, . . . , e^{j(N−1)ω_q}]^T}, and Z_l is an N×(N+L) Toeplitz matrix defined as Z_l = [0_{N×(L−l)}, I_N, 0_{N×l}], where I_N is the N×N identity matrix. The transmitted symbol vector x is defined as x = [x[−L], . . . , x[N−1]]^T, and v is the additive noise vector defined as v = [v[0], . . . , v[N−1]]^T. FIG. 6 depicts a schematic block diagram of OFDM transmission, according to various example embodiments of the present invention. For OFDM transmission, as shown in FIG. 6, the frequency-domain information symbols may firstly be divided into transmission blocks of N symbols, with each symbol transmitted on a subcarrier in parallel. Each transmission block may then be transformed to the time domain by inverse discrete Fourier transform (IDFT). Then, a cyclic prefix (CP) of length v ≥ L may be added in front of each transmission block. In various example embodiments, the CP may actually be the repeat of the last v points of each transmission block after IDFT.
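As a concrete check of the BEM signal model, the following short numerical sketch (with small, hypothetical parameters N, L, Q, K and randomly drawn BEM coefficients, purely for illustration) builds the matrices D_q and Z_l of Equation (4) and verifies that the block-level form reproduces the sample-level double sum of Equation (3):

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, Q, K = 8, 2, 2, 8                # hypothetical small block parameters

# Randomly drawn BEM coefficients h~_q(l) (illustrative only)
h_bem = (rng.standard_normal((Q + 1, L + 1))
         + 1j * rng.standard_normal((Q + 1, L + 1)))
w = 2 * np.pi / K * (np.arange(Q + 1) - int(np.ceil(Q / 2)))   # omega_q

# Transmitted vector x = [x[-L], ..., x[N-1]]^T
x = rng.standard_normal(N + L) + 1j * rng.standard_normal(N + L)

# Block-level model (Equation 4): y = sum_q sum_l h~_q(l) D_q Z_l x
y_blk = np.zeros(N, complex)
for q in range(Q + 1):
    D_q = np.diag(np.exp(1j * w[q] * np.arange(N)))
    for l in range(L + 1):
        Z_l = np.hstack([np.zeros((N, L - l)), np.eye(N), np.zeros((N, l))])
        y_blk += h_bem[q, l] * (D_q @ Z_l @ x)

# Sample-level model (Equation 3), noiseless
y_smp = np.array([sum(h_bem[q, l] * np.exp(1j * w[q] * n) * x[L + n - l]
                      for q in range(Q + 1) for l in range(L + 1))
                  for n in range(N)])
assert np.allclose(y_blk, y_smp)
```

Note how the selection matrix Z_l simply picks x[n−l] out of the padded transmit vector, so the matrix form is an exact restatement of the convolution.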
The CP-added transmission blocks (e.g., corresponding to the transmitted signal as described hereinbefore according to various embodiments) may then be transmitted through the channel. Suppose S[k] is the frequency-domain symbol which is transmitted on the k-th subcarrier; after IDFT, the n-th symbol transmitted in the time domain may be expressed as:

x[n] = (1/√N) Σ_{k=0}^{N−1} S[k] e^{j2π(n−v)k/N},  n = 0, . . . , N+v−1   (Equation 5)

At the receiver side, after removing the CP, the block-level received vector y in the time domain may be written as:

y = Σ_{q=0}^{Q} Σ_{l=0}^{L} h̃_q(l) D_q Z̃_l F^H S + v   (Equation 6)

where F is the N×N unitary DFT matrix whose element on the i-th row and j-th column equals (1/√N) e^{−j2π(i−1)(j−1)/N}, and (⋅)^H denotes the Hermitian transpose. Z̃_l is the N×N circulant matrix whose first column has a 1 in position l+1 and zeros elsewhere, i.e., the circular down-shift by l positions, so that (Z̃_l x)[n] = x[(n−l) mod N] (Equation 7), and S = [S[0], . . . , S[N−1]]^T is the source symbol vector. After the DFT, the block-level received vector Y in the frequency domain (e.g., corresponding to the received symbol signal in frequency domain as described hereinbefore according to various embodiments) may be expressed as:

Y = Σ_{q=0}^{Q} Σ_{l=0}^{L} h̃_q(l) F D_q Z̃_l F^H S + V   (Equation 8)

where V = [V[0], . . . , V[N−1]]^T is the noise vector in the frequency domain, with V[k] being the noise on the k-th subcarrier.

Iterative Channel Estimation and Data Equalization

At the receiver side, the received vector Y is used to perform channel estimation and data detection. A conventional iterative CE and equalization method has been disclosed in Barhumi et al., "MLSE and MAP equalization for transmission over doubly selective channels," IEEE Trans. Veh. Technol., vol. 58, no. 8, pp. 4120-4128, October 2009, and a schematic block diagram of the conventional iterative CE and equalization method disclosed in the above-mentioned Barhumi reference is shown in FIG. 7.
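The OFDM chain of Equations (5) to (8) can be exercised numerically. The sketch below uses hypothetical small parameters, and it assumes the channel time origin is taken at the first post-CP sample so that Equation (8) holds exactly; it adds a cyclic prefix, passes the block through a BEM time-varying channel, removes the CP, and verifies that the DFT output matches Y = Σ_q Σ_l h̃_q(l) F D_q Z̃_l F^H S:

```python
import numpy as np

rng = np.random.default_rng(2)
N, L, Q, K, v = 8, 2, 2, 8, 3          # cyclic prefix length v >= L

F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
w = 2 * np.pi / K * (np.arange(Q + 1) - int(np.ceil(Q / 2)))
h_bem = (rng.standard_normal((Q + 1, L + 1))
         + 1j * rng.standard_normal((Q + 1, L + 1)))

S = np.exp(1j * np.pi / 2 * rng.integers(0, 4, N))   # QPSK source symbols

# Transmitter: IDFT then cyclic prefix (Equation 5)
x = F.conj().T @ S
x_cp = np.concatenate([x[-v:], x])

# Time-varying channel taps; time origin assumed at the first post-CP sample
def h(n, l):
    return np.sum(h_bem[:, l] * np.exp(1j * w * n))

y_cp = np.array([sum(h(n - v, l) * x_cp[n - l] for l in range(L + 1) if n >= l)
                 for n in range(N + v)])

# Receiver: drop the CP, then DFT
Y = F @ y_cp[v:]

# Frequency-domain model (Equation 8), noiseless
Y_model = np.zeros(N, complex)
for q in range(Q + 1):
    D_q = np.diag(np.exp(1j * w[q] * np.arange(N)))
    for l in range(L + 1):
        Z_l = np.roll(np.eye(N), l, axis=0)          # circular down-shift by l
        Y_model += h_bem[q, l] * (F @ D_q @ Z_l @ F.conj().T @ S)
assert np.allclose(Y, Y_model)
```

Because v ≥ L, every retained received sample only depends on CP-region or in-block samples, which is what turns the linear convolution into the circular one captured by Z̃_l.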
As shown in FIG. 7, the received signal Y[k], together with the known pilot symbols, is firstly used to perform pilot-aided CE. After that, the estimated BEM coefficients ĥ_q^p(l) are used by an equalization block to perform data detection in the frequency domain. After that, if iteration is used, the detected data Ŝ[k] may be fed back to the CE block to perform data-aided CE, and the estimated BEM coefficients ĥ_q^d(l) may be used by the equalization block to perform data detection again. This CE and equalization may be repeated until no more performance gain can be achieved. Conventional frequency-domain equalization schemes for OFDM transmission have been separately disclosed, including maximum likelihood sequence estimation (MLSE) (e.g., as described in the above-mentioned Barhumi reference) and minimum mean square error (MMSE) equalization (e.g., as described in the above-mentioned Tang reference), which are described below.

Conventional MLSE Equalization

According to the above-mentioned Barhumi reference, when K = N, supposing Q is an even number, the received signal on the k-th subcarrier may be written as:

Y[k] = Σ_{l=0}^{L} Σ_{q=0}^{Q} e^{j2π(q−Q/2−k)l/N} h̃_q(l) S[|k+Q/2−q|_N] + V[k]   (Equation 9)

where |⋅|_N denotes the modulo-N operation. From Equation (9), various example embodiments note that Y[k] is related to the source symbols on Q+1 consecutive subcarriers, and an MLSE equalizer may be used to perform maximum likelihood detection of S[k]. The Viterbi algorithm is commonly used by the MLSE detector to perform symbol detection. Define the state of the k-th subcarrier in the Viterbi algorithm as s_k = {S[|k−Q/2|_N], S[|k−Q/2+1|_N], . . . , S[|k+Q/2−1|_N]}. If the alphabet size of S[k] is M, the total number of states in the Viterbi algorithm is M^Q. Let s_k^(i), i ∈ {0, 1, . . . , M^Q−1}, denote the different states of the k-th subcarrier.
The accumulated metric of s_k^(i) may be defined as Γ(s_k^(i)). The branch metric associated with the state transition from s_{k−1}^(j) to s_k^(i) may be denoted by γ(s_{k−1}^(j) → s_k^(i)). The MLSE equalization for OFDM may include the following steps:

1. Start from the 0-th subcarrier. For each state s_0^(i), i ∈ {0, 1, . . . , M^Q−1}, initialize the accumulated metric as Γ(s_0^(i)) = 0.

2. For the k-th subcarrier, let s_{k−1}^(j_0), . . . , s_{k−1}^(j_{M−1}) be the M preceding states of s_k^(i), and let the source symbol vector associated with the state transition be S(s_{k−1}^(j_m) → s_k^(i)) = {S_{j_m,i}[0], S_{j_m,i}[1], . . . , S_{j_m,i}[Q]}. For each s_{k−1}^(j_m), m ∈ {0, 1, . . . , M−1}, compute the branch metric by:

γ(s_{k−1}^(j_m) → s_k^(i)) = |Y[k] − Σ_{l=0}^{L} Σ_{q=0}^{Q} e^{j2π(q−Q/2−k)l/N} h̃_q(l) S_{j_m,i}[Q−q]|²   (Equation 10)

3. Calculate the accumulated metric of the k-th subcarrier by:

Γ(s_k^(i)) = min_{j_m} (Γ(s_{k−1}^(j_m)) + γ(s_{k−1}^(j_m) → s_k^(i)))   (Equation 11)

4. Keep the preceding state j_m which leads to the minimum Γ(s_k^(i)) in a table.

5. Set k ← k+1 and go back to step 2, or go to step 6 if k = N.

6. Find the most likely transmitted sequence recursively by tracing back from the state with the minimum accumulated metric Γ(s_{N−1}^(i)).

Various example embodiments note that in the above-mentioned step 2, for Equation (10), the calculation of the branch metric requires knowledge of the BEM coefficients h̃_q(l). In practice, since h̃_q(l) is normally unknown, various example embodiments may advantageously utilize the estimated BEM coefficients ĥ_q(l) instead of h̃_q(l) in the calculation of the branch metric.

Conventional MMSE Equalization

In the above-mentioned Tang reference, a banded MMSE equalizer for OFDM is disclosed, which may include the following steps:

1. Calculate the original channel matrix H by:

H = Σ_{q=0}^{Q} Σ_{l=0}^{L} h̃_q(l) F D_q Z̃_l F^H   (Equation 12)

2.
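The MLSE criterion underlying the above Viterbi recursion can be illustrated with an exhaustive search, which computes the same minimizer as the Viterbi algorithm but without its efficiency. The sketch below (tiny hypothetical parameters, BPSK, K = N, noiseless reception) builds the banded frequency-domain channel matrix and confirms that minimizing ||Y − H S′||² over all candidate sequences recovers the transmitted one:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
N, L, Q = 6, 1, 2
K_bem = N                              # MLSE applies for the special case K = N
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
w = 2 * np.pi / K_bem * (np.arange(Q + 1) - int(np.ceil(Q / 2)))
h_bem = (rng.standard_normal((Q + 1, L + 1))
         + 1j * rng.standard_normal((Q + 1, L + 1)))

# Frequency-domain channel matrix; banded diagonal when K = N
H = np.zeros((N, N), complex)
for q in range(Q + 1):
    for l in range(L + 1):
        H += (h_bem[q, l] * F @ np.diag(np.exp(1j * w[q] * np.arange(N)))
              @ np.roll(np.eye(N), l, axis=0) @ F.conj().T)

S = rng.choice([-1.0, 1.0], N)         # BPSK source symbols
Y = H @ S                              # noiseless received vector

# Exhaustive maximum likelihood search: minimize ||Y - H S'||^2 over all
# candidate sequences (the Viterbi recursion finds the same minimizer while
# keeping only M^Q states per subcarrier instead of M^N candidates).
best = min(product([-1.0, 1.0], repeat=N),
           key=lambda c: float(np.linalg.norm(Y - H @ np.array(c)) ** 2))
assert np.allclose(best, S)
```

This is a didactic stand-in, not the Viterbi implementation itself; it only demonstrates the optimality criterion that the trellis search evaluates per-subcarrier via Equations (10) and (11).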
Calculate the matrix G by:

G = H_K H_K^H + R_n + σ²I   (Equation 13)

where H_K is the banded approximation of the original channel matrix H, i.e., H_K = H ⊙ T_K, with T_K being an N×N matrix whose main diagonal, K sub-diagonals, and K super-diagonals are ones, and whose remaining entries are zero. R_n is the covariance matrix of the noise, and σ²I is a regularization term which serves to prevent performance degradation at high SNR.

3. The detected source symbol vector Ŝ = [Ŝ[0], . . . , Ŝ[N−1]]^T may be given by:

Ŝ = H_K^H G^{−1} Y   (Equation 14)

Various example embodiments note that in the above-mentioned step 1, since h̃_q(l) is normally unknown in practice, various example embodiments may advantageously utilize the estimated BEM coefficients ĥ_q(l) instead of h̃_q(l) in Equation (12). Various example embodiments note that in the above-mentioned step 2, the use of the banded approximation matrix H_K instead of the original channel matrix H is to reduce the computational complexity. If complexity is not an issue, the original channel matrix H may be used in Equation (13) to obtain a better performance than using H_K. Various example embodiments note that, to determine σ in practice, Equation (13) may firstly be applied by setting σ = 0, then it may be observed from which SNR the BER curve begins to saturate, and finally σ² may be chosen based on that SNR.

Present Hybrid Iterative Channel Estimation and Equalization (MMSE&IC-MLSE)

Various example embodiments note that although the MLSE equalizer may be optimal in performance, it can only be used or applied for the special case that K = N. In this regard, various example embodiments found that, for a BEM resolution K > N, inter-carrier interference (ICI) is not limited to the Q neighboring subcarriers but may cover the whole OFDM transmission block. To illustrate this, FIGS. 8A and 8B depict the channel matrix H plotted for K = N and K = 2N, respectively.
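A minimal sketch of the banded MMSE detection of Equations (13) and (14) follows. It uses hypothetical small parameters and noiseless reception; the band mask is taken to be circular here to match the modulo-N index structure of the channel when K = N, and R_n is set to zero with a tiny regularization σ², both of which are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
N, L, Q = 8, 1, 2
K_bem = N                              # K = N: H is (circularly) banded
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
w = 2 * np.pi / K_bem * (np.arange(Q + 1) - int(np.ceil(Q / 2)))
h_bem = (rng.standard_normal((Q + 1, L + 1))
         + 1j * rng.standard_normal((Q + 1, L + 1)))

# Frequency-domain channel matrix H (Equation 12)
H = np.zeros((N, N), complex)
for q in range(Q + 1):
    for l in range(L + 1):
        H += (h_bem[q, l] * F @ np.diag(np.exp(1j * w[q] * np.arange(N)))
              @ np.roll(np.eye(N), l, axis=0) @ F.conj().T)

# Circular band mask T_K: main diagonal plus K sub-/super-diagonals (mod N)
Kb = Q // 2 + 1
dist = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
T_K = (np.minimum(dist, N - dist) <= Kb).astype(float)
H_K = H * T_K                          # banded approximation H_K = H (.) T_K

S = rng.choice([-1.0, 1.0], N)         # BPSK source symbols
Y = H @ S                              # noiseless reception (R_n = 0)

# MMSE detection: S_hat = H_K^H (H_K H_K^H + sigma^2 I)^{-1} Y (Eqs. 13-14)
sigma2 = 1e-9
G = H_K @ H_K.conj().T + sigma2 * np.eye(N)
S_hat = H_K.conj().T @ np.linalg.solve(G, Y)
assert np.allclose(np.sign(S_hat.real), S)
```

With the band chosen wide enough to contain every non-zero diagonal, H_K equals H here, so the hard decisions match the transmitted BPSK symbols.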
In particular, FIG. 8A depicts the elements of the channel matrix H for Q = 4 and K = N, and FIG. 8B depicts the elements of the channel matrix H for Q = 4 and K = 2N, whereby the gray portions denote non-zero elements. From FIGS. 8A and 8B, it can be observed that when K = N, H is a banded diagonal matrix and only the main diagonal, Q/2 sub-diagonals, and Q/2 super-diagonals are non-zero. When K = 2N, the number of non-zero sub-diagonals and super-diagonals becomes much larger. Various example embodiments found that, as the complexity of the Viterbi algorithm grows exponentially with the number of non-zero sub- and super-diagonals, the MLSE equalizer becomes impractical when K > N. On the other hand, if K = N is kept, for practical channels this resolution may not be sufficient to model the channel, resulting in a high channel modeling error which will lead to significant overall performance degradation. On the other hand, various example embodiments note that, compared with the MLSE equalizer, the complexity of the MMSE equalizer is much lower, and the complexity does not change for K > N. However, the MMSE equalizer is sub-optimal in performance. To address the above deficiencies, various example embodiments provide a hybrid MMSE&IC-MLSE equalizer. In this regard, FIG. 9 depicts a schematic block diagram of a hybrid iterative channel estimation and equalization system 900 according to various example embodiments of the present invention. From FIG. 9, it can be seen that there is an interference cancellation (IC) block 940 (e.g., corresponding to the interference removal as described hereinbefore according to various embodiments) before the frequency-domain MLSE equalization block 950 (e.g., corresponding to the second equalization as described hereinbefore according to various embodiments). The IC block 940 may be configured to remove the interference from non-significant subcarriers.
In various example embodiments, the non-significant subcarriers may correspond to the subcarriers multiplying with 0 when H is a banded diagonal matrix as shown in FIG. 10A. As an example, for the k-th subcarrier, all other subcarrier indices from 0 to N−1, except the subcarrier indices |k−Q/2|_N, |k−Q/2+1|_N, . . . , |k+Q/2−1|_N, are non-significant subcarriers. For example, Y[k] = [H_{k,0}, H_{k,1}, . . . , H_{k,N−1}][S[0], S[1], . . . , S[N−1]]^T. When H is a banded diagonal matrix as shown in FIG. 10A, H_{k,0} = 0, H_{k,1} = 0, . . . , H_{k,N−1} = 0, except that H_{k,|k−Q/2|_N}, H_{k,|k−Q/2+1|_N}, . . . , H_{k,|k+Q/2−1|_N} are non-zero. Correspondingly, S[0], S[1], . . . , S[N−1] are non-significant subcarriers, except S[|k−Q/2|_N], S[|k−Q/2+1|_N], . . . , S[|k+Q/2−1|_N]. After interference cancellation, the channel matrix becomes a banded matrix again even when K > N. The IC block 940 will now be described below in further detail, according to various example embodiments of the present invention. According to Equation (8), the frequency-domain received signal vector Y (e.g., corresponding to the received symbol signal in frequency domain as described hereinbefore according to various embodiments) may be written as:

Y = HS + V   (Equation 15)

where

H = Σ_{q=0}^{Q} Σ_{l=0}^{L} h̃_q(l) F D_q Z̃_l F^H, with entries H_{i,j} on the i-th row and j-th column, i, j = 0, . . . , N−1   (Equation 16)

As explained above, when K > N, H is not a banded diagonal channel matrix and the complexity of the conventional MLSE may be too large to be applicable.
To make the MLSE simpler, various example embodiments divide H into two matrices, that is, H = H_b + H_IC, where H_b is a banded diagonal channel matrix with element H_{b,(i,j)} on the i-th row and j-th column, which may be expressed as:

H_{b,(i,j)} = H_{i,j} if j = |i−Q/2|_N, |i−Q/2+1|_N, . . . , |i+Q/2|_N, and H_{b,(i,j)} = 0 otherwise, for i = 0, 1, . . . , N−1   (Equation 17)

In contrast, H_IC is the complementary channel matrix of H_b (which may also be referred to as a non-banded diagonal channel matrix), with element H_{IC,(i,j)} on the i-th row and j-th column, which may be expressed as:

H_{IC,(i,j)} = 0 if j = |i−Q/2|_N, |i−Q/2+1|_N, . . . , |i+Q/2|_N, and H_{IC,(i,j)} = H_{i,j} otherwise, for i = 0, 1, . . . , N−1   (Equation 18)

For illustration purposes only, example formats of the banded diagonal channel matrix H_b and the complementary channel matrix H_IC are shown in FIG. 10A and FIG. 10B, respectively. Accordingly, Equation (15) may be re-written as:

Y = HS + V = H_b S + H_IC S + V   (Equation 19)

Therefore, the following relationship may be obtained:

Y − H_IC S = Y′ = H_b S + V   (Equation 20)

From Equation (20), various example embodiments found that if the interference H_IC S is removed from the received frequency-domain vector Y, the MLSE equalizer 950 can be used since H_b is a banded diagonal channel matrix. To calculate the received frequency-domain vector after IC, that is, Y′, H_IC and S are required. In various example embodiments, H_IC can be obtained from the CE block 930, and S, since it is unknown, can only be replaced by the detected symbol vector Ŝ. Various example embodiments note that Ŝ is only available after one round of MLSE equalization.
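The decomposition of Equations (17) to (20) is straightforward to verify numerically. The sketch below (hypothetical small parameters with K = 2N, so that H is not banded) splits H into its circularly banded part H_b and the complement H_IC, and checks the interference-cancellation identity Y′ = Y − H_IC S = H_b S in the noiseless case:

```python
import numpy as np

rng = np.random.default_rng(5)
N, L, Q = 8, 1, 2
K_bem = 2 * N                          # K > N: H is no longer banded
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
w = 2 * np.pi / K_bem * (np.arange(Q + 1) - int(np.ceil(Q / 2)))
h_bem = (rng.standard_normal((Q + 1, L + 1))
         + 1j * rng.standard_normal((Q + 1, L + 1)))

H = np.zeros((N, N), complex)
for q in range(Q + 1):
    for l in range(L + 1):
        H += (h_bem[q, l] * F @ np.diag(np.exp(1j * w[q] * np.arange(N)))
              @ np.roll(np.eye(N), l, axis=0) @ F.conj().T)

# Split H = H_b + H_IC using a circular band of half-width Q/2 (Eqs. 17-18)
dist = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
band = np.minimum(dist, N - dist) <= Q // 2
H_b = np.where(band, H, 0)
H_IC = np.where(band, 0, H)
assert np.allclose(H_b + H_IC, H)

# Interference cancellation (Equation 20): Y' = Y - H_IC S = H_b S (noiseless)
S = rng.choice([-1.0, 1.0], N)
Y = H @ S
Y_prime = Y - H_IC @ S
assert np.allclose(Y_prime, H_b @ S)
```

After the subtraction, the effective channel seen by the MLSE equalizer is the banded H_b, which is what restores the small trellis.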
Accordingly, in a first iteration (e.g., corresponding to a round of the first channel estimation and the first equalization as described hereinbefore according to various embodiments), various example embodiments may utilize the conventional MMSE equalization, such as shown in FIG. 9. Accordingly, the hybrid iterative channel estimation and equalization method according to various example embodiments may include the following steps:

1. Perform a pilot-aided CE 910 (e.g., as described above with reference to the above-mentioned Tang reference) (e.g., corresponding to the first channel estimation as described hereinbefore according to various embodiments) with respect to the DSC based on the received frequency-domain vector Y. The estimated BEM channel coefficient vector produced by the pilot-aided CE 910 may be expressed as ĥ^p = [ĥ_0^p(0), ĥ_0^p(1), . . . , ĥ_0^p(L), . . . , ĥ_Q^p(0), ĥ_Q^p(1), . . . , ĥ_Q^p(L)]^T (e.g., corresponding to the plurality of first estimated BEM coefficients as described hereinbefore according to various embodiments), and the reconstructed channel matrix (e.g., corresponding to the first estimated channel matrix as described hereinbefore according to various embodiments) may be expressed as Ĥ = Σ_{q=0}^{Q} Σ_{l=0}^{L} ĥ_q(l) F D_q Z̃_l F^H.

2. Use an MMSE equalizer 920 (e.g., based on Equations (12) to (14)) to perform a first round of equalization (e.g., corresponding to the first equalization as described hereinbefore according to various embodiments) based on the received frequency-domain vector Y and the estimated BEM channel coefficient vector, to produce the equalized symbols Ŝ[k] (e.g., corresponding to the plurality of first detected source symbols as described hereinbefore according to various embodiments). In various example embodiments, to achieve a better performance, the estimated channel matrix Ĥ may be used without multiplying it with the banded diagonal channel matrix.

3.
Pass the equalized symbols Ŝ to the data-aided CE block 930 (e.g., corresponding to the second channel estimation as described hereinbefore according to various embodiments). The estimated channel BEM coefficient vector (e.g., corresponding to the plurality of second estimated BEM coefficients as described hereinbefore according to various embodiments) produced by the data-aided CE block 930 based on the received frequency-domain vector Y and the equalized symbols Ŝ (which may also be referred to as detected source symbols) may be given by:

ĥ^d = [ĥ_0^d(0), ĥ_0^d(1), . . . , ĥ_0^d(L), . . . , ĥ_Q^d(0), ĥ_Q^d(1), . . . , ĥ_Q^d(L)]^T = (S̄^H S̄)^{−1} S̄^H Y   (Equation 21)

where S̄ is an N×(Q+1)(L+1) matrix whose ((Q+1)l+q)-th column equals F D_q Z̃_l F^H Ŝ. In various example embodiments, the detected source symbols input to the data-aided CE block 930 may be the detected source symbols from the MMSE equalizer 920 in a first round or iteration (e.g., iteration 1 in FIG. 9), or the detected source symbols from the MLSE equalizer 950 in subsequent round(s) or iteration(s) (e.g., iterations 2, 3 and so on in FIG. 9).

4. The IC block 940 may be configured to reconstruct the channel matrix Ĥ again (e.g., corresponding to the second estimated channel matrix as described hereinbefore according to various embodiments). The IC block 940 may be further configured to divide or decompose Ĥ into Ĥ = Ĥ_b + Ĥ_IC, where Ĥ_b and Ĥ_IC are the estimates of H_b and H_IC as described before.
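The data-aided LS estimate of Equation (21) can be sketched as follows (hypothetical small parameters with (Q+1)(L+1) ≤ N, noiseless reception, and random QPSK symbols standing in for the fed-back detected symbols Ŝ); in this idealized setting the LS solution recovers the BEM coefficients exactly:

```python
import numpy as np

rng = np.random.default_rng(6)
N, L, Q = 8, 1, 2                      # (Q+1)(L+1) = 6 <= N unknowns
K_bem = 2 * N
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
w = 2 * np.pi / K_bem * (np.arange(Q + 1) - int(np.ceil(Q / 2)))
h_true = (rng.standard_normal((Q + 1) * (L + 1))
          + 1j * rng.standard_normal((Q + 1) * (L + 1)))

# QPSK symbols standing in for the detected symbols fed back by the equalizer
S_hat = np.exp(1j * np.pi / 2 * rng.integers(0, 4, N))

# Regression matrix: column (Q+1)*l + q equals F D_q Z~_l F^H S_hat (Eq. 21)
A = np.zeros((N, (Q + 1) * (L + 1)), complex)
for l in range(L + 1):
    for q in range(Q + 1):
        A[:, (Q + 1) * l + q] = (F @ np.diag(np.exp(1j * w[q] * np.arange(N)))
                                 @ np.roll(np.eye(N), l, axis=0)
                                 @ F.conj().T @ S_hat)

Y = A @ h_true                         # noiseless received vector, BEM model

# Data-aided LS estimate: h_d = (A^H A)^{-1} A^H Y
h_d = np.linalg.solve(A.conj().T @ A, A.conj().T @ Y)
assert np.allclose(h_d, h_true)
```

In practice Ŝ contains detection errors and Y contains noise, so ĥ^d is only an approximation that improves over the iterations.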
The IC block 940 may be configured to then calculate Y′ based on the detected source symbols (e.g., the detected source symbols from the MMSE equalizer 920 in a first round or iteration (e.g., iteration 1 in FIG. 9), or the detected source symbols from the MLSE equalizer 950 in subsequent round(s) or iteration(s) (e.g., iterations 2, 3 and so on in FIG. 9)) and Ĥ_IC (e.g., corresponding to the performing of interference removal as described hereinbefore according to various embodiments), such as by:

Y′ = Y − Ĥ_IC Ŝ   (Equation 22)

Pass the interference-reduced symbol signal Y′ to the MLSE equalizer 950, which is configured to perform MLSE equalization based on the interference-reduced symbol signal Y′ and the estimated channel BEM coefficient vector from the data-aided CE block 930.

5. For the MLSE equalizer 950 (e.g., based on the conventional MLSE equalizer as described hereinbefore), start from the 0-th subcarrier. For each state s_0^(i), i ∈ {0, 1, . . . , M^Q−1}, initialize the accumulated metric as Γ(s_0^(i)) = 0.

6. For each s_{k−1}^(j_m), m ∈ {0, 1, . . . , M−1}, compute the branch metric by:

γ(s_{k−1}^(j_m) → s_k^(i)) = |Y′[k] − Σ_{q=0}^{Q} Ĥ_{k,|k−Q/2+q|_N} S_{j_m,i}[q]|²   (Equation 23)

7. Calculate the accumulated metric of the k-th subcarrier by Equation (11).

8. Keep the preceding state j_m which leads to the minimum Γ(s_k^(i)) in a table.

9. Set k ← k+1 and go back to step 6, or go to step 10 if k = N.

10. Find the most likely transmitted sequence Ŝ recursively by tracing back from the state with the minimum accumulated metric Γ(s_{N−1}^(i)).

11. Go back to step 3 for the next round of CE and equalization, or stop if the maximum round or iteration number is reached.
Iterative Channel Estimation and Data Equalization with FEC

Various example embodiments further consider the situation that an error correction code, such as a forward error correction (FEC) code, is used after the equalizer for error correction. FIG. 11 depicts a schematic block diagram of the OFDM transmission with FEC, according to various example embodiments. It can be observed that, when FEC is used, at the transmitter side the source bits b[i] ∈ {0,1} may be encoded by a channel encoder into the encoded bits c[k] ∈ {0,1}. The encoded bits may then pass through an interleaver, and the interleaved bits are mapped to the source symbols S[k] ∈ {0, 1, . . . , M−1}. After that, the processing may be the same as that described and shown in FIG. 6. FIG. 12 depicts a schematic block diagram of the hybrid iterative channel estimation and data equalization system with FEC 1200, according to various example embodiments of the present invention. As shown in FIG. 12, the MLSE equalizer 1250 (e.g., corresponding to the second equalization as described hereinbefore according to various embodiments) may be configured to use a soft-output Viterbi algorithm (SOVA) (e.g., as described in Hagenauer et al., "A Viterbi algorithm with soft-decision outputs and its applications," in Proc. Globecom, Dallas, TX, November 1989, pp. 1680-1686, or Ling et al., "On SOVA for Nonbinary Codes," IEEE Communications Letters, vol. 3, no. 12, pp. 335-337, December 1999) to calculate L(S[k]), which is the log-likelihood ratio (LLR) of each detected symbol Ŝ[k].
After that, the extrinsic information L_e(S[k]) = L(S[k]) − L_a(S[k]) (e.g., corresponding to the plurality of second detected coded source symbols as described hereinbefore according to various embodiments) is soft-demapped (e.g., by the soft demapping block 1244) and deinterleaved (e.g., by the deinterleaving block 1246) to become the a priori information of the coded bits, that is, L_a(C[k]) (e.g., corresponding to the plurality of second detected coded bits as described hereinbefore according to various embodiments). The channel decoder 1260 (e.g., corresponding to the soft decoding of the plurality of second detected coded bits and the hard decoding of the plurality of second detected coded bits as described hereinbefore according to various embodiments) may be a soft-input soft-output (SISO) decoder which takes the a priori information of the coded bits L_a(C[k]) as input and provides the LLR, L(C[k]), of each coded bit C[k]. In the iterative channel estimation and data equalization method with FEC according to various example embodiments, the FEC code may be a rate-1/2 convolutional code with generator polynomial (133, 171), and the SISO algorithm used by the channel decoder 1260 may also be SOVA. After channel decoding, the extrinsic information L_e(C[k]) = L(C[k]) − L_a(C[k]) (e.g., corresponding to the plurality of second soft coded bits (i.e., the soft coded bits obtained after the soft decoding) as described hereinbefore according to various embodiments) is interleaved (e.g., by the interleaving block 1264), soft-mapped (e.g., by the hard/soft mapping block 1266) and fed back to the MLSE equalizer 1250 as the a priori information L_a(S[k]) (e.g., corresponding to the plurality of second detected soft decoded source symbols as described hereinbefore according to various embodiments) of the source symbols S[k] for the next round of equalization.
Meanwhile, the coded bits after error correction, i.e., Ĉ^d[k] (e.g., corresponding to the plurality of second hard coded bits (i.e., the hard coded bits obtained after the hard decoding) as described hereinbefore according to various embodiments), are interleaved (e.g., by the interleaving block 1264) and hard-mapped (e.g., by the hard/soft mapping block 1266) to ŝ^d[k] (e.g., corresponding to the plurality of second detected hard decoded source symbols as described hereinbefore according to various embodiments) and fed back to the data-aided CE block 1230 (e.g., corresponding to the second channel estimation as described hereinbefore according to various embodiments) and the IC block 1240 (e.g., corresponding to the interference removal as described hereinbefore according to various embodiments) to perform channel estimation and interference cancellation, respectively. In various example embodiments, the hard coded bits may be obtained by making hard decisions on the soft coded bits. For example, if p(b=0) > p(b=1), then b = 0; otherwise b = 1. The iterative channel estimation and data equalization method with FEC 1200 will now be described in further detail below according to various example embodiments. The pilot-aided CE block 1210 (e.g., corresponding to the first channel estimation as described hereinbefore according to various embodiments) may be configured to output the estimated BEM coefficients ĥ_q^p(l) (e.g., corresponding to the plurality of first estimated BEM coefficients as described hereinbefore according to various embodiments) with respect to the DSC based on the received frequency-domain vector Y, such as in the same or a similar manner as the pilot-aided CE block 910.
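The hard-decision rule above, together with the extrinsic-information feedback, can be sketched in a few lines. The LLR sign convention L = ln(p(b=0)/p(b=1)) and the BPSK mapping 0 → +1, 1 → −1 are assumptions made for illustration, and the LLR values are made up:

```python
import numpy as np

# Assumed LLR convention: L = ln(p(b=0) / p(b=1)), so L > 0 favours b = 0
L_post = np.array([2.3, -0.7, 0.1, -4.2])     # decoder outputs L(C[k]) (made up)
L_apriori = np.array([0.5, 0.5, -0.5, -0.5])  # a priori LLRs La(C[k]) (made up)

# Extrinsic information fed back for the next turbo iteration: Le = L - La
L_ext = L_post - L_apriori

# Hard decision: b = 0 if p(b=0) > p(b=1) (i.e., L > 0), otherwise b = 1
bits_hard = (L_post <= 0).astype(int)
assert list(bits_hard) == [0, 1, 0, 1]

# Hard mapping of the decided bits to BPSK (assumed mapping 0 -> +1, 1 -> -1)
s_hard = 1 - 2 * bits_hard
assert list(s_hard) == [1, -1, 1, -1]
```

Only the extrinsic part L_ext is fed back as a priori information, to avoid re-using the same information twice in the turbo loop, while the hard-mapped symbols s_hard drive the data-aided CE and IC blocks.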
The MMSE equalization block 1220 (e.g., corresponding to the first equalization as described hereinbefore according to various embodiments) may be configured to produce the equalized symbols ŝ[k] (e.g., corresponding to the plurality of first detected source symbols as described hereinbefore according to various embodiments) based on the received frequency-domain vector Y and the estimated BEM channel coefficients, which may be obtained by Equation (14). The equalized symbols ŝ[k] may then be hard-demapped (e.g., by the hard demapping block 1224) and deinterleaved (e.g., by the deinterleaving block 1226) to produce the detected coded bits Ĉ[k] (e.g., corresponding to the plurality of first detected coded bits as described hereinbefore according to various embodiments). The HISO decoder 1227 may be configured to produce the hard coded bits Ĉd[k] after error correction (e.g., corresponding to the plurality of first hard coded bits (i.e., hard coded bits obtained after the hard decoding) as described hereinbefore according to various embodiments) and the soft coded bits L(C[k]) (e.g., corresponding to the plurality of first soft coded bits (i.e., soft coded bits obtained after the soft decoding) as described hereinbefore according to various embodiments), which may be obtained by channel decoding algorithms depending on the error-correction codes used. By way of an example only and without limitation, when a convolutional code is used, the hard coded bits and the soft coded bits may be obtained by the SOVA algorithm, as mentioned hereinbefore. The hard coded bits Ĉd[k], after interleaving (e.g., by the interleaving block 1228) and hard mapping (e.g., by the hard/soft mapping block 1229), become the detected symbols after error correction ŝd[k] (e.g., corresponding to the plurality of first detected hard decoded source symbols as described hereinbefore according to various embodiments). 
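Equation (14) itself is not reproduced in this passage; as a generic illustration only, linear MMSE equalization of the form ŝ = (H^H H + σ²I)^{-1} H^H Y may be sketched as follows, with a randomly drawn channel matrix standing in for the estimated one:

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma2 = 4, 0.1
# Assumed (illustrative) channel matrix and QPSK source symbols
H = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N) / np.sqrt(2)
noise = np.sqrt(sigma2 / 2) * (rng.normal(size=N) + 1j * rng.normal(size=N))
y = H @ s + noise

# Linear MMSE estimate: s_hat = (H^H H + sigma^2 I)^{-1} H^H y
s_hat = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(N), H.conj().T @ y)
```

The regularization term σ²I is what distinguishes MMSE from zero-forcing: it trades a small bias for robustness when H is poorly conditioned.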
The soft coded bits L(C[k]), after interleaving (e.g., by the interleaving block 1228) and soft mapping (e.g., by the hard/soft mapping block 1229), become the a priori information La(S[k]) (e.g., corresponding to the plurality of first detected soft decoded source symbols as described hereinbefore according to various embodiments). The data-aided CE block 1230 may be configured to produce the estimated BEM coefficients ĥqd(l) (e.g., corresponding to the plurality of second estimated BEM coefficients as described hereinbefore according to various embodiments) based on the received frequency-domain vector Y and the detected hard decoded source symbols (e.g., the detected hard decoded source symbols based on the MMSE equalizer 1220 in a first round or iteration (e.g., iteration 1 in FIG. 12) or the detected hard decoded source symbols based on the MLSE equalizer 1250 in subsequent round(s) or iteration(s) (e.g., iterations 2, 3 and so on in FIG. 12)), which may be obtained based on Equation (21). The IC block 1240 may be configured to produce an interference reduced symbol signal Y′, which may be obtained by Equation (22). In particular, the IC block 1240 may be configured to make a reconstruction of the channel matrix Ĥ again (e.g., corresponding to the second estimated channel matrix as described hereinbefore according to various embodiments). The IC block 1240 may be further configured to divide or decompose Ĥ into Ĥ = Ĥb + ĤIC, where Ĥb and ĤIC are the estimated Hb and HIC as described before. 
The IC block 1240 may then be configured to calculate the interference reduced symbol signal Y′ based on the detected hard decoded source symbols (e.g., the detected hard decoded source symbols based on the MMSE equalizer 1220 in a first round or iteration (e.g., iteration 1 in FIG. 12) or the detected hard decoded source symbols based on the MLSE equalizer 1250 in subsequent round(s) or iteration(s) (e.g., iterations 2, 3 and so on in FIG. 12)) and ĤIC (e.g., corresponding to the performing of interference removal as described hereinbefore according to various embodiments), such as by Equation (22). The MLSE equalization block 1250 may be configured to produce the extrinsic information Le(S[k]) = L(S[k]) − La(S[k]) based on the interference reduced symbol signal Y′ and the estimated BEM coefficients ĥqd(l). In various example embodiments, the MLSE equalization block 1250 may be further configured to produce the extrinsic information Le(S[k]) = L(S[k]) − La(S[k]) based on the detected soft decoded source symbols (e.g., the detected soft decoded source symbols based on the MMSE equalizer 1220 in a first round or iteration (e.g., iteration 1 in FIG. 12) or the detected soft decoded source symbols based on the MLSE equalizer 1250 in subsequent round(s) or iteration(s) (e.g., iterations 2, 3 and so on in FIG. 12)). For example, L(S[k]) may be obtained by the SOVA algorithm, as mentioned hereinbefore. The SISO decoder 1260 may be configured to produce the hard coded bits after error correction Ĉd[k] and the extrinsic information Le(C[k]) = L(C[k]) − La(C[k]). For example, Ĉd[k] and L(C[k]) may be obtained using channel decoding algorithms depending on the error-correction codes used. By way of an example only and without limitation, when a convolutional code is used, Ĉd[k] and L(C[k]) may be obtained using the SOVA algorithm, as mentioned hereinbefore. The hard coded bits Ĉd[k], after interleaving and hard mapping, become the detected symbols after error correction ŝd[k]. 
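The decomposition Ĥ = Ĥb + ĤIC and the cancellation step of Equation (22) can be illustrated as follows; the banded split, matrix values and symbol decisions are toy assumptions introduced only for this sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6
H_hat = rng.normal(size=(N, N))                 # assumed estimated channel matrix
# Split into a banded part (kept for equalization) and an interference part
band = np.abs(np.subtract.outer(np.arange(N), np.arange(N))) <= 1
H_b = np.where(band, H_hat, 0.0)
H_ic = H_hat - H_b                              # H_hat = H_b + H_ic by construction
s_d = rng.choice([-1.0, 1.0], size=N)           # hard-decided symbols fed back
Y = H_hat @ s_d
Y_prime = Y - H_ic @ s_d                        # interference-reduced signal
# With perfect decisions, what remains is exactly the banded-channel contribution
assert np.allclose(Y_prime, H_b @ s_d)
```

In practice the decisions ŝd[k] are imperfect, so the residual after cancellation shrinks as the decisions improve over iterations rather than vanishing outright.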
The extrinsic information Le(C[k]), after interleaving and soft mapping, becomes the a priori information La(S[k]). As mentioned hereinbefore, “hard” and “soft” can be understood by a person skilled in the art. For example, a hard value may refer to a binary value (e.g., C[k] = 0 or 1) and a soft value may refer to a log-likelihood ratio (e.g., L(C[k]) = ln(P(C[k]=0)/P(C[k]=1))). In the first round of iteration, since the MMSE equalizer 1220 is used, only the hard output Ŝ[k] may be provided to the channel decoder 1227, and the channel decoder 1227 may be a hard-input soft-output (HISO) decoder. Improve the Performance of Channel Estimation by Using Inter-Frame Channel Interpolation Various example embodiments note that when using the BEM channel model, the channel gains estimated by using pilot symbols are more accurate in the middle of each frame than at the beginning and end of each frame. Accordingly, various example embodiments improve the performance of the channel estimation by using inter-frame channel interpolation. In various example embodiments, the estimated channel gains in the middle of each frame may be taken as known channel gains and used for polynomial interpolation to estimate the channel gains at a beginning portion and an end portion of the transmission frame. In various example embodiments, polynomial interpolation may determine a polynomial f[n; l] (e.g., corresponding to the inter-frame channel interpolation polynomial function as described hereinbefore according to various embodiments) to represent the channel gains of the l-th path at time index n. Suppose the degree of f[n; l] is d; then f[n; l] may be written as: f[n; l] = u_l(0) + n·u_l(1) + n^2·u_l(2) + . . . + n^d·u_l(d) (Equation 24) The target is to minimize the MSE between f[n; l] and the N_h known channel coefficients h[n_i; l], i = 1, 2, . . . , N_h, that is: R^2 = Σ_{i=1}^{N_h} |f[n_i; l] − h[n_i; l]|^2 (Equation 25) To minimize R^2, the least squares (LS) fitting method may be used according to various example embodiments. 
According to Equation (24), the following may be obtained: B·u_l = [f[n_1; l], f[n_2; l], . . . , f[n_{N_h}; l]]^T (Equation 26) where B is the N_h × (d+1) Vandermonde matrix whose i-th row is [1, n_i, n_i^2, . . . , n_i^d] (Equation 27). Let u_l = [u_l(0), u_l(1), . . . , u_l(d)]^T and h_l = [h[n_1; l], h[n_2; l], . . . , h[n_{N_h}; l]]^T; then, according to LS fitting, the following may be obtained: u_l = (B^H B)^{-1} B^H h_l (Equation 28) After u_l (which may be referred to as the polynomial interpolation coefficients) is obtained, it may then be used to calculate f[n; l], n = 0, 1, . . . , N−1. Optimal Power Allocation Between the Data and the Pilots In practical communication systems, there is a power constraint on the transmission block, which comprises both the data symbols as well as the pilot symbols used for CE. Various example embodiments note that an optimal allocation of power between the data and the pilots can improve the overall system performance. Specifically, it may be desirable for the hybrid iterative channel estimation and data equalization method according to various example embodiments to have a good initial CE as its starting point. In this regard, providing more power to the pilots can improve the initial CE, but leaves relatively less power for the data symbols. Thus, although the CE is good, the BER performance may deteriorate because the data power is reduced. Therefore, various example embodiments may determine an optimal allocation of power between data symbols and pilot symbols so that the system BER is minimized. According to various example embodiments, it may be assumed that the total transmission power over a block is P_T, and is divided between the data and the pilots as αP_T and (1−α)P_T respectively, where α ∈ (0, 1) is the power allocation factor. In other words, α is defined as the ratio of the power allocated to the data blocks to the power in the entire transmission block. 
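The LS fit of Equations (26)–(28) is a standard Vandermonde least-squares problem; a minimal sketch, assuming a few noiseless mid-frame channel gains drawn from a low-degree polynomial (the indices and gain values are illustrative only):

```python
import numpy as np

# Known channel gains h[n_i; l] at time indices n_i (toy values for illustration)
n_i = np.array([3, 4, 5, 6, 7])
h_l = 0.5 + 0.1 * n_i - 0.02 * n_i**2       # samples of a degree-2 polynomial
d = 2                                        # polynomial degree

B = np.vander(n_i, d + 1, increasing=True)   # rows [1, n_i, n_i^2] (Equation 27)
u_l, *_ = np.linalg.lstsq(B, h_l, rcond=None)  # LS solution, as in Equation (28)

# Extrapolate f[n; l] over the whole frame, including the edges (Equation 24)
n_all = np.arange(0, 11)
f = np.polyval(u_l[::-1], n_all)             # polyval wants highest degree first
assert np.allclose(f[n_i], h_l)              # exact on noiseless polynomial data
```

`numpy.linalg.lstsq` solves the same normal equations as (B^H B)^{-1} B^H h_l but via a numerically stabler factorization, which matters for the high degrees (d = 13) used later in the description.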
For an equal number of data (or pilot) symbols per sub-block (a single impulse pilot per sub-block in our case), it can be verified that the SNR per data symbol and the SNR per pilot symbol are the same if α = n_d/(n_d + 1), where n_d denotes the number of data symbols in a single sub-block. If α < n_d/(n_d + 1), every pilot sample is allocated more power compared to every data sample, and vice versa. Accordingly, various example embodiments determine the optimal value of α to minimize the system BER. In this regard, BER is the final desired performance metric, and is a function of the CE as well as the power of the data symbols. In contrast, in various conventional iterative estimation techniques in the literature, the overall transmitted power may be allocated equally or in a fixed (non-optimal) ratio between the data and the pilots. However, various example embodiments note that such approaches may result in sub-optimal system performance. In the power allocation approach for the hybrid iterative channel estimation and data equalization method according to various example embodiments, it may be difficult to find the optimal closed-form power expression for BER minimization due to its iterative nature. Accordingly, various example embodiments determine a near-optimal power allocation using prior training (before the actual transmission starts), assuming that the channel statistics remain unchanged for a sufficiently long duration. Simulation Results Various simulation results for the hybrid iterative channel estimation and equalization method according to various example embodiments over a DSC will now be described. In the simulations conducted, the modulation scheme is quadrature phase-shift keying (QPSK). The transmission bandwidth is 7.68 MHz. The FFT size is N = 512. The DSC is of order L = 3, that is, four multipaths are considered. Each channel tap is simulated as an independent and identically distributed (i.i.d.) 
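The threshold α = n_d/(n_d + 1) can be checked directly: at that value the per-data-symbol and per-pilot-symbol powers coincide. A quick sketch, with an illustrative n_d:

```python
# With n_d data symbols and 1 pilot per sub-block, the total power P_T is split
# as alpha*P_T over the data and (1-alpha)*P_T over the pilot; the per-symbol
# powers match exactly when alpha = n_d / (n_d + 1).
n_d, P_T = 7, 1.0                       # n_d is an illustrative value
alpha = n_d / (n_d + 1)
power_per_data = alpha * P_T / n_d
power_per_pilot = (1 - alpha) * P_T / 1
assert abs(power_per_data - power_per_pilot) < 1e-12
```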
random variable correlated in time according to Jakes' model, with the correlation function given as J_0(2π n f_max T_s), where J_0 is the zeroth-order Bessel function of the first kind. The average total power of the 4 paths is normalized to 1. The vehicle speed is 500 km/h and the carrier frequency f_c = 2 or 20 GHz, which leads to a normalized Doppler spread f_max T_s N = 0.062 or 0.62, respectively. For the BEM model, K = 2N and Q = 4 were used. Using non-uniformly spaced frequencies, Q can be reduced from 4 to 2. For simplicity, various example embodiments assume uniformly spaced frequencies and Q = 4. The length of the pilot cluster is 2Q + 1 = 9 and the number of sub-blocks is L + 1 = 4. Therefore, the efficiency of the transmission is 93%. Firstly, the BER performance under different power allocation schemes was analyzed. The BERs obtained by the hybrid iterative channel estimation and equalization (MMSE&IC-MLSE) method according to various example embodiments with different values of α are plotted. In the comparisons, it was assumed that the SNR is 20 dB and there is no FEC. FIG. 13 depicts a plot of BER versus the values of α (Es/No = 20 dB). From FIG. 13, it can be observed that α = 0.9 is a good choice for both the low Doppler (f_max T_s N = 0.062) and high Doppler (f_max T_s N = 0.62) scenarios. Therefore, all the following simulation results of the hybrid iterative channel estimation and equalization method according to various example embodiments are obtained under the condition that α = 0.9. Subsequently, the performance of the inter-frame interpolation was analyzed. In the simulation, it was assumed that the interleaver length is 5120 bits, which includes 5 OFDM frames, that is, the inter-frame interpolation was performed each time over 5 frames. In each frame, the channel gains from time index 51 to 462 were taken as the known channel gains, and the first and last 50 channel gains as unknown. Hence, the total number of known channel gains is N_h = 2060. The indices k_i, i = 1, 2, . . . , N_h are [51, 52, . . . 
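The quoted normalized Doppler spreads follow directly from the stated carrier frequencies, vehicle speed, bandwidth and FFT size; a quick numerical check:

```python
# Normalized Doppler spread f_max * Ts * N for the simulation parameters above
c = 3e8                      # speed of light, m/s
v = 500 / 3.6                # 500 km/h in m/s
bandwidth = 7.68e6           # Hz, so the symbol period is Ts = 1 / bandwidth
N = 512                      # FFT size

for fc, expected in [(2e9, 0.062), (20e9, 0.62)]:
    f_max = v / c * fc                      # maximum Doppler shift, Hz
    norm_doppler = f_max * N / bandwidth    # f_max * Ts * N
    assert abs(norm_doppler - expected) < 0.005
```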
, 461, 462, 563, 564, . . . , 973, 974, 1075, 1076, . . . , 1485, 1486, 1587, 1588, . . . , 1997, 1998, 2099, 2100, . . . , 2509, 2510]. The polynomial degree was set to d = 13. After obtaining u_l by using Equation (28), the channel gains after interpolation for k = 1, 2, . . . , 2560 were calculated by using Equation (24). In FIG. 14, the channel gains after interpolation are compared with those obtained without inter-frame interpolation. FIG. 14 depicts a plot comparing the channel gains estimated by pilots with the channel gains after interpolation (frame length = 512, SNR = 20 dB, d = 13, interleaver length = 5120 bits). From FIG. 14, it can be observed that after interpolation, the estimated channel gains approach the actual channel gains more closely, except at the beginning and end of the 2560 symbols. Accordingly, in various example embodiments, the channel gains after interpolation were used to replace the channel gains estimated by using pilot symbols, except for the first and last 100 symbols. Accordingly, the estimated channel gains in at least one of a beginning portion and an end portion of the frame may be modified based on the inter-frame channel interpolation polynomial function. Furthermore, the MSE of the channel estimation and the BER of the equalization obtained by using conventional methods and the MMSE&IC-MLSE method were computed and compared. The MSE for the l-th channel tap was computed as: MSE(l) = (1/(I·N)) Σ_{i=1}^{I} Σ_{n=0}^{N−1} |h^(i)[n; l] − ĥ^(i)[n; l]|^2 (Equation 29) where I and N denote the number of transmission blocks and the length of each block, respectively. ĥ^(i)[n; l] = Σ_{q=0}^{Q} ĥ_q^(i)(l) e^{jω_q n} is the estimated time-domain channel coefficient of the i-th block obtained from the estimated BEM coefficients ĥ_q^(i)(l). 
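Equation (29) is a straightforward average of squared estimation errors over blocks and time indices; a minimal sketch with synthetic tap gains standing in for the true and estimated channels:

```python
import numpy as np

rng = np.random.default_rng(3)
I, N = 4, 8                                   # number of blocks and block length
h = rng.normal(size=(I, N))                   # true tap gains h^(i)[n; l] (toy)
h_hat = h + 0.1 * rng.normal(size=(I, N))     # estimates with small errors

# MSE(l) = (1/(I*N)) * sum_i sum_n |h^(i)[n; l] - h_hat^(i)[n; l]|^2
mse = np.mean(np.abs(h - h_hat) ** 2)
assert mse >= 0.0
```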
The lower bound of the MSE is obtained by performing data-aided CE using perfectly known source symbols, that is, h̃ = (S̃^H S̃)^{-1} S̃^H Y, where S̃ is an N × (Q+1)(L+1) matrix whose ((Q+1)l+q)-th column equals F D_q Z̃^l F^H S. After the BEM coefficient vector h̃ is obtained, the time-domain channel coefficients h̃(n; l) can be calculated by Equation (2) and the lower bound MSE can be calculated accordingly by Equation (29). The lower bound of the BER is obtained by performing IC-MLSE only one time, with the IC and MLSE based on the assumption that the channel state information (CSI) is perfectly known. When the channel is perfectly known, the channel matrix H is given by: H = Σ_{l=0}^{L} F D_h Z̃^l F^H (Equation 30) where D_h is an N × N diagonal matrix with the diagonal elements equal to the time-domain channel coefficients, that is, D_h = diag{[h(0; l), h(1; l), . . . , h(N−1; l)]^T}. H was then divided into H_b and H_IC, and the IC under perfect CSI was performed by Y″ = Y − H_IC S. The MSE of the channel estimation and the BER obtained by the different equalization methods are shown in FIGS. 15A, 15B, 16A and 16B, respectively. The results shown in FIGS. 15A and 16A are obtained without FEC. In particular, FIG. 15A depicts a plot of the MSE of the channel estimation without FEC (f_max T_s N = 0.62, f_c = 20 GHz, vehicle speed = 500 km/h, efficiency = 93%). FIG. 15B depicts a plot of the MSE of the channel estimation with FEC (f_max T_s N = 0.62, f_c = 20 GHz, vehicle speed = 500 km/h, efficiency = 93%). FIG. 16A depicts a plot of the BER after equalization without FEC (f_max T_s N = 0.62, f_c = 20 GHz, vehicle speed = 500 km/h, efficiency = 93%). FIG. 16B depicts a plot of the BER after equalization with FEC (f_max T_s N = 0.62, f_c = 20 GHz, vehicle speed = 500 km/h, efficiency = 93%). The results shown in FIGS. 15B and 16B are obtained with FEC. The FEC code was a rate 1/2 convolutional code with generator polynomial (133, 171) and the interleaver was a random interleaver with a length of 5120 bits. 
For the MLSE disclosed in the above-mentioned Barhumi reference, K = N was used since Equation (9) is only valid when K = N. The BERs obtained under a flat fading channel are also shown in FIGS. 16A and 16B as a benchmark. From FIGS. 15A, 15B, 16A and 16B, it can be observed that the method according to various example embodiments leads to much better MSE and BER performance than those achieved by the conventional MLSE and MMSE methods. The reason that the MSE and BER obtained by the MLSE proposed in the above-mentioned Barhumi reference are not good is that using K = N and ω_q = (2π/K)(q − ⌈Q/2⌉), q ∈ {0, 1, . . . , Q} leads to a poor BEM channel model for the actual channel with large Doppler spread, and therefore, there is a large channel modelling error, which leads to high MSE and BER. The method according to various example embodiments also performs better than the MMSE disclosed in the above-mentioned Tang reference because MMSE is sub-optimal in performance as compared with MLSE, and in addition, the method according to various example embodiments achieves further improvement by using iterations. From FIGS. 15B and 16B, it can also be observed that by adding inter-frame channel interpolation, a further performance improvement can be obtained, and the BER after adding inter-frame channel interpolation gets very close to the BER obtained with perfect CSI. Accordingly, various example embodiments provide a hybrid iterative channel estimation and equalization method (MMSE&IC-MLSE) for efficient and accurate channel estimation (CE) and data equalization for OFDM transmission over a DSC without excessive pilot symbol transmission. In a first iteration or round, the method uses MMSE to perform equalization, and in subsequent iteration(s) or round(s), the method uses the MLSE together with an interference cancellation (IC) block to perform equalization. The method works both without and with FEC. 
Simulation results show that the method significantly outperforms the conventional pilot-based iterative CE and equalization methods in both the without-FEC and with-FEC scenarios, especially when the Doppler spread of the channel is high. Moreover, unlike the equal or fixed power allocation in conventional iterative schemes, various example embodiments allocate the transmission power over the data and the pilot symbols in an optimal way to further improve the system MSE/BER performance. Furthermore, to further improve the performance of the proposed scheme for continuous transmission of data blocks, various example embodiments use inter-frame channel interpolation based on the LS fitting method to reduce the MSE of the CE at the transmission block edges. Simulation results show that the overall performance of the CE and BER can be improved by using the method according to various example embodiments of the present invention. While embodiments of the invention have been particularly shown and described with reference to specific embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. The scope of the invention is thus indicated by the appended claims and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced.
11943082
DETAILED DESCRIPTION The embodiments set forth below represent information to enable those skilled in the art to practice the embodiments. Upon reading the following description in light of the accompanying figures, those skilled in the art will understand the concepts of the description and will recognize applications of these concepts not particularly addressed herein. It should be understood that these concepts and applications fall within the scope of the description. In the following description, numerous specific details are set forth. However, it is understood that embodiments may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques have not been shown in detail in order not to obscure the understanding of the description. Those of ordinary skill in the art, with the included description, will be able to implement appropriate functionality without undue experimentation. References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to implement such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. 
It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. FIG.1illustrates an example of a wireless communication network100that may be used for wireless communications. Wireless communication network100includes wireless devices110A-110B (collectively referred to as wireless devices or WDs110) and a plurality of radio network nodes130A-130B (e.g., NB s and/or RNCs in UMTS, eNBs in LTE, gNBs in NR, etc.) (collectively referred to as radio network node or radio network nodes130) directly or indirectly connected to a core network150which may comprise a plurality of core network nodes (e.g., SGSNs and/or GGSNs in UMTS, MMEs, SGWs, and/or PGWs in LTE/EPC, AMFs, SMFs, and/or UPFs in NGC, etc.) (collectively referred to as core network node or core network nodes). The wireless communication network100may use any suitable radio access network (RAN) deployment scenarios, including UMTS Terrestrial Radio Access Network, UTRAN, Evolved UMTS Terrestrial Radio Access Network, EUTRAN, and Next Generation Radio Access Network, NG-RAN. Wireless devices110within coverage areas115may each be capable of communicating directly with radio network nodes130over a wireless interface. In certain embodiments, wireless devices may also be capable of communicating with each other via device-to-device (D2D) communication. As an example, wireless device110A may communicate with radio network node130A over a wireless interface. That is, wireless device110A may transmit wireless signals to and/or receive wireless signals from radio network node130A. The wireless signals may contain voice traffic, data traffic, control signals, and/or any other suitable information. 
In some embodiments, an area of wireless signal coverage associated with a radio network node130may be referred to as a cell. In wireless networks such as wireless communication network100, pseudo-random sequences are used in the transmission of some downlink messages and/or in the transmission of some downlink reference signal (In the present description, downlink generally refers to communication from the radio network node to one or more wireless devices while uplink generally refers to communications from one or more wireless devices to the radio network node). For example, in wireless networks deployed according to the 3GPP LTE and/or NR standards, downlink messages sent over the physical downlink shared channels (PDSCH) are scrambled with a pseudo-random (or scrambling) sequence before being modulated and ultimately transmitted by the radio network node (e.g., eNB in LTE, gNB in NR) to the wireless device (e.g., UE in both LTE and NR). The scrambling of the downlink PDSCH messages is described in section 6.3.1 of 3GPP TS 36.211 V14.5.0 (for LTE) and in section 7.3.1.1 of 3GPP TS 38.211 V15.0.0 (for NR). Similarly, in wireless networks deployed according to the 3GPP LTE and/or 3GPP NR standards, downlink reference signals such as the UE-specific reference signals or demodulation reference signals (DM-RS) are generated using a pseudo-random sequence. The generation of the DM-RS is described in section 6.10.3.1 of 3GPP TS 36.211 V14.5.0 (for LTE) and in section 7.4.1.1.1 of 3GPP TS 38.211 V15.0.0 (for NR). As indicated above, downlink messages that are targeted at multiple wireless devices such as paging messages, system information messages, etc., may not be delivered to wireless devices when a certain identifier (e.g., nID, NIDnSCID) has been configured by higher layers (e.g., by the RRC layer). 
According to a broad embodiment, the identifier used in the generation of the initialization value and which can be configured by higher layers (e.g., n_ID, N_ID^nSCID) takes the value of a first parameter, that is, the parameter configured by the higher layer(s), only when the first parameter is configured and when a second identifier is a wireless-device-specific identifier. By doing so, the configurable identifier can be configured to a particular value when necessary and/or beneficial. Additional details and embodiments are disclosed below. FIG. 2 is a flow chart that illustrates some operations of a radio network node 130 according to some embodiments. As illustrated, the radio network node 130 first obtains a pseudo-random sequence initialization value (e.g., c_init) which will be used when subsequently obtaining the pseudo-random (or scrambling) sequence (action S100). The pseudo-random sequence initialization value is based, at least in part, on a first identifier (e.g., n_ID, N_ID^nSCID). In some embodiments, the first identifier equals a first parameter (e.g., Data-scrambling-Identity or DL-DMRS-Scrambling-ID) if the first parameter has been configured and if a second identifier corresponds to a wireless-device-specific identifier. In some embodiments, how the pseudo-random sequence initialization value (e.g., c_init) is obtained may differ according to the intended use of the pseudo-random sequence initialization value. For instance, when the pseudo-random sequence initialization value is used to generate or otherwise obtain a scrambling sequence to scramble a downlink message, the pseudo-random sequence initialization value may be obtained using the relation described in section 7.3.1.1 of 3GPP TS 38.211: c_init = n_RNTI · 2^15 + q · 2^14 + n_ID, wherein n_ID corresponds to the first identifier mentioned above. 
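The relation above can be sketched directly; the RNTI and n_ID values below are illustrative only:

```python
# Scrambling-sequence initialization value per TS 38.211 clause 7.3.1.1:
# c_init = n_RNTI * 2^15 + q * 2^14 + n_ID
def scrambling_c_init(n_rnti: int, q: int, n_id: int) -> int:
    assert q in (0, 1) and 0 <= n_id <= 1023
    return n_rnti * 2**15 + q * 2**14 + n_id

# e.g., codeword 0 with an illustrative C-RNTI and a configured n_ID
c_init = scrambling_c_init(n_rnti=0x4601, q=0, n_id=10)
assert c_init == 0x4601 * 2**15 + 10
```

Because n_ID occupies the low 10 bits, q the 14th bit and n_RNTI the bits above it, distinct (RNTI, codeword, identifier) triples map to distinct initialization values.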
When the pseudo-random sequence initialization value is used to generate or otherwise obtain a pseudo-random sequence to generate or otherwise obtain a downlink reference signal, the pseudo-random sequence initialization value may be obtained using the relation described in section 7.4.1.1.1 of 3GPP TS 38.211: c_init = (2^17 (14 n_s + l + 1)(2 N_ID^nSCID + 1) + 2 N_ID^nSCID + n_SCID) mod 2^31, wherein N_ID^nSCID corresponds to the first identifier mentioned above. Regardless of how the pseudo-random sequence initialization value is obtained, as indicated above, the value of the first identifier will be equal to the first parameter if the first parameter is configured (typically by higher layer(s) such as the RRC layer) and if the second identifier corresponds to a wireless-device-specific identifier. In some embodiments, the second identifier is a radio network temporary identifier (RNTI). In NR, RNTIs are summarized in section 7.1 of 3GPP TS 38.321 V15.0.0. However, not all RNTIs are wireless-device-specific RNTIs. For instance, the paging RNTI (P-RNTI) and the system information RNTI (SI-RNTI) are usually not wireless-device-specific RNTIs in the sense that they do not identify a unique wireless device. However, the cell RNTI (C-RNTI), the temporary C-RNTI, and the configured scheduling RNTI (CS-RNTI) are usually wireless-device-specific RNTIs in the sense that they identify a unique wireless device. Understandably, other RNTIs exist and still other RNTIs may be developed in the future. As such, the above RNTIs are non-limitative examples of RNTIs. Hence, when the first parameter is configured and when the second identifier is a wireless-device-specific identifier, the first identifier equals the first parameter. If the first parameter has not been configured and/or if the second identifier is not a wireless-device-specific identifier, then there are at least two possible scenarios. 
In a first scenario, when the first parameter has not been configured and/or when the second identifier is not a wireless-device-specific identifier, then the first identifier takes a default value. In NR, this default value can be a cell identifier such as N_ID^cell. In some embodiments, the following sections of 3GPP TS 38.211 V15.0.0 may be modified as follows to enable one or more of the described embodiments, including the first scenario. ======<<<<<<3GPP TS 38.211 V15.0.0>>>>>====== 7.3.1.1 Scrambling Up to two codewords can be transmitted, q ∈ {0, 1}. In the case of single-codeword transmission, q = 0. For each codeword q, the UE shall assume the block of bits b^(q)(0), . . . , b^(q)(M_bit^(q) − 1), where M_bit^(q) is the number of bits in codeword q transmitted on the physical channel, are scrambled prior to modulation, resulting in a block of scrambled bits b̃^(q)(0), . . . , b̃^(q)(M_bit^(q) − 1) according to b̃^(q)(i) = (b^(q)(i) + c^(q)(i)) mod 2, where the scrambling sequence c^(q)(i) is given by clause 5.2.1. The scrambling sequence generator shall be initialized with c_init = n_RNTI · 2^15 + q · 2^14 + n_ID, where n_ID ∈ {0, 1, . . . , 1023} equals the higher-layer parameter Data-scrambling-Identity if configured and if the RNTI equals the C-RNTI (or another UE-specific RNTI), and n_ID = N_ID^cell otherwise. 7.4.1.1.1 Sequence Generation The UE shall assume the reference-signal sequence r(m) is defined by r(m) = (1/√2)(1 − 2·c(2m)) + j(1/√2)(1 − 2·c(2m+1)), where the pseudo-random sequence c(i) is defined in clause 5.2. The pseudo-random sequence generator shall be initialized with c_init = (2^17 (14 n_s + l + 1)(2 N_ID^nSCID + 1) + 2 N_ID^nSCID + n_SCID) mod 2^31, where l is the OFDM symbol number within the slot, and n_SCID ∈ {0, 1} and N_ID^nSCID ∈ {0, 1, . . . 
, 65535} is given by the higher-layer parameter DL-DMRS-Scrambling-ID if provided and if the RNTI equals the C-RNTI (or another UE-specific RNTI), and n_SCID = 0 and N_ID^nSCID = N_ID^cell otherwise. ======<<<<<<3GPP TS 38.211 V15.0.0>>>>>====== In a second scenario, it is possible that a second parameter (e.g., Data-scrambling-Identity-Common or DL-DMRS-Scrambling-ID-Common) is configured. In such a case, if the second parameter has been configured and if the second identifier is not a wireless-device-specific identifier, then the first identifier takes the value of the second parameter. In some embodiments, the following sections of 3GPP TS 38.211 V15.0.0 may be modified as follows to enable one or more of the described embodiments, including the second scenario. ======<<<<<<3GPP TS 38.211 V15.0.0>>>>>====== 7.3.1.1 Scrambling Up to two codewords can be transmitted, q ∈ {0, 1}. In the case of single-codeword transmission, q = 0. For each codeword q, the UE shall assume the block of bits b^(q)(0), . . . , b^(q)(M_bit^(q) − 1), where M_bit^(q) is the number of bits in codeword q transmitted on the physical channel, are scrambled prior to modulation, resulting in a block of scrambled bits b̃^(q)(0), . . . , b̃^(q)(M_bit^(q) − 1) according to b̃^(q)(i) = (b^(q)(i) + c^(q)(i)) mod 2, where the scrambling sequence c^(q)(i) is given by clause 5.2.1. The scrambling sequence generator shall be initialized with c_init = n_RNTI · 2^15 + q · 2^14 + n_ID, where n_ID ∈ {0, 1, . . . , 1023} equals the higher-layer parameter Data-scrambling-Identity if configured and if the RNTI equals the C-RNTI (or another UE-specific RNTI); n_ID ∈ {0, 1, . . . , 1023} equals the higher-layer parameter Data-scrambling-Identity-Common if configured and if the RNTI does not equal the C-RNTI (or another UE-specific RNTI); and n_ID = N_ID^cell otherwise. 7.4.1.1.1 Sequence Generation The UE shall assume the reference-signal sequence r(m) is defined by r(m) = (1/√2)(1 − 2·c(2m)) + j(1/√2)(1 − 2·c(2m+1)), 
where the pseudo-random sequence c(i) is defined in clause 5.2. The pseudo-random sequence generator shall be initialized with

c_init = (2^17 (14 n_s + l + 1)(2 N_ID^(n_SCID) + 1) + 2 N_ID^(n_SCID) + n_SCID) mod 2^31

where l is the OFDM symbol number within the slot and
- n_SCID ∈ {0, 1} and N_ID^(n_SCID) ∈ {0, 1, . . . , 65535} is given by the higher-layer parameter DL-DMRS-Scrambling-ID if provided and if the RNTI equals the C-RNTI (or another UE-specific RNTI),
- n_SCID ∈ {0, 1} and N_ID^(n_SCID) ∈ {0, 1, . . . , 65535} is given by the higher-layer parameter DL-DMRS-Scrambling-ID-Common if provided and if the RNTI does not equal the C-RNTI (or another UE-specific RNTI),
- n_SCID = 0 and N_ID^(n_SCID) = N_ID^cell otherwise.

======<<<<<<3GPP TS 38.211 V15.0.0>>>>>======

Understandably, other scenarios are possible. Once the pseudo-random sequence initialization value is obtained, the radio network node then obtains a pseudo-random sequence based, at least in part, on the previously obtained pseudo-random sequence initialization value (action S102). In NR for instance, the generation of the pseudo-random sequence is described in section 5.2.1 of 3GPP TS 38.211 V15.0.0. Then, the radio network node130may use the obtained pseudo-random sequence differently depending on whether the pseudo-random sequence is used with a downlink message or with a downlink reference signal. When the obtained pseudo-random sequence is to be used with a downlink message (e.g., a PDSCH message), the radio network node130scrambles the downlink message with the obtained pseudo-random sequence (action S104) prior to transmitting the scrambled downlink message to a wireless device to which the downlink message is directed (action S106). The scrambling of the downlink message is described, for instance, in section 7.3.1.1 of 3GPP TS 38.211 V15.0.0.
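For illustration, the clause 5.2.1 sequence generation referenced above and the scrambling of action S104 can be sketched as follows. This is a minimal sketch: the function names are ours, while the generator follows the length-31 Gold sequence construction of 3GPP TS 38.211 clause 5.2.1 (x1 with fixed initialization, x2 initialized from c_init, output offset N_C = 1600).

```python
def prs(c_init: int, length: int) -> list:
    """Pseudo-random (length-31 Gold) sequence c(n), 3GPP TS 38.211 clause 5.2.1."""
    Nc = 1600
    n = length + Nc + 31
    x1 = [0] * n
    x2 = [0] * n
    x1[0] = 1                                  # x1 has a fixed initialization
    for i in range(31):
        x2[i] = (c_init >> i) & 1              # x2 is initialized from c_init
    for i in range(n - 31):
        x1[i + 31] = (x1[i + 3] + x1[i]) % 2
        x2[i + 31] = (x2[i + 3] + x2[i + 2] + x2[i + 1] + x2[i]) % 2
    return [(x1[i + Nc] + x2[i + Nc]) % 2 for i in range(length)]

def scramble(bits, c):
    """Section 7.3.1.1 scrambling: b~(i) = (b(i) + c(i)) mod 2."""
    return [(b + ci) % 2 for b, ci in zip(bits, c)]
```

Because the scrambling is a bitwise modulo-2 addition, applying `scramble` a second time with the same sequence recovers the original bits, which is how the wireless device later unscrambles the message.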
When the obtained pseudo-random sequence is to be used with a downlink reference signal (e.g., a DM-RS), the radio network node130generates the downlink reference signal based, at least in part, on the obtained pseudo-random sequence (action S108) prior to transmitting the generated downlink reference signal to the wireless device (action S110). The generation of the downlink reference signal DM-RS is described, for instance, in section 7.4.1.1.1 of 3GPP TS 38.211 V15.0.0.

FIG.3is a flow chart that illustrates some operations of a wireless device110according to some embodiments. As illustrated, the wireless device110first obtains a pseudo-random sequence initialization value (e.g., c_init) which will be used when subsequently obtaining the pseudo-random (or scrambling) sequence (action S200). The pseudo-random sequence initialization value is based, at least in part, on a first identifier (e.g., n_ID, N_ID^(n_SCID)). In some embodiments, the first identifier equals a first parameter if the first parameter has been configured and if a second identifier corresponds to a wireless-device-specific identifier. In some embodiments, how the pseudo-random sequence initialization value (e.g., c_init) is obtained may differ according to the intended use of the pseudo-random sequence initialization value. For instance, when the pseudo-random sequence initialization value is to be used to generate or otherwise obtain a scrambling sequence to scramble a downlink message, the pseudo-random sequence initialization value may be obtained using the relation described in section 7.3.1.1 of 3GPP TS 38.211:

c_init = n_RNTI · 2^15 + q · 2^14 + n_ID

wherein n_ID corresponds to the first identifier mentioned above.
When the pseudo-random sequence initialization value is to be used to generate or otherwise obtain a pseudo-random sequence to generate or otherwise obtain a downlink reference signal, the pseudo-random sequence initialization value may be obtained using the relation described in section 7.4.1.1.1 of 3GPP TS 38.211:

c_init = (2^17 (14 n_s + l + 1)(2 N_ID^(n_SCID) + 1) + 2 N_ID^(n_SCID) + n_SCID) mod 2^31

wherein N_ID^(n_SCID) corresponds to the first identifier mentioned above.

Regardless of how the pseudo-random sequence initialization value is obtained, as indicated above, the value of the first identifier will be equal to the first parameter if the first parameter is configured (typically by higher layer(s) such as the RRC layer) and if the second identifier corresponds to a wireless-device-specific identifier. In some embodiments, the second identifier is a radio network temporary identifier (RNTI). In NR, RNTIs are summarized in section 7.1 of 3GPP TS 38.321 V15.0.0. However, not all RNTIs are wireless-device-specific RNTIs. For instance, the paging RNTI (P-RNTI) and the system information RNTI (SI-RNTI) are usually not wireless-device-specific RNTIs in the sense that they do not identify a unique wireless device. However, the cell RNTI (C-RNTI), the temporary C-RNTI, and the configured scheduling RNTI (CS-RNTI) are usually wireless-device-specific RNTIs in the sense that they identify a unique wireless device. Understandably, other RNTIs exist and still other RNTIs may be developed in the future. As such, the above RNTIs are non-limitative examples of RNTIs. Hence, when the first parameter is configured and when the second identifier is a wireless-device-specific identifier, the first identifier equals the first parameter. If the first parameter has not been configured and/or if the second identifier is not a wireless-device-specific identifier, then there are at least two possible scenarios.
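The two initialization relations above, together with the identifier selection they depend on (covering the scenarios discussed in this description), can be sketched as follows. The function and parameter names are our own illustration, not names from the specification.

```python
def select_first_identifier(first_param, is_ue_specific_rnti,
                            second_param=None, default_id=0):
    """First identifier: the first parameter if configured and the RNTI is
    wireless-device-specific; else a common (second) parameter if configured;
    else a default value such as the cell identity N_ID^cell."""
    if first_param is not None and is_ue_specific_rnti:
        return first_param
    if second_param is not None and not is_ue_specific_rnti:
        return second_param
    return default_id

def c_init_pdsch(n_rnti, q, n_id):
    """c_init = n_RNTI * 2^15 + q * 2^14 + n_ID (section 7.3.1.1)."""
    return n_rnti * 2**15 + q * 2**14 + n_id

def c_init_dmrs(n_s, l, n_id_nscid, n_scid):
    """c_init = (2^17 (14 n_s + l + 1)(2 N + 1) + 2 N + n_SCID) mod 2^31,
    with N the first identifier (section 7.4.1.1.1)."""
    return (2**17 * (14 * n_s + l + 1) * (2 * n_id_nscid + 1)
            + 2 * n_id_nscid + n_scid) % 2**31
```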
In a first scenario, when the first parameter has not been configured and/or when the second identifier is not a wireless-device-specific identifier, the first identifier takes a default value. In NR, this default value can be a cell identifier such as N_ID^cell. In some embodiments, the following sections of 3GPP TS 38.211 V15.0.0 may be modified as follows to enable one or more of the described embodiments, including the first scenario.

======<<<<<<3GPP TS 38.211 V15.0.0>>>>>======

7.3.1.1 Scrambling

Up to two codewords can be transmitted, q ∈ {0, 1}. In case of single-codeword transmission, q = 0. For each codeword q, the UE shall assume the block of bits b^(q)(0), . . . , b^(q)(M_bit^(q) - 1), where M_bit^(q) is the number of bits in codeword q transmitted on the physical channel, are scrambled prior to modulation, resulting in a block of scrambled bits b̃^(q)(0), . . . , b̃^(q)(M_bit^(q) - 1) according to

b̃^(q)(i) = (b^(q)(i) + c^(q)(i)) mod 2

where the scrambling sequence c^(q)(i) is given by clause 5.2.1. The scrambling sequence generator shall be initialized with

c_init = n_RNTI · 2^15 + q · 2^14 + n_ID

where
- n_ID ∈ {0, 1, . . . , 1023} equals the higher-layer parameter Data-scrambling-Identity if configured and if the RNTI equals the C-RNTI (or another UE-specific RNTI),
- n_ID = N_ID^cell otherwise.

7.4.1.1.1 Sequence Generation

The UE shall assume the reference-signal sequence r(m) is defined by

r(m) = (1/√2)(1 - 2·c(2m)) + j(1/√2)(1 - 2·c(2m + 1))

where the pseudo-random sequence c(i) is defined in clause 5.2. The pseudo-random sequence generator shall be initialized with

c_init = (2^17 (14 n_s + l + 1)(2 N_ID^(n_SCID) + 1) + 2 N_ID^(n_SCID) + n_SCID) mod 2^31

where l is the OFDM symbol number within the slot and
- n_SCID ∈ {0, 1} and N_ID^(n_SCID) ∈ {0, 1, . . . , 65535} is given by the higher-layer parameter DL-DMRS-Scrambling-ID if provided and if the RNTI equals the C-RNTI (or another UE-specific RNTI),
- n_SCID = 0 and N_ID^(n_SCID) = N_ID^cell otherwise.

======<<<<<<3GPP TS 38.211 V15.0.0>>>>>======

In a second scenario, a second parameter (e.g., Data-scrambling-Identity-Common or DL-DMRS-Scrambling-ID-Common) may be configured. In such a case, if the second parameter has been configured and if the second identifier is not a wireless-device-specific identifier, then the first identifier takes the value of the second parameter. In some embodiments, the following sections of 3GPP TS 38.211 V15.0.0 may be modified as follows to enable one or more of the described embodiments, including the second scenario.

======<<<<<<3GPP TS 38.211 V15.0.0>>>>>======

7.3.1.1 Scrambling

Up to two codewords can be transmitted, q ∈ {0, 1}. In case of single-codeword transmission, q = 0. For each codeword q, the UE shall assume the block of bits b^(q)(0), . . . , b^(q)(M_bit^(q) - 1), where M_bit^(q) is the number of bits in codeword q transmitted on the physical channel, are scrambled prior to modulation, resulting in a block of scrambled bits b̃^(q)(0), . . . , b̃^(q)(M_bit^(q) - 1) according to

b̃^(q)(i) = (b^(q)(i) + c^(q)(i)) mod 2

where the scrambling sequence c^(q)(i) is given by clause 5.2.1. The scrambling sequence generator shall be initialized with

c_init = n_RNTI · 2^15 + q · 2^14 + n_ID

where
- n_ID ∈ {0, 1, . . . , 1023} equals the higher-layer parameter Data-scrambling-Identity if configured and if the RNTI equals the C-RNTI (or another UE-specific RNTI),
- n_ID ∈ {0, 1, . . . , 1023} equals the higher-layer parameter Data-scrambling-Identity-Common if configured and if the RNTI does not equal the C-RNTI (or another UE-specific RNTI),
- n_ID = N_ID^cell otherwise.

7.4.1.1.1 Sequence Generation

The UE shall assume the reference-signal sequence r(m) is defined by

r(m) = (1/√2)(1 - 2·c(2m)) + j(1/√2)(1 - 2·c(2m + 1))
where the pseudo-random sequence c(i) is defined in clause 5.2. The pseudo-random sequence generator shall be initialized with

c_init = (2^17 (14 n_s + l + 1)(2 N_ID^(n_SCID) + 1) + 2 N_ID^(n_SCID) + n_SCID) mod 2^31

where l is the OFDM symbol number within the slot and
- n_SCID ∈ {0, 1} and N_ID^(n_SCID) ∈ {0, 1, . . . , 65535} is given by the higher-layer parameter DL-DMRS-Scrambling-ID if provided and if the RNTI equals the C-RNTI (or another UE-specific RNTI),
- n_SCID ∈ {0, 1} and N_ID^(n_SCID) ∈ {0, 1, . . . , 65535} is given by the higher-layer parameter DL-DMRS-Scrambling-ID-Common if provided and if the RNTI does not equal the C-RNTI (or another UE-specific RNTI),
- n_SCID = 0 and N_ID^(n_SCID) = N_ID^cell otherwise.

======<<<<<<3GPP TS 38.211 V15.0.0>>>>>======

Understandably, other scenarios are possible. Once the pseudo-random sequence initialization value is obtained, the wireless device then obtains a pseudo-random sequence based, at least in part, on the previously obtained pseudo-random sequence initialization value (action S202). In NR for instance, the generation of the pseudo-random sequence is described in section 5.2.1 of 3GPP TS 38.211 V15.0.0. Then, the wireless device110may use the obtained pseudo-random sequence differently depending on whether the pseudo-random sequence is used with a downlink message or with a downlink reference signal. When the obtained pseudo-random sequence is to be used with a downlink message (e.g., a PDSCH message), the wireless device110usually first receives a downlink message which has been previously scrambled by the sending radio network node (see action S104) (action S204), and then uses the obtained pseudo-random sequence to unscramble the received (and previously scrambled) downlink message (action S206). When the obtained pseudo-random sequence is to be used with a downlink reference signal (e.g., a DM-RS), the wireless device110generates a (local) downlink reference signal based, at least in part, on the obtained pseudo-random sequence (action S208).
The wireless device110also receives a downlink reference signal (e.g., DM-RS) from the radio network node (action S210). The wireless device then estimates a downlink channel based on a comparison between the downlink reference signal received from the radio network node and the reference signal generated by the wireless device (action S212). The results of the channel estimation can be used, for instance, in demodulating downlink messages received from the radio network node130. Embodiments of a radio network node130will now be described with respect toFIGS.4and5. As used herein, a “radio network node” is any node in a radio access network of a wireless communication network that operates to wirelessly transmit and/or receive signals. Notably, various communication standards sometimes use different terminologies when referring to or describing radio network nodes. For instance, in addition to base station, 3GPP also uses Node B (NB), evolved Node B (eNB), and Next Generation Node B (gNB). For its part, IEEE 802.11 (also known as WiFi™) uses the term access point (AP). Some examples of a radio network node include, but are not limited to, a base station (e.g., a New Radio (NR) base station (gNB) in a Third Generation Partnership Project (3GPP) Fifth Generation (5G) NR network or an enhanced or evolved Node B (eNB) in a 3GPP Long Term Evolution (LTE) network), a high-power or macro base station, a low-power base station (e.g., a micro base station, a pico base station, a home eNB, or the like), and a relay node. FIG.4is a block diagram of an exemplary radio network node130according to some embodiments. Radio network node130may include one or more of a transceiver132, a processor134, a memory136, and one or more communication interface(s)146. In some embodiments, the transceiver132facilitates transmitting wireless signals to and receiving wireless signals from wireless devices110(e.g., via transmitter(s) (Tx)138, receiver(s) (Rx)140, and antenna(s)142). 
The processor134executes instructions to provide some or all of the functionalities described above as being provided by a radio network node130, and the memory136stores the instructions to be executed by the processor134. In some embodiments, the processor134and the memory136form processing circuitry144. The communication interface(s)146enable the radio network node130to communicate with other network nodes, including other radio network nodes (via a radio access network interface) and core network nodes (via a core network interface). The processor134may include any suitable combination of hardware to execute instructions and manipulate data to perform some or all of the described functions of radio network node130, such as those described above. In some embodiments, the processor134may include, for example, one or more computers, one or more central processing units (CPUs), one or more microprocessors, one or more application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs) and/or other logic. The memory136is generally operable to store instructions, such as a computer program, software, an application including one or more of logic, rules, algorithms, code, tables, etc. and/or other instructions capable of being executed by a processor. Examples of memory include computer memory (for example, Random Access Memory (RAM) or Read Only Memory (ROM)), mass storage media (for example, a hard disk), removable storage media (for example, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory computer-readable and/or computer-executable memory devices that store information.
In some embodiments, the communication interface146is communicatively coupled to the processor134and may refer to any suitable device operable to receive input for radio network node130, send output from radio network node130, perform suitable processing of the input or output or both, communicate to other devices, or any combination of the preceding. The communication interface146may include appropriate hardware (e.g., port, modem, network interface card, etc.) and software, including protocol conversion and data processing capabilities, to communicate through a network. Other embodiments of radio network node130may include additional components beyond those shown inFIG.4that may be responsible for providing certain aspects of the radio network node's functionalities, including any of the functionalities described above and/or any additional functionalities (including any functionality necessary to support the solutions described above). The various different types of network nodes may include components having the same physical hardware but configured (e.g., via programming) to support different radio access technologies, or may represent partly or entirely different physical components. In some embodiments, the radio network node130may comprise a series of modules (or units)148configured to implement some or all the functionalities of the radio network node130described above. 
Referring toFIG.5, in some embodiments, the radio network node130may comprise a (first) obtaining module configured to obtain a pseudo-random sequence initialization value, the pseudo-random sequence initialization value being based, at least in part, on a first identifier, wherein the first identifier equals a first parameter if the first parameter has been configured and if a second identifier corresponds to a wireless-device-specific identifier, and a (second) obtaining module configured to obtain a pseudo-random sequence based, at least in part, on the pseudo-random sequence initialization value. In some embodiments, the radio network node130may comprise, or further comprise, a scrambling module configured to scramble a downlink message with the obtained pseudo-random sequence, and a transmitting module configured to transmit the scrambled downlink message to a wireless device. Additionally, or alternatively, in some embodiments, the radio network node130may comprise, or further comprise, a generating module configured to generate a downlink reference signal based at least in part on the obtained pseudo-random sequence, and a transmitting module configured to transmit the generated downlink reference signal to a wireless device. It will be appreciated that the various modules148may be implemented as a combination of hardware and/or software, for instance, the processor134, memory136, and transceiver(s)132of radio network node130shown inFIG.4. Some embodiments may also include additional modules148to support additional and/or optional functionalities. Some embodiments of a wireless device110will now be described with respect toFIGS.6and7. Even though the expression "wireless device" is used throughout the description, it is to be understood that the expression is used generically.
In that sense, a wireless device (WD) generally refers to a device capable, configured, arranged and/or operable to communicate wirelessly with one or more network nodes (e.g., radio network nodes) and/or with one or more other wireless devices. Notably, different communication standards may use different terminology when referring to or describing wireless device. For instance, 3GPP uses the terms User Equipment (UE) and Mobile Terminal (MT). For its part, 3GPP2 uses the terms Access Terminal (AT) and Mobile Station (MS). And IEEE 802.11 (also known as WiFi™) uses the term station (STA). In some embodiments, a wireless device may be configured to transmit and/or receive information without direct human interaction. Such a wireless device may be referred to as a Machine Type Communication (MTC) device or as a Machine-to-Machine (M2M) device. FIG.6is a block diagram of an exemplary wireless device110according to some embodiments. Wireless device110includes one or more of a transceiver112, processor114, and memory116. In some embodiments, the transceiver112facilitates transmitting wireless signals to and receiving wireless signals from radio network node130(e.g., via transmitter(s) (Tx)118, receiver(s) (Rx)120, and antenna(s)122). The processor114executes instructions to provide some or all of the functionalities described above as being provided by wireless device110, and the memory116stores the instructions to be executed by the processor114. In some embodiments, the processor114and the memory116form processing circuitry124. The processor114may include any suitable combination of hardware to execute instructions and manipulate data to perform some or all of the described functions of wireless device110, such as the functions of wireless device110described above. 
In some embodiments, the processor114may include, for example, one or more computers, one or more central processing units (CPUs), one or more microprocessors, one or more application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs) and/or other logic. The memory116is generally operable to store instructions, such as a computer program, software, an application including one or more of logic, rules, algorithms, code, tables, etc. and/or other instructions capable of being executed by a processor. Examples of memory include computer memory (for example, Random Access Memory (RAM) or Read Only Memory (ROM)), mass storage media (for example, a hard disk), removable storage media (for example, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory computer-readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by the processor of wireless device110. Other embodiments of wireless device110may include additional components beyond those shown inFIG.6that may be responsible for providing certain aspects of the wireless device's functionalities, including any of the functionalities described above and/or any additional functionalities (including any functionality necessary to support the solution(s) described above). As just one example, wireless device110may include input devices and circuits, output devices, and one or more synchronization units or circuits, which may be part of the processor. Input devices include mechanisms for entry of data into wireless device110. As an example, wireless device110may include additional hardware126such as input devices and output devices. Input devices include input mechanisms such as a microphone, input elements, a display, etc. Output devices include mechanisms for outputting data in audio, video and/or hard copy format.
For example, output devices may include a speaker, a display, etc. FIG.7is a block diagram of another exemplary wireless device110according to some embodiments. As illustrated, in some embodiments, the wireless device110may comprise a series of modules (or units)128configured to implement some or all of the functionalities of the wireless device110described above. More particularly, in some embodiments, the wireless device110may comprise a (first) obtaining module configured to obtain a pseudo-random sequence initialization value, the pseudo-random sequence initialization value being based, at least in part, on a first identifier, wherein the first identifier equals a first parameter if the first parameter has been configured and if a second identifier corresponds to a wireless-device-specific identifier, and a (second) obtaining module configured to obtain a pseudo-random sequence based, at least in part, on the pseudo-random sequence initialization value. In some embodiments, the wireless device may comprise, or further comprise, a receiving module configured to receive a scrambled downlink message from a radio network node, and an unscrambling module configured to unscramble the received scrambled downlink message with the obtained pseudo-random sequence. Additionally, or alternatively, in some embodiments, the wireless device may comprise, or further comprise, a generating module configured to generate a reference signal based, at least in part, on the obtained pseudo-random sequence, a receiving module configured to receive a downlink reference signal from a radio network node, and an estimating module configured to estimate a downlink channel based on a comparison between the downlink reference signal received from the radio network node and the reference signal generated by the wireless device.
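The estimating module's comparison between the received and locally generated reference signals is commonly realized as a per-sample least-squares estimate. The sketch below is our own illustration of that idea, not a method taken from this description or from the specification:

```python
def ls_channel_estimate(received, local_ref):
    """Per-sample least-squares channel estimate H(k) = y(k) / r(k),
    comparing received DM-RS samples with the local reference."""
    return [y / r for y, r in zip(received, local_ref)]

def equalize(data_symbols, h_est):
    """Use the channel estimate in demodulation: divide out the channel."""
    return [s / h for s, h in zip(data_symbols, h_est)]
```

In practice the raw per-sample estimates would be smoothed or interpolated across the resource grid before being used to demodulate downlink messages, as noted in the description above.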
It will be appreciated that the various modules128may be implemented as a combination of hardware and/or software, for instance, the processor114, memory116, and transceiver(s)112of wireless device110shown inFIG.6. Some embodiments may also include additional modules128to support additional and/or optional functionalities. Some embodiments may be represented as a non-transitory software product stored in a machine-readable medium (also referred to as a computer-readable medium, a processor-readable medium, or a computer usable medium having a computer-readable program code embodied therein). The machine-readable medium may be any suitable tangible medium including a magnetic, optical, or electrical storage medium including a diskette, compact disk read only memory (CD-ROM), digital versatile disc read only memory (DVD-ROM) memory device (volatile or non-volatile), or similar storage mechanism. The machine-readable medium may contain various sets of instructions, code sequences, configuration information, or other data, which, when executed, cause a processor to perform steps in a method according to one or more of the described embodiments. Those of ordinary skill in the art will appreciate that other instructions and operations necessary to implement the described embodiments may also be stored on the machine-readable medium. Software running from the machine-readable medium may interface with circuitry to perform the described tasks. The above-described embodiments are intended to be examples only. Alterations, modifications and variations may be effected to the particular embodiments by those of skill in the art without departing from the scope of the description.
ABBREVIATIONS

The present description may comprise one or more of the following abbreviations:

CCE Control Channel Element
DAI Downlink Assignment Index
DCI Downlink Control Information
HARQ Hybrid Automatic Repeat Request
PDCCH Physical Downlink Control Channel
PUSCH Physical Uplink Shared Channel
UCI Uplink Control Information
UL-SCH Uplink Shared Channel

RELATED STANDARD REFERENCES

The following references may be related to the present description:

3GPP TS 38.211 V15.0.0
3GPP TS 36.211 V14.5.0
DETAILED DESCRIPTION In accordance with implementations of the present disclosure, PHY transceivers, methods, and computer-readable media are disclosed for reliably and efficiently receiving data via a communication channel despite the presence of electromagnetic or other types of noise and/or interference. Among other advantages over prior approaches, the transceivers, methods, and computer-readable media herein employ an efficient power management scheme, realize an improved signal-to-noise ratio (SNR) despite the presence of interference by performing DME decoding based on a difference between matching filter outputs, select an optimal decision sample by using a peak tracking algorithm, reduce false reception by searching for a preamble before receiving data, and track phase and reduce jitter caused by the interference by using a digital timing loop (DTL). FIG.1is a schematic representation of physical coding sublayer (PCS)102and physical medium attachment (PMA)104components of a physical layer transceiver100, in accordance with an embodiment of the present disclosure. PCS102is a known networking protocol sublayer in the various Ethernet standards. It resides at the top of the physical layer (PHY), and provides an interface between the PMA sublayer and the media-independent interface (MII). It is responsible for data encoding and decoding, scrambling and descrambling, alignment marker insertion and removal, block and symbol redistribution, and lane block synchronization and de-skew. PMA104performs PMA framing and byte synchronization/detection, as well as a variety of other functions, as described in further detail below in connection withFIGS.2through8. PMA104includes a PMA receive chain106, which includes various components that are described in further detail below. FIG.2is a block diagram showing additional components of PMA receive chain106of a physical layer transceiver, in accordance with an embodiment of the present disclosure. 
PMA receive chain106includes an analog front end (AFE)202, an 8-bit 125 mega-samples per second (Msps) analog-to-digital converter (ADC)204, an up-sampler206, a 2-bit slicer208, a matching filter210, peak search circuitry212, peak tracking circuitry and DME decoder214, data select circuitry216, boundary alignment circuitry218(e.g., for 5B/4B boundary alignment), and a digital timing loop that includes a timing error detector (TED)220, a digital timing loop (DTL) filter222, and a 6-bit phase interpolator (PI)224. AFE202is analog signal conditioning circuitry that uses analog amplifiers, filters, and/or the like to condition a received non-equalized signal into a form (e.g., at an appropriate signal level) suitable for ADC204. ADC204converts the analog signal received from AFE202into a digital signal for subsequent digital signal processing. Up-sampler206up-samples a sequence of digital signal samples received from ADC204to produce an approximation of the sequence that would have been obtained by sampling the signal at a higher rate (e.g., two or more multiples of the rate at which ADC204samples the analog signal from AFE202). 2-bit slicer208is an optional component that slices digital samples from up-sampler206into slices of a fixed number of bits, to facilitate processing of data in smaller bit widths. Matching filters210improve a signal-to-noise ratio of the non-equalized signal received from ADC204(or from 2-bit slicer208in embodiments that include slicer208) by correlating the non-equalized signal with two or more signal templates to detect the presence of logic low signal patterns and logic high signal patterns in the non-equalized signal and generating a filtered signal for output. DME decoder214performs DME decoding on the filtered signal and generates for output a decoded signal. 
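The matched filtering and difference-based DME decision just described can be sketched as follows. This is a simplified illustration with made-up templates; real templates would be filter coefficients matched to the line coding and sampling rate, and the decision metric would feed the peak circuitry described next.

```python
def correlate(signal, template):
    """Sliding-window correlation of the signal with one template."""
    n = len(template)
    return [sum(s * t for s, t in zip(signal[i:i + n], template))
            for i in range(len(signal) - n + 1)]

def dme_decision_metric(signal, tmpl_low, tmpl_high):
    """Difference of the two matched-filter outputs: a positive value
    favors the logic-high pattern, a negative value the logic-low one."""
    lo = correlate(signal, tmpl_low)
    hi = correlate(signal, tmpl_high)
    return [h - l for h, l in zip(hi, lo)]
```

Deciding on the difference of the two filter outputs, rather than on either output alone, is what yields the SNR improvement in the presence of interference that the description highlights.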
Peak search circuitry212, peak tracking circuitry214, and data select circuitry216cooperate with one another to compensate for noise or interference distortion in the non-equalized signal. In particular, peak search circuitry212detects peak values in the filtered signal from matching filters210. Peak tracking circuitry214tracks the peak values in the non-equalized signal to identify an optimal value among the peak values. Data select circuitry216selects a decision sample defined by a plurality of peaks, based on the peak tracking, extracts logic high and logic low signals values from the selected decision sample, and forwards the extracted data to boundary alignment circuitry. Boundary alignment circuitry218(e.g., for 5B/4B boundary alignment) performs boundary alignment in a known manner to align the boundaries of the data received from data select circuitry216(e.g., to map five-bit data values to four-bit data values), and outputs the boundary aligned signal to PCS102for processing. TED220, a DTL filter222, and PI224form a digital timing loop. Timing error detector220detects a timing error based on the signal received at the physical layer transceiver. DTL filter222is a low-pass filter that filters out high frequency noise from the signal that was received at the physical layer transceiver, after detection of the timing error, if any, to keep only the error in the digital timing loop to enable correction of the sample phase. Phase interpolator224adjusts a sample clock phase of a sample clock used by ADC converter204, for example, within fine increments of plus or minus 100 parts per million, to compensate for the timing error. FIG.3is a flow diagram illustrating a method300for reliably receiving data at a physical layer transceiver of an automobile in the presence of noise or interference, according to implementations of the subject matter of the present disclosure. 
At302, a physical layer transceiver (e.g., PHY100) receives at AFE202a non-equalized signal via a communication channel in a high noise or interference automotive environment. PHY100(or control circuitry thereof, not separately shown in the figures) then prepares the non-equalized signal for extraction of data by performing one or more of the following steps. At304, PHY100improves a signal-to-noise ratio of the non-equalized signal by using two or more parallel matching filters to correlate the non-equalized signal with two or more signal templates to detect the presence of logic low signal patterns and logic high signal patterns in the non-equalized signal. Matching filter210generates a filtered signal based on the signal received at the physical layer transceiver100. DME decoder214generates a decoded signal based on the filtered signal. At306, PHY100reduces jitter in the non-equalized signal by tracking a phase of the non-equalized signal using a digital timing loop. The digital timing loop, which includes TED220, a DTL filter222, and PI224, generally operates to track and correct for the phase or frequency offset or jitter, if any, that exists (in some cases owing to noise and/or interference) between a transmitter clock of another PHY on the network that is transmitting data on the bus and the clock that the receiver PHY100is using, to maintain synchronization between transmitter and receiver clocks. Jitter or offset in phase or frequency of transmitter and receiver clocks increases bit error rate (BER) and decreases SNR. Synchronizing transmitter and receiver clocks in the manner described above utilizing the DTL loop has the advantage of reducing BER and causing an improvement in SNR. Timing error detector220detects a timing error based on the signal received at the physical layer transceiver. 
DTL filter 222 is a low-pass filter that filters out high-frequency noise from the signal that was received at the physical layer transceiver, after detection of the timing error, if any, to keep only the error in the digital timing loop and thereby enable correction of the sample phase. Phase interpolator 224 adjusts the phase of the sample clock used by ADC 204, for example, in fine increments of plus or minus 100 parts per million, to compensate for the timing error (e.g., to mitigate the impact of jitter). At 308, PHY 100 compensates for noise or interference distortion in the non-equalized signal by detecting, at peak search circuitry 212, peak values in the filtered signal and selecting, at data select circuitry 216, a decision sample defined by a plurality of peaks, the selecting being performed based on tracking, at peak tracking circuitry 214, the peak values in the non-equalized signal to identify an optimal value among the peak values. At 310, PHY 100 (e.g., control circuitry thereof) searches the non-equalized signal for a preamble before initiating a process of receiving payload data, to reduce false data reception caused by noise or interference. At 312, PHY 100 extracts data from the prepared non-equalized signal. Boundary alignment circuitry 218 performs boundary alignment based on the selected logic one or logic zero and outputs a signal derived from the boundary alignment to PCS 102. FIG. 4 is a block diagram showing additional control circuitry 400 that may be included in PMA receive chain 106, in accordance with an embodiment of the present disclosure. In some embodiments, control circuitry 400 manages power consumption of the physical layer transceiver by detecting the presence or the absence of a data signal based on whether data (as opposed to merely noise or interference) is detected (e.g., extracted) in the non-equalized signal received by data select circuitry 216. 
In particular, control circuitry 400 includes a phase and preamble search module 402 that conducts phase searching and preamble searching of non-equalized signals received at PHY 100, and a module 404 that, after the phase search has been completed and the preamble has been found in the non-equalized signal, selectively enables the operation of the DTL and of decoder input peak tracking. More specifically, in response to detecting the presence of the data signal, module 404 enables operation of timing error detector 220, DTL filter 222, phase interpolator 224 (the DTL loop), matching filters 210, DME decoder 214, peak search circuitry 212, peak tracking circuitry 214, and data select circuitry 216. In response to detecting the absence of a data signal, or after receipt/extraction of data is completed and merely noise or interference is present on the non-equalized signal received at PHY 100, module 404 conducts a reset by disabling operation of timing error detector 220, DTL filter 222, phase interpolator 224 (the DTL loop), matching filters 210, DME decoder 214, peak search circuitry 212, peak tracking circuitry 214, and data select circuitry 216. FIG. 5 is a schematic representation of matching filters 210 of a physical layer transceiver, in accordance with an embodiment of the present disclosure. Matching filters 210 improve a signal-to-noise ratio of the non-equalized signal received from ADC 204 (or from 2-bit slicer 208 in embodiments that include slicer 208) by correlating the non-equalized signal with two or more signal templates (one signal template defined by filter coefficients that correspond to a logic zero pattern and another defined by filter coefficients that correspond to a logic one pattern) to detect the presence of logic zero (low) signal patterns and logic one (high) signal patterns in the non-equalized signal, and by generating a filtered signal for output. 
In particular, the signal received at the physical layer transceiver 100 is fed from ADC 204 (or from 2-bit slicer 208 in embodiments that include slicer 208) in parallel through a first sub-filter 502 having coefficients shaped to correspond to a logic one pattern and through a second sub-filter 504 having coefficients shaped to correspond to a logic zero pattern (e.g., the patterns being defined in accordance with known patterns of logic low and logic high DME signals). The coefficients of the first sub-filter 502 and the coefficients of the second sub-filter 504, in some aspects, are selected based on an oversampling factor to achieve a higher resolution. For instance, in one example, the coefficients of first sub-filter 502, which correspond to a logic one output, include a sequence of zeros followed by a sequence of ones (e.g., a number of zeros equal to an oversampling factor followed by a number of ones equal to the oversampling factor); and the coefficients of second sub-filter 504, which correspond to a logic zero output, include a sequence of ones followed by another sequence of ones (e.g., a total number of ones equal to twice the oversampling factor). The absolute magnitude of the output of the first sub-filter 502 is computed at component 506, and the absolute magnitude of the output of the second sub-filter 504 is computed at component 508, to render operation of matching filters 210 independent of the polarity of the non-equalized signal. The filtered signal is then computed at subtractor 510 as the difference between the absolute values outputted by components 508 and 506. FIG. 6 is a graphical representation of how DME decoder 214 performs DME decoding on a signal received from matching filters 210, to generate a decoded signal (e.g., including logic ones and logic zeros), in accordance with an embodiment of the present disclosure. 
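The two-sub-filter computation described above can be sketched in pure Python. This is a minimal sketch, not the circuit's actual implementation: the coefficient shapes and the oversampling factor follow the scheme in the text, but the function name, the example inputs, and the choice of a sliding correlation are illustrative assumptions.

```python
def matched_filter_output(samples, osf):
    # First sub-filter (cf. 502): `osf` zeros followed by `osf` ones;
    # second sub-filter (cf. 504): 2*osf ones. Coefficient shapes follow the
    # oversampling-based scheme described above and are illustrative only.
    h1 = [0] * osf + [1] * osf
    h2 = [1] * (2 * osf)

    def correlate(h):
        # Sliding correlation of the sample stream against one template.
        n = len(h)
        return [sum(h[k] * samples[i + k] for k in range(n))
                for i in range(len(samples) - n + 1)]

    y1, y2 = correlate(h1), correlate(h2)
    # Absolute magnitudes (cf. components 506/508) make the output independent
    # of signal polarity; the filtered signal (cf. subtractor 510) is the
    # difference |second| - |first|.
    return [abs(b) - abs(a) for a, b in zip(y1, y2)]

# A constant-level segment (no mid-period transition) drives the output
# positive, while a mid-period transition drives it negative:
no_transition = matched_filter_output([1, 1, 1, 1], osf=2)[0]   # |4| - |2| = 2
transition = matched_filter_output([1, 1, -1, -1], osf=2)[0]    # |0| - |2| = -2
```

Because both sub-filter outputs pass through an absolute value before the subtraction, inverting the polarity of the input samples leaves the result unchanged, which is the polarity-independence property the text attributes to components 506 and 508.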
DME decoder 214 decodes the filtered signal by detecting positive peaks (e.g., positive peaks 602) in the filtered signal and outputting a respective logic zero based on detection of each positive peak. To detect a logic one, DME decoder 214 analyzes the filtered signal to find negative peaks (e.g., negative peaks 604) that are located within a threshold temporal distance (e.g., a distance equal to one or a multiple of twice the oversampling factor that was used by matching filters 210 to achieve higher resolution) from the nearest positive local peak (e.g., local positive peaks 606) in the filtered signal. In other words, DME decoder 214 outputs a logic one for each negative peak (e.g., negative peak 604) that is located within the threshold temporal distance from a nearest local positive peak. FIG. 7 is a graphical representation of how peak tracking circuitry 214 performs peak tracking, in accordance with an embodiment of the present disclosure. The signal inputted to DME decoder 214 might not be ideal, owing to waveform distortion. In this case, a distance-based decision point (e.g., selecting a decision point based on its distance in magnitude from a previously detected peak) might not be optimum. For instance, in the example of FIG. 7, distance 706 might be the expected typical distance in magnitude from a positive peak to a negative peak. Owing to distortion, however, peak 712 might have a higher value than it would without distortion. In this circumstance, DME decoder 214 might use point 704 as the selected decision point (by virtue of selecting a decision point that appears at the expected distance in magnitude, with distance 706 equal to distance 708, from the previous peak). Peak 704 has a lower (negative) magnitude than peak 702 and is therefore a less optimal decision point than peak 702, since the higher the signal value, the greater the SNR. 
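One possible reading of the peak-based decoding rule above can be sketched in Python. This is a simplified, hypothetical sketch: the naive three-point peak detection, the fixed `2*osf` threshold, and the per-positive-peak classification are assumptions, not the decoder's actual circuitry.

```python
def dme_decode(filtered, osf):
    # Simplified sketch of the rule above: a local positive peak with no
    # negative peak within the threshold distance is read as logic 0; a
    # positive peak with a negative peak within 2*osf samples is read as
    # logic 1 (assumed threshold, per the text's "twice the oversampling
    # factor").
    def is_peak(i, sign):
        v = filtered[i] * sign
        return (v > 0 and v >= filtered[i - 1] * sign
                and v >= filtered[i + 1] * sign)

    pos = [i for i in range(1, len(filtered) - 1) if is_peak(i, +1)]
    neg = [i for i in range(1, len(filtered) - 1) if is_peak(i, -1)]
    # Classify each positive peak by whether a negative peak lies close enough.
    return [1 if any(abs(n - p) <= 2 * osf for n in neg) else 0 for p in pos]

# Positive peak at index 1 has a negative peak two samples away (logic 1);
# the lone positive peak at index 6 does not (logic 0):
bits = dme_decode([0, 3, 0, -2, 0, 0, 2, 0], osf=1)  # -> [1, 0]
```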
To compensate for this situation, peak tracking circuitry 214 and the DME decoder track peak values and select, as the optimal value, the value (e.g., value 702) having the greatest magnitude from among the peak values (e.g., instead of selecting value 704 based on a sub-optimal temporal distance-based decision), even if peak value 702 is located at a distance (in magnitude) from peak 712 that differs from the expected distance. That is, in the example of FIG. 7, DME decoder 214 would select peak 702 instead of peak 704 as the decision point. By tracking peaks and selecting decision points based on peak magnitudes, DME decoder 214 yields increased SNR performance, since a greater signal value is used for the given amount of noise. FIG. 8 is a graphical representation of how preamble searching operates, in accordance with an embodiment of the present disclosure. The initial portion of an Ethernet frame is marked by a preamble, which is a predetermined fixed value and which forms part of the Ethernet packet at the physical layer. The preamble consists of a pattern (e.g., a seven-byte pattern) of 1 and 0 bits that allows devices on a network to synchronize their receiver clocks, providing bit-level synchronization. In the example of FIG. 8, detecting the preamble includes detecting, at DME decoder 214 in the manner described above in connection with FIG. 6, a sequential pattern of logic zero and logic one values (e.g., 00011) in the sequence of logical bit values that DME decoder 214 receives from matching filters 210. DME decoder 214 detects the preamble in the sequence of logical bit values received from matching filters 210 by performing a sliding-window bit-wise comparison between the bit values of the known preamble sequence and the bit values of each received sequence (of the same number of bits as the preamble sequence) to determine whether the received bit sequence matches the preamble bit sequence. 
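The sliding-window bit-wise comparison just described can be sketched as follows. The five-bit pattern is the example pattern from the text; a real Ethernet preamble is seven bytes long, and the function name and return convention are illustrative assumptions.

```python
def find_preamble(bits, preamble):
    # Sliding-window bit-wise comparison against the known preamble pattern;
    # returns the index of the first exact match, or -1 if the preamble is
    # absent (in which case data reception would not be initiated).
    n = len(preamble)
    for i in range(len(bits) - n + 1):
        if bits[i:i + n] == preamble:
            return i
    return -1

preamble = [0, 0, 0, 1, 1]            # example pattern from the text
stream = [1, 0, 1, 0, 0, 0, 1, 1, 0]  # decoded bits from the DME decoder
assert find_preamble(stream, preamble) == 3
assert find_preamble([1, 1, 1], preamble) == -1
```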
High amounts of noise and interference, such as are often present in an automotive network environment, give rise on the bus to random signal patterns, which can sometimes match patterns of data that might otherwise be expected to be transmitted on the bus by another PHY. In such circumstances, a receiving PHY might erroneously detect the random signal pattern caused by noise and interference as false data, i.e., as data transmitted by another PHY on the bus. Because the likelihood that noise and interference will give rise to a signal pattern that precisely matches an entire Ethernet preamble pattern (e.g., a seven-byte pattern) is low, preventing the physical layer transceiver from receiving data until the preamble is detected mitigates false data reception caused by noise or interference. While various embodiments of the present disclosure have been shown and described herein, such embodiments are provided by way of example only. Numerous variations, changes, and substitutions relating to the embodiments described herein are applicable without departing from the disclosure. It is noted that various alternatives to the embodiments of the disclosure described herein may be employed in practicing the disclosure without departing from the scope of the claims. While operations are depicted in the drawings in a particular order, this is not to be construed as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve the desirable results.
11943084
DESCRIPTION OF EMBODIMENTS A signal processing method in this application may be applied to a backscatter communication system. Refer to FIG. 1. A backscatter communication system includes an exciter 101, a backscatter device 102, and a receiver 103. The exciter 101 may also be referred to as a helper, an interrogator, or a reader. The exciter 101 may be a module in a network device or a module in a terminal. The backscatter device 102 is a radio-frequency tag. The radio-frequency tag may be an active tag, a passive tag, or a semi-active tag. The backscatter device 102 may also be referred to as a radio frequency identification device. The backscatter device 102 may include a data receiving unit and a data detection unit. The receiver 103 is an apparatus configured to receive a radio signal. The receiver 103 may specifically be a radio signal receiving module in the network device or a radio signal receiving module in the terminal. The exciter 101 may transmit a radio frequency signal. After receiving the radio frequency signal, the backscatter device 102 includes data in the radio frequency signal to obtain a backscatter signal, and then transmits the backscatter signal. The receiver 103 may separately receive the radio frequency signal transmitted by the exciter 101 and the backscatter signal transmitted by the backscatter device 102. The receiver 103 demodulates the backscatter signal to obtain the data sent by the backscatter device 102. The data may be a radio-frequency identifier (RFID). When a sensor is integrated in the backscatter device 102, or the backscatter device 102 is connected to a sensor, the data may alternatively be data collected by the sensor, for example, humidity data collected by a humidity sensor or temperature data collected by a temperature sensor. The sensor and the data collected by the sensor are not limited to the foregoing examples. Refer to FIG. 2. 
In another backscatter communication system, the exciter 101 and the receiver 103 are integrated in a reader 104. In the backscatter communication system, a communication link on which the exciter 101 transmits a signal to the backscatter device 102 is referred to as a forward communication link, and the signal used on the forward communication link is referred to as a forward communication signal. The forward communication signal may not only carry data but also charge the backscatter device 102. The forward communication signal is usually an ASK signal, and the ASK signal is a square wave signal. In some ASK signals, a high level indicates 1 and a low level indicates 0; or a high level indicates 0 and a low level indicates 1. In an ASK signal encoded by using Manchester code, a jump from high level to low level indicates 1, and a jump from low level to high level indicates 0. When a digital signal is transmitted by using the ASK signal, the digital 0s and 1s are transmitted by using a square wave. Consequently, the ASK signal has narrow bandwidth, and the energy of the signal is concentrated at the center of the frequency band of the ASK signal, resulting in great interference to a cellular communication system. To resolve the foregoing problem, this application provides a signal processing method applied to the cellular communication system, to disperse the energy of the ASK signal across the frequency band of the signal, so that interference to another communication device is reduced. For details, refer to the following embodiment. Refer to FIG. 3. An embodiment of the signal processing method in this application includes the following steps. Step 301: Generate a data signal based on original data. Step 302: Scramble the data signal by using a pseudo-random signal. The pseudo-random signal is also referred to as a pseudo-random sequence or a pseudo-random code. 
The pseudo-random signal includes at least one of a ZC sequence, a gold sequence, an m-sequence, a binary phase shift keying (BPSK) signal sequence, or a π/2-BPSK signal sequence. Elements in the BPSK signal sequence may be randomly arranged or fixedly arranged, and the sequence may be a BPSK signal sequence obtained based on a gold sequence. Likewise, elements in the π/2-BPSK signal sequence may be randomly arranged or fixedly arranged, and the sequence may be, but is not limited to, a π/2-BPSK signal sequence obtained based on a gold sequence. The pseudo-random signal may be any one of the foregoing sequences, or a combination sequence including two of the sequences. The combination sequence may be a combination of a ZC sequence and a random BPSK signal sequence, a combination of a ZC sequence and a random π/2-BPSK signal sequence, a combination of a ZC sequence and a BPSK signal sequence obtained based on a gold sequence, or a combination of a ZC sequence and a π/2-BPSK signal sequence obtained based on a gold sequence. The following describes in detail the sequences in this application. 1. The ZC sequence is also referred to as a Zadoff-Chu sequence. The mth element seq(m) in the ZC sequence is seq(m) = exp(−j·π·m·(m−1)·R/N), where m is any integer in [0, N−1]; N is the sequence length, namely, the total quantity of elements included in the ZC sequence; j is the imaginary unit; and R is a sequence reference value. 2. The gold sequence is a pseudo-random sequence generated based on two m-sequences. 3. The m-sequence is a pseudo-random sequence generated from a preset polynomial. 4. The BPSK signal sequence may be a sequence including 1 and −1, or may be a sequence including j and −j. When an element in the sequence is 1, the phase is 0; when an element is −1, the phase is π. 5. The π/2-BPSK signal sequence is obtained by further performing phase modulation on two adjacent elements of a BPSK signal sequence. 
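As a sanity check on the ZC formula above, the sequence can be computed directly with Python's cmath. This is a minimal sketch; the values of N and R below are arbitrary illustrations, and only the constant-magnitude property of the elements is asserted.

```python
import cmath

def zc_sequence(N, R):
    # m-th element: seq(m) = exp(-j*pi*m*(m-1)*R/N), for m in [0, N-1],
    # where N is the sequence length and R is the sequence reference value.
    return [cmath.exp(-1j * cmath.pi * m * (m - 1) * R / N) for m in range(N)]

seq = zc_sequence(7, 1)  # illustrative N and R
# Every element lies on the unit circle (the exponent is purely imaginary),
# so the sequence has constant magnitude:
assert all(abs(abs(x) - 1.0) < 1e-9 for x in seq)
```

The constant-magnitude property is one reason such sequences are attractive for scrambling: multiplying a signal by a ZC sequence reshapes its spectrum without changing per-element energy.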
Two adjacent elements of the BPSK signal sequence may be denoted as q1 and q2·exp(jπ/2), where exp(jπ/2) indicates that the phase is increased by π/2. In some embodiments, step 302 includes: performing an exclusive OR operation on the pseudo-random signal and the data signal, and using the result of the exclusive OR operation as the scrambled data signal. In some embodiments, step 302 alternatively includes: performing a multiplication operation on the pseudo-random signal and the data signal, and using the result of the multiplication operation as the scrambled data signal. The scrambled data signal may alternatively be an operation result obtained by performing, on the pseudo-random signal and the data signal, an operation other than the exclusive OR operation and the multiplication operation. Step 303: Generate an ASK signal based on the scrambled data signal. Step 304: Transmit the ASK signal. In this embodiment, when the data signal is scrambled by using the pseudo-random signal, the energy of the ASK signal can be dispersed across the frequency band of the ASK signal, so that the range over which the ASK signal interferes with another signal can be reduced. In addition, when the data signal is scrambled by using the pseudo-random signal, the data signal is converted into a random-like signal, so that interference to another communication device can be reduced. In the foregoing embodiment, a method for scrambling the data signal is described. In this application, the data signal may be scrambled directly. In addition, after the data is repeated, the repeated data may be scrambled by using the pseudo-random signal. Similar to scrambling the data signal, after the repeated data is scrambled, the repeated data is converted into a random-like signal, so that the interference of the ASK signal to another communication device can be reduced. 
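The two scrambling variants of step 302 can be sketched as follows. The bit patterns and seed are illustrative; descrambling works because XOR with the same sequence, or multiplication by the same ±1-valued sequence, is self-inverse.

```python
import random

def scramble_xor(bits, prn):
    # Step 302, variant 1: exclusive OR of the data signal with the
    # pseudo-random signal, element by element.
    return [d ^ p for d, p in zip(bits, prn)]

def scramble_mul(symbols, prn):
    # Step 302, variant 2: element-wise multiplication, e.g. with a
    # +/-1-valued (BPSK-like) pseudo-random sequence.
    return [d * p for d, p in zip(symbols, prn)]

rng = random.Random(0)        # illustrative fixed seed
data = [1, 0, 1, 1, 0, 0, 1, 0]
prn = [rng.randint(0, 1) for _ in data]
scrambled = scramble_xor(data, prn)
assert scramble_xor(scrambled, prn) == data   # XOR again descrambles

bpsk = [1, -1, 1, 1, -1, -1, 1, -1]
prn2 = [1 if b else -1 for b in prn]
assert scramble_mul(scramble_mul(bpsk, prn2), prn2) == bpsk  # p*p = 1
```

A receiver that knows the pseudo-random sequence can therefore recover the original data with the same operation used for scrambling, while a device that does not know the sequence observes a random-like signal.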
In some embodiments, before step 301, the signal processing method further includes: sending an excitation signal to a backscatter device; receiving a backscatter signal sent by the backscatter device; and demodulating the backscatter signal to obtain the original data. The backscatter signal is generated by the backscatter device in response to the excitation signal. In some embodiments, step 303 includes: performing time-frequency transformation on the scrambled data signal; filtering the frequency domain signal obtained through the time-frequency transformation; mapping the filtered frequency domain signal to a carrier; performing frequency-time transformation on the carrier signal obtained through the mapping; and filtering the frequency-time transformed carrier signal to obtain the ASK signal. In this embodiment, time-frequency transformation is performed on the scrambled data signal; then frequency-domain filtering is performed on the frequency domain signal; then the filtered frequency domain signal is mapped to a carrier, and frequency-time transformation and time-domain filtering are performed. In this way, a method for generating the ASK signal based on the scrambled signal is provided. In some embodiments, step 303 includes: filtering the scrambled data signal to obtain the ASK signal. That is, the ASK signal may also be generated by performing only time-domain filtering on the scrambled data signal. In this way, another method for generating the ASK signal based on the scrambled signal is provided. The foregoing describes the signal processing method in this application, and the following describes an apparatus in this application. This application provides a terminal that can implement the signal processing method in the foregoing embodiment. Refer to FIG. 4. 
In an embodiment, a terminal 400 includes:
a data generation unit 401, configured to generate a data signal based on original data, where the data signal may be downlink control information on a physical downlink control channel, data information on a physical downlink shared channel, or another data signal, which is not limited in this application;
a scrambling unit 402, configured to scramble the data signal by using a pseudo-random signal;
a modulation unit 403, configured to generate an ASK signal based on the scrambled data signal; and
a transmitting unit 404, configured to transmit the ASK signal.
In some embodiments, the data generation unit 401 is specifically configured to repeat the original data to obtain the data signal. In some embodiments, the data generation unit 401 is specifically configured to perform repeated sampling on the original data to obtain the data signal. In some embodiments, the scrambling unit 402 is specifically configured to perform an exclusive OR operation on the pseudo-random signal and the data signal. In some embodiments, the scrambling unit 402 is specifically configured to perform a multiplication operation on the pseudo-random signal and the data signal. Refer to FIG. 5. In some embodiments, the modulation unit 403 includes:
a time-frequency transformation subunit 4031, configured to perform time-frequency transformation on the scrambled data signal;
a first filtering subunit 4032, configured to filter the frequency domain signal obtained through the time-frequency transformation;
a mapping subunit 4033, configured to map the filtered frequency domain signal to a carrier;
a frequency-time transformation subunit 4034, configured to perform frequency-time transformation on the carrier signal obtained through the mapping; and
a second filtering subunit 4035, configured to filter the frequency-time transformed carrier signal to obtain the ASK signal. 
In some embodiments, the modulation unit 403 is specifically configured to filter the scrambled data signal to obtain the ASK signal. In the foregoing embodiments, the pseudo-random signal includes at least one of a ZC sequence, a BPSK signal sequence, a π/2-BPSK signal sequence, a gold sequence, or an m-sequence. Refer to FIG. 6. In some embodiments, a reader 600 in this application includes a receiver 601 and an exciter 602 that are connected to each other. The exciter 602 can implement the signal processing method in the embodiment shown in FIG. 3. Refer to FIG. 7. In some embodiments, a network device 700 in this application includes a processor 701, a memory 702, a radio frequency circuit 703, and an antenna 704, where the processor 701 is connected to each of the memory 702 and the radio frequency circuit 703, and the radio frequency circuit 703 is connected to the antenna 704. The processor 701 is configured to implement the function of controlling and managing resources in the network device 700. For example, the processor 701 may include a digital signal processor device, a microprocessor device, an analog-to-digital converter, and a digital-to-analog converter. Control and signal processing functions of the network device 700 may be allocated between these devices based on the capabilities of these devices. The memory 702 is mainly configured to store a software program and data. The radio frequency circuit 703 is configured to perform conversion between a baseband signal and a radio frequency signal, and to process the radio frequency signal. The antenna 704 is mainly configured to send and receive a radio frequency signal in the form of an electromagnetic wave. In this embodiment of this application, the antenna 704 and the radio frequency circuit 703, which have receiving and sending functions, may be considered as a transceiver unit of the network device 700. The transceiver unit may also be referred to as a transceiver, a transceiver circuit, a transceiver apparatus, or the like. 
In some embodiments, a component that is in the transceiver unit and that is configured to implement a receiving function may be considered as a receiving unit, and a component that is in the transceiver unit and that is configured to implement a sending function may be considered as a sending unit. In other words, the transceiver unit includes the receiving unit and the sending unit. Specifically, under the control of the processor 701, the receiving unit may receive a backscatter signal from a backscatter device. Alternatively, the receiving unit receives a radio signal sent by a terminal or another network device. It should be understood that the processor 701 is configured to perform step 301 to step 303 in the embodiment shown in FIG. 3, and the sending unit may implement step 304 in the foregoing embodiment under the control of the processor 701. There may be one or more processors 701, one or more memories 702, one or more radio frequency circuits 703, and one or more antennas 704. Although not shown in FIG. 7, the network device 700 may further include a clock synchronization module, a power module, and the like. The network device 700 may be a base station, a relay station, an access point, or a base transceiver station (BTS). Alternatively, the network device may be a global system for mobile communications (GSM) base station, a code division multiple access (CDMA) base station, a wideband code division multiple access (WCDMA) base station, a long term evolution (LTE) base station, a wireless controller in a cloud radio access network (CRAN), a 5G base station, or a network device in a future evolved communication system (for example, a future public land mobile network (PLMN)). The WCDMA base station is also referred to as a NodeB or an NB. The LTE base station is also referred to as an eNB or an eNodeB. Alternatively, the network device 700 may be a wearable device or a vehicle-mounted device. Refer to FIG. 8. 
In another embodiment, a terminal 800 in this application includes a processor 801, a memory 802, a radio frequency circuit 803, and an antenna 804, where the processor 801 is connected to each of the memory 802 and the radio frequency circuit 803, and the radio frequency circuit 803 is connected to the antenna 804. The processor 801 may include circuits used for audio/video and logical functions of the terminal. For example, the processor 801 may include a digital signal processor device, a microprocessor device, an analog-to-digital converter, and a digital-to-analog converter. Control and signal processing functions of the terminal 800 may be allocated between these devices based on the capabilities of these devices. The processor 801 may further include an internal voice coder, an internal data modem, and the like. In addition, the processor 801 may include functions for operating one or more software programs, and the software programs may be stored in the memory 802. Usually, the processor 801 and stored software instructions may be configured to enable the terminal 800 to perform an action. The memory 802 may include a memory that can store an information element related to a mobile user, for example, a subscriber identity module (SIM). In addition to the SIM, the memory 802 may further include another removable and/or fixed memory. The memory 802 may include a transitory memory and/or a non-transitory memory. For example, the transitory memory may include a random access memory (RAM), where the RAM includes a dynamic RAM and/or a static RAM, and an on-chip and/or off-chip cache. The non-transitory memory may be embedded and/or removable. The non-transitory memory may include, for example, a read-only memory, a flash memory, a magnetic storage device such as a hard disk, a floppy disk drive, or a magnetic tape, an optical disc drive and/or medium, and a non-transitory random access memory. 
Similar to the transitory memory, the non-transitory memory may include a cache area used for temporary storage of data. At least a part of the transitory memory and/or the non-transitory memory may be embedded into the processor. The memory 802 may store one or more software programs, instructions, information blocks, pieces of data, and the like that can be executed by the processor 801. For example, the memory 802 may include an identifier that can uniquely identify the terminal, for example, an international mobile equipment identity (IMEI) code. The terminal may further include one or more connection circuit modules configured to share and/or obtain data. For example, the terminal may include a short-range radio frequency transceiver and/or detector, to share data with and/or obtain data from an electronic device based on an RF technology. The radio frequency circuit 803 is configured to perform conversion between a baseband signal and a radio frequency signal, and to process the radio frequency signal. The antenna 804 is mainly configured to send and receive a radio frequency signal in the form of an electromagnetic wave. In this embodiment of this application, the antenna 804 and the radio frequency circuit 803, which have receiving and sending functions, may be considered as a transceiver unit of the terminal device 800. The transceiver unit may also be referred to as a transceiver, a transceiver circuit, a transceiver apparatus, or the like. In some embodiments, a component that is in the transceiver unit and that is configured to implement a receiving function may be considered as a receiving unit, and a component that is in the transceiver unit and that is configured to implement a sending function may be considered as a sending unit. In other words, the transceiver unit includes the receiving unit and the sending unit. Specifically, under the control of the processor 801, the receiving unit may receive a backscatter signal from a backscatter device. 
Alternatively, the receiving unit receives a radio signal sent by the terminal or another network device. It should be understood that the processor 801 is configured to perform step 301 to step 303 in the embodiment shown in FIG. 3, and the transmitting unit may implement step 304 in the foregoing embodiment under the control of the processor 801. There may be one or more processors 801, one or more memories 802, one or more radio frequency circuits 803, and one or more antennas 804. The terminal 800 may include other short-range transceivers, such as an infrared transceiver, a Bluetooth transceiver, and a wireless universal serial bus transceiver. The Bluetooth transceiver can perform an operation based on a low-power or ultra-low-power Bluetooth technology. In this case, the terminal, and more specifically the short-range transceiver, can send data to and/or receive data from an electronic device near the apparatus (for example, within 10 meters). Although not shown in FIG. 8, the terminal can also send data to and/or receive data from an electronic device based on various wireless networking technologies. These technologies include Wi-Fi, low-power Wi-Fi, and wireless local area network (WLAN) technologies such as the IEEE 802.11 technology, the IEEE 802.15 technology, and the IEEE 802.16 technology. The terminal 800 may further include a user interface. The user interface may include a headset or loudspeaker, a microphone, an output apparatus (for example, a display), an input apparatus, and the like. The user interface may be operationally coupled to the processor. In this case, the processor 801 may include a user interface circuit configured to control at least some functions of one or more elements (for example, the loudspeaker, the microphone, and the display) of the user interface. 
The processor 801 and/or the user interface circuit included in the processor 801 may be configured to control one or more functions of the one or more elements of the user interface by using computer program instructions (for example, software and/or firmware) stored in a memory accessible to the processor. Although not shown in FIG. 8, the terminal 800 may include a battery configured to supply power to various circuits related to the mobile equipment, for example, a circuit that provides mechanical vibration as detectable output. The input apparatus may include a device that allows the apparatus to receive data, for example, a keypad, a touch display, a joystick, and/or at least one other input device. The terminal 800 may be user equipment (UE), an access terminal, a user equipment unit, a subscriber station, a mobile station, a remote station, a remote terminal, a mobile device, a terminal device, a wireless communication device, a user equipment agent, a user equipment apparatus, or the like. The access terminal may be a cellular phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), a handheld device having a wireless communication function, a computing device or another processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, a terminal device in a future 5G network, a terminal device in a future evolved PLMN, or the like. Refer to FIG. 9. In an embodiment, a cellular communication system in this application includes a backscatter device 900, a network device 700, and a terminal 800. The network device 700 may transmit an ASK signal to the backscatter device 900. The backscatter device may transmit a backscatter signal to the network device 700 based on the ASK signal. The terminal 800 may transmit an ASK signal to the backscatter device 900. The backscatter device may transmit a backscatter signal to the terminal 800 based on the ASK signal. 
The network device700and the terminal800may perform bidirectional communication through a radio link. This application provides a computer-readable storage medium. The computer-readable storage medium includes instructions. When the instructions are run on a computer, the computer is enabled to perform the method described in any one of the foregoing embodiments. All or some of the foregoing embodiments may be implemented by using software, hardware, firmware, or any combination thereof. When software is used for implementation, all or some of the embodiments may be implemented in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or some of the procedures or functions according to embodiments of the present application are generated. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or may be transmitted from a computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by the computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid-state drive (SSD)), or the like. 
The foregoing embodiments are merely intended for describing the technical solutions of this application, but not for limiting this application. Although this application is described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications to the technical solutions recorded in the foregoing embodiments or equivalent replacements to some technical features thereof may still be made, without departing from the scope of the technical solutions of embodiments of this application.
11943085
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS The present disclosure will now be described with reference to the figures, which in general relate to polar transmitters that may be used in various communication devices, e.g. in wireless communication networks such as cellular telephone networks. Polar transmitters generate a radio frequency (RF) output from an amplitude signal component and a phase signal component, which may be generated digitally by a processor such as a digital baseband processor. These signal components may be combined by an amplifier (e.g. a variable gain amplifier) in a transmitter. In some cases, local oscillator feedthrough (phase signal feedthrough) in such an amplifier may provide a significant source of distortion in an RF signal. In some examples, a correction circuit may correct for local oscillator feedthrough in a transmitter by calculating and applying a complex digital offset to correct the amplitude signal component and the phase signal component. The complex digital offset may include an in-phase offset correction factor and a quadrature offset correction factor. The correction circuit may be implemented digitally (e.g. in a digital baseband processor) and the in-phase and quadrature offset correction factors may be calculated as digital values. One or more Coordinate Rotation Digital Computer (CORDIC) circuits may be used to apply the correction. A CORDIC may also be used to transform a signal from cartesian components (e.g. in-phase and quadrature components) to polar components (e.g. amplitude and phase components). In some cases, the same CORDIC may be used for conversion from cartesian to polar and for applying a complex correction factor. It is understood that the present embodiments of the disclosure may be implemented in many different forms and that the scope of the claims should not be construed as being limited to the embodiments set forth herein. 
Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concepts of the inventive embodiments to those skilled in the art. Indeed, the disclosure is intended to cover alternatives, modifications and equivalents of these embodiments, which are included within the scope and spirit of the disclosure as defined by the appended claims. Furthermore, in the following detailed description of the present embodiments of the disclosure, numerous specific details are set forth in order to provide a thorough understanding. However, it will be clear to those of ordinary skill in the art that the present embodiments of the disclosure may be practiced without such specific details. FIG.1illustrates a wireless network for communicating data. The communication system100includes, for example, user equipment110A-110C, radio access networks (RANs)120A-120B, a core network130, a public switched telephone network (PSTN)140, the Internet150, and other networks160. Additional or alternative networks include private and public data-packet networks including corporate intranets. While certain numbers of these components or elements are shown in the figure, any number of these components or elements may be included in the system100. In one embodiment, the wireless network may be a fifth generation (5G) network including at least one 5G base station which employs orthogonal frequency-division multiplexing (OFDM) and/or non-OFDM and a transmission time interval (TTI) shorter than 1 ms (e.g. 100 or 200 microseconds), to communicate with the communication devices. In general, the term base station may also be used to refer to any of the eNB and the 5G BS (gNB). In addition, the network may further include a network server for processing information received from the communication devices via the at least one eNB or gNB. System100enables multiple wireless users to transmit and receive data and other content. 
The system100may implement one or more channel access methods, such as but not limited to code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), or single-carrier FDMA (SC-FDMA). The user equipment (UE)110A-110C are configured to operate and/or communicate in the system100. For example, the user equipment110A-110C are configured to transmit and/or receive wireless signals or wired signals. Each user equipment110A-110C represents any suitable end user device and may include such devices (or may be referred to) as a user equipment/device, wireless transmit/receive unit (UE), mobile station, fixed or mobile subscriber unit, pager, cellular telephone, personal digital assistant (PDA), smartphone, laptop, computer, touchpad, wireless sensor, wearable devices or consumer electronics device. In the depicted embodiment, the RANs120A-120B include one or more base stations170A,170B (collectively, base stations170), respectively. Each of the base stations170is configured to wirelessly interface with one or more of the UEs110A,110B,110C to enable access to the core network130, the PSTN140, the Internet150, and/or the other networks160. For example, the base stations (BSs)170may include one or more of several well-known devices, such as a base transceiver station (BTS), a Node-B (NodeB), an evolved NodeB (eNB), a next (fifth) generation (5G) NodeB (gNB), a Home NodeB, a Home eNodeB, a site controller, an access point (AP), or a wireless router, or a server, router, switch, or other processing entity with a wired or wireless network. In one embodiment, the base station170A forms part of the RAN120A, which may include other base stations, elements, and/or devices. Similarly, the base station170B forms part of the RAN120B, which may include other base stations, elements, and/or devices. 
Each of the base stations170operates to transmit and/or receive wireless signals within a particular geographic region or area, sometimes referred to as a “cell.” In some embodiments, multiple-input multiple-output (MIMO) technology may be employed having multiple transceivers for each cell. The base stations170communicate with one or more of the user equipment110A-110C over one or more air interfaces (not shown) using wireless communication links. The air interfaces may utilize any suitable radio access technology. It is contemplated that the system100may use multiple channel access functionality, including for example schemes in which the base stations170and user equipment110A-110C are configured to implement the Long Term Evolution wireless communication standard (LTE), LTE Advanced (LTE-A), and/or LTE Multimedia Broadcast Multicast Service (MBMS). In other embodiments, the base stations170and user equipment110A-110C are configured to implement UMTS, HSPA, or HSPA+ standards and protocols. Of course, other multiple access schemes and wireless protocols may be utilized. The RANs120A-120B are in communication with the core network130to provide the user equipment110A-110C with voice, data, application, Voice over Internet Protocol (VoIP), or other services. As appreciated, the RANs120A-120B and/or the core network130may be in direct or indirect communication with one or more other RANs (not shown). The core network130may also serve as a gateway access for other networks (such as the PSTN140, the Internet150, and the other networks160). In addition, some or all of the user equipment110A-110C may include functionality for communicating with different wireless networks over different wireless links using different wireless technologies and/or protocols. The RANs120A-120B may also include millimeter and/or microwave access points (APs). The APs may be part of the base stations170or may be located remote from the base stations170. 
The APs may include, but are not limited to, a connection point (an mmW CP) or a base station170capable of mmW communication (e.g., a mmW base station). The mmW APs may transmit and receive signals in a frequency range, for example, from 24 GHz to 100 GHz, but are not required to operate throughout this range. As used herein, the term base station is used to refer to a base station and/or a wireless access point. AlthoughFIG.1illustrates one example of a communication system, various changes may be made toFIG.1. For example, the communication system100could include any number of user equipment, base stations, networks, or other components in any suitable configuration. It is also appreciated that the term user equipment may refer to any type of wireless device communicating with a radio network node in a cellular or mobile communication system. Non-limiting examples of user equipment are a target device, device-to-device (D2D) user equipment, machine type user equipment or user equipment capable of machine-to-machine (M2M) communication, laptops, PDA, iPad, Tablet, mobile terminals, smart phones, laptop embedded equipment (LEE), laptop mounted equipment (LME) and USB dongles. FIG.2illustrates example details of a UE110that may implement the methods and teachings according to this disclosure. The UE110may for example be a mobile telephone but may be other devices in further examples such as a desktop computer, laptop computer, tablet, hand-held computing device, automobile computing device and/or other computing devices. As shown in the figure, the exemplary UE110is shown as including at least one transmitter202, at least one receiver204, memory206, at least one processor208, and at least one input/output device212. The processor208can implement various processing operations of the UE110. 
For example, the processor208can perform signal coding, data processing, power control, input/output processing, or any other functionality enabling the UE110to operate in the system100(FIG.1). The processor208may include any suitable processing or computing device configured to perform one or more operations. For example, the processor208may include a microprocessor, microcontroller, digital signal processor, field programmable gate array, or application specific integrated circuit. The transmitter202can be configured to modulate data or other content for transmission by at least one antenna210. The transmitter202can also be configured to amplify, filter and frequency convert RF signals before such signals are provided to the antenna210for transmission. The transmitter202can include any suitable structure for generating signals for wireless transmission. The receiver204can be configured to demodulate data or other content received by the at least one antenna210. The receiver204can also be configured to amplify, filter and frequency convert RF signals received via the antenna210. The receiver204can include any suitable structure for processing signals received wirelessly. The antenna210can include any suitable structure for transmitting and/or receiving wireless signals. The same antenna, antenna210, can be used for both transmitting and receiving RF signals, or alternatively, different antennas can be used for transmitting signals and receiving signals. It is appreciated that one or multiple transmitters202could be used in the UE110, one or multiple receivers204could be used in the UE110, and one or multiple antennas210could be used in the UE110. Although shown as separate blocks or components, at least one transmitter202and at least one receiver204could be combined into a transceiver. Accordingly, rather than showing a separate block for the transmitter202and a separate block for the receiver204inFIG.2, a single block for a transceiver could have been shown. 
The UE110further includes one or more input/output devices212. The input/output devices212facilitate interaction with a user. Each input/output device212includes any suitable structure for providing information to or receiving information from a user, such as a speaker, microphone, keypad, keyboard, display, or touch screen. In addition, the UE110includes at least one memory206. The memory206stores instructions and data used, generated, or collected by the UE110. For example, the memory206could store software or firmware instructions executed by the processor(s)208and data used to reduce or eliminate interference in incoming signals. Each memory206includes any suitable volatile and/or non-volatile storage and retrieval device(s). Any suitable type of memory may be used, such as random access memory (RAM), read only memory (ROM), hard disk, optical disc, subscriber identity module (SIM) card, memory stick, secure digital (SD) memory card, and the like. FIG.3illustrates an example BS170that may implement the methods and teachings according to this disclosure. As shown in the figure, the BS170includes at least one processor308, at least one transmitter302, at least one receiver304, one or more antennas310, and at least one memory306. The processor308implements various processing operations of the BS170, such as signal coding, data processing, power control, input/output processing, or any other functionality. Each processor308includes any suitable processing or computing device configured to perform one or more operations. Each processor308could, for example, include a microprocessor, microcontroller, digital signal processor, field programmable gate array, or application specific integrated circuit. Each transmitter302includes any suitable structure for generating signals for wireless transmission to one or more UEs110or other devices. Each receiver304includes any suitable structure for processing signals received wirelessly from one or more UEs110or other devices. 
Although shown as separate blocks or components, at least one transmitter302and at least one receiver304may be combined into a transceiver. Each antenna310includes any suitable structure for transmitting and/or receiving wireless signals. While a common antenna310is shown here as being coupled to both the transmitter302and the receiver304, one or more antennas310could be coupled to the transmitter(s)302, and one or more separate antennas310could be coupled to the receiver(s)304. Each memory306includes any suitable volatile and/or non-volatile storage and retrieval device(s). The technology described herein can be implemented using hardware, software, or a combination of both hardware and software. The software used is stored on one or more of the processor readable storage devices described above to program one or more of the processors to perform the functions described herein. The processor readable storage devices can include computer readable media such as volatile and non-volatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer readable storage media and communication media. Computer readable storage media may be implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Examples of computer readable storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. A computer readable medium or media does (do) not include propagated, modulated or transitory signals. 
Communication media typically embodies computer readable instructions, data structures, program modules or other data in a propagated, modulated or transitory data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as RF and other wireless media. Combinations of any of the above are also included within the scope of computer readable media. In alternative embodiments, some or all of the software can be replaced by dedicated hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), special purpose computers, etc. In one embodiment, software (stored on a storage device) implementing one or more embodiments is used to program one or more processors. The one or more processors can be in communication with one or more computer readable media/storage devices, peripherals and/or communication interfaces. FIG.4Ashows an example that includes a processor408(e.g. processor208of UE110or processor308of BS170) coupled to transmitter402(e.g. transmitter202of UE110or transmitter302of BS170) to generate an RF output (e.g. to antenna210or310). Processor408(processor module) is a digital processor such as a digital baseband processor and includes various digital circuits including, but not limited to the components shown. 
Digital modulation circuit412(modulation module) generates a modulated digital signal including an in-phase signal component (I) and a quadrature signal component (Q), which are provided to a CORDIC414for conversion to an amplitude signal component A(t) and a phase signal component θ(t). The amplitude signal component A(t) (envelope amplitude) is scaled by multiplier416, which multiplies amplitude signal component A(t) by a digital gain factor DGain, and by adder418, which applies a digital DC offset correction DCOC. The amplitude signal component is then subject to a delay (for synchronization with phase signal component θ(t)) by delay element420and filtering by filter422before being output from processor408to an amplitude input of transmitter402. The phase signal component θ(t) is provided directly to phase input of transmitter402. In transmitter402(polar transmitter module), the amplitude signal component is converted from a digital value to an analog signal by digital-to-analog converter, DAC424. This analog signal is filtered by low pass filter, LPF426, and is then used to control gain of a variable gain amplifier, VGA428. Phase signal component θ(t) is provided to a phase-locked loop, PLL430, which provides a phase modulated Local Oscillator (LO) signal to VGA428. Thus, VGA428combines amplitude and phase signal components to provide an RF signal that is then amplified by power amplifier, PA432, and is filtered by filter434before being provided as an RF output of transmitter402at terminal436(e.g. output to an antenna). CORDIC414may be implemented by dedicated circuits to perform the transformation from cartesian coordinates to polar coordinates, i.e. from I and Q to A and θ. A CORDIC is a well-known component used in digital circuits to compute various mathematical functions such as trigonometric functions. A CORDIC may be implemented using various hardware and the present technology is not limited to any particular hardware implementation. 
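The cartesian/polar relationship that CORDIC414 exploits (A = √(I² + Q²), θ = atan2(Q, I), so that A·e^(jθ) = I + jQ) can be checked numerically. A minimal Python sketch, independent of any hardware implementation:

```python
import math

def cartesian_to_polar(i, q):
    """Convert cartesian components (I, Q) to polar components (A, theta)."""
    amplitude = math.hypot(i, q)   # A = sqrt(I^2 + Q^2)
    phase = math.atan2(q, i)       # theta in radians
    return amplitude, phase

# Round trip: the polar pair reproduces the original I/Q pair,
# mirroring how the VGA recombines amplitude and phase components.
i, q = 0.6, 0.8
a, theta = cartesian_to_polar(i, q)
assert abs(a * math.cos(theta) - i) < 1e-12   # real part -> I
assert abs(a * math.sin(theta) - q) < 1e-12   # imaginary part -> Q
```

The round trip is the property the polar transmitter relies on: any distortion of A(t) or θ(t) between conversion and recombination shows up directly in the RF output.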
An example of a CORDIC, which may be used to implement CORDIC414, is illustrated inFIG.4B, including multiplexers, MUX440, MUX442, MUX444, that receive inputs X0, Y0, Z0 and provide outputs to registers446,448,450respectively, which in turn provide outputs to Arithmetic Logic Units, ALU452, ALU454, ALU456respectively. The output of register446is provided through bit shifter458to ALU454and the output of register448is provided through bit shifter460to ALU452. Element462provides an input to ALU456. Outputs of ALU452, ALU454, ALU456provide CORDIC outputs Xn, Yn, Zn respectively and are also fed back to MUX440, MUX442, MUX444. Arithmetic Logic Units, ALU452, ALU454, ALU456may perform addition/subtraction so that CORDIC414performs shift and add operations in multiple iterations to generate outputs Xn, Yn, Zn. CORDIC414operates by using successive rotations. For example, given an X, Y input, the successive rotations on an input vector X+jY may attempt to zero out the Y component while keeping track of the number of rotations (iterations) performed and the phase applied per rotation. The final accumulated phase is the phase output of the CORDIC. In addition to transforming cartesian values (I and Q values) to polar values (A and θ values) as shown inFIG.4A, CORDICs such as CORDIC414may be configured to perform a wide variety of other transformations. In an ideal polar transmitter, the amplitude and phase are recombined in a VGA to reproduce the desired RF complex envelope signal (i.e. both amplitude and phase components are reproduced without distortion). This complex envelope signal is expressed in Eq. 1 below. Note that due to coupling through parasitics in a VGA such as VGA428, the phase-modulated LO can leak (or feed through) without combining with the amplitude signal, as shown by LO feedthrough438inFIG.4A(LO feedthrough). 
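The successive shift-and-add rotations described above can be sketched in software. The following vectoring-mode model (plain Python, illustrative only, not the register/ALU datapath ofFIG.4B) drives Y toward zero while accumulating the applied phase:

```python
import math

# Total CORDIC gain K = prod(sqrt(1 + 2^-2n)); hardware folds 1/K into
# a constant multiply after the iterations.
CORDIC_GAIN = math.prod(math.sqrt(1 + 2.0 ** (-2 * n)) for n in range(24))

def cordic_vectoring(x, y, iterations=24):
    """Vectoring-mode CORDIC: micro-rotations by atan(2^-n) drive y toward
    zero while z accumulates the total rotation applied, so the result
    (x / K, z) is the (amplitude, phase) of the input vector. Assumes
    x > 0 (a real design adds a quadrant pre-rotation). Behavioral sketch."""
    z = 0.0
    for n in range(iterations):
        d = -1.0 if y > 0 else 1.0                 # rotate toward y == 0
        x, y, z = (x - d * y * 2.0 ** -n,          # the 2^-n multiplies are
                   y + d * x * 2.0 ** -n,          # bit shifts in hardware
                   z - d * math.atan(2.0 ** -n))   # accumulate applied phase
    return x / CORDIC_GAIN, z

a, theta = cordic_vectoring(3.0, 4.0)   # converges to (5.0, atan2(4, 3))
```

After n iterations the residual angle error is bounded by atan(2^-(n-1)), which is why a fixed iteration count suffices for a given word length.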
Assuming VGA428 is linear when the input signal amplitude is small, the complex envelope signal at the VGA output of a polar transmitter such as transmitter402 can be described by the following simple equation, where the final term represents the local oscillator feedthrough component:

Vout(t) = {RFGain*[DGain*A(t) + DCOC] + DCEnv}·e^(j[θ(t)+ϑ]) + α·e^(j[θ(t)+ø])   (Equation 1)

Where:
RFGain: total analog + RF gain up to the output of VGA428
DCOC: digital DC offset correction of the analog envelope path, applied by adder418
A(t): amplitude signal component for envelope modulation
θ(t): phase signal component for phase modulation
DGain: digital gain used by multiplier416 to scale the envelope modulation for a desired output power
DCEnv: envelope DC offset of the analog/RF envelope path, corrected by DCOC
α: phase signal (LO) feedthrough amplitude of VGA428
ϑ: phase of the desired polar signal
ø: phase of the LO feedthrough

Defining the difference between the LO feedthrough phase and the desired polar signal phase (the phase delta, or Δφ) as Δφ = ø − ϑ, and given that DCOC and DCEnv in the above equation are both real quantities, Equation 1 can be rewritten as:

Vout(t) = e^(j[θ(t)+ϑ]){RFGain*[DGain*A(t) + DCOC] + DCEnv + α·e^(jΔφ)}

Ignoring the phase rotation term e^(jϑ), this equation can be rewritten as:

Vout(t) = e^(jθ(t)){RFGain*[DGain*A(t) + DCOC] + DCEnv + α·e^(jΔφ)}   (Equation 2)

Note that if Δφ = 0 in Equation 2, then the DC offset generated in the analog/RF path is a real quantity (DCEnv + α) with no imaginary component. This real DC offset can be cancelled by setting DCOC to the value given by:

DCOC = −(DCEnv + α)/RFGain

However, in many real transmitters the phase delta is not zero (Δφ ≠ 0), as it is not easily controlled by design. In this case, the effective DC offset generated by the transmitter is complex and cannot simply be compensated by a real (or envelope) DC offset such as the DCOC applied by adder418. 
Moreover, the presence of a complex DC offset in a polar transmitter system like the one described by Equation 2 above results in AMAM/AMPM distortion as the envelope signal swings lower in amplitude. To see this, note that in the case where the phase delta is not zero (Δφ ≠ 0), Equation 2 above can be rewritten as:

Vout(t) = e^(j[θ(t)+ϑ]){RFGain*[DGain*A(t) + DCOC] + DCEnv + α·cos Δφ + j·α·sin Δφ}   (Equation 2.1)

In this case, if the real envelope DC offset correction (DCOC) is set as follows:

DCOC = −(DCEnv + α·cos Δφ)/RFGain

then Equation 2.1 above becomes:

Vout(t) = e^(jθ(t)){RFGain*DGain*A(t) + j·α·sin Δφ}

Thus, the resulting AM-to-AM distortion is given by:

AMAM(t) = √([RFGain*DGain*A(t)]² + [α·sin Δφ]²)   (Equation 3)

And the resulting AM-to-PM distortion is given by:

AMPM(t) = tan⁻¹(α·sin Δφ / (RFGain*DGain*A(t)))   (Equation 4)

From Equations 3 and 4 above, it is clear that the amount of both AMAM and AMPM distortion is a function of the phase delta Δφ and of the LO feedthrough (phase signal feedthrough) in dBc (LOFT_dBc), defined by:

LOFT_dBc = 20·log(α / (RFGain*DGain*RMS{A(t)}))

where RMS{A(t)} denotes the root-mean-square (RMS) level of the envelope signal A(t). For example, for Enhanced Data rates for GSM Evolution (EDGE) modulation, the envelope signal swings 3.2 dB above and 14.2 dB below the RMS level. FIGS.5A-B illustrate examples of the AMAM and AMPM distortion, respectively, due to LO phase signal component feedthrough as a function of voltage amplitude (Vout). As can be seen from FIGS.5A-B, the local oscillator feedthrough in polar transmitter systems (e.g. in VGA428 of transmitter402) results in AMAM and AMPM distortion at lower voltage amplitude levels (i.e. more distortion on the left of FIGS.5A-B than on the right). This distortion impacts the Error Vector Magnitude (EVM) and spectrum performance of a transmitter and may be particularly problematic for lower output power levels. 
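Equations 3 and 4 can be evaluated directly to reproduce the qualitative behavior ofFIGS.5A-B. The gain and leakage numbers below are illustrative assumptions, not measured values:

```python
import math

def amam_ampm(a_t, rf_gain, d_gain, alpha, delta_phi):
    """Evaluate Equations 3 and 4: the residual AMAM/AMPM distortion left
    by the quadrature part of the LO feedthrough once the real envelope
    DCOC has absorbed the in-phase part. All values are illustrative."""
    desired = rf_gain * d_gain * a_t          # RFGain*DGain*A(t)
    quad_leak = alpha * math.sin(delta_phi)   # alpha*sin(delta_phi)
    amam = math.hypot(desired, quad_leak)     # Equation 3
    ampm = math.atan2(quad_leak, desired)     # Equation 4 (radians)
    return amam, ampm

# Distortion grows as the envelope swings low, as in FIGS. 5A-B:
for a_t in (1.0, 0.1, 0.01):
    amam, ampm = amam_ampm(a_t, rf_gain=2.0, d_gain=1.0,
                           alpha=0.02, delta_phi=math.radians(60))
```

Sweeping a_t downward shows the phase error tan⁻¹(α·sin Δφ / desired) blowing up as the desired term shrinks, which is exactly why low output power levels are the problematic case.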
It can be understood from the above discussion that the use of a complex DC offset correction may enable the removal of LO feedthrough (phase feedthrough) in polar transmitters. In other words, applying a DC offset correction (DCOC) that is a complex quantity may provide a technical solution to the problem of distortion caused by local oscillator feedthrough in a polar transmitter. Such a complex offset correction factor has both in-phase and quadrature components and may be given by the following:

DCOC = DCOCI + j·DCOCQ = −(DCEnv + α·e^(jΔφ))/RFGain   (Equation 5)

Using this to substitute for DCOC in Equation 2, the digital signal may be expressed as:

Vout(t) = e^(jθ(t)){RFGain*[DGain*A(t) + DCOCI + j·DCOCQ] + DCEnv + α·e^(jΔφ)}   (Equation 5.1)

FIG.6shows an embodiment that includes a processor608 that provides an amplitude signal component and a phase signal component to respective inputs of a polar transmitter (e.g. transmitter402) with compensation for local oscillator feedthrough using a complex correction factor that includes an in-phase correction factor and a quadrature correction factor. Processor608 is a digital processor such as a digital baseband processor and includes various digital circuits including, but not limited to, the components shown. Digital modulation circuit412 generates a modulated digital signal including an in-phase signal component (I) and a quadrature signal component (Q), which are provided to CORDIC414 (a first CORDIC) for conversion to an initial amplitude signal component A(t) and an initial phase signal component θ(t) as previously described (i.e. conversion from cartesian to polar coordinates). 
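Returning to Equations 5 and 5.1, a quick numeric check (with made-up gain and leakage values) confirms that the complex DCOC of Equation 5 removes the feedthrough term entirely:

```python
import cmath

def vga_output(a_t, theta, rf_gain, d_gain, dcoc, dc_env, alpha, delta_phi):
    """Equation 5.1: complex VGA output with a (possibly complex) digital
    DC offset correction DCOC. Gain/leakage numbers are illustrative."""
    envelope = rf_gain * (d_gain * a_t + dcoc)
    leak = dc_env + alpha * cmath.exp(1j * delta_phi)   # DC + LO feedthrough
    return cmath.exp(1j * theta) * (envelope + leak)

# Choosing DCOC per Equation 5 cancels the feedthrough term exactly:
rf_gain, dc_env, alpha, delta_phi = 2.0, 0.05, 0.02, 1.0
dcoc = -(dc_env + alpha * cmath.exp(1j * delta_phi)) / rf_gain
out = vga_output(0.3, 0.7, rf_gain, 1.0, dcoc, dc_env, alpha, delta_phi)
# out reduces to RFGain*DGain*A(t)*e^{j*theta}: magnitude 0.6, phase 0.7
```

With any purely real DCOC the same call leaves a residual j·α·sin Δφ term, which is the distortion source quantified by Equations 3 and 4.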
Initial amplitude signal component A(t) and initial phase signal component θ(t) are sent to a correction circuit660, where initial amplitude signal component A(t) (envelope amplitude) is scaled by multiplier416 (which multiplies initial amplitude signal component A(t) by digital gain factor DGain) and by adder418 (which applies a digital in-phase offset correction DCOCI to obtain DGain*A(t) + DCOCI) and is then provided as an input to CORDIC662 (a second CORDIC). CORDIC662 also receives a quadrature offset correction factor DCOCQ and from these inputs generates an amplitude signal component A′(t) and a phase offset Δθ(t). The amplitude signal component A′(t) is then subject to a delay (for synchronization) by delay element420 and filtering by filter422 before being output from processor608 (e.g. to the amplitude input of a transmitter such as transmitter402). The phase offset Δθ(t) is added to the initial phase signal component θ(t) by adder664 to generate phase signal component θ′(t), which is output from processor608 (e.g. to the phase input of a transmitter such as transmitter402). Correction using correction circuit660 (correction module) uses two parameters: in-phase offset correction factor DCOCI and quadrature offset correction factor DCOCQ. This simplifies the calibration procedure at lower polar transmitter powers compared with other solutions that may use digital predistortion tables made up of several lookup entries. From FIG.6, it is clear that the modified amplitude and phase required to compensate for LO feedthrough are, respectively:

A′(t) = √([DGain*A(t) + DCOCI]² + DCOCQ²)

and

θ′(t) = θ(t) + tan⁻¹(DCOCQ / (DGain*A(t) + DCOCI))

where the phase offset calculated by CORDIC662 is the second term in the above equation, i.e.:

Δθ(t) = tan⁻¹(DCOCQ / (DGain*A(t) + DCOCI))

The outputs A′(t) and Δθ(t) can be generated by a CORDIC such as CORDIC662 configured to perform the corresponding transformations. 
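The two correction formulas above can be exercised numerically. A small sketch (illustrative offsets, not calibrated values) that also verifies A′(t)·e^(jθ′(t)) equals the intended complex value:

```python
import math

def correct_polar(a_t, theta, d_gain, dcoc_i, dcoc_q):
    """Fold the complex offset (DCOCI, DCOCQ) into the polar components,
    as CORDIC662 does, yielding A'(t) and theta'(t) = theta(t) + dtheta(t).
    Offset values are illustrative, not calibrated."""
    re = d_gain * a_t + dcoc_i                # DGain*A(t) + DCOCI
    a_prime = math.hypot(re, dcoc_q)          # A'(t)
    d_theta = math.atan2(dcoc_q, re)          # phase offset from CORDIC662
    return a_prime, theta + d_theta

# A'(t)*e^{j*theta'(t)} equals (DGain*A(t) + DCOCI + j*DCOCQ)*e^{j*theta(t)}
a_p, th_p = correct_polar(a_t=0.5, theta=0.2, d_gain=2.0,
                          dcoc_i=-0.03, dcoc_q=0.04)
```

Only the two scalars dcoc_i and dcoc_q are needed per gain setting, which is the calibration advantage over a multi-entry predistortion table.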
The example ofFIG.6uses two CORDICs, a first CORDIC414to convert the in-phase signal component and the quadrature signal component to an initial amplitude signal component and an initial phase signal component (i.e. convert I and Q to A and θ) and a second CORDIC662coupled to correct the initial amplitude signal component and the initial phase signal component. However, the technique illustrated does not necessarily require two separate CORDICs. By running a single CORDIC at twice the clock rate of other components in a transmitter, the same functionality can be realized as in processor608using a single CORDIC (e.g. without CORDIC662). During a first clock phase, the CORDIC can calculate A(t) and θ(t) from the cartesian pair, I(t) and Q(t). In a second clock phase, the quantity DGain*A(t)+DCOCI+jDCOCQ is computed and re-applied to the same CORDIC to generate the amplitude signal component A′(t) and the phase signal component θ′(t). FIG.7Ashows an example of a processor768using CORDIC414alone to perform the functions of both CORDIC414and CORDIC662that were described with respect toFIG.6, i.e. converting the in-phase signal component and the quadrature signal component to an initial amplitude signal component and an initial phase signal component and also correcting the initial amplitude signal component and the initial phase signal component. Similar numbering is used for components that are common to processors768and608. Digital modulation circuit412generates a modulated digital signal including an in-phase signal component (I) and a quadrature signal component (Q), which are provided through multiplexers, MUX770and MUX772respectively, to CORDIC414. Outputs of CORDIC414are provided to demultiplexers, DEMUX774and DEMUX776. A first output of DEMUX774is connected to multiplier416, which is connected to adder418, with the output from adder418provided as an input back to MUX770. 
A second output of DEMUX 774 is connected through delay element 420 and filter 422 to provide a first output (e.g. the amplitude signal component of processor 768 to a transmitter). A first output of DEMUX 776 is connected directly to adder 664. A second output of DEMUX 776 is connected through delay element 778 to adder 664, which provides a second output (e.g. the phase signal component of processor 768 to a transmitter). MUXs 770, 772 and DEMUXs 774, 776 are configured to switch the input and output signals of CORDIC 414 from clock cycle to clock cycle, and CORDIC 414 performs different functions accordingly: converting the in-phase signal component and the quadrature signal component to an initial amplitude signal component and an initial phase signal component in a first clock cycle, and correcting the initial amplitude signal component and the initial phase signal component in a second clock cycle. FIG. 7B shows an example of processor 768 during a first clock cycle with MUX 770 and MUX 772 providing in-phase and quadrature signal components, I and Q, to CORDIC 414 (components that are not active in this clock cycle are omitted for clarity). CORDIC 414 is configured to perform Cartesian-to-polar conversion in this clock cycle to convert the in-phase and quadrature signal components to initial amplitude and phase signal components A(t) and θ(t). DEMUX 774 provides initial amplitude signal component A(t) to multiplier 416, where it is multiplied by gain factor DGain. The output of multiplier 416 is sent to adder 418, which applies in-phase offset correction factor DCOCI to generate an output DGain·A(t) + DCOCI that is used in the second clock cycle. DEMUX 776 provides initial phase signal component θ(t) to delay element 778 for use in the second clock cycle.
FIG. 7C shows processor 768 during the second clock cycle with MUX 770 providing input DGain·A(t) + DCOCI (from adder 418 as shown in FIG. 7B) to CORDIC 414 and MUX 772 providing quadrature offset correction factor DCOCQ to CORDIC 414 (components that are not active in this clock cycle are omitted for clarity). CORDIC 414 is configured to calculate amplitude signal component A′(t) and phase offset Δθ(t) from inputs DGain·A(t) + DCOCI and DCOCQ in this clock cycle, e.g. from

A′(t) = √([DGain·A(t) + DCOCI]² + DCOCQ²)

and

Δθ(t) = tan⁻¹(DCOCQ / [DGain·A(t) + DCOCI])

above. DEMUX 774 provides amplitude signal component A′(t) through delay element 420 and filter 422 as an output (e.g. to a transmitter). DEMUX 776 provides phase offset Δθ(t) to adder 664, where it is added to initial phase signal component θ(t) to generate phase signal component θ′(t), i.e. phase offset Δθ(t) is applied to initial phase signal component θ(t) to provide phase correction. In other examples, digital correction may be provided to the in-phase and quadrature signal components prior to conversion to amplitude and phase components (e.g. prior to CORDIC 414 in processor 408 in FIG. 4A) using a lookup table or otherwise. FIG. 8 shows an example of a processor 880, which includes digital modulation circuit 412 generating in-phase and quadrature signal components (I + jQ) for CORDIC 414. Between digital modulation circuit 412 and CORDIC 414, correction circuit 882 provides correction to the signal using envelope and LUT (lookup table) indexing circuit 884 in conjunction with a digital predistortion lookup table circuit, DPD LUT 886. In-phase correction factors DCOCI and quadrature correction factors DCOCQ may be stored in DPD LUT 886 for a range of different amplitudes. Envelope and LUT indexing circuit 884 may generate an output that is used to find a corresponding entry in DPD LUT 886, i.e. to find corresponding in-phase and quadrature correction factors DCOCI and DCOCQ for a given amplitude.
These in-phase and quadrature correction factors, DCOCI and DCOCQ, are then combined with the in-phase and quadrature signal components I + jQ by multiplier 888. Corrected in-phase and quadrature signal components from multiplier 888 are then provided to CORDIC 414 and converted to corrected amplitude and phase signal components A′(t) and θ′(t). Corrected amplitude signal component A′(t) is then sent to multiplier 416, where further gain correction is applied. A DC offset is not applied after CORDIC 414 in this example because any appropriate offset is implemented prior to CORDIC 414 by correction circuit 882. While a gain correction factor such as DGain may be applied in correction circuit 882, it may also be applied by multiplier 416 as shown (although in some cases this is not done). Delay element 420 provides synchronization between amplitude signal component A′(t) and phase signal component θ′(t), with filter 422 filtering amplitude signal component A′(t) prior to output. By rewriting the term [DGain·A(t) + DCOCI + j·DCOCQ], Equation 5.1 above may be rewritten as:

Vout(t) = e^(jθ(t))·{RFGain·DGain·A(t)·[1 + DCOCI/(A(t)·DGain) + j·DCOCQ/(A(t)·DGain)] + DCEnv + α·e^(jΔφ)}  (Equation 5.2)

Equation 5.2 may be used to populate a lookup table such as DPD LUT 886 according to an amplitude signal component A(t), where the complex gain is a function of A(t) given by:

CG(A(t)) = 1 + DCOCI/(A(t)·DGain) + j·DCOCQ/(A(t)·DGain)

It can be seen that the entries of such a lookup table may be determined by just three parameters: DCOCI, DCOCQ, and DGain. Thus DPD LUT 886 may be relatively simple and may be rapidly accessed. In some cases, a DPD LUT may compensate for other unwanted components of an RF output signal so that compensation for local oscillator feedthrough of an amplifier in a transmitter may be combined with compensation for other factors (e.g. compensation for distortion at high amplitudes caused by amplifier non-linearity).
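Because every LUT entry is just CG(A) evaluated at one amplitude, the table can be built from the three parameters alone. A short Python sketch (entry count, amplitude grid, and the index quantization are assumptions for illustration, not from the patent):

```python
import cmath

# Populate a small DPD LUT of complex gains CG(A) from just DGain, DCOCI,
# DCOCQ. The amplitude grid and its quantization are illustrative choices.
DGAIN, DCOC_I, DCOC_Q = 1.0, 0.02, -0.01
N_ENTRIES = 8
amplitudes = [(k + 1) / N_ENTRIES for k in range(N_ENTRIES)]  # avoid A = 0
dpd_lut = [1 + (DCOC_I + 1j * DCOC_Q) / (a * DGAIN) for a in amplitudes]

def apply_dpd(i, q):
    """Envelope/LUT indexing (like circuit 884) plus multiply (like 888)."""
    a = abs(complex(i, q))
    idx = max(0, min(round(a * N_ENTRIES) - 1, N_ENTRIES - 1))
    return complex(i, q) * dpd_lut[idx]

# With DGain applied after the CORDIC by multiplier 416, pre-distorting by
# CG(A) reproduces the polar-domain result DGain*A + DCOCI + j*DCOCQ,
# rotated by the signal phase.
z = 0.625 * cmath.exp(1j * 0.7)   # sample with amplitude on the grid
corrected = apply_dpd(z.real, z.imag)
target = (DGAIN * 0.625 + DCOC_I + 1j * DCOC_Q) * cmath.exp(1j * 0.7)
assert abs(DGAIN * corrected - target) < 1e-9
```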
Thus, a single DPD LUT may be used to compensate for large-signal distortion (e.g. from a power amplifier such as PA 432) and also compensate for small-signal distortion (e.g. from LO feedthrough such as local oscillator feedthrough 438). An alternative implementation of a processor with correction for local oscillator feedthrough in a transmitter amplifier is shown in FIG. 9. Processor 900 includes digital modulation circuit 412 coupled through correction circuit 902 to CORDIC 414, with CORDIC 414 performing Cartesian-to-polar transformation as before. In this example, correction is applied prior to conversion by CORDIC 414. Modulated in-phase and quadrature signal components I and Q are first converted to Cosine and Sine components, Cos(θ(t)) and Sin(θ(t)), by CORDIC 904, which may be implemented using similar hardware to CORDIC 414 (e.g. as shown in FIG. 4B) or otherwise. In some cases, a CORDIC (first CORDIC) that performs a relatively simple transformation to Cosine and Sine components may be implemented using a simpler circuit with fewer hardware components than CORDIC 414. Correction circuit 902 is configured to apply correction to the Sine component and the Cosine component and generate a corrected in-phase signal component I′ and a corrected quadrature signal component Q′. CORDIC 414 is configured to convert the corrected in-phase signal component I′ and the corrected quadrature signal component Q′ to the amplitude signal component A′(t) and the phase signal component θ′(t).
Correction circuit 902 includes a plurality of multipliers, including a first multiplier 906 to combine the in-phase offset correction factor DCOCI and the Cosine component Cos(θ(t)), a second multiplier 908 to combine the in-phase offset correction factor DCOCI and the Sine component Sin(θ(t)), a third multiplier 910 to combine the quadrature offset correction factor DCOCQ and the Cosine component Cos(θ(t)), a fourth multiplier 912 to combine the quadrature offset correction factor DCOCQ and the Sine component Sin(θ(t)), a first adder 914 to combine the outputs of the first multiplier 906 and fourth multiplier 912 (subtracting the output of fourth multiplier 912 from the output of first multiplier 906), and a second adder 916 to combine the outputs of the second multiplier 908 and third multiplier 910. Correction circuit 902 further includes a fifth multiplier 918 to combine the in-phase signal component I and a gain factor DGain, a third adder 920 to combine the outputs of the fifth multiplier 918 and the first adder 914 to generate the corrected in-phase signal component I′, a sixth multiplier 922 to combine the quadrature signal component Q and the gain factor DGain, and a fourth adder 924 to combine the outputs of the sixth multiplier 922 and the second adder 916 to generate the corrected quadrature signal component Q′. This arrangement provides the corrected in-phase signal component I′ according to the following equation:

I′(t) = DGain·I(t) + DCOCI·cos(θ(t)) − DCOCQ·sin(θ(t))

and provides the corrected quadrature signal component Q′ according to the following equation:

Q′(t) = DGain·Q(t) + DCOCI·sin(θ(t)) + DCOCQ·cos(θ(t))

where I(t) and Q(t) represent the modulated in-phase and quadrature signal components from digital modulation circuit 412. An advantage of this design is its relative simplicity. No LUT may be needed (e.g. if a DPD LUT is not used for large-signal predistortion compensation or another effect, then adequate compensation may be provided by a correction circuit alone, thereby simplifying hardware).
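The I′/Q′ equations above are equivalent to adding the rotated complex offset (DCOCI + j·DCOCQ)·e^(jθ(t)) to the gain-scaled signal, so converting I′ + jQ′ to polar reproduces the FIG. 6 polar-domain result. A short Python sketch checking this equivalence (function name and test values are illustrative):

```python
import cmath
import math

def correct_iq(i, q, dgain, dcoc_i, dcoc_q):
    """Pre-CORDIC correction of FIG. 9: build I'(t), Q'(t) from Cos/Sin terms."""
    theta = math.atan2(q, i)                      # phase of the modulated sample
    c, s = math.cos(theta), math.sin(theta)       # CORDIC 904 outputs
    i_corr = dgain * i + dcoc_i * c - dcoc_q * s  # multipliers 906/912/918, adders 914/920
    q_corr = dgain * q + dcoc_i * s + dcoc_q * c  # multipliers 908/910/922, adders 916/924
    return i_corr, q_corr

# I' + jQ' = DGain*(I + jQ) + (DCOCI + j*DCOCQ)*e^{j*theta}, so its magnitude
# equals |DGain*A + DCOCI + j*DCOCQ|, matching the polar-domain correction.
i_c, q_c = correct_iq(0.3, 0.4, dgain=1.1, dcoc_i=0.05, dcoc_q=-0.02)
a_corr, _ = cmath.polar(complex(i_c, q_c))
a0 = abs(complex(0.3, 0.4))
assert abs(a_corr - math.hypot(1.1 * a0 + 0.05, 0.02)) < 1e-12
```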
In order to apply correction factors, such as gain (e.g. DGain), an in-phase offset correction factor (e.g. DCOCI), and a quadrature offset correction factor (e.g. DCOCQ) for a given desired output power, some calibration may be performed. For example, distortion may be measured at one or more amplitudes (e.g. the system AMAM/AMPM distortion, as illustrated in FIGS. 5A-B, may be measured). This can be done by transmitting a variety of digital stimulus waveforms to help measure the system AMAM/AMPM. For example, a stepped-up DC or continuous wave (CW) stimulus, or even a modulated waveform, can be generated in a processor, transmitted through the transmitter, and down-converted back to in-phase and quadrature signal components (I and Q signal components) for AMAM/AMPM distortion measurement. For example, in one embodiment, a stepped-up DC stimulus A(t) can be transmitted and a linear measurement receiver (MRx) can be used to down-convert the RF transmitted signal. The measurement receiver in-phase and quadrature (I and Q) data is then processed to generate the transmitter system AMAM/AMPM distortion data. FIG. 10 shows an example of a processor 1000, e.g. a digital broadband processor such as processor 408, 608, 768, 880, or 900, coupled to transmitter 402. Transmitter 402 provides an RF output to antenna 1002. Transmitter 402 includes components previously illustrated, including DAC 424, LPF 426, PLL 430, VGA 428, PA 432, filter 434, and terminal 436. A coupler 1004 is inserted between PA 432 and filter 434 in this example to couple the RF signal from PA 432 to measurement system 1006. In measurement system 1006, the RF signal from coupler 1004 is provided to a measurement receiver 1024, where it is fed through attenuator 1008 to a low noise amplifier, LNA 1010. In some cases, amplification may not be needed and LNA 1010 may be bypassed. The output of LNA 1010 is provided to mixers 1012, 1014 (e.g. receiver down-converting mixers used to generate receiver I and Q signal components).
Mixers 1012, 1014 also receive local oscillator (LO) inputs from PLL 430 of transmitter 402. Mixers 1012, 1014 provide output signals through filters 1016, 1018, respectively. These signals are then converted to digital values by ADCs 1020, 1022, respectively. Thus, attenuator 1008, LNA 1010, mixers 1012, 1014, filters 1016, 1018, and ADCs 1020, 1022 form measurement receiver 1024, which uses the same PLL, PLL 430, as transmitter 402, so that the resulting receiver I and Q signal components are a complex DC representation of the AMAM/AMPM distortion of transmitter 402 (provided that measurement receiver 1024 is sufficiently linear). In-phase and quadrature components from measurement receiver 1024 are provided to analysis circuit 1026, which may include a processor configured to calculate AMAM/AMPM distortion from analysis of the in-phase and quadrature signal components received from ADCs 1020, 1022. In an example of a calibration procedure, a digital modulation circuit of processor 1000 (e.g. digital modulation circuit 412) generates a stimulus, which may be repeated as many times as desired to average out measurement noise. FIG. 11 illustrates an example in which M runs (M ramp-up stimulus sequences) are performed to average out any potential measurement noise (two such runs are illustrated in the example of FIG. 11). For each ramp-up stimulus sequence, there can be a total of K (uniform or non-uniform) steps (e.g. I1 to IK shown in FIG. 11) covering a certain dynamic range for a given transmitter gain setting (e.g. 40 dB for EDGE to cover the top five power control levels (PCLs)). Each step in the stimulus can have a programmable dwell time DTk as shown in FIG. 11. For example, for 2G transmitter calibration, the following may be used: number of steps K = 32, change per step Δk = 40 dB/K, dwell time per step DTk = 173 μs, number of runs M = 16. If the final receiver processing sampling rate (e.g. the sampling rate of ADCs 1020, 1022) is 7.68 MHz, then DTk = 173 μs corresponds to N = 1024 samples per step in the stimulus.
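The ramp-up stimulus of FIG. 11 can be sketched directly from the quoted parameters (the dB anchor point of the ramp is an assumption; only K, the 40 dB range, and the resulting 1.25 dB step come from the text):

```python
# Sketch of the ramp-up calibration stimulus of FIG. 11: K = 32 uniform
# steps covering a 40 dB dynamic range, repeated for M runs.
K, M = 32, 16
RANGE_DB = 40.0
STEP_DB = RANGE_DB / K            # change per step, 1.25 dB

def ramp_levels_db(top_db=0.0):
    """Levels I_1..I_K in dB, ramping up to top_db (anchor is an assumption)."""
    return [top_db - RANGE_DB + STEP_DB * (k + 1) for k in range(K)]

levels = ramp_levels_db()
amplitudes = [10 ** (db / 20.0) for db in levels]   # dB -> linear amplitude
assert len(amplitudes) == K
assert abs((levels[1] - levels[0]) - 1.25) < 1e-12  # 40 dB / 32 steps
```

Each level would then be held for the dwell time DTk and the whole ramp repeated M times before averaging.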
Initially, the transmitter output power may be observed with a spectrum analyzer (e.g. a spectrum analyzer coupled to the output of PA 432 of transmitter 402) and can be associated with the resulting receiver I/Q DC output value after choosing the optimal measurement receiver automatic gain control settings (MRx AGC), e.g. by appropriate setting of LNA 1010, attenuator 1008, and filter 1016, and applying the measurement receiver DC correction. The following steps may then be performed: 1. Prior to transmitting a stimulus (e.g. as shown in FIG. 11), the DC gain and offset values (in-phase and quadrature offset correction factors) in processor 1000 may be set as follows: DGain = 1 and DCOCI = DCOCQ = 0. 2. Immediately before transmitting each run of the stimulus (1, 2, . . . M), e.g. at times 1150, 1152 of FIG. 11, the measurement receiver DC offset of measurement receiver 1024 may be re-measured with the AM path of transmitter 402 (through DAC 424 and LPF 426) disabled or maximally attenuated while measurement receiver 1024 is ON (e.g. by disabling VGA 428). PA 432 may also be disabled. This isolates the measurement receiver path of measurement receiver 1024 from the transmitter path of transmitter 402 for a more accurate measurement receiver DC offset measurement. The DC measurement time is the programmable dwell time (DTk). The DC offset of measurement receiver 1024 is simply measured by averaging the measurement receiver I/Q output as shown in the following equation:

MRx_DC(m, k=0) = Σ_{n=0}^{N−1} [I_MRx(m, 0, n) + j·Q_MRx(m, 0, n)], 1 ≤ m ≤ M

This measurement before the start of each stimulus is done to protect against slow MRx flicker noise drifting with time. 3. For the mth stimulus run (1 ≤ m ≤ M), with each stimulus run made of K steps (Ik, 1 ≤ k ≤ K), the average complex DC value may be computed according to the following equation:

DC(m, k) = Σ_{n=0}^{N−1} [I_MRx(m, k, n) + j·Q_MRx(m, k, n)], 1 ≤ k ≤ K, 1 ≤ m ≤ M

and then the kth AMAM/AMPM responses (i.e.
AMAM/AMPM distortion for the kth stimulus step) may be computed from the following expressions:

AMAM(k) = |(1/M)·Σ_{m=1}^{M} [DC(m, k) − MRx_DC(m, k=0)]|

AMPM(k) = Angle[(1/M)·Σ_{m=1}^{M} [DC(m, k) − MRx_DC(m, k=0)]]

If desired, the noise-averaged measurement points from applying Equation 6 can be further smoothed by using a polynomial fit (e.g. of order 7). This polynomial fitting has the advantage of smoothing out any residual noise in the measured AMAM/AMPM points. Once the AMAM/AMPM distortion is measured, an estimation procedure such as least squares estimation may be used to determine values for digital offset correction, i.e. values for DGain, DCOCI, and DCOCQ. Note that the goal of this procedure is to determine values of a complex DC offset according to Equation 5 above, which requires estimating the complex DC offset and the cascaded analog/RF gain. This may include estimating the following two quantities: (DCEnv + α·e^(jΔφ)) and RFGain. Using a least squares (LS) estimation may involve several points k = m, m+1, . . . , m+L, with m and L both being programmable as illustrated in FIG. 12, i.e. a configurable range 1260 is selected by choosing appropriate values of m and L, and the configurable range is used for performing the LS estimation. FIG. 12 shows AMPM distortion obtained from applying Equation 6 to data that was gathered while applying the stimulus during multiple runs, e.g. as shown in FIG. 11, with polynomial fitting for smoothing out residual noise. Typically, to get a more accurate estimate of LO feedthrough in a transmitter (e.g. local oscillator feedthrough 438 in transmitter 402) using an LS fit, more weighting should be given to lower Ik values, where LO feedthrough is more dominant (e.g. as illustrated in FIGS. 5A-B).
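The per-step averaging in the expressions above is straightforward to sketch in Python (function name and the synthetic test data are illustrative, not from the patent):

```python
import cmath

def amam_ampm(dc, mrx_dc):
    """Noise-averaged AMAM/AMPM per stimulus step (the Equation 6 expressions).

    dc[m][k] is the average complex DC for run m, step k; mrx_dc[m] is the
    measurement-receiver DC offset taken just before run m.
    """
    M, K = len(dc), len(dc[0])
    amam, ampm = [], []
    for k in range(K):
        avg = sum(dc[m][k] - mrx_dc[m] for m in range(M)) / M
        amam.append(abs(avg))         # magnitude -> AMAM(k)
        ampm.append(cmath.phase(avg)) # angle -> AMPM(k)
    return amam, ampm

# Synthetic check: two runs, two steps, with known per-run DC offsets.
dc = [[1 + 1j, 2 + 2j], [1.2 + 1j, 2.2 + 2j]]
mrx = [0.1, 0.3]
amam, ampm = amam_ampm(dc, mrx)
assert abs(amam[0] - abs(0.9 + 1j)) < 1e-12  # both runs reduce to 0.9 + 1j
```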
Due to using the same phase-modulated LO from PLL 430 for the transmitter up-conversion in transmitter 402 and the measurement receiver down-conversion in measurement receiver 1024 as shown in FIG. 10, the phase modulation information θ(t) can be eliminated when looking at the output of measurement receiver 1024, i.e. at the outputs of ADCs 1020, 1022. Thus, Equation 2 may be rewritten as follows, with θ(t) = 0, DGain = 1, and DCOCI = DCOCQ = 0:

out(Ik) = e^(jϑ)·{RFGain·Ik + DCEnv + α·e^(jΔφ)} = AMAM(k)·e^(j·AMPM(k)), k = m, m+1, . . . , m+L

In matrix format, this can be written as A·x = b, where A is the (L+1)×2 matrix whose rows are [Ik 1] for k = m, m+1, . . . , m+L, x is the 2×1 vector

x = [e^(jϑ)·RFGain; e^(jϑ)·(DCEnv + α·e^(jΔφ))]

and b is the (L+1)×1 vector [AMAM(m)·e^(j·AMPM(m)); AMAM(m+1)·e^(j·AMPM(m+1)); . . . ; AMAM(m+L)·e^(j·AMPM(m+L))]. The LS solution to this matrix system is

[x1; x2] = [e^(jϑ)·RFGain; e^(jϑ)·(DCEnv + α·e^(jΔφ))] = B·A^T·b  (Equation 7)

where A^T is the 2×(L+1) transpose of the matrix A and B is the 2×2 matrix given by B = (A^T·A)^(−1). Note that the B matrix can easily be pre-computed and stored in memory. The simple matrix multiplication on the right side of Equation 7 can be carried out in firmware or hardware. For a given desired RMS output power Pout, the root mean square (RMS) voltage VRMS is given by the following equation:

VRMS = 0.05·10^(Pout/10)

Thus, the digital scaling and complex DC offset corrections are given by the following:

DGain = VRMS / (|x1|·RMS(A(t)))

DCOC = −x2/x1

where A(t) is the magnitude output of the CORDIC and RMS(·) is the root mean square calculation (i.e. the average power of A(t) over time). Note that the quantities x1 and x2 are complex values, so that the resulting value for the DC offset correction is a complex digital offset that includes an in-phase offset correction factor and a quadrature offset correction factor. These values can be stored, e.g. in a lookup table, for subsequent use to correct for local oscillator feedthrough in an amplifier (e.g.
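Because A has only two columns, the normal equations (A^T·A)^(−1)·A^T·b reduce to a 2×2 system that can be solved without a linear-algebra library. A Python sketch with a synthetic check (function name and test values are illustrative, not from the patent):

```python
# Least-squares fit of Equation 7 with two unknowns x1, x2, where
# A = [I_k 1] row by row and b holds the complex AMAM(k)*e^{j*AMPM(k)} values.
def ls_estimate(i_levels, responses):
    s11 = sum(i * i for i in i_levels)     # entries of A^T A (A is real)
    s12 = sum(i_levels)
    s22 = float(len(i_levels))
    det = s11 * s22 - s12 * s12
    b1 = sum(i * r for i, r in zip(i_levels, responses))   # A^T b
    b2 = sum(responses)
    x1 = (s22 * b1 - s12 * b2) / det       # apply B = (A^T A)^{-1}
    x2 = (-s12 * b1 + s11 * b2) / det
    return x1, x2

# Synthetic check: noiseless data with a known gain and complex DC term is
# recovered exactly, and DCOC = -x2/x1 gives the complex digital offset.
rf_gain, dc_term = 2.0, 0.1 - 0.05j
levels = [0.1, 0.2, 0.3, 0.4]
meas = [rf_gain * i + dc_term for i in levels]
x1, x2 = ls_estimate(levels, meas)
assert abs(x1 - rf_gain) < 1e-9 and abs(x2 - dc_term) < 1e-9
dcoc = -x2 / x1   # complex digital offset: DCOCI + j*DCOCQ
```

Since A^T·A depends only on the chosen stimulus levels, B really can be precomputed once, as the text notes.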
may be stored in a memory in or coupled to the processor). Calibration of correction circuits to correct for local oscillator feedthrough of an amplifier in a transmitter may be performed as described above or otherwise to provide a complex digital offset to digitally correct for local oscillator feedthrough of the amplifier, the complex digital offset including an in-phase offset correction factor and a quadrature offset correction factor. The complex digital offset may then be applied, e.g. using one or more CORDICs to apply the digital offset to correct both the amplitude and phase of a polar transmitter. FIG. 13 illustrates an example of using a complex digital offset that includes: generating a modulated digital signal including an in-phase signal component and a quadrature signal component (1380), converting the modulated digital signal to an amplitude signal component and a phase signal component (1382), combining the amplitude signal component and the phase signal component in an amplifier to generate a radio frequency output (1384), and calculating a complex digital offset to digitally correct for local oscillator feedthrough of the amplifier, the complex digital offset including an in-phase offset correction factor and a quadrature offset correction factor (1386). The method also includes applying the complex digital offset to correct the amplitude signal component and the phase signal component (1388). It is understood that the present subject matter may be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this subject matter will be thorough and complete and will fully convey the disclosure to those skilled in the art. Indeed, the subject matter is intended to cover alternatives, modifications and equivalents of these embodiments, which are included within the scope and spirit of the subject matter as defined by the appended claims.
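The FIG. 13 steps can be sketched end to end for a single sample; this is a minimal model assuming the polar-domain correction of FIG. 6 and an idealized RF recombination (function name and values are illustrative, not from the patent):

```python
import cmath

def polar_tx_with_correction(i, q, dgain, dcoc):
    """One-sample sketch of the FIG. 13 method.

    Converts a modulated I/Q sample to amplitude/phase, applies the complex
    digital offset dcoc = DCOCI + j*DCOCQ in the polar domain, and models
    the amplifier's recombination of amplitude and phase ideally.
    """
    a, theta = cmath.polar(complex(i, q))             # Cartesian -> polar (1382)
    a_corr, dtheta = cmath.polar(dgain * a + dcoc)    # apply complex offset (1388)
    return a_corr * cmath.exp(1j * (theta + dtheta))  # ideal recombination (1384)

# With dcoc = -x2/x1 from calibration, the pre-rotated envelope is intended
# to cancel the amplifier's LO feedthrough term at RF. A zero offset must
# leave the signal intact:
out = polar_tx_with_correction(0.6, 0.8, dgain=1.0, dcoc=0.0)
assert abs(out - complex(0.6, 0.8)) < 1e-12
```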
Furthermore, in the following detailed description of the present subject matter, numerous specific details are set forth in order to provide a thorough understanding of the present subject matter. However, it will be clear to those of ordinary skill in the art that the present subject matter may be practiced without such specific details. Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatuses (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable instruction execution apparatus, create a mechanism for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The aspects of the disclosure herein were chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure with various modifications as are suited to the particular use contemplated. 
Although the present disclosure has been described with reference to specific features and embodiments thereof, it is evident that various modifications and combinations can be made thereto without departing from scope of the disclosure. The specification and drawings are, accordingly, to be regarded simply as an illustration of the disclosure as defined by the appended claims, and are contemplated to cover any and all modifications, variations, combinations or equivalents that fall within the scope of the present disclosure. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
11943086
DESCRIPTION OF EMBODIMENTS The following describes the technical solutions of this application with reference to the accompanying drawings. Unless otherwise defined, all technical and scientific terms used in this specification have the same meanings as those usually understood by a person skilled in the art of this application. The terms used in the specification of this application are merely for the purpose of describing specific embodiments, and are not intended to limit this application. Compared with a multi-carrier waveform such as orthogonal frequency division multiplexing (OFDM), a single-carrier waveform such as DFT-s-OFDM or SC-QAM has a lower peak-to-average power ratio (PAPR). Therefore, in a case of a same power amplifier, the single-carrier waveform can provide larger output power and higher power amplification efficiency, improving coverage and reducing energy consumption. Therefore, single-carrier waveforms such as DFT-s-OFDM or SC-QAM are widely applied to various communication systems such as a long term evolution (LTE) system, a 5th generation (5G) system, or a new radio (NR) system. DFT-s-OFDM is discrete Fourier transform-spread-orthogonal frequency division multiplexing. SC-QAM is single carrier-quadrature amplitude modulation. In both NR and LTE, the DFT-s-OFDM waveform uses a CP as a guard interval between symbols (DFT-s-OFDM symbols). FIG. 1 is a schematic diagram of a time domain structure in which a CP is used as a guard interval between symbols. In FIG. 1, two transmitted symbols are provided: a transmitted symbol 1 and a transmitted symbol 2. A CP of the transmitted symbol 1 refers to a cyclic structure formed by copying a segment of transmitted symbol components, between a location from which a CP is obtained through truncation and an end location in the transmitted symbol 1, to the start of the transmitted symbol 1.
Similarly, a CP of the transmitted symbol 2 refers to a cyclic structure formed by copying a segment of transmitted symbol components, between a location from which a CP is obtained through truncation and an end location in the transmitted symbol 2, to the start of the transmitted symbol 2. The CP of the transmitted symbol 2 is used as a guard interval between the transmitted symbol 1 and the transmitted symbol 2, and the CP of the transmitted symbol 1 is used as a guard interval between the transmitted symbol 1 and a transmitted symbol (not shown in FIG. 1) preceding the transmitted symbol 1. FIG. 2 shows receiving cases, at a receive end, of the transmitted symbol 1 and the transmitted symbol 2 shown in FIG. 1 after the transmitted symbol 1 and the transmitted symbol 2 are transmitted through a channel (FIG. 2 shows only a maximum multipath delay of the channel as an example). Case 1 indicates that the maximum multipath delay of the channel does not exceed the CP length. In Case 1, because of the protection of a CP, the receive window of no transmitted symbol includes ISI of another transmitted symbol. As shown in FIG. 2, the receive window of the transmitted symbol 2 does not include the transmitted symbol 1, and therefore ISI of the transmitted symbol 1 can be avoided; the receive window of the transmitted symbol 1 does not include the transmitted symbol (not shown in FIG. 2) preceding the transmitted symbol 1, and therefore the transmitted symbol 1 is not subject to ISI, either. In addition, in Case 1, in the receive window of each transmitted symbol, a received symbol is a cyclic convolution of the transmitted symbol and the channel, so that the receive end can eliminate a multipath effect of the channel by using a frequency domain equalization method. Case 2 indicates that the maximum multipath delay of the channel exceeds the CP length. In Case 2, because the channel delay exceeds the CP length, the receive window of one transmitted symbol includes another transmitted symbol.
As shown in FIG. 2, the receive window of the transmitted symbol 2 includes a segment of transmitted symbol components of the transmitted symbol 1, and therefore the transmitted symbol 2 is subject to ISI of the transmitted symbol 1; the receive window of the transmitted symbol 1 may also include the transmitted symbol (not shown in FIG. 2) preceding the transmitted symbol 1, and therefore the transmitted symbol 1 is also subject to ISI. In addition, in Case 2, in the receive window of each transmitted symbol, a received symbol is no longer a cyclic convolution of the transmitted symbol and the channel. This is not helpful for the receive end to eliminate a multipath effect of the channel. The transmitted symbol mentioned above represents a symbol sent by a transmit end, and the received symbol represents a symbol received by the receive end. It can be learned from FIG. 2 that the CP length required by the channel condition in Case 2 is greater than the CP length required by the channel condition in Case 1. In other words, different channel conditions may require different CP lengths. It can be learned from FIG. 1 or FIG. 2 that the CP length affects the frame structure. Because of reasons such as the complexity of a transceiver and out-of-band interference, the flexibility of performing (frequency division, spatial, and time division) multiplexing between users with different CP lengths is relatively poor. In an implementation, to perform flexible multiuser multiplexing, a network device usually configures a same CP length for different users. However, as shown in FIG. 2, user equipment with different channel conditions may also require different CP lengths. In the conventional technology, to overcome the problem shown in FIG. 2, in other words, to ensure performance of all users with different channel conditions, a system selects, as the CP length of all users, a CP length greater than the channel delay of a large-delay user.
However, for a small-delay user, an excessive CP length causes unnecessary overhead. It can be learned from the foregoing descriptions that, in the conventional technology, a guard interval between symbols cannot be flexibly configured based on a user requirement. This application provides a symbol processing method and apparatus, so that a guard interval between symbols can be flexibly configured when a CP length is fixed. The technical solutions in the embodiments of this application may be applied to various communication systems, for example, a cellular communication system such as LTE and its evolution, a 5G system, an NR system, a machine-to-machine (M2M) system, and another future evolved communication system. An embodiment of this application provides a symbol processing method. The method includes: sending a first transmitted symbol and a second transmitted symbol that are consecutive in time domain, where the first transmitted symbol is located before the second transmitted symbol, a symbol component whose end location is a first reference point in the first transmitted symbol is the same as a symbol component whose end location is a second reference point in the second transmitted symbol, the first reference point represents an end location of a transmitted symbol, and the second reference point represents a location from which a CP is obtained through truncation in a transmitted symbol. The second transmitted symbol has a CP. For example, FIG. 3 is a schematic diagram of the time domain structures of a first transmitted symbol and a second transmitted symbol. A symbol component D1 in the first transmitted symbol is the same as a symbol component D2 in the second transmitted symbol, an end location of the symbol component D1 in the first transmitted symbol is a first reference point, and an end location of the symbol component D2 in the second transmitted symbol is a second reference point.
The first reference point represents an end location of a transmitted symbol, and the second reference point represents a location from which a CP is obtained through truncation in a transmitted symbol. That the symbol component D1 in the first transmitted symbol is the same as the symbol component D2 in the second transmitted symbol includes: the content included in the symbol component D1 and the symbol component D2 is the same, and the time lengths of the symbol component D1 and the symbol component D2 are the same. That the content included in the symbol component D1 and the symbol component D2 is the same may be understood as follows: the complex-valued symbols separately corresponding to the symbol component D1 and the symbol component D2 are the same before the symbol components are generated. It should be noted that, in the foregoing descriptions with reference to FIG. 3, "a symbol component D1 in the first transmitted symbol is the same as a symbol component D2 in the second transmitted symbol" is mentioned. Herein, "the same" is not necessarily "absolutely the same", and may alternatively indicate "approximately the same". It should be understood that there may be a slight deviation between the symbol component D1 in the first transmitted symbol and the symbol component D2 in the second transmitted symbol because of an effect of filtering. The symbol processing method provided in this application may be implemented by a transmit end, for example, may be implemented by a transmitter or a circuit used to implement a transmitter. The transmitted symbol in this embodiment of this application may be an uplink waveform symbol and/or a downlink waveform symbol in a communication system. FIG. 4 shows receiving cases, at a receive end (corresponding to the transmit end), of the first transmitted symbol and the second transmitted symbol shown in FIG. 3 (after the first transmitted symbol and the second transmitted symbol are transmitted through a channel).
Case 1 indicates that a maximum multipath delay of the channel does not exceed the CP length. In Case 1, because of the protection of the CP, the receive window of each transmitted symbol is free of ISI from any other transmitted symbol. As shown in FIG. 4, the receive window of the second transmitted symbol does not include a component of the first transmitted symbol, so that ISI from the first transmitted symbol is avoided. Case 2 indicates that a maximum multipath delay of the channel exceeds the CP length. In Case 2, because the channel delay exceeds the CP length, the receive window of one transmitted symbol may include part of another transmitted symbol. As shown in FIG. 4, the symbol component D1 in the first transmitted symbol enters the receive window of the second transmitted symbol. However, because the symbol component D1 in the first transmitted symbol is the same as the symbol component D2 in the second transmitted symbol, the fact that the symbol component D1 of the first transmitted symbol enters the receive window of the second transmitted symbol is equivalent to the symbol component D2 of the second transmitted symbol entering that window. In addition, because the symbol component D2 and the symbol component used as the CP are consecutive in the second transmitted symbol, and based on the principle that a CP of the second transmitted symbol does not cause ISI to the second transmitted symbol, the symbol component D1 that is in the first transmitted symbol and that enters the receive window of the second transmitted symbol does not cause ISI to the second transmitted symbol either.
It can be learned from FIG. 3 and FIG. 4 that even if the CP length is fixed, the length of the symbol component D1 may be flexibly configured (which equivalently means that the length of the symbol component D2 is flexibly configured), so that the sum of the length of the symbol component D1 and the CP length is greater than the channel delay, and therefore the multipath effect of the channel can be resisted. For better understanding instead of limitation, in the examples of FIG. 3 and FIG. 4, the symbol component D1 in the first transmitted symbol and the CP of the second transmitted symbol may be considered together as an equivalent guard interval between the first transmitted symbol and the second transmitted symbol, as shown in FIG. 3. In this application, for the first transmitted symbol and the second transmitted symbol that are consecutive in time domain, the symbol component whose end location is the first reference point in the first transmitted symbol and the symbol component whose end location is the second reference point in the second transmitted symbol are made the same, so that a guard interval between symbols can be flexibly configured by controlling the length of the common symbol component between the first transmitted symbol and the second transmitted symbol. In addition, it should be understood that the length of the common symbol component between the first transmitted symbol and the second transmitted symbol does not affect the frame structure of the transmitted symbol. Therefore, for users with different channel conditions, common symbol components with different lengths are configured, so that a guard interval between symbols can be flexibly configured, and (frequency division, spatial, and time division) multiplexing can also be performed between users for which different guard intervals are configured.
Therefore, in this application, when a CP length is fixed, a guard interval between symbols can be flexibly configured, and the length of the guard interval can also be flexibly configured based on a user requirement. It should be noted that all lengths, for example, a CP length and a length of a symbol component, mentioned in this specification are time lengths. It should be further noted that, for ease of understanding and description instead of limitation, in this specification, three reference points are defined for the transmitted symbol: the first reference point, the second reference point, and a third reference point (to be mentioned in the following embodiments), as shown in FIG. 3. The first reference point represents an end location of a transmitted symbol, the second reference point represents a location from which a CP is obtained through truncation in a transmitted symbol, and the third reference point represents a start location of a transmitted symbol. The start location of the transmitted symbol refers to the start of the symbol itself, not the start location of the CP of the transmitted symbol. Alternatively, it may be understood from another perspective that the start location of the transmitted symbol is the end location of the CP of the transmitted symbol. It should be further noted that all locations mentioned in this specification are time domain locations. With reference to FIG. 3 and FIG. 4, the foregoing describes time domain structures of transmitted symbols (for example, the first transmitted symbol and the second transmitted symbol described above) sent by the transmit end. The following describes content related to a process of generating a transmitted symbol. FIG. 5 is a schematic diagram of a basic procedure of symbol processing according to an embodiment of this application. As shown in FIG. 5, the procedure includes the following steps S320, S340, and S360.
S320: Group a plurality of complex-valued symbols to obtain a plurality of sets, where each set corresponds to one transmitted symbol. Alternatively stated, a plurality of complex-valued symbols are divided into a plurality of sets, and each set corresponds to one transmitted symbol. The plurality of complex-valued symbols may include a modulated symbol obtained by modulating an encoded bit stream. A modulation scheme for modulating the encoded bit stream may include pi/2-binary phase shift keying (BPSK), quadrature phase shift keying (QPSK), 16 quadrature amplitude modulation (16QAM), 64QAM, 256QAM, phase shift keying (PSK), amplitude phase shift keying (APSK), non-uniform QAM, or the like. Alternatively, the plurality of complex-valued symbols may further include a reference signal sampling point. For example, the reference signal sampling point may include a phase tracking reference signal (PTRS) sampling point. That each set corresponds to one transmitted symbol means that each finally generated transmitted symbol is generated based on a corresponding set. S340: Perform a complex-valued symbol copy operation on the plurality of sets obtained in step S320. S360: Add a CP based on the sets obtained through the copy operation in step S340, to obtain a transmitted symbol (or add a CP based on the sets obtained through the copy operation in step S340, and then perform another operation to obtain a transmitted symbol, where the other operation described herein includes but is not limited to fast Fourier transform, carrier mapping, sampling, filtering, or the like). For two sets corresponding to two transmitted symbols that are consecutive in time domain, the copy operation in step S340 enables the two sets to have some same complex-valued symbols. The following describes the copy operation in step S340.
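As an illustrative sketch of steps S320 and S360 (the function names, and the use of NumPy, are our own; the application does not prescribe an implementation), grouping and CP addition might look as follows:

```python
import numpy as np

def group_into_sets(symbols, M):
    """Step S320 (sketch): split a stream of complex-valued symbols
    into sets of M symbols; each set yields one transmitted symbol."""
    assert len(symbols) % M == 0, "stream length must be a multiple of M"
    return [np.asarray(symbols[i:i + M]) for i in range(0, len(symbols), M)]

def add_cp(time_symbol, P):
    """Step S360 (sketch): prepend the last P samples of the symbol
    as a cyclic prefix truncated from the symbol tail."""
    return np.concatenate([time_symbol[-P:], time_symbol])
```

The copy operation of step S340 would run on the sets returned by group_into_sets before add_cp is applied (or before the DFT in the application scenario 1 described below).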
For ease of distinguishing instead of limitation, the following agreement on names of signals is made in this specification: Signals to be grouped (or divided) into sets are referred to as complex-valued symbols. A signal obtained by grouping (or dividing) complex-valued symbols is referred to as a set. A set including some of the complex-valued symbols in a set is referred to as a subset. A signal sent by a transmit end is referred to as a transmitted symbol. It should be understood that the names are only for ease of understanding and distinguishing instead of limitation. For example, in a future technology evolution process, signals obtained at different stages of a symbol generation procedure may have other names. The transmitted symbol in this embodiment of this application may be a symbol of a single-carrier waveform. For example, the transmitted symbol is a DFT-s-OFDM symbol. The DFT-s-OFDM symbol represents a single-carrier symbol whose waveform is a DFT-s-OFDM waveform. For another example, the transmitted symbol is an SC-QAM symbol. The SC-QAM symbol represents a single-carrier symbol whose waveform is an SC-QAM waveform. In the following descriptions, a wireless communication system to which the DFT-s-OFDM waveform is applicable in this application is denoted as an application scenario 1, and a wireless communication system to which the SC-QAM waveform is applicable in this application is denoted as an application scenario 2. Optionally, an application scenario of this application is the application scenario 1; in other words, the transmitted symbol is a DFT-s-OFDM symbol. As shown in FIG. 6, step S350 may be further included between step S340 and step S360. Step S350 includes a discrete Fourier transform (DFT) operation and an inverse fast Fourier transform (IFFT) operation.
For example, in step S350, a transmitter performs M-point DFT on the sets obtained through the copy operation, then maps the M frequency domain elements to M consecutive subcarriers (not shown in FIG. 6), and performs IFFT on the frequency domain signal after the subcarrier mapping. The DFT may also be referred to as frequency domain precoding. In the embodiment shown in FIG. 6, step S360 includes: adding a CP to the signal obtained through the IFFT to obtain a DFT-s-OFDM symbol. Optionally, step S350 further includes a frequency domain spectrum shaping (FDSS) operation. For example, in step S350, the transmitter performs M-point DFT on the sets obtained through the copy operation, performs cyclic extension and frequency domain filtering (namely, the FDSS operation) on the M frequency domain elements obtained through the DFT, then maps the frequency domain elements obtained through the FDSS operation to M1 (M1 ≥ M) consecutive subcarriers, and performs IFFT on the frequency domain signal after the subcarrier mapping. It should be understood that, in the application scenario 1, the copy operation on the sets is performed before the DFT. Optionally, an application scenario of this application is the application scenario 2; in other words, the transmitted symbol is an SC-QAM symbol. As shown in FIG. 7, step S360 includes not only the operation of adding a CP but also upsampling and filtering. For example, in step S360, a transmitter adds a CP to the sets obtained through the copy operation, to obtain a signal with the CP added, and then performs upsampling and filtering on that signal, to finally obtain an SC-QAM symbol. It should be understood that, in the application scenario 2, the copy operation on the sets is performed before the CP is added. FIG. 8 is a schematic flowchart of a symbol processing method according to an embodiment of this application.
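A minimal NumPy sketch of the application scenario 1 chain just described (M-point DFT, mapping to M consecutive subcarriers, N-point IFFT, CP addition); the scaling factors and the subcarrier offset k0 are our illustrative choices, and the optional FDSS operation is omitted:

```python
import numpy as np

def dft_s_ofdm_symbol(x, N, P, k0=0):
    """Steps S350 + S360 (sketch): generate one DFT-s-OFDM symbol
    from an M-point set x of complex-valued symbols."""
    M = len(x)
    X = np.fft.fft(x) / np.sqrt(M)      # M-point DFT (frequency domain precoding)
    grid = np.zeros(N, dtype=complex)
    grid[k0:k0 + M] = X                 # map to M consecutive subcarriers
    t = np.fft.ifft(grid) * np.sqrt(N)  # N-point IFFT
    return np.concatenate([t[-P:], t])  # add a P-sample CP
```

The copy operation of step S340 (detailed below) would already have been applied to x before this function runs.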
The method includes the following steps S810 to S830. S810: Obtain a plurality of complex-valued symbols. The plurality of complex-valued symbols may include a modulated symbol obtained by modulating an encoded bit stream. Optionally, the plurality of complex-valued symbols may further include a reference signal sampling point. For example, the reference signal sampling point may include a PTRS sampling point. S820: Divide the plurality of complex-valued symbols into a plurality of sets, or in other words, group the plurality of complex-valued symbols to obtain a plurality of sets. Each set corresponds to one transmitted symbol. For example, each set corresponds to one DFT-s-OFDM symbol or SC-QAM symbol. Each set may include several complex-valued symbols. For example, each set may be considered as one multidimensional time domain vector, and the complex-valued symbols in the set may be considered as elements in the time domain vector. Step S820 corresponds to step S320 shown in FIG. 5, FIG. 6, and FIG. 7. S830: Perform a copy operation on the plurality of sets, so that two sets corresponding to two transmitted symbols that are consecutive in time domain have some same complex-valued symbols. Step S830 corresponds to step S340 shown in FIG. 5 to FIG. 7. It should be understood that, in the application scenario 1, step S830 is performed before DFT, and in the application scenario 2, step S830 is performed before a CP is added. It should be further understood that, because two sets corresponding to two transmitted symbols that are consecutive in time domain have some same complex-valued symbols, the time domain structures, shown in FIG. 3, of the two transmitted symbols corresponding to the two sets can be implemented to some extent. Therefore, in this embodiment of this application, a copy operation is performed on two sets corresponding to two transmitted symbols that are consecutive in time domain, so that the two sets have same complex-valued symbols.
This helps generate transmitted symbols whose time domain structures are shown in FIG. 3. In this way, a guard interval between symbols can be flexibly configured when a CP length is fixed. For ease of understanding and description, in this application, a first transmitted symbol and a second transmitted symbol that are consecutive in time domain are used as examples for description. It should be understood that the symbol processing method provided in this application is applicable to any two or more transmitted symbols that are consecutive in time domain in a signal stream transmitted by a transmit end. In step S830, a copy operation may be performed on the sets in a plurality of manners. This is described below. For ease of understanding and description of the copy operation on the sets, the following first describes an association relationship between a set and a transmitted symbol. For ease of description instead of limitation, three reference points are defined for the transmitted symbol: a first reference point, a second reference point, and a third reference point, as shown in FIG. 3. The first reference point represents an end location of a transmitted symbol, the second reference point represents a location from which a CP is obtained through truncation in a transmitted symbol, and the third reference point represents a start location of a transmitted symbol. The following separately uses the application scenario 1 and the application scenario 2 as examples for description. (1) Application scenario 1: The transmitted symbol is a DFT-s-OFDM symbol. In the application scenario 1, a symbol processing procedure is shown in FIG. 6. Assuming (denoted as an assumption 1) that the transmit end performs DFT on M points, the dimension of the time domain vector on which DFT needs to be performed should be M. The time domain vector may be denoted as follows: x_l = [x_l(0), x_l(1), . . . , x_l(M−1)]^T, where x_l represents the time domain vector.
The time domain vector x_l includes M elements, and the time domain indexes of the first element to the last element are 0, 1, . . . , and M−1. Assuming (denoted as an assumption 2) that the transmit end performs DFT on M points (consistent with the assumption 1), the size of the IFFT performed by the transmit end is N, and the quantity of sampling points of the CP is P, the quantity of points that is equivalent, before the DFT, to the length occupied by the CP is K = (P/N)·M. If P is not divisible by N, the calculation result of K is a non-integer. In this case, rounding needs to be performed on the calculation result of K, that is, K = └(P/N)·M┘, where └(P/N)·M┘ represents rounding down (P/N)·M. It should be understood that the rounding manner herein may alternatively be rounding up, rounding off, or the like. It should be understood that the quantity P of sampling points of the CP may be obtained based on the CP length. Based on the foregoing assumptions, the time domain index of the first reference point of the DFT-s-OFDM symbol in the time domain vector x_l is M−1, the time domain index of the second reference point of the DFT-s-OFDM symbol in the time domain vector x_l is M−K−1, and the time domain index of the third reference point of the DFT-s-OFDM symbol in the time domain vector x_l is 0. (2) Application scenario 2: The transmitted symbol is an SC-QAM symbol. As shown in FIG. 7, in a process of generating the SC-QAM symbol, neither DFT nor IFFT is included before a CP is added, and upsampling and filtering are performed after the CP is added. Therefore, the quantity of points in a time domain vector that is equivalent to the CP length may be directly obtained based on the CP length. Therefore, a transmitter may directly obtain an equivalent CP length value K1, which may be obtained based on the CP length.
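The conversion K = └(P/N)·M┘ from CP sampling points to pre-DFT points can be sketched as follows (the function name and the numeric values in the usage note are illustrative, not taken from the application):

```python
def equivalent_cp_points(P, N, M):
    """K = floor((P / N) * M): the CP length of P post-IFFT sampling
    points expressed in pre-DFT points, rounded down when P*M is not
    divisible by N."""
    return (P * M) // N
```

For instance, with P = 144, N = 2048, and M = 256, K is exactly 18; with M = 300 instead, (P/N)·M ≈ 21.09, so rounding down gives K = 21.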
Assuming that the time domain vector before a CP is added is the M-dimensional time domain vector x_l described in the application scenario 1, and the equivalent CP length value is K1, the time domain index of the first reference point of the SC-QAM symbol in the time domain vector x_l is M−1, the time domain index of the second reference point of the SC-QAM symbol in the time domain vector x_l is M−K1−1, and the time domain index of the third reference point of the SC-QAM symbol in the time domain vector x_l is 0. As described above, a set may be considered as a multidimensional time domain vector. The time domain vector x_l in the foregoing example may represent a time domain vector corresponding to a set. An element in the time domain vector x_l corresponds to a complex-valued symbol in the set. A time domain index of an element in the time domain vector x_l corresponds to the location, in the set, of the corresponding complex-valued symbol. The foregoing example indicates that there is a correspondence between a time domain index of an element in the time domain vector x_l and a time domain location (for example, the first reference point, the second reference point, or the third reference point) in the transmitted symbol corresponding to the time domain vector x_l. Therefore, there is also a correspondence between a location of a complex-valued symbol in a set and a location in the transmitted symbol corresponding to the set. As shown in FIG. 9, optionally, in the embodiment shown in FIG. 8, the plurality of sets obtained in step S820 include a first set corresponding to the first transmitted symbol and a second set corresponding to the second transmitted symbol, the first transmitted symbol and the second transmitted symbol are consecutive in time domain, and the first transmitted symbol is located before the second transmitted symbol. Step S830 includes step S831.
Step S831: Perform a first copy operation on the first set and the second set, so that both the first set and the second set have first complex-valued symbols, an end location of a first subset E11 including the first complex-valued symbols in the first set corresponds to a first reference point of the first transmitted symbol, and an end location of a second subset E21 including the first complex-valued symbols in the second set corresponds to a second reference point of the second transmitted symbol, where the first reference point represents an end location of a transmitted symbol, and the second reference point represents a location from which a CP is obtained through truncation in a transmitted symbol. It should be understood that the first complex-valued symbols may include a plurality of complex-valued symbols. For example, a time domain vector corresponding to the first subset E11 is the following subvector x_l[1] in the time domain vector x_l corresponding to the first set: x_l[1] = [x_l(M−Ml1), x_l(M−Ml1+1), . . . , x_l(M−1)]^T; and a time domain vector corresponding to the second subset E21 is the following subvector x_l+1[2] in the time domain vector x_l+1 corresponding to the second set: x_l+1[2] = [x_l+1(M−Ml1−K), x_l+1(M−Ml1−K+1), . . . , x_l+1(M−K−1)]^T, where M represents the dimensions of the time domain vector x_l corresponding to the first set and the time domain vector x_l+1 corresponding to the second set, Ml1 represents the length of the subset of same complex-valued symbols included in both the first set and the second set, M−K−1 represents the time domain index of the second reference point of the second transmitted symbol in the time domain vector corresponding to the second set, and the value of K is related to the CP length. For example, the value of K is determined based on the CP length. It should be understood that in the application scenario 1, K in this example is calculated based on K = (P/N)·M (refer to the foregoing descriptions).
In the application scenario 2, K in this example is equal to the equivalent CP length value K1. It should be understood that in this example, the subvector x_l[1] in the time domain vector x_l corresponding to the first set is the same as the elements included in the first subset E11 in the first set, and the subvector x_l+1[2] in the time domain vector x_l+1 corresponding to the second set is the same as the elements included in the second subset E21 in the second set. It can be learned from the association relationship, described above, between a set and a transmitted symbol that, by performing the copy operation in step S831 on the first set and the second set, the first transmitted symbol corresponding to the first set and the second transmitted symbol corresponding to the second set may have the time domain structures shown in FIG. 3. For example, if the time domain structures of the first transmitted symbol corresponding to the first set and the second transmitted symbol corresponding to the second set are shown in FIG. 3, the first subset E11 in the first set may correspond to the symbol component D1 in the first transmitted symbol, and the second subset E21 in the second set may correspond to the symbol component D2 in the second transmitted symbol. Therefore, in this embodiment of this application, a copy operation is performed on the first set and the second set corresponding to the first transmitted symbol and the second transmitted symbol that are consecutive in time domain, so that both the first set and the second set have the first complex-valued symbols.
In addition, an end location of the subset including the first complex-valued symbols in the first set corresponds to the first reference point of the first transmitted symbol, and an end location of the subset including the first complex-valued symbols in the second set corresponds to the second reference point of the second transmitted symbol, so that the first transmitted symbol and the second transmitted symbol whose time domain structures are shown in FIG. 3 can be generated based on the first set and the second set obtained through the copy operation. Therefore, in this embodiment of this application, a guard interval between symbols can be flexibly configured when a CP length is fixed. It should be noted that the first copy operation performed on the first set and the second set in step S831 may also be understood as mapping the first complex-valued symbols in the first set and the second set. Optionally, in an implementation of step S831, the first complex-valued symbols in the first set are copied into the second set. For example, a time domain vector corresponding to the first subset including the first complex-valued symbols copied in the first set is x_l[1] described above, and a time domain vector corresponding to the second subset including the first complex-valued symbols copied into the second set from the first set is x_l+1[2] described above. This copying manner may be referred to as backward copying. For example, that the time domain structures of the first transmitted symbol corresponding to the first set and the second transmitted symbol corresponding to the second set are shown in FIG. 3 may equivalently mean that the symbol component D1 in the first transmitted symbol is copied into the second transmitted symbol and the symbol component D2 in the second transmitted symbol comes from the symbol component D1 in the first transmitted symbol.
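Backward copying as just described can be sketched with NumPy slicing: the last Ml1 elements of the first set (subset E11, ending at the first reference point) are written into the second set so that they end at index M−K−1 (the second reference point). Function and variable names here are our own:

```python
import numpy as np

def backward_copy(first_set, second_set, Ml1, K):
    """First copy operation, backward-copying variant (sketch):
    copy x_l[1] = first_set[M-Ml1 : M] into the positions of
    x_l+1[2] = second_set[M-Ml1-K : M-K]."""
    M = len(first_set)
    out = np.asarray(second_set).copy()
    out[M - Ml1 - K : M - K] = first_set[M - Ml1 : M]
    return out
```

Forward copying is the mirror operation: the same slice of the second set overwrites the tail of the first set instead.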
Optionally, in another implementation of step S831, the first complex-valued symbols in the second set are copied into the first set. For example, a time domain vector corresponding to the second subset including the first complex-valued symbols copied in the second set is x_l+1[2] described above, and a time domain vector corresponding to the first subset including the first complex-valued symbols copied into the first set from the second set is x_l[1] described above. This copying manner may be referred to as forward copying. For example, that the time domain structures of the first transmitted symbol corresponding to the first set and the second transmitted symbol corresponding to the second set are shown in FIG. 3 may equivalently mean that the symbol component D2 in the second transmitted symbol is copied into the first transmitted symbol and the symbol component D1 in the first transmitted symbol comes from the symbol component D2 in the second transmitted symbol. For the embodiment shown in FIG. 9, in an ideal case, the first transmitted symbol generated based on the first set and the second transmitted symbol generated based on the second set may have the time domain structures shown in FIG. 3; in other words, the symbol component D1 whose end location is the first reference point in the first transmitted symbol is the same as the symbol component D2 whose end location is the second reference point in the second transmitted symbol. However, in some cases, an extension effect of a guard interval between symbols may be poor because of an effect of filtering.
For example, the time domain structures of the first transmitted symbol and the second transmitted symbol that are generated based on the first set and the second set in the embodiment in FIG. 9 are shown in FIG. 10, and a symbol component D1′ whose end location is the first reference point in the first transmitted symbol is different from a symbol component D2′ whose end location is the second reference point in the second transmitted symbol. Consequently, the CP of the second transmitted symbol and the symbol component D1′ in the first transmitted symbol are non-consecutive, affecting extension of a guard interval between symbols. To resolve this problem, this application provides the following embodiments. As shown in FIG. 11, optionally, in an embodiment, in the embodiment shown in FIG. 9, after step S831, the method further includes step S840. Step S840: Separately perform a cyclic shift on the first set and the second set based on a same step and direction, so that the end location of the first subset E11 corresponds to a location following the first reference point of the first transmitted symbol, and the end location of the second subset E21 corresponds to a location following the second reference point of the second transmitted symbol. It should be understood that in step S840, because a cyclic shift is performed on the first set and the second set based on the same step and direction, a symbol component whose end location is the first reference point in the first transmitted symbol can be the same as a symbol component whose end location is the second reference point in the second transmitted symbol. For example, the time domain structures of the first transmitted symbol corresponding to the first set and the second transmitted symbol corresponding to the second set are shown in FIG. 12.
The symbol component D1 in the first transmitted symbol is the same as the symbol component D2 in the second transmitted symbol, and an end location of the symbol component D1 is located at a location following the first reference point of the first transmitted symbol; in other words, the end location of the symbol component D1 is located at a header of the first transmitted symbol, and an end location of the symbol component D2 is located at a location following the second reference point of the second transmitted symbol. The first subset E11 in the first set obtained through the cyclic shift in step S840 corresponds to the symbol component D1 in the first transmitted symbol, and the second subset E21 in the second set obtained through the cyclic shift in step S840 corresponds to the symbol component D2 in the second transmitted symbol. As shown in FIG. 12, a symbol component D11 whose end location is the first reference point in the first transmitted symbol is the same as a symbol component D21 whose end location is the second reference point in the second transmitted symbol. An execution window of step S840 varies with the application scenario. In the application scenario 1, an execution occasion of step S840 may include occasion 1, 2, or 3 shown in FIG. 6. Optionally, as indicated by occasion 1 in FIG. 6, step S840 is performed before DFT. For example, after the copy operation in step S831 is performed on the first set and the second set, the cyclic shift described in step S840 is further performed on the first set and the second set before the DFT. Optionally, as indicated by occasion 2 in FIG. 6, step S840 is performed after DFT and before IFFT.
For example, after the copy operation in step S831 is performed on the first set and the second set, DFT is performed on the first set and the second set to separately obtain frequency domain signals of the first set and the second set, and then frequency domain weighting is performed on the frequency domain signals of the first set and the second set, to equivalently implement the cyclic shift described in step S840. Optionally, as indicated by occasion 3 in FIG. 6, step S840 is performed after IFFT and before a CP is added. For example, after IFFT is performed, time domain symbols of the first set and the second set are obtained, and then a cyclic shift is performed on the time domain symbols of the first set and the second set. In the application scenario 2, an execution occasion of step S840 may be occasion 4 shown in FIG. 7; that is, step S840 is performed after the copy operation and before a CP is added. Therefore, in this embodiment, by performing a cyclic shift on the sets obtained through the copy operation, the impact on extension of a guard interval between symbols that is caused by an effect of filtering can be reduced to some extent. Optionally, in another embodiment, in the embodiment shown in FIG. 9, the method further includes: performing a second copy operation on the first set and the second set, so that both the first set and the second set have second complex-valued symbols, a start location of a third subset E12 including the second complex-valued symbols in the first set corresponds to a third reference point of the first transmitted symbol, and a start location of a fourth subset E22 including the second complex-valued symbols in the second set corresponds to the second reference point of the second transmitted symbol, where the third reference point represents a start location of a transmitted symbol. It should be understood that the second complex-valued symbols may include a plurality of complex-valued symbols.
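The cyclic shift of step S840 can be sketched for two of its execution occasions: directly in time domain (occasions 1 and 3), or as frequency domain weighting between the DFT and the IFFT (occasion 2), using the DFT circular time-shift property. Function names are ours:

```python
import numpy as np

def cyclic_shift_time(x, s):
    """Occasions 1/3 (sketch): circularly shift the vector by s positions."""
    return np.roll(x, s)

def cyclic_shift_freq(x, s):
    """Occasion 2 (sketch): the same shift applied as frequency domain
    weighting, since a circular delay of s samples multiplies the k-th
    DFT bin by exp(-j*2*pi*k*s/M)."""
    M = len(x)
    k = np.arange(M)
    return np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * k * s / M))
```

Applying either function with the same step s and direction to both sets preserves the common symbol component between them, as step S840 requires.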
It should be understood that the operation in this embodiment can achieve the same effect as that in the embodiment shown in FIG. 11. Therefore, the first transmitted symbol and the second transmitted symbol whose time domain structures are shown in FIG. 12 can also be generated based on the first set and the second set in this embodiment. It should be noted that the second copy operation performed on the first set and the second set in this embodiment may also be understood as mapping the second complex-valued symbols in the first set and the second set. Therefore, in this embodiment, the impact on extension of a guard interval between symbols that is caused by an effect of filtering can also be reduced to some extent. It should be understood that the second copy operation performed on the first set and the second set and the first copy operation performed on the first set and the second set in step S831 may be separately performed, or may be combined into one copy operation for execution. As shown in FIG. 13, optionally, in another embodiment, in the embodiment shown in FIG. 8, step S830 includes step S832. Step S832: Perform a copy operation on the first set and the second set, so that both the first set and the second set have third complex-valued symbols, an end location of a subset E1 including the third complex-valued symbols in the first set corresponds to a location following the first reference point of the first transmitted symbol, and an end location of a subset E2 including the third complex-valued symbols in the second set corresponds to a location following the second reference point of the second transmitted symbol. It should be understood that the third complex-valued symbols may include a plurality of complex-valued symbols. It should be noted that the copy operation performed on the first set and the second set in this embodiment may also be understood as mapping the third complex-valued symbols in the first set and the second set.
It should be understood that the first transmitted symbol and the second transmitted symbol whose time domain structures are shown in FIG. 12 may be generated based on the first set and the second set in this embodiment. The subset E1 in the first set corresponds to the symbol component D1 in the first transmitted symbol, and the subset E2 in the second set corresponds to the symbol component D2 in the second transmitted symbol. It should be understood that step S832 may also include two manners of forward copying and backward copying. Specific descriptions are similar to those of forward copying and backward copying described above. For brevity, details are not described herein again. For example, when backward copying is used in step S832, in the example of FIG. 12, the symbol component D1 in the first transmitted symbol is copied into the second transmitted symbol, and the symbol component D2 in the second transmitted symbol comes from the symbol component D1 in the first transmitted symbol, where a symbol component D21 comes from a symbol component D11 in the first transmitted symbol, and a symbol component D22 comes from a symbol component D12 in the first transmitted symbol. For example, when forward copying is used in step S832, in the example of FIG. 12, the symbol component D2 in the second transmitted symbol is copied into the first transmitted symbol, and the symbol component D1 in the first transmitted symbol comes from the symbol component D2 in the second transmitted symbol, where a symbol component D11 comes from a symbol component D21 in the second transmitted symbol, and a symbol component D12 comes from a symbol component D22 in the second transmitted symbol. It should be understood that, in this embodiment, impact on extension of a guard interval between symbols that is caused by an effect of filtering can also be reduced to some extent. This embodiment of this application may be applied to generating a DFT-s-OFDM symbol and an SC-QAM symbol.
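The two copy manners can be sketched in NumPy as follows. For simplicity, the sketch aligns the ends of the copied components with the reference points themselves (as in step S831); the set length M, CP-related offset K, and component length L are illustrative assumptions, not values defined by this application:

```python
import numpy as np

M, K, L = 12, 3, 2  # set length, CP-related offset, component length (illustrative)
rng = np.random.default_rng(0)
set1 = rng.standard_normal(M)  # corresponds to the first transmitted symbol
set2 = rng.standard_normal(M)  # corresponds to the second transmitted symbol

# Backward copying: the tail component of the first set (end location = first
# reference point) overwrites the component of the second set that ends just
# before the CP truncation point (index M - K).
s2 = set2.copy()
s2[M - L - K : M - K] = set1[M - L :]

# Forward copying: the component of the second set that ends just before the
# CP truncation point overwrites the tail of the first set.
s1 = set1.copy()
s1[M - L :] = set2[M - L - K : M - K]

# Either direction leaves the two sets sharing the same component values.
assert np.allclose(s2[M - L - K : M - K], set1[M - L :])
assert np.allclose(s1[M - L :], set2[M - L - K : M - K])
```

Which direction is used determines whether the first or the second symbol keeps its original samples unchanged, which matters when one of the two is a reference signal.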
In addition, this application may be further applied to generating a reference signal such as a DMRS. For a DFT-s-OFDM waveform, a reference signal is usually generated in frequency domain, and subcarrier mapping is directly performed. In a possible implementation, when the transmitted symbol is a reference signal, the copy operation in this embodiment of this application is performed after a sampling value of a time domain sequence converted from a reference signal frequency domain sequence is calculated. Considering that the reference signal is used to estimate a channel, to ensure channel estimation performance, a time domain sampling point including another symbol is not desirable in the reference signal. Optionally, in this embodiment of this application, if a transmitted symbol x is a reference signal, and transmitted symbols preceding and following the transmitted symbol x are non-reference signals, forward copying is performed in step S830 for transmitted symbols preceding the transmitted symbol x (including the reference signal x), and backward copying is performed in step S830 for transmitted symbols following the transmitted symbol x (including the reference signal x). Optionally, in the foregoing embodiments of backward copying, the first transmitted symbol may be a reference signal, and the second transmitted symbol is a non-reference signal. For example, the first transmitted symbol is a DMRS. Optionally, in the foregoing embodiments of forward copying, the first transmitted symbol is a non-reference signal, and the second transmitted symbol may be a reference signal. For example, the second transmitted symbol is a DMRS. In this embodiment, a guard interval between symbols can be flexibly configured, and integrity of a reference signal can also be ensured. FIG. 14 is a schematic diagram of time domain structures of three transmitted symbols (1, 2, and 3).
The transmitted symbol 2 is a DMRS, the transmitted symbol 1 and the transmitted symbol 3 are non-reference signals, a copy relationship between a set 2 corresponding to the transmitted symbol 2 and a set 1 corresponding to the transmitted symbol 1 is forward copying, and a copy relationship between the set 2 corresponding to the transmitted symbol 2 and a set 3 corresponding to the transmitted symbol 3 is backward copying. It can be learned from FIG. 14 that a symbol component R1 whose end location is a second reference point in the transmitted symbol 2 is copied into the transmitted symbol 1, an end location of the symbol component R1 copied into the transmitted symbol 1 is a first reference point of the transmitted symbol 1, a symbol component R2 whose end location is a first reference point in the transmitted symbol 2 is copied into the transmitted symbol 3, and an end location of the symbol component R2 copied into the transmitted symbol 3 is a second reference point of the transmitted symbol 3. As shown in FIG. 14, the copy relationship between the transmitted symbol 2 and the symbol 1 is forward copying. To be specific, the symbol component R1 in the DMRS is forward copied into the symbol 1, and by analogy, a symbol component D0 in the symbol 1 is forward copied into a symbol (not shown in FIG. 14) preceding the symbol 1. The copy relationship between the DMRS and the symbol 3 is backward copying. To be specific, the symbol component R2 in the DMRS is backward copied into the symbol 3, and by analogy, a symbol component D3 in the symbol 3 is copied into a symbol (not shown in FIG. 14) following the symbol 3. It should be understood that, in this embodiment, a guard interval between symbols can be flexibly configured, and accuracy of a reference signal can also be ensured, so that channel estimation performance can be ensured. When two or more transmitted symbols that are consecutive in time domain are reference signals, this application provides a symbol processing method.
The method includes the following steps.
Step (1): Obtain a plurality of complex-valued symbols.
Step (2): Divide the plurality of complex-valued symbols into a plurality of sets, where the plurality of sets include a first set and a second set that are the same, the first set corresponds to a first transmitted symbol, the second set corresponds to a second transmitted symbol, the first transmitted symbol and the second transmitted symbol are consecutive in time domain, and the first transmitted symbol is located before the second transmitted symbol. It should be understood that a time domain vector corresponding to the first set is the same as a time domain vector corresponding to the second set.
Step (3): Perform a cyclic shift on the first set and/or the second set, so that complex-valued symbols in a first subset in the first set are the same as complex-valued symbols in a second subset in the second set, where an end location of the first subset corresponds to a first reference point of the first transmitted symbol, and an end location of the second subset corresponds to a second reference point of the second transmitted symbol. The first reference point represents an end location of a transmitted symbol, and the second reference point represents a location from which a CP is obtained through truncation in a transmitted symbol.
Optionally, the transmitted symbols corresponding to the first set and the second set are reference signals such as DMRSs. It should be understood that, in this embodiment, no copy operation is performed on the first set and the second set, and when the transmitted symbols corresponding to the first set and the second set are reference signals, a guard interval between the reference signals can be flexibly configured while accuracy of the reference signals is ensured.
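Steps (1) to (3) can be sketched as follows: because the two sets are identical, a cyclic shift alone (no copy) aligns the subsets. In the sketch, the set length M, the CP-related offset K, and the subset length L are illustrative assumptions, and the second reference point is taken to sit at time domain index M − K − 1, consistent with the index convention used elsewhere in this application:

```python
import numpy as np

M, K, L = 12, 3, 2  # set length, CP-related offset, subset length (illustrative)
rng = np.random.default_rng(2)
base = rng.standard_normal(M)

set1 = base.copy()        # first set (reference signal, e.g. a DMRS)
set2 = np.roll(base, -K)  # second set: cyclic left shift by K, no copy operation

# First subset: end location = first reference point (end of the first symbol).
first_subset = set1[M - L : M]
# Second subset: end location = second reference point (CP truncation point,
# time domain index M - K - 1 in the second set).
second_subset = set2[M - L - K : M - K]

assert np.allclose(first_subset, second_subset)
```

Because only a shift is applied, every sample of each reference signal still comes from the reference sequence itself, which is why accuracy of the reference signals is preserved.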
Optionally, in the foregoing embodiment related to FIG. 8, the method further includes: processing the plurality of sets obtained through the copy operation, to obtain a plurality of transmitted symbols, where the plurality of transmitted symbols include the first transmitted symbol and the second transmitted symbol, the first transmitted symbol and the second transmitted symbol are consecutive in time domain, and the first transmitted symbol is located before the second transmitted symbol. A symbol component whose end location is the first reference point in the first transmitted symbol is the same as a symbol component whose end location is the second reference point in the second transmitted symbol, the first reference point represents an end location of a transmitted symbol, and the second reference point represents a location from which a CP is obtained through truncation in a transmitted symbol. In this application, for the first transmitted symbol and the second transmitted symbol that are consecutive in time domain, the symbol component whose end location is the first reference point in the first transmitted symbol and the symbol component whose end location is the second reference point in the second transmitted symbol are enabled to be the same, so that a guard interval between symbols can be flexibly configured by controlling a length of the symbol component. Therefore, in this application, when a CP length is fixed, a guard interval between symbols can be flexibly configured, and a length of the guard interval can also be flexibly configured based on a user requirement. It should be understood that, in addition to the embodiments provided in this specification, any other solution that can generate transmitted symbols whose time domain structures are shown in FIG. 3 falls within the protection scope of this application. The embodiments described in this specification may be independent solutions, or may be combined based on internal logic.
These solutions all fall within the protection scope of this application. It may be understood that, in the foregoing method embodiments, an execution body may be a terminal device or a component (for example, a chip or a circuit) that may be used in a terminal device, or may be a network device or a component (for example, a chip or a circuit) that may be used in a network device. The foregoing describes the method embodiments provided in the embodiments of this application, and the following describes apparatus embodiments provided in the embodiments of this application. It should be understood that descriptions of the apparatus embodiments correspond to the descriptions of the method embodiments. Therefore, for content that is not described in detail, refer to the foregoing method embodiments. For brevity, details are not described herein again. FIG. 15 is a schematic block diagram of a symbol processing apparatus 1500 according to an embodiment of this application. The apparatus 1500 includes the following units:
an obtaining unit 1510, configured to obtain a plurality of complex-valued symbols;
a grouping unit 1520, configured to divide the plurality of complex-valued symbols into a plurality of sets, where each set corresponds to one transmitted symbol; and
a copying unit 1530, configured to perform a copy operation on the plurality of sets, so that two sets corresponding to two transmitted symbols that are consecutive in time domain have some same complex-valued symbols.
In this application, a copy operation is performed on two sets corresponding to two transmitted symbols that are consecutive in time domain, so that the two sets have same complex-valued symbols. This helps generate a first transmitted symbol and a second transmitted symbol whose time domain structures are shown in FIG. 3. Therefore, in this application, a guard interval between symbols can be flexibly configured when a CP length is fixed.
Optionally, in some embodiments, the plurality of sets include a first set corresponding to the first transmitted symbol and a second set corresponding to the second transmitted symbol, the first transmitted symbol and the second transmitted symbol are consecutive in time domain, and the first transmitted symbol is located before the second transmitted symbol. The copying unit 1530 is configured to perform a first copy operation on the first set and the second set, so that both the first set and the second set have first complex-valued symbols, an end location of a first subset including the first complex-valued symbols in the first set corresponds to a first reference point of the first transmitted symbol, and an end location of a second subset including the first complex-valued symbols in the second set corresponds to a second reference point of the second transmitted symbol, where the first reference point represents an end location of a transmitted symbol, and the second reference point represents a location from which a CP is obtained through truncation in a transmitted symbol. For example, a time domain vector corresponding to the first subset is a subvector x_l^(1) in the time domain vector corresponding to the first set:
x_l^(1) = [x_l(M − M_l1), x_l(M − M_l1 + 1), . . . , x_l(M − 1)]^T; and
a time domain vector corresponding to the second subset is a subvector x_(l+1)^(2) in the time domain vector corresponding to the second set:
x_(l+1)^(2) = [x_(l+1)(M − M_l1 − K), x_(l+1)(M − M_l1 − K + 1), . . . , x_(l+1)(M − K − 1)]^T,
where M represents dimensions of the time domain vectors separately corresponding to the first set and the second set, M_l1 represents a length of a subset including the first complex-valued symbols, M − K − 1 represents a time domain index of the second reference point of the second transmitted symbol in the time domain vector corresponding to the second set, and a value of K is related to a CP length.
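The index conventions above can be checked numerically. The NumPy sketch below (M, M_l1, and K are illustrative values, not defined by this application) performs the first copy operation in the backward direction, adds a CP of length K to each symbol, and verifies the guard-interval effect: the tail of the first symbol together with the CP of the second symbol forms one contiguous cyclic tail of the second symbol, so the effective guard interval grows from K to K + M_l1 while the CP length stays fixed:

```python
import numpy as np

M, K, Ml1 = 12, 3, 2  # vector dimension, CP length, copied-subset length (illustrative)
rng = np.random.default_rng(1)
s1 = rng.standard_normal(M)  # time domain vector of the first set
s2 = rng.standard_normal(M)  # time domain vector of the second set

# First copy operation (backward copying): x_l^(1) = s1[M-Ml1 : M] overwrites
# the x_(l+1)^(2) positions, i.e. indices M-Ml1-K .. M-K-1 of the second set.
s2[M - Ml1 - K : M - K] = s1[M - Ml1 : M]

# CP adding: each CP is the last K samples of its symbol, prepended.
tx = np.concatenate([s1[-K:], s1, s2[-K:], s2])

# The last Ml1 samples of symbol 1 followed by the CP of symbol 2 equal a
# contiguous cyclic tail of symbol 2 of length Ml1 + K.
boundary = tx[K + M - Ml1 : M + 2 * K]
assert np.allclose(boundary, s2[M - Ml1 - K :])
```

Increasing M_l1 lengthens this combined tail without touching K, which is the sense in which the guard interval is configurable under a fixed CP length.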
Optionally, in some embodiments, the copying unit 1530 is configured to copy the first complex-valued symbols in the first set into the second set. For example, a time domain vector corresponding to a first subset including the first complex-valued symbols copied in the first set is x_l^(1) described above, and a time domain vector corresponding to a second subset including the first complex-valued symbols copied into the second set from the first set is x_(l+1)^(2) described above. Optionally, in this embodiment, the first transmitted symbol is a reference signal, and the second transmitted symbol is a non-reference signal. Optionally, in some embodiments, the copying unit 1530 is configured to copy the first complex-valued symbols in the second set into the first set. For example, a time domain vector corresponding to a second subset including the first complex-valued symbols copied in the second set is x_(l+1)^(2) described above, and a time domain vector corresponding to a first subset including the first complex-valued symbols copied into the first set from the second set is x_l^(1) described above. Optionally, in this embodiment, the first transmitted symbol is a non-reference signal, and the second transmitted symbol is a reference signal. Optionally, in some embodiments, the apparatus 1500 further includes a shifting unit 1540, configured to separately perform a cyclic shift on the first set and the second set based on a same step and direction, so that the end location of the first subset corresponds to a location following the first reference point of the first transmitted symbol, and the end location of the second subset corresponds to a location following the second reference point of the second transmitted symbol.
Optionally, in some embodiments, the copying unit 1530 is further configured to perform a second copy operation on the first set and the second set, so that both the first set and the second set have second complex-valued symbols, a start location of a third subset including the second complex-valued symbols in the first set corresponds to a third reference point of the first transmitted symbol, and a start location of a fourth subset including the second complex-valued symbols in the second set corresponds to the second reference point of the second transmitted symbol, where the third reference point represents a start location of a transmitted symbol. Optionally, in some embodiments, the apparatus 1500 further includes a symbol generation unit 1550, configured to process the plurality of sets obtained through the copy operation, to obtain a plurality of transmitted symbols, where the plurality of transmitted symbols include the first transmitted symbol and the second transmitted symbol, the first transmitted symbol and the second transmitted symbol are consecutive in time domain, and the first transmitted symbol is located before the second transmitted symbol. A symbol component whose end location is the first reference point in the first transmitted symbol is the same as a symbol component whose end location is the second reference point in the second transmitted symbol, the first reference point represents an end location of a transmitted symbol, and the second reference point represents a location from which a CP is obtained through truncation in a transmitted symbol. Therefore, in this application, when a CP length is fixed, a guard interval between symbols can be flexibly configured, and a length of the guard interval can also be flexibly configured based on a user requirement.
Optionally, the obtaining unit 1510 may include a modulation subunit, configured to modulate an encoded bit stream to obtain a modulated symbol, where the modulated symbol may also be referred to as a complex-valued symbol. Optionally, the obtaining unit 1510 is configured to obtain, based on a PTRS sampling point and the modulated symbol, a plurality of complex-valued symbols for processing by the grouping unit 1520. Optionally, when the apparatus 1500 is applied to the application scenario 1, for example, as shown in FIG. 16, the symbol generation unit 1550 may include a DFT subunit, a subcarrier mapping subunit, an IFFT subunit, and a CP adding subunit. In FIG. 16, the shifting unit 1540 is located between the copying unit 1530 and the symbol generation unit 1550, but this is only one implementation. Optionally, the shifting unit 1540 may be located in the symbol generation unit 1550. For example, the shifting unit 1540 is located between the DFT subunit and the IFFT subunit. For another example, the shifting unit 1540 is located between the IFFT subunit and the CP adding subunit. Optionally, when the apparatus 1500 is applied to the application scenario 2, for example, as shown in FIG. 17, the symbol generation unit 1550 may include a CP adding subunit, an upsampling subunit, and a filtering subunit. Optionally, in another embodiment of the apparatus 1500, the obtaining unit 1510 is configured to obtain a plurality of complex-valued symbols. The grouping unit 1520 is configured to divide the plurality of complex-valued symbols into a plurality of sets, where the plurality of sets include a first set and a second set that are the same, the first set corresponds to a first transmitted symbol, the second set corresponds to a second transmitted symbol, the first transmitted symbol and the second transmitted symbol are consecutive in time domain, and the first transmitted symbol is located before the second transmitted symbol.
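The chain of subunits used in the application scenario 1 (DFT, subcarrier mapping, IFFT, CP adding) can be sketched for one set as follows. The DFT size M, IFFT size N, CP length, and the localized subcarrier placement are illustrative assumptions, not values defined by this application:

```python
import numpy as np

M, N, K_cp = 12, 64, 16  # DFT size, IFFT size, CP length (illustrative)
rng = np.random.default_rng(3)
x = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # one set of complex-valued symbols

# DFT subunit: spread the set over M frequency domain values.
X = np.fft.fft(x) / np.sqrt(M)

# Subcarrier mapping subunit: localized mapping onto M of the N subcarriers
# (placement at the first M bins is an assumption for illustration).
grid = np.zeros(N, dtype=complex)
grid[:M] = X

# IFFT subunit: back to time domain at the system sampling rate.
time_symbol = np.fft.ifft(grid) * np.sqrt(N)

# CP adding subunit: prepend the last K_cp samples of the symbol.
tx_symbol = np.concatenate([time_symbol[-K_cp:], time_symbol])

assert tx_symbol.shape == (N + K_cp,)
assert np.allclose(tx_symbol[:K_cp], tx_symbol[-K_cp:])
```

Placing the shifting unit 1540 before the DFT subunit, between the DFT and IFFT subunits (as frequency domain weighting), or after the IFFT subunit then corresponds to the alternative execution occasions of step S840 described earlier.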
The shifting unit 1540 is configured to perform a cyclic shift on the first set and/or the second set, so that complex-valued symbols in a first subset in the first set are the same as complex-valued symbols in a second subset in the second set. An end location of the first subset corresponds to a first reference point of the first transmitted symbol, and an end location of the second subset corresponds to a second reference point of the second transmitted symbol. The first reference point represents an end location of a transmitted symbol, and the second reference point represents a location from which a CP is obtained through truncation in a transmitted symbol. Optionally, in this embodiment, both the first transmitted symbol and the second transmitted symbol are reference signals. For example, both the first transmitted symbol and the second transmitted symbol are DMRSs. Optionally, the obtaining unit 1510, the grouping unit 1520, and the copying unit 1530 (or the obtaining unit 1510, the grouping unit 1520, the copying unit 1530, the shifting unit 1540, and the symbol generation unit 1550) may be implemented by using software, may be implemented by using hardware, or may be implemented by using a combination of hardware and software. In addition, these units may be different chips, or may be integrated into one chip or integrated circuit. Optionally, in the foregoing embodiment, the obtaining unit 1510, the grouping unit 1520, the copying unit 1530, the shifting unit 1540, and the symbol generation unit 1550 each may be implemented by using a processor or a related circuit of a processor. As shown in FIG. 18, an embodiment of this application further provides a symbol processing apparatus 1800. The apparatus 1800 includes a processor 1810, a memory 1820, and a transceiver 1830. The memory 1820 stores a program.
The processor 1810 is configured to execute the program stored in the memory 1820. Execution of the program stored in the memory 1820 enables the processor 1810 to perform related processing steps in the foregoing method embodiments, and enables the processor 1810 to control the transceiver 1830 to perform receiving/sending related steps in the foregoing method embodiments. An embodiment of this application further provides a communication apparatus 1900. The communication apparatus 1900 may be a terminal device or a chip, and may be configured to perform the foregoing method embodiments. When the communication apparatus 1900 is a terminal device, FIG. 19 is a simplified schematic diagram of a structure of the terminal device. For ease of understanding and illustration, an example in which the terminal device is a mobile phone is used in FIG. 19. As shown in FIG. 19, the terminal device includes a processor, a memory, a radio frequency circuit, an antenna, and an input/output apparatus. The processor is mainly configured to: process a communication protocol and communication data, control the terminal device, execute a software program, process data of the software program, and the like. The memory is mainly configured to store the software program and the data. The radio frequency circuit is mainly configured to: perform conversion between a baseband signal and a radio frequency signal, and process the radio frequency signal. The antenna is mainly configured to receive and send a radio frequency signal in a form of an electromagnetic wave. The input/output apparatus, such as a touchscreen, a display, or a keyboard, is mainly configured to: receive data entered by a user and output data to the user. It should be noted that some types of terminal devices may have no input/output apparatus. When data needs to be sent, the processor performs baseband processing on the to-be-sent data, and outputs a baseband signal to the radio frequency circuit.
After performing radio frequency processing on the baseband signal, the radio frequency circuit sends a radio frequency signal to the outside in a form of an electromagnetic wave through the antenna. When data is sent to the terminal device, the radio frequency circuit receives the radio frequency signal through the antenna, converts the radio frequency signal into a baseband signal, and outputs the baseband signal to the processor. The processor converts the baseband signal into data, and processes the data. For ease of description, FIG. 19 shows only one memory and one processor. In an actual terminal device product, there may be one or more processors and one or more memories. The memory may also be referred to as a storage medium, a storage device, or the like. The memory may be disposed independent of the processor, or may be integrated with the processor. This is not limited in this embodiment of this application. In this embodiment of this application, the antenna and the radio frequency circuit that have receiving and sending functions may be considered as a transceiver unit of the terminal device, and the processor that has a processing function may be considered as a processing unit of the terminal device. As shown in FIG. 19, the terminal device includes a transceiver unit 1910 and a processing unit 1920. The transceiver unit 1910 may also be referred to as a transceiver, a transceiver machine, a transceiver apparatus, or the like. The processing unit 1920 may also be referred to as a processor, a processing board, a processing module, a processing apparatus, or the like. Optionally, a component that is in the transceiver unit 1910 and that is configured to implement a receiving function may be considered as a receiving unit, and a component that is in the transceiver unit 1910 and that is configured to implement a sending function may be considered as a sending unit. In other words, the transceiver unit 1910 includes the receiving unit and the sending unit.
The transceiver unit sometimes may also be referred to as a transceiver machine, a transceiver, a transceiver circuit, or the like. The receiving unit sometimes may also be referred to as a receiver machine, a receiver, a receiving circuit, or the like. The sending unit sometimes may also be referred to as a transmitter machine, a transmitter, a transmitter circuit, or the like. For example, in an implementation, the processing unit 1920 is configured to execute the foregoing method embodiments, and the transceiver unit 1910 is configured to perform related receiving/sending operations in the foregoing method embodiments. For example, the transceiver unit 1910 is configured to send a DFT-s-OFDM symbol or an SC-QAM symbol. It should be understood that FIG. 19 is merely an example instead of a limitation. The terminal device including the transceiver unit and the processing unit may not depend on the structure shown in FIG. 19. When the communication apparatus 1900 is a chip, the chip includes a transceiver unit and a processing unit. The transceiver unit may be an input/output circuit or a communication interface. The processing unit may be a processor, a microprocessor, or an integrated circuit integrated on the chip. An embodiment of this application further provides a communication device 2000. The communication device 2000 may be a network device or a chip, and may be configured to perform the foregoing method embodiments. When the communication device 2000 is a network device, for example, a base station, FIG. 20 is a simplified schematic diagram of a structure of the base station. The base station includes a part 2010 and a part 2020. The part 2010 is mainly configured to: receive and send a radio frequency signal, and perform conversion between a radio frequency signal and a baseband signal. The part 2020 is mainly configured to: perform baseband processing, control the base station, and the like.
The part 2010 may usually be referred to as a transceiver unit, a transceiver machine, a transceiver circuit, a transceiver, or the like. The part 2020 is usually a control center of the base station, may usually be referred to as a processing unit, and is configured to control the base station to perform a processing operation on the network device side in the foregoing method embodiments. The transceiver unit in the part 2010 may also be referred to as a transceiver machine, a transceiver, or the like. The transceiver unit includes an antenna and a radio frequency unit. The radio frequency unit is mainly configured to perform radio frequency processing. Optionally, a component that is in the part 2010 and that is configured to implement a receiving function may be considered as a receiving unit, and a component that is configured to implement a sending function may be considered as a sending unit. In other words, the part 2010 includes the receiving unit and the sending unit. The receiving unit may also be referred to as a receiver machine, a receiver, a receiver circuit, or the like. The sending unit may be referred to as a transmitter machine, a transmitter, a transmitter circuit, or the like. The part 2020 may include one or more boards, and each board may include one or more processors and one or more memories. The processor is configured to read and execute a program in the memory to implement a baseband processing function and control the base station. If there are a plurality of boards, the boards may be interconnected to enhance a processing capability. In an optional implementation, the plurality of boards may share one or more processors, or the plurality of boards may share one or more memories, or the plurality of boards may simultaneously use one or more processors. For example, in an implementation, the part 2020 is configured to execute the foregoing method embodiments.
The part 2010 is configured to perform related receiving/sending operations in the foregoing method embodiments. For example, the part 2010 is configured to send a DFT-s-OFDM symbol or an SC-QAM symbol. It should be understood that FIG. 20 is merely an example instead of a limitation. The network device including the transceiver unit and the processing unit may not depend on the structure shown in FIG. 20. When the communication device 2000 is a chip, the chip includes a transceiver unit and a processing unit. The transceiver unit may be an input/output circuit or a communication interface. The processing unit may be a processor, a microprocessor, or an integrated circuit integrated on the chip. The terminal device in the embodiments of this application includes a handheld device, a vehicle-mounted device, a wearable device, or a computing device that has a wireless communication function. Specifically, the terminal device may be user equipment (UE), an access terminal, a subscriber unit, a subscriber station, a mobile station, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent, or a user apparatus. For example, the terminal device may be a mobile phone, a tablet computer, or a computer that has a wireless transceiver function. The terminal device may alternatively be a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in telemedicine, a wireless terminal in a smart grid, a wireless terminal in a smart city, a wireless terminal in a smart home, or the like. The terminal device may be a terminal device in a 5G network, a terminal device in a future evolved public land mobile network (PLMN), or the like.
The network device in the embodiments of this application may be configured to communicate with one or more terminal devices, or may be configured to communicate with one or more base stations that have some terminal functions (for example, communication between a macro base station and a micro base station, such as an access point). The network device may be referred to as a base station. The base station may be in a plurality of forms, for example, a macro base station, a micro base station, a relay station, and an access point. For example, the network device in the embodiments of this application may be a base station in NR, a base transceiver station (BTS) in a global system for mobile communications (GSM) or code division multiple access (CDMA), a NodeB (NB) in a wideband code division multiple access (WCDMA) system, or an evolved NodeB (eNB or eNodeB) in an LTE system. The base station in 5G NR may also be referred to as a transmission reception point (TRP) or a next generation NodeB (gNB). An embodiment of this application further provides a computer-readable storage medium. The computer-readable storage medium stores a computer program. When the computer program is executed by a computer, the computer is enabled to implement the method in the foregoing method embodiments. An embodiment of this application further provides a computer program product including instructions. When the instructions are executed by a computer, the computer is enabled to implement the method in the foregoing method embodiments. For explanations and beneficial effects of related content of any of the communication apparatuses provided above, refer to the corresponding method embodiments provided above. Details are not described herein again. In the embodiments of this application, the terminal device or the network device includes a hardware layer, an operating system layer running above the hardware layer, and an application layer running above the operating system layer. 
The hardware layer includes hardware such as a central processing unit (CPU), a memory management unit (MMU), and a memory (which is also referred to as a main memory). The operating system may be any one or more computer operating systems that implement service processing through a process (process), for example, a Linux operating system, a Unix operating system, an Android operating system, an iOS operating system, or a Windows operating system. The application layer includes applications such as a browser, an address book, word processing software, and instant messaging software. In addition, a specific structure of an execution body of a method provided in the embodiments of this application is not specifically limited in the embodiments of this application, provided that a program that records code of the method provided in the embodiments of this application can be run to perform communication according to the method provided in the embodiments of this application. For example, the method provided in the embodiments of this application may be performed by the terminal device or the network device, or by a function module, in the terminal device or network device, that can invoke and execute the program. In addition, aspects or features of this application may be implemented as a method, an apparatus, or a product that uses standard programming and/or engineering technologies. The term “product” used in this application covers a computer program that can be accessed from any computer-readable component, carrier, or medium. For example, the computer-readable medium may include but is not limited to: a magnetic storage component (for example, a hard disk, a floppy disk, or a magnetic tape), an optical disc (for example, a compact disc (CD) or a digital versatile disc (DVD)), a smart card, and a flash memory component (for example, an erasable programmable read-only memory (EPROM), a card, a stick, or a key drive). 
In addition, various storage media described in this specification may represent one or more devices and/or other machine-readable media that are configured to store information. The term “machine-readable media” may include but is not limited to a radio channel and various other media that can store, include, and/or carry instructions and/or data. It should be understood that, the processor mentioned in the embodiments of this application may be a CPU, or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or a transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like. It may be further understood that the memory mentioned in the embodiments of this application may be a volatile memory or a nonvolatile memory, or may include a volatile memory and a nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an EPROM, an electrically erasable programmable read-only memory (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), used as an external cache. Through example but not limitative descriptions, many forms of RAMs may be used, for example, a static random access memory (SRAM), a dynamic random access memory (DRAM), a synchronous dynamic random access memory (SDRAM), a double data rate synchronous dynamic random access memory (DDR SDRAM), an enhanced synchronous dynamic random access memory (ESDRAM), a synchlink dynamic random access memory (SLDRAM), and a direct rambus random access memory (DR RAM). 
It should be noted that when the processor is a general-purpose processor, a DSP, an ASIC, an FPGA or another programmable logic device, a discrete gate or a transistor logic device, or a discrete hardware component, the memory (a storage module) is integrated into the processor. It should be noted that the memory described in this specification aims to include but is not limited to these memories and any memory of another proper type. A person of ordinary skill in the art may be aware that, in combination with the examples described in the embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on particular applications and design constraint conditions of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of this application. It may be clearly understood by a person skilled in the art that for the purpose of convenient and brief description, for a detailed working process of the foregoing system, apparatus, and unit, refer to a corresponding process in the foregoing method embodiments. Details are not described herein again. In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the apparatus embodiments described above are merely examples. For example, division into the units is merely logical function division, and may be other division during actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or may not be performed. 
In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electrical, mechanical, or other forms. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions of the embodiments. In addition, functional units in the embodiments of this application may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units are integrated into one unit. When the functions are implemented in a form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the conventional technology, or some of the technical solutions may be implemented in a form of a software product. The computer software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc. The foregoing descriptions are merely specific implementations of this application, but are not intended to limit the protection scope of this application. 
Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.
11943087
In one or more implementations, not all of the depicted components in each figure may be required, and one or more implementations may include additional components not shown in a figure. Variations in the arrangement and type of the components may be made without departing from the scope of the subject disclosure. Additional components, different components, or fewer components may be utilized within the scope of the subject disclosure. 
DETAILED DESCRIPTION 
The detailed description set forth below is intended as a description of various implementations and is not intended to represent the only implementations in which the subject technology may be practiced. As those skilled in the art would realize, the described implementations may be modified in various different ways, all without departing from the scope of the present disclosure. Accordingly, the drawings and description are to be regarded as illustrative in nature and not restrictive. Systems and methods are disclosed for facilitating uplink (UL) transmissions. One or more implementations of such systems and methods may facilitate random access for supporting (e.g., efficiently supporting) UL multi-user (MU) transmissions. The subject technology may be utilized in Institute of Electrical and Electronics Engineers (IEEE) systems, such as high efficiency (HE) WLAN. A station may be allocated a random access channel for a specific time duration. In some aspects, the random access channel may be allocated for stations that have data (e.g., queued data) to send in the uplink. Each station that has some data to send may participate in random access and select a resource for use in random access transmission. In an aspect, during the random access transmission process, collisions may occur between the random access transmissions from different stations. In such a case, resources (e.g., frequency resource, code resource) associated with (e.g., selected by, allocated to) the different stations may overlap. 
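The collision case described above can be sketched with a toy simulation. This is an illustrative model, not the standard's procedure: the helper name and the uniform random selection rule are assumptions made here for the example.

```python
import random

def contention_random_access(num_stas, num_rus, seed=1):
    # Each station (STA) independently selects one resource unit (RU) at
    # random; an RU selected by two or more STAs is a collision, so those
    # STAs' random access transmissions overlap.
    rng = random.Random(seed)
    picks = {sta: rng.randrange(num_rus) for sta in range(num_stas)}
    counts = {}
    for ru in picks.values():
        counts[ru] = counts.get(ru, 0) + 1
    collided = sorted(sta for sta, ru in picks.items() if counts[ru] > 1)
    return picks, collided

# With more contending STAs than RUs, at least one overlap is guaranteed
# (pigeonhole), so at least two STAs land in the collided set.
picks, collided = contention_random_access(num_stas=8, num_rus=4)
assert len(collided) >= 2
```

Shrinking the ratio of contending stations to resource units lowers, but does not eliminate, the collision probability; that motivates the deterministic variant discussed next.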
In another aspect, each station that may potentially participate in random access transmission may be associated with resources exclusive to (e.g., allocated only to) the station, such that collisions can be avoided. In such a case, random access may be referred to as deterministic random access. In some aspects, random access physical layer convergence procedure (PLCP) protocol data unit (PPDU) design and transmission methods may be provided to facilitate efficient random access resource utilization. In an aspect, the random access PPDU may be referred to as a random access signal. In some cases, such PPDU design and transmission methods may help reduce or avoid collision between the random access transmissions from different stations. In one or more implementations, an access point (AP) may transmit a trigger frame to facilitate UL transmission (e.g., UL MU transmission). For instance, the AP may utilize the trigger frame to schedule a UL MU transmission. The trigger frame may be utilized to solicit response frames from one or more stations. For simultaneous response frames, the one or more stations may transmit their response frames using UL MU transmission technology, such as UL MU OFDMA and/or UL MU-MIMO. A trigger frame may be a frame sent by an AP that seeks data, control, or management frame response(s) from stations that participate in a subsequent UL MU frame. The trigger frame may be utilized to initiate the simultaneous MU transmission in OFDMA. In an aspect, a trigger frame may include, for example, some or all of the following features: (a) a list of stations (STAs) that an AP seeks a response from; (b) resource allocation information for each STA (e.g., a subband(s) assigned to each STA); and/or (c) attributes of the expected UL MU frame, such as the duration, bandwidth, etc., among other features. 
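The trigger frame contents listed in (a)-(c) above can be sketched as a simple data structure. The class and field names are illustrative assumptions for this example; they do not reflect the actual over-the-air frame encoding.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class PerStaAllocation:
    # Feature (b): resource allocation information for one STA, e.g. the
    # subband(s)/resource unit(s) assigned to it, plus an optional code
    # index for code-frequency random access.
    sta_id: int
    resource_units: List[int]
    code_index: int = 0

@dataclass
class TriggerFrame:
    # Feature (a): the list of STAs the AP seeks a response from.
    sta_list: List[int]
    # Feature (b): per-STA resource allocation.
    allocations: Dict[int, PerStaAllocation]
    # Feature (c): attributes of the expected UL MU frame.
    ul_duration_us: int
    ul_bandwidth_mhz: int

tf = TriggerFrame(
    sta_list=[1, 2],
    allocations={
        1: PerStaAllocation(sta_id=1, resource_units=[0, 1]),
        2: PerStaAllocation(sta_id=2, resource_units=[2]),
    },
    ul_duration_us=500,
    ul_bandwidth_mhz=20,
)
```

In this sketch, STA 1 is assigned two resource units and STA 2 one, mirroring the idea that the AP may allocate different portions of the channel bandwidth to different stations.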
A trigger frame may be used to allocate resources for UL MU transmission and to solicit a UL MU transmission from the participating stations in response to the trigger frame. The trigger frame may include other information needed by the participating stations, and the UL MU transmission may occur at a predetermined time interval after the trigger frame. In an aspect, the resource allocation information may include frequency allocation information and/or code allocation information. In an aspect, a code-frequency resource may include one or more resource unit(s) (e.g., a frequency subband(s)) and code to be utilized for transmission. In an aspect, the trigger frame may be referred to as an uplink trigger frame, since the trigger frame may be utilized for facilitating (e.g., triggering) UL transmission. In an aspect, the trigger frame may be utilized to solicit a null data packet (NDP) from each STA. The term “resource” may refer to, for example, a bandwidth (e.g., a subband(s), frequencies, frequency band(s)), code, time/duration that the STAs expect to occupy a transmission medium, and/or possibly a number of spatial streams that the STAs may use. In one aspect, the AP may allocate different portions of a channel bandwidth to different stations. In one aspect, a portion of a channel bandwidth may be a resource unit. In another aspect, a portion of a channel bandwidth may be one or more resource units. In yet another aspect, a portion of a channel bandwidth may be one or more blocks of a channel bandwidth. In an aspect, the resource unit may be referred to as a frequency resource unit. FIG.1illustrates a schematic diagram of an example of a wireless communication network100. In the wireless communication network100, such as a wireless local area network (WLAN), a basic service set (BSS) includes a plurality of wireless communication devices (e.g., WLAN devices). 
In one aspect, a BSS refers to a set of STAs that can communicate in synchronization, rather than a concept indicating a particular area. In the example, the wireless communication network100includes wireless communication devices111-115, which may be referred to as stations (STAs). Each of the wireless communication devices111-115may include a media access control (MAC) layer and a physical (PHY) layer according to an IEEE 802.11 standard. In the example, at least one wireless communication device (e.g., device111) is an access point (AP). An AP may be referred to as an AP STA, an AP device, or a central station. The other wireless communication devices (e.g., devices112-115) may be non-AP STAs. Alternatively, all of the wireless communication devices111-115may be non-AP STAs in an ad-hoc networking environment. An AP STA and a non-AP STA may be collectively called STAs. However, for simplicity of description, in some aspects, only a non-AP STA may be referred to as a STA. An AP may be, for example, a centralized controller, a base station (BS), a node-B, a base transceiver system (BTS), a site controller, a network adapter, a network interface card (NIC), a router, or the like. A non-AP STA (e.g., a client device operable by a user) may be, for example, a device with wireless communication capability, a terminal, a wireless transmit/receive unit (WTRU), a user equipment (UE), a mobile station (MS), a mobile terminal, a mobile subscriber unit, a laptop, a non-mobile computing device (e.g., a desktop computer with wireless communication capability) or the like. In one or more aspects, a non-AP STA may act as an AP (e.g., a wireless hotspot). In one aspect, an AP is a functional entity for providing access to a distribution system, by way of a wireless medium, for an associated STA. For example, an AP may provide access to the internet for one or more STAs that are wirelessly and communicatively connected to the AP. 
InFIG.1, wireless communications between non-AP STAs are made by way of an AP. However, when a direct link is established between non-AP STAs, the STAs can communicate directly with each other (without using an AP). In one or more implementations, OFDMA-based 802.11 technologies are utilized, and for the sake of brevity, a STA refers to a non-AP high efficiency (HE) STA, and an AP refers to an HE AP. In one or more aspects, a STA may act as an AP. FIG.2illustrates a schematic diagram of an example of a wireless communication device. The wireless communication device200includes a baseband processor210, a radio frequency (RF) transceiver220, an antenna unit230, a memory240, an input interface unit250, an output interface unit260, and a bus270, or subsets and variations thereof. The wireless communication device200can be, or can be a part of, any of the wireless communication devices111-115. In the example, the baseband processor210performs baseband signal processing, and includes a medium access control (MAC) processor211and a PHY processor215. The memory240may store software (such as MAC software) including at least some functions of the MAC layer. The memory may further store an operating system and applications. In the illustration, the MAC processor211includes a MAC software processing unit212and a MAC hardware processing unit213. The MAC software processing unit212executes the MAC software to implement some functions of the MAC layer, and the MAC hardware processing unit213may implement remaining functions of the MAC layer as hardware (MAC hardware). However, the MAC processor211may vary in functionality depending on implementation. The PHY processor215includes a transmitting (TX) signal processing unit280and a receiving (RX) signal processing unit290. The term TX may refer to transmitting, transmit, transmitted, transmitter or the like. The term RX may refer to receiving, receive, received, receiver or the like. 
The PHY processor215interfaces to the MAC processor211through, among others, transmit vector (TXVECTOR) and receive vector (RXVECTOR) parameters. In one or more aspects, the MAC processor211generates and provides TXVECTOR parameters to the PHY processor215to supply per-packet transmit parameters. In one or more aspects, the PHY processor215generates and provides RXVECTOR parameters to the MAC processor211to inform the MAC processor211of the received packet parameters. In some aspects, the wireless communication device200includes a read-only memory (ROM) (not shown) or registers (not shown) that store instructions that are needed by one or more of the MAC processor211, the PHY processor215and/or other components of the wireless communication device200. In one or more implementations, the wireless communication device200includes a permanent storage device (not shown) configured as a read-and-write memory device. The permanent storage device may be a non-volatile memory unit that stores instructions even when the wireless communication device200is off. The ROM, registers and the permanent storage device may be part of the baseband processor210or be a part of the memory240. Each of the ROM, the permanent storage device, and the memory240may be an example of a memory or a computer-readable medium. A memory may be one or more memories. The memory240may be a read-and-write memory, a read-only memory, a volatile memory, a non-volatile memory, or a combination of some or all of the foregoing. The memory240may store instructions that one or more of the MAC processor211, the PHY processor215, and/or another component may need at runtime. The RF transceiver220includes an RF transmitter221and an RF receiver222. The input interface unit250receives information from a user, and the output interface unit260outputs information to the user. The antenna unit230includes one or more antennas. 
When multi-input multi-output (MIMO) or multi-user MIMO (MU-MIMO) is used, the antenna unit230may include more than one antenna. The bus270collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal components of the wireless communication device200. In one or more implementations, the bus270communicatively connects the baseband processor210with the memory240. From the memory240, the baseband processor210may retrieve instructions to execute and data to process in order to execute the processes of the subject disclosure. The baseband processor210can be a single processor, multiple processors, or a multi-core processor in different implementations. The baseband processor210, the memory240, the input interface unit250, and the output interface unit260may communicate with each other via the bus270. The bus270also connects to the input interface unit250and the output interface unit260. The input interface unit250enables a user to communicate information and select commands to the wireless communication device200. Input devices that may be used with the input interface unit250may include any acoustic, speech, visual, touch, tactile and/or sensory input device, e.g., a keyboard, a pointing device, a microphone, or a touchscreen. The output interface unit260may enable, for example, the display or output of videos, images, audio, and data generated by the wireless communication device200. Output devices that may be used with the output interface unit260may include any visual, auditory, tactile, and/or sensory output device, e.g., printers and display devices or any other device for outputting information. One or more implementations may include devices that function as both input and output devices, such as a touchscreen. One or more implementations can be realized in part or in whole using a computer-readable medium. In one aspect, a computer-readable medium includes one or more media. 
In one or more aspects, a computer-readable medium is a tangible computer-readable medium, a computer-readable storage medium, a non-transitory computer-readable medium, a machine-readable medium, a memory, or some combination of the foregoing (e.g., a tangible computer-readable storage medium, or a non-transitory machine-readable storage medium). In one aspect, a computer is a machine. In one aspect, a computer-implemented method is a machine-implemented method. A computer-readable medium may include storage integrated into a processor and/or storage external to a processor. A computer-readable medium may be a volatile, non-volatile, solid state, optical, magnetic, and/or other suitable storage device, e.g., RAM, ROM, PROM, EPROM, a flash, registers, a hard disk, a removable memory, or a remote storage device. In one aspect, a computer-readable medium comprises instructions stored therein. In one aspect, a computer-readable medium is encoded with instructions. In one aspect, instructions are executable by one or more processors (e.g.,210,211,212,213,215,280,290) to perform one or more operations or a method. Instructions may include, for example, programs, routines, subroutines, data, data structures, objects, sequences, commands, operations, modules, applications, and/or functions. Those skilled in the art would recognize how to implement the instructions. A processor (e.g.,210,211,212,213,215,280,290) may be coupled to one or more memories (e.g., one or more external memories such as the memory240, one or more memories internal to the processor, one or more registers internal or external to the processor, or one or more remote memories outside of the device200), for example, via one or more wired and/or wireless connections. The coupling may be direct or indirect. In one aspect, a processor includes one or more processors. A processor, including a processing circuitry capable of executing instructions, may read, write, or access a computer-readable medium. 
A processor may be, for example, an application specific integrated circuit (ASIC), a digital signal processor (DSP), or a field programmable gate array (FPGA). In one aspect, a processor (e.g.,210,211,212,213,215,280,290) is configured to cause one or more operations of the subject disclosure to occur. In one aspect, a processor is configured to cause an apparatus (e.g., a wireless communication device200) to perform operations or a method of the subject disclosure. In one or more implementations, a processor configuration involves having a processor coupled to one or more memories. A memory may be internal or external to the processor. Instructions may be in a form of software, hardware or a combination thereof. Software instructions (including data) may be stored in a memory. Hardware instructions may be part of the hardware circuitry components of a processor. When the instructions are executed or processed by one or more processors (e.g.,210,211,212,213,215,280,290), the one or more processors cause one or more operations of the subject disclosure to occur or cause an apparatus (e.g., a wireless communication device200) to perform operations or a method of the subject disclosure. FIG.3Aillustrates a schematic block diagram of an example of a transmitting signal processing unit280in a wireless communication device. The transmitting signal processing unit280of the PHY processor215includes an encoder281, an interleaver282, a mapper283, an inverse Fourier transformer (IFT)284, and a guard interval (GI) inserter285. The encoder281encodes input data. For example, the encoder281may be a forward error correction (FEC) encoder. The FEC encoder may include a binary convolutional code (BCC) encoder followed by a puncturing device, or may include a low-density parity-check (LDPC) encoder. The interleaver282interleaves the bits of each stream output from the encoder281to change the order of bits. 
In one aspect, interleaving may be applied only when BCC encoding is employed. The mapper283maps the sequence of bits output from the interleaver282into constellation points. When MIMO or MU-MIMO is employed, the transmitting signal processing unit280may use multiple instances of the interleaver282and multiple instances of the mapper283corresponding to the number of spatial streams (Nss). In the example, the transmitting signal processing unit280may further include a stream parser for dividing outputs of the BCC encoders or the LDPC encoder into blocks that are sent to different interleavers282or mappers283. The transmitting signal processing unit280may further include a space-time block code (STBC) encoder for spreading the constellation points from the number of spatial streams into a number of space-time streams (NsTs) and a spatial mapper for mapping the space-time streams to transmit chains. The spatial mapper may use direct mapping, spatial expansion, or beamforming depending on implementation. When MU-MIMO is employed, one or more of the blocks before reaching the spatial mapper may be provided for each user. The IFT284converts a block of the constellation points output from the mapper283or the spatial mapper into a time domain block (e.g., a symbol) by using an inverse discrete Fourier transform (IDFT) or an inverse fast Fourier transform (IFFT). If the STBC encoder and the spatial mapper are employed, the IFT284may be provided for each transmit chain. When MIMO or MU-MIMO is employed, the transmitting signal processing unit280may insert cyclic shift diversities (CSDs) to prevent unintentional beamforming. The CSD insertion may occur before or after the inverse Fourier transform operation. The CSD may be specified per transmit chain or may be specified per space-time stream. Alternatively, the CSD may be applied as a part of the spatial mapper. The GI inserter285prepends a GI to the symbol. 
The transmitting signal processing unit280may optionally perform windowing to smooth edges of each symbol after inserting the GI. The RF transmitter221converts the symbols into an RF signal and transmits the RF signal via the antenna unit230. When MIMO or MU-MIMO is employed, the GI inserter285and the RF transmitter221may be provided for each transmit chain. FIG.3Billustrates a schematic block diagram of an example of a receiving signal processing unit290in a wireless communication device. The receiving signal processing unit290of the PHY processor215includes a GI remover291, a Fourier transformer (FT)292, a demapper293, a deinterleaver294, and a decoder295. The RF receiver222receives an RF signal via the antenna unit230and converts the RF signal into one or more symbols. In some aspects, the GI remover291removes the GI from the symbol. When MIMO or MU-MIMO is employed, the RF receiver222and the GI remover291may be provided for each receive chain. The FT292converts the symbol (e.g., the time domain block) into a block of the constellation points by using a discrete Fourier transform (DFT) or a fast Fourier transform (FFT) depending on implementation. In one or more implementations, the FT292is provided for each receive chain. When MIMO or MU-MIMO is employed, the receiving signal processing unit290may further include a spatial demapper for converting the Fourier transformed receiver chains to constellation points of the space-time streams, and a STBC decoder (not shown) for despreading the constellation points from the space-time streams into the spatial streams. The demapper293demaps the constellation points output from the FT292or the STBC decoder to the bit streams. If the LDPC encoding is used, the demapper293may further perform LDPC tone demapping before the constellation demapping. The deinterleaver294deinterleaves the bits of each stream output from the demapper293. In one or more implementations, deinterleaving may be applied only when BCC decoding is used. 
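The transmit and receive processing described above (mapper, inverse Fourier transformer, GI inserter on the transmit side; GI remover, Fourier transformer, demapper on the receive side) can be sketched as a minimal single-stream round trip. This is an illustrative toy, assuming a QPSK mapper and naive O(N^2) transforms; FEC, interleaving, STBC, and spatial mapping are omitted.

```python
import cmath

def qpsk_map(bits):
    # Mapper: pairs of bits -> QPSK constellation points (illustrative Gray map).
    table = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 1): -1 - 1j, (1, 0): 1 - 1j}
    return [table[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def idft(freq):
    # Inverse Fourier transformer: constellation block -> time-domain symbol.
    n = len(freq)
    return [sum(freq[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def dft(time_samples):
    # Fourier transformer on the receive side: symbol -> constellation block.
    n = len(time_samples)
    return [sum(time_samples[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def add_gi(symbol, gi_len):
    # GI inserter: prepend a cyclic prefix (the last gi_len samples).
    return symbol[-gi_len:] + symbol

def remove_gi(symbol, gi_len):
    # GI remover: strip the cyclic prefix.
    return symbol[gi_len:]

def qpsk_demap(points):
    # Demapper: sign decisions invert the QPSK map above.
    bits = []
    for p in points:
        bits += [int(p.imag < 0), int(p.real < 0)]
    return bits

tx_bits = [0, 1, 1, 0, 1, 1, 0, 0]      # 8 bits -> 4 QPSK points -> 4-point IDFT
symbol = add_gi(idft(qpsk_map(tx_bits)), gi_len=1)
rx_bits = qpsk_demap(dft(remove_gi(symbol, gi_len=1)))
assert rx_bits == tx_bits               # the round trip recovers the bits
```

Over an ideal channel the sign decisions are exact despite floating-point rounding, so the transmitted bits are recovered; a real receiver would additionally equalize the channel before demapping.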
When MIMO or MU-MIMO is employed, the receiving signal processing unit290may use multiple instances of the demapper293and multiple instances of the deinterleaver294corresponding to the number of spatial streams. In the example, the receiving signal processing unit290may further include a stream deparser for combining the streams output from the deinterleavers294. The decoder295decodes the streams output from the deinterleaver294and/or the stream deparser. For example, the decoder295may be an FEC decoder. The FEC decoder may include a BCC decoder or an LDPC decoder. FIG.4illustrates an example of a timing diagram of interframe space (IFS) relationships. In this example, a data frame, a control frame, or a management frame can be exchanged between the wireless communication devices111-115and/or other WLAN devices. Referring to the timing diagram400, during the time interval402, access is deferred while the medium (e.g., a wireless communication channel) is busy until a type of IFS duration has elapsed. At time interval404, immediate access is granted when the medium is idle for a duration that is equal to or greater than a distributed coordination function IFS (DIFS)410duration or arbitration IFS (AIFS)414duration. In turn, a next frame406may be transmitted after a type of IFS duration and a contention window418have passed. During the time408, if a DIFS has elapsed since the medium has been idle, a designated slot time420is selected and one or more backoff slots422are decremented as long as the medium is idle. The data frame is used for transmission of data forwarded to a higher layer. In one or more implementations, a WLAN device transmits the data frame after performing backoff if DIFS410has elapsed from a time when the medium has been idle. The management frame is used for exchanging management information that is not forwarded to the higher layer. 
Subtype frames of the management frame include a beacon frame, an association request/response frame, a probe request/response frame, and an authentication request/response frame. The control frame is used for controlling access to the medium. Subtype frames of the control frame include a request to send (RTS) frame, a clear to send (CTS) frame, and an ACK frame. In the case that the control frame is not a response frame of the other frame (e.g., a previous frame), the WLAN device transmits the control frame after performing backoff if the DIFS410has elapsed. In the case that the control frame is the response frame of the other frame, the WLAN device transmits the control frame without performing backoff if a short IFS (SIFS)412has elapsed. For example, the SIFS may be 16 microseconds. The type and subtype of frame may be identified by a type field and a subtype field in a frame control field of the frame. On the other hand, a Quality of Service (QoS) STA may transmit the frame after performing backoff if AIFS414for access category (AC), e.g., AIFS[AC], has elapsed. In this case, the data frame, the management frame, or the control frame that is not the response frame may use the AIFS[AC]. In one or more implementations, a point coordination function (PCF) enabled AP STA transmits the frame after performing backoff if a PCF IFS (PIFS)416has elapsed. In this example, the PIFS416duration is less than the DIFS410but greater than the SIFS412. In some aspects, the PIFS416is determined by incrementing the SIFS412duration by a designated slot time420. FIG.5illustrates an example of a timing diagram of a carrier sense multiple access/collision avoidance (CSMA/CA) based frame transmission procedure for avoiding collision between frames in a channel. InFIG.5, anyone of the wireless communication devices111-115inFIG.1can be designated as one of STA1, STA2or STA3. 
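The IFS relationships above can be made concrete with numbers. The PIFS = SIFS + one slot time rule and the 16-microsecond SIFS come from the text; the DIFS = SIFS + two slot times relationship and the 9-microsecond slot are assumed here as common 802.11 OFDM-PHY values, not taken from the text.

```python
def ifs_durations(sifs_us=16, slot_us=9):
    # PIFS = SIFS + one designated slot time, as described above.
    pifs = sifs_us + slot_us
    # DIFS = SIFS + two slot times (an assumption; the usual 802.11 relation).
    difs = sifs_us + 2 * slot_us
    # The ordering stated in the text: SIFS < PIFS < DIFS.
    assert sifs_us < pifs < difs
    return {"SIFS": sifs_us, "PIFS": pifs, "DIFS": difs}

# With the example SIFS of 16 us and an assumed 9 us slot:
assert ifs_durations() == {"SIFS": 16, "PIFS": 25, "DIFS": 34}
```

A response frame (e.g., CTS or ACK) waits only a SIFS; a PCF-enabled AP can seize the medium after a PIFS, before contending stations whose DIFS has not yet elapsed.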
In this example, the wireless communication device111is designated as STA1, the wireless communication device112is designated as STA2, and the wireless communication device113is designated as STA3. While the timing of the wireless communication devices114and115is not shown inFIG.5, the timing of the devices114and115may be the same as that of STA2. In this example, STA1is a transmit WLAN device for transmitting data, STA2is a receive WLAN device for receiving the data, and STA3is a WLAN device that may be located at an area where a frame transmitted from the STA1and/or a frame transmitted from the STA2can be received by the STA3. The STA1may determine whether the channel (or medium) is busy by carrier sensing. The STA1may determine the channel occupation based on an energy level on the channel or correlation of signals in the channel. In one or more implementations, the STA1determines the channel occupation by using a network allocation vector (NAV) timer. When determining that the channel is not used by other devices during the DIFS410(e.g., the channel is idle), the STA1may transmit an RTS frame502to the STA2after performing backoff. Upon receiving the RTS frame502, the STA2may transmit a CTS frame506as a response to the RTS frame502after the SIFS412. When the STA3receives the RTS frame502, the STA3may set a NAV timer for a transmission duration representing the duration of subsequently transmitted frames by using duration information involved with the transmission of the RTS frame502(e.g., NAV(RTS)510). For example, the STA3may set the transmission duration expressed as the summation of a first instance of the SIFS412, the CTS frame506duration, a second instance of the SIFS412, a data frame504duration, a third instance of the SIFS412and an ACK frame508duration. Upon receiving a new frame (not shown) before the NAV timer expires, the STA3may update the NAV timer by using duration information included in the new frame.
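The NAV duration that STA3sets can be sketched as the summation described above; the individual frame durations passed in are hypothetical placeholders, not values from the description:

```python
def nav_duration_us(sifs_us: int, cts_us: int, data_us: int, ack_us: int) -> int:
    # NAV(RTS) covers the remainder of the exchange after the RTS frame:
    # SIFS + CTS + SIFS + DATA + SIFS + ACK.
    return 3 * sifs_us + cts_us + data_us + ack_us

# Hypothetical frame durations in microseconds, for illustration only.
nav = nav_duration_us(sifs_us=16, cts_us=44, data_us=300, ack_us=44)
print(nav)  # prints: 436
```

STA3 defers access for this many microseconds before sensing the channel again, which is why a device that never hears the CTS or data frames still avoids colliding with them.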
The STA3does not attempt to access the channel until the NAV timer expires. When the STA1receives the CTS frame506from the STA2, the STA1may transmit the data frame504to the STA2after the SIFS412elapses from a time when the CTS frame506has been completely received. Upon successfully receiving the data frame504, the STA2may transmit the ACK frame508after the SIFS412elapses as an acknowledgment of receiving the data frame504. When the NAV timer expires, the STA3may determine whether the channel is busy by the carrier sensing. Upon determining that the channel is not used by the other WLAN devices (e.g., STA1, STA2) during the DIFS410after the NAV timer has expired, the STA3may attempt the channel access after a contention window418has elapsed. In this example, the contention window418may be based on a random backoff. FIG.6illustrates an example of a high efficiency (HE) frame600. The HE frame600is a physical layer convergence procedure (PLCP) protocol data unit (or PPDU) format. An HE frame may be referred to as an OFDMA frame, a PPDU, a PPDU format, an OFDMA PPDU, an MU PPDU, another similar term, or vice versa. An HE frame may be simply referred to as a frame for convenience. A transmitting station (e.g., AP, non-AP station) may generate the HE frame600and transmit the HE frame600to a receiving station. The receiving station may receive, detect, and process the HE frame600. The HE frame600may include an L-STF field, an L-LTF field, an L-SIG field, an RL-SIG field, an HE-SIG-A field, an HE-SIG-B field, an HE-STF field, an HE-LTF field, and an HE-DATA field. The HE-SIG-A field may include NHESIGAsymbols, the HE-SIG-B field may include NHESIGBsymbols, the HE-LTF field may include NHELTFsymbols, and the HE-DATA field may include NDATAsymbols. In an aspect, the HE-DATA field may also be referred to as a payload field, data field, data, data signal, data portion, payload, PSDU, or Media Access Control (MAC) Protocol Data Units (MPDU) (e.g., MAC frame). 
In one or more implementations, an AP may transmit a frame for downlink (DL) using a frame format shown in this figure or a variation thereof (e.g., without any or some portions of an HE header). A STA may transmit a frame for uplink (UL) using a frame format shown in this figure or a variation thereof (e.g., without any or some portions of an HE header). The table below provides examples of characteristics associated with the various components of the HE frame600, giving for each element its definition, duration, DFT period, guard interval (GI), subcarrier spacing, and description:

L-STF: Non-high throughput (HT) Short Training Field; duration 8 us; subcarrier spacing equivalent to 1,250 KHz. The L-STF of a non-trigger-based PPDU has a periodicity of 0.8 us with 10 periods.

L-LTF: Non-HT Long Training Field; duration 8 us; DFT period 3.2 us; GI 1.6 us; subcarrier spacing 312.5 KHz.

L-SIG: Non-HT SIGNAL Field; duration 4 us; DFT period 3.2 us; GI 0.8 us; subcarrier spacing 312.5 KHz.

RL-SIG: Repeated Non-HT SIGNAL Field; duration 4 us; DFT period 3.2 us; GI 0.8 us; subcarrier spacing 312.5 KHz.

HE-SIG-A: HE SIGNAL A Field; duration NHESIGA*4 us; DFT period 3.2 us; GI 0.8 us; subcarrier spacing 312.5 KHz. HE-SIG-A is duplicated on each 20 MHz segment after the legacy preamble to indicate HE common control information. NHESIGA means the number of OFDM symbols of the HE-SIG-A field and is equal to 2 or 4.

HE-SIG-B: HE SIGNAL B Field; duration NHESIGB*4 us; DFT period 3.2 us; GI 0.8 us; subcarrier spacing 312.5 KHz. NHESIGB means the number of OFDM symbols of the HE-SIG-B field and is variable. A DL MU packet contains HE-SIG-B. Single user (SU) packets and UL trigger-based packets do not contain HE-SIG-B.

HE-STF: HE Short Training Field; duration 4 or 8 us; subcarrier spacing equivalent to 1,250 KHz for a non-trigger-based PPDU and equivalent to 625 KHz for a trigger-based PPDU. The HE-STF of a non-trigger-based PPDU has a periodicity of 0.8 us with 5 periods; a non-trigger-based PPDU is not sent in response to a trigger frame. The HE-STF of a trigger-based PPDU has a periodicity of 1.6 us with 5 periods; a trigger-based PPDU is a UL PPDU sent in response to a trigger frame.

HE-LTF: HE Long Training Field; duration NHELTF*(DFT period+GI) us; DFT period 6.4 us for 2xLTF and 12.8 us for 4xLTF; GI supports 0.8, 1.6, 3.2 us; subcarrier spacing equivalent to 156.25 KHz for 2xLTF and 78.125 KHz for 4xLTF. An HE PPDU may support a 2xLTF mode and a 4xLTF mode. In the 2xLTF mode, the HE-LTF symbol excluding GI is equivalent to modulating every other tone in an OFDM symbol of 12.8 us excluding GI, and then removing the second half of the OFDM symbol in the time domain. NHELTF means the number of HE-LTF symbols and is equal to 1, 2, 4, 6, or 8.

HE-DATA: HE DATA Field; duration NDATA*(DFT period+GI) us; DFT period 12.8 us; GI supports 0.8, 1.6, 3.2 us; subcarrier spacing 78.125 KHz. NDATA means the number of HE data symbols.

Referring toFIG.6, the HE frame600contains a header and a data field. The header includes a legacy header comprised of the legacy short training field (L-STF), the legacy long training field (L-LTF), and the legacy signal (L-SIG) field. These legacy fields contain symbols based on an early design of an IEEE 802.11 specification. Presence of these symbols may facilitate compatibility of new designs with the legacy designs and products. The legacy header may be referred to as a legacy preamble. In one or more aspects, the term header may be referred to as a preamble. In one or more implementations, the legacy STF, LTF, and SIG symbols are modulated/carried with FFT size of 64 on a 20 MHz sub-channel and are duplicated every 20 MHz if the frame has a channel bandwidth wider than 20 MHz (e.g., 40 MHz, 80 MHz, 160 MHz). Therefore, the legacy fields (i.e., the STF, LTF, and SIG fields) occupy the entire channel bandwidth of the frame. The L-STF field may be utilized for packet detection, automatic gain control (AGC), and coarse frequency-offset (FO) correction. In one aspect, the L-STF field does not utilize frequency domain processing (e.g., FFT processing) but rather utilizes time domain processing. The L-LTF field may be utilized for channel estimation, fine frequency-offset correction, and symbol timing.
In one or more aspects, the L-SIG field may contain information indicative of a data rate and a length (e.g., in bytes) associated with the HE frame600, which may be utilized by a receiver of the HE frame600to calculate a time duration of a transmission of the HE frame600. The header may also include an HE header comprised of an HE-SIG-A field and a HE-SIG-B field. The HE header may be referred to as a non-legacy header. These fields contain symbols that carry control information associated with each PLCP service data unit (PSDU) and/or radio frequency (RF), PHY, and MAC properties of a PPDU. In one aspect, the HE-SIG-A field can be carried/modulated using an FFT size of 64 on a 20 MHz basis. The HE-SIG-B field can be carried/modulated using an FFT size of, e.g., 64 or 256 on a 20 MHz basis depending on implementation. The HE-SIG-A and HE-SIG-B fields may occupy the entire channel bandwidth of the frame. In some aspects, the size of the HE-SIG-A field and/or the HE-SIG-B field is variable (e.g., can vary from frame to frame). In an aspect, the HE-SIG-B field is not always present in all frames. To facilitate decoding of the HE frame600by a receiver, the size of (e.g., number of symbols contained in) the HE-SIG-B field may be indicated in the HE-SIG-A field. In some aspects, the HE header also includes the repeated L-SIG (RL-SIG) field, whose content is the same as the L-SIG field. The HE header may further include HE-STF and HE-LTF fields, which contain symbols used to perform necessary RF and PHY processing for each PSDU and/or for the whole PPDU. The HE-LTF symbols may be modulated/carried with an FFT size of 256 for 20 MHz bandwidth and modulated over the entire bandwidth of the frame. Thus, the HE-LTF field may occupy the entire channel bandwidth of the frame. In one aspect, the HE-LTF field may occupy less than the entire channel bandwidth. In one aspect, the HE-LTF field may be transmitted using a code-frequency resource.
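The per-field durations of the HE frame600follow directly from the symbol counts and per-symbol durations given above; a minimal sketch, where the specific symbol counts and GI values passed in are illustrative choices:

```python
def he_sig_a_duration_us(n_hesiga: int) -> float:
    # HE-SIG-A occupies NHESIGA symbols of 4 us each; NHESIGA is 2 or 4.
    assert n_hesiga in (2, 4)
    return n_hesiga * 4.0

def he_ltf_duration_us(n_heltf: int, dft_us: float, gi_us: float) -> float:
    # HE-LTF duration is NHELTF * (DFT period + GI) us;
    # NHELTF is one of 1, 2, 4, 6, 8, and the DFT period is
    # 6.4 us in 2xLTF mode or 12.8 us in 4xLTF mode.
    assert n_heltf in (1, 2, 4, 6, 8)
    return n_heltf * (dft_us + gi_us)

def he_data_duration_us(n_data: int, gi_us: float) -> float:
    # HE-DATA duration is NDATA * (12.8 us DFT period + GI) us.
    return n_data * (12.8 + gi_us)

# Example: 2 HE-SIG-A symbols, four 2xLTF symbols with 0.8 us GI.
assert he_sig_a_duration_us(2) == 8.0
assert abs(he_ltf_duration_us(4, 6.4, 0.8) - 28.8) < 1e-9
```

A receiver that knows these counts (e.g., from HE-SIG-A) can locate each field boundary in time without decoding the payload.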
In one aspect, a HE-LTF sequence may be utilized by a receiver to estimate MIMO channel between the transmitter and the receiver. Channel estimation may be utilized to decode data transmitted and compensate for channel properties (e.g., effects, distortions). For example, when a preamble is transmitted through a wireless channel, various distortions may occur, and a training sequence in the HE-LTF field is useful to reverse the distortion. This may be referred to as equalization. To accomplish this, the amount of channel distortion is measured. This may be referred to as channel estimation. In one aspect, channel estimation is performed using an HE-LTF sequence, and the channel estimation may be applied to other fields that follow the HE-LTF sequence. The HE-STF symbols may have a fixed pattern and a fixed duration. For example, the HE-STF symbols may have a predetermined repeating pattern. In one aspect, the HE-STF symbols do not require FFT processing. The HE frame600may include the data field, represented as HE-DATA, that contains data symbols. The data field may also be referred to as a payload field, data, payload or PSDU. In one or more aspects, additional one or more HE-LTF fields may be included in the header. For example, an additional HE-LTF field may be located after a first HE-LTF field. In one or more implementations, a TX signal processing unit280(or an IFT284) illustrated inFIG.3Amay carry out the modulation described in this paragraph as well as the modulations described in other paragraphs above. In one or more implementations, an RX signal processing unit290(or an FT292) may perform demodulation for a receiver. In one or more implementations, random access PPDU format design and transmission methods are provided to facilitate efficient random access resource utilization. In some aspects, random access resources allocated per station by an AP may be expected to be small in capacity (e.g., a small frequency subband(s)).
In an aspect, utilization of smaller random access resources may allow a lower collision probability between random access transmission of different stations than a case in which larger random access resources are utilized. For instance, for a given channel bandwidth, the channel bandwidth can be divided into a large number of small random access resources rather than a small number of large random access resources. The large number of small random access resources may be allocated to the stations for random access transmission. In an aspect, the random access PPDU may be referred to as a random access signal. In one or more aspects, alternatively or in addition, a random access PPDU format may be utilized to reduce (e.g., further reduce) the collision probability and improve random access signal detection/decoding performance. In some aspects, the random access PPDU may include a first part and a second part. The first part may be identical between stations. The second part may be different between the stations. Each station may transmit the second part on a random access resource (e.g., allocated by the AP and/or selected by the station). In one aspect, the first part may be referred to as a common preamble portion and the second part may be referred to as a STA specific portion or user specific portion. In some aspects, the STA specific portion may contain more than one HE-LTF symbol even when the random access signal is limited to one spatial stream per station. In one aspect, the common preamble portion may be aligned and identical between random access transmissions (e.g., from different stations). In this aspect, the total number of HE-LTF symbols may also be the same (e.g., aligned and identical between the random access transmissions). However, in an aspect, a transmitted signal for HE-LTF may be different from that of other stations such that the HE-LTF between stations can be orthogonal to each other.
The AP may indicate (e.g., in a trigger frame) the number of HE-LTF symbols to be transmitted by the stations.

FIG.7illustrates an example of a random access PPDU700. The description fromFIG.6generally applies toFIG.7, with examples of differences betweenFIG.6andFIG.7and other description provided herein for purposes of clarity and simplicity. The vertical dimension represents the frequency dimension. In one aspect, a non-AP station utilizes the random access PPDU700. The random access PPDU700includes a common preamble portion and a STA specific portion. The common preamble portion may include a legacy preamble portion (e.g., L-STF, L-LTF, L-SIG) and an HE-specific preamble portion (e.g., RL-SIG, HE-SIG-A, HE-SIG-B). In some aspects, the various fields of the common preamble portion (e.g., legacy preamble portion, HE-specific preamble portion) may occupy an entire channel bandwidth of the random access PPDU700. In some aspects, the various fields of the STA specific portion (e.g., HE-STF, HE-LTF, HE-DATA) may occupy less than the entire channel bandwidth. For instance, the STA specific portion may occupy one or more resource units within the channel bandwidth (e.g., a frequency subband(s)) associated with (e.g., allocated to, selected by) the station that transmits the random access PPDU700. In one aspect, the random access PPDU700does not include HE-DATA (e.g., does not include any data symbols). In such an aspect, the random access PPDU700may be, or may be referred to as, a null data packet (NDP) or a non-data packet. In one aspect, the random access PPDU700may include an L-STF, L-LTF, L-SIG, RL-SIG, and HE-SIG-A field. In some aspects, the HE-LTF may span more spatial streams than the number of receive (Rx) antennas at the AP. As an example, N HE-LTF symbols may be utilized even if the random access PPDU700is transmitted using one spatial stream, where N>1. In other words, N may be larger than the total number of receive antennas at the AP.
As another example, the AP may only have two receive antennas, but request that stations use eight HE-LTF symbols (e.g., as if there are a total of eight transmit (Tx) antennas). In some cases, the station may select (e.g., randomly select, pseudorandomly select) a spatial stream to utilize for transmission. In contrast, typically, an AP with M (e.g., four) receive antennas may only be able to process up to M spatial streams and, therefore, would expect to receive an HE-LTF that spans M spatial streams. In one or more aspects, the stations may transmit the HE-LTF symbols using respective random access resource(s) associated with the stations. In an aspect, the random access resource may have two dimensions: one dimension in frequency domain and one dimension in code domain. In this aspect, the random access resource may include, or may be referred to as, a code-frequency resource(s).

FIG.8illustrates an example of resources that may be utilized for random access. The random access resources may be utilized for transmission of HE-LTF symbols in a random access channel (e.g., of channel bandwidth 20 MHz). The vertical dimension represents the code dimension and the horizontal dimension represents the frequency dimension. InFIG.8, in the frequency domain, the channel bandwidth is divided into nine resource units (e.g., frequency subbands). In an aspect, if eight HE-LTF orthogonal frequency division multiplexing (OFDM) symbols are transmitted, HE-LTF may have in total 72 random access resource candidates (e.g., 8 code spaces×9 resource units=72 resource candidates). Each random access resource candidate is represented as a rectangular block inFIG.8. In an aspect, each rectangular block inFIG.8is associated with a resource unit (RU) index and a spatial stream (SS) index. For instance, a block802is associated with a resource unit of resource unit index9(RU9) and spatial stream of spatial stream index2(SS2).
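The 72-candidate grid ofFIG.8can be enumerated directly from its two dimensions (9 resource units in frequency, 8 code spaces in code):

```python
from itertools import product

N_RU = 9   # frequency dimension: resource units in the 20 MHz channel
N_SS = 8   # code dimension: one code space per spatial stream index

# Each candidate is an (RU index, SS index) pair, e.g., block 802 is (RU9, SS2).
candidates = list(product(range(1, N_RU + 1), range(1, N_SS + 1)))

assert len(candidates) == N_RU * N_SS == 72
assert (9, 2) in candidates  # the block802 example
```

Dividing the channel into many such small candidates, rather than a few large ones, is what lowers the collision probability between stations selecting resources at random.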
Each station may select (e.g., randomly select, pseudorandomly select) one of the eight code spaces. In an aspect, a code space may be referred to simply as a code. In an aspect, the eight code spaces are orthogonal to each other. Such code spaces may traditionally be mapped to different spatial streams in downlink/uplink MIMO transmissions. In an aspect, correct decoding of a single stream transmission (e.g., rank 1 transmission) may be achieved using only one HE-LTF symbol in some cases. In an aspect, additional HE-LTF symbols may be utilized to facilitate detecting a presence of a data portion (e.g., HE-DATA field) in the same frequency resource unit and/or to improve decoding of the data portion when random access signals have collided in frequency. FIG.9illustrates an example of a mapping matrix PLTF. InFIG.9, the PLTFmatrix is an 8×8 matrix. In an aspect, the PLTFmatrix may be referred to as a PSXSmatrix. The column dimension of the PLTFmatrix may correspond to the number of HE-LTF symbols and/or time domain. The row dimension of the PLTFmatrix may correspond to the number of spatial streams. In this regard, each row of the PLTFmatrix may be associated with a respective code space. In an aspect, if eight HE-LTF OFDM symbols are used, the PLTFmatrix ofFIG.9may be utilized to modulate the HE-LTF OFDM symbols. For instance, if a station selects (e.g., randomly selects, pseudorandomly selects) the third code space, then the station may use the eight elements of the third row of the PLTFmatrix to modulate the eight HE-LTF symbols. In some aspects, to help ensure that the HE-LTF from different stations are orthogonal, the AP may indicate to the stations the number of HE-LTF symbols (or a dimension of a virtual antenna) in a trigger frame. In an aspect, the number of HE-LTF symbols can be larger than the number of receive antennas of the AP. 
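The row-selection idea can be sketched as follows. An 8×8 Sylvester Hadamard matrix is used here as a stand-in for the PLTFmatrix, since the entries of the actual mapping matrix are defined in the specification; the property relied on in the description, mutual orthogonality of the rows, holds for both:

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    # Sylvester construction of an n x n Hadamard matrix (n a power of two);
    # a stand-in for the PLTF matrix, with mutually orthogonal rows.
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

P = hadamard(8)

# Rows are orthogonal: P @ P.T equals 8 times the identity.
assert np.array_equal(P @ P.T, 8 * np.eye(8, dtype=int))

# A station that selects the third code space modulates its eight
# HE-LTF symbols with the eight elements of the third row.
ltf_symbol = 1.0            # placeholder frequency-domain HE-LTF value
tx_symbols = P[2] * ltf_symbol   # row index 2 == third code space
```

Because any two distinct rows sum to zero under elementwise multiplication, two stations that pick different rows remain separable even when their transmissions overlap in the same resource unit.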
The station may select (e.g., randomly select, pseudorandomly select) a code (e.g., a row of the PLTFmatrix) to apply to the transmission of the HE-LTF symbols. In an aspect, the station may select (e.g., randomly select, pseudorandomly select) a spatial stream among all possible spatial streams. In some aspects, a PPDU capture effect may occur when the AP is able to receive and successfully decode a frame, even though the frame has collided with another frame. For uplink random access, this is typically not possible since different stations transmit simultaneously and the HE-LTF of the different stations may be air-combined (e.g., radio frequency (RF)-combined). The air-combined HE-LTFs of the different stations may cause incorrect (e.g., completely wrong in some cases) channel estimation results. If two stations occupying the same resource unit (e.g., a frequency position(s), a frequency subband(s)) use different code to send the HE-LTF, the AP may be able to perform channel estimation correctly. In an aspect, whether respective data (e.g., payload, HE-DATA) of the two stations may be decoded correctly by the AP may be based on a modulation and coding scheme (MCS) that is utilized. For instance, in some cases, the HE-LTF sent using different code may be orthogonal (e.g., may be guaranteed to be orthogonal). In such cases, if the data is sent in a sufficiently low MCS, the AP may be able to decode the respective data of each station. Although the foregoing description makes reference to collisions involving transmissions from two stations, the collisions may also involve transmissions from more than two stations. In such a case, if these stations (e.g., more than two stations) occupy the same resource unit but use different code to send the HE-LTF, the AP may be able to perform channel estimation correctly. FIG.10illustrates an example of a comparison of two detection methods. 
InFIG.10, the detection methods include a detection method that utilizes code separation and a detection method that does not utilize code separation, respectively referred to as code separated detection method and non-code separated detection method. In the non-code separated detection method, STA1and STA2may each transmit (e.g., simultaneously transmit) a respective uplink frame that includes a respective HE-LTF1002and1006and respective data1004and1008. STA1and STA2may transmit the uplink frames in the same resource units (e.g., frequency subbands). The HE-LTFs1002and1006may be modulated using the same code and may be air-combined (e.g., RF-combined). Since the same code and same resource units are used in the transmission of the HE-LTFs1002and1006, channel estimation based on the HE-LTFs1002and1006may be incorrect (e.g., corrupted). In such a case where the channel estimation is incorrect, the AP is generally unable to decode the data1004and1008correctly. For instance, in this case, the data1004and1008may be almost impossible to decode correctly. In the code separated detection method, STA1and STA2may each transmit (e.g., simultaneously transmit) a respective uplink frame that includes a respective HE-LTF and the respective data1004and1008. STA1and STA2may transmit the uplink frames in the same resource units. In this regard, STA1and STA2may have selected (e.g., randomly selected) the same resource units. STA1and STA2may modulate their respective HE-LTF using different code. For instance, inFIG.10, STA1utilizes a second code space (e.g., associated with a second row of the PLTFmatrix) and STA2utilizes a seventh code space (e.g., associated with a seventh row of the PLTFmatrix). In an aspect, the eight code spaces are orthogonal to each other. In this aspect, the second code space (e.g., used by STA1) and the seventh code space (e.g., used by STA2) are orthogonal to each other. 
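The code separated case can be sketched with a simplified single-tone, noiseless model: the air-combined HE-LTF symbols of STA1 (second code space) and STA2 (seventh code space) are despread at the AP by correlating against each row. The Hadamard matrix again stands in for the PLTFmatrix, and the channel gains are hypothetical:

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    # Stand-in orthogonal code matrix (see the PLTF discussion above).
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

P = hadamard(8)
h1, h2 = 0.9 - 0.2j, 0.4 + 0.7j   # hypothetical channel gains of STA1, STA2

# STA1 uses the second code space (row 1), STA2 the seventh (row 6);
# the eight received HE-LTF symbols are the air-combined sum.
rx = h1 * P[1] + h2 * P[6]

# Despreading with each row recovers each station's channel estimate,
# because the rows are orthogonal.
est1 = np.vdot(P[1], rx) / 8
est2 = np.vdot(P[6], rx) / 8
assert np.isclose(est1, h1) and np.isclose(est2, h2)
```

Had both stations used the same row (the non-code separated case), the despread output would be h1 + h2, a corrupted estimate that serves neither station, which matches the decoding failure described above.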
In an aspect, STA1and STA2may select (e.g., randomly select, pseudorandomly select) the code space to utilize. When the HE-LTFs utilized by STA1and STA2are orthogonal, the AP may obtain correct channel estimation for STA1and STA2. In such a case, although the data1004and1008occupy the same resource unit and thus interfere with each other, the AP may be able to correctly decode the data1004and1008since the AP has the correct channel estimation for both the data1004and1008. For instance, the AP may be able to correctly decode the data1004and1008in cases where the MCS is lower. In one or more aspects, the AP may utilize the HE-LTF OFDM symbols to detect presence of data transmissions.FIG.11illustrates another example of a comparison between two detection methods. InFIG.11, the detection methods include a code separated detection method and a non-code separated detection method. In some aspects, the detection of the presence of a random access signal can be performed by detection of the HE-LTF signals in a particular code space and/or frequency domain (e.g., resource unit). Detection performance may be enhanced when multiple HE-LTF OFDM symbols are utilized, such as in the code separated detection method. In an aspect, similar to detection of L-STF and L-LTF, a structure (e.g., repetitive structure) of the HE-LTFs may allow improvement in detection of a signal (e.g., detection of presence of data). When the presence of data is detected by the AP, the AP may decode those resource unit(s) where HE-LTF has been detected (e.g., rather than decoding all resource units in which data may possibly be transmitted). In some cases, the use of a single HE-LTF symbol is not sufficiently reliable for detecting the presence of data in a certain resource (e.g., frequency resource unit). 
For instance, in the non-code separated detection method, the AP may have to decode all possible resource units (e.g., potential candidates) to determine the resource unit(s) within which data has been transmitted. In one or more implementations, random access may be utilized by stations to indicate existence of information to send in the uplink. In some aspects, the AP may transmit a trigger frame to request/solicit the stations to transmit such an indication. In some aspects, such as in response to the trigger frame, the stations may transmit a short random access PPDU to indicate existence of information to send in the uplink. In such aspects, the trigger frame may be referred to as a random access trigger frame or a random access trigger. In one aspect, the short random access PPDU may include a small payload (e.g., small HE-DATA). By way of non-limiting example, the payload may include an indication of whether the station has data to send in the uplink and/or a buffer status report of the station. In another aspect, the random access PPDU may be a non-data containing PPDU (e.g., NDP packet). For instance, instead of a station transmitting a short data packet (e.g., data packet with a small HE-DATA field), the station may transmit the common preamble portion and only up to and including the HE-LTF of the STA specific portion of the preamble. In other words, the station does not send any data (e.g., data field, HE-DATA). Use of the non-data containing PPDU may allow a reduction in overhead associated with signaling, to the AP, the existence of information to send in the uplink. In an aspect, the trigger frame may include an indication to the stations whether to send a short random access PPDU that includes a small payload or a short random access PPDU with no payload (e.g., non-data containing PPDU, NDP frame) in response to the trigger frame.
In an aspect, the station may transmit the short random access PPDU (e.g., non-data containing PPDU) to the AP when (e.g., only when) the station has data to send in the uplink. In an aspect, the AP may transmit a trigger frame to facilitate UL MU transmission based on the short random access PPDU(s) received (or not received) from the stations. In one or more aspects, the AP may assign a specific code-frequency resource to a specific station or group of stations. The specific code-frequency resource may be associated with one or more spatial stream indices and one or more resource unit indices. In these aspects, the spatial stream index or indices and resource unit index or indices may be used to identify the specific station or group of stations during random access. In this regard, the specific station or group of stations may utilize the specific code-frequency resource(s) that are assigned/allocated to indicate existence of data to be sent to the AP. The code-frequency resource may be utilized for transmission of HE-LTF symbols. In an aspect, the specific code-frequency resource may be referred to as a specific code-frequency LTF resource or specific code-frequency HE-LTF resource. For example, each code-frequency resource can be assigned to a specific association identification (AID) of a station. In one embodiment, the station may transmit a short random access PPDU to the AP using a code to indicate that the station has data to send in the uplink, and another code to indicate that the station does not have data to send in the uplink. In an aspect, this may require that the AP assign a set of codes, for example one code for positive acknowledgement of data to send and one code for negative acknowledgement of data to send, to each station or group of stations during a short random access.

FIG.12illustrates an example of an allocation of a code-frequency resource. InFIG.12, 72 code-frequency resources are shown.
Each code-frequency resource is associated with a resource unit index and a spatial stream index. In an aspect, the term index may be referred to as a number, such that each code-frequency resource is associated with a resource unit number and a spatial stream number. The AP may assign different stations (or different AIDs) to different frequency and/or code blocks. Each station (or each AID) may be mapped to a particular resource unit index and a spatial stream index. The mapping may change (including no mapping) depending on signaling (e.g., in a trigger frame). In some cases, each AID may be uniquely mapped to a RU index and SS index. InFIG.12, a station or group of stations associated with AID1may be assigned the code-frequency resource (RU9, SS2). In some cases, the mapping may not be unique, such that a code-frequency resource (RU #, SS #) may be shared by multiple AIDs. In such a case, the shared code-frequency resource(s) may be assigned in a manner such that collision probability is low. In some aspects, the detection of an HE-LTF in a specific code-frequency resource may indicate that a station has data to send to the AP. Different mapping of station (or group of stations) to a code-frequency resource can be possible. In an aspect, the different mapping, denoted as different random access types, may be indicated in a trigger frame transmitted by the AP. In an aspect, the random access type may be referred to as a random access group or random access sequence number. The random access type may be utilized to aid stations in identifying and differentiating between different random access opportunities. Each station may be associated with a random access type, a code, and a resource unit. A station, upon reception of the trigger frame, may identify the mapping between code-frequency resource and associated AID (or group of AIDs) and transmit random access signal according to the identified mapping.
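One possible AID-to-resource mapping of the kind described can be sketched as below. The round-robin rule is hypothetical and chosen only to show the unique and shared cases; the actual mapping (including no mapping) would be conveyed by trigger-frame signaling:

```python
N_RU = 9   # resource unit indices 1..9
N_SS = 8   # spatial stream (code space) indices 1..8

def aid_to_code_freq_resource(aid: int) -> tuple:
    """Map an AID onto one of the 72 (RU index, SS index) blocks.
    Hypothetical round-robin rule for illustration; the real mapping
    depends on the random access type signaled in the trigger frame."""
    slot = (aid - 1) % (N_RU * N_SS)
    ru = slot % N_RU + 1
    ss = slot // N_RU + 1
    return ru, ss

# AIDs 1..72 receive unique blocks (the uniquely mapped case);
# AID 73 reuses AID 1's block (the shared, low-collision case).
assert len({aid_to_code_freq_resource(a) for a in range(1, 73)}) == 72
assert aid_to_code_freq_resource(73) == aid_to_code_freq_resource(1)
```

Detecting an HE-LTF in a given block then identifies the AID (or, in the shared case, the small group of AIDs) that has data to send.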
FIG.13illustrates a schematic diagram of an example of an exchange of frames among wireless communication devices for communication in a wireless network for UL MU transmission. The AP may transmit a trigger frame1302. The trigger frame1302may include an indication of a random access type. InFIG.13, the trigger frame1302includes an indication of a random access type 1, which includes a mapping of resources allocated to the group of stations formed of STA1, STA2, STA3, . . . , STA200. The trigger frame1302may include code-frequency resource allocation for each of the stations in the group of stations associated with the random access type 1. For instance, STA1, STA2, STA3, and STA200may be allocated (RU1, SS1), (RU2, SS1), (RU3, SS1), and (RU30, SS6), respectively. STA1and STA200may process the trigger frame1302received from the AP and may transmit a random access PPDU1304and1306, respectively, in their allocated code-frequency resource. In an aspect, STA1and STA200may transmit the random access PPDU1304and1306, respectively, as an indication that they have data to send in the uplink. In contrast, in this aspect, although STA2and STA3are allocated a code-frequency resource, STA2and STA3do not send a random access PPDU since they do not have any data to send in the uplink. The AP may transmit a trigger frame1308that includes an indication of a random access type 6, which includes a mapping of resources allocated to STA2007. For instance, STA2007may be allocated (RU28, SS4). In response to the trigger frame1308, STA2007may transmit a random access PPDU1310(e.g., as an indication that STA2007has data to send in the uplink). In some cases, the random access PPDUs1304,1306, and1310may be non-data packets (e.g.,1304,1306,1310have no data field). In these cases, an NDP is used for STA1and STA200's random access in response to the trigger frame1302, and an NDP is used for STA2007's random access in response to the trigger frame1308. 
In an aspect, the same code-frequency resource may be allocated to stations associated with different random access types. As an example different from that shown inFIG.13, the code-frequency resource (e.g., RU1, SS1) that is allocated to one station (e.g., STA1) associated with the random access type 1 may also be allocated to a station (e.g., STA2007) of another random access type. It is noted that the ellipses between the STA3and STA200may indicate that one or more additional stations or no stations are present between the STA3and STA200. Similarly, it is noted that the ellipses between the STA200and STA2007may indicate that one or more additional stations or no stations are present between the STA200and STA2007. It is noted that the ellipses between the random access PPDUs1304and1306and the trigger frame1308may indicate that one or more additional frame exchanges or no frame exchanges are present between the random access PPDUs1304and1306and the trigger frame1308. For instance, the AP may transmit trigger frames to stations associated with a random access type different from random access types 1 and 6. In some cases, the code-frequency resources of the random access PPDU can be allocated to different access categories (ACs). For example, if STA1has data in its AC voice (AC-VO) queue and STA2has data in its AC best effort (AC-BE) queue, STA1may send a random access signal (e.g., random access PPDU) using a set of code-frequency resources allocated to AC-VO and STA2may send a random access signal using a set of code-frequency resources allocated to AC-BE. In an aspect, the allocated code-frequency resources may not need to be singular. In such a case, the station may select (e.g., pseudorandomly select) one of the code-frequency resources for random access transmission. 
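The FIG.13exchange can be mirrored in a small sketch (station attributes taken from the figure description above; the helper name is hypothetical): a station transmits a random access PPDU only if it matches the triggered random access type and has uplink data queued:

```python
# Stations, allocations, and data状态 per the FIG. 13 description.
stations = {
    "STA1":    {"type": 1, "alloc": ("RU1", "SS1"),  "has_data": True},
    "STA2":    {"type": 1, "alloc": ("RU2", "SS1"),  "has_data": False},
    "STA3":    {"type": 1, "alloc": ("RU3", "SS1"),  "has_data": False},
    "STA200":  {"type": 1, "alloc": ("RU30", "SS6"), "has_data": True},
    "STA2007": {"type": 6, "alloc": ("RU28", "SS4"), "has_data": True},
}

def respond_to_trigger(ra_type):
    """Return (station, resource) pairs that send a random access PPDU
    in response to a trigger frame of the given random access type."""
    return {(name, st["alloc"]) for name, st in stations.items()
            if st["type"] == ra_type and st["has_data"]}

# Trigger for type 1: only STA1 and STA200 respond (STA2/STA3 have no data).
assert respond_to_trigger(1) == {("STA1", ("RU1", "SS1")),
                                 ("STA200", ("RU30", "SS6"))}
# Trigger for type 6: STA2007 responds.
assert respond_to_trigger(6) == {("STA2007", ("RU28", "SS4"))}
```

The responses here stand in for the non-data random access PPDUs 1304, 1306, and 1310.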
In some cases, the random access resources assigned to one or more stations may be dependent on a lowest backoff timer value of an uplink traffic enhanced distributed channel access function (EDCAF). For example, stations with the lowest backoff timer of 1 may use a set of random access resources to transmit (e.g., potentially transmit) a random access signal and stations with a lowest backoff timer of 4 may use a different set of random access resources to transmit (e.g., potentially transmit) a random access signal. In such a case, the random access resources assigned to a lower backoff timer value may be prioritized and assigned more resources compared to random access resources available for higher backoff timer values. In an aspect, the random access signal may contain information on the backoff timer of each non-empty AC queue. In some aspects, a non-data carrying random access signal (e.g., the non-data carrying random access signal described above) may allow multiplexing a large number of resources in a single random access PPDU opportunity. In one aspect, a drawback, however, can be a lack of information conveyed to the AP. To overcome this drawback, a random access procedure can be performed in a two-step approach, as described below. In a first step, the AP may transmit a trigger frame to solicit a random access PPDU (e.g., short random access PPDU) from one or more stations. In a second step, the AP may transmit a trigger frame to solicit UL MU PPDUs containing data (e.g., HE-DATA) from the station(s) that transmitted a random access PPDU(s) in the first step. In an aspect, the resources allocated in the first step may be smaller than the resources allocated in the second step. For instance, smaller and/or fewer frequency subbands may be allocated to each station during the first step. 
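A minimal sketch of the backoff-timer-based partitioning described above (the pool sizes, weights, and helper names are assumptions for illustration only): lower backoff-timer values receive a larger, prioritized share of the random access resources, and a station picks pseudorandomly within its pool:

```python
import random

def build_pools(resources, timer_values, weights):
    """Split `resources` among backoff-timer values; a larger weight
    (given to lower timer values) yields a larger, prioritized share."""
    total = sum(weights)
    pools, start = {}, 0
    for timer, w in zip(timer_values, weights):
        count = len(resources) * w // total
        pools[timer] = resources[start:start + count]
        start += count
    return pools

# Hypothetical numbers: 12 resources split between timer values 1 and 4.
pools = build_pools(resources=list(range(12)), timer_values=[1, 4],
                    weights=[2, 1])
assert len(pools[1]) > len(pools[4])   # lower timer value is prioritized
choice = random.choice(pools[1])       # a timer-1 station picks in its pool
assert choice in pools[1]
```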
FIG.14illustrates a schematic diagram of an example of exchanges of frames among wireless communication devices for UL MU transmission, where the two-step approach is utilized. The AP may transmit a trigger frame1402. In an aspect, the trigger frame1402may be referred to as a random access trigger frame or a random access trigger. In an aspect, the trigger frame1402may be utilized for soliciting random access signals (e.g., non-data carrying random access signals). The random access signals may be non-data carrying random access signals (e.g., no HE-DATA field). In an aspect, the AP may allocate the resources utilized by each station for random access signal transmission and indicate the allocated resources in the trigger frame1402. For instance, the AP may allocate resources to STA1, STA2, and STA3. In some cases, the resources (e.g., frequency resource unit(s), code(s)) that each station can use for random access signal transmission may be unique (e.g., orthogonal), such that each station is exclusively allocated its own respective resources. In some cases, the resources that each station may use are shared between stations. In such cases, the total resources (e.g., potential resources to be allocated) are sufficiently large so as to be statistically sufficient to achieve a low probability of collision (e.g., to facilitate no two stations using the same resources with high probability). STA1and STA3may transmit random access PPDU1404and1406, respectively. STA1may transmit the random access PPDU1404using a random access resource RA9allocated to STA1(e.g., in the trigger frame1402) for random access transmission. STA3may transmit the random access PPDU1406using a random access resource RA4. Each of RA9and RA4may be associated with one or more resource unit indices and one or more spatial stream indices. In an aspect, RA9and RA4are used for transmitting a STA-specific portion of the random access PPDU1404and1406, respectively. 
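The "statistically sufficient" sizing of a shared pool can be quantified with a simple birthday-problem estimate (a modeling assumption, not a formula from the disclosure): if s stations each pick uniformly at random from R shared resources, the probability that no two collide is the product of (1 − i/R):

```python
def p_no_collision(R, s):
    """Probability that s stations, each picking uniformly at random
    from R shared resources, all pick distinct resources."""
    p = 1.0
    for i in range(s):
        p *= (R - i) / R
    return p

assert p_no_collision(256, 4) > 0.97   # large pool: collisions unlikely
assert p_no_collision(8, 4) < 0.5      # small pool: collisions likely
```

This is why a shared allocation needs a pool much larger than the expected number of simultaneous responders.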
In an aspect, STA1and STA3may transmit the random access PPDU1404and1406, respectively, to indicate that they have data to send in the uplink. In contrast, in this aspect, STA2does not send a random access PPDU since STA2does not have any data to send in the uplink. Once the AP receives the random access PPDUs1404and1406, the AP may transmit a second trigger frame1408to solicit uplink data from STA1and STA3. The trigger frame1408may include resource allocation information (e.g., resource unit, spatial code) to be utilized for data transmission by STA1and STA3. In response to the trigger frame1408, STA1and STA3may transmit a PPDU1410and1412that contains data. The PPDUs1410and1412may be transmitted based on respective resources allocated to STA1and STA3. The AP may transmit an acknowledgement frame1414(e.g., multi-user (MU) acknowledgement frame) upon receipt of the PPDUs1410and1412. In some aspects, the resources allocated for data transmission may be larger (e.g., larger resource unit(s)) than the resources allocated for random access transmission. In some cases, the AP may allocate resources (e.g., indicated using the trigger frame1402) for a larger number of stations for the random access transmission. The AP may then allocate resources (e.g., indicated using the second trigger frame1408) to a smaller number of stations for the data transmission. For instance, the AP may allocate resources for data transmission for those stations that indicated they have data to send (e.g., by sending PPDU1404,1406). FIG.15illustrates a schematic diagram of another example of exchanges of frames among wireless communication devices for UL MU transmission, where the two-step approach is utilized. The AP may transmit a trigger frame1502to station(s) associated with a random access type A. The trigger frame1502may allocate resources to the station(s) associated with the random access type A. InFIG.15, the station(s) associated with the random access type A may include STA1and STA3. 
In response to the trigger frame1502, STA1and STA3may transmit a random access PPDU1504and1506, respectively. STA1may transmit the random access PPDU1504using a random access resource RA9allocated to STA1(e.g., in the trigger frame1502) for random access transmission. STA3may transmit the random access PPDU1506using a random access resource RA4. The random access PPDU1504may include a common preamble portion1504A and a STA-specific portion1504B. The random access PPDU1506may include a common preamble portion1506A and a STA-specific portion1506B. The common preamble portions1504A and1506A may occupy the entire channel bandwidth of the random access PPDUs1504and1506, respectively. The STA-specific portions1504B and1506B may be transmitted using RA9and RA4, respectively. The AP may transmit a trigger frame1508to station(s) associated with a random access type B. The trigger frame1508may allocate resources to the station(s) associated with the random access type B. InFIG.15, the station(s) associated with the random access type B may include STA2. In response to the trigger frame1508, STA2may transmit a random access PPDU1510that includes a common preamble portion1510A and a STA-specific portion1510B. The STA-specific portion1510B may be transmitted using a random access resource RA3. In an aspect, the random access PPDUs1504,1506, and1510may be non-data carrying random access PPDUs (e.g., no data field, no HE-DATA field). The AP may transmit a trigger frame1512to solicit uplink data from STA1, STA2, and STA3. The trigger frame1512may include resource allocation information (e.g., resource unit, spatial code) to be utilized for data transmission by STA1, STA2, and STA3. In response to the trigger frame1512, STA1, STA2, and STA3may transmit a PPDU1514,1516, and1518, respectively, that contains data. The PPDUs1514,1516, and1518may be transmitted based on respective resources allocated to STA1, STA2, and STA3by the trigger frame1512. 
The AP may transmit an acknowledgement frame1520(e.g., multi-user (MU) acknowledgement frame) upon receipt of the PPDUs1514,1516, and1518. AlthoughFIG.15illustrates an example in which the AP transmits trigger frames (e.g.,1502,1508) to solicit random access PPDUs (e.g.,1504,1506,1510) from stations associated with two different random access types (e.g., type A, type B), the AP may transmit trigger frames to solicit random access PPDUs from more, fewer, and/or different random access types than those shown inFIG.15. In an aspect, the AP may transmit trigger frames (e.g.,1512) to solicit uplink data from stations of different random access types (e.g., STA1and STA3of random access type A, STA2of random access type B). The horizontal dimension inFIGS.10,11,13,14, and15represents the time dimension. In some aspects, a time interval between any two frames inFIGS.13through15may be an SIFS, PIFS, or any other time interval. Referring toFIGS.6,7,13,14, and15, in one or more implementations, a trigger frame (e.g.,1302,1308,1402,1408,1502,1508,1512) may include all or some of the fields of an HE frame600, and an uplink random access frame (e.g.,1304,1306,1310,1404,1406,1504,1506,1510) may include all or some of the fields of a random access PPDU700, excluding the HE-DATA field. In one or more implementations, an uplink data frame (e.g.,1410,1412,1514,1516,1518) may include the HE-DATA field of a random access PPDU700and some or all of the other fields of the random access PPDU700. It should be noted that like reference numerals may designate like elements. These components with the same reference numerals have certain characteristics that are the same, but as different figures illustrate different examples, the same reference numeral does not indicate that a component with the same reference numeral has the exact same characteristics. 
While the same reference numerals are used for certain components, examples of differences with respect to a component are described throughout this disclosure. The embodiments provided herein have been described with reference to a wireless LAN system; however, it should be understood that these solutions are also applicable to other network environments, such as cellular telecommunication networks, wired networks, etc. An embodiment of the present disclosure may be an article of manufacture in which a non-transitory machine-readable medium (such as microelectronic memory) has stored thereon instructions which program one or more data processing components (generically referred to here as a “processor” or “processing unit”) to perform the operations described herein. In other embodiments, some of these operations may be performed by specific hardware components that contain hardwired logic (e.g., dedicated digital filter blocks and state machines). Those operations may alternatively be performed by any combination of programmed data processing components and fixed hardwired circuit components. In some cases, an embodiment of the present disclosure may be an apparatus (e.g., an AP STA, a non-AP STA, or another network or computing device) that includes one or more hardware and software logic structures for performing one or more of the operations described herein. For example, as described above, the apparatus may include a memory unit, which stores instructions that may be executed by a hardware processor installed in the apparatus. The apparatus may also include one or more other hardware or software elements, including a network interface, a display device, etc. FIGS.16A and16Billustrate flow charts of examples of methods for facilitating wireless communication. 
For explanatory and illustration purposes, the example processes1610and1620may be performed by the wireless communication devices111-115ofFIG.1and their components such as a baseband processor210, a MAC processor211, a MAC software processing unit212, a MAC hardware processing unit213, a PHY processor215, a transmitting signal processing unit280and/or a receiving signal processing unit290; however, the example processes1610and1620are not limited to the wireless communication devices111-115ofFIG.1or their components, and the example processes1610and1620may be performed by some of the devices shown inFIG.1, or other devices or components. Further for explanatory and illustration purposes, the blocks of the example processes1610and1620are described herein as occurring serially or linearly. However, multiple blocks of the example processes1610and1620may occur in parallel. In addition, the blocks of the example processes1610and1620need not be performed in the order shown and/or one or more of the blocks/actions of the example processes1610and1620need not be performed. Various examples of aspects of the disclosure are described below as clauses for convenience. These are provided as examples, and do not limit the subject technology. As an example, some of the clauses described below are illustrated inFIGS.16A and16B. Clause A. 
A station for facilitating communication in a wireless network for multi-user transmission, the station comprising: one or more memories; and one or more processors coupled to the one or more memories, the one or more processors configured to cause: processing a first trigger frame received from an access point, wherein the first trigger frame schedules a first uplink multi-user transmission and indicates a plurality of resources for indicating existence of data to be sent to the access point; and transmitting, during the first uplink multi-user transmission to the access point and in response to the first trigger frame, a first uplink frame that has no data field and that has a signal indicating existence of data to be sent from the station to the access point, wherein the transmitting comprises transmitting the signal on a first resource of the plurality of resources. Clause B. An access point for facilitating communication in a wireless network for multi-user transmission, the access point comprising: one or more memories; and one or more processors coupled to the one or more memories, the one or more processors configured to cause: transmitting a first trigger frame to one or more stations, to schedule an uplink multi-user transmission with the one or more stations, wherein the first trigger frame indicates a plurality of resources for indicating existence of data to be sent to the access point; and receiving, during the uplink multi-user transmission, a first uplink frame from a first station of the one or more stations, wherein the first uplink frame has no data field and has a signal indicating existence of data to be sent from the first station to the access point, and wherein the receiving comprises receiving the signal on a first resource of the plurality of resources. Clause C. 
A computer-implemented method of facilitating communication in a wireless network for multi-user transmission, the method comprising: processing a first trigger frame received from an access point, wherein the first trigger frame is for scheduling a first uplink multi-user transmission and indicates a plurality of resources for indicating existence of data to be sent to the access point; and transmitting, during the first uplink multi-user transmission to the access point and in response to the first trigger frame, a first uplink frame that has no data field and that has a signal indicating existence of data to be sent from a station to the access point, wherein the signal is sent on a first resource of the plurality of resources. In one or more aspects, additional clauses are described below. A method comprising one or more methods or operations described herein. An apparatus or a station comprising one or more memories (e.g., 240, one or more internal, external or remote memories, or one or more registers) and one or more processors (e.g., 210) coupled to the one or more memories, the one or more processors configured to cause the apparatus to perform one or more methods or operations described herein. An apparatus or a station comprising one or more memories (e.g., 240, one or more internal, external or remote memories, or one or more registers) and one or more processors (e.g., 210 or one or more portions), wherein the one or more memories store instructions that, when executed by the one or more processors, cause the one or more processors to perform one or more methods or operations described herein. An apparatus or a station comprising means (e.g., 210) adapted for performing one or more methods or operations described herein. 
A computer-readable storage medium (e.g., 240, one or more internal, external or remote memories, or one or more registers) comprising instructions stored therein, the instructions comprising code for performing one or more methods or operations described herein. A computer-readable storage medium (e.g., 240, one or more internal, external or remote memories, or one or more registers) storing instructions that, when executed by one or more processors (e.g., 210 or one or more portions), cause the one or more processors to perform one or more methods or operations described herein. In one aspect, a method may be an operation, an instruction, or a function and vice versa. In one aspect, a clause may be amended to include some or all of the words (e.g., instructions, operations, functions, or components) recited in other one or more clauses, one or more sentences, one or more phrases, one or more paragraphs, and/or one or more claims. To illustrate the interchangeability of hardware and software, items such as the various illustrative blocks, modules, components, methods, operations, instructions, and algorithms have been described generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application. A reference to an element in the singular is not intended to mean one and only one unless specifically so stated, but rather one or more. For example, “a” module may refer to one or more modules. An element preceded by “a,” “an,” “the,” or “said” does not, without further constraints, preclude the existence of additional same elements. Headings and subheadings, if any, are used for convenience only and do not limit the invention. The word exemplary is used to mean serving as an example or illustration. 
To the extent that the term include, have, or the like is used, such term is intended to be inclusive in a manner similar to the term comprise as comprise is interpreted when employed as a transitional word in a claim. Relational terms such as first and second and the like may be used to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. Phrases such as an aspect, the aspect, another aspect, some aspects, one or more aspects, an implementation, the implementation, another implementation, some implementations, one or more implementations, an embodiment, the embodiment, another embodiment, some embodiments, one or more embodiments, a configuration, the configuration, another configuration, some configurations, one or more configurations, the subject technology, the disclosure, the present disclosure, other variations thereof and the like are for convenience and do not imply that a disclosure relating to such phrase(s) is essential to the subject technology or that such disclosure applies to all configurations of the subject technology. A disclosure relating to such phrase(s) may apply to all configurations, or one or more configurations. A disclosure relating to such phrase(s) may provide one or more examples. A phrase such as an aspect or some aspects may refer to one or more aspects and vice versa, and this applies similarly to other foregoing phrases. A phrase “at least one of” preceding a series of items, with the terms “and” or “or” to separate any of the items, modifies the list as a whole, rather than each member of the list. The phrase “at least one of” does not require selection of at least one item; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. 
By way of example, each of the phrases “at least one of A, B, and C” or “at least one of A, B, or C” refers to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C. It is understood that the specific order or hierarchy of steps, operations, or processes disclosed is an illustration of exemplary approaches. Unless explicitly stated otherwise, it is understood that the specific order or hierarchy of steps, operations, or processes may be performed in different order. Some of the steps, operations, or processes may be performed simultaneously. The accompanying method claims, if any, present elements of the various steps, operations or processes in a sample order, and are not meant to be limited to the specific order or hierarchy presented. These may be performed in serial, linearly, in parallel or in different order. It should be understood that the described instructions, operations, and systems can generally be integrated together in a single software/hardware product or packaged into multiple software/hardware products. The disclosure is provided to enable any person skilled in the art to practice the various aspects described herein. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology. The disclosure provides various examples of the subject technology, and the subject technology is not limited to these examples. Various modifications to these aspects will be readily apparent to those skilled in the art, and the principles described herein may be applied to other aspects. All structural and functional equivalents to the elements of the various aspects described throughout the disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. 
Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112, sixth paragraph, unless the element is expressly recited using a phrase means for or, in the case of a method claim, the element is recited using the phrase step for. The title, background, brief description of the drawings, abstract, and drawings are hereby incorporated into the disclosure and are provided as illustrative examples of the disclosure, not as restrictive descriptions. It is submitted with the understanding that they will not be used to limit the scope or meaning of the claims. In addition, in the detailed description, it can be seen that the description provides illustrative examples and the various features are grouped together in various implementations for the purpose of streamlining the disclosure. The method of disclosure is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed configuration or operation. The following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separately claimed subject matter. The claims are not intended to be limited to the aspects described herein, but are to be accorded the full scope consistent with the language claims and to encompass all legal equivalents. Notwithstanding, none of the claims are intended to embrace subject matter that fails to satisfy the requirements of the applicable patent law, nor should they be interpreted in such a way.
DETAILED DESCRIPTION

The propagation CIR estimation based on SRS transmission relies on periodic cross-correlation of the received signal with a reference signal in the receiver. The reference signal s*(k), k=0, 1, . . . , N−1, is the complex-conjugate version of the SRS, where “*” denotes complex conjugation. At the transmitter the SRS is extended by a preamble signal s_cp(l)=s(N−1−(D−1−l)), l=0, 1, . . . , D−1, of D samples, equal to the last D≤N samples of the SRS. In systems using orthogonal frequency division multiplex (OFDM) waveforms, the preamble {s_cp(l)} is called the cyclic prefix (CP). The detection time window D_W is not supposed to be larger than the CP length D and could be smaller, if the receiver knows that the delay spread of the propagation channel is smaller than the CP. In some transmission schemes, the SRS is also extended by a postamble signal of the same size as the CP, but equal to the first D samples of the SRS. The received signal r(k) is the convolution of the transmitted signal (concatenated CP and SRS) and the propagation channel impulse response h(l), l=0, 1, . . . , C−1. If the CP length is such that D≥C−1, the samples of the received SRS signal can be rewritten as the circular convolution between {s(k)} and {h(l)}, i.e.,

$$r(k)=\sum_{l=0}^{C-1}h(l)\,s((k-l)\bmod N),\quad k=0,1,\ldots,N-1.\qquad(1)$$

The transmission of the SRS is synchronized with the timing of the receiver through the so-called TA mechanism at the transmitter, based on feedback from the receiver, so that the time of arrival (TOA) of the earliest received replica of the SRS corresponds to the position of the first sample (r(0)) in the first block of N received signal samples correlated in the matched filter. It means that the matched filter always starts its operation at a delay p=0. Therefore the operation of the matched filter can be described as the calculation of the D values of the periodic cross-correlation θ_rs(p), p=0, 1, . . . , D−1, between the reference signal s*(k), k=0, 1, . . . 
, N−1, and D overlapping blocks r(k+p) of the received signal. Mathematically θ_rs(p) can be described as

$$\theta_{rs}(p)=\sum_{k=0}^{N-1}s^{*}(k)\,r(k+p)=\sum_{l=0}^{C-1}h(l)\sum_{k=0}^{N-1}s^{*}(k)\,s(k-l+p)=\sum_{l=0}^{C-1}h(l)\,\theta_{ss}(p-l),\quad p=0,1,\ldots,D-1\qquad(2)$$

where θ_ss(τ), τ=0, 1, . . . , N−1, is the periodic autocorrelation of the SRS. Let us suppose now that the periodic autocorrelation function θ_ss(τ) of the SRS has a ZAZ of length D_ZAZ defined as

$$\theta_{ss}(\tau)\;\begin{cases}=0, & |\tau|=1,2,\ldots,D_{ZAZ}\\ \neq 0, & |\tau|>D_{ZAZ}\end{cases}\qquad(3)$$

such that

$$C-1\le D_{ZAZ}\le D-1.\qquad(4)$$

From (2)-(4) it follows that

$$\theta_{rs}(l)=\begin{cases}h(l)\,\theta_{ss}(0), & l=0,1,\ldots,C-1\\ 0, & l=C,C+1,\ldots,D_{ZAZ}.\end{cases}\qquad(5)$$

In case of multiple SRSs transmitted in parallel, the aforementioned TA mechanism will result in aligned TOAs of their earliest received replicas at the BS. Under that condition it is expected that the CIR estimation at the matched filter output of one SRS does not suffer interference from any other concurrently transmitted SRS. As before, let r(k) denote the samples of the received signal from one SRS, e.g. s_x(k), k=0, 1, . . . , N−1, while the reference signal corresponding to another SRS, e.g. s_y(k), is used to perform the matched filtering. The matched filter output is then given by

$$\theta_{r_x s_y}(p)=\sum_{l=0}^{C-1}h(l)\,\theta_{s_x s_y}(p-l),\quad p=0,1,\ldots,D-1.\qquad(6)$$

Let us suppose that the periodic cross-correlation function θ_{s_x s_y} of two SRSs has a ZCCZ of length D_ZCCZ, defined as

$$\theta_{s_x s_y}(\tau)\;\begin{cases}=0, & |\tau|=0,1,\ldots,D_{ZCCZ}\\ \neq 0, & |\tau|>D_{ZCCZ}\end{cases}\qquad(7)$$

such that

$$C-1\le D_{ZCCZ}\le D-1.\qquad(8)$$

From (6)-(8) it follows that θ_{r_x s_y}(τ)=0, τ=0, 1, . . . , D_ZCCZ. Thus, as long as the detection time window D_W ≤ D is such that

$$D_W=D_{ZCZ}+1\qquad(9)$$

where D_ZCZ = min{D_ZAZ, D_ZCCZ} is the length of the zero correlation zone (ZCZ), the CIR of the propagation channel experienced by the SRS s_y(k) can be estimated without any interference from the simultaneously received SRS s_x(k) having the same D_ZAZ. 
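As a numerical sanity check of equations (1), (2) and (5), the following sketch (toy sizes and an arbitrary 3-tap CIR, with a length-16 Chu sequence standing in for the SRS, all assumed for illustration) builds the received block as a circular convolution and verifies that the matched-filter output reproduces the CIR taps scaled by θ_ss(0)=N, and is zero beyond the channel length:

```python
import cmath

N, D = 16, 4
h = [1.0, 0.5 - 0.2j, 0.1j]        # toy CIR, C = 3 taps, C - 1 <= D
C = len(h)
# Length-16 Chu sequence: unit amplitude and ideal periodic autocorrelation,
# so its ZAZ spans all nonzero lags (a stand-in for an SRS).
s = [cmath.exp(1j * cmath.pi * k * k / N) for k in range(N)]

# Equation (1): received SRS block as circular convolution of s and h.
r = [sum(h[l] * s[(k - l) % N] for l in range(C)) for k in range(N)]

# Equation (2): matched filter = periodic cross-correlation of s* with r.
def theta_rs(p):
    return sum(s[k].conjugate() * r[(k + p) % N] for k in range(N))

# Equation (5): the first C lags recover h(l) * theta_ss(0), with
# theta_ss(0) = N; lags C..D-1 inside the ZAZ are exactly zero.
for l in range(C):
    assert abs(theta_rs(l) - h[l] * N) < 1e-9
for l in range(C, D):
    assert abs(theta_rs(l)) < 1e-9
```

Because the Chu sequence's zero autocorrelation zone covers the whole detection window, each tap estimate is interference-free, which is exactly the property the ZAZ/ZCCZ conditions (3)-(9) formalize.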
The upper bound of D_ZCZ for a set of M complex signals of length N is given by

$$D_{ZCZ}\le\frac{N}{M}-1.\qquad(10)$$

The estimated CIR is then converted to the frequency domain via DFT to obtain the estimated channel coefficient at each subcarrier and each BS antenna for use in the subsequent DL data transmission. In a first conventional solution, a class of “semi-perfect” binary sequences is proposed as midamble sequences for CIR estimation in the GSM mobile communication systems with BPSK, MSK-type, QPSK and offset-QPSK modulation. The proposed semi-perfect sequences can have any length of the form N=2^k, with k being a positive integer, and are constructed recursively from shorter semi-perfect sequence pairs of length N′=N/2. The total number of sequences that can be generated is (k−2)!·2^(k−1). It is proved that these sequences have a ZAZ of length N/4−1 in their periodic autocorrelation functions, and a ZCCZ of length N/4−1 in the periodic cross-correlation function of each pair. In a second conventional solution, the SRS is constructed based on OFDM modulation, as a signal consisting of a sum of N complex sinusoids whose frequencies are consecutive integer multiples of a constant. The complex sinusoids are often called subcarriers and the constant is referred to as subcarrier spacing (SCS). Each sinusoid is multiplied, i.e. modulated, by a complex number. These complex numbers can be ordered into a sequence, often called a frequency-domain modulation sequence. The SRS frequency-domain modulation sequences are Zadoff-Chu (ZC) sequences of certain root indices and lengths, with periodic extensions according to the available SRS bandwidth. A ZC sequence {a(u)} of length L is defined as

$$a(u)=W_L^{\alpha u(u+L\bmod 2+2q)/2},\quad u=0,1,\ldots,L-1\qquad(11)$$

where W_L=exp(−√(−1)·2π/L), α≠0 and q are integers, and α, which is called the root index, is relatively prime to L. There are 30 ZC root indices defined. 
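Equation (11) can be exercised directly; the sketch below (toy parameters L=31, α=5 and q=0, chosen only for illustration) generates a ZC sequence and checks the two properties relied upon here, constant amplitude and an impulse-like periodic autocorrelation:

```python
import cmath
from math import gcd

def zadoff_chu(L, alpha, q=0):
    """ZC sequence per equation (11); the root index must be coprime to L."""
    assert alpha != 0 and gcd(alpha, L) == 1
    W = cmath.exp(-1j * 2 * cmath.pi / L)
    return [W ** (alpha * u * (u + L % 2 + 2 * q) / 2) for u in range(L)]

L, alpha = 31, 5
zc = zadoff_chu(L, alpha)
assert all(abs(abs(x) - 1.0) < 1e-9 for x in zc)   # constant amplitude

def autocorr(seq, tau):
    """Periodic autocorrelation at lag tau."""
    n = len(seq)
    return sum(seq[u].conjugate() * seq[(u + tau) % n] for u in range(n))

assert abs(autocorr(zc, 0) - L) < 1e-6             # peak of L at lag 0
assert all(abs(autocorr(zc, t)) < 1e-6 for t in range(1, L))  # zero elsewhere
```

The all-lag zero autocorrelation is what makes cyclic shifts of one ZC sequence mutually orthogonal within a zero correlation zone.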
For a given root index of the ZC sequence, a generic SRS is obtained by using the elements of the corresponding generic ZC sequence to modulate every K_TC-th subcarrier (K_TC=2, 4 or 8, referred to as Comb-2, Comb-4 and Comb-8 frequency resources, respectively), and then summing all the subcarriers. The modulation and summing of subcarriers can be efficiently done through an IDFT. Obviously, there are K_TC=2, 4 or 8 possible starting frequency positions for mapping the generic ZC sequence. Such a generic SRS is then used to generate a group of A orthogonal SRSs, having certain ZAZ and ZCCZ in their periodic correlation functions, by A (=8, 12 or 6) equidistant cyclic shifts of the basic SRS. The cyclic shifts of the basic SRS can be implemented by the corresponding phase offsets of the basic ZC sequence. Such a group of A orthogonal SRSs can be generated for each possible starting frequency position. Thus, the total number of orthogonal SRSs for each type of frequency resource is as follows: for Comb-2 there are K_TC×A=2×8=16 orthogonal SRSs; for Comb-4 there are 4×12=48 orthogonal SRSs; and for Comb-8 there are 8×6=48 orthogonal SRSs. The corresponding zero correlation zones of length D_ZCZ=min{D_ZAZ, D_ZCCZ} are given by

$$D_{ZCZ}=\frac{N}{8\ \text{(cyclic shifts)}\times 2\ \text{(initial locations)}}-1=N/16-1\quad\text{(Comb-2)}$$

$$D_{ZCZ}=\frac{N}{12\ \text{(cyclic shifts)}\times 4\ \text{(initial locations)}}-1=N/48-1\quad\text{(Comb-4)}$$

$$D_{ZCZ}=\frac{N}{6\ \text{(cyclic shifts)}\times 8\ \text{(initial locations)}}-1=N/48-1\quad\text{(Comb-8)}$$

Each SRS is extended by a cyclic prefix as described earlier, to obtain an SRS OFDM symbol. Multiple SRS OFDM symbols, e.g. 1, 2, 4, 8 or 12 OFDM symbols, can be transmitted consecutively, i.e. repeated at a certain antenna port. A third conventional solution discloses the generation of UL random access (RACH) preambles proposed for the new radio (NR) unlicensed spectrum. Although the RACH preambles have a different purpose (i.e. synchronization) and different design requirements than SRSs (i.e. 
for channel estimation), the RACH preambles are considered relevant, due to certain correlation properties and frequency resource allocation of the proposed RACH preambles. These frequency resources are allowed to be defined more flexibly than in the above described conventional solutions: the whole band of N = δt subcarriers, with δ and t being positive integers, is divided into t subbands of δ consecutive subcarriers. The same number ASC (ASC ≤ δ) of subcarriers is selected in each subband to be used for transmission. The selected subcarriers are modulated by A = ASC orthogonal or quasi-orthogonal sequences to produce A RACH preambles having certain ZAZ and ZCCZ in their periodic correlation functions. The inventors have identified a number of drawbacks of the conventional solutions. The drawback of the first conventional solution is that, according to equation (10), the number M of proposed midamble sequences that might have the zero correlation zone of length DZCZ = N/4 − 1 is theoretically limited to 4. The drawback of the second conventional solution is that it has a limited SRS capacity per cell in the case of a high mobility system, where a UE needs to transmit SRS with the shortest repetition period T to track the channel variation over time due to UE mobility. As mentioned previously, the second conventional solution can support at most 16 or 48 orthogonal and simultaneous SRS transmissions over a given time-frequency resource per cell. If additional simultaneously active UEs have to be supported, their SRS transmission timing could be delayed by some time offset Toffset within the deployed SRS period T, so that they can reuse, without any interference, the SRSs already allocated to other UEs. Possible time offsets depend on the value of T.
In the second conventional solution, the SRS period T (called periodicity TSRS in the second conventional solution) and the corresponding possible time offsets Toffset are expressed in slots, not in absolute time, because slots have different durations in NR for different subcarrier spacings, as shown in the following table:

Subcarrier spacing (kHz)    Duration of one slot (ms)
15                          1
30                          0.5
60                          0.25
120                         0.125
240                         0.0625

If the SRS period is N slots then there are N−1 possible time offsets within that period. The minimum SRS period T is only one slot duration, so there is no corresponding time offset. In that case, supporting additional orthogonal SRS transmissions by time division multiplexing (TDM) of reused SRSs is only possible if the SRS period is doubled. However, doubling the SRS period results in a significant mismatch between the CIRs experienced by UL SRS and DL data. The second conventional solution also allows an alternative approach, by giving up the SRS orthogonality and allowing SRSs generated by different ZC root indices to be transmitted concurrently in the same cell. This brings significantly increased interference over the whole CIR detection window, primarily from intra-cell UEs, but also to some extent from inter-cell UEs, as the SRS reuse factor in the system is reduced due to a smaller number of supported cells (this second source of interference can be easily eliminated by increasing the subset of root indices specified in the standard). Consequently, in both of these two approaches, the precoder for DL data transmission designed based on the estimated UL CIR does not match well the CIR experienced by DL data, causing significant performance degradation for DL data transmission. The drawback of the third conventional solution is similar to that of the second conventional solution, i.e. it does not disclose how to generate non-orthogonal SRSs and how to handle interference between non-orthogonal SRSs.
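The slot durations in the table follow from the NR numerology rule that an SCS of 15·2^μ kHz gives a slot duration of 1/2^μ ms. A small sketch (the function name is illustrative, not from the patent):

```python
def nr_slot_duration_ms(scs_khz):
    # NR numerology: SCS = 15 * 2^mu kHz  ->  slot duration = 1 / 2^mu ms
    mu = (scs_khz // 15).bit_length() - 1
    return 1.0 / (2 ** mu)

durations = {scs: nr_slot_duration_ms(scs) for scs in (15, 30, 60, 120, 240)}
```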
Consequently, there is a need for a solution which addresses the above mentioned drawbacks. An objective of the invention is therefore to provide a solution that increases the number of reference signals that can be transmitted simultaneously while minimizing the interference, such that the channel estimation performance can be improved. Embodiments of the invention disclose a reference signal design which can be used e.g. for UL SRS design for OFDM based mMIMO wireless communication systems but is not limited thereto. The reference signal design explores a special property, the sparsity, of the pairwise cross correlation of non-orthogonal block-interlaced frequency domain multiple access (B-IFDMA) signals. The sparsity of the pairwise cross correlation can be defined as the property that the cross correlation is non-zero only for a set of delays that are multiples of a certain delay offset, where the delay offset is significantly larger than 1 and is a factor of the SRS duration. Embodiments of the invention can utilize the fact that the timing of all UL signals is tightly controlled by the base station by the so-called timing advance (TA) mechanism, resulting in simultaneous arrival of the earliest replicas of all UL SRSs at the BS. By simple manipulations of different non-orthogonal B-IFDMA signals, it is achieved that the sparse pairwise interference caused by the cross correlation of a non-orthogonal SRS appears only between the tail of the CIR experienced by one SRS and the head of the CIR experienced by another SRS. Thereby, the head of the CIR, where typically the strongest and thus most important CIR components lie, only suffers marginal interference from the other non-orthogonal SRSs, even if the number of concurrently transmitted new SRSs per cell is doubled compared to the corresponding conventional SRSs in 3GPP NR. It is however noted that embodiments of the invention can be implemented in the downlink, in sidelink or any other suitable communication link.
FIG.1shows a first communication device100according to an embodiment of the invention. In the embodiment shown inFIG.1, the first communication device100comprises an obtaining block102, a mapping block104, a phase rotating block106, an inverse Fourier transforming block108, and a transmitting block110coupled to each other. The obtaining block102is configured to obtain at least one sequence from a subset of sequences with index q in a set of sequences. The set of sequences comprises Q number of subsets of sequences, each subset of sequences comprising A number of orthogonal sequences. Furthermore, any two sequences from different subsets of sequences are non-orthogonal. In embodiments, the obtaining block102may obtain two or more sequences, wherein the two or more sequences are from the same subset of sequences. The obtained two or more sequences are then used to generate two or more time domain signals which are transmitted from different antenna ports of the first communication device. By generating the two or more time domain signals based on the two or more sequences from the same subset of sequences, a ZCCZ can be maintained between them whose length/time span can be as long as that of their ZAZ. Thereby, minimized interference between the two or more time domain signals at the second communication device can be achieved during the estimation of their correspondingly experienced propagation channels. The mapping block104is configured to map the sequence from the subset of sequences onto a subset of subcarriers in a set of subcarriers to obtain a sequence in the frequency domain. The set of subcarriers are divided into t number of subbands. In embodiments, the number of subcarriers in the subset of subcarriers is equal to the length of the sequence. The mapping block104may then be configured to map each element of the sequence onto each subcarrier in the subset of subcarriers.
Each element of the sequence can be mapped onto each subcarrier in the subset of subcarriers in either ascending or descending order. For example, the kth element of the sequence can be mapped onto the kth subcarrier in the subset of subcarriers, where k = 0, 1, 2, etc. and the subcarriers in the subset of subcarriers can be indexed either from the lowest frequency to the highest frequency, or vice versa. The subset of subcarriers may comprise ASC number of subcarriers in each of the t number of subbands. The distribution of the ASC number of subcarriers may be the same in all the t number of subbands, and the ASC number of subcarriers in each subband may be equal to the A number of orthogonal sequences in each subset of sequences, i.e. ASC = A. The distribution of the ASC number of subcarriers in each subband may be consecutive or non-consecutive. Furthermore, the distribution of the ASC number of subcarriers in each subband may be uniform or non-uniform. In this way, the distribution of all the subcarriers in the subset of subcarriers may be mapped to possible distributions of subcarriers in the set of subcarriers in the frequency domain according to the 3GPP standard. The phase rotating block106is configured to phase rotate the sequence in the frequency domain based on a constant value Δq to obtain a phase rotated sequence in the frequency domain. The constant value Δq is dependent on the Q number of subsets of sequences, the t number of subbands and the index q of the subset of sequences. By phase rotating the sequence based on the constant value Δq, a predetermined, subset-specific cyclic shift in the time domain can be achieved for the sequence. In embodiments, the constant value Δq is a positive integer and determined according to any of the formulas

Δq = q·└t/Q┘, q = 0, 1, . . . , Q−1, or
Δq = └q·t/Q┘, q = 0, 1, . . . , Q−1,

where └·┘ is the flooring operator.
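The two candidate formulas for Δq differ only in where the flooring is applied. The sketch below (hypothetical function names; the values of t and Q are illustrative) shows that they coincide when Q divides t and diverge otherwise:

```python
def delta_q_floor_inside(q, t, Q):
    # Delta_q = q * floor(t / Q)
    return q * (t // Q)

def delta_q_floor_outside(q, t, Q):
    # Delta_q = floor(q * t / Q)
    return (q * t) // Q

t, Q = 12, 5                    # illustrative values where Q does not divide t
inside = [delta_q_floor_inside(q, t, Q) for q in range(Q)]
outside = [delta_q_floor_outside(q, t, Q) for q in range(Q)]
```

The second formula spreads the Q cyclic shifts more evenly over the available range when t is not a multiple of Q.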
The inverse Fourier transforming block108is configured to inverse Fourier transform the phase rotated sequence in the frequency domain to obtain a time domain signal. The time domain signal may be a reference signal such as e.g. a sounding reference signal (SRS). Due to the phase rotation performed by the phase rotating block106, the ZCCZ property between the time domain signals generated based on sequences from different subsets of sequences in the set of sequences can be guaranteed, where the ZCCZ length/time span is dependent on the subset specific constant value Δq. The transmitting block110is configured to transmit the time domain signal to a second communication device300. The transmitting block110may transmit the time domain signal using a transceiver of the first communication device100, such as the transceiver120shown inFIG.1. With reference toFIG.1, the transceiver120may be coupled to an antenna or antenna array122, which means that the first communication device100may be configured for wireless communications in a communication system500. The obtaining block102, the mapping block104, the phase rotating block106, and the inverse Fourier transforming block108may be implemented in a processor(s) (not shown in the Figs. of the disclosure) of the first communication device100such that the processor is configured to execute the actions or corresponding steps and functions performed and illustrated by the obtaining block102, the mapping block104, the phase rotating block106, and the inverse Fourier transforming block108inFIG.1. The obtaining block102, the mapping block104, the phase rotating block106, and the inverse Fourier transforming block108could for example represent different sections of a program code running on the processor. 
In a similar way, the transmitting block110may be implemented in a transceiver120of the first communication device100, such that the transceiver120is configured to execute the actions or corresponding steps and functions performed and illustrated by the transmitting block110inFIG.1. However, the obtaining block102, the mapping block104, the phase rotating block106, the inverse Fourier transforming block108, and the transmitting block110may be implemented in a processor system (not shown) comprising one or more processors coupled to an input and an output. FIG.2shows a flow chart of a corresponding method200which may be executed in a first communication device100, such as the one shown inFIG.1. The method200comprises obtaining202at least one sequence from a subset of sequences with index q in a set of sequences, wherein the set of sequences comprises Q number of subsets of sequences, each subset of sequences comprising A number of orthogonal sequences, and wherein any two sequences from different subsets of sequences are non-orthogonal. The method200further comprises mapping204the sequence from the subset of sequences onto a subset of subcarriers in a set of subcarriers to obtain a sequence in the frequency domain, wherein the set of subcarriers are divided into t number of subbands. The method200further comprises phase rotating206the sequence in the frequency domain based on a constant value Δqto obtain a phase rotated sequence in the frequency domain, wherein the constant value Δqis dependent on the Q number of subsets of sequences, the t number of subbands and the index q of the subset of sequences. Furthermore, the method200comprises inverse Fourier transforming208the phase rotated sequence in the frequency domain to obtain a time domain signal; and transmitting210the time domain signal to a second communication device300. FIG.3shows a second communication device300according to an embodiment of the invention. 
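The signal-processing steps of method200can be sketched numerically. The sketch below (assuming numpy; all parameter values and the placeholder unit-modulus sequence are illustrative, not taken from the patent) maps a sequence onto an interlaced subset of subcarriers, applies the Δq phase rotation, and inverse-transforms the result; it also verifies that the frequency-domain phase ramp produces a cyclic shift of exactly Δq samples in the time domain:

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the patent)
N, t = 64, 8                   # total subcarriers, number of subbands
delta = N // t                 # subcarriers per subband
A, Q, q = 4, 2, 1              # sequences per subset, subsets, chosen subset index

# mapping step: one interlace of A subcarriers per subband
subcarriers = np.array([delta * i + l for i in range(t) for l in range(A)])
L = len(subcarriers)           # sequence length = A * t
seq = np.exp(-1j * 2 * np.pi * np.arange(L) / L)   # placeholder unit-modulus sequence
S = np.zeros(N, dtype=complex)
S[subcarriers] = seq

# phase rotation step: multiply by W_N^(Delta_q * f), Delta_q = q * floor(t/Q)
dq = q * (t // Q)
f = np.arange(N)
S_rot = S * np.exp(-1j * 2 * np.pi * dq * f / N)

# inverse Fourier transform step: scaled IDFT to the time domain
s_time = np.fft.ifft(S_rot) * np.sqrt(N)
s_base = np.fft.ifft(S) * np.sqrt(N)   # unrotated reference for comparison
```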
In the embodiment shown inFIG.3, the second communication device300comprises a receiving block302, a Fourier transforming block304, an obtaining block306, a mapping block308, a phase rotating block310, and an estimating block312coupled to each other. The receiving block302is configured to receive a time domain signal in a radio channel from the first communication device100. The received time domain signal is associated with a sequence. In embodiments, the received time domain signal may comprise a time domain signal associated with a sequence, wherein the time domain signal comprised in the received time domain signal may be a reference signal such as e.g. a sounding reference signal (SRS). The receiving block302may receive the time domain signal using a transceiver, such as the transceiver320shown inFIG.3. With reference toFIG.3, the transceiver320may be coupled to an antenna or antenna array322, which means that the second communication device300may be configured for wireless communications in a communication system500. The Fourier transforming block304is configured to Fourier transform the received time domain signal to obtain the sequence in the frequency domain of the received time domain signal. The obtaining block306is configured to obtain at least one local sequence from a subset of sequences with index q in a set of sequences comprising Q number of subsets of sequences. Each subset of sequences comprises A number of orthogonal sequences, and any two sequences from different subsets of sequences are non-orthogonal. A local sequence can be understood to mean a sequence which is obtained locally in the second communication device300and not received by the second communication device300e.g. from the first communication device100. In embodiments, the obtaining block306may obtain two or more local sequences, wherein the two or more local sequences can be from the same or different subsets of sequences.
The mapping block308is configured to map the local sequence onto a subset of subcarriers in a set of subcarriers to obtain the local sequence in the frequency domain. The set of subcarriers are divided into t number of subbands. In a similar way as described above for the first communication device100, the number of subcarriers in the subset of subcarriers may in embodiments be equal to the length of the sequence. The mapping block308may then be configured to map each element of the sequence onto each subcarrier in the subset of subcarriers. Each element of the sequence can be mapped onto each subcarrier in the subset of subcarriers in either ascending or descending order. For example, the kth element of the sequence can be mapped onto the kth subcarrier in the subset of subcarriers, where k = 0, 1, 2, etc. and the subcarriers in the subset of subcarriers can be indexed either from the lowest frequency to the highest frequency, or vice versa. Furthermore, the subset of subcarriers may comprise ASC number of subcarriers in each of the t number of subbands. The distribution of the ASC number of subcarriers may be the same in all the t number of subbands, and the ASC number of subcarriers in each subband may be equal to the A number of orthogonal sequences in each subset of sequences, i.e. ASC = A. The distribution of the ASC number of subcarriers in each subband may e.g. be consecutive or non-consecutive. Furthermore, the distribution of the ASC number of subcarriers in each subband may be uniform or non-uniform. In this way, the distribution of all the subcarriers in the subset of subcarriers may be mapped to possible distributions of subcarriers in the set of subcarriers in the frequency domain according to the 3GPP standard.
The phase rotating block310is configured to phase rotate the local sequence in the frequency domain based on a constant value Δq to obtain a phase rotated local sequence in the frequency domain, wherein the constant value Δq is dependent on the Q number of subsets of sequences, the t number of subbands and the index q of the subset of sequences. By phase rotating the local sequence based on the constant value Δq, a predetermined, subset-specific cyclic shift in the time domain can be achieved for the local sequence. In embodiments, the constant value Δq is a positive integer and determined according to any of the formulas

Δq = q·└t/Q┘, q = 0, 1, . . . , Q−1, or
Δq = └q·t/Q┘, q = 0, 1, . . . , Q−1,

where └·┘ is the flooring operator. The estimating block312is configured to estimate the radio channel based on the phase rotated local sequence in the frequency domain and the sequence in the frequency domain of the received time domain signal, i.e. based on the outputs of the Fourier transforming block304and the phase rotating block310. Estimating the radio channel can comprise estimating the channel impulse response in the time domain or the channel coefficient of each subcarrier in the set of subcarriers in the frequency domain. The Fourier transforming block304, the obtaining block306, the mapping block308, the phase rotating block310, and the estimating block312may be implemented in a processor(s) (not shown in the Figs. of the disclosure) of the second communication device300such that the processor is configured to execute the actions or corresponding steps and functions performed and illustrated by the Fourier transforming block304, the obtaining block306, the mapping block308, the phase rotating block310, and the estimating block312inFIG.3. The Fourier transforming block304, the obtaining block306, the mapping block308, the phase rotating block310, and the estimating block312could for example represent different sections of a program code running on the processor.
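As a minimal numerical sketch of the frequency-domain estimation (assuming numpy and a noiseless, synthetic 3-tap channel; all names and values are illustrative, and for simplicity every subcarrier is used), the per-subcarrier channel coefficient can be estimated by correlating the received frequency-domain signal with the known unit-modulus local sequence, and the CIR recovered by IDFT:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64

# known unit-modulus local sequence in the frequency domain (illustrative)
S = np.exp(-1j * 2 * np.pi * rng.integers(0, N, size=N) / N)

# synthetic 3-tap channel impulse response and its frequency response
h = np.zeros(N, dtype=complex)
h[:3] = [1.0, 0.5j, 0.25]
H = np.fft.fft(h)

Y = H * S                    # noiseless received frequency-domain signal

H_hat = Y * np.conj(S)       # least-squares estimate per subcarrier (|S(f)| = 1)
h_hat = np.fft.ifft(H_hat)   # estimated channel impulse response
```

In the noiseless case the estimate is exact; with noise and multiple superimposed SRSs, the ZAZ/ZCCZ properties confine the cross-terms to delays outside the CIR detection window.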
In a similar way, the receiving block302may be implemented in a transceiver320of the second communication device300, such that the transceiver320is configured to execute the actions or corresponding steps and functions performed and illustrated by the receiving block302inFIG.3. However, the receiving block302, the Fourier transforming block304, the obtaining block306, the mapping block308, the phase rotating block310, and the estimating block312may be implemented in a processor system (not shown) comprising one or more processors coupled to an input and an output. FIG.4shows a flow chart of a corresponding method400which may be executed in a second communication device300, such as the one shown inFIG.3. The method400comprises receiving402a time domain signal in a radio channel from the first communication device100, wherein the received time domain signal is associated with a sequence, and Fourier transforming404the received time domain signal to obtain the sequence in the frequency domain of the received time domain signal. That the received time domain signal is associated with a sequence can mean that the received time domain signal comprises at least one time domain signal associated with a sequence, wherein the time domain signal comprised in the received time domain signal is generated based on the sequence at a first communication device100and transmitted from the first communication device100. The method400further comprises obtaining406at least one local sequence from a subset of sequences with index q in a set of sequences comprising Q number of subsets of sequences, each subset of sequences comprising A number of orthogonal sequences, and wherein any two sequences from different subsets of sequences are non-orthogonal. The method400further comprises mapping408the local sequence onto a subset of subcarriers in a set of subcarriers to obtain the local sequence in the frequency domain, wherein the set of subcarriers are divided into t number of subbands. 
The method400further comprises phase rotating410the local sequence in the frequency domain based on a constant value Δqto obtain a phase rotated local sequence in the frequency domain, wherein the constant value Δqis dependent on the Q number of subsets of sequences, the t number of subbands and the index q of the subset of sequences. Furthermore, the method400comprises estimating412the radio channel based on the phase rotated local sequence in the frequency domain obtained in the phase rotating410method step and the sequence in the frequency domain of the received time domain signal obtained in the Fourier transforming404method step. FIG.5shows a communication system500according to an embodiment of the invention. The communication system500comprises multiple first communication devices100a,100b, . . . ,100nand a second communication device300configured to operate in the communication system500. In the embodiment shown inFIG.5, the first communication devices100a,100b, . . . ,100nmay be client devices which may also be denoted user equipment (UE) devices and the second communication device300may be a network access node such as e.g. a base station in a radio access network (RAN). The first communication devices100a,100b, . . . ,100nmay be configured to transmit reference signals to the second communication device300. Each of the transmitted reference signals is obtained based on a sequence as described above with reference toFIG.1. Based on the received reference signal, the second communication device300may estimate the quality of radio channels510a,510b, . . . ,510nexperienced by the time domain signals transmitted from the first communication devices100a,100b, . . . ,100n, respectively. 
According to embodiments of the invention the first communication device100may obtain the sequence used to determine the time domain signal further based on information provided by the second communication device300.FIG.6shows the signaling between the first communication device100and the second communication device300according to such an embodiment. In step I inFIG.6, the second communication device300provides a control message502to the first communication device100, the control message502comprising at least one parameter indicating the sequence. The second communication device300may e.g. transmit the control message502to the first communication device100. The first communication device100obtains, e.g. receives, the control message502comprising the at least one parameter indicating the sequence from the second communication device300. Based on the control message502, the first communication device100obtains the sequence in step II inFIG.6. Step II may comprise the first communication device100obtaining the at least one parameter indicating the sequence from the control message502and obtaining the sequence based on the at least one parameter indicating the sequence. In embodiments, the control message502may comprise a first parameter indicating an index of the sequence in a superset of sequences for the communication system500, where the superset of sequences defined for the whole communication system500comprising multiple cells may be divided into multiple sets of sequences for use in different cells of the communication system500. Alternatively, the control message502may comprise a first parameter indicating an index of the set of sequences in a superset of sequences for the communication system500and a second parameter indicating an index of the sequence in the set of sequences.
Or the control message502may comprise a first parameter indicating an index of the set of sequences in a superset of sequences for the communication system500, a second parameter indicating an index of the subset of sequences in the set of sequences and a third parameter indicating an index of the sequence in the subset of sequences. Thus, the first communication device100may in embodiments obtain the sequence based on at least one of the first parameter, the second parameter, and the third parameter. In embodiments where the first communication device100may obtain two or more sequences, the second communication device300may provide, e.g. transmit, a control message502to the first communication device100to indicate the two or more sequences, wherein the two or more sequences are from the same subset of sequences. The control message502may in this case comprise at least two parameters indicating the two or more sequences. The two or more sequences may be indicated separately or jointly. When the two or more sequences are indicated separately, each sequence may be indicated as described above with a distinct set of parameters, each comprising a first parameter; a first parameter and a second parameter; or first, second and third parameters. When the two or more sequences are indicated jointly, one first parameter may be used to indicate their common index of the set of sequences, and at least two second parameters may be used to indicate their respective indices in the set of sequences. In a further example, one first parameter may be used to indicate their common index of the set of sequences, one second parameter may be used to indicate their common index of the subset of sequences in the set of sequences, and at least two third parameters may be used to indicate their respective indices in the subset of sequences.
Further details related to the invention will now be described with reference to embodiments for obtaining a set of time domain signals based on the set of sequences according to the invention, where the set of time domain signals are a set of SRSs.

FIG.7shows a flow chart of a method700for obtaining the set of SRSs according to an embodiment of the invention. The method700comprises the following steps:

Step 1: Determine common frequency resources for all the SRSs in a set of SRSs. The set of corresponding subcarrier indices is denoted by Ωm, with Ωm ⊆ {0, 1, . . . , N−1} and |Ωm| = L ≤ N, where Ωm (m = 0, 1, . . . , B−1) refers to the m-th possible frequency resource location for SRS in the SRS frequency band and |·| denotes the size of a set;

Step 2: Generate a set of QA length-L modulation sequences {cq,n(u) | q = 0, 1, . . . , Q−1, n = 0, 1, . . . , A−1, u = 0, 1, . . . , L−1};

Step 3: Map the modulation sequences {cq,n(u)} to the frequency resources Ωm, to obtain Fourier coefficients {S̃m,q,n(f) | q = 0, 1, . . . , Q−1, n = 0, 1, . . . , A−1, f = 0, 1, . . . , N−1};

Step 4: Add proper phase rotations to obtain modified Fourier coefficients {Sm,q,n(f) | q = 0, 1, . . . , Q−1, n = 0, 1, . . . , A−1, f = 0, 1, . . . , N−1}, in order to achieve the desired cyclic shifts in the time domain;

Step 5: Convert the modified Fourier coefficients {Sm,q,n(f)} to the time domain SRS via scaled IDFT, yielding the set of SRSs {sm,q,n(k) | q = 0, 1, . . . , Q−1, n = 0, 1, . . . , A−1, k = 0, 1, . . . , N−1}.

Each SRS in the obtained set of SRSs has a ZAZ in its periodic autocorrelation function, and any two SRSs have a ZCCZ in their periodic cross-correlation function. The ZCCZ length DZCCZ is equal to the ZAZ length DZAZ if the SRSs are from the same orthogonal SRS subset, or is approximately equal to a multiple of DZAZ/Q if the SRSs are from different orthogonal SRS subsets, where the multiple depends on the difference of the cyclic shifts associated with each subset.
The steps in the SRS construction are further described in the subsequent subsections.

Step 1: Common Frequency Resources for a SRS Set

The SRS frequency band consists of N subcarriers, with frequencies f = {f0, . . . , fN−1} with uniform spacing Δf between neighboring frequencies. Without loss of generality it is assumed that f0 = 0 and Δf = 1, and the totally available frequency resource is denoted as f = {0, 1, . . . , N−1}, i.e. the subcarrier frequencies will be referred to by the corresponding subcarrier indices. It is assumed that N = δt, with δ and t being positive integers, and the N subcarriers are divided into t subbands of δ consecutive subcarriers. The same number ASC (ASC = A ≤ δ) of subcarriers is selected in each subband, with the same distribution, to be used for transmission. There can be B = └δ/A┘ such selections without selecting any subcarrier more than one time, and each selection of A subcarriers in a subband is called a generalized physical resource block (gPRB), where └x┘ is the greatest integer less than or equal to x. The indices of the A subcarriers in the m-th (m = 0, 1, . . . , B−1) gPRB are denoted by {jm,l | l = 0, 1, . . . , A−1} such that 0 ≤ jm,0 < jm,1 < . . . < jm,A−1 ≤ δ−1. Then all the m-th gPRBs in the t different subbands form an interlace. An interlace is mathematically described as a set of L = At used, i.e. allowed, subcarrier frequencies Ωm (m = 0, 1, . . . , B−1), defined as

Ωm = {δi + jm,l | i = 0, 1, . . . , t−1, l = 0, 1, . . . , A−1}.  (12)

B orthogonal interlaces are constructed in a given SRS frequency band, where it holds that

Ωx ∩ Ωy = ϕ if x ≠ y, ∪m=0B−1 Ωm = {0, 1, . . . , N−1}.  (13)

Waveforms whose spectrum is defined as in (12) can be classified as a generalized version of block-interleaved frequency-division multiple access (B-IFDMA) waveforms.
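Eqs. (12) and (13) can be checked numerically. The sketch below (hypothetical function name; δ = 8, t = 4, A = 4 are illustrative, giving N = 32 and B = 2 with B·A = δ so the union property in Eq. (13) holds exactly) builds interlaces with consecutive gPRB indices jm,l = mA + l and verifies disjointness and coverage:

```python
def build_interlaces(delta, t, A):
    # Omega_m = {delta*i + j_{m,l}} per Eq. (12), with consecutive
    # gPRB indices j_{m,l} = m*A + l (one possible choice)
    B = delta // A
    return [
        sorted(delta * i + m * A + l for i in range(t) for l in range(A))
        for m in range(B)
    ]

interlaces = build_interlaces(delta=8, t=4, A=4)   # N = 32, B = 2
```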
A B-IFDMA waveform, originally proposed by structuring the frequency resources of OFDM waveforms, is a base-band analog signal with a comb spectrum, with non-zero DFT coefficients only at discrete frequencies within a certain number of equidistant and identical blocks of uniformly spaced subcarriers.

Step 2: Modulation Sequences

Each of the QA modulation sequences {cq,n(u) | q = 0, 1, . . . , Q−1, n = 0, 1, . . . , A−1, u = 0, 1, . . . , L−1} is in embodiments a constant-envelope sequence of length L = At, constructed as

cq,n(u) = bq,n(u mod A)·a(u), u = 0, 1, . . . , L−1, L = At  (14)

where bq,n(l), l = 0, 1, . . . , A−1, is the n-th (n = 0, 1, . . . , A−1) sequence in the q-th (q = 0, 1, . . . , Q−1) orthogonal subset. Such sequences will be labelled as "short" sequences. All the short sequences in the q-th orthogonal subset are mutually orthogonal. a(u) is a constant-envelope sequence of length L = At. Such a sequence will be labelled as a "long" sequence. As will be shown later, the long sequence has a decisive impact on the peak-to-average power ratio (PAPR) of the corresponding SRS; additionally, multiple different sequences a(u) can be used to generate additional sets of SRSs. For example, the long sequence a(u) can be generated as in the third conventional solution, i.e. a(u) = Wt^(μ(u mod A)·└u/A┘), u = 0, 1, . . . , L−1, where Wt = exp(−j2π/t) and μ(·) is a permutation over the set {0, 1, . . . , A−1}. In this case, the generated SRS can have 0 dB PAPR if t = A. Different permutations μ(·) can be used to generate different sets of SRSs, which e.g. could be deployed in different cells. A more general construction of long sequences a(u) that produce 0 dB PAPR without the restriction t = A will be presented at the end of this section. The proposed modulation sequences {cq,n(u)} differ from the sequences constructed in the third conventional solution, both by the design of the short and of the long sequences.
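Eq. (14) can be illustrated as follows (assuming numpy; as a stand-in for one orthogonal subset of short sequences, the columns of the A×A DFT matrix are used, which is an assumption for illustration only, and the long sequence is the t = A example just given with the identity permutation):

```python
import numpy as np

A = 5                          # illustrative; t = A so the long sequence gives 0 dB PAPR
t = A
L = A * t
u = np.arange(L)

# stand-in orthogonal "short" subset: columns of the A x A DFT matrix
b = np.exp(-1j * 2 * np.pi * np.outer(np.arange(A), np.arange(A)) / A)

# long sequence a(u) = W_t^(mu(u mod A) * floor(u/A)), identity permutation mu
a_long = np.exp(-1j * 2 * np.pi * ((u % A) * (u // A)) / t)

def c_seq(n):
    # Eq. (14): c_{q,n}(u) = b_{q,n}(u mod A) * a(u)
    return b[u % A, n] * a_long

c0, c1 = c_seq(0), c_seq(1)
```

Since the long sequence has constant envelope, the modulation sequences inherit both the constant envelope of the long sequence and the mutual orthogonality of the short sequences within the subset.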
The set of proposed short sequences has QA sequences, while in the third conventional solution it has A sequences. A rather general construction of proposed {bq,n(l)}, potentially applicable in future systems based on extensions/adaptations of the second conventional solution is as follows. Example construction of the proposed short sequences: the q-th (q=0, 1, . . . , Q−1) orthogonal subset {bq,n(l)} is the set of all columns of an A×A matrix, where all the different cyclically shifted versions of a ZC sequence (11) of prime length A, obtained for a sub-set specific root index, are mapped to different rows, where the relation between a row index and a corresponding cyclic shift is determined by some permutation of the set of integers {0, 1, . . . , A−1}; and where each column is multiplied element-by-element with elements of a cover sequence. Both row indices permutation and a cover sequence could be specific for the q-th orthogonal subset. Obviously, Q≤A−1. A general construction of long sequences a(u) that produce 0 dB PAPR without restriction t=A and for arbitrary number δ of subcarriers in a subband, is based on modulatable constant amplitude zero-autocorrelation (mCAZAC) sequences. A CAZAC sequence {a(k)} has an ideal periodic autocorrelation function, i.e. θaa(P)=0, |p|=1, 2, . . . , L−1. As a consequence of this autocorrelation property and the constant amplitude, the sequence of the DFT coefficients {A(f)} of {a(k)} is also a CAZAC sequence. This can be seen from the frequency-domain definition of the periodic autocorrelation, given by θaa(p)=Σf=0L-1Λ*(f)Λ(f)WL−pf(15) which is zero for any 0<|p|<L only if |Λ(f)|=constant=|a(k)| for 0≤f<L. A CAZAC sequence fa(k)) is called modulatable if it can be represented as a product of two sequences, a specific carrier sequence {χ(k)} of length L=At and an arbitrary, shorter and t times periodically extended, modulation sequence {η(l)} of length A, i.e. when a(k)=χ(k)η(kmodA),k=0,1, . . . 
, L−1, L=At. (16) The DFT sequence {Λ(f)} can thus be described as Λ(f) = (1/√L)·Σ_{k=0}^{L−1} a(k)·W_L^{fk} = (1/√L)·Σ_{l=0}^{A−1} W_L^{fl}·Σ_{i=0}^{t−1} η(l)·χ(Ai+l)·W_t^{fi} = (1/√L)·Σ_{l=0}^{A−1} W_L^{fl}·Σ_{i=0}^{t−1} a(Ai+l)·W_t^{fi}. (17) To have |Λ(f)|=constant=|a(k)| for 0≤f<L and an arbitrary constant-amplitude sequence {η(l)}, the sum over l in Eq. (17) must have only one non-zero element for any f. This is possible only if the sum over i is non-zero, with absolute value equal to |Λ(f)|, for a single value of l for a given value of f. If the mCAZAC sequence {a(k)} has unit amplitude, then from Eq. (17) it follows that the non-zero absolute value of the sum over i is equal to √L. This conclusion is summarized in the following Lemma: Lemma: If a sequence {a(k)} is a mCAZAC sequence (16) of unit amplitude, then |Σ_{i=0}^{t−1} a(Ai+l)·W_t^{fi}| = √L for l=l̂, and 0 for l≠l̂, (18) where l̂ is one and only one value of l among {0, 1, . . . , A−1} for a given value of f. This Lemma is used further below to prove the 0 dB PAPR property of the proposed SRSs, for an arbitrary size δ of the interlacing subbands and an arbitrary number A≤δ of allowed subcarriers, as long as the long sequence {a(k)} of length L=At is a modulatable CAZAC sequence, where t is a multiple (including 1) of A. All known modulatable CAZAC sequences are of a length that can be expressed as an integer multiple of another squared integer. It means that for a given t any A can be selected that is a factor of t, i.e. t=sA, and the corresponding mCAZAC sequence {a(k)} of length L=At=sA² can be generated. The earliest construction of mCAZAC sequences produces modulatable Frank sequences of length L=A². The mCAZAC sequences of length L=sA² have been constructed as a class of generalized chirp-like (GCL) sequences obtained by modulating ZC sequences of length L=sA², i.e. by using Eq. (16) where the carrier sequence {χ(k)} is a ZC sequence of length L=sA².
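Under the stated assumptions (a unit-amplitude mCAZAC long sequence with t a multiple of A), the Lemma can be checked numerically. The sketch below uses a modulatable Frank sequence with t=A and an arbitrary illustrative permutation as the mCAZAC sequence; these parameter choices are assumptions for the example only.

```python
import cmath

# Numerical check of the Lemma in Eq. (18) for a modulatable Frank sequence
# a(iA + l) = W_A^{mu(l) * i} (t = A, so L = A^2). The permutation mu is an
# arbitrary illustrative choice.
A = 4
t = A
L = A * t
mu = [2, 0, 3, 1]
W_t = cmath.exp(-1j * 2 * cmath.pi / t)
a = [W_t ** (mu[u % A] * (u // A)) for u in range(L)]

for f in range(t):
    hits = 0
    for l in range(A):
        s = sum(a[A * i + l] * W_t ** (f * i) for i in range(t))
        if abs(s) > 1e-9:
            assert abs(abs(s) - L ** 0.5) < 1e-9   # |sum over i| = sqrt(L)
            hits += 1
    assert hits == 1   # exactly one l-hat for each value of f
```

For the Frank case the inner sum is a geometric series that collapses to t exactly when μ(l)+f ≡ 0 (mod t), so the single "hit" per f is the index l̂ = μ⁻¹((−f) mod t).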
GCL sequences with minimum alphabets have been derived, where it has been shown that the aforementioned modulatable Frank sequences are a special case of GCL sequences. The GCL sequences have been further generalized in the so-called unified construction of mCAZAC sequences, given by {a(iA+l)=η(l)·W_L^{E(iA+l)} | i=0, 1, . . . , sA−1, l=0, 1, . . . , A−1}, E(iA+l) = (A(s+1)/2)·[r0+n0·l(l+1)/2]·i² + A·[r1·μ(l)+n1]·i, with (s+1)·n0≡0 (mod 2), (r0+n0·l(l+1)/2, s)=1, (r1, A)=1, (19) where r0, n0, r1 and n1 are integers, and μ(·) is a permutation over the set {0, 1, . . . , A−1}. The construction in Eq. (19) can be further generalized and at the same time simplified as {a(iA+l)=η(l)·W_L^{E(iA+l)} | i=0, 1, . . . , sA−1, l=0, 1, . . . , A−1}, E(iA+l) = (A(s+1)/2)·r0(l)·i² + A·[r1·μ(l)+n1]·i, with (r0(l), s)=1, (r1, A)=1, (20) where r0(l), r1 and n1 are integers, and μ(·) is a permutation over the set {0, 1, . . . , A−1}. For example, the previously mentioned long sequence {a(u)=WA^(μ(u mod A)·⌊u/A⌋) | u=0, 1, . . . , L−1}, of length L=A², is actually the generalized modulatable Frank sequence, which can be obtained from Eq. (20) as a special case for s=1, r0=n0=n1=0, r1=1. If the permutation μ(·) is equal to {0, 1, . . . , A−1}, modulatable Frank sequences are obtained. Step 3: Fourier Coefficients A sequence of N Fourier coefficients is obtained as S̃m,q,n(f) = cq,n(Ai+l) for f=δi+jm,l, l∈Ω̄m; and 0 otherwise. (21) As there are B disjoint interlaces Ω̄m (m=0, 1, . . . , B−1), up to B sets of non-orthogonal SRSs can be constructed using the same set of modulation sequences {cq,n(u)}. Since different interlaces are orthogonal to each other by definition, any two SRSs from different interlaces have zero periodic cross-correlation, and can thus be transmitted in parallel in the same cell without introducing interference to each other. Step 4: Time-Domain Cyclic Shifts The Fourier coefficients {S̃m,q,n(f)} are modified into {Sm,q,n(f)}, such that Sm,q,n(f)=W_N^{Δq·f}·S̃m,q,n(f), f=0, 1, . . . , N−1, q=0, 1, . . .
, Q−1, (22) where WN=exp(−√(−1)·2π/N) and 0≤Δq<t is the phase rotation increment for all SRSs in the q-th orthogonal SRS subset. Without loss of generality, it is assumed that Δ0=0<Δ1< . . . <ΔQ−1<t. (23) The purpose of adding phase rotations to the Fourier coefficients is to achieve different desired time-domain cyclic shifts for the SRSs. When multiple such SRSs are concurrently transmitted with aligned arrival times (for example, via the TA mechanism described earlier), the above phase rotation operation guarantees that the periodic cross-correlation between any two SRSs from two different orthogonal SRS subsets qx and qy has non-zero values only at delays p=|Δx−Δy|+tφ, φ=0, 1, . . . , δ−1, as will be shown below. As all the SRSs in the q-th orthogonal subset are cyclically shifted by the same time offset Δq, the pairwise ZCCZ of SRSs in the same orthogonal SRS subset will not be affected by the time offset. In one embodiment, the phase rotation step Δq is taken to be Δq=q·⌊t/Q⌋, q=0, 1, . . . , Q−1. (24) In another embodiment, the phase rotation step Δq is taken to be Δq=⌊qt/Q⌋, q=0, 1, . . . , Q−1. (25) Step 5: Time-Domain SRSs The time-domain SRS {sm,q,n(k)} is obtained by the IDFT of its corresponding Fourier coefficients {Sm,q,n(f)}, yielding sm,q,n(k) = (1/√N)·Σ_{f=0}^{N−1} Sm,q,n(f)·W_N^{−kf}, q=0, 1, . . . , Q−1, n=0, 1, . . . , A−1, k=0, 1, . . . , N−1, (26) where the scaling constant 1/√N ensures that the sequence {Sm,q,n(f)} and the corresponding SRS {sm,q,n(k)} have the same energy E=L. Alternative Implementations The first communication device 100 may in embodiments be configured to perform the steps of the first communication device 100 according to the fifth or sixth aspect of the invention disclosed in the summary, i.e. obtain the time domain signal according to the invention using alternative steps, as will be further described later. The corresponding alternative implementations for embodiments where the set of time domain signals is a set of SRSs will now be described.
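The mechanism behind Step 4 can be illustrated numerically: multiplying the Fourier coefficients by W_N^{Δq·f} before the IDFT of Step 5 is equivalent to cyclically shifting the un-rotated time-domain signal by Δq samples. In the sketch below N, Δq and the coefficients are arbitrary illustrative values, and the direction of the shift depends on the chosen sign convention of the DFT kernel.

```python
import cmath

# Sketch: a linear phase rotation W_N^{Delta*f} of the Fourier coefficients
# (Step 4) is a cyclic shift of the IDFT output (Step 5) by Delta samples.
# N, Delta and the coefficients S are illustrative assumptions.
N, Delta = 8, 3
W_N = cmath.exp(-1j * 2 * cmath.pi / N)
S = [complex(f + 1, -f) for f in range(N)]       # arbitrary Fourier coefficients

def idft(X):
    n = len(X)
    return [sum(X[f] * cmath.exp(1j * 2 * cmath.pi * f * k / n) for f in range(n)) / n ** 0.5
            for k in range(n)]

s_plain = idft(S)                                            # Step 5 without Step 4
s_rot = idft([W_N ** (Delta * f) * S[f] for f in range(N)])  # Steps 4 + 5

# With this sign convention the rotated signal equals the plain signal
# cyclically shifted by Delta samples
for k in range(N):
    assert abs(s_rot[k] - s_plain[(k - Delta) % N]) < 1e-9
```

This is exactly why the two orderings in the alternative implementations below (rotate then IDFT, or IDFT then cyclic shift) produce the same set of time-domain SRSs.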
In one equivalent implementation, the order of step 4 and step 5 may be switched, so that in a new step 4 the Fourier coefficients {S̃m,q,n(f)} are converted to the time-domain sequence {s̃m,q,n(k) | k=0, 1, . . . , N−1} as s̃m,q,n(k) = (1/√N)·Σ_{f=0}^{N−1} S̃m,q,n(f)·W_N^{−kf}, q=0, 1, . . . , Q−1, n=0, 1, . . . , A−1, k=0, 1, . . . , N−1. (27) Then, in a new step 5, all the time-domain sequences {s̃m,q,n(k)} in the q-th orthogonal SRS subset are cyclically shifted by the same time offset of Δq samples to obtain the time-domain SRSs, i.e. sm,q,n(k)=s̃m,q,n((k+Δq) mod N), (28) q=0, 1, . . . , Q−1, n=0, 1, . . . , A−1, k=0, 1, . . . , N−1. In another equivalent implementation of the SRS generation, step 4 can be eliminated by an alternative implementation of step 2, i.e. included in the generation of the modulation sequence {cq,n(u)}. This can be seen by substituting Eq. (14) and (21) into Eq. (22) to obtain Sm,q,n(f) = W_N^{Δq·f}·S̃m,q,n(f) = (W_N^{Δq·jm,l}·bq,n(l))·(W_t^{Δq·i}·a(Ai+l)) for f=δi+jm,l; and 0 otherwise. (29) Obviously, Eq. (29) could be obtained just from the concatenation of step 2 and step 5, if the sequence {cq,n(u)} is redefined in step 2 as cq,n(u)=b̃q,n(u mod A)·aq(u), u=0, 1, . . . , L−1, (30) where b̃q,n(l)=W_N^{Δq·jm,l}·bq,n(l), aq(u)=W_t^{Δq·⌊u/A⌋}·a(u), and bq,n(l) and a(u) have the same definitions as those in Eq. (14). In the following subsections the properties of the proposed SRSs will be discussed, and the correlation and PAPR properties of the proposed SRSs will be proven. Periodic Autocorrelation Starting from the frequency-domain definition of the periodic autocorrelation function of an SRS {sm,q,n(k)}, the periodic autocorrelation is obtained as θsm,q,n,sm,q,n(p) = Σ_{f=0}^{N−1} S*m,q,n(f)·Sm,q,n(f)·W_N^{−pf} = Σ_{l=0}^{A−1} W_N^{−p·jm,l}·Σ_{i=0}^{t−1} W_t^{−pi} = E for p=0; 0 for p≢0 (mod t); and t·Σ_{l=0}^{A−1} W_δ^{−φ·jm,l} for p=tφ, φ=1, . . . , δ−1. (31) From Eq. (31) it is concluded that the periodic autocorrelation function has a ZAZ of length at least DZAZ=t−1, which is the case when Σ_{l=0}^{A−1} W_δ^{−φ·jm,l}≠0.
However, this sum could be equal to zero in some special cases of jm,l. For example, if A and δ can be decomposed as products of integers, i.e. if A=A′σ and δ=A′σB, then jm,l can be defined as jm,l = (δ/σ)·i′ + j′m,l′, l=A′i′+l′, A′=A/σ, i′=0, 1, . . . , σ−1, l′=0, 1, . . . , A′−1. (32) From Eq. (31) it follows that Σ_{l=0}^{A−1} W_δ^{−φ·jm,l} = Σ_{l′=0}^{A′−1} W_δ^{−φ·j′m,l′} · Σ_{i′=0}^{σ−1} W_σ^{−φ·i′} = 0, for φ≢0 (mod σ), i.e. p≢0 (mod tσ). (33) From Eq. (33) it can be seen that the periodic autocorrelation function has a ZAZ of length DZAZ=tσ−1 if Eq. (32) holds, even if the short sequences {bq,n(l)} are of length A. Periodic Cross-Correlation For any two SRSs sm,q=x,n=w(k) and sm,q=y,n=v(k), θsm,y,v,sm,x,w(p≥0) = (Σ_{l=0}^{A−1} b*x,w(l)·by,v(l)·W_N^{(Δy−Δx−p)·jm,l}) · (Σ_{i=0}^{t−1} W_t^{(Δy−Δx−p)·i}) = t·Σ_{l=0}^{A−1} b*x,w(l)·by,v(l)·W_N^{(Δy−Δx−p)·jm,l} for p≡Δy−Δx (mod t); and 0 for p≢Δy−Δx (mod t), (34) assuming Δy≥Δx without loss of generality. For negative delays −p the identity θrs(−p)=θ*sr(p), p>0, can be used: θsm,y,v,sm,x,w(p′=−p<0) = (Σ_{l=0}^{A−1} b*x,w(l)·by,v(l)·W_N^{(Δy−Δx+p)·jm,l}) · (Σ_{i=0}^{t−1} W_t^{(Δy−Δx+p)·i}). (35) The summation over i is non-zero either for p≡−(Δy−Δx) (mod t), which is negative, or for p≡t−(Δy−Δx) (mod t), which is positive. As p in Eq. (35) is positive, θsm,y,v,sm,x,w(−p) = t·Σ_{l=0}^{A−1} b*x,w(l)·by,v(l)·W_N^{(Δy−Δx+p)·jm,l} for p≡t−(Δy−Δx) (mod t); and 0 for p≢t−(Δy−Δx) (mod t). (36) From Eq. (34) and (36) it is obvious that the periodic cross-correlation θsm,y,v,sm,x,w(p) has zero value for a range of consecutive delays p around and including p=0. This range of delays is bounded on the positive side (p≥0) at the distance (Δy−Δx) and on the negative side (p<0) at the distance t−(Δy−Δx). As the ZCCZ is defined according to (7) as symmetrical around p=0, the ZCCZ length in the periodic cross-correlation function between any two SRSs sm,q=x,n=w(k) and sm,q=y,n=v(k) is given by (assuming Δy>Δx) DZCCZ = min{min_{y>x}{Δy−Δx}, min_{y>x}{t−(Δy−Δx)}} − 1 = min{X, Y} − 1, (37) where X = min_{0≤q≤Q−2}{Δq+1−Δq}, (38) Y = t−(ΔQ−1−Δ0), (39) and where the inequality Δy−Δx≥Δq+1−Δq, which follows from Eq. (23), is used when y>x.
Now min{X, Y} can be found for each of the two constructions of Δq. When Δq=q·⌊t/Q⌋, obviously X=⌊t/Q⌋, while Y=t−(Q−1)·⌊t/Q⌋. Thus, the difference Z=Y−X is given by Z = t−Q·⌊t/Q⌋ ≥ 0, (40) which means that min{X, Y}=X=⌊t/Q⌋. When Δq=⌊qt/Q⌋, writing t=Q·⌊t/Q⌋+(t mod Q) gives X = min_{0≤q≤Q−2}{⌊(q+1)t/Q⌋−⌊qt/Q⌋} = ⌊t/Q⌋ + min_{0≤q≤Q−2}{⌊(q+1)(t mod Q)/Q⌋−⌊q(t mod Q)/Q⌋}. (41) Then ⌊q(t mod Q)/Q + (t mod Q)/Q⌋ − ⌊q(t mod Q)/Q⌋ ≤ ⌊(t mod Q)/Q⌋ + 1 = 1, (42) where the right-hand side of Eq. (42) is obtained by first applying the well-known inequality ⌊x⌋+⌊y⌋ ≤ ⌊x+y⌋ ≤ ⌊x⌋+⌊y⌋+1 to the left-hand side of Eq. (42), and then using the fact that (t mod Q)≤Q−1, which implies that ⌊(t mod Q)/Q⌋=0. Therefore, the term ⌊(q+1)(t mod Q)/Q⌋−⌊q(t mod Q)/Q⌋ in Eq. (41) is either 0 or 1, meaning that its minimum value is 0. Thus, from Eq. (41) and (42) it is obtained that X=⌊t/Q⌋. As Y = t−⌊(Q−1)t/Q⌋, Z=Y−X is given by Z = t−⌊(Q−1)t/Q⌋−⌊t/Q⌋ ≥ t−(Q−1)t/Q−t/Q = 0, (43) which means that min{X, Y}=X=⌊t/Q⌋. Therefore, for both constructions of Δq, DZCCZ = ⌊t/Q⌋−1. (44) In the special case when y=x and w≠v, i.e. when the observed SRSs are in the same orthogonal SRS subset, Eq. (34) and (36) reduce to the same formula, given by θsm,x,v,sm,x,w(p) = t·Σ_{l=0}^{A−1} b*x,w(l)·bx,v(l)·W_N^{−p·jm,l} for p≡0 (mod t); and 0 for p≢0 (mod t). (45) If the short sequences {bx,w(l)} and {bx,v(l)} (w≠v) are orthogonal, it follows from Eq. (45) that the corresponding SRSs will also be orthogonal, i.e. θsm,x,v,sm,x,w(p=0)=0. For p=φt, φ=1, . . . , δ−1, θsm,x,v,sm,x,w(p=φt) = t·Σ_{l=0}^{A−1} b*x,w(l)·bx,v(l)·W_δ^{−φ·jm,l}, (46) which might not be equal to zero, and therefore it is concluded that the cross-correlation has a ZCCZ of length DZCCZ≥t−1. The minimum ZCCZ length, t−1, is the same as the minimum ZAZ length. The pairwise cross-correlations, given either by Eq. (34) and (36) or by Eq. (45), are equally sparse, i.e. they have only δ (100/t percent) non-zero values.
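The correlation properties derived above can be checked end-to-end on small, illustrative parameters. The sketch below (all values are assumptions chosen for the example: Q=2, Δ0=0, Δ1=⌊t/Q⌋, Frank-type long sequence, simple constant-envelope short sequences on interlace m=0) verifies the ZAZ of length t−1, the ZCCZ of Eq. (44), and the constant-envelope property mentioned earlier.

```python
import cmath

# Sketch checking the ZAZ of Eq. (31) and the ZCCZ of Eq. (44) on small
# illustrative parameters: two SRSs on the same interlace, from two different
# orthogonal subsets (Q = 2), with Delta_0 = 0 and Delta_1 = floor(t/Q).
A, t, Q = 4, 4, 2
delta = 2 * A
N = delta * t
L = A * t
W_t = cmath.exp(-1j * 2 * cmath.pi / t)
W_N = cmath.exp(-1j * 2 * cmath.pi / N)
mu = [1, 3, 0, 2]
a = [W_t ** (mu[u % A] * (u // A)) for u in range(L)]    # Frank-type long sequence
shorts = [[1, 1, 1, 1], [1, 1j, -1, -1j]]                # constant-envelope b's

def srs(b, delta_q):
    S = [0j] * N
    for i in range(t):
        for l in range(A):
            S[delta * i + 2 * l] = b[l] * a[A * i + l]   # Eq. (21), interlace m = 0
    S = [W_N ** (delta_q * f) * S[f] for f in range(N)]  # Step 4
    return [sum(S[f] * cmath.exp(1j * 2 * cmath.pi * f * k / N)
                for f in range(N)) / N ** 0.5 for k in range(N)]

def pcorr(x, y, p):
    return sum(x[(k + p) % N] * y[k].conjugate() for k in range(N))

s0 = srs(shorts[0], 0)
s1 = srs(shorts[1], t // Q)

for p in range(1, t):                     # ZAZ of length t - 1
    assert abs(pcorr(s0, s0, p)) < 1e-8
zccz = t // Q - 1                         # Eq. (44)
for p in range(-zccz, zccz + 1):          # ZCCZ around p = 0
    assert abs(pcorr(s1, s0, p)) < 1e-8
assert all(abs(abs(x) - (A / delta) ** 0.5) < 1e-9 for x in s0)   # 0 dB PAPR
```

The last assertion previews the PAPR result proven in the next subsection: with a modulatable CAZAC long sequence, every time-domain sample has magnitude √(A/δ).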
Peak-to-Average-Power Ratio (PAPR) The proposed SRSs have 0 dB PAPR, for an arbitrary size δ of the interlacing subbands and an arbitrary number A≤δ of allowed subcarriers, as long as the long sequence {a(k)} of length L=At is a modulatable CAZAC sequence, where t is a multiple (including 1) of A. To see that, let the SRS {sm,q=x,n=w(k)} be expressed as sm,x,w(k) = (1/√N)·Σ_{l=0}^{A−1} W_N^{(Δx−k)·jm,l}·bx,w(l)·Σ_{i=0}^{t−1} W_t^{(Δx−k)·i}·a(iA+l), k=0, 1, . . . , N−1. (47) According to Lemma (18), if the sequence {a(k)} is a mCAZAC sequence of unit amplitude, then the sum over i is non-zero, with absolute value equal to √L=√(At), only for a single value of l for a given value of (Δx−k). Therefore, it is obtained that |sm,x,w(k)|=√(A/δ), k=0, 1, . . . , N−1. (48) The following subsections discuss the deployment of the invention in 3GPP NR systems. Setting the CIR Detection Time Window The setting of the CIR detection time window of length D at the receiver (e.g. the BS) has been briefly discussed in the background, where it was stated that D≥min{DZAZ, DZCCZ}≥C−1. As shown in the previous section, DZAZ=t−1, while DZCCZ can have two possible values: DZCCZnon-orth=⌊t/Q⌋−1 and DZCCZorth=t−1. To be able to deploy both of them, the length N=δt of the SRSs is selected using t such that DZAZ=t−1≥C−1, where C is the maximum expected length of the channel impulse response. The BS is the one which allocates the SRSs to the active UEs and is also the one that knows which of them are concurrently active. Depending on whether the concurrently active UEs are assigned SRSs from the same orthogonal subset, or from multiple orthogonal subsets, the BS can select D to be either D=min{t−1, t−1}=t−1, or D=min{t−1, ⌊t/Q⌋−1}=⌊t/Q⌋−1. Obviously, if all SRSs are from the same orthogonal subset, all the corresponding CIRs of maximum length C can be estimated without interference. If the SRSs are from different orthogonal SRS subsets, only ⌊t/Q⌋ coefficients of each CIR can be estimated.
If none of the concurrent propagation channel realizations has a channel impulse response longer than ⌊t/Q⌋, all the related CIRs can be estimated without interference. However, if some of the concurrent propagation channel realizations have channel impulse responses of length C>⌊t/Q⌋, the estimated CIR samples of the observed UE will be interfered by the tail (the C−⌊t/Q⌋ latest coefficients) of the CIR of an interfering UE. This might not have a detrimental effect on the estimated CIR, as the CIR power is mostly carried by its head rather than by its tail. These two cases are illustrated in FIGS. 8a-b. FIGS. 8a-b illustrate the CIR estimation when an interfering non-orthogonal SRS is received with the same arrival time as the target SRS at the BS, where Q=2. The graph on the left in FIGS. 8a-b illustrates the CIR of the propagation channel experienced by the interfering non-orthogonal SRS, and the graph on the right in FIGS. 8a-b illustrates the matched-filter output for the estimation of the propagation channel experienced by the target SRS, where the matched-filter output comprises replicas of both the target CIR (solid curves) and the interfering CIR (dashed curves). FIG. 8a illustrates the case where C≤⌊t/Q⌋. In this case, none of the propagation channels experienced by concurrent SRSs has a CIR length longer than ⌊t/Q⌋ and hence no part of the target CIR is interfered by the interfering CIR, as shown in the right graph of FIG. 8a. Thus, the target CIR can be estimated without interference. FIG. 8b illustrates the case where ⌊t/Q⌋<C≤t. In this case, the head and tail of the target CIR suffer from, respectively, the interference from the tail and head of a certain replica of the interfering CIR, as shown in the right graph of FIG. 8b.
Since in practice the CIR power is mostly carried by its head rather than by its tail, the head of the target CIR can be estimated with only marginal interference from the tail of an interfering CIR, while the tail of the target CIR can simply be ignored and not estimated, which will not have a detrimental effect on the estimation of the target CIR.
Setting of Other Parameters The SRS design disclosed in this invention can be readily adapted to current 3GPP NR systems by taking proper values for the related parameters. In one embodiment of the invention, the size of an interlacing subband is taken to be δ=16 and the length of the short sequence is taken to be A=8, with the 8 subcarriers in each gPRB being every second subcarrier in each interlacing subband. Consequently, the number of interlacing subbands is taken to be t=N/16, depending on the SRS bandwidth. The advantage of such a parameter setting is that it results in the same frequency resource location and the same number of orthogonal SRSs as Comb-2 in 3GPP NR. For example, the values of these parameters are given in Table 1 below, where the values of the SRS bandwidth are those supported in 3GPP NR systems. Table 1 shows examples of parameter settings that result in the same frequency-domain location and the same number of orthogonal SRSs as Comb-2 in 3GPP NR.
TABLE 1
SRS bandwidth (# of PRBs) | δ | A | t
8 | 16 | 8 | 6
12 | 16 | 8 | 9
16 | 16 | 8 | 12
20 | 16 | 8 | 15
24 | 16 | 8 | 18
28 | 16 | 8 | 21
32 | 16 | 8 | 24
36 | 16 | 8 | 27
40 | 16 | 8 | 30
44 | 16 | 8 | 33
48 | 16 | 8 | 36
52 | 16 | 8 | 39
56 | 16 | 8 | 42
60 | 16 | 8 | 45
64 | 16 | 8 | 48
68 | 16 | 8 | 51
72 | 16 | 8 | 54
76 | 16 | 8 | 57
80 | 16 | 8 | 60
84 | 16 | 8 | 63
88 | 16 | 8 | 66
92 | 16 | 8 | 69
96 | 16 | 8 | 72
104 | 16 | 8 | 78
108 | 16 | 8 | 81
112 | 16 | 8 | 84
120 | 16 | 8 | 90
128 | 16 | 8 | 96
132 | 16 | 8 | 99
136 | 16 | 8 | 102
144 | 16 | 8 | 108
152 | 16 | 8 | 114
160 | 16 | 8 | 120
168 | 16 | 8 | 126
176 | 16 | 8 | 132
184 | 16 | 8 | 138
192 | 16 | 8 | 144
208 | 16 | 8 | 156
216 | 16 | 8 | 162
224 | 16 | 8 | 168
240 | 16 | 8 | 180
256 | 16 | 8 | 192
264 | 16 | 8 | 198
272 | 16 | 8 | 204
In one embodiment of the invention, the size of an interlacing subband is taken to be δ=24 and the length of the short sequence is taken to be A=12, with the 12 subcarriers in each gPRB being every second subcarrier in each interlacing subband.
Consequently, the number of interlacing subbands is taken to be t=N/24, depending on the SRS bandwidth. The advantage of such a parameter setting is that it results in the same frequency resource location as Comb-2 in 3GPP NR. For example, the values of these parameters are given in Table 2 below, where the values of the SRS bandwidth are those supported in 3GPP NR systems. Table 2 shows examples of parameter settings that result in the same frequency-domain location as Comb-2 in 3GPP NR.
TABLE 2
SRS bandwidth (# of PRBs) | δ | A | t
8 | 24 | 12 | 4
12 | 24 | 12 | 6
16 | 24 | 12 | 8
20 | 24 | 12 | 10
24 | 24 | 12 | 12
28 | 24 | 12 | 14
32 | 24 | 12 | 16
36 | 24 | 12 | 18
40 | 24 | 12 | 20
44 | 24 | 12 | 22
48 | 24 | 12 | 24
52 | 24 | 12 | 26
56 | 24 | 12 | 28
60 | 24 | 12 | 30
64 | 24 | 12 | 32
68 | 24 | 12 | 34
72 | 24 | 12 | 36
76 | 24 | 12 | 38
80 | 24 | 12 | 40
84 | 24 | 12 | 42
88 | 24 | 12 | 44
92 | 24 | 12 | 46
96 | 24 | 12 | 48
104 | 24 | 12 | 52
108 | 24 | 12 | 54
112 | 24 | 12 | 56
120 | 24 | 12 | 60
128 | 24 | 12 | 64
132 | 24 | 12 | 66
136 | 24 | 12 | 68
144 | 24 | 12 | 72
152 | 24 | 12 | 76
160 | 24 | 12 | 80
168 | 24 | 12 | 84
176 | 24 | 12 | 88
184 | 24 | 12 | 92
192 | 24 | 12 | 96
208 | 24 | 12 | 104
216 | 24 | 12 | 108
224 | 24 | 12 | 112
240 | 24 | 12 | 120
256 | 24 | 12 | 128
264 | 24 | 12 | 132
272 | 24 | 12 | 136
In one embodiment of the invention, the length of the short sequence is taken to be A=⌊√(N/2s)⌋ for a proper positive integer s, and the size of an interlacing subband is taken to be δ=2A, with the subcarriers of each gPRB being every second subcarrier in each interlacing subband. Consequently, the number of interlacing subbands is taken to be t=[N/δ]≈sA, depending on the SRS bandwidth, where [·] is the rounding operation to the closest integer. In this case, the length-L (L=At) modulating sequence may need to be properly shortened or periodically extended to length N/2 to fit the available SRS bandwidth. The advantage of such a parameter setting is that it results in the same frequency resource location as Comb-2 in 3GPP NR, and at the same time the resulting SRSs have low PAPR. For example, the values of these parameters are given in Table 3 below with s=1, where the values of the SRS bandwidth are those supported in 3GPP NR systems. Table 3 shows examples of parameter settings that result in the same frequency-domain location as Comb-2 in 3GPP NR and low-PAPR SRSs.
TABLE 3
SRS bandwidth (# of PRBs) | δ | A | t
8 | 14 | 7 | 7
12 | 16 | 8 | 9
16 | 18 | 9 | 11
20 | 20 | 10 | 12
24 | 24 | 12 | 12
28 | 24 | 12 | 14
32 | 24 | 12 | 16
36 | 28 | 14 | 15
40 | 30 | 15 | 16
44 | 32 | 16 | 17
48 | 32 | 16 | 18
52 | 34 | 17 | 18
56 | 36 | 18 | 19
60 | 36 | 18 | 20
64 | 38 | 19 | 20
68 | 40 | 20 | 20
72 | 40 | 20 | 22
76 | 42 | 21 | 22
80 | 42 | 21 | 23
84 | 44 | 22 | 23
88 | 44 | 22 | 24
92 | 46 | 23 | 24
96 | 48 | 24 | 24
104 | 48 | 24 | 26
108 | 50 | 25 | 26
112 | 50 | 25 | 27
120 | 52 | 26 | 28
128 | 54 | 27 | 28
132 | 56 | 28 | 28
136 | 56 | 28 | 29
144 | 58 | 29 | 30
152 | 60 | 30 | 30
160 | 60 | 30 | 32
168 | 62 | 31 | 32
176 | 64 | 32 | 33
184 | 66 | 33 | 33
192 | 66 | 33 | 35
208 | 70 | 35 | 36
216 | 72 | 36 | 36
224 | 72 | 36 | 37
240 | 74 | 37 | 39
256 | 76 | 38 | 39
264 | 78 | 39 | 41
272 | 80 | 40 | 40
In one embodiment of the invention, the size of an interlacing subband is taken to be δ=48 and the length of the short sequence is taken to be A=12, with the 12 subcarriers in each gPRB being every fourth subcarrier in each interlacing subband. Consequently, the number of interlacing subbands is taken to be t=N/48, depending on the SRS bandwidth. The advantage of such a parameter setting is that it results in the same frequency resource location and the same number of orthogonal SRSs as Comb-4 in 3GPP NR. For example, the values of these parameters are given in Table 4 below, where the values of the SRS bandwidth are those supported in 3GPP NR systems. Table 4 shows examples of parameter settings that result in the same frequency-domain location as Comb-4 in 3GPP NR.
TABLE 4
SRS bandwidth (# of PRBs) | δ | A | t
12 | 48 | 12 | 3
16 | 48 | 12 | 4
20 | 48 | 12 | 5
24 | 48 | 12 | 6
28 | 48 | 12 | 7
32 | 48 | 12 | 8
36 | 48 | 12 | 9
40 | 48 | 12 | 10
44 | 48 | 12 | 11
48 | 48 | 12 | 12
52 | 48 | 12 | 13
56 | 48 | 12 | 14
60 | 48 | 12 | 15
64 | 48 | 12 | 16
68 | 48 | 12 | 17
72 | 48 | 12 | 18
76 | 48 | 12 | 19
80 | 48 | 12 | 20
84 | 48 | 12 | 21
88 | 48 | 12 | 22
92 | 48 | 12 | 23
96 | 48 | 12 | 24
104 | 48 | 12 | 26
108 | 48 | 12 | 27
112 | 48 | 12 | 28
120 | 48 | 12 | 30
128 | 48 | 12 | 32
132 | 48 | 12 | 33
136 | 48 | 12 | 34
144 | 48 | 12 | 36
152 | 48 | 12 | 38
160 | 48 | 12 | 40
168 | 48 | 12 | 42
176 | 48 | 12 | 44
184 | 48 | 12 | 46
192 | 48 | 12 | 48
208 | 48 | 12 | 52
216 | 48 | 12 | 54
224 | 48 | 12 | 56
240 | 48 | 12 | 60
256 | 48 | 12 | 64
264 | 48 | 12 | 66
272 | 48 | 12 | 68
In one embodiment of the invention, the length of the short sequence is taken to be A=⌊√(N/4s)⌋ for a proper positive integer s, and the size of an interlacing subband is taken to be δ=4A, with the subcarriers of each gPRB being every fourth subcarrier in each interlacing subband. Consequently, the number of interlacing subbands is taken to be t=[N/δ]≈sA, depending on the SRS bandwidth.
In this case, the length-L (L=At) modulating sequence may need to be properly shortened or periodically extended to length N/4 to fit the available SRS bandwidth. The advantage of such a parameter setting is that it results in the same frequency resource location as Comb-4 in 3GPP NR, and at the same time the resulting SRSs have low PAPR. For example, the values of these parameters are given in Table 5 below with s=1, where the values of the SRS bandwidth are those supported in 3GPP NR systems. Table 5 shows examples of parameter settings that result in the same frequency-domain location as Comb-4 in 3GPP NR and low-PAPR SRSs.
TABLE 5
SRS bandwidth (# of PRBs) | δ | A | t
12 | 24 | 6 | 6
16 | 24 | 6 | 8
20 | 28 | 7 | 9
24 | 32 | 8 | 9
28 | 36 | 9 | 9
32 | 36 | 9 | 11
36 | 40 | 10 | 11
40 | 40 | 10 | 12
44 | 44 | 11 | 12
48 | 48 | 12 | 12
52 | 48 | 12 | 13
56 | 48 | 12 | 14
60 | 52 | 13 | 14
64 | 52 | 13 | 15
68 | 56 | 14 | 15
72 | 56 | 14 | 15
76 | 60 | 15 | 15
80 | 60 | 15 | 16
84 | 60 | 15 | 17
88 | 64 | 16 | 16
92 | 64 | 16 | 17
96 | 64 | 16 | 18
104 | 68 | 17 | 18
108 | 72 | 18 | 18
112 | 72 | 18 | 19
120 | 72 | 18 | 20
128 | 76 | 19 | 20
132 | 76 | 19 | 21
136 | 80 | 20 | 20
144 | 80 | 20 | 22
152 | 84 | 21 | 22
160 | 84 | 21 | 23
168 | 88 | 22 | 23
176 | 88 | 22 | 24
184 | 92 | 23 | 24
192 | 96 | 24 | 24
208 | 96 | 24 | 26
216 | 100 | 25 | 26
224 | 100 | 25 | 27
240 | 104 | 26 | 28
256 | 108 | 27 | 28
264 | 112 | 28 | 28
272 | 112 | 28 | 29
SRS Transmission Over Multiple Consecutive OFDM Symbols In 3GPP NR systems, an SRS may be repeated multiple times to span a number G of consecutive OFDM symbols within each SRS period during the transmission. The purpose of SRS transmission over multiple consecutive OFDM symbols is to achieve a higher accumulated receive power of the transmitted SRS and, in turn, a higher channel estimation quality at the BS, which is advantageous for cell-edge UEs with large propagation loss between them and the BS. For example, the BS can perform channel estimation independently based on each received duplicate of the transmitted SRS. Although the accuracy of each estimated CIR may be low due to a low received signal-to-noise ratio (SNR), this accuracy can be enhanced by taking the average of the estimated CIRs from these multiple received SRS duplicates within one SRS period.
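The parameter settings tabulated above follow simple rules derived from the SRS bandwidth. The sketch below (assuming 12 subcarriers per PRB, as in NR, and nearest-integer rounding for the low-PAPR rule, which matches the tabulated rows) reproduces sample rows of Tables 1, 3 and 4.

```python
import math

# Sketch of the parameter rules behind Tables 1-5 above (12 subcarriers per
# PRB assumed, as in NR): each rule maps an SRS bandwidth in PRBs to
# (delta, A, t).
def comb2_fixed(prbs):            # Table 1 rule: delta = 16, A = 8, t = N/16
    N = 12 * prbs
    return (16, 8, N // 16)

def comb4_fixed(prbs):            # Table 4 rule: delta = 48, A = 12, t = N/48
    N = 12 * prbs
    return (48, 12, N // 48)

def comb2_low_papr(prbs, s=1):    # Table 3 rule: A ~ sqrt(N/2s), delta = 2A
    N = 12 * prbs
    A = round(math.sqrt(N / (2 * s)))
    delta = 2 * A
    return (delta, A, round(N / delta))

assert comb2_fixed(8) == (16, 8, 6)            # first row of Table 1
assert comb2_fixed(272) == (16, 8, 204)        # last row of Table 1
assert comb4_fixed(272) == (48, 12, 68)        # last row of Table 4
assert comb2_low_papr(8) == (14, 7, 7)         # first row of Table 3
```

These helper names are illustrative; the text itself specifies only the rules, not any particular implementation.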
When the disclosed SRS is repeated over G consecutive OFDM symbols in each SRS period, it can be transmitted together with a length-G orthogonal cover code (OCC). By this means, one disclosed SRS can be reused together with up to G different OCCs that are orthogonal to each other and transmitted from different UEs and/or SRS antenna ports in parallel (with aligned arrival times). In this case, the channel estimates of these transmissions obtained from the signal received in a single OFDM symbol always suffer full interference from each other. However, thanks to the orthogonality between different OCCs, such interference can be fully cancelled after averaging the estimated CIRs from the G consecutive OFDM symbols according to the adopted OCC, assuming the channel variation across these G consecutive OFDM symbols is marginal and can be ignored. By this means, the SRS capacity can be further increased by a factor of G using the disclosed SRS design, compared to the case when an SRS is transmitted over only one OFDM symbol per SRS period. In one embodiment, the OCCs can be taken to be the columns of a G×G DFT matrix, the columns of a G×G DFT matrix with row permutation, or the columns of a G×G DFT matrix with or without row permutation that are further element-wise multiplied with an arbitrary constant-envelope length-G sequence. For example, the OCCs can be [1, 1] and [1, −1] when G=2, or can be [1, 1, 1, 1], [1, j, −1, −j], [1, −1, 1, −1] and [1, −j, −1, j] when G=4. In one embodiment, the OCCs can be taken to be the columns of a G×G Hadamard matrix, the columns of a G×G Hadamard matrix with row permutation, or the columns of a G×G Hadamard matrix with or without row permutation that are further element-wise multiplied with an arbitrary constant-envelope length-G sequence. For example, the OCCs can be [1, 1] and [1, −1] when G=2, or can be [1, 1, 1, 1], [1, −1, 1, −1], [1, 1, −1, −1] and [1, −1, −1, 1] when G=4.
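The DFT-based OCC option can be sketched as follows; the G=4 codes generated here coincide with the example codes listed above (the sign of the DFT kernel is chosen so that the second code is [1, j, −1, −j]).

```python
import cmath

# Sketch: the columns of a G x G DFT matrix form G mutually orthogonal cover
# codes; [1,1,1,1], [1,j,-1,-j], [1,-1,1,-1], [1,-j,-1,j] from the text are
# exactly these columns (up to the sign convention of the DFT kernel).
G = 4
W = cmath.exp(1j * 2 * cmath.pi / G)     # kernel chosen so code 1 is [1, j, -1, -j]
occ = [[W ** (g * k) for k in range(G)] for g in range(G)]

for g1 in range(G):
    for g2 in range(G):
        ip = sum(occ[g1][k] * occ[g2][k].conjugate() for k in range(G))
        if g1 == g2:
            assert abs(ip - G) < 1e-9    # each code has energy G
        else:
            assert abs(ip) < 1e-9        # different codes are orthogonal
```

It is this pairwise orthogonality that lets the interference between OCC-multiplexed SRSs cancel when the per-symbol CIR estimates are combined according to the adopted OCC.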
Signaling of SRS Before a UE transmits an SRS, it needs to determine the configuration of the SRS to be transmitted from the higher-layer signaling. The configuration of the SRS may include the SRS time-frequency resource for each SRS antenna port (e.g. the starting position in the time domain, the number of consecutive OFDM symbols in each SRS period, the starting position in the frequency domain, the SRS bandwidth and the comb number) and the SRS sequence identity nIDSRS. The SRS time-frequency resource can be signaled in the same manner as in 3GPP NR, e.g. through SRS-Resource, and the SRS sequence identity nIDSRS can be signaled in different ways, as detailed below. It is noted that the values of δ, A and t can be derived from the SRS time-frequency resource configuration, and so do not need to be signaled separately. In one embodiment, when a UE needs to transmit SRSs from multiple SRS antenna ports over the same time-frequency resource, these SRSs are selected from the same orthogonal SRS subset on the same interlace and signaled to the UE. In one embodiment, when a UE needs to transmit SRSs from multiple SRS antenna ports over the same time-frequency resource, these SRSs are selected to have modulating sequences constructed from short sequences of the same orthogonal short-sequence subset and the same long sequence, mapped to different interlaces, and then signaled to the UE. In one embodiment, the value of Q (i.e. the number of orthogonal SRS subsets in each SRS group) is a constant pre-specified in the standard and so does not need to be signaled. In another embodiment, the value of Q is configurable and can be taken from a number NQ of values {Q0, Q1, . . . , QNQ−1} pre-specified in the standard, and its detailed value is signaled to the UE via a ⌈log2 NQ⌉-bit string.
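The bit-string sizes involved in these signaling options are plain ceil-log2 computations. The sketch below uses hypothetical example values of Nset, Q and A (not values from the text) to compare a single joint bit string against split per-field bit strings.

```python
import math

# Sketch (hypothetical example values): comparing the signaling overhead of a
# single bit string of ceil(log2(Nset*Q*A)) bits against splitting the same
# identity into separate per-field bit strings.
Nset, Q, A = 1008, 2, 12

single = math.ceil(math.log2(Nset * Q * A))
split2 = math.ceil(math.log2(Nset)) + math.ceil(math.log2(Q * A))
split3 = math.ceil(math.log2(Nset)) + math.ceil(math.log2(Q)) + math.ceil(math.log2(A))

# Joint coding never needs more bits than split coding
assert single <= split2 <= split3
print(single, split2, split3)   # prints: 15 15 15
```

Split signaling costs at most a few extra bits but lets the receiver decode each field independently, which is the trade-off behind the one-, two- and three-string embodiments described here.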
In one embodiment, the SRS sequence identity nIDSRS is signaled to the UE by a single bit string of ⌈log2(Nset·Q·A)⌉ bits, where Nset is the number of SRS sets defined in the system, with each SRS set assigned to one cell, Q is the number of orthogonal SRS subsets in each SRS set, and A is the number of orthogonal SRSs within each orthogonal SRS subset. In one embodiment, the SRS sequence identity nIDSRS is signaled by two bit strings of lengths ⌈log2(Nset)⌉ and ⌈log2(Q·A)⌉, respectively, where the first bit string is used to indicate the index of the SRS set, and the second bit string is used to indicate the index of the SRS sequence within the SRS set. In one embodiment, the SRS sequence identity nIDSRS is signaled by three bit strings of lengths ⌈log2(Nset)⌉, ⌈log2(Q)⌉ and ⌈log2(A)⌉, respectively, where the first bit string is used to indicate the index of the SRS set, the second bit string is used to indicate the index of the orthogonal SRS subset within the SRS set, and the third bit string is used to indicate the index of the SRS within the orthogonal SRS subset. Link Level Performance Evaluation A single-cell system at a 5 GHz carrier frequency with 15 kHz SCS and a bandwidth of 52 PRBs (i.e. there are N=52×12=624 subcarriers) is considered. To generate SRSs based on the disclosed method, it is assumed that δ=48, t=13, A=12 and that the A=12 subcarriers in each gPRB are distributed as every fourth subcarrier in each interlacing subband. Consequently, there are B=⌊δ/A⌋=4 orthogonal interlaces. Without loss of generality, the first interlace is selected and QA=24 SRS sequences consisting of Q=2 orthogonal subsets are constructed, where the short sequences in each orthogonal short-sequence subset are the 12 columns of the 12×12 DFT matrix, covered by a length-13 ZC sequence shortened to length 12, with two different ZC root indices (1 and 2) for the two orthogonal subsets, respectively, and the long sequence is selected to be a(u)=Wt^((u mod A)·⌊u/A⌋).
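The short-sequence construction used in this evaluation can be sketched directly. The code below builds the two orthogonal subsets described above (DFT columns covered by shortened ZC sequences with roots 1 and 2) and checks that covering with a common unit-modulus sequence preserves orthogonality within each subset; the sequence counts match the stated QA=24.

```python
import cmath

# Sketch of the evaluation-setup short sequences: the 12 columns of the 12x12
# DFT matrix, element-wise covered by a length-13 ZC sequence shortened to
# length 12, with roots 1 and 2 for the two orthogonal subsets (Q = 2).
A = 12
W_A = cmath.exp(-1j * 2 * cmath.pi / A)
dft_cols = [[W_A ** (k * n) for k in range(A)] for n in range(A)]

def zc13(root):
    # length-13 ZC sequence, shortened to length 12
    return [cmath.exp(-1j * cmath.pi * root * k * (k + 1) / 13) for k in range(13)][:A]

subsets = []
for root in (1, 2):
    cover = zc13(root)
    subsets.append([[c * v for c, v in zip(col, cover)] for col in dft_cols])

assert sum(len(s) for s in subsets) == 24        # QA = 24 short sequences
# Covering with a common unit-modulus sequence preserves orthogonality
for s in subsets:
    for n1 in range(A):
        for n2 in range(n1 + 1, A):
            ip = sum(x * y.conjugate() for x, y in zip(s[n1], s[n2]))
            assert abs(ip) < 1e-9
```

Orthogonality survives the covering because |cover(k)|=1, so the inner product between two covered columns reduces to the inner product between the original DFT columns.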
For comparison, SRSs from the second conventional solution, based on Comb-4 frequency resource allocation and ZC sequences, are also generated as a reference. Each such SRS is constructed from a length-151 ZC sequence of a certain root index, cyclically extended to length 156, and then mapped to every fourth subcarrier of the frequency band. For each root index, 12 equally spaced cyclic shifts are adopted to generate 12 orthogonal SRSs. To generate the same number of SRSs as in the proposed scheme, two different root indices are used in each cell to generate two orthogonal subsets of SRSs. All the generated SRSs are transmitted over only one OFDM symbol in each re-transmission period T. For each SRS scheme two interference scenarios are evaluated: an interference-free scenario, with T=1 ms (one slot) and T=2 ms (two slots), and a non-orthogonal interference scenario, with T=1 ms. FIG. 9 shows a diagram of the system to be simulated, where there is a target UE besides some interfering UEs. It is assumed that the BS is equipped with a 4×8 antenna array with half-wavelength spacing between adjacent columns/rows, and that each UE is equipped with a single antenna. For each UE, a mobility speed of 30 km/h is assumed, and the channel between it and the BS is generated according to the CDL-C channel model (i.e. a fading channel model) with 100 ns desired delay spread and random azimuth arrival/departure angle (AoA/AoD) rotations. The generated channel is identical for the UL and DL of each UE (i.e. reciprocity between the UL and DL channels of each UE is assumed), but keeps varying with time due to the UE mobility. When the target UE transmits an SRS in its UL channel, some other UEs may also transmit SRSs that are non-orthogonal to the target SRS (i.e. selected from different orthogonal SRS subsets) in the same time-frequency resource, and these will appear as interference to the target UE at the BS.
Upon receiving the SRS(s), the BS will then perform UL channel estimation for the target UE at each BS antenna by periodically correlating the received signal with the target SRS, yielding the estimated CIR of the target UE as the segment of its correlator output falling in the detection time window. For the Comb-4 based SRS scheme the detection time window is always set to t−1=12 (i.e. the minimum ZCCZ length), while for the proposed B-IFDMA based SRS scheme the detection time window is set to t−1=12 when there is no interfering UE, and set to ⌊t/Q⌋−1=⌊13/2⌋−1=5 when there is at least one interfering UE. The estimated CIR is then converted to the frequency domain via DFT to obtain the estimated channel coefficient at each subcarrier and each BS antenna for use in the subsequent DL data transmission. For the subsequent DL data transmission, the un-coded information bits are first modulated with QPSK modulation and then allocated to subcarriers in the same frequency band as the SRSs, where the beamforming vector for the QPSK data symbol to be transmitted on each subcarrier is designed as the conjugate of the corresponding estimated UL channel vector on this subcarrier, with power normalization. At each BS antenna, all the QPSK data symbols weighted by their corresponding beamforming coefficients are then converted to the time domain via IDFT, followed by CP addition, and sent to the target UE via its DL channel. For simplicity, only the worst case is simulated, in which the DL data is transmitted at the end of one SRS period (i.e. the time gap between the UL SRS transmission and the DL data transmission is set to T, equal to the SRS period). At the receiver, the target UE first performs equalization on the received data signal assuming perfect channel knowledge, and then demodulates the equalized signals on the different subcarriers. Finally, the bit error rate (BER) is calculated based on the demodulation results for each SNR.
FIG.10shows the BER performance of the proposed and the reference SRS schemes versus the SNR of the DL data transmission, obtained with the simulation setting described above. InFIG.10, the B-IFDM curves indicate the BER performance of the proposed SRS scheme and the Comb-4 curves indicate the BER performance of the reference SRS scheme. In the interference-free case (i.e. the lines for 0 Int UE), the proposed SRS scheme has the same performance as the Comb-4 based SRS scheme, for both T=1 ms and T=2 ms. However, in the full interference case (i.e. the curves for 12 Int UEs), the proposed SRS scheme significantly outperforms the Comb-4 based SRS scheme, by about 6 dB at 1% BER, as indicated inFIG.10. Most importantly, the proposed SRS scheme in the full interference case (24 SRSs per cell with T=1 ms) even outperforms, by about 0.8 dB at 1% BER, the interference-free case of the reference scheme with the same number (24) of SRSs transmitted in the TDM manner with a time offset of 1 ms over the period T=2 ms. As previously mentioned, the first communication device100may in embodiments obtain the time domain signal according to the invention using alternative steps. The first communication device100may e.g. be configured to perform the steps of the first communication device100according to the fifth or sixth aspect of the invention disclosed in the summary. Thus, the first communication device100may in embodiments, instead of phase rotating the sequence in the frequency domain based on the constant value Δq, cyclically shift the time domain signal based on the constant value Δqto obtain the cyclically shifted time domain signal, as described for the fifth aspect of the first communication device100. 
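The equivalence underlying this alternative is the standard DFT shift property: a linear phase rotation across the frequency-domain sequence corresponds to a cyclic shift of the time domain signal. A minimal sketch, assuming Δq is an integer number of samples and one particular sign convention:

```python
import numpy as np

def phase_rotate_then_idft(seq_freq, delta_q):
    # Phase rotate the frequency-domain sequence by a linear phase ramp
    # derived from the constant value delta_q, then go to the time domain.
    n = len(seq_freq)
    k = np.arange(n)
    rotated = seq_freq * np.exp(-1j * 2 * np.pi * k * delta_q / n)
    return np.fft.ifft(rotated)

def idft_then_cyclic_shift(seq_freq, delta_q):
    # Alternative route: go to the time domain first, then cyclically
    # shift the time-domain signal by delta_q samples.
    return np.roll(np.fft.ifft(seq_freq), delta_q)
```

Both routes produce the identical transmit signal, which is why the device may implement either the frequency-domain phase rotation or the time-domain cyclic shift.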
The first communication device100may further obtain the cyclically shifted time domain signal based on the sequence itself, where any two sequences from different subsets of sequences are non-orthogonal when each q-th subset of sequences are inversely phase rotated based on the constant value Δq, as described for the sixth aspect of the first communication device100. Consequently, the first communication device100may in embodiments comprise a set of functional blocks configured to perform the steps of the first communication device100according to the fifth or sixth aspect of the invention disclosed in the summary. Furthermore, the second communication device300may in embodiments estimate the radio channel using alternative steps. The second communication device300may e.g. be configured to perform the steps of the second communication device300according to the seventh, eighth, ninth, or tenth aspect of the invention disclosed in the summary. Thus, the second communication device300may in embodiments phase rotate the local sequence in the frequency domain based on the constant value Δq, cyclically shift the local time domain signal based on the constant value Δq, or obtain the cyclically shifted time domain signal based on the sequence itself and then estimate the radio channel based on the received time domain signal and the local time domain signal, as described for the seventh, eighth, or ninth aspect of the second communication device300, respectively. The second communication device300may further obtain the phase rotated local signal based on the sequence itself and then estimate the radio channel based on the phase rotated local sequence in the frequency domain and the sequence in the frequency domain of the received time domain signal, as described for the tenth aspect of the second communication device300. 
Consequently, the second communication device300may in embodiments comprise a set of functional blocks configured to perform the functions of the second communication device300according to the seventh, eighth, ninth, or tenth aspect of the invention disclosed in the summary. The first communication device100and/or the second communication device300herein may be denoted as a user device, a User Equipment (UE), a mobile station, an internet of things (IoT) device, a sensor device, a wireless terminal and/or a mobile terminal, and is enabled to communicate wirelessly in a wireless communication system, sometimes also referred to as a cellular radio system. The UEs may further be referred to as mobile telephones, cellular telephones, or computer tablets or laptops with wireless capability. The UEs in this context may be, for example, portable, pocket-storable, hand-held, computer-comprised, or vehicle-mounted mobile devices, enabled to communicate voice and/or data, via the radio access network, with another entity, such as another receiver or a server. The UE can be a Station (STA), which is any device that contains an IEEE 802.11-conformant Media Access Control (MAC) and Physical Layer (PHY) interface to the Wireless Medium (WM). The UE may also be configured for communication in 3GPP related LTE and LTE-Advanced, in WiMAX and its evolution, and in fifth generation wireless technologies, such as New Radio. However, the first communication device100and/or the second communication device300herein may also be denoted as a radio network access node, an access network access node, an access point, or a base station, e.g. a Radio Base Station (RBS), which in some networks may be referred to as a transmitter, “gNB”, “gNodeB”, “eNB”, “eNodeB”, “NodeB” or “B node”, depending on the technology and terminology used. The radio network access nodes may be of different classes such as e.g. 
macro eNodeB, home eNodeB or pico base station, based on transmission power and thereby also cell size. The radio network access node can be a Station (STA), which is any device that contains an IEEE 802.11-conformant Media Access Control (MAC) and Physical Layer (PHY) interface to the Wireless Medium (WM). The radio network access node may also be a base station corresponding to the fifth generation (5G) wireless systems. The first communication device100and/or the second communication device300herein may also be denoted as a road side unit such as e.g. a road side unit in a V2X application. The road side unit may be any device/node deployed along roads to improve vehicular network performance, as well as extend the coverage. The road side unit may be a stand-alone device/node or may be integrated with e.g. a network access node. Furthermore, any method according to embodiments of the invention may be implemented in a computer program, having code means, which when run by processing means causes the processing means to execute the steps of the method. The computer program is included in a computer readable medium of a computer program product. The computer readable medium may comprise any memory, such as a ROM (Read-Only Memory), a PROM (Programmable Read-Only Memory), an EPROM (Erasable PROM), a Flash memory, an EEPROM (Electrically Erasable PROM), or a hard disk drive. Moreover, it is realized by the skilled person that embodiments of the first communication device100and the second communication device300comprises the necessary communication capabilities in the form of e.g., functions, means, units, elements, etc., for performing the solution. 
Examples of other such means, units, elements and functions are: processors, memory, buffers, control logic, encoders, decoders, rate matchers, de-rate matchers, mapping units, multipliers, decision units, selecting units, switches, interleavers, de-interleavers, modulators, demodulators, inputs, outputs, antennas, amplifiers, receiver units, transmitter units, DSPs, MSDs, TCM encoder, TCM decoder, power supply units, power feeders, communication interfaces, communication protocols, etc. which are suitably arranged together for performing the solution. Especially, the processor(s) of the first communication device100and the second communication device300may comprise, e.g., one or more instances of a Central Processing Unit (CPU), a processing unit, a processing circuit, a processor, an Application Specific Integrated Circuit (ASIC), a microprocessor, or other processing logic that may interpret and execute instructions. The expression “processor” may thus represent a processing circuitry comprising a plurality of processing circuits, such as, e.g., any, some or all of the ones mentioned above. The processing circuitry may further perform data processing functions for inputting, outputting, and processing of data comprising data buffering and device control functions, such as call processing control, user interface control, or the like. Finally, it should be understood that the invention is not limited to the embodiments described above, but also relates to and incorporates all embodiments within the scope of the appended independent claims.
11943089
DETAILED DESCRIPTION One unique aspect of the signal modulation techniques described herein is the concept of spreading the data of a single symbol over a relatively large range of times, frequencies, and spectral shapes. In contrast, prior communication systems have been predicated upon assigning a given data symbol to a specific time-spreading interval or time slice uniquely associated with such data symbol. As is discussed below, the disclosed OTFS method is based at least in part upon the realization that in many cases various advantages may accrue from spreading the data of a single symbol over multiple time-spreading intervals shared with other symbols. In contrast with prior art modulation techniques, the OTFS method may involve convolving a single data symbol over a plurality of time slots, a plurality of frequencies or spectral regions (spread spectrum), and a plurality of spectral shapes. As is described below, this approach to data convolution results in superior performance over impaired communications links. System Overview FIG.1illustrates an example of a wireless communication system100that may exhibit time/frequency selective fading. The system100includes a transmitter110(e.g., a cell phone tower) and a receiver120(e.g., a cell phone). The scenario illustrated inFIG.1includes multiple pathways (multi-path) that the signal transmitted from the transmitter110travels through before arriving at the receiver120. A first pathway130reflects through a tree132, a second pathway140reflects off of a building142and a third pathway150reflects off of a second building152. A fourth pathway160reflects off of a moving car162. Because each of the pathways130,140,150and160travels a different distance, and is attenuated or faded at a different level and at a different frequency, when conventionally configured the receiver120may drop a call or at least suffer low throughput due to destructive interference of the multi-path signals. 
Turning now toFIG.43, a high-level representation is provided of a conventional transceiver4300which could be utilized in the wireless communication system100ofFIG.1. The transceiver4300could, for example, operate in accordance with established protocols for time-division multiple access (TDMA), code-division multiple access (CDMA) or orthogonal frequency-division multiplexing (OFDM) systems. In conventional wireless communication systems such as TDMA, CDMA and OFDM systems, the multipath communication channel4310between a transmitter4304and a receiver4308is represented by a one-dimensional model. In these systems channel distortion is characterized using a one-dimensional representation of the impulse response of the communication channel. The transceiver4300may include a one-dimensional equalizer4320configured to at least partially remove this estimated channel distortion from the one-dimensional output data stream4330produced by the receiver4308. Unfortunately, use of a one-dimensional channel model presents a number of fundamental problems. First, the one-dimensional channel models employed in existing communication systems are non-stationary; that is, the symbol-distorting influence of the communication channel changes from symbol to symbol. In addition, when a channel is modeled in only one dimension it is likely that certain received symbols will be significantly lower in energy than others due to “channel fading”. Finally, one-dimensional channel state information (CSI) appears random and much of it is estimated by interpolating between channel measurements taken at specific points, thus rendering the information inherently inaccurate. These problems are only exacerbated in multi-antenna (MIMO) communication systems. As is discussed below, embodiments of the OTFS method described herein can be used to substantially overcome the fundamental problems arising from use of a one-dimensional channel model. 
As is indicated below by Equation (1), in one aspect the OTFS method recognizes that a wireless channel may be represented as a weighted superposition of time and Doppler shifts:

h_a(φ)(t)=Σ_(τ,υ) a(τ,υ) e^(j2πυt) φ(t+τ)  (1)

where the weights a(τ,υ) are the time-frequency weights. In contrast to the parameters associated with existing channel models, the time-frequency weights a(τ,υ) of Equation (1) are two-dimensional and are believed to fully characterize the wireless channel. The time-frequency weights a(τ,υ) are intended to represent essentially all of the diversity branches existing in the wireless channel. This is believed to substantially minimize the fading effects experienced by the OTFS system, and other communication systems generally based upon two-dimensional channel models, relative to the fading common in systems predicated upon one-dimensional models. Finally, in contrast to the non-stationary, one-dimensional channel models employed in conventional communication systems, the time-frequency weights a(τ,υ) of Equation (1) are substantially stationary; that is, the weights change very slowly relative to the time scale of exemplary embodiments of the OTFS system. Use of the two-dimensional channel model of Equation (1) in embodiments of the OTFS communication system affords a number of advantages. For example, use of the channel model of Equation (1) enables both channel multipath delay and Doppler shift to be accurately profiled simultaneously. Use of this model and the OTFS modulation techniques described herein also facilitate the coherent assembly of channel echoes and the minimization of fading phenomena, since every symbol experiences substantially all of the diversity branches present within the channel. Given that the two-dimensional channel model is essentially stationary, every symbol is deterministically distorted (smeared) according to substantially the same two-dimensional pattern. 
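In discrete time, the channel action of Equation (1) can be sketched as a weighted sum of time-shifted and Doppler-modulated copies of the input. The cyclic handling of the time shift and the representation of the weights as a dictionary keyed by (τ, υ) pairs are simplifying assumptions made for illustration.

```python
import numpy as np

def apply_channel(a, phi, t):
    # Discrete sketch of Equation (1): output is a weighted superposition of
    # time shifts (by tau samples) and Doppler shifts (by nu cycles/sample)
    # of the input waveform phi; a maps (tau, nu) to complex weights.
    out = np.zeros_like(t, dtype=complex)
    for (tau, nu), weight in a.items():
        out += weight * np.exp(1j * 2 * np.pi * nu * t) * np.roll(phi, -tau)
    return out
```

A single weight at (τ, υ)=(0, 0) reproduces the input unchanged; each additional nonzero weight contributes one delayed, Doppler-shifted echo.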
This stable, accurate characterization of the communication channel in two dimensions on an ongoing basis further enables the OTFS system to minimize data distortion by “customizing” how each bit is delivered across the channel. Finally, use of a two-dimensional channel model enables effective signal separation by decoupling and eliminating mutual interference between multiple sources. Attention is now directed toFIG.2, which illustrates an example of a mathematical model200that can be used to model time/frequency selective fading. A transmit side of the model200includes a pre-equalizer210, a transmitter/modulation component220, a channel model230, and additive noise240which is combined with the transmitted signal via a summer250. A receive side of the model200includes a receiver/demodulator260and a post equalizer270. The pre-equalizer210is used to model a pre-distortion transfer function htthat can be used to make up for changing channel conditions in the channel model hcbased on feedback received over the channel from the receive side of the model, as determined by measurements made by the receiver/demodulator260and/or the post equalizer270. The transmitter/modulator220uses modulation schemes described herein to transmit the data over the channel230. The receiver/demodulator260demodulates the signal received over the channel230. The received signal has been distorted by time/frequency selective fading, as determined by the channel transfer function hc, and includes the additive noise240. The receiver/demodulator260and the post equalizer270utilize methods discussed herein to reduce the distortion caused by the time/frequency selective fading and additive noise due to the channel conditions. The mathematical model200can be used to determine the nature of the equalized data Deqby performing a mathematical combination of three transfer functions operating on the original data D. 
The three transfer functions include the transmitter transfer function ht, the channel transfer function hcand the equalizer transfer function hr. Embodiments of the OTFS methods and systems described herein are based, in part, upon the realization that spreading the data for any given symbol over time, spectrum, and/or spectral shapes in the manner described herein yields modulated signals which are substantially resistant to interference, particularly interference caused by Doppler effects and multi-path effects, as well as general background noise effects. Moreover, the OTFS method is believed to require less precise frequency synchronization between receiver and transmitter than is required by existing communication systems (e.g., OFDM systems). In essence, the OTFS method convolves the data for a group of N2symbols (herein called a “frame”) over both time, frequency, and in some embodiments spectral shape in a way that results in the data for the group of symbols being sent over a generally longer period of time than in prior art methods. Use of the OTFS method also results in the data for any given group of symbols being accumulated over a generally longer period of time than in prior art methods. However, in certain embodiments the OTFS method may nonetheless enable favorable data rates to be achieved despite the use of such longer transmission periods by exploiting other transmission efficiencies enabled by the method. For example, in one embodiment a group of symbols may be transmitted using the same spread-spectrum code. Although this could otherwise result in confusion and ambiguity (since each symbol would not be uniquely associated with a code), use of the OTFS method may, for example, enable the symbols to be sent using different (but previously defined) spread-spectrum convolution methods across a range of time and frequency periods. 
As a consequence, when all of the data corresponding to the symbols is finally accumulated within the receiver, the entire frame or group of symbols may be reconstructed in a manner not contemplated by prior art techniques. In general, one trade-off associated with the disclosed approach is that either an entire multi-symbol frame of data will be correctly received, or none of the frame will be correctly received; that is, if there is too much interference within the communication channel, then the ability to successfully deconvolve and retrieve multiple symbols may fail. However, as will be discussed, various aspects of the OTFS may mitigate any degradation in performance which would otherwise result from this apparent trade-off. FIG.3Ais a block diagram of components of an exemplary OTFS communication system300. As shown, the system300includes a transmitting device310and a receiving device330. The transmitting device310and the receiving device330include first and second OTFS transceivers315-1and315-2, respectively. The OTFS transceivers315-1and315-2communicate, either unidirectionally or bidirectionally, via communication channel320in the manner described herein. Although in the exemplary embodiments described herein the system300may comprise a wireless communication system, in other embodiments the communication channel may comprise a wired communication channel such as, for example, a communication channel within a fiber optic or coaxial cable. As was described above, the communication channel320may include multiple pathways and be characterized by time/frequency selective fading. FIG.4illustrates components of an exemplary OTFS transceiver400. The OTFS transceiver400can be used as one or both of the exemplary OTFS transceivers315illustrated in the communication system300ofFIG.3. The OTFS transceiver400includes a transmitter module405that includes a pre-equalizer410, an OTFS encoder420and an OTFS modulator430. 
The OTFS transceiver400also includes a receiver module455that includes a post-equalizer480, an OTFS decoder470and an OTFS demodulator460. The components of the OTFS transceiver may be implemented in hardware, software, or a combination thereof. For a hardware implementation, the processing units may be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described above, and/or a combination thereof. The disclosed OTFS methods will be described in view of the various components of the transceiver400. In one aspect a method of OTFS communication involves transmitting at least one frame of data ([D]) from the transmitting device310to the receiving device330through the communication channel320, such frame of data comprising a matrix of up to N2data elements, N being greater than 1. The method comprises convolving, within the OTFS transceiver315-1, the data elements of the data frame so that the value of each data element, when transmitted, is spread over a plurality of wireless waveforms, each waveform having a characteristic frequency, and each waveform carrying the convolved results from a plurality of said data elements from the data frame [D]. Further, during the transmission process, cyclically shifting the frequency of this plurality of wireless waveforms over a plurality of times so that the value of each data element is transmitted as a plurality of cyclically frequency shifted waveforms sent over a plurality of times. At the receiving device330, the OTFS transceiver315-2receives and deconvolves these wireless waveforms thereby reconstructing a replica of said at least one frame of data [D]. 
In the exemplary embodiment the convolution process is such that an arbitrary data element of an arbitrary frame of data ([D]) cannot be guaranteed to be reconstructed with full accuracy until substantially all of these wireless waveforms have been transmitted and received. FIG.5illustrates a comparison of bit error rates (BER) predicted by a simulation of a TDMA system and an OTFS system. Both systems utilize a 16 QAM constellation. The simulation modeled a Doppler spread of 100 Hz and a delay spread of 3 microsec. As can be seen from the graphs, the OTFS system offers much lower BER than the TDMA system for the same signal-to-noise ratio (SNR). Attention is now directed toFIG.45, which is a flowchart representative of the operations performed by an OTFS transceiver4500which may be implemented as, for example, the OTFS transceiver400. The OTFS transceiver4500includes a transmitter including a modulator4510and a receiver including a demodulator4520and two-dimensional equalizer4530. In operation, a transmitter of the OTFS transceiver4500receives a two-dimensional symbol stream in the form of an N×N matrix of symbols, which may hereinafter be referred to as a TF matrix:

x∈C^(N×N)

As is illustrated inFIG.46, in one embodiment the modulator4510functions as an orthogonal map disposed to transform the two-dimensional TF matrix to the following transmitted waveform:

φ_t=M(x)=Σ x(i,j)φ_(i,j), where φ_(i,j)⊥φ_(k,l) for (i,j)≠(k,l)

Referring toFIG.47, the demodulator4520transforms the received waveform φ_r to a two-dimensional TF matrix in accordance with an orthogonal map in order to generate an output stream:

y=D(φ_r)

In one embodiment the OTFS transceiver4500may be characterized by a number of variable parameters including, for example, delay resolution (i.e., digital time “tick” or clock increment), Doppler resolution, processing gain factor (block size) and orthonormal basis function. Each of these variable parameters may be represented as follows. 
Delay resolution (digital time tick): ΔT∈R_(>0) (ΔT=1/B_w)

Doppler resolution: ΔF∈R_(>0) (ΔF=1/T_trans)

Processing gain factor (block size): N>0

Orthonormal basis of C^(N×1) (spectral shapes): U={u_1, u_2, . . . , u_N}

As is illustrated byFIG.45, during operation the modulator4510takes a TF matrix x∈C^(N×N) and transforms it into a pulse waveform. In one embodiment the pulse waveform comprises a pulse train defined in terms of the Heisenberg representation and the spectral shapes:

φ_t=M(x)=(Π(x)u_1, Π(x)u_2, . . . , Π(x)u_N)=(b_1, b_2, . . . , b_N)

where b_1, b_2, . . . , b_N are illustrated inFIG.48and where, in accordance with the Heisenberg relation Π(h*x)=Π(h)·Π(x), in particular:

Π(δ_(τ,0)*x)=L_τ·Π(x)

Π(δ_(0,w)*x)=M_w·Π(x)

The Heisenberg representation provides that Π: C^(N×N)→C^(N×N) is given by:

Π(x)=Σ_(τ,w=0)^(N−1) x(τ,w) M_w L_τ, x∈C^(N×N)

where L_τ and M_w are respectively representative of cyclic time and frequency shifts and may be represented as:

L_τ∈C^(N×N): L_τ(φ)(t)=φ(t+τ), τ=0, . . . , N−1

M_w∈C^(N×N): M_w(φ)(t)=e^(j2πwt/N) φ(t), w=0, . . . , N−1

The demodulator4520takes a received waveform φ_r=(b_1, b_2, . . . , b_N) and transforms it into a TF matrix y∈C^(N×N) defined in terms of the Wigner transform and the spectral shapes:

y(τ,w)=D(φ_r)(τ,w)=(1/N) Σ_(n=1)^N ⟨M_w L_τ u_n, b_n⟩ (Wigner transform)

Main property of M and D (Stone-von Neumann theorem): D(h_a·M(x))=h*x, where h(τ,w)≈a(τΔT, wΔF). As illustrated inFIG.49, the equalizer4530may be implemented as a two-dimensional decision feedback equalizer configured to perform a least mean square (LMS) equalization procedure such that y↦{circumflex over (x)}. Matrix Formulation Throughout this description, the use of matrix terminology should be understood as being a concise description of the various operations that will be carried out by either the OTFS transceiver315-1or the OTFS transceiver315-2. 
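The Heisenberg/Wigner pair can be checked numerically for small N. The sketch below takes the spectral shapes u_n to be the standard basis vectors (an assumption made purely for illustration, so that the b_n are the columns of Π(x)) and verifies that demodulation inverts modulation, consistent with the Stone-von Neumann property for h=δ.

```python
import numpy as np

def shift_matrices(n):
    # L_tau: cyclic time shift, (L_tau phi)(t) = phi(t + tau)
    # M_w: cyclic frequency shift, (M_w phi)(t) = exp(j 2 pi w t / n) phi(t)
    L = [np.roll(np.eye(n), tau, axis=1) for tau in range(n)]
    M = [np.diag(np.exp(1j * 2 * np.pi * w * np.arange(n) / n)) for w in range(n)]
    return L, M

def heisenberg(x):
    # Pi(x) = sum over (tau, w) of x(tau, w) * M_w * L_tau
    n = x.shape[0]
    L, M = shift_matrices(n)
    return sum(x[tau, w] * (M[w] @ L[tau]) for tau in range(n) for w in range(n))

def wigner(pi_x):
    # y(tau, w) = (1/n) sum_n <M_w L_tau u_n, b_n>; with u_n the standard
    # basis, the inner sum over n is the trace of (M_w L_tau)^H Pi(x).
    n = pi_x.shape[0]
    L, M = shift_matrices(n)
    y = np.zeros((n, n), dtype=complex)
    for tau in range(n):
        for w in range(n):
            y[tau, w] = np.trace((M[w] @ L[tau]).conj().T @ pi_x) / n
    return y
```

The round trip works because the N2 matrices M_w L_τ are orthogonal under the trace inner product, with Tr((M_w L_τ)^H M_w' L_τ')=N exactly when τ=τ' and w=w', and zero otherwise.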
Thus the series of steps used to obtain the coefficients of a particular matrix generally correspond to a set of instructions for the transmitter or receiver electronic circuitry (e.g., the various components of the transmitter405and the receiver455illustrated inFIG.4A). For example, one set of coefficients may instruct the transmitter405or receiver455to perform a spread spectrum operation, a different set of coefficients may instruct the transmitter405or receiver455to perform a spectral shaping modulation or demodulation operation, and another set of coefficients may instruct the transmitter to perform various time spreading or time accumulation functions. Here standard matrix mathematics is used as a shorthand way of reciting the series of instructions used to transmit and receive these complex series of wireless signals. Thus, when the discussion speaks of multiplying matrices, each data element in the matrix formed by the multiplication can be understood in terms of various multi-step operations to be carried out by the transmitter or receiver electronic circuitry (e.g., the transmitter405or the receiver455as illustrated inFIG.4A), rather than as a pure number. Thus, for example, a matrix element formed from one matrix that may have spread-spectrum like pseudorandom numbers multiplied by another matrix that may have tone or spectral-shape spreading instructions, such as QAM or phase shift keying instructions, multiplied by another scanning system, permutation scheme, or matrix that may have data instructions, should be understood as directing the transmitter405to transmit a radio signal that is modulated according to these three means, or as directing the receiver455to receive and demodulate/decode a radio signal that is modulated according to these three means. 
Put into matrix terminology, the OTFS method of convolving the data for a group of symbols over both time, spectrum, and tone or spectral-shape can be viewed as transforming the data frame with N2information elements (symbols) to another new matrix with N2elements whereby each element in the new transformed matrix (here called the TFS data matrix) carries information about all elements of the original data frame. In other words the new transformed TFS data matrix will generally carry a weighted contribution from each element of the original data frame matrix [D]. Elements of this TFS data matrix are in turn transmitted and received over successive time intervals. As previously discussed, in embodiments of the OTFS method the basic unit of convolution and deconvolution (convolution unit) is composed of a matrix of N2symbols or data elements. Over each time interval, a different waveform may be used for each data element. By contrast, prior art methods generally use the same waveform for each data element. For consistency, the N2units of data will generally be referred to in this specification as a “data frame”. N may be any value greater than one, and in some embodiments will range from 64 to 256. One distinction between the OTFS method and conventional modulation schemes may be appreciated by observing that a basic unit of convolution, transmission, reception and deconvolution for a prior art communications protocol may be characterized as a data frame of n symbols or elements “d” operated on by spreading codes that send the data for the n symbols over one spreading interval time, where:

[D_(1×n)]=[d_1 d_2 . . . d_n]

In contrast, embodiments of the OTFS method generally use a different basic unit of convolution, transmission, reception, and deconvolution. 
In particular, such OTFS embodiments will typically use a larger data frame [D_(N×N)] composed of N2elements or symbols “d” that, as will be discussed, send the data for these N2elements over a plurality of spreading interval times (often the plurality is N). The data frame [D_(N×N)] may be expressed as:

[D_(N×N)]=
[d_(1,1) d_(1,2) . . . d_(1,N)
 d_(2,1) d_(2,2) . . . d_(2,N)
 . . .
 d_(N,1) d_(N,2) . . . d_(N,N)]

In general, references herein to a frame of data may be considered to be a reference to the N×N or N2matrix such as the one shown above, where at least some elements in the matrix may be zero or null elements. In some embodiments, a frame of data could be non-square, or N×M, where N≠M. Signal Transmission As previously discussed, the OTFS method will spread this group of N2symbols across a communications link over multiple spreading intervals of time (usually at least N spreading intervals or times), where each individual spreading interval of time is composed of at least N time slices. Note that due to potential overhead for synchronization and identification purposes, in some embodiments, extra time slices and/or extra spreading intervals of time may be allocated to provide room for this overhead. Although for clarity of presentation this overhead will generally be ignored, it should be understood that the disclosure is intended to also encompass methods in which such overhead exists. In exemplary embodiments of the OTFS method the data will thus be transmitted as a complex series of waveforms, usually over wireless radio signals with frequencies usually above 100 MHz, and often above 1 GHz or more. These radio frequencies are then usually received over at least N spreading time intervals, where each spreading time interval is often composed of at least N time-slices. Once received, the original data frame will be deconvolved (i.e. solved for) and the most likely coefficients of the original group of symbols are reconstructed. 
It should be evident that in order to successfully deconvolve or solve for the original data frame, the receiver will usually have prior knowledge of the time, spectrum, and tone or spectral-shape spreading algorithms used by the transmitter. Attention is now directed toFIG.3B, which illustrates a process340by which the OTFS transceiver315-1of the transmitting device310may transmit a data frame (or a convolution unit) of data, here expressed as an (N by N) or (N2) matrix [D]. This process may be described using standard matrix multiplication as follows:

1: Construct the matrix product of a first N×N matrix [U1] and [D] (often written as either [U1]*[D] or more simply [U1][D]; both the “*” and simple close association (e.g. [U1][D]) are intended to represent matrix multiplication) (stage342).

2: Optionally permute [U1][D] by a permutation operation P in order to create a new N×N matrix (stage344). In general, any invertible permutation operation may be used. P may be an identity operation, or alternatively may be a permutation operation that essentially translates the columns of the original N×N [U1][D] matrix to diagonal elements of a transformed [U1][D]′ matrix.

3: Upon completing the permutation, optionally multiply the permuted result by a second N×N [U2] matrix (for spectral shaping, for example), forming [P([U1][D])][U2] (stage348).

4: Transmit this signal, according to methods discussed below (stage350).

In one embodiment the permutation operation P may optionally be of the form:

b_(i,j)=a_(i,(j−i) mod N)

where [a] is the original matrix (here [U1][D]), and [b] is the new matrix (here P([U1][D])). For sake of simplicity, the result of this permutation operation may be written as P([U1][D]). FIG.22illustrates another permutation that may be used. In this case, the permutation is given by the following relationship:

b_(i,j)=a_(i,(j+i) mod N)

Yet another permutation option is illustrated inFIG.23. 
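Stages342-348of process340can be sketched with a small numerical example. The choice of DFT matrices for [U1] and [U2] is an illustrative assumption; any unitary matrices would serve.

```python
import numpy as np

def otfs_transmit_matrix(D, U1, U2):
    # Stage 342: matrix product [U1][D]
    A = U1 @ D
    # Stage 344: permutation P, b[i, j] = a[i, (j - i) mod N]
    N = A.shape[0]
    B = np.empty_like(A)
    for i in range(N):
        B[i] = np.roll(A[i], i)  # row i cyclically shifted right by i
    # Stage 348: spectral shaping, [P([U1][D])][U2]
    return B @ U2
```

Because every stage is invertible ([U1] and [U2] unitary, P a permutation), the receiver can recover [D] exactly by applying the inverse operations in reverse order.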
In FIG. 23, for illustrative purposes, a second [a] matrix is placed next to the original [a] matrix. Diagonal lines are drawn overlapping the first and second [a] matrices. The permuted [b] matrix is formed by translating each diagonal line one column to the left (or right in yet another permutation), where one or more of the translated entries falls into the second [a] matrix such that one or more entries is moved from the second [a] matrix to the same position in the first [a] matrix.

Here [U1] and [U2], if being used, can both be unitary N×N matrices, usually chosen to mitigate certain impairments on the (often wireless) communications link, such as wide band noise, narrow-band interference, impulse noise, Doppler shift, crosstalk, etc. To do this, rather than simply selecting [U1] and [U2] to be relatively trivial identity matrices [I], or matrices with most of the coefficients simply being placed along the central diagonal of the matrix, [U1] and [U2] will usually be chosen with non-zero coefficients generally throughout the matrix so as to accomplish the desired spreading or convolution of the convolution unit [D] across spectrum and tone or spectral-shape space in a relatively efficient and uniform manner. Usually the matrix coefficients will also be chosen to maintain orthogonality or to provide an ability to distinguish between the different encoding schemes embodied in the different rows of the respective matrices, as well as to minimize autocorrelation effects that can occur when radio signals are subjected to multi-path effects.

In reference to the specific case where [U1] may have rows that correspond to pseudo-random sequences, it may be useful to employ a permutation scheme where each successive row in the matrix is a cyclically rotated version of the pseudo-random sequence in the row above it. Thus the entire N×N matrix may consist of successive cyclically rotated versions of a single pseudo-random sequence of length N.
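As a concrete illustration of the two permutation formulas above, the following is a minimal numpy sketch (the function names are our own, not from the disclosure). It implements b[i, j] = a[i, (j − i) mod N] and shows that the second formula, b[i, j] = a[i, (j + i) mod N], is exactly its inverse, so either can serve as P or P⁻¹.

```python
import numpy as np

def permute_down_diagonals(a):
    """b[i, j] = a[i, (j - i) mod N]: row i is cyclically shifted right by i,
    so columns of the input land on diagonals of the output."""
    n = a.shape[0]
    b = np.empty_like(a)
    for i in range(n):
        for j in range(n):
            b[i, j] = a[i, (j - i) % n]
    return b

def permute_up_diagonals(b):
    """b[i, j] = a[i, (j + i) mod N]: the inverse of the permutation above."""
    n = b.shape[0]
    a = np.empty_like(b)
    for i in range(n):
        for j in range(n):
            a[i, j] = b[i, (j + i) % n]
    return a

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4))
# Applying one permutation and then the other recovers the original matrix.
assert np.allclose(permute_up_diagonals(permute_down_diagonals(a)), a)
```

Any invertible permutation would do, per the disclosure; these two are simply the pair given in closed form.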
FIGS. 17-19 illustratively represent the manner in which different types of unitary matrices [U1] can be used to represent various forms of modulation. For example, FIG. 17 illustrates a unitary matrix [U1] in the form of an identity matrix 1710 representative of a time division multiplexed transmission basis; that is, a matrix of basis vectors where each column and each row is comprised of a single "1" and multiple "0" values. When the identity matrix 1710 is combined with a data matrix [D], the result corresponds to each column of [D] being transmitted in a different time slot corresponding to one of the time lines 1700 (i.e., the columns of [D] are transmitted in a time division multiplexed series of transmissions).

FIG. 18 illustrates a unitary matrix [U1] in the form of a DFT basis vector matrix 1810 representative of a frequency division multiplexed transmission basis. The DFT basis vector matrix 1810 is comprised of N column entries representing rotating phasor or tone basis vectors. When the DFT basis vector matrix 1810 is multiplied by a data matrix [D], the columns of the resulting matrix represent rotating phasors, each having a different frequency offset or tone as represented by the set of time lines 1800. This corresponds to each column of [D] being transmitted at a different frequency offset or tone.

FIG. 19 illustrates a unitary matrix [U1] in the form of a Hadamard matrix 1910 representative of a code division multiplexed transmission basis. The Hadamard matrix 1910 is comprised of a set of quasi-random plus and minus basis vectors. When the Hadamard matrix 1910 is multiplied by a data matrix [D], the columns of the resulting matrix represent different quasi-random code division multiplexed signals as represented by the set of time lines 1900. This corresponds to each column of [D] being transmitted using a different quasi-random code.

In principle, [U1] and [U2], if both are being used, may be a wide variety of different unitary matrices.
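The three modulation bases of FIGS. 17-19 can be sketched directly as matrices. The following minimal numpy example (variable names are ours) builds a small identity, DFT, and Hadamard matrix, verifies that each is unitary, and applies each to a toy data frame [D], mirroring the time-, frequency-, and code-division interpretations above.

```python
import numpy as np

N = 4
D = np.arange(N * N, dtype=complex).reshape(N, N)  # toy N x N data frame

# Time division basis (FIG. 17): identity matrix; each column of [D]
# occupies its own time slot.
U1_tdm = np.eye(N)

# Frequency division basis (FIG. 18): DFT matrix; each column of [D]
# rides a different rotating phasor (tone).
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
U1_fdm = np.exp(-2j * np.pi * j * k / N) / np.sqrt(N)

# Code division basis (FIG. 19): Hadamard matrix of +/-1 rows
# (Sylvester construction), normalized so the matrix is unitary.
H2 = np.array([[1, 1], [1, -1]])
U1_cdm = np.kron(H2, H2) / 2.0

for U1 in (U1_tdm, U1_fdm, U1_cdm):
    assert np.allclose(U1 @ U1.conj().T, np.eye(N))  # all three are unitary
    spread = U1 @ D  # spreads the data frame over the chosen basis
```

The unitarity check matters because, as discussed later, the receiver undoes each basis by multiplying with its Hermitian transpose.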
For example, [U1] may be a Discrete Fourier Transform (DFT) matrix and [U2] may be a Hadamard matrix. Alternatively, [U1] may be a DFT matrix and [U2] may be a chirp matrix. Alternatively, [U1] may be a DFT matrix and [U2] may also be a DFT matrix, and so on. Thus although, for purposes of explaining certain aspects of the OTFS method, certain specific examples and embodiments of [U1] and [U2] will be given, these specific examples and embodiments are not intended to be limiting.

Note that a chirp matrix [V] is commonly defined in signal processing as a matrix where, if Ψ is the chirp rate:

  [V] = diag(Ψ, Ψ², . . . , Ψⁿ), Ψ = e^{jψ}, and frequency = e^{jω}

where ω is the initial center frequency of the spectrum. Alternatively, a different chirp matrix may be used, filled with elements of the form:

  V_{j,k} = e^{−i2πkj²/N}

where j is the matrix row, k is the matrix column, and N is the size of the matrix.

Other commonly used orthonormal matrices, which may be used for [U1], [U2], or [U3] (to be discussed), include Discrete Fourier matrices, Polynomial exponent matrices, harmonic oscillatory matrices, the previously discussed Hadamard matrices, Walsh matrices, Haar matrices, Paley matrices, Williamson matrices, M-sequence matrices, Legendre matrices, Jacobi matrices, Householder matrices, Rotation matrices, and Permutation matrices. The inverses of these matrices may also be used.

As will be discussed, in some embodiments [U1] can be understood as being a time-frequency shifting matrix, and [U2] can be understood as being a spectral shaping matrix. In order to preserve readability, [U1] will thus often be referred to as a first time-frequency shifting matrix, and [U2] will thus often be referred to as a second spectral shaping matrix. However, use of this nomenclature is also not intended to be limiting.
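The diagonal chirp matrix defined above can be sketched in a few lines. This is our own illustrative construction, assuming a unit-magnitude chirp factor Ψ = e^{jψ} as in the definition; under that assumption every diagonal entry has magnitude one, so the diagonal chirp matrix is unitary and therefore usable wherever a unitary [U1] or [U2] is called for.

```python
import numpy as np

N = 8
psi = np.exp(1j * 0.3)  # unit-magnitude chirp factor; the rate 0.3 is arbitrary

# [V] = diag(psi, psi^2, ..., psi^N)
V = np.diag(psi ** np.arange(1, N + 1))

# Because |psi| = 1, each diagonal entry has unit magnitude, so V is unitary.
assert np.allclose(V @ V.conj().T, np.eye(N))
```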
In embodiments in which the optional permutation or multiplication by a second matrix [U2] is not performed, the [U1] matrix facilitates time shifting by providing a framework through which the elements of the resulting transformed data matrix may be transmitted at different times (e.g., on a column-by-column basis or any other ordered basis).

Turning to some more specific embodiments, in some embodiments [U1] may have rows that correspond to Legendre symbols, or spreading sequences, where each successive row in the matrix may be a cyclically shifted version of the Legendre symbols in the row above it. These Legendre symbols will occasionally also be referred to in the alternative as base vectors, and occasionally as spectrum-spreading codes.

In some embodiments, [U2] may be chosen to be a Discrete Fourier Transform (DFT) matrix or an Inverse Discrete Fourier Transform (IDFT) matrix. This DFT or IDFT matrix can be used to take a sequence of real or complex numbers, such as the N×N P([U1][D]) matrix, and further modulate P([U1][D]) into a series of spectral shapes suitable for wireless transmission. The individual rows of the DFT and IDFT matrix [U2] will occasionally be referred to in the alternative as Fourier vectors. In general, the Fourier vectors may create complex sinusoidal waveforms (tones or spectral shapes) of the type:

  X_{j,k} = e^{(−i·2·π·j·k)/N}

where, for an N×N DFT matrix, X_{j,k} is the coefficient of the Fourier vector in row k, column j of the DFT matrix. The products of this Fourier vector can be considered to be tones or spectral shapes.

Although certain specific [U1] and [U2] can be used to transmit any given data frame [D], when multiple data frames [D] are being transmitted simultaneously, the specific [U1] and [U2] chosen may vary between data frames [D], and indeed may be dynamically optimized to avoid certain communications link impairments over the course of transmitting many data frames [D] over a communications session.
This process of convolution and modulation will normally be done by an electronic device, such as a microprocessor-equipped, digital-signal-processor-equipped, or other electronic circuit that controls the convolution and modulation parts of the wireless radio transmitter. Similarly, the process of receiving and demodulation will also generally rely upon a microprocessor-equipped, digital-signal-processor-equipped, or other electronic circuit that controls the demodulation, accumulation, and deconvolution parts of the wireless radio receiver.

Thus again using matrix multiplication, and again remembering that these are all N×N matrices, [P([U1][D])][U2], where [U2] is optional, represents the TFS data matrix that the transmitter will distribute over a plurality of time spreading intervals, time slices, frequencies, and spectral shapes. Note again that, as a result of the various matrix operation and optional permutation steps, a single element or symbol from the original N×N data matrix [D] will, after modulation and transmission, be distributed throughout the different time spreading intervals, time slices, frequencies, and spectral shapes, and then reassembled by the receiver and deconvolved back to the original single data element or symbol.

FIG. 6A illustratively represents an exemplary OTFS method 600 for transmitting data over a wireless link such as the communication channel 320. FIG. 6B illustrates components of an exemplary OTFS transmitter 650 for performing the method of FIG. 6A. The method 600 can be performed, for example, by components of the OTFS transceiver 400 of FIG. 4 or by components of the OTFS transmitter 650 of FIG. 6B. In the example of FIG. 6, the payload intended for transmission comprises an input data frame 601 composed of an N×N matrix [D] containing N² symbols or data elements. As shown in FIG. 6A, a succession of data frames 601 are provided, each of which defines a matrix [D] of N×N data elements.
Each matrix [D] can be provided by a digital data source 660 in the OTFS transmitter 650. The elements of the matrix [D] can be complex values selected from points in a constellation matrix such as, for example, a 16-point constellation of a 16QAM quantizer. In order to encode this data, an OTFS digital encoder 665 will select an N×N matrix [U1] 602 and, in some embodiments, select an N×N matrix [U2] 604 (stage 606). As previously discussed, in some embodiments the matrix [U1] 602 may be a matrix composed of Legendre symbols or a Hadamard matrix. This matrix [U1] 602 will often be designed to time and frequency shift the symbols or elements in the underlying data matrix [D] 601.

The matrix [U2] 604 may be a DFT or IDFT matrix and is often designed to spectrally shape the signals. For example, in some embodiments the matrix [U2] 604 may contain the coefficients to direct the transmitter circuits of the OTFS modulator 430 to transform the signals over time in an OFDM manner, such as by quadrature amplitude modulation (QAM) or phase-shift keying, or another scheme.

Usually the matrix [D] 601 will be matrix multiplied by the matrix [U1] 602 by the digital encoder 665 at stage 610, and the matrix product of this operation, [U1][D], is then optionally permuted by the digital encoder 665, forming P([U1][D]) (stage 611). In embodiments in which a spectral shaping matrix is utilized, the digital encoder 665 multiplies the permuted matrix P([U1][D]) by matrix [U2] 604, forming an N×N TFS data matrix, which may also be referred to herein as an OTFS transmission matrix (stage 614). The various elements of the TFS matrix are then selected by the OTFS analog modulator 670, usually a column of N elements at a time, on a single-element-at-a-time basis (stage 616). The selected elements are then used to generate a modulated signal that is transmitted via an antenna 680 (stage 618).
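The encoding pipeline just described, [D] → [U1][D] → P([U1][D]) → [P([U1][D])][U2], can be sketched end to end. This is a minimal numpy illustration under our own choices of matrices: a random unitary stand-in for [U1], an orthonormal IDFT matrix for [U2], toy 16QAM-like symbols for [D], and the diagonal-translating permutation given earlier.

```python
import numpy as np

def otfs_encode(D, U1, U2, permute):
    """Form the TFS transmission matrix [P([U1][D])][U2]."""
    return permute(U1 @ D) @ U2

def P(a):  # b[i, j] = a[i, (j - i) mod N], the permutation described earlier
    n = a.shape[0]
    return np.array([[a[i, (j - i) % n] for j in range(n)] for i in range(n)])

N = 4
rng = np.random.default_rng(1)

# Toy 16QAM-like data frame: real and imaginary parts drawn from {-3,-1,1,3}.
levels = np.array([-3, -1, 1, 3])
D = rng.choice(levels, (N, N)) + 1j * rng.choice(levels, (N, N))

U1 = np.linalg.qr(rng.standard_normal((N, N)))[0]  # stand-in unitary matrix
U2 = np.fft.ifft(np.eye(N), norm="ortho")          # unitary IDFT matrix

TFS = otfs_encode(D, U1, U2, P)
assert TFS.shape == (N, N)
# The modulator then sends TFS one column per time-spreading interval,
# one element per time slice: TFS[:, 0] first, then TFS[:, 1], and so on.
```

Because every factor applied to [D] is invertible (unitary matrices, invertible permutation), the receiver can undo the pipeline exactly, as described in the reception section below.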
More specifically, in one embodiment the particular real and imaginary components of each individual TFS matrix element are used to control a time-variant radio signal 620 during each time slice. Thus one N-element column of the TFS matrix will usually be sent during each time-spreading interval 608, with each element from this column being sent in one of the N time slices 612 of the time-spreading interval 608. Neglecting overhead effects, generally a complete N×N TFS matrix can be transmitted over N single time-spreading intervals 622.

Attention is now directed to FIG. 6C, which is a flowchart representative of an exemplary OTFS data transmission method 690 capable of being implemented by the OTFS transmitter 650 or, for example, by the OTFS transmitter 2100 of FIG. 21 (discussed below). As shown, the method includes establishing a time-frequency transformation matrix of at least two dimensions (stage 692). The time-frequency transformation matrix is then combined with a data matrix (stage 694). The method 690 further includes providing a transformed matrix based upon the combining of the time-frequency transformation matrix and the data matrix (stage 696). A modulated signal is then generated in accordance with elements of the transformed data matrix (stage 698).

Attention is now directed to FIG. 21A, which is a block diagram representation of an OTFS transmitter module 2100 capable of performing functions of the OTFS transmitter 650 (FIG. 6B) in order to implement the transmission method 600 of FIGS. 6A and 6C. With reference to FIG. 21 and FIG. 6B, the transmitter 2100 includes a digital processor 2102 configured for inclusion within the digital encoder 665 and a modulator 2104 configured for inclusion within the analog modulator component 670. The digital processor 2102, which may be a microprocessor, digital signal processor, or other similar device, accepts as input the data matrix [D] 2101 and may either generate or accept as inputs a [U1] matrix 2102 and a [U2] matrix 2104.
A matrix generation routine 2105 stored within a memory associated with the processor 2102 will, when executed by the processor 2102, generate a TFS matrix 2108 (FIG. 21B), which will generally be comprised of a set of complex-valued elements. Once generated, a scanning/selection routine 2106 will, when executed by the processor 2102, select individual elements from the TFS matrix 2108, often by first selecting one column of N elements from the TFS matrix and then scanning down this column, selecting one individual element at a time. Generally one new element will be selected every time slice 2112 (FIG. 21C).

Thus every successive time slice, one element from the TFS matrix 2108 will be used to control the modulator 2104. In one embodiment of the OTFS method, the modulator 2104 includes modules 2132 and 2134 for separating the element into its real and imaginary components, modules 2142 and 2144 for chopping the resultant real and imaginary components, and filtering modules 2152 and 2154 for subsequently performing filtering operations. The filtered results are then used to control the operation of sin and cosine generators 2162 and 2164, the outputs of which are upconverted using an RF carrier in order to produce an analog radio waveform 2120. This waveform then travels to the receiver, where it is demodulated and deconvolved as will be described below with reference to FIG. 7.

Thus in this scheme (again neglecting overhead effects), element t1,1 from the first column of the TFS matrix can be sent in the first time slice, and the Nth element from the first column of the TFS matrix can be sent in the last time slice of the first time-spreading interval 2124. The next element t1,2 from the second column of the TFS matrix can be sent in the first time slice of the second time-spreading interval 2128, and so on.
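The column-by-column, element-per-time-slice scanning order just described can be sketched as a simple generator (the function name and tuple layout are ours, purely for illustration): column n of the TFS matrix is sent during time-spreading interval n, with one element per time slice.

```python
import numpy as np

def scan_tfs(tfs):
    """Yield (interval, time_slice, element) in transmission order:
    one column per time-spreading interval, one element per time slice."""
    N = tfs.shape[0]
    for interval in range(N):          # column index = spreading interval
        for time_slice in range(N):    # row index = time slice within interval
            yield interval, time_slice, tfs[time_slice, interval]

tfs = np.arange(9).reshape(3, 3)       # stand-in 3 x 3 TFS matrix
stream = list(scan_tfs(tfs))
assert len(stream) == 9                # N^2 elements over N intervals
assert stream[0][2] == tfs[0, 0]       # t[1,1] goes out in the first slice
assert stream[3][2] == tfs[0, 1]       # interval 1 starts the second column
```

Other scan orders (rows, diagonals, or an arbitrary transmit vector, as mentioned below) would simply reorder the yielded tuples.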
Thus, the modulator 2104 transmits a composite waveform during each time-spreading interval, where the value of the waveform is determined by a different element of the TFS matrix 2108 during each time slice of the spreading interval.

In an alternative embodiment, diagonals of the TFS data matrix may be transmitted over a series of single time-spreading intervals, one diagonal per single time-spreading interval, so that N diagonals of the final N×N transmission matrix are transmitted over N time intervals. In other embodiments the order in which individual elements of the TFS transmission matrix [[U1][D]][U2] are transmitted across the communications link is determined by a transmit matrix or transmit vector.

In some embodiments, there may be some overhead to this basic model. Thus, for example, with some time padding (additional time slices or additional time-spreading intervals), checksums or other verification/handshaking data, which could be transmitted in a non-convolved manner, could be sent back by the receiver to the transmitter on a per-time-spreading-interval basis, per N time-spreading intervals, or even on a per-time-slice interval in order to request retransmission of certain parts of the TFS data matrix as needed.

FIG. 9 illustratively represents an exemplary transmit frame 900 comprised of a plurality of transmit blocks 920 separated by guard times 950. Each transmit block 920 includes data corresponding to a portion of the [D] matrix, such as a column as shown in FIG. 9, or a row, or sub-blocks of the [D] matrix. The guard time 950 can provide the receiver with time to resolve Doppler shift in transmitted signals. The Doppler shift causes delays or advances in the receive time, and the OTFS receiver 455 can use the spaces between the transmit blocks 920-1, 920-2, 920-3, 920-4 and 920-5 to capture data without interference from other users. The guard times 950 can be used with either the first or second forms of the OTFS methodology.
The guard times 950 can be utilized by other transmitters in the area so long as the transmission uses different codes (e.g., Hadamard codes) than those used to transmit the frame 900.

Attention is now directed to FIG. 20, which illustrates a sequence of L OTFS matrices 2010, each of dimension N×N. The L OTFS matrices 2010 collectively comprise a frame of data which includes L×N×N symbols spread out in both time and frequency. The matrices 2010-1 through 2010-L are transmitted back-to-back and include guard times (Tg) between matrices 2010. The N columns 2020 of a given matrix 2010 are transmitted on a column-by-column basis, typically with guard times inserted between the transmission of each column 2020. Therefore, the L frames 2010 are transmitted in a time greater than N×[L×(N×T+Tg)], where T is the time to transmit one column of symbols, inclusive of the guard times described above.

As previously discussed, in some embodiments the first N×N time-spreading matrix [U1] may be constructed out of N rows of a cyclically shifted Legendre symbol sequence or pseudorandom number sequence of length N. That is, the entire N×N spreading matrix is filled with all of the various cyclic permutations of the same Legendre symbols. In some embodiments, this version of the [U1] matrix can be used for spectrum spreading purposes and may, for example, instruct the transmitter to modulate the elements of any matrix that it affects rapidly over time, that is, with a chip rate that is much faster than the information signal bit rate of the elements of the matrix that the Legendre symbols are operating on.

In some embodiments, the second N×N spectral shaping matrix [U2] can be a Discrete Fourier Transform (DFT) or an Inverse Discrete Fourier Transform (IDFT) matrix. These DFT and IDFT matrices can instruct the transmitter to spectrally shift the elements of any matrix that the DFT matrix coefficients act upon.
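The cyclically shifted Legendre construction of [U1] described above can be sketched as follows. This is our own minimal illustration, assuming N is prime so that Legendre symbols are defined, and mapping index 0 to +1 (a common convention, but an assumption here): the sequence is +1 for quadratic residues mod N and −1 for non-residues, and each row of the spreading matrix is a cyclic shift of the row above.

```python
import numpy as np

def legendre_sequence(p):
    """Length-p +/-1 sequence of Legendre symbols mod a prime p:
    +1 for quadratic residues, -1 for non-residues, index 0 -> +1."""
    residues = {(x * x) % p for x in range(1, p)}
    return np.array([1] + [1 if k in residues else -1 for k in range(1, p)])

def cyclic_spreading_matrix(seq):
    """N x N matrix whose rows are successive cyclic shifts of one sequence,
    i.e., all cyclic permutations of the same spreading code."""
    n = len(seq)
    return np.array([np.roll(seq, i) for i in range(n)])

U1 = cyclic_spreading_matrix(legendre_sequence(7))
assert U1.shape == (7, 7)
assert set(np.unique(U1)) == {-1, 1}   # pure +/-1 chips, as for spreading codes
```

The same `cyclic_spreading_matrix` helper applies equally to any pseudorandom +/-1 sequence of length N, per the text above.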
Although many different modulation schemes may be used, in some embodiments this modulation may be chosen to be an Orthogonal Frequency-Division Multiplexing (OFDM) type modulation, in which case a modulation scheme such as quadrature amplitude modulation or phase-shift keying may be used, and this in turn may optionally be divided over many closely-spaced orthogonal sub-carriers.

Often the actual choice of which coefficients to use for the first N×N time-frequency shifting matrix [U1] and which coefficients to use for the second N×N spectral shaping matrix [U2] may depend on the conditions present in the communications channel 320. If, for example, the communications channel 320 is subjected to a particular type of impairment, such as wide band noise, narrow-band interference, impulse noise, Doppler shift, crosstalk and so on, then some first N×N time-frequency shifting matrices and some second N×N spectral shaping matrices will be better able to cope with these impairments. In some embodiments of the OTFS method, the transmitter and receiver will attempt to measure these channel impairments, and may suggest alternate types of first N×N time-frequency shifting matrices [U1] and second N×N spectral shaping matrices [U2] to each other in order to minimize the data loss caused by such impairments.

Various modifications of the above-described data transmission process represented by the matrix multiplication [[U1][D]][U2] are also within the scope of the present disclosure and are described below with reference to FIGS. 13 and 15. For example, FIGS. 13A and 13B show a first alternative OTFS transmission scheme. In the embodiment of FIG. 13, the data matrix [D] may be further convolved by means of a third unitary matrix [U3] 1306, which may be an IDFT matrix. In one implementation [U1] may be a DFT matrix and the matrix [U2] 1308 may be the product of a DFT matrix and a base.
In this scheme, the process of scanning and transmitting the data is represented by the previously described permutation operation P. The basic transmission process can thus be represented as [U3]*[P([U1][D])]*[U2]. FIG. 13A shows the matrix [D], identified by reference numeral 1300, and the matrix product ([U1][D]), identified by reference numeral 1302. FIG. 13A further shows that the result of the permutation operation P is the permuted version of the matrix product ([U1][D]), i.e., P([U1][D]), identified by reference numeral 1304. In the representation of FIG. 13A, at least certain of the effects of the permutation operation P are represented by the differing directions of arrow 1305 and arrow 1305′.

FIG. 13B shows a result 1310 of the final matrix product [U3][P([U1][D])][U2] where, again, the permuted version of the matrix product ([U1][D]), i.e., P([U1][D]), is identified by reference numeral 1304 (without arrow 1305′). In various embodiments the matrix [U3] 1306 may comprise a DFT matrix, an IDFT matrix, or a trivial identity matrix (in which case this first alternative scheme becomes essentially equivalent to a scheme in which a matrix [U3] is not employed).

Attention is now directed to FIGS. 15A and 15B, which illustrate a second alternative OTFS transmission scheme. As shown, the original data matrix [D] is identified by reference numeral 1500, the matrix product [U1][D] is identified by reference numeral 1502, the permuted matrix P([U1][D]) is identified by reference numeral 1504, and the matrix [U2] is identified by reference numeral 1506. In the representation of FIG. 15A, at least certain of the effects of the permutation operation P are represented by the differing directions of arrow 1507 and arrow 1507′. In one embodiment [U1] may be a Hadamard matrix; that is, a square matrix composed of mutually orthogonal rows and either +1 or −1 coefficients. This matrix has the property that H*Hᵀ = nIₙ, where Iₙ is the N×N identity matrix and Hᵀ is the transpose of H.
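The Hadamard property H*Hᵀ = nIₙ stated above is easy to verify numerically. The following is a minimal sketch using the standard Sylvester construction (our choice; it yields Hadamard matrices for orders that are powers of two), checking that the rows are mutually orthogonal +/-1 vectors.

```python
import numpy as np

def sylvester_hadamard(n):
    """Hadamard matrix of order n (n a power of two), Sylvester construction:
    H_{2m} = [[H_m, H_m], [H_m, -H_m]] starting from H_1 = [[1]]."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 8
H = sylvester_hadamard(n)
# Mutually orthogonal +/-1 rows: H * H^T = n * I_n.
assert np.array_equal(H @ H.T, n * np.eye(n, dtype=int))
```

Dividing H by the square root of n gives the unitary, normalized form used as a [U1] basis earlier.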
As shown in FIG. 15B, consistent with this alternative OTFS transmission scheme, the matrix corresponding to the transmitted signal may be expressed as [P([U1][D])]*[U2] and is identified by reference numeral 1508 where, again, the permuted matrix P([U1][D]) is identified by reference numeral 1504 (without arrow 1507′).

Signal Reception and Data Reconstruction

Attention is now directed to FIG. 3C, which illustrates a process 360 by which the OTFS transceiver 315-2 of the receiving device 330 may operate to receive a transmitted data frame. Within the OTFS transceiver 315-2, the process performed during transmission is essentially done in reverse. Here the time and frequency spread replica of the TFS data matrix ([P([U1][D])][U2])′ (where the ′ annotation is indicative of a replicated matrix) is accumulated over multiple time spreading intervals, time slices, frequencies, and spectral shapes, and then deconvolved to solve for [D] by performing the following operations:

1: Receive ([P([U1][D])][U2])′ (stage 362).
2: Perform a first right multiplication by the Hermitian matrix of the [U2] matrix, [U2H], if one was used for transmission, thus creating P([U1][D]) (stage 364).
3: Inverse permute this replica, forming P−1(P([U1][D])), if a permutation was used during transmission, thus creating [U1][D] (stage 368).
4: Perform a second left multiplication by the Hermitian matrix of the [U1] matrix, [U1H], thus recreating [D] (stage 370).

As a consequence of noise and other impairments in the channel, use of information matrices and other noise reduction methods may be employed to compensate for data loss or distortion due to various impairments in the communications link.
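The four reception stages above can be sketched as a noiseless round trip. This minimal numpy example (matrix choices are ours: a random complex unitary [U1] and a unitary DFT for [U2]) encodes a data frame as P([U1][D])[U2], then undoes [U2], the permutation, and [U1] in turn, recovering [D] exactly, which is what makes the deconvolution well posed before channel impairments are considered.

```python
import numpy as np

def P(a):      # b[i, j] = a[i, (j - i) mod N]
    n = a.shape[0]
    return np.array([[a[i, (j - i) % n] for j in range(n)] for i in range(n)])

def P_inv(b):  # inverse permutation: a[i, j] = b[i, (j + i) mod N]
    n = b.shape[0]
    return np.array([[b[i, (j + i) % n] for j in range(n)] for i in range(n)])

N = 4
rng = np.random.default_rng(2)
D = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
U1 = np.linalg.qr(rng.standard_normal((N, N))
                  + 1j * rng.standard_normal((N, N)))[0]  # complex unitary
U2 = np.fft.fft(np.eye(N), norm="ortho")                  # unitary DFT matrix

tx = P(U1 @ D) @ U2          # transmitted TFS matrix [P([U1][D])][U2]
r = tx.copy()                # noiseless channel for this sketch

step1 = r @ U2.conj().T      # undo [U2]: recovers P([U1][D])
step2 = P_inv(step1)         # undo the permutation: recovers [U1][D]
D_hat = U1.conj().T @ step2  # undo [U1]: recovers [D]
assert np.allclose(D_hat, D)
```

With channel noise added to `r`, `D_hat` becomes an estimate rather than an exact copy, which is where the noise reduction techniques mentioned in the text come in.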
Indeed, it may be readily appreciated that one advantage of spreading out the original elements of the data frame [D] over a large range of times, frequencies, and spectral shapes as contemplated by embodiments of the OTFS method is that it becomes straightforward to compensate for the loss during transmission of information associated with a few of the many transmission times, frequencies and spectral shapes.

Although various deconvolution methods may be used in embodiments of the OTFS method, the use of Hermitian matrices may be particularly suitable since, in general, for the Hermitian matrix [UH] of any unitary matrix [U], the following relationship applies:

  [U][UH] = [I]

where [I] is the identity matrix.

Communications links are not, of course, capable of transmitting data at an infinite rate. Accordingly, in one embodiment of the OTFS method the first N×N time-frequency shifting matrix [U1], the second N×N spectral shaping matrix [U2] (when one is used), and the elements of the data frame, as well as the constraints of the communications link (e.g. available bandwidth, power, amount of time, etc.), are chosen so that in balance (and neglecting overhead), at least N elements of the N×N TFS data matrix can be transmitted over the communications link in one time-spreading interval. More specifically (and again neglecting overhead), one element of the N×N TFS data matrix will generally be transmitted during each time slice of each time-spreading interval. Given this rate of communicating data, the entire TFS data matrix may then typically be communicated over N time-spreading intervals, and this assumption will generally be used for this discussion.
However, it should be evident that given other balancing considerations between the first N×N time-frequency shifting matrix, the second N×N spectral shaping matrix, and the elements of the data frame, as well as the constraints of the communications link, the entire TFS data matrix may be communicated in fewer than N time-spreading intervals, or in more than N time-spreading intervals as well.

As discussed above, the contents of the TFS data matrix may be transmitted by selecting different elements from the TFS data matrix, and sending them over the communications link, on a one-element-per-time-slice basis, over multiple spreading time intervals. Although in principle this process of selecting different elements of the TFS data matrix can be accomplished by a variety of different methods, such as sending successive rows of the TFS data matrix each single time-spreading interval, sending successive columns of the TFS data matrix each successive time-spreading interval, sending successive diagonals of the TFS data matrix each successive time-spreading interval, and so on, from the standpoint of communications link capacity, minimizing interference, and reducing ambiguity, some schemes are often better than others. Thus, often the [U1] and [U2] matrices, as well as the permutation scheme P, may be chosen to optimize transmission efficiency in response to various impairments in the communications link.
As shown in FIG. 4B, an exemplary process 404 pursuant to which an OTFS transceiver may transmit, receive and reconstruct information utilizing a TFS data matrix may thus be generally characterized as follows:

1: For each single time-spreading interval, selecting N different elements of the TFS data matrix (often successive columns of the TFS matrix will be chosen) (stage 482).
2: Over different time slices in the given time-spreading interval, selecting one element (a different element each time slice) from the N different elements of the TFS data matrix, modulating this element, and transmitting this element so that each different element occupies its own time slice (stage 484).
3: Receiving these N different replica elements of the transmitted TFS data matrix over different said time slices in the given time-spreading interval (stage 486).
4: Demodulating these N different elements of the TFS data matrix (stage 488).
5: Repeating stages 482, 484, 486 and 488 up to a total of N times in order to reassemble a replica of the TFS data matrix at the receiver (stage 490).

This method assumes knowledge by the receiver of the first N×N spreading code matrix [U1], the second N×N spectral shaping matrix [U2], the permutation scheme P, as well as the particular scheme used to select elements from the TFS matrix to transmit over various periods of time. In one embodiment the receiver takes the accumulated TFS data matrix and solves for the original N×N data frame using standard linear algebra methods. It may be appreciated that because each original data symbol from the original data frame [D] has essentially been distributed over the entire TFS data matrix, it may not be possible to reconstruct an arbitrary element or symbol from the data frame [D] until the complete TFS data matrix is received by the receiver.
Attention is now directed to FIG. 7A, which illustratively represents an exemplary method 700 for demodulating OTFS-modulated data over a wireless link such as the communication channel 320. FIG. 7B illustrates components of an exemplary OTFS receiver for performing the method of FIG. 7A. The method 700 can be performed by the OTFS receiver module 455 of the OTFS transceiver 400 of FIG. 4A or by the OTFS receiver 750 of FIG. 7B.

Just as the OTFS transmitter 405 is often a hybrid analog/digital device, capable of performing matrix calculations in the digital portion and then converting the results to analog signals in the analog portion, so too the OTFS receiver 750 will typically be capable of receiving and demodulating the radio signals in the analog receiver 770 of the OTFS receiver 750, and then often decoding or deconvolving these signals in the digital portion of the digital OTFS receiver 780.

As shown in FIG. 7A, received signals 720 corresponding to channel-impaired versions of the transmitted radio signals 620 may be received by, for example, an antenna 760 of the OTFS receiver 750. The received signals 720 will generally not comprise exact copies of the transmitted signals 620 because of the signal artifacts, impairments, or distortions engendered by the communication channel 320. Thus replicas, but not exact copies, of the original elements of the TFS matrix are received and demodulated 722 by the OTFS analog receiver 770 every time slice 612. In an exemplary embodiment one column of the TFS matrix is demodulated at stage 722 during every spreading time interval 608.
As a consequence, the OTFS demodulator 460 will accumulate these elements over N single time-spreading intervals, eventually accumulating the elements necessary to create a replica of the original TFS matrix (stage 724). In order to decode or deconvolve the TFS matrix accumulated during stage 724, the digital OTFS data receiver 780 right multiplies, during a stage 726, the TFS matrix by the Hermitian matrix of the [U2] matrix, i.e., [U2H], established at a stage 704. Next, the digital OTFS data receiver 780 performs, at a stage 728, an inverse permutation (P−1) of the result of this multiplication. The digital OTFS data receiver 780 then deconvolves the TFS matrix in order to reconstruct a replica 732 of the original data matrix [D] by, in a stage 730, left multiplying the result of stage 728 by the Hermitian of the original N×N time-frequency shifting matrix [U1], i.e., [U1H], established at a stage 702. Because the reconstructed signal will generally have some noise and distortion due to various communications link impairments, various standard noise reduction and statistical averaging techniques, such as information matrices, may be used to assist in the reconstruction process (not shown). Each replicated frame 732 of each original data matrix [D] may be stored within digital data storage 782 (stage 740).

Attention is now directed to FIG. 7C, which is a flowchart representative of an exemplary OTFS data demodulation method 790 capable of being implemented by the OTFS receiver module 455 of the OTFS transceiver 400 or, for example, by the OTFS receiver 750 of FIG. 7B. As shown in FIG. 7C, the method includes establishing a time-frequency detransformation matrix of at least two dimensions (stage 792). The method further includes receiving a modulated signal formed using a time-frequency transformation matrix that is a Hermitian of the detransformation matrix (stage 794). The modulated signal is then demodulated to form a transformed data matrix (stage 796).
The method further includes generating a data matrix by combining the transformed data matrix and the detransformation matrix (stage 798). Attention is now directed to FIGS. 16A and 16B, which illustrate an alternative OTFS signal reception scheme corresponding to the alternative OTFS transmission scheme of FIG. 15. As shown in FIG. 16A, the matrix [r] 1600 of received data is demodulated and deconvolved (decoded) by forming the Hermitian matrices of the matrices [U1] and [U2] used to encode and modulate the data [D], and by applying the inverse permutation operation P−1 to undo the original permutation operation P used to scan and transmit the data over multiple time intervals. In the illustration of FIGS. 16A and 16B, the inverse permutation P−1([r][U2H]) is identified by reference numeral 1604 and the reconstructed data matrix [D] (created from [U1H]*P−1([r]*[U2H])) is identified by the reference numeral 1606. Attention is now directed to FIG. 15, which illustrates an alternative OTFS transmission scheme. As shown, the original data matrix [D] is identified by reference numeral 1500, the matrix product [U1][D] is identified by reference numeral 1502, the permuted matrix P([U1][D]) is identified by reference numeral 1504, and the matrix [U2] is identified by reference numeral 1506. In the representation of FIG. 15, at least certain of the effects of the permutation operation P are represented by the differing directions of arrow 1507 and arrow 1507′. In one embodiment [U1] may be a Hadamard matrix; that is, a square matrix composed of mutually orthogonal rows with coefficients of either +1 or −1. Such a matrix has the property that H*HT = N*IN, where IN is the N×N identity matrix and HT is the transpose of H. Consistent with the alternative OTFS transmission scheme of FIG. 15, the matrix corresponding to the transmitted signal may be expressed as [P([U1][D])]*[U2] and is identified by reference numeral 1508.
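The modulation and demodulation chain of FIGS. 15 and 16 can be sketched numerically. The following is a minimal illustration, not the patent's implementation: it assumes a normalized Hadamard matrix for [U1], a normalized DFT matrix for [U2], and a simple row permutation as a stand-in for the scan operation P (the exact permutation used in the scheme is not specified here).

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)

# [U1]: normalized Hadamard matrix (Sylvester construction, N a power of 2).
def hadamard(n):
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

U1 = hadamard(N)
U2 = np.fft.fft(np.eye(N)) / np.sqrt(N)   # [U2]: normalized DFT matrix

# Illustrative permutation P applied to the rows of a matrix, and its inverse.
perm = rng.permutation(N)
P = lambda M: M[perm, :]
P_inv = lambda M: M[np.argsort(perm), :]

D = rng.integers(0, 4, (N, N)).astype(complex)   # frame of data symbols [D]

# Transmit side (FIG. 15): r = [P([U1][D])]*[U2]
r = P(U1 @ D) @ U2

# Receive side (FIG. 16): D_hat = [U1H] * P^-1( [r][U2H] )
D_hat = U1.conj().T @ P_inv(r @ U2.conj().T)
```

Because [U1] and [U2] are unitary and P is invertible, `D_hat` reproduces `D` exactly over a clean channel; channel impairments would perturb `r` before the deconvolution step.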
Various modifications of the above-described data reconstruction process are also within the scope of the present disclosure and are described below with reference to FIGS. 14 and 16. Turning now to FIGS. 14A and 14B, there is illustrated a scheme for reception and reconstruction of signals transmitted consistent with the first alternative OTFS transmission scheme of FIG. 13. In FIG. 14A, the data that the receiver has received and accumulated, after various communications link impairment effects, is represented as the [r] matrix 1400. The [r] matrix 1400 is demodulated and deconvolved (decoded) by forming the Hermitian matrices of the original [U1], [U2], and [U3] matrices originally used to encode and modulate the data [D], and by applying the inverse permutation operation P−1 to undo the original permutation operation P used to scan and transmit the data over multiple time intervals. Here [U1H] may be an IDFT matrix, [U3H] may be a DFT matrix, and [U2H] 1402 may be a DFT matrix times a base. As shown in FIG. 14A, P−1([r][U2H]) is identified by the reference numeral 1403. In FIG. 14B, P−1([U3H][r][U2H]) is identified by the reference numeral 1404 and the reconstructed data matrix [D] is identified by reference numeral 1406. Referring now to FIG. 11, there is illustrated an exemplary receive frame 1100 including guard times 1150 between groups of received data or blocks 1120. The receive frame 1100 corresponds to a frame received in response to transmission of a frame having characteristics equivalent to those illustrated in FIG. 9. As shown in FIG. 11, each receive block 1120 includes information comprising a portion of the [D] matrix, such as a column as shown in FIG. 11, or a row, or sub-blocks of the [D] matrix. The entire [D] matrix is received in a time Tf 1130 that includes N blocks 1120 and N−1 guard times 1150. The guard time 1150 provides the receiver with time to resolve Doppler shift in the received signals.
The Doppler shift causes delays or advances in the receive time, and the OTFS receiver 455 can use the guard times 1150 between the receive blocks 1120-1, 1120-2, 1120-3, 1120-4 and 1120-5 to capture data without interference from other users.
Second Form of OTFS Method
Attention is now directed to FIGS. 8, 10 and 12, to which reference will be made in describing aspects of a second form of the OTFS method. As mentioned previously, in the first OTFS method, which was described with reference to FIGS. 6 and 7, data is transmitted on a per time slice basis. In contrast, the second form of the OTFS method contemplates that data is transmitted as a series of waveforms, each of which generally subsists for a period of N time slices. More particularly, in embodiments of the second form of the OTFS method each data element within an input frame of data [D] including N² data elements is assigned a unique waveform derived from a basic waveform of duration N time slices. In one implementation this uniqueness is obtained by assigning to each data element a specific combination of a time and a frequency cyclic shift of the basic waveform. Consistent with one embodiment of the second form of the OTFS method, each element in the input frame of data [D] is multiplied by its corresponding unique waveform, thereby producing a series of N² weighted unique waveforms. Over one spreading time interval (generally composed of N time slices), all N² weighted unique waveforms corresponding to each data element in the frame of data [D] are simultaneously combined and transmitted. Further, in this embodiment a different unique basic waveform of length (or duration) of N time slices is used for each consecutive time-spreading interval. The set of N unique basic waveforms (i.e., one for each of the N time-spreading intervals) forms an orthonormal basis.
As may be appreciated, embodiments of the second form of the OTFS method contemplate that at least a part of [D] is transmitted within each of the N time-spreading intervals. To receive waveforms modulated and transmitted in accordance with this second form of the OTFS method, the received signal is, over each spreading interval of N time slices, correlated with the set of all N² waveforms previously assigned to each data element during the transmission process for that specific time-spreading interval. Upon performing this correlation, the receiver will produce a unique correlation score for each one of the N² data elements (the receiver will have, or be provided with, knowledge of the set of N² waveforms respectively assigned by the transmitter to the corresponding set of N² data elements). This process will generally be repeated over all N time-spreading intervals. The original data matrix [D] can thus be reconstructed by the receiver by, for each data element, summing the correlation scores over the N time-spreading intervals. This summation of correlation scores will typically yield the N² data elements of the frame of data [D]. Turning now to FIG. 8, there is shown an exemplary set of vectors used in convolving and deconvolving data in accordance with the second form of the OTFS method. Specifically, FIG. 8 depicts a base vector 802, data vector 800, Fourier vector 804 and transmit vector 806. In the embodiment of FIG. 8 the data vector 800 may include N elements (often one row, column, or diagonal) of an N×N [D] matrix, the base vector 802 may include N elements (often one row, column, or diagonal) of an N×N [U1] matrix, and the Fourier vector 804 may include N elements (often one row, column, or diagonal) of an N×N [U2] matrix, which may often comprise a DFT or IDFT matrix. The transmit frame 808 is composed of N single time-spreading intervals Tm 810, each of which is defined by a transmit vector 806 containing multiple (such as N) time slices.
In the embodiment of FIG. 8, the transmit vector 806 provides information used by the transmitter in selecting elements of the OTFS transmission matrix for transmission during each time slice of each transmission interval. In FIG. 8, the lines 812 are intended to indicate that each Fourier vector waveform 804 is manifested over one spreading time interval Tm 810. This is representative of a difference in wireless radio signal modulation between the second form of the OTFS method (in which each waveform exists over a time-spreading interval composed of multiple (e.g. N) time slices) and the first form of the OTFS method (in which the wireless signal is essentially transmitted on a per time slice basis). FIG. 10 illustrates aspects of a cyclic convolution method that may be used to convolve data and transmit data according to the second form of the OTFS methodology. As previously discussed, particularly in the case where [U1] is composed of a cyclically permuted Legendre number of length N, the process of convolving the data and scanning the data can be understood alternatively as being a cyclic convolution of the underlying data. Here the d0, dk, dN-1 can be understood as being the elements or symbols of the data vector 1000 component of the [D] matrix, the bm coefficients can be understood as representing the base vector 1002 components of the [U1] matrix, and the X coefficients can be understood as representing the Fourier vector 1004 components of the [U2] matrix. The combinations of the bm coefficients and the X coefficients are summed to form the transmit block Tm 1010. In the illustration of FIG. 10, each such combination is represented as [bm*Xk] and comprises the element-wise multiplication of the mth base vector with the kth Fourier vector. FIGS. 39A, 39B, 39C and 39D illustrate an exemplary OTFS encoding scheme pursuant to which N² data symbols dij of a data matrix are spread, using a pair of transform matrices, into N² different basis matrices Bij of basis frames Fu.
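The spreading of each data symbol dij over N² basis matrices can be sketched numerically. The following is an illustrative reading only: it takes [U1] as a unitary IDFT matrix and [U2] as a unitary DFT matrix, and forms each basis matrix Bij as the outer product of a column of [U1] with a row of [U2], so that the sum of data-weighted basis matrices equals [U1][D][U2]; the patent's own diagonal-matrix construction is equivalent in effect but not reproduced literally here.

```python
import numpy as np

N = 4
rng = np.random.default_rng(5)

U1 = np.fft.ifft(np.eye(N), norm="ortho")   # unitary IDFT matrix (illustrative [U1])
U2 = np.fft.fft(np.eye(N), norm="ortho")    # unitary DFT matrix (illustrative [U2])
D = rng.integers(0, 4, (N, N)).astype(complex)

# Basis matrix B_ij: outer product of the i-th column of U1 with the j-th row of U2.
B = [[np.outer(U1[:, i], U2[j, :]) for j in range(N)] for i in range(N)]

# Summing the N^2 data-weighted basis matrices yields the OTFS data matrix,
# equal here to [U1][D][U2]; every d_ij touches every element of the result.
otfs = sum(D[i, j] * B[i][j] for i in range(N) for j in range(N))
```

Since every entry of every Bij is nonzero for these DFT-based transforms, each data symbol is indeed spread over each element of the OTFS data matrix, which is the property FIG. 39A illustrates.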
With reference to FIG. 39A, a basis matrix includes N basis vectors b0-bN-1, each of length N. When [U1] is implemented using a DFT or IDFT matrix, the multiplication of the [D] matrix by [U1] and [U2] can be replicated by multiplying each of the basis vectors b0-bN-1 by a diagonal matrix formed by placing the N components of each DFT vector (column) along the main diagonal. The result of these multiplications is N² basis matrices. As shown in FIG. 39A, each data element dij is then multiplied by one of the N² basis matrices and the resulting N² matrices dij*Bij are summed to yield an OTFS data matrix. This is illustrated by, for example, the cyclic convolution of FIG. 10. Thus, each data element dij is spread over each element of the OTFS data matrix. FIG. 39B illustrates an incomplete basis matrix that includes N−l columns and N−k rows, where l and k are greater than or equal to one. The resulting multiplications spread only a portion of the data elements dij over the entire N×N OTFS matrix. FIG. 39C illustrates a basis frame that has N vectors of length M, where M is greater than N. The resulting basis frames include N×M elements. FIG. 39D illustrates an incomplete basis frame including N−l columns and M−k rows, where l and k are greater than or equal to one. The result is that fewer than all the data elements dij are spread across all of the N² basis frames. FIG. 12 shows a diagram of a cyclic deconvolution method that may be used to deconvolve received data according to the second form of the OTFS methodology. In FIG. 12, Rm 1202 denotes a portion of the accumulated signal 730 received and demodulated by the OTFS receiver 455. Again, as previously discussed, particularly in the case where [U1] is composed of a cyclically permuted Legendre number of length N, the matrix-based mathematical process of deconvolving the data and reconstructing the data can be understood alternatively as being a cyclic deconvolution of the transmitted data previously convolved in FIG. 10.
Here the reconstructed components 1200 ˜d0, ˜dk, ˜dN-1 can be understood as being the reconstructed elements (symbols) of the data vector 1000 component of the [D] matrix, the bm coefficients 1002 again can be understood as representing the same base vector 1002 components of the [U1] matrix, and the X coefficients 1004 can again be understood as representing the Fourier vector 1004 components of the [U2] matrix. In addition, [bm*Xk]′ may be understood as denoting the element-wise multiplication of the mirror conjugate of the mth base vector with the kth Fourier vector. In this alternative scheme or embodiment, the OTFS method can be understood as being a method of transmitting at least one frame of data ([D]) over a communications link, comprising: creating a plurality of time-spectrum-tone or spectral-shape spreading codes operating over a plurality of time-spreading intervals, each single time-spreading interval being composed of at least one clock interval; each time-spectrum-tone or spectral-shape spreading code comprising a function of a first time-frequency shifting, a second spectral shaping, and a time spreading code or scanning and transmission scheme.
Multiple Users
In an exemplary embodiment, OTFS modulation techniques may be employed to enable data sent from multiple users using multiple transmitters (here usually referred to as the multiple transmitter case) to be received by a single receiver. For example, assume multiple users "a", "b", "c", and "d" each desire to send a frame of data including N elements. Consistent with an embodiment of a multi-user OTFS transmission scheme, a conceptual N×N OTFS transmission matrix shared by multiple users may be created in the manner described below. Specifically, each given user packs their N elements of data into one column of an N×N data frame associated with such user but leaves the other columns empty (coefficients set to zero).
The N×N data frame [Da] associated with, and transmitted by, a user "a" may thus be represented as:

[Da] =
[ a1,1  0  …  0 ]
[ a2,1  0  …  0 ]
[  …    …  …  … ]
[ an,1  0  …  0 ]

Similarly, the N×N data frame [Db] associated with, and transmitted by, a user "b" may thus be represented as:

[Db] =
[ 0  b1,2  …  0 ]
[ 0  b2,2  …  0 ]
[ …   …    …  … ]
[ 0  bn,2  …  0 ]

And user "n" sends an N×N data frame [Dn]:

[Dn] =
[ 0  0  …  n1,n ]
[ 0  0  …  n2,n ]
[ …  …  …   …  ]
[ 0  0  …  nn,n ]

Thus, transmission of the data frames [Da], [Db] . . . [Dn] respectively by the users "a", "b", . . . , "n" results in transmission of the conceptual N×N OTFS transmission matrix, with each of the users being associated with one of the columns of such conceptual transmission matrix. In this way each independent user "a", "b", . . . , "n" transmits its N data elements during its designated slot (i.e., column) within the conceptual N×N OTFS transmission matrix, and otherwise does not transmit information. This enables signals corresponding to the data frames [Da], [Db] . . . [Dn] to be received at the receiver as if the conceptual N×N OTFS transmission matrix were representative of a complete data frame sent by only a single transmitter. Once so received, the received data frames [Da], [Db] . . . [Dn] effectively replicate the conceptual N×N OTFS transmission matrix, which may then be deconvolved in the manner discussed above. FIG. 24 depicts a time/frequency plane 2400 which illustrates the manner in which multiple users may transmit data in designated columns of a conceptual OTFS transmission matrix consistent with the preceding example. As shown, the time/frequency plane 2400 includes a first tile T0 2410-1 representative of transmission, by a first user, of data in a first column of the conceptual OTFS transmission matrix.
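The column-packing scheme above can be sketched directly: each user fills only its designated column, and the superposition received over the air replicates a complete single-transmitter frame with no collisions. This is a minimal illustration with arbitrary example values.

```python
import numpy as np

N = 4
rng = np.random.default_rng(3)

# Each user packs N data elements into its designated column of an N x N
# frame and leaves every other column at zero.
frames = []
for col in range(N):
    F = np.zeros((N, N))
    F[:, col] = rng.integers(1, 10, N)   # user `col`'s N data elements
    frames.append(F)

# The receiver sees the superposition of all user frames, which replicates
# the conceptual N x N OTFS transmission matrix of a single transmitter.
combined = sum(frames)
```

Because the users' nonzero entries occupy disjoint columns, every entry of `combined` comes from exactly one user, so the shared matrix can be deconvolved exactly as if one transmitter had sent it.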
In the embodiment of FIG. 24 the first tile T0 2410-1 encompasses the entire bandwidth (BW) of the OTFS channel and extends for a duration of Tf/N, where Tf denotes the total time required to transmit all of the entries within the conceptual OTFS transmission matrix. Similarly, the time/frequency plane 2400 includes a second tile T1 2410-2 representative of transmission, by a second user, of data in a second column of the conceptual OTFS matrix during a second Tf/N interval. In this way each of the N users is provided with a time interval of Tf/N to transmit their respective N elements included within the N×N conceptual OTFS transmission matrix. FIG. 25 depicts an alternative time/frequency plane 2500 which illustrates another manner in which multiple users may transmit data, in designated rows of a conceptual OTFS transmission matrix consistent with the preceding example. As shown, the time/frequency plane 2500 includes a first tile T0 2510-1 representative of transmission, by a first user, of data in a first row or first set of rows of the conceptual OTFS transmission matrix. In the embodiment of FIG. 25 the first tile T0 2510-1 encompasses a first portion of the entire bandwidth (BW) of the OTFS channel corresponding to the number of first rows, and the transmission extends for the entire duration Tf, where Tf denotes the total time required to transmit all of the entries within the conceptual OTFS transmission matrix. Similarly, the time/frequency plane 2500 includes a second tile T1 2510-2 representative of transmission, by a second user, of data in a second row or rows of the conceptual OTFS matrix encompassing a second portion of the bandwidth, also transmitting during the entire Tf time interval. In this way each of the users is provided with a portion of the bandwidth for the entire time interval of Tf to transmit their respective N elements (or integer multiple of N elements) included within the N×N conceptual OTFS transmission matrix.
FIG. 26 depicts yet another time/frequency plane 2600 which illustrates a further manner in which multiple users may transmit data in designated column/row portions of a conceptual OTFS transmission matrix consistent with the preceding example. As shown, the time/frequency plane 2600 includes a first tile T0 2610-1 representative of transmission, by a first user, of data in one or more first columns and one or more first rows of the conceptual OTFS transmission matrix. In the embodiment of FIG. 26 the first tile T0 2610-1 encompasses a portion of the entire bandwidth (BW) of the OTFS channel proportional to the number of rows in the first tile 2610-1, and the transmission extends for a duration of n*Tf/N, where Tf denotes the total time required to transmit all of the entries within the conceptual OTFS transmission matrix and n≤N represents the number of rows that the first tile 2610-1 includes. Similarly, the time/frequency plane 2600 includes a second tile T1 2610-2 representative of transmission, by a second user, of data in one or more second columns and one or more second rows of the conceptual OTFS matrix during a second m*Tf/N interval, where m≤N represents the number of rows in the second tile 2610-2. In this way each of the users is provided with a time interval of an integer multiple of Tf/N to transmit their respective elements included within the N×N conceptual OTFS transmission matrix. The size of the tiles in FIGS. 24-26 corresponds proportionally to the amount of data provided to the corresponding user. Therefore, users with higher data rate requirements can be afforded larger portions of the [D] matrix and therefore larger tiles. In addition, users that are closer to the transmitter can be afforded larger portions of the [D] matrix while users further away may be provided smaller portions, to take advantage of the efficient transmissions to close users and to minimize data lost in transmitting to more distant users.
Multiple users using different transmitters (or simply multiple transmitters) may communicate over the same communications link using the same protocol. Here, each user or transmitter may, for example, select only a small number of data elements in the N²-sized frame of data to send or receive their respective data. As one example, a user may simply select one column of the frame of data for their purposes, and set the other columns to zero. The user's device will then compute TFS data matrices and send and receive them as usual. As previously discussed, one advantage of the OTFS approach is increased resistance to Doppler shifts and frequency shifts. For example, in many cases the greater degree of time, frequency, and spectral shaping contemplated by the OTFS approach will largely mitigate any negative effects of such shifts, owing to the superior ability of OTFS-equipped devices to function over an impaired communications link. In other cases, because the local impaired device can be identified with greater accuracy, a base station or other transmitting device can either send corrective signals to the impaired device or shut the impaired device off.
Improving Resistance to Channel Impairments
As previously discussed, one advantage of the OTFS method is increased resistance to communications channel impairments. This resistance to impairments can be improved by further selecting the first N×N time-frequency shifting matrix and the second N×N spectral shaping matrix to minimize the impact of an aberrant transmitter, specifically a transmitter suffering from Doppler shift or frequency shift, on the elements of the TFS data matrix that are adjacent to the elements of the TFS data matrix occupied by the aberrant transmitter.
Alternatively, the receiver may analyze the problem, determine whether an alternate set of first N×N time-frequency shifting matrices and/or second N×N spectral shaping matrices would reduce the problem, and suggest or command that corresponding changes be made to the corresponding transmitter(s).
Symbol-Based Power and Energy Considerations
The OTFS method also enables more sophisticated tradeoffs to be made between transmission distance, transmitter power, and information data rate than is possible using conventional modulation techniques. This increased flexibility arises in part because each symbol is generally spread over a larger number of intervals relative to the case in which conventional techniques are employed. For example, in conventional time-division multiplexed communication systems the power per symbol transmitted must be quite high because the symbol is being transmitted over only one time interval. In conventional spread spectrum communication systems, the symbol is transmitted over essentially N intervals, and the power per interval is correspondingly less. Because the OTFS method transmits a bit or symbol of information over N² different modalities (e.g. waveforms, times), the power per modality is much less. Among other things, this means that the effect of impulse noise, which would in general only impact a specific waveform over a specific time interval, will be less. It also means that, due to the increased number of signal transmission modalities (waveforms, times) enabled by the OTFS method, there are more degrees of freedom available to optimize the signal to best correspond to the particular communications link impairment situation at hand.
Overview of OTFS Equalization
Attention is now directed to FIGS. 27-36, to which reference will be made in describing various techniques for compensating for Doppler and frequency shift within an OTFS communication system.
Turning now to FIG. 27, there is shown an exemplary process by which a receiver 2706 compensates for various types of echo reflections or other channel distortions through time deconvolution of a received signal in the manner described herein. In FIG. 27, a wireless transmitter 2700 transmits a complex cyclically time shifted and cyclically frequency shifted wireless waveform 2702 in multiple directions using methods in accordance with the above description. The wireless transmitter 2700 could be realized using, for example, the OTFS transmitter 405 of FIG. 4. Some of these signals 2704 go directly to the receiver 2706. The receiver 2706 can be, for example, the OTFS receiver 455 of FIG. 4. Other signals 2708 may be reflected by a wireless reflector, such as a building 2707. These "echo" reflections 2710 travel a longer distance to reach the receiver 2706, and thus end up being time delayed. As a result, the receiver 2706 receives a distorted signal 2712 that is the summation of both the original signal 2704 and the echo waveforms 2710. Since a portion of the transmitted signal 2702 is a cyclically time shifted waveform, a time deconvolution device 2714 at the receiver, such as the post-equalizer 480 of FIG. 4, analyzes the cyclically time varying patterns of the waveforms and effects appropriate compensation. In the embodiment of FIG. 27 this analysis may include a type of pattern matching or the equivalent, and the decomposition of the distorted received signal back into various time-shifted versions. These time-shifted versions may include, for example, a first time-shifted version 2716 corresponding to the direct signals 2704 and a second time-shifted version 2718 corresponding to the reflected signal 2710. The time deconvolution device 2714 may also determine the time offset 2720 necessary to cause the time delayed echo signal 2718, 2710 to match up with the original or direct signal 2716, 2704.
This time offset value 2720, here called a time deconvolution parameter, may provide useful information as to the position of the echo location(s) relative to the transmitter 2700 and receiver 2706. This parameter may also help the system characterize some of the signal impairments that occur between the transmitter and receiver. FIG. 28 shows an example of how transmitting both cyclically time shifted waveforms and cyclically frequency shifted waveforms can be useful to help a receiver 2806 (such as the OTFS receiver 455) effect both time and frequency compensation of the received signal, compensating for both echo reflections and frequency shifts, in this example Doppler effect frequency shifts. In FIG. 28, a moving wireless transmitter 2800, such as the OTFS transmitter 405, is again transmitting a complex cyclically time shifted and cyclically frequency shifted wireless waveform 2802 in multiple directions. To simplify presentation, it is assumed that the transmitter 2800 is moving perpendicular to the receiver 2806, so that it is neither moving towards nor away from the receiver, and thus there are no Doppler frequency shifts relative to the receiver 2806. It is further assumed that the transmitter 2800 is moving towards a wireless reflector, such as a building 2807, and thus the original wireless waveform 2802 will be modified by Doppler effects, shifting the frequencies of the waveform 2802 towards a higher frequency (blue shifted) relative to the reflector 2807. Thus, the direct signals 2804 impinging upon the receiver 2806 will, in this example, not be frequency shifted. However, the Doppler-shifted wireless signals 2808 that bounce off of the wireless reflector, here again the building 2807, will echo off in a higher frequency shifted form. These higher frequency shifted "echo" reflections 2810 also still have to travel a longer distance to reach the receiver 2806, and thus also end up being time delayed as well.
As a result, the receiver 2806 receives a signal 2812 that is distorted due to the summation of the direct signal 2804 with the time and frequency shifted echo waveforms 2810. However, as was described above, the OTFS techniques described herein may utilize the transmission of cyclically time shifted and frequency shifted waveforms. Accordingly, a time and frequency deconvolution device 2814 (alternatively a time and frequency adaptive equalizer, such as the OTFS demodulator 460 and the OTFS post-equalizer 480 of FIG. 4) within the receiver 2806 may evaluate the cyclically time varying and frequency varying patterns of the waveforms in order to decompose such waveforms back into various time-shifted and frequency shifted versions. Included among such versions are a first version 2816 corresponding to the direct signal 2804 and a second version 2818 corresponding to the frequency shifted echo waveform 2810. In one embodiment this evaluation and decomposition may be effected using pattern matching or related techniques. At the same time, the time and frequency deconvolution device 2814 may also determine the above-referenced time deconvolution parameter and a frequency offset value 2822, which may also be referred to herein as a frequency deconvolution parameter. These parameters may provide useful information as to the position of the echo location(s) relative to the transmitter 2800 and the receiver 2806, and may also enable characterization of certain of the signal impairments that occur between the transmitter and receiver. The net effect of both time and frequency deconvolutions, when applied to transmitters, receivers, and echo sources that potentially exist at different distances and velocities relative to each other, is to allow the receiver to properly interpret the impaired signal.
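Estimating the time and frequency deconvolution parameters can be sketched with a brute-force correlation search. This is a simplified model under stated assumptions: a random unit-modulus sequence stands in for the cyclically shiftable OTFS waveform, a single echo with known attenuation is added, and the direct-path copy is assumed known and removed before searching, which a practical equalizer would instead estimate.

```python
import numpy as np

N = 64
rng = np.random.default_rng(7)

# Known transmitted waveform (illustrative random unit-modulus sequence).
s = np.exp(2j * np.pi * rng.random(N))

# Channel: direct path plus one echo, cyclically time delayed by 7 slices and
# cyclically frequency shifted by 3 bins (a Doppler-like shift), at half amplitude.
delay, doppler = 7, 3
tone = np.exp(2j * np.pi * doppler * np.arange(N) / N)
echo = 0.5 * np.roll(s, delay) * tone
r = s + echo

# Time/frequency deconvolution: after removing the known direct-path copy,
# correlate the residual against every cyclic time shift and cyclic frequency
# shift of the waveform; the peak gives the echo's deconvolution parameters.
residual = r - s
scores = np.empty((N, N))
for t in range(N):
    shifted = np.roll(s, t)
    for f in range(N):
        probe = shifted * np.exp(2j * np.pi * f * np.arange(N) / N)
        scores[t, f] = abs(np.vdot(probe, residual))

t_hat, f_hat = np.unravel_index(np.argmax(scores), scores.shape)
```

The peak lands at the echo's (time offset, frequency offset) pair, which is exactly the pair of deconvolution parameters the text describes; with those offsets known, the echo energy can be realigned and added back to the direct signal rather than acting as interference.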
Here, even if the energy received in the primary signal is too low to permit proper interpretation, the energy from the time and/or frequency shifted versions of the signals can be added to the primary signal upon the application of appropriate time and frequency offsets or deconvolution parameters to the signal versions, thereby resulting in a less noisy and more reliable signal at the receiver. Additionally, the time and frequency deconvolution parameters can contain useful information as to the relative positions and velocities of the echo location(s) with respect to the transmitter and receiver, as well as the various velocities between the transmitter and receiver, and can also help the system characterize some of the signal impairments that occur between the transmitter and receiver. Thus, in some embodiments the OTFS systems described herein may also provide an improved receiver where, due to either one or a combination of echo reflections and frequency offsets, multiple signals associated with such reflections and offsets result in the receiver receiving a time and/or frequency convolved composite signal representative of time and/or frequency shifted versions of the N² summation-symbol-weighted cyclically time shifted and frequency shifted waveforms. Here, the improved receiver will further time and/or frequency deconvolve the time and/or frequency convolved signal to correct for such echo reflections and the resulting time and/or frequency offsets. This will result in both time and frequency deconvolved results (i.e. signals, typically of much higher quality and higher signal-to-noise ratio), as well as various time and frequency deconvolution parameters that, as will be discussed, are useful for a number of other purposes. Before going into a more detailed discussion of other applications, however, it is useful to first discuss the various waveforms in more detail.
Embodiments of the OTFS systems and methods described herein generally utilize waveforms produced by distributing a plurality of data symbols into one or more N×N symbol matrices, and using these one or more N×N symbol matrices to control the signal modulation of a transmitter. Here, for each N×N symbol matrix, the transmitter may use each data symbol to weight N waveforms, selected from an N²-sized set of all permutations of N cyclically time shifted and N cyclically frequency shifted waveforms determined according to an encoding matrix U, thus producing N symbol-weighted cyclically time shifted and cyclically frequency shifted waveforms for each data symbol. This encoding matrix U is chosen to be an N×N unitary matrix that has a corresponding inverse decoding matrix UH. The method will further, for each data symbol in the N×N symbol matrix, sum the N symbol-weighted cyclically time shifted and cyclically frequency shifted waveforms, producing N² summation-symbol-weighted cyclically time shifted and cyclically frequency shifted waveforms. The transmitter will transmit these N² summation-symbol-weighted cyclically time shifted and cyclically frequency shifted waveforms, structured as N composite waveforms, over any combination of N time blocks or frequency blocks. As discussed above, various waveforms can be used to transmit and receive at least one frame of data [D] (composed of a matrix of up to N² data symbols or elements) over a communications link. Here each data symbol may be assigned a unique waveform (designated a corresponding waveform) derived from a basic waveform.
For example, the data symbols of the data matrix [D] may be spread over a range of cyclically varying time and frequency shifts by assigning each data symbol to a unique waveform (corresponding waveform) which is derived from a basic waveform of length N time slices (in embodiments described herein the set of N time slices corresponds to the time required to transmit this waveform, also referred to as a time block), with a data symbol specific combination of a time and a frequency cyclic shift of this basic waveform. In one embodiment each symbol in the frame of data [D] is multiplied by its corresponding waveform, producing a series of N² weighted unique waveforms. Over one spreading time interval (or time block interval), all N² weighted unique waveforms corresponding to each data symbol in the frame of data [D] are simultaneously combined and transmitted. Further, a different unique basic waveform of length (or duration) of one time block (N time slices) may be used for each consecutive time-spreading interval (consecutive time block). Thus a different unique basic waveform corresponding to one time block may be used for each consecutive time-spreading interval, and this set of N unique waveforms generally forms an orthonormal basis. Essentially, each symbol of [D] is transmitted (in part) again and again either over all N time blocks, or alternatively over some combination of time blocks and frequency blocks (e.g. assigned frequency ranges). To receive data over each block of time, the received signal is correlated with the corresponding set of all N² waveforms previously assigned to each data symbol by the transmitter for that specific time block. Upon performing this correlation, the receiver may produce a unique correlation score for each one of the N² data symbols. This process will be repeated over some combination of time blocks and frequency blocks until all N blocks are received.
The original data matrix [D] can thus be reconstructed by the receiver by summing, for each data symbol, the correlation scores over N time blocks or frequency blocks, and this summation of the correlation scores will reproduce the N² data symbols of the frame of data [D]. Note that in some embodiments, some of these N time blocks may be transmitted non-consecutively, or alternatively some of these N time blocks may be frequency shifted to an entirely different frequency range, and transmitted in parallel with other time blocks from the original set of N time blocks in order to speed up transmission time. This is discussed later and in more detail in reference to FIG. 29. In order to enable focus to be directed to the underlying cyclically time shifted and cyclically frequency shifted waveforms, detailed aspects of one embodiment of the OTFS methods described above may be somewhat generalized and also discussed in simplified form. For example, the operation of selecting from an N²-sized set of all permutations of N cyclically time shifted and N cyclically frequency shifted waveforms may correspond, at least in part, to an optional permutation operation P as well as to the other steps discussed above. Additionally, the N²-sized set of all permutations of N cyclically time shifted and N cyclically frequency shifted waveforms may be understood, for example, to be at least partially described by a Discrete Fourier Transform (DFT) matrix or an Inverse Discrete Fourier Transform (IDFT) matrix. These DFT and IDFT matrices can be used by the transmitter, for example, to take a sequence of real or complex numbers and modulate them into a series of different waveforms. Considering now a particular example, individual rows of a DFT matrix (e.g., the DFT matrix of FIG. 18) may each be used to generate a Fourier vector including a set of N cyclically time-shifted and frequency-shifted waveforms.
In general, the Fourier vectors may create complex sinusoidal waveforms of the type:

Xjk = e^((−i*2*π*j*k)/N),

where, for an N×N DFT matrix [X], Xjk is the coefficient of the Fourier vector in row k, column j of the DFT matrix, and N is the number of columns. The products of this Fourier vector may be considered to represent one example of a manner in which the various time shifted and frequency shifted waveforms suitable for use in the OTFS system may be generated. For example, and as mentioned previously, FIG. 10 shows a diagram of one example of a cyclic convolution method that a transmitter can use to encode and transmit data. In FIG. 10, the sum of the various [bm*Xk] components can also be termed a "composite waveform". As a consequence, in an embodiment consistent with FIG. 10 the full [D] matrix of symbols will ultimately be transmitted as N composite waveforms. Although previously discussed, FIG. 12 may also be understood to provide a diagram of a cyclic deconvolution method capable of being used to decode received data. More specifically, particularly in the case where [U1] is composed of a cyclically permuted Legendre number of length N, the process of deconvolving the data and reconstructing the data can be understood alternatively as being a cyclic deconvolution (cyclic decoding) of the transmitted data previously convolved (encoded) by the transmitter as described in reference to FIG. 10. In the embodiment of FIG. 12, the ~d0, ~dk, . . . , ~dN−1 elements represent the reconstructed symbols of the data vector 1200 component of the [D] matrix (corresponding to the transmitted data vector 1000), the bm coefficients again represent the base vector 1002 components of the [U1] matrix, and the Xjk coefficients can again be understood as representing the Fourier vector 1004 components of the [U2] matrix. Here (Rm) 1202 is a portion of the accumulated signal 1010 received and demodulated by the receiver.
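The Fourier vector formula above can be checked numerically. The short sketch below (variable names are illustrative) builds the N×N DFT matrix directly from the formula and verifies that its rows are mutually orthogonal complex sinusoids, i.e., that the matrix is unitary up to a factor of √N:

```python
import numpy as np

N = 8
# Build X[j, k] = e^(-i*2*pi*j*k/N) directly from the formula above.
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
X = np.exp(-1j * 2 * np.pi * j * k / N)

# Each row is a complex sinusoid; distinct rows are orthogonal, so
# X * X^H = N * I (X becomes unitary after scaling by 1/sqrt(N)).
assert np.allclose(X @ X.conj().T, N * np.eye(N))

# This matches NumPy's unnormalized DFT matrix (the DFT of the identity).
assert np.allclose(X, np.fft.fft(np.eye(N)))
```

Multiplying a sequence of data coefficients by a given row modulates them onto that row's discrete tone, which is the sense in which the Fourier vectors generate the various frequency shifted waveforms.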
As described above with reference to FIGS. 24-26, different tiling schemes for proportioning the rows (frequency offsets) and columns (time offsets) of the data matrix [D] can be utilized to provide for multiple users to transmit data over multiple time/frequency offset blocks in the same data matrix [D]. These tiling schemes can be utilized differently depending on the type(s) of motion and reflected signals, and the resulting time and frequency offsets, that a transmitter and receiver are experiencing. Some exemplary methods for utilizing different time/frequency blocks will now be described with reference to FIGS. 29-30. Referring now to FIG. 29, there are shown various transmitted waveform blocks 2900 that can be transmitted as a series of N consecutive time blocks (i.e. with no other blocks in between). These consecutive time blocks can either be a contiguous series 2902 (i.e. with minimal or no time gaps in between the various waveform blocks) or they can be a sparsely contiguous series 2904 (i.e. with time gaps between the various waveform blocks, which may in some embodiments be used for synchronization, handshaking, listening for other users' transmitters, channel assessment and other purposes). Alternatively, the various waveform time blocks can be transmitted time-interleaved with the blocks from one or more different symbol matrices 2906, 2908 (which in some cases may be from a different transmitter) in a contiguous or sparse interleaved manner, as shown in series 2910. As yet another alternative, some of the various waveform time blocks may be frequency transposed to entirely different frequency bands or ranges 2912, 2914, 2916. This can speed up transmission time, because multiple waveform time blocks can now be transmitted at the same time as different frequency blocks.
As shown in time/frequency offset tiles 2918 and 2920, such multiple frequency band transmissions can also be done in a contiguous, sparse contiguous, contiguous interleaved, or sparse contiguous interleaved manner. Here 2922 and 2928 represent one time block at a first frequency range 2912, and 2924 and 2930 represent the next time block at the frequency range 2912. Here the various frequency ranges 2912, 2914, and 2916 can be formed, as will be described shortly, by modulating the signal according to different frequency carrier waves. Thus, for example, frequency range or band 2912 might be transmitted by modulating a 1 GHz frequency carrier wave, frequency range or band 2914 might be transmitted by modulating a 1.3 GHz frequency carrier wave, and band 2916 might be transmitted by modulating a 1.6 GHz frequency carrier wave, and so on. Stated differently, the N composite waveforms, themselves derived from the previously discussed N² summation-symbol-weighted cyclically time shifted and cyclically frequency shifted waveforms, may be transmitted over at least N time blocks. These N time blocks may be either transmitted consecutively in time (e.g. 2902, 2904) or alternatively transmitted time-interleaved with the N time blocks from a second and different N×N symbol matrix. FIG. 30 shows that the various composite waveform blocks transmitted by the transmitter can be either transmitted as shorter duration time blocks over one or more wider frequency ranges, or as longer duration time blocks over one or more narrower frequency ranges. That is, FIG. 30 shows exemplary tradeoffs between frequency bandwidth and time made available through use of embodiments of the OTFS method. Whereas in time/frequency tile 2940 the available bandwidth for each frequency range 2912, 2914, and 2916 is relatively large, in tile 2942 the available bandwidth for each frequency range 2932, 2934 and 2936 is considerably less. Here, the OTFS scheme can compensate for narrower frequency ranges by allowing more time per time block.
Thus, whereas in time/frequency tile 2940, with high bandwidth available, the time blocks 2922 and 2924 can be shorter, in time/frequency tile 2942, with lower bandwidth available, the time block 2926 for transmitting the composite waveform is longer. For both FIGS. 29 and 30, then, if there is only one fundamental carrier frequency, then all N blocks are transmitted consecutively in time as N time blocks. If there are fewer than N multiple fundamental carrier frequencies available, then all N blocks can be transmitted as some combination of N time blocks and N frequency blocks. If there are N or more fundamental frequencies available, then all N blocks can be transmitted over the duration of one time block as N frequency blocks. Attention is now again directed to FIG. 21, to which reference will be made in describing an exemplary pre-equalization scheme. As was described previously, the transmitter 2100 is configured to transmit a series of N consecutive waveform time blocks where each time block encompasses a set of N time slices. During every successive time slice, one element from the OTFS matrix 2108 can be used to control the modulation circuit 2104. As was also previously discussed, the modulation scheme may be one in which the element will be separated into its real and imaginary components, chopped and filtered, and then used to control the operation of a sin and cosine generator, producing a composite analog waveform 2120. The net effect, by the time that the entire original N×N data symbol matrix [D] is transmitted, is to transmit the data in the form of N² summation-symbol-weighted cyclically time shifted and cyclically frequency shifted waveforms, structured as N composite waveforms. In some embodiments the transmitter 2100 may further implement a pre-equalization operation, typically performed by the pre-equalizer 410 of FIG. 4, which involves processing the [D] matrix prior to providing it to the analog modulation circuit 2102.
When this pre-equalization operation is performed, the transmitter 2100 outputs pre-equalized OTFS signals 2130; otherwise, the transmitter simply outputs the OTFS signals 2120. The pre-equalization operation may be performed when, for example, the receiver in communication with the transmitter 2100 detects that an OTFS signal 2120 has been subjected to specific echo reflections and/or frequency shifts. Upon detecting such echo reflections and/or frequency shifts, the receiver may transmit corrective information to the transmitter pertinent to such reflections and shifts. The pre-equalizer 410 may then shape subsequently-transmitted pre-equalized OTFS signals so as to compensate for these echo reflections and/or frequency shifts. Thus, for example, if the receiver detects an echo delay, the pre-equalizer 410 may send the signal with an anti-echo cancellation waveform. Similarly, if the receiver detects a frequency shift, the pre-equalizer 410 can introduce a compensatory reverse frequency shift into the transmitted pre-equalization signal 2130. FIG. 31 illustrates an exemplary receiver processing section 3110 operative to compensate for the effects of echo reflections and frequency shifts. Referring to FIG. 31, the receiver processing section 3110 includes a cyclic deconvolution processing block 3106 and an equalizer 3102. The equalizer 3102 performs a series of math operations and outputs equalization parameters 3108 that can also give information pertaining to the extent to which the echo reflections and frequency shifts distorted the underlying signal. The equalizer 3102 can be, for example, an adaptive equalizer. In FIG. 31, it is assumed that the composite transmitted waveform has, since transmission, been distorted by various echo reflections and/or frequency shifts as previously shown in FIGS. 27 and 28. This produces a distorted waveform 3100, which for simplicity is represented through a simple echo reflection delayed distortion.
In FIG. 31, equalizer 3102 is configured to reduce or substantially eliminate such distortion by analyzing the distorted waveform 3100 and, assisted by the knowledge that the original composite waveform was made up of N cyclically time shifted and N cyclically frequency shifted waveforms, determining what sort of time offsets and frequency offsets will best deconvolve distorted waveform 3100 back into a close representation of the original waveform, which is represented in FIG. 31 as deconvolved waveform 3104. The equalization operations performed by equalizer 3102 may alternately be carried out by the cyclic deconvolution device 3106. In one embodiment the equalizer 3102 produces a set of equalization parameters 3108 during the process of equalizing the distorted waveform. For example, consider the simple case where the original waveform was distorted by only a single echo reflection offset by time toffset. If, by the time the original waveform and the toffset echo waveform reach the receiver, the resulting distorted signal 3100 is, for example, about 90% original waveform and 10% toffset echo waveform, then the equalization parameters 3108 can output both the 90% original and 10% echo signal mix, as well as the toffset value. Typically, of course, the actual distorted signal 3100 could consist of a number of various time and frequency offset components, and here again, in addition to cleaning up this distortion, the equalizer 3102 can also report the various time offsets, frequency offsets, and percentage mix of the various components of signal 3100 to the transmitter and/or the receiver. As previously discussed in FIGS. 29 and 30, the various composite waveforms in the N time blocks can be transmitted in various ways. In addition to time consecutive transmission, i.e.
a first block, followed (often by a time gap which may optionally be used for handshaking or other control signals) by a second time block and then a third time block, the various blocks of composite waveforms can be transmitted by other schemes. In some embodiments, for example in network systems where there may be multiple transmitters and potentially also multiple receivers, it may be useful to transmit the data from the various transmitters using more than one encoding method. Here, for example, a first set of N time blocks may transmit data symbols originating from a first N×N symbol matrix from a first transmitter using a first unitary matrix [U1]. A second set of N time blocks may transmit data symbols originating from a second N×N symbol matrix from a second transmitter using a second unitary matrix [U2]. Depending on the embodiment, [U1] and [U2] may be identical or different. Because the signals originating from the first transmitter may encounter different impairments (e.g. different echo reflections, different frequency shifts), some schemes of cyclically time shifted and cyclically frequency shifted waveforms may operate better than others. Thus, these waveforms, as well as the unitary matrices [U1] and [U2], may be selected based on the characteristics of these particular echo reflections, frequency offsets, and other signal impairments of the system and environment of the first transmitter, the second transmitter and/or the receiver. As an example, a receiver configured to implement equalization in accordance with FIG. 31 may, based upon the equalization parameters 3108 which it derives, elect to propose an alternative set of cyclically time shifted and cyclically frequency shifted waveforms intended to provide superior operation in view of the current environment and conditions experienced by such receiver. In this case the receiver could transmit this proposal (or command) to the corresponding transmitter(s).
This type of "handshaking" can be done using any type of signal transmission and encoding scheme desired. Thus in a multiple transmitter and receiver environment, each transmitter may attempt to optimize its signal so that its intended receiver is best able to cope with the impairments unique to communication between the transmitter and receiver over the communications channel therebetween. In some cases, before transmitting large amounts of data, or at any time as desired, a given transmitter and receiver may choose to more directly test the various echo reflections, frequency shifts, and other impairments of the transmitter and receiver's system and environment. This can be done by, for example, having the transmitter send a test signal where the plurality of data symbols are selected to be test symbols known to the receiver (e.g., the receiver may have stored a record of these particular test symbols). Since in this case the receiver will be aware of exactly what sort of signal it should receive in the absence of any impairment, the equalizer 3102 will generally be able to provide even more accurate time and frequency equalization parameters 3108 for use by the receiver relative to the case in which the receiver lacks such awareness. Thus, in this case the equalization parameters provide even more accurate information relating to the characteristics of the echo reflections, frequency offsets, and other signal impairments of the system and environment of the applicable transmitter(s) and the receiver. This more accurate information may be used by the receiver to suggest or command that the applicable transmitter(s) shift to use of communications schemes (e.g., to U matrices) more suitable to the present situation. In some embodiments, when the transmitter is a wireless transmitter and the receiver is a wireless receiver, and the frequency offsets are caused by Doppler effects, the more accurate determination of the deconvolution parameters, i.e.
the characteristics of the echo reflections and frequency offsets, can be used to determine the location and velocity of at least one object in the environment of the transmitter and receiver.

Examples of OTFS Equalization Techniques

This section includes a description of a number of exemplary OTFS equalization techniques capable of being implemented consistent with the general OTFS equalization approach and apparatus discussed above. However, prior to describing such exemplary techniques, a summary of aspects of transmission and reception of OTFS-modulated signals is given in order to provide an appropriate context for discussion of these OTFS equalization techniques. Turning now to such a summary of OTFS signal transmission and reception, consider the case in which a microprocessor-controlled transmitter packages a series of different symbols "d" (e.g. d1, d2, d3 . . . ) for transmission by repackaging or distributing the symbols into various elements of various N×N matrices [D]. In one implementation such distribution may, for example, include assigning d1 to the first row and first column of the [D] matrix (e.g. d1=d0,0), d2 to the first row, second column of the [D] matrix (e.g. d2=d0,1), and so on until all N×N symbols of the [D] matrix are full. Here, if the transmitter runs out of "d" symbols to transmit, the remaining [D] matrix elements can be set to 0 or another value indicative of a null entry. The various primary waveforms used as the primary basis for transmitting data, which here will be called "tones" to show that these waveforms have a characteristic sinusoid shape, can be described by an N×N Inverse Discrete Fourier Transform (IDFT) matrix [W], where for each element w in [W],

wj,k = e^(i*2*π*j*k/N),

or alternatively wj,k = e^(i*j*θk) or wj,k = [e^(i*θk)]^j.
Thus the individual data elements d in [D] are transformed and distributed as a combination of various fundamental tones w by a matrix multiplication operation [W]*[D], producing a tone transformed and distributed form of the data matrix, here described by the N×N matrix [A], where [A]=[W]*[D]. To produce N cyclically time shifted and N cyclically frequency shifted waveforms, the tone transformed and distributed data matrix [A] is then itself further permuted by modular arithmetic or "clock" arithmetic, thereby creating an N×N matrix [B], including each element b of [B], bi,j = ai,(i+j) mod N. This can alternatively be expressed as [B]=Permute([A])=P(IDFT*[D]). Thus the clock arithmetic controls the pattern of cyclic time and frequency shifts. The previously described unitary matrix [U] can then be used to operate on [B], producing an N×N transmit matrix [T], where [T]=[U]*[B], thus producing an N²-sized set of all permutations of N cyclically time shifted and N cyclically frequency shifted waveforms determined according to an encoding matrix [U]. Put alternatively, the N×N transmit matrix [T]=[U]*P(IDFT*[D]). Then, typically on a per column basis, each individual column of [T] is used to further modulate a frequency carrier wave (e.g. if transmitting in a range of frequencies around 1 GHz, the carrier wave will be set at 1 GHz). In this case each N-element column of the N×N matrix [T] produces N symbol-weighted cyclically time shifted and cyclically frequency shifted waveforms for each data symbol. Effectively then, the transmitter is transmitting the sum of the N symbol-weighted cyclically time shifted and cyclically frequency shifted waveforms from one column of [T] at a time as, for example, a composite waveform over a time block of data.
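The chain [A]=[W]*[D], [B]=P([A]), [T]=[U]*[B] described above can be sketched numerically as follows. The specific choice of the unitary encoding matrix [U] (a normalized DFT matrix here) is an assumption made purely for illustration; any N×N unitary matrix serves:

```python
import numpy as np

N = 4
rng = np.random.default_rng(1)
D = rng.integers(0, 2, (N, N)) * 2 - 1         # an N x N frame of BPSK-like symbols

# IDFT "tone" matrix [W]: w[j, k] = e^(i*2*pi*j*k/N)
jj, kk = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
W = np.exp(1j * 2 * np.pi * jj * kk / N)

A = W @ D                                      # [A] = [W]*[D], tones weighted by data

# Permutation ("clock" arithmetic): b[i, j] = a[i, (i + j) mod N]
B = np.empty_like(A)
for i in range(N):
    for j in range(N):
        B[i, j] = A[i, (i + j) % N]

# Placeholder unitary encoding matrix [U] (illustrative assumption).
U = np.fft.fft(np.eye(N)) / np.sqrt(N)
T = U @ B                                      # transmit matrix: one column per time block

assert np.allclose(U.conj().T @ U, np.eye(N))  # [U] is unitary
assert T.shape == (N, N)
```

Each of the N columns of `T` would then modulate a carrier to form one composite waveform per time block (or per frequency block), as the text describes.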
Alternatively, the transmitter could instead use a different frequency carrier wave for the different columns of [T], and thus for example transmit one column of [T] over one frequency carrier wave, and simultaneously transmit a different column of [T] over a different frequency carrier wave, thus transmitting more data at the same time, although of course using more bandwidth to do so. This alternative method of using different frequency carrier waves to transmit more than one column of [T] at the same time will be referred to as frequency blocks, where each frequency carrier wave is considered its own frequency block. Thus, since the N×N matrix [T] has N columns, the transmitter will transmit the N² summation-symbol-weighted cyclically time shifted and cyclically frequency shifted waveforms, structured as N composite waveforms, over any combination of N time blocks or frequency blocks, as previously shown in FIGS. 29 and 30. On the receiver side, the transmit process is essentially reversed. Here, for example, a microprocessor controlled receiver would of course receive the various columns of [T] (e.g., receive the N composite waveforms, also known as the N symbol-weighted cyclically time shifted and cyclically frequency shifted waveforms) over various time blocks or frequency blocks as desired for that particular application. In cases in which sufficient bandwidth is available and time is of the essence, the transmitter may transmit the data as multiple frequency blocks over multiple frequency carrier waves. On the other hand, if available bandwidth is more limited, and/or time (latency) is less critical, then the transmitter will transmit and the receiver will receive over multiple time blocks instead.
During operation the receiver may effectively tune into the one or more frequency carrier waves, and over the number of time and frequency blocks set for the particular application, eventually receive the data or coefficients from the original N×N transmitted matrix [T] as an N×N receive matrix [R]. In the general case [R] will be similar to [T], but may not be identical due to the existence of various impairments between the transmitter and receiver. The microprocessor controlled receiver then reverses the transmit process as a series of steps that mimic, in reverse, the original transmission process. The N×N receive matrix [R] is first decoded by the inverse decoding matrix [UH], producing an approximate version of the original permutation matrix [B], here called [BR], where [BR]=([UH]*[R]). The receiver then performs an inverse clock operation to back out the data from the cyclically time shifted and cyclically frequency shifted waveforms (or tones) by doing an inverse modular mathematics or inverse clock arithmetic operation on the elements of the N×N [BR] matrix, producing, for each element bR of the N×N [BR] matrix, aRi,j = bRi,(j−i) mod N. This produces a de-cyclically time shifted and de-cyclically frequency shifted version of the tone transformed and distributed form of the data matrix [A], which may hereinafter be referred to as [AR]. Put alternatively, [AR]=Inverse Permute([BR]), or [AR]=P−1([UH]*[R]). The receiver then further extracts at least an approximation of the original data symbols d from the [AR] matrix by analyzing the [AR] matrix using an N×N Discrete Fourier Transform (DFT) matrix, the inverse of the original Inverse Discrete Fourier Transform (IDFT) matrix. Here, for each received symbol dR, the dR are elements of the N×N received data matrix [DR], where [DR]=DFT*[AR], or alternatively [DR]=DFT*P−1([UH]*[R]).
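The receive chain [DR]=DFT*P−1([UH]*[R]) can be checked against the transmit chain with a small round-trip sketch over an ideal channel, so that [R]=[T]. The unitary [U] below is again an illustrative placeholder assumption:

```python
import numpy as np

N = 4
rng = np.random.default_rng(2)
D = rng.integers(0, 2, (N, N)) * 2 - 1             # original symbol frame [D]

jj, kk = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
IDFT = np.exp(1j * 2 * np.pi * jj * kk / N)        # tone matrix [W]
DFT = np.conj(IDFT) / N                            # exact inverse of [W]
U = np.fft.fft(np.eye(N)) / np.sqrt(N)             # placeholder unitary [U]

# Transmitter: [T] = [U] * P(IDFT * [D]), with b[i, j] = a[i, (i + j) mod N].
A = IDFT @ D
B = np.empty_like(A)
for i in range(N):
    for j in range(N):
        B[i, j] = A[i, (i + j) % N]
T = U @ B

R = T                                              # ideal channel: [R] = [T]

# Receiver: decode, inverse-permute (aR[i, j] = bR[i, (j - i) mod N]), then DFT.
BR = U.conj().T @ R
AR = np.empty_like(BR)
for i in range(N):
    for j in range(N):
        AR[i, j] = BR[i, (j - i) % N]
DR = DFT @ AR

assert np.allclose(DR, D)                          # symbols recovered exactly
```

The inverse permutation undoes the transmit-side clock arithmetic term by term, since b[i, (j−i) mod N] = a[i, (i + (j−i)) mod N] = a[i, j].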
Thus the original N² summation-symbol-weighted cyclically time shifted and cyclically frequency shifted waveforms are subsequently received by a receiver which is controlled by the corresponding decoding matrix UH (also represented as [UH]). The processor of the receiver uses this decoding matrix [UH] to reconstruct the various transmitted symbols "d" in the one or more originally transmitted N×N symbol matrices [D] (or at least an approximation of these transmitted symbols). Turning now to a discussion of various exemplary OTFS equalization techniques, there exist at least several general approaches capable of being used to correct for distortions caused by the signal impairment effects of echo reflections and frequency shifts. One approach leverages the fact that the cyclically time shifted and cyclically frequency shifted waveforms or "tones" form a predictable time-frequency pattern. In this scheme a deconvolution device situated at the receiver's front end may be straightforwardly configured to recognize these patterns, as well as the echo-reflected and frequency shifted versions of these patterns, and perform the appropriate deconvolutions by a pattern recognition process. Alternatively the distortions may be mathematically corrected using software routines, executed by the receiver's processor, designed to essentially determine the various echo reflected and frequency shifting effects, and solve for these effects. As a third alternative, once, by either process, the receiver determines the time and frequency equalization parameters of the communication media's particular time and frequency distortions, the receiver may transmit a command to the transmitter to instruct the transmitter to essentially pre-compensate or pre-encode for these effects, e.g., by using a pre-equalizer such as the pre-equalizer 410 of FIG. 4. That is, if for example the receiver detects an echo, the transmitter can be instructed to transmit in a manner that offsets this echo, and so on.
FIG. 32A illustrates an exemplary system in which the echo reflections and frequency shifts (e.g., Doppler shifts caused by motion) of a channel Hc can blur the transmitted signal, which can be further distorted by additive noise 3202. The time and frequency distortions can be modeled as a 2-dimensional filter Hc acting on the data array. This filter Hc represents, for example, the presence of multiple echoes with time delays and Doppler shifts. To reduce these distortions, the signal can be pre-equalized, e.g., using the pre-equalizer 3208, before the signal 3200 is transmitted by the transmitter 3204 over the channel to the receiver 3205, and subsequently post-equalized, using the post-equalizer 3206, after the DR matrix has been recovered. This equalization process may be performed by, for example, using digital processing techniques. The equalized form of the received D matrix, which ideally will completely reproduce the original D matrix, may be referred to hereinafter as Deq. FIG. 32B shows an example of an adaptive linear equalizer 3240 that may be used to implement the post-equalizer 3206 in order to correct for such distortions. The adaptive linear equalizer 3240, which may also be used as the equalizer 3102, may operate according to the function:

Y(k) = Σl=LC…RC C(l)*X(k−l) + η(k)

Mathematical Underpinnings of Two Dimensional Equalization

An exemplary equalization mechanism associated with the OTFS modulation, which is inherently two dimensional, is discussed below. This is in contrast to its one dimensional counterpart in conventional modulation schemes such as, for example, OFDM and TDMA. Assume that the input symbol stream provided to an OTFS transmitter is a digital function X∈C(Rd×Rd) with values in a specific finite constellation C⊂ℂ (either QPSK or higher QAMs, for example). The transmitter modulates this input stream into an analog signal ΦTx,Pass, which is then transmitted. During the transmission ΦTx,Pass undergoes a multipath channel distortion.
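The relation Y(k)=ΣC(l)*X(k−l)+η(k), with * a one dimensional cyclic convolution along the frequency ring and l running over delay taps, can be sketched as follows. The tap structure and helper names here are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def cyc_conv(a, b):
    """One-dimensional cyclic convolution on the ring R_d (length-N vectors)."""
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))

def channel(X, C, noise=None):
    """Y(k) = sum_l C(l) * X(k - l) + noise(k), * being the cyclic convolution.
    X: (K, N) input stream; C: dict mapping delay tap l -> length-N impulse."""
    K, N = X.shape
    Y = np.zeros((K, N), dtype=complex)
    for k in range(K):
        for l, c in C.items():
            if 0 <= k - l < K:               # linear digital-time axis
                Y[k] += cyc_conv(c, X[k - l])
    if noise is not None:
        Y += noise
    return Y

N, K = 8, 5
rng = np.random.default_rng(3)
X = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))

# A distortion-free channel: one tap at delay 0 whose impulse response is a
# delta on the ring, so the cyclic convolution leaves each X(k) unchanged.
delta = np.zeros(N); delta[0] = 1.0
assert np.allclose(channel(X, {0: delta}), X)

# A second tap at delay 1 leaks energy from X(k-1) into Y(k): an echo with
# time delay; a non-delta impulse response would add Doppler-like smearing.
Y = channel(X, {0: delta, 1: 0.5 * delta})
assert np.allclose(Y[1], X[1] + 0.5 * X[0])
```

Smearing along the delay axis models echoes, while a tap whose ring impulse response spreads over several positions models Doppler shifts, matching the two-dimensional channel impulse of FIG. 53.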
The distorted passband signal ΦTx,Pass arrives at the OTFS receiver and is demodulated back into a digital function Y∈C(Rd×Rd), which may be referred to herein as the output stream. The locality properties of OTFS modulation imply that the net effect of the multipath channel distortion is given by a cyclic two dimensional convolution with a two dimensional channel impulse response. See FIGS. 53 and 54. Referring to FIG. 53, an illustration is provided of a two-dimensional channel impulse. Smear along the time axis represents multipath reflections causing time delay, while smear along the frequency axis represents multipath reflectors causing Doppler shifts. In FIGS. 54A-54C, input and output streams are depicted after two-dimensional channel distortion. Specifically, FIG. 54A represents the two-dimensional channel impulse, FIG. 54B represents a portion of the input stream, and FIG. 54C depicts the same portion after convolution with the channel and additive noise. In what follows an appropriate equalization mechanism will be described. To this end, it will be convenient to enumerate the elements of the digital time axis by 0, 1, . . . , N−1 and to consider the input and output streams X and Y, respectively, as sequences of functions:

X=(X(k)∈C(Rd): k=0, . . . , N−1),
Y=(Y(k)∈C(Rd): k=0, . . . , N−1),

where X(k)(i)=X(k,i) and Y(k)(i)=Y(k,i), for every k=0, . . . , N−1 and i∈Rd. Furthermore, for purposes of explanation it will be assumed that the time index k is infinite in both directions, that is, k∈Z, the digital time direction is linear, and the digital frequency direction is cyclic. Under these conventions the relation between the output stream and the input stream can be expressed by the following Equation (1.1):

Y(k)=Σl=LC…RC C(l)*X(k−l)+𝔑(k),  (1.1)

where C=(C(l)∈C(Rd): l=LC, . . . , RC) are the channel impulse taps. Typically, LC∈Z<0 and RC∈Z>0. The number nC=RC−LC+1 may be referred to herein as the memory length of the channel.
The operation * in (1.1) stands for one dimensional cyclic convolution on the ring Rd. 𝔑(k) is a complex Gaussian N dimensional vector with mean 0 and covariance matrix N0·Id, representing the white Gaussian noise. Referring now to FIG. 32C, there is shown an exemplary adaptive decision feedback equalizer 3250 capable of being utilized as the equalizer 3102 (FIG. 31). The adaptive decision feedback equalizer 3250 both shifts the echo and frequency shifted signals on top of the main signal in a forward feedback process 3210, and also then uses feedback signal cancelation methods to further remove any residual echo and frequency shifted signals in 3312. The method then effectively rounds the resulting signals to discrete values. The adaptive decision feedback equalizer 3250 may, in some embodiments, operate according to the function:

Xs(k)=Σl=LF…RF F(l)*Y(k+l)−Σl=LB…−1 B(l)*Xh(k+l),

where Xh(k)=Q(Xs(k)).

Decision Feedback Least Mean Square Estimator (DF-LMS) with Locked Carrier Frequency

An exemplary decision feedback LMS equalizer adapted to the relation expressed in Equation (1.1) will now be described under the condition that the carrier frequency is locked between the transmitter and receiver, that is, WTx=WRx. An adaptation of the equalizer under the condition of the existence of a non-zero discrepancy, i.e., ΔW≠0, will subsequently be described. In one aspect, the equalizer incorporates a forward filter and a feedback filter as follows:

Forward filter: F=(F(l)∈C(Rd): l=LF, . . . , RF),
Feedback filter: B=(B(l)∈C(Rd): l=LB, . . . , −1),

where, typically, LF, LB∈Z<0 and RF∈Z>0 satisfy LF, LB≥LC and RF≥RC. In fact, both filters depend on an additional parameter k∈Z designating the present point on the digital time axis; hence, the complete notation for the filter taps is Fk(l) and Bk(l). However, for the sake of presentation this additional index will generally be omitted and will only be included when necessary.
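A minimal sketch of the soft estimator and hard-decision quantizer Q above, exercised with trivial filter taps (a delta forward filter at lag 0 and no feedback taps) over a nearly clean channel. The trivial taps and helper names are assumed simplifications for illustration only:

```python
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])

def Q(x):
    """Hard decision: map each entry to the nearest constellation point."""
    return QPSK[np.argmin(np.abs(x[:, None] - QPSK[None, :]), axis=1)]

def cyc_conv(a, b):
    """Cyclic convolution on the ring R_d."""
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))

def dfe_step(Y, Xh_past, F, B):
    """Xs(k) = sum_l F(l)*Y(k+l) - sum_l B(l)*Xh(k+l); then Xh(k) = Q(Xs(k)).
    Y: dict lag -> received vector; Xh_past: dict lag -> past hard decisions."""
    N = len(next(iter(Y.values())))
    Xs = np.zeros(N, dtype=complex)
    for l, f in F.items():
        Xs += cyc_conv(f, Y[l])           # forward filtering of the received stream
    for l, b in B.items():
        Xs -= cyc_conv(b, Xh_past[l])     # subtraction of past decided interference
    return Xs, Q(Xs)

# Sanity check: delta forward tap, no feedback, received vector = data + noise.
N = 8
rng = np.random.default_rng(4)
X = QPSK[rng.integers(0, 4, N)]
delta = np.zeros(N); delta[0] = 1.0
noise = 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
Xs, Xh = dfe_step({0: X + noise}, {}, {0: delta}, {})
assert np.allclose(Xh, X)                 # hard decisions recover the symbols
```

With a real channel, the feedback term subtracts the interference contributed by already-decided past vectors before quantization, which is the role of the B filter in the equation above.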
A soft estimator is defined as follows:

X_s(k) = Σ_{l=L_F}^{R_F} F(l) * Y(k+l) − Σ_{l=L_B}^{−1} B(l) * X_h(k+l),   (1.2)

where X_h(k+l) is the past hard estimation of the past data vector X(k+l), l = L_B, . . . , −1, defined as the quantization X_h(k) = Q(X_s(k)), that is:

X_h(k)(i) = argmin_{p∈C} |X_s(k)(i) − p|.   (1.3)

Computation of Initial Forward and Feedback Filter Taps

In one aspect, a closed formula may be used to determine the forward and feedback filter taps of the decision feedback equalizer expressed in terms of the channel impulse response. In this case the forward filter taps are computed without regard to feedback, and then the feedback filter taps are determined.

Computation of the Forward Filter Taps

First fix k = 0 and let X_s denote the following soft estimator for the vector X(0), which depends only upon the forward filter taps:

X_s = Σ_{l=L_F}^{R_F} F(l) * Y(l).   (1.7)

In what follows it is assumed that X(k) ~ CN(0, P·Id_N), for every k ∈ Z. Later this condition may be replaced by the condition that X(k) ~ CN(0, P·Id_N) for k ≥ 0 and X(k) = 0 for k < 0, which is more adapted to the choice of X_s. We denote by Err = Err(0) the soft error term:

Err = X_s − X(0).   (1.8)

We consider the cost function:

U(F) = E∥Err∥² = E∥X_s − X(0)∥²,

where the expectation is taken over the probability distribution of the input stream X and the additive white Gaussian noise. The optimal filter F_opt is defined as:

F_opt = argmin_F U(F);

therefore it satisfies the following system of linear equations:

∇_{F(l)} U(F_opt) = 0,  l = L_F, . . . , R_F.   (1.9)

The formula for the gradient ∇_{F(l)} U is an averaged version of (1.6), that is:

∇_{F(l)} U = E[Err * Y(l)★] = E[(X_s − X(0)) * Y(l)★] = E[X_s * Y(l)★] − E[X(0) * Y(l)★].   (1.10)

We first compute the term E[X(0) * Y(l)★] and then the term E[X_s * Y(l)★]. Developing the expression X(0) * Y(l)★ we obtain:

X(0) * Y(l)★ = Σ_{l′=L_C}^{R_C} X(0) * X(l−l′)★ * C(l′)★.

We observe that E[X(0) * X(k)★] = 0 when k ≠ 0 and E[X(0) * X(0)★] = NP·δ_{ω=0}; hence we conclude that:

E[X(0) * Y(l)★] = NP·δ_{ω=0} * C(l)★ = NP·C(l)★.
(1.11)

Next, we compute the term E[X_s * Y(l)★]:

E[X_s * Y(l)★] = Σ_{l′=L_F}^{R_F} F(l′) * E[Y(l′) * Y(l)★].   (1.12)

Developing the expression for Y(l′) * Y(l)★ we obtain:

Y(l′) * Y(l)★ = (Σ_{r=L_C}^{R_C} C(r) * X(l′−r) + 𝔑(l′)) * (Σ_{s=L_C}^{R_C} C(s) * X(l−s) + 𝔑(l))★
= Σ_{s=L_C}^{R_C} Σ_{r=L_C}^{R_C} C(s)★ * C(r) * X(l′−r) * X(l−s)★ + 𝔑(l′) * 𝔑(l)★ + Additional Terms.   (1.13)

Denote R(l, l′) = (1/NP)·E[Y(l′) * Y(l)★]. Taking the expectation of both sides of (1.13) we obtain the following explicit formula for R(l, l′):

R(l, l′) = Σ_{s,r=L_C; s−r=l−l′}^{R_C} C(r) * C(s)★,  for l ≠ l′;
R(l, l′) = Σ_{s=L_C}^{R_C} C(s) * C(s)★ + (1/SNR)·δ_{ω=0},  for l = l′;   (1.14)

where in the computation of R(l, l′) we use the following conditions on the mean of the specific terms in (1.13):

E[X(k) * X(k′)★] = NP·δ_{k=k′}·δ_{ω=0},
E[𝔑(k) * 𝔑(k′)★] = N_0·δ_{k=k′}·δ_{ω=0},
E[Additional Terms] = 0.

Combining (1.9), (1.10), (1.11) and (1.12) we conclude that the optimal filter F_opt satisfies the following system of linear equations:

Σ_{l′=L_F}^{R_F} R(l, l′) * F_opt(l′) = C(l)★,  l = L_F, . . . , R_F.   (1.15)

Finally, system (1.15) can be reduced to N systems of n_F = R_F − L_F + 1 scalar equations as follows. Applying a DFT to both sides of (1.15) we obtain:

Σ_{l′=L_F}^{R_F} R̂(l, l′) · F̂_opt(l′) = Ĉ(l)★,  l = L_F, . . . , R_F,   (1.16)

where (ˆ) stands for the DFT of the corresponding function and · stands for pointwise multiplication of functions in C(R_d), as we recall that the DFT interchanges convolution with pointwise multiplication and ★ with complex conjugation. Now observe that each function-valued equation in (1.16) decouples into n_F scalar-valued equations by evaluating both sides on each element of the ring R_d. Explicitly, if we number the elements in R_d by 0, 1, 2, . . . , N−1 we end up with the following scalar-valued system of equations:

Σ_{l′=L_F}^{R_F} R̂(l, l′)(i) · F̂_opt(l′)(i) = Ĉ(l)★(i),  l = L_F, . . . , R_F,   (1.17)

for every i = 0, . . . , N−1. In more concrete matrix form, (1.17) is, for every i = 0, . . . , N−1, an n_F × n_F linear system with matrix entries R̂(l, l′)(i), unknowns F̂_opt(l′)(i), and right-hand sides Ĉ(l)★(i).   (1.18)

We conclude the discussion by considering the case when the input stream satisfies X(k) = 0 for k < 0, which is the case adapted to (feedback) subtraction of the past interference.
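Before specializing to that case, note that the decoupled per-bin systems of Equation (1.17) admit a direct solver: transform the matrix-valued taps with a DFT, solve an n_F × n_F scalar system for each ring element i, and transform back. The following sketch assumes the taps are stored as NumPy arrays; the function name and storage layout are illustrative assumptions:

```python
import numpy as np

def solve_forward_taps(R, C_star, N):
    """Solve the decoupled per-bin systems of Equation (1.17).

    R      : (n_F, n_F, N) array, R[a, b] holding the function R(l, l') on R_d
    C_star : (n_F, N) array of right-hand sides C(l)* on R_d
    Returns F_opt as an (n_F, N) array, one tap function per forward lag.
    """
    Rh = np.fft.fft(R, axis=2)        # DFT turns convolution into pointwise product
    Ch = np.fft.fft(C_star, axis=1)
    Fh = np.empty_like(Ch)
    for i in range(N):                # one n_F x n_F scalar system per ring element i
        Fh[:, i] = np.linalg.solve(Rh[:, :, i], Ch[:, i])
    return np.fft.ifft(Fh, axis=1)   # back to tap functions on R_d
```

When every R(l, l′) with l = l′ is the convolution identity δ and the off-diagonal taps vanish, the system is trivial and F_opt equals the right-hand side, which provides a quick sanity check.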
In this scenario, the optimal forward filter F_opt satisfies a system of the form (1.15) with the "matrix coefficients" R(l, l′) taking the form:

R(l, l′) = Σ_{s=L_C}^{min{l,R_C}} Σ_{r=L_C; s−r=l−l′}^{min{l′,R_C}} C(s)★ * C(r),  for l ≠ l′;
R(l, l′) = Σ_{s=L_C}^{min{l,R_C}} C(s)★ * C(s) + (1/SNR)·δ_{ω=0},  for l = l′.   (1.19)

Computation of the Feedback Filter Taps

The optimal feedback filter taps B_opt(l), l = L_B, . . . , −1, can be computed from the forward and channel taps according to the following formula:

B_opt(l) = Σ_{l′=L_C}^{R_C} F_opt(l′−l) * C(l′).   (1.20)

The justification of Formula (1.20) proceeds as follows. Fix an input vector X(l_0) for some specific l_0 = L_B, . . . , −1. Subtracting its interference C(l′) * X(l_0) from each term Y(l_0 + l′), we obtain an "interference free" sequence Ỹ(l), l = L_F, . . . , R_F. Now, applying the forward filter F_opt to the sequence Ỹ(l) we obtain an estimator for X(0) given by

X_s = Σ_{l=L_F}^{R_F} F_opt(l) * Y(l) − B_opt(l_0) * X(l_0),

which concludes the justification.

Computation of Optimal Initial Forward and Feedback Filter Taps

In an alternative aspect, a closed formula of the optimal forward and feedback filter taps of the decision feedback equalizer may be expressed in terms of the channel impulse response. In this regard we conduct the computation in the stochastic setting where we assume that X(k) ~ CN(0, P·Id_N), for every k ∈ Z. We denote by X_s the following soft estimator for the vector X(0):

X_s = Σ_{l=L_F}^{R_F} F(l) * Y(l) − Σ_{l=L_B}^{−1} B(l) * X(l).   (1.21)

We denote by Err = Err(0) the soft error term:

Err = X_s − X(0).   (1.22)

We consider the cost function:

U(F, B) = E∥Err∥² = E∥X_s − X(0)∥²,

where the expectation is taken over the probability distribution of the input stream X and the additive white Gaussian noise. The optimal filters F_opt, B_opt are defined as:

(F_opt, B_opt) = argmin_{(F,B)} U(F, B);

therefore they satisfy the following system of linear equations:

∇_{F(l)} U(F_opt, B_opt) = 0,  l = L_F, . . . , R_F,
∇_{B(l)} U(F_opt, B_opt) = 0,  l = L_B, . . . , −1,   (1.23)

where the gradients are given by:

∇_{F(l)} U = E[Err * Y(l)★],  l = L_F, . . . , R_F,
∇_{B(l)} U = −E[Err * X(l)★],  l = L_B, . . . , −1.
(1.24)

First we write explicitly the first system ∇_{F(l)} U(F_opt, B_opt) = 0. Expanding the term E[Err * Y(l)★], we obtain:

E[Err * Y(l)★] = Σ_{l′=L_F}^{R_F} F(l′) * E[Y(l′) * Y(l)★] − Σ_{l′=L_B}^{−1} B(l′) * E[X(l′) * Y(l)★] − E[X(0) * Y(l)★].

Direct computation reveals that:

E[X(0) * Y(l)★] = NP·C(l)★,
E[X(l′) * Y(l)★] = NP·C(l−l′),
E[Y(l′) * Y(l)★] = NP·R_1(l, l′),

where:

R_1(l, l′) = Σ_{s,r=L_C; s−r=l−l′}^{R_C} C(r) * C(s)★,  for l ≠ l′;
R_1(l, l′) = Σ_{s=L_C}^{R_C} C(s) * C(s)★ + (1/SNR)·δ_{ω=0},  for l = l′.

Thus, the first system of equations amounts to:

Σ_{l′=L_F}^{R_F} R_1(l, l′) * F_opt(l′) − Σ_{l′=L_B}^{−1} C(l−l′) * B_opt(l′) = C(l)★,  l = L_F, . . . , R_F.   (1.25)

Next we write explicitly the system ∇_{B(l)} U(F_opt, B_opt) = 0. Expanding the term E[Err * X(l)★], we obtain:

E[Err * X(l)★] = Σ_{l′=L_F}^{R_F} F(l′) * E[Y(l′) * X(l)★] − Σ_{l′=L_B}^{−1} B(l′) * E[X(l′) * X(l)★] − E[X(0) * X(l)★].

Direct computation reveals that:

E[Y(l′) * X(l)★] = NP·C(l′−l),
E[X(l′) * X(l)★] = NP·δ_{l=l′}·δ_{ω=0},
E[X(0) * X(l)★] = 0.

Thus, the second system of equations amounts to:

Σ_{l′=L_F}^{R_F} C(l′−l) * F_opt(l′) − B_opt(l) = 0,  l = L_B, . . . , −1.   (1.26)

Using Equation (1.26), the optimal feedback filter taps may be expressed in terms of the channel taps and the optimal forward filter taps as:

B_opt(l) = Σ_{l′=L_F}^{R_F} C(l′−l) * F_opt(l′),  l = L_B, . . . , −1.   (1.27)

Substituting the right hand side of (1.27) in (1.25) enables the optimal forward filter taps to be determined by finding the solution of the following linear system:

Σ_{l′=L_F}^{R_F} R_1(l, l′) * F_opt(l′) − Σ_{l′=L_F}^{R_F} R_2(l, l′) * F_opt(l′) = C(l)★,  l = L_F, . . . , R_F,   (1.28)

where:

R_2(l, l′) = Σ_{s=l+1; r=l′+1; s−r=l−l′}^{s=l−L_B; r=l′−L_B} C(r) * C(s)★.

As a final note, we denote R(l, l′) = R_1(l, l′) − R_2(l, l′) and write the system (1.28) in the following form:

Σ_{l′=L_F}^{R_F} R(l, l′) * F_opt(l′) = C(l)★,  l = L_F, . . . , R_F.   (1.29)

System (1.29) can be reduced to N systems of n_F = R_F − L_F + 1 scalar equations as follows. Applying a DFT to both sides of (1.29) we obtain:

Σ_{l′=L_F}^{R_F} R̂(l, l′) · F̂_opt(l′) = Ĉ(l)★,  l = L_F, . . . , R_F,   (1.30)

where (ˆ) stands for the DFT of the corresponding function and · stands for pointwise multiplication of functions in C(R_d), since the DFT interchanges convolution with pointwise multiplication and ★ with complex conjugation.
It is now observed that each function-valued equation in (1.30) decouples into n_F scalar-valued equations by evaluating both sides on each element of the ring R_d. Explicitly, numbering the elements in R_d as 0, 1, 2, . . . , N−1 results in the following scalar-valued system of equations:

Σ_{l′=L_F}^{R_F} R̂(l, l′)(i) · F̂_opt(l′)(i) = Ĉ(l)★(i),  l = L_F, . . . , R_F,   (1.31)

for every i = 0, . . . , N−1. In more concrete matrix form, (1.31) is, for every i = 0, . . . , N−1, an n_F × n_F linear system with matrix entries R̂(l, l′)(i), unknowns F̂_opt(l′)(i), and right-hand sides Ĉ(l)★(i).   (1.32)

Channel Acquisition

An exemplary channel acquisition component of the OTFS modulation scheme will now be described. To this end, we number the elements of R_d by 0, 1, . . . , N−1. For the channel acquisition, a rectangular strip [0, R_C − 2L_C] × [0, N] is devoted in the time frequency plane. The value of the input stream X at this strip is specified to be:

X(τ, w) = N if τ = −L_C and w = 0, and X(τ, w) = 0 otherwise.

The complement of this strip will generally be devoted to data.

Gradient Correction

As mentioned previously, the forward and feedback taps of the decision feedback equalizer depend on the index k and change slowly as k varies. We proceed to describe herein an exemplary tracking mechanism based on gradient correction with respect to an appropriate quadratic cost function. We denote by Err(k) the soft error term at the k-th step:

Err(k) = X_s(k) − X_h(k) ∈ C(R_d),   (1.4)

where theoretically this error should be taken with respect to the true data vector X(k) (true decisions); however, in an exemplary embodiment the error is taken with respect to the hard estimator X_h(k) (hard decisions) as specified in equation (1.4). We define the following cost function U, taking as arguments the forward and feedback filter taps:

U(F, B) = ∥Err(k)∥² = ∥X_s(k) − X_h(k)∥²,   (1.5)

where ∥−∥ is the norm associated with the standard Hermitian inner product ⟨−, −⟩ on C(R_d). Note that, in fact, the cost function depends on the index k; however, for the sake of brevity we omit this index from the notation. Next, we compute the gradients ∇_{F(l)} U, l = L_F, . . .
, R_F, and ∇_{B(l)} U, l = L_B, . . . , −1, with respect to the Euclidean inner product 2 Re⟨−, −⟩ on C(R_d) (considered as a real vector space). The formulas for the gradients are:

∇F(l) = ∇_{F(l)} U = Err(k) * Y(k+l)★,  l = L_F, . . . , R_F,
∇B(l) = ∇_{B(l)} U = −Err(k) * X_h(k+l)★,  l = L_B, . . . , −1,   (1.6)

where ★ stands for the star operation on the convolution algebra C(R_d), given by f★(i) = conj(f(−i)) for every f ∈ C(R_d) and i ∈ R_d. In other words, the star operation of a function is obtained by inverting the coordinate inside R_d followed by complex conjugation. We note that the star operation is related by DFT to complex conjugation, that is, DFT(f★) = conj(DFT(f)), for every f ∈ C(R_d). The correction of the taps at the k-th step is obtained by adding a small increment in the (inverse) gradient direction, that is:

F_{k+1}(l) = F_k(l) − μ·∇F(l),  l = L_F, . . . , R_F,
B_{k+1}(l) = B_k(l) − μ·∇B(l),  l = L_B, . . . , −1,

for an appropriately chosen positive real number μ << 1. The optimal value μ_opt of the small parameter μ is given by:

μ_opt = argmin_μ U(F + μ·∇F, B + μ·∇B).

A formal development of the quadratic expression U(F + μ·∇F, B + μ·∇B) in the parameter μ reveals that:

U(F + μ·∇F, B + μ·∇B) = U(F, B) + μ·(2 Re⟨∇F, ∇F⟩ + 2 Re⟨∇B, ∇B⟩) + μ²·Hess(∇F, ∇B),

where Hess(∇F, ∇B) stands for:

Hess(∇F, ∇B) = ∥Σ_{l=L_F}^{R_F} ∇F(l) * Y(k+l) − Σ_{l=L_B}^{−1} ∇B(l) * X_h(k+l)∥²,

and ⟨∇F, ∇F⟩ and ⟨∇B, ∇B⟩ stand for:

⟨∇F, ∇F⟩ = Σ_{l=L_F}^{R_F} ⟨∇F(l), ∇F(l)⟩,  ⟨∇B, ∇B⟩ = Σ_{l=L_B}^{−1} ⟨∇B(l), ∇B(l)⟩.

If we denote b = 2 Re⟨∇F, ∇F⟩ + 2 Re⟨∇B, ∇B⟩ and a = Hess(∇F, ∇B), then the standard formula for the minimum of a parabola implies that μ_opt is given by:

μ_opt = −b/(2a).

FIG. 33 shows a time-frequency graph providing an illustration of the various echoes (time shifts) and frequency shifts which a signal may encounter during transmission through a channel; that is, FIG. 33 illustrates the impulse response of the channel. If the channel lacked any echo (time shift) or frequency shifts, signal spike 3400, which is representative of the original signal as transformed by the channel, would instead show up as a single spike at a defined time and frequency.
However, due to various echoes and frequency shifts, the original signal is instead spread over both time 3302 and frequency 3304 in the manner illustrated by spike 3400. It is thus desired to compensate or otherwise address these effects, either before further processing at the receiver 3204 or later, after the receiver has taken the processing to the D_R stage 3206. Alternatively, the original signal may be pre-equalized 3208 prior to transmission using a related process. FIG. 34 illustratively represents a time-frequency map of tap values produced by the feed forward (FF) portion of the adaptive decision feedback equalizer of FIG. 32C when correcting for the time and frequency distortions introduced by the channel impulse response shown in FIG. 33. The FF portion 3210 of the equalizer works to shift the echo or frequency shifted signals to once again coincide with the main signal (the un-reflected and non-shifted signal), and thus enhances the intensity of the received signal while diminishing the intensity of the echo or frequency shifted signals. FIG. 35 illustratively represents a time-frequency map of tap values produced by the feedback (FB) portion 3212 of the adaptive decision feedback equalizer of FIG. 32C when correcting for the time and frequency distortions introduced by the channel impulse response shown in FIG. 33. After the feedforward (FF) portion 3210 of the equalizer has acted to substantially offset the echo and frequency shifted signals, there will still be some residual echo and frequency signals remaining. The feedback (FB) portion 3212 acts to cancel out those trace remaining echo signals, essentially acting like an adaptive canceller for this portion of the system. The quantizer portion 3214 of the adaptive decision feedback equalizer then acts to "round" the resulting signal to the nearest quantized value so that, for example, the symbol "1" after transmission once more appears on the receiving end as "1" rather than "0.999".
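Returning to the gradient correction mechanism described above, the optimal step size μ_opt = −b/(2a) can be computed directly from the gradient taps. The following NumPy sketch uses hypothetical names and assumes the window of received vectors Y(k+l) and past hard decisions X_h(k+l), aligned with the gradient tap lists, is given:

```python
import numpy as np

def cyclic_conv(a, b):
    # One-dimensional cyclic convolution on the ring R_d, computed via the DFT.
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))

def optimal_mu(grad_F, grad_B, Y_taps, Xh_taps):
    """Closed-form line-search step mu_opt = -b / (2a):

    b = 2(<grad_F, grad_F> + <grad_B, grad_B>) is the linear coefficient and
    a = Hess(grad_F, grad_B) the quadratic coefficient of U along the gradient.
    grad_F / Y_taps and grad_B / Xh_taps are lag-aligned lists of length-N vectors.
    """
    b = 2.0 * (sum(np.vdot(g, g).real for g in grad_F)
               + sum(np.vdot(g, g).real for g in grad_B))
    # Hessian term: squared norm of the filtered gradient direction.
    h = (sum(cyclic_conv(g, y) for g, y in zip(grad_F, Y_taps))
         - sum(cyclic_conv(g, x) for g, x in zip(grad_B, Xh_taps)))
    a = np.vdot(h, h).real
    return -b / (2.0 * a)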
As previously discussed, an alternative mathematical discussion of the equalization method, particularly suitable for step 802B, is described in provisional application 61/615,884, the contents of which are incorporated herein by reference.

Data Interleaving

Attention is now directed to FIGS. 36A and 36B, to which reference will be made in further elaborating upon the use of interleaving within an OTFS system. In particular, FIGS. 36A and 36B show that it may be useful to transmit various different time blocks in an interleaved scheme in which the time needed to transmit all N blocks may vary between different data matrices D, and in which the interleaving scheme takes the latency, that is, the time needed to transmit all N blocks, into account according to various optimization schemes. By choosing groups of latencies properly, one can prevent delays to one user or another. For example, FIG. 36A shows a first latency timeline 3600 depicting transmission times for five users a, b, c, d and e. Constellation 3605 is a hierarchical diagram showing two groups: a first group comprising users a and b, each with a latency of four, and a second group comprising users c, d and e, each with a latency of six. This means that users a and b will transmit or receive data every four time slots, while users c, d and e will transmit or receive data every six time slots. Time track 3610 shows the resulting order of transmission/receiving for each user, while latency indicators 3615, 3620, 3625, 3630 and 3635 show the resulting latency spacing for users a, b, c, d and e, respectively. FIG. 36B shows a second latency timeline 3650 showing the transmission times for four users a, b, c and d. Constellation 3655 is a hierarchical diagram depicting three groups: a first group comprising user a with a latency of two, a second group comprising user b with a latency of four, and a third group comprising users c and d, each with a latency of eight.
This means that user a will transmit or receive data every two time slots, user b will transmit or receive data every four time slots, while users c and d will transmit or receive data every eight time slots. Time track 3660 shows the resulting order of transmission/receiving for each user, while latency indicators 3665, 3670, 3675 and 3680 show the resulting latency spacing for users a, b, c and d, respectively. Different latencies can be chosen for different users depending on what type of service the user is seeking. For example, a voice connection may be provided a latency of two, while a file or video download might be provided a latency of eight. Latency may also be chosen for other reasons.

Full Duplex Transceiver

FIG. 37 shows an example of a full duplex OTFS transceiver 3700 capable of enabling data to be transmitted and received simultaneously in the same frequency band. The OTFS transceiver 3700 is configured with an echo cancellation module 3705 that implements echo cancelation in the time and frequency domain. This enables estimation of two-dimensional reflections of the transmitted signal; that is, estimation of frequency shifts and time shifts. As shown, a first OTFS encoder 3710-1 performs OTFS encoding with a first matrix [U1], a permutation operation, a second matrix multiplication with a basis matrix [U2], and sin/cos transmission of the elements of the resulting transformed data matrix. The transformed data matrix is transmitted a column at a time in a one dimensional data stream and up-converted to an RF frequency with RF up converter 3715-1, power amplified with transmit power amplifier 3720-1 and passed to an antenna 3740 via a circulator 3722. In the embodiment of FIG. 37 the antenna also receives a second data stream from another transmitter. However, the second data stream also includes reflections of the first signal transmitted by the OTFS transceiver 3700.
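Returning briefly to the interleaving examples of FIGS. 36A and 36B, a simple greedy slot assignment reproduces the ordering implied by FIG. 36B: users are taken in order of increasing latency, each starting at the first free slot and repeating every `latency` slots. This sketch is illustrative only and is not the patented scheduling method:

```python
def build_schedule(users, n_slots):
    """Greedy interleaving sketch.

    users   : list of (name, latency) pairs, sorted by increasing latency
    n_slots : length of the schedule to build
    A user with latency L occupies every L-th slot starting at the first
    slot still free when that user is placed.
    """
    slots = [None] * n_slots
    for name, latency in users:
        start = slots.index(None)            # first still-free slot
        for t in range(start, n_slots, latency):
            if slots[t] is None:             # never overwrite an earlier user
                slots[t] = name
    return slots
```

For the FIG. 36B grouping (a with latency two, b with latency four, c and d with latency eight) this yields the repeating pattern a, b, a, c, a, b, a, d over eight slots.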
The circulator 3722 routes the received second signal to a subtractor 3724 that subtracts an estimate of the reflected signals that is created by the echo canceller 3705. A second OTFS encoder 3710-2, a second RF up converter 3715-2 and an echo canceller power amplifier 3720-2 create the estimated echo that is subtracted from the received second signal. An RF down converter 3725 demodulates the second received signal and passes the demodulated received second data stream Dr to a first OTFS decoder 3730-1 and a second OTFS decoder 3730-2. The first OTFS decoder 3730-1 decodes the received second signal using the basis matrix that was used to transmit the first data stream. The second OTFS decoder 3730-2 decodes the echo-canceled data stream using the basis matrix that the other transmitter used to encode the second data stream. The output of the first OTFS decoder 3730-1 is fed back as a residual error signal to the echo canceller 3705 in order to tune the two-dimensional estimate of the reflected echo channel. The output of the second OTFS decoder 3730-2 is an estimate of the second data stream from the other transmitter. The capability to obtain an estimate of the echo channel in both frequency and time is a significant advantage of the OTFS technique, and facilitates full-duplex communication over a common frequency band in a manner not believed to be possible using prior art methods.

Iterative Signal Separation

FIG. 38 shows an example of an OTFS receiver 3800 that provides iterative signal separation in accordance with the disclosure. The OTFS receiver 3800 receives a first data matrix D1 from a first transmitter that uses a first basis matrix. The OTFS receiver 3800 also receives a second data stream D2 from a second transmitter in the same frequency band, where the second data stream D2 was encoded using a second basis matrix different from the first basis matrix.
A first OTFS decoder 3810-1 decodes the first data matrix D1 to create a one dimensional data stream Y1, while a second OTFS decoder decodes the second data matrix D2 to form a second one dimensional data stream Y2. The OTFS receiver 3800 includes a pair of feed-forward and feedback equalizers comprising first and second feed forward equalizers 3820-1 and 3820-2, first and second feedback equalizers 3835-1 and 3835-2, and first and second slicers 3825-1 and 3825-2. First and second subtractors 3830-1 and 3830-2 calculate first and second residual error signals 3840-1 and 3840-2 that are used by respective ones of the feed forward equalizers 3820 and the feedback equalizers 3835 in order to optimize two dimensional time/frequency shift channel models. A pair of cross talk cancellers 3845-1 and 3845-2 also use the residual error signals 3840-1 and 3840-2, respectively, in order to optimize estimates of the first received data signal and the second received data signal, which are subtracted at subtractors 3815-1 and 3815-2. In this way, the cross talk from one data signal to the other is minimized. As with the full duplex OTFS transceiver 3700 of FIG. 37, the OTFS receiver 3800 can model two dimensional time/frequency channels and is believed to represent a significant advance over receivers employing conventional one dimensional (i.e., time only) channel modeling approaches. Attention is now directed to FIG. 40, which is a block diagram of a time-frequency-space decision feedback equalizer 4000 that may be employed to facilitate signal separation in a multi-antenna OTFS system. As shown in FIG. 40, received signal information (R) represented by a set of M time-frequency planes 4004 is received at input port 4010 of the equalizer 4000. Each of the M time-frequency planes 4004 represents the information collected from N transmit antenna instances (M>N) by one of M antenna instances associated with an OTFS receiver.
The N transmit antenna instances, which may or may not be co-located, will generally be associated with an OTFS transmitter remote from the OTFS receiver associated with the M receive antenna instances. Each of the N transmit antenna instances and M receive antenna instances may, for example, comprise a single physical antenna which is either co-located or not co-located with the other antenna instances. Alternatively, one or more of the N transmit antenna instances and M receive antenna instances may correspond to an antenna instance obtained through polarization techniques. In the embodiment of FIG. 40, the time-frequency-space decision feedback equalizer 4000 includes a time-frequency-space feedforward FIR filter 4020 and a time-frequency-space feedback FIR filter 4030. The equalizer 4000 produces an equalized data stream at least conceptually arranged in a set of N time-frequency planes (M>N) wherein, again, N corresponds to the number of antenna instances transmitting information to the M antenna instances of the OTFS receiver associated with the equalizer 4000. Turning now to FIG. 41, a block diagram is provided of a time-frequency-space decision feedforward FIR filter 4100 which may be utilized to implement the time-frequency-space feedforward FIR filter 4020. As shown, the filter 4100 processes received signal information (R) carried on a set of M time-frequency planes 4104 provided by a corresponding set of M receive antennas. The filter 4100 produces a filtered data stream at least conceptually arranged in a set of N time-frequency planes 4150 (M>N), where, again, N corresponds to the number of antenna instances transmitting information to the M antenna instances of the OTFS receiver associated with the equalizer 4000. Referring to FIG. 42, a block diagram is provided of a time-frequency-space decision feedback FIR filter 4200 which may be utilized to implement the time-frequency-space feedback FIR filter 4030.
As shown, the filter 4200 processes received signal information (R) carried on a set of M time-frequency planes 4204 which may, for example, correspond to the set of M time-frequency planes provided by a corresponding set of M receive antennas. The filter 4200 produces a filtered data stream at least conceptually arranged in a set of N time-frequency planes 4250 (M>N). The time-frequency-space decision feedback equalizer 4000 advantageously enables the separation of signals within an OTFS communication system in a manner that substantially maximizes utilization of the available bandwidth. Such signal separation is useful in several contexts within an OTFS communication system. These include separation, at a receiver fed by multiple co-located or non-co-located antennas, of signals transmitted by a set of co-located or non-co-located antennas of a transmitter. In addition, the time-frequency-space decision feedback equalizer 4000 enables the separation, from signal energy received from a remote transmitter, of echoes received by a receive antenna in response to transmissions from a nearby transmit antenna. This echo cancellation may occur even when the transmit and receive signal energy is within the same frequency band, since the two-dimensional channel-modeling techniques described herein enable accurate and stationary representations of both the echo channel and the channel associated with the remote transmitter. Moreover, as is discussed below, the signal separation capability of the disclosed time-frequency-space decision feedback equalizer enables deployment of OTFS transceivers in a mesh configuration in which neighboring OTFS transceivers may engage in full duplex communication in the same frequency band with other such transceivers in a manner transparent to one another.
Again with reference to FIG. 40, operation of an exemplary OTFS system may be characterized as the transmission, from each antenna instance associated with a transmitter, of a time-frequency plane representing a two-dimensional information array being sent. Each such antenna instance, whether co-located or non-co-located, may simultaneously transmit two-dimensional information planes, each independent of the other. The information in each of these information planes may be shifted in time and frequency using the same basis functions. During transmission from each of N transmit antenna instances to each of M receive antenna instances, the information within each transmitted plane is differently affected by the different two-dimensional channels linking one of the N transmit antenna instances to each of the M receive antenna instances. At each of the M antenna instances associated with an OTFS receiver, each entry within the two-dimensional array of received signal energy being collected will typically include a contribution from each of the N transmit antenna instances involved in transmitting such signal energy. That is, each of the M receive antenna instances collects a mixture of the two-dimensional, time-frequency planes of information separately sent by each of the N transmit antenna instances. Thus, the problem to be solved by the equalizer 4000 may be somewhat simplistically characterized as inversion of the N×M "coupling matrix" representative of the various communication channels between the N OTFS transmit antenna instances and the M OTFS receive antenna instances. In one embodiment each of the N transmit antenna instances sends a pilot signal which may be differentiated from the pilot signals transmitted by the other N−1 antenna instances by its position in the time-frequency plane. These pilot signals enable the OTFS receiver to separately measure each channel and the coupling between each antenna instance.
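The pilot-based measurement just described yields, at each time-frequency bin, an M × N coupling matrix which the receiver must invert to separate the N transmitted planes. A least-squares sketch of that per-bin inversion (hypothetical function name, assuming the measured coupling is available as a NumPy array):

```python
import numpy as np

def separate_streams(H, R):
    """Least-squares inversion of the coupling matrix, one bin at a time.

    H : (T, F, M, N) measured coupling between N transmit and M receive
        antenna instances at each of T x F time-frequency bins (M > N)
    R : (T, F, M) received signal energy collected by the M antenna instances
    Returns an estimate of the N transmitted time-frequency planes, (T, F, N).
    """
    T, F, M, N = H.shape
    X = np.empty((T, F, N), dtype=complex)
    for t in range(T):
        for f in range(F):
            # Pseudo-inverse plays the role of the "inverted channel representation".
            X[t, f] = np.linalg.pinv(H[t, f]) @ R[t, f]
    return X
```

In the noiseless case, with each per-bin coupling matrix of full column rank, this recovers the transmitted planes exactly; with noise it returns the least-squares estimate.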
Using this information the receiver essentially initializes the filters present within the equalizer 4000 such that convergence can be achieved more rapidly. In one embodiment an adaptive process is utilized to refine the inverted channel or filter used in separating the received signal energy into different time-frequency-space planes. Thus, the coupling channel between each transmit and receive antenna instance may be measured, the representation of the measured channel inverted, and that inverted channel representation used to separate the received signal energy into separate and distinct time-frequency planes of information. As noted above, the channel models associated with known conventional communication systems, such as OFDM-based systems, are one-dimensional in nature. As such, these models are incapable of accurately taking into consideration all of the two-dimensional (i.e., time-based and frequency-based) characteristics of the channel, and are limited to providing an estimate of only one such characteristic. Moreover, such one-dimensional channel models change rapidly relative to the time scale of modern communication systems, and thus inversion of the applicable channel representation becomes very difficult, if possible at all. The stationary two-dimensional time-frequency channel models described herein also enable OTFS systems to effectively implement cross-polarization cancellation. Consider the case in which a transmit antenna instance associated with an OTFS transceiver is configured for horizontally-polarized transmission and a nearby receive antenna of the OTFS transceiver is configured to receive vertically-polarized signal energy. Unfortunately, reflectors proximate either the transmit or receive antenna may reflect and cross-polarize some of the transmitted horizontally-polarized energy from the transmit antenna, some of which may be directed to the receive antenna as a vertically-polarized reflection.
It is believed that a two-dimensional channel model of the type disclosed herein is needed in order to decouple and cancel this cross-polarized reflection from the energy otherwise intended for the receive antenna. Similarly, full duplex communication carried out on the same channel requires echo cancellation sufficiently robust to substantially remove the influence of a transmitter on a nearby receiver. Again, such echo cancellation is believed to require, particularly in the case of moving reflectors, an accurate two-dimensional representation of at least the echo channel in order to permit the representation to be appropriately inverted.

OTFS Transceiver Using Spreading Kernel

As discussed above, embodiments of the OTFS method may involve generating a two-dimensional matrix by spreading a two-dimensional input data matrix. In addition, time/frequency tiling may be utilized in transport of the two-dimensional matrix across a channel. In this approach each matrix column may be tiled as a function of time; that is, each column element occupies a short symbol time slice utilizing the full available transmission bandwidth, with time gaps optionally interposed between subsequent columns. Alternatively, the matrix columns may be tiled and transported as a function of frequency; that is, each element of the column occupies a frequency bin for a longer period of time, with time gaps optionally interposed between subsequent columns. In other embodiments a spreading kernel may be used to effect spreading of the input data matrix. In this case two-dimensional spreading may be achieved through, for example, a two-dimensional cyclic convolution with a spreading kernel, the convolution being implemented using a two-dimensional FFT: multiplication with the two-dimensional DFT of the spreading kernel, followed by a two-dimensional inverse Fourier transform.
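The FFT-based spreading just described, together with its inverse, can be sketched as follows. The sketch assumes a kernel whose two-dimensional DFT is free of zeroes, so that the pointwise division performed during despreading is well defined; the function names are illustrative only:

```python
import numpy as np

def spread(D, K):
    """Two-dimensional cyclic convolution of data matrix D with spreading
    kernel K, implemented via the 2-D FFT: multiply by the 2-D DFT of the
    kernel, then apply the 2-D inverse Fourier transform."""
    return np.fft.ifft2(np.fft.fft2(D) * np.fft.fft2(K))

def despread(S, K):
    """Inverse operation; requires the 2-D DFT of K to have no zero entries,
    since despreading divides by it pointwise."""
    Kh = np.fft.fft2(K)
    assert np.all(np.abs(Kh) > 1e-12), "kernel DFT must be zero-free"
    return np.fft.ifft2(np.fft.fft2(S) / Kh)
```

A unit-modulus kernel spectrum (one choice from the family of unitary kernels mentioned below) makes the round trip despread(spread(D, K), K) recover D exactly, up to floating-point error.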
A wide variety of spreading kernels may be utilized; however, the two-dimensional DFT of the selected kernel should lack any zeroes so as to avoid division by zero during the despreading process. Moreover, spreading may also be achieved using alternate methods of convolutions, transforms and permutations. Masking (i.e., element by element multiplication) may also be utilized as long as each operation is invertible. Attention is now directed to FIGS. 44A and 44B, which provide block diagram representations of embodiments of a first OTFS transceiver 4400 and a second OTFS transceiver 4450 configured to utilize a spreading kernel. Reference will be made to the first OTFS transceiver 4400 of FIG. 44A in describing principles of OTFS communication using a spreading kernel. The second OTFS transceiver 4450 is substantially similar in principle to the first OTFS transceiver 4400 but is characterized by an architecture believed to enable more efficient signal processing. As shown in FIG. 44A, a transmitter 4404 of the first OTFS transceiver 4400 includes a two-dimensional spreading block 4408, an FFT block 4410 and first and second time-frequency tiling elements 4412, 4414. The first and second time-frequency tiling elements 4412, 4414 are configured to effect time-frequency tiling of the two-dimensionally spread input data and may, for example, be implemented using one or more filter banks. The two-dimensional spreading block 4408 and FFT block 4410 cooperatively effect spreading of the two-dimensional input data by performing a series of operations using, for example, a spreading kernel selected from a wide family of unitary matrices. In one embodiment this series of operations includes two-dimensional cyclic convolution with the spreading kernel, the convolution being implemented using a two-dimensional FFT, multiplication by the two-dimensional Discrete Fourier Transform of the spreading kernel, and a two-dimensional inverse Fourier transform.
This results in cyclically shifting the kernel matrix “up” along the column direction by an amount corresponding to an information index (yielding a time shift) and multiplying by a diagonal tone whose frequency is set by the information index. All resulting transformed matrices are then summed together in order to generate the two-dimensional spread matrix, each element of which is carried using a transformed kernel (basis matrix). A receiver4420of the first OTFS transceiver4400includes first and second inverse time-frequency tiling elements4424,4426configured to effect an inverse of the tiling operation performed by the time-frequency tiling elements4412and4414. A two-dimensional IFFT block4428and a despreading block4430are configured to perform the inverse of the spreading operation performed by the two-dimensional spreading block4408and the FFT block4410. The received data is then converted using an FFT block4434prior to being equalized by a time-frequency-space decision feedforward/feedback analyzer block4438. The equalized data is then converted using an IFFT block4440. Turning now toFIG.44B, a transmitter4454of the second OTFS transceiver4450includes a two-dimensional spreading arrangement comprised of an FFT block4458and a multiplier4460addressed by a Fourier mask. Within the transmitter4454, each information element is represented as a cyclic shift of a kernel matrix in both a horizontal (row) and vertical (column) direction corresponding to the applicable information element index (row and column position in the input two-dimensional information array). In the implementation ofFIG.44B, the spreading kernel is selected such that its two-dimensional DFT is comprised entirely of non-zero elements (thus enabling the resulting matrix to be inverted without forming singularities). The resulting matrix goes through a DFT transformation of the rows to represent the two-dimensional spread information element. 
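The view of each information element as carrying a shifted copy of the kernel can be checked directly for the simpler case of a pure two-dimensional cyclic shift (omitting the separate tone-multiplication of the row-DFT formulation): summing the scaled, shifted kernel copies reproduces the FFT-based spreading exactly. The snippet below is an illustrative verification of that identity, not the patented implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
x = rng.standard_normal((N, N))   # 2D input data matrix
K = rng.standard_normal((N, N))   # spreading kernel (basis matrix)

# Each information element x[m, n] carries a copy of the kernel that is
# cyclically shifted by (m, n); summing every scaled, shifted copy
# produces the two-dimensional spread matrix.
y_sum = sum(x[m, n] * np.roll(K, shift=(m, n), axis=(0, 1))
            for m in range(N) for n in range(N))

# The same spread matrix, computed via the 2D-FFT implementation of
# the cyclic convolution.
y_fft = np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(K)).real
```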
All resulting transformed matrices are then summed together in order to generate the resulting two-dimensional spread information matrix. As shown inFIG.44B, an arrangement of time-frequency tiling elements4462,4464and4466is configured to effect time-frequency tiling of the two-dimensionally spread input data output by the multiplier4460. The time-frequency tiling elements4464and4466may, for example, be implemented using one or more filter banks. A receiver4470of the second OTFS transceiver4450includes a serial arrangement of inverse time-frequency tiling elements4474,4476,4478configured to effect an inverse of the tiling operation performed by the time-frequency tiling elements4462,4464and4466. A multiplier4480is configured to multiply the output produced by the inverse time-frequency tiling elements4474,4476and4478by an inverse mask. Next, an IFFT block4482converts the output of the multiplier4480and provides the result to a time-frequency-space decision feedforward/feedback analyzer block4488. The equalized data is then converted by an IFFT block4492. Mesh Networking Attention is now directed toFIGS.50-52, which illustratively represent mesh network implementations of OTFS communication systems. The OTFS mesh networks depicted inFIGS.50-52advantageously leverage the time-frequency-space equalization and echo cancellation techniques described herein to enable OTFS mesh nodes to engage in full duplex communication with other such nodes on the same communication channel, whether or not such communication channel is also used by neighboring OTFS mesh nodes. Referring toFIG.50, there is shown an OTFS mesh network5000within the context of a cellular communication system comprised of cell sites5004and associated cell coverage areas5008. As may be appreciated fromFIG.50, significant gaps may exist between the coverage areas5008. 
The mesh network5000comprises a plurality of OTFS wireless mesh nodes5020operative to provide wireless communication coverage to fixed or mobile devices within areas of high demand which are generally outside of the range of the coverage areas5008. For the reasons discussed above, each OTFS wireless mesh node5020may be configured for full duplex wireless communication with other such mesh nodes5020over the same frequency band. This full duplex wireless communication over the same frequency band is represented inFIG.50by wireless communication links5030. In the embodiment ofFIG.50each of the wireless mesh links5030operates over an identical frequency range. Turning now toFIG.51, there is shown an OTFS mesh network5100organized around a set of wired network gateways5110. The mesh network5100comprises a plurality of OTFS wireless mesh nodes5120operative to provide wireless communication to fixed or mobile devices within areas proximate each of the nodes5120. Each OTFS wireless mesh node5120may be configured for full duplex wireless communication with other such mesh nodes5120over the same frequency band. This full duplex wireless communication over the same frequency band is represented inFIG.51by wireless mesh links5130. In the embodiment ofFIG.51, the wireless mesh nodes5120are self-organizing in the sense that the nodes5120are configured to discover each other and to determine all possible paths over links5130to each wired network gateway5110. Accordingly, network routing techniques may be employed to route packetized information between and among the mesh nodes5120and the wired network gateways5110in both directions over the wireless mesh links5130. FIG.52shows an OTFS mesh network system5200comprised of a single-channel wireless mesh network5204including a plurality of mesh elements. 
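The text does not specify how the self-organizing nodes discover routes; as one hypothetical illustration, a breadth-first search rooted at a wired gateway yields, for every mesh node, a next hop on a shortest path over the mesh links. All node names below are invented for the example.

```python
from collections import deque

def next_hops_to_gateway(links, gateway):
    """Breadth-first search rooted at a wired gateway: for every
    reachable mesh node, record the neighbor to forward to on a
    shortest path toward the gateway."""
    next_hop = {gateway: gateway}
    frontier = deque([gateway])
    while frontier:
        node = frontier.popleft()
        for neighbor in links.get(node, ()):
            if neighbor not in next_hop:
                next_hop[neighbor] = node  # forward via 'node' toward the gateway
                frontier.append(neighbor)
    return next_hop

# Invented topology: four mesh nodes and one wired gateway "GW".
mesh_links = {
    "GW": ["A"],
    "A": ["GW", "B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}
hops = next_hops_to_gateway(mesh_links, "GW")
```

Running the search once per gateway gives each node a forwarding choice toward every wired exit, which is one simple basis for the bidirectional packet routing described above.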
In one embodiment certain of the mesh elements of mesh network5204preferably include an OTFS wireless mesh router5210and a traffic aggregation device5220(e.g., an LTE node or Wi-Fi access point) serving end user devices5250within a respective coverage area5254. Each OTFS wireless mesh router5210may be configured for full duplex wireless communication with other such mesh nodes5210over the same frequency band. In the embodiment ofFIG.52, the wireless mesh nodes5210are self-organizing in the sense that the nodes5210are configured to discover each other and to determine all possible paths over OTFS wireless links5230to each wired network gateway5240. Accordingly, network routing techniques may be employed to route packetized information between and among the mesh nodes5210and a wired network5244(via the wired network gateways5240) in both directions over the wireless mesh links5230. As shown, the wired network5244may provide a conduit to a wide area network through which information packets are routed between the mesh network5204and a core network5260of a mobile network operator. In one embodiment mesh spatial gain may be achieved by using neighboring mesh nodes5210to support the simultaneous parallel transmission of streams of information using an identical frequency band over a single point to point link. This approach may improve signal transmission gain by using neighboring nodes5210to effectively create a distributed transmit source, thereby achieving gain through spatial signal separation. Some embodiments of the systems and methods described herein may include computer software and/or computer hardware/software combinations configured to implement one or more processes or functions associated with the methods such as those described above and/or in the related applications. These embodiments may be in the form of modules implementing functionality in software and/or hardware/software combinations. 
Embodiments may also take the form of a computer storage product with a computer-readable medium having computer code thereon for performing various computer-implemented operations, such as operations related to functionality as described herein. The media and computer code may be those specially designed and constructed for the purposes of the claimed systems and methods, or they may be of the kind well known and available to those having skill in the computer software arts, or they may be a combination of both. Examples of computer-readable media within the spirit and scope of this disclosure include, but are not limited to: magnetic media such as hard disks; optical media such as CD-ROMs, DVDs and holographic devices; magneto-optical media; and hardware devices that are specially configured to store and execute program code, such as programmable microcontrollers, application-specific integrated circuits (“ASICs”), programmable logic devices (“PLDs”) and ROM and RAM devices. Examples of computer code may include machine code, such as produced by a compiler, and files containing higher-level code that are executed by a computer using an interpreter. Computer code may be comprised of one or more modules executing a particular process or processes to provide useful results, and the modules may communicate with one another via means known in the art. For example, some embodiments of systems described herein may be implemented using assembly language, Java, C, C#, C++, or other programming languages and software development tools as are known in the art. Other embodiments of the described systems may be implemented in hardwired circuitry in place of, or in combination with, machine-executable software instructions. The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the claimed systems and methods. 
However, it will be apparent to one skilled in the art that specific details are not required in order to practice the systems and methods described herein. Thus, the foregoing descriptions of specific embodiments of the described systems and methods are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the claims to the precise forms disclosed; obviously, many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the described systems and methods and their practical applications, thereby enabling others skilled in the art to best utilize the described systems and methods and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the following claims and their equivalents define the scope of the systems and methods described herein.
MODE FOR INVENTION Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. Redundant descriptions and descriptions of well-known functions and configurations that have been deemed to make the gist of the present invention unnecessarily obscure will be omitted below. The embodiments of the present invention are intended to fully describe the present invention to persons having ordinary knowledge in the art to which the present invention pertains. Accordingly, the shapes, sizes, etc. of components in the drawings may be exaggerated to make the description obvious. Preferred embodiments according to the present invention are described in detail with reference to the accompanying drawings. FIG.1is a block diagram illustrating a broadcast signal transmission/reception system according to an embodiment of the present invention. Referring toFIG.1, a broadcast signal transmission/reception system according to an embodiment of the present invention includes a broadcast signal transmission apparatus110, a wireless channel120, and a broadcast signal reception apparatus130. The broadcast signal transmission apparatus110includes a signal multiplexer111for multiplexing core layer data and enhanced layer data, and an OFDM transmitter113. The signal multiplexer111combines a core layer signal corresponding to core layer data and an enhanced layer signal corresponding to enhanced layer data at different power levels, and generates a multiplexed signal by performing interleaving applied to both the core layer signal and the enhanced layer signal. The OFDM transmitter113transmits the multiplexed signal using an OFDM communication method via an antenna117, thereby allowing the transmitted OFDM signal to be received via the antenna137of the broadcast signal reception apparatus130over the wireless channel120. The broadcast signal reception apparatus130includes an OFDM receiver133and a signal demultiplexer131. 
When the signal transmitted over the wireless channel120is received via the antenna137, the OFDM receiver133receives an OFDM signal through synchronization, channel estimation, and equalization. The signal demultiplexer131restores the core layer data from the signal received via the OFDM receiver133first, and then restores the enhanced layer data via cancellation corresponding to the restored core layer data. Although not explicitly illustrated inFIG.1, the broadcast signal transmission/reception system according to an embodiment of the present invention may multiplex/demultiplex one or more pieces of extension layer data in addition to the core layer data and the enhanced layer data. In this case, the extension layer data may be multiplexed at a power level lower than that of the core layer data and the enhanced layer data. Furthermore, when two or more extension layers are included, the injection power level of a second extension layer may be lower than the injection power level of a first extension layer, and the injection power level of a third extension layer may be lower than the injection power level of the second extension layer. FIG.2is an operation flowchart illustrating a broadcast signal transmission/reception method according to an embodiment of the present invention. Referring toFIG.2, in the broadcast signal transmission/reception method according to the present embodiment, a core layer signal and an enhanced layer signal are combined at different power levels to multiplex the signals at step S210. Furthermore, in the broadcast signal transmission/reception method according to the present embodiment, the multiplexed signal is OFDM transmitted at step S220. Furthermore, in the broadcast signal transmission/reception method according to the present embodiment, the transmitted signal is OFDM received at step S230. In this case, at step S230, synchronization, channel estimation and equalization may be performed. 
Furthermore, in the broadcast signal transmission/reception method according to the present embodiment, core layer data is restored from the received signal at step S240. Furthermore, in the broadcast signal transmission/reception method according to the present embodiment, enhanced layer data is restored through the cancellation of the core layer signal at step S250. In particular, steps S240and S250illustrated inFIG.2may correspond to demultiplexing operations corresponding to step S210. FIG.3is a block diagram illustrating an example of the signal multiplexer illustrated inFIG.1. Referring toFIG.3, a signal multiplexer according to an embodiment of the present invention may include a core layer Bit-Interleaved Coded Modulation (BICM) unit310, an enhanced layer BICM unit320, an injection level controller330, a combiner340, and a time interleaver350. Generally, a Bit-Interleaved Coded Modulation (BICM) device includes an error correction encoder, a bit interleaver, and a symbol mapper. Each of the core layer BICM unit310and the enhanced layer BICM unit320illustrated inFIG.3may include an error correction encoder, a bit interleaver, and a symbol mapper. As illustrated inFIG.3, core layer data and enhanced layer data pass through different respective BICM units, and are then combined by the combiner340. That is, the core layer data passes through the core layer BICM unit310, the enhanced layer data passes through the enhanced layer BICM unit320and then the injection level controller330, and the core layer data and the enhanced layer data are combined by the combiner340. In this case, the enhanced layer BICM unit320may perform BICM encoding different from that of the core layer BICM unit310. That is, the enhanced layer BICM unit320may perform higher bit rate error correction encoding or symbol mapping than the core layer BICM unit310. 
Furthermore, the enhanced layer BICM unit320may perform less robust error correction encoding or symbol mapping than the core layer BICM unit310. For example, the core layer error correction encoder may exhibit a lower bit rate than the enhanced layer error correction encoder. In this case, the enhanced layer symbol mapper may be less robust than the core layer symbol mapper. The combiner340may be viewed as functioning to combine the core layer signal and the enhanced layer signal at different power levels. The core layer data uses forward error correction (FEC) code having a low code rate in order to perform robust reception, while the enhanced layer data uses FEC code having a high code rate in order to achieve a high data transmission rate. That is, the core layer data may have a broader coverage than the enhanced layer data in the same reception environment. The enhanced layer data having passed through the enhanced layer BICM unit320is adjusted in gain (or power) by the injection level controller330, and is combined with the core layer data by the combiner340. That is, the injection level controller330generates a power-reduced enhanced layer signal by reducing the power of the enhanced layer signal. In this case, the combiner340may be viewed as generating a multiplexed signal by combining the core layer signal with the power-reduced enhanced layer signal. The data obtained through the combination of the combiner340passes through the time interleaver350for distributing burst errors occurring over a channel, and is transmitted via the OFDM transmitter robust to multi-path and Doppler phenomena. In this case, it can be seen that the time interleaver350performs interleaving that is applied to both the core layer signal and the enhanced layer signal. That is, the core layer and the enhanced layer share the time interleaver, thereby preventing the unnecessary use of memory and also reducing latency at the receiver. 
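The combining step performed by the injection level controller and the combiner can be sketched in a few lines. This is a simplified model, not the patented implementation: both layers are taken as unit-power symbol streams, and the injection level in dB is converted to an amplitude gain applied to the enhanced layer before addition; all names are illustrative.

```python
import numpy as np

def combine_layers(core, enhanced, injection_db):
    """Reduce the enhanced layer by the injection level (in dB) and
    add it to the core layer signal."""
    gain = 10.0 ** (-injection_db / 20.0)  # amplitude gain for a power cut of injection_db
    return core + gain * enhanced

# Unit-power QPSK-like symbol streams for both layers (illustrative).
rng = np.random.default_rng(2)
core = (rng.choice([-1, 1], 256) + 1j * rng.choice([-1, 1], 256)) / np.sqrt(2)
enhanced = (rng.choice([-1, 1], 256) + 1j * rng.choice([-1, 1], 256)) / np.sqrt(2)
multiplexed = combine_layers(core, enhanced, injection_db=3.0)
```

At a 3 dB injection level the enhanced layer rides at half the core layer's power, so the multiplexed signal carries roughly 1 + 10^(-3/10), or about 1.5 times, the core power; this is the power increase illustrated inFIG.7.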
Although this will be described later in greater detail, the enhanced layer signal may correspond to enhanced layer data restored based on cancellation corresponding to the restoration of core layer data corresponding to the core layer signal. FIG.4is a block diagram illustrating another example of the signal multiplexer illustrated inFIG.1. Referring toFIG.4, it can be seen that a signal multiplexer multiplexes data corresponding to N (N is a natural number equal to or larger than 1) extension layers together in addition to core layer data and enhanced layer data. That is, the signal multiplexer illustrated inFIG.4includes N extension layer BICM units410, . . . ,430and injection level controllers440, . . . ,460in addition to a core layer BICM unit310, an enhanced layer BICM unit320, an injection level controller330, a combiner340, and a time interleaver350. The core layer BICM unit310, enhanced layer BICM unit320, injection level controller330, combiner340and time interleaver350illustrated inFIG.4have been described in detail in conjunction withFIG.3. Each of the N extension layer BICM units410, . . . ,430independently performs BICM encoding, and each of the injection level controllers440, . . . ,460performs power reduction corresponding to a corresponding extension layer, thereby enabling a power-reduced extension layer signal to be combined with other layer signals via the combiner340. In particular, it is preferred that a reduction in power corresponding to each of the injection level controllers440, . . . ,460be higher than a reduction in power of the injection level controller330. That is, a lower one of the injection level controllers330,440, . . . ,460illustrated inFIG.4may correspond to a larger reduction in power. In the present invention, power adjustment may be increasing or decreasing the power of an input signal, and may be increasing or decreasing the gain of an input signal. 
The time interleaver350performs interleaving equally applied to the signals of the layers by interleaving the signals combined by the combiner340. FIG.5is a block diagram illustrating an example of the signal demultiplexer illustrated inFIG.1. Referring toFIG.5, a signal demultiplexer according to an embodiment of the present invention includes a time deinterleaver510, a core layer BICM decoder520, an enhanced layer symbol extractor530, and an enhanced layer BICM decoder540. In this case, the signal demultiplexer illustrated inFIG.5may correspond to the signal multiplexer illustrated inFIG.3. The time deinterleaver510receives a received signal from an OFDM receiver for performing operations, such as synchronization, channel estimation and equalization, and performs an operation related to the distribution of burst errors occurring over a channel. The output of the time deinterleaver510is provided to the core layer BICM decoder520, and the core layer BICM decoder520restores core layer data. In this case, the core layer BICM decoder520includes a core layer symbol demapper, a core layer bit deinterleaver, and a core layer error correction decoder. The core layer symbol demapper calculates Log-Likelihood Ratio (LLR) values related to symbols, the core layer bit deinterleaver strongly mixes the calculated LLR values with burst errors, and the core layer error correction decoder corrects error occurring over a channel. In particular, the core layer error correction decoder may output only information bits, or may output whole bits in which information bits have been mixed with parity bits. In this case, the core layer error correction decoder may output only information bits as core layer data, and may output whole bits in which information bits have been mixed with parity bits to the enhanced layer symbol extractor530. 
The enhanced layer symbol extractor530is provided with the whole bits by the core layer error correction decoder of the core layer BICM decoder520, and extracts enhanced layer symbols from the output signal of the time deinterleaver510. In this case, the enhanced layer symbol extractor530includes a buffer, a subtracter, a core layer symbol mapper, and a core layer bit interleaver. The buffer stores the output signal of the time deinterleaver510. The core layer bit interleaver receives the whole bits (information bits+parity bits) of the core layer BICM decoder, and performs the same core layer bit interleaving as the transmitter. The core layer symbol mapper generates core layer symbols, which are the same as those of the transmitter, from the interleaved signal. The subtracter obtains enhanced layer symbols by subtracting the output signal of the core layer symbol mapper from the signal stored in the buffer, and transfers the enhanced layer symbols to the enhanced layer BICM decoder540. In this case, the core layer bit interleaver and core layer symbol mapper included in the enhanced layer symbol extractor530may be the same as the core layer bit interleaver and the core layer symbol mapper illustrated inFIG.3. The enhanced layer BICM decoder540receives the enhanced layer symbols, and restores enhanced layer data. In this case, the enhanced layer BICM decoder540may include an enhanced layer symbol demapper, an enhanced layer bit deinterleaver, and an enhanced layer error correction decoder. The enhanced layer symbol demapper calculates Log-Likelihood Ratio (LLR) values related to the enhanced layer symbols, the enhanced layer bit deinterleaver strongly mixes the calculated LLR values with burst errors, and the enhanced layer error correction decoder corrects error occurring over a channel. 
That is, the signal demultiplexer illustrated inFIG.5restores core layer data first, leaves only enhanced layer symbols by cancelling core layer symbols from the received signal symbols, and then restores enhanced layer data. Since signals corresponding to respective layers are combined at different power levels, as described in conjunction withFIGS.3and4, data restoration having the lowest error is achieved only when the restoration starts with the signal combined at the highest power level. FIG.6is a block diagram illustrating another example of the signal demultiplexer illustrated inFIG.1. Referring toFIG.6, a signal demultiplexer according to an embodiment of the present invention includes a time deinterleaver510, a core layer BICM decoder520, an enhanced layer symbol extractor530, an enhanced layer BICM decoder540, one or more extension layer symbol extractors650and670, and one or more extension layer BICM decoders660and680. In this case, the signal demultiplexer illustrated inFIG.6may correspond to the signal multiplexer illustrated inFIG.4. The time deinterleaver510receives the received signal from the OFDM receiver that performs operations, such as synchronization, channel estimation and equalization, and performs the operation of distributing burst errors occurring over a channel. The output of the time deinterleaver510is provided to the core layer BICM decoder520, and the core layer BICM decoder520restores core layer data. In this case, the core layer BICM decoder520includes a core layer symbol demapper, a core layer bit deinterleaver, and a core layer error correction decoder. The core layer symbol demapper calculates LLR values related to symbols, the core layer bit deinterleaver strongly mixes the calculated LLR values with burst errors, and the core layer error correction decoder corrects error occurring over a channel. 
In particular, the core layer error correction decoder may output only information bits, or may output whole bits in which information bits have been combined with parity bits. In this case, the core layer error correction decoder may output only information bits as core layer data, and may output whole bits in which information bits have been mixed with parity bits to the enhanced layer symbol extractor530. The enhanced layer symbol extractor530receives whole bits from the core layer error correction decoder of the core layer BICM decoder520, and extracts enhanced layer symbols from the output signal of the time deinterleaver510. In this case, the enhanced layer symbol extractor530includes a buffer, a subtracter, a core layer symbol mapper, and a core layer bit interleaver. The buffer stores the output signal of the time deinterleaver510. The core layer bit interleaver receives whole bits (information bits+parity bits) of the core layer BICM decoder, and performs core layer bit interleaving that is the same as that of the transmitter. The core layer symbol mapper generates core layer symbols that are the same as those of the transmitter from the interleaved signal. The subtracter obtains enhanced layer symbols by subtracting the output signal of the core layer symbol mapper from the signal stored in the buffer, and transfers the enhanced layer symbols to the enhanced layer BICM decoder540. In this case, the core layer bit interleaver and the core layer symbol mapper included in the enhanced layer symbol extractor530may be the same as the core layer bit interleaver and the core layer symbol mapper illustrated inFIG.4. The enhanced layer BICM decoder540receives enhanced layer symbols, and restores enhanced layer data. In this case, the enhanced layer BICM decoder540may include an enhanced layer symbol demapper, an enhanced layer bit deinterleaver, and an enhanced layer error correction decoder. 
The enhanced layer symbol demapper calculates LLR values related to the enhanced layer symbols, the enhanced layer bit deinterleaver strongly mixes the calculated LLR values with burst errors, and the enhanced layer error correction decoder corrects error occurring over a channel. In particular, the enhanced layer error correction decoder may output only information bits, or may output whole bits in which information bits have been combined with parity bits. In this case, the enhanced layer error correction decoder may output only information bits as enhanced layer data, and may output whole bits in which information bits have been mixed with parity bits to the extension layer symbol extractor650. The extension layer symbol extractor650receives whole bits from the enhanced layer error correction decoder of the enhanced layer BICM decoder540, and extracts extension layer symbols from the output signal of the subtracter of the enhanced layer symbol extractor530. In this case, the extension layer symbol extractor650includes a buffer, a subtracter, an enhanced layer symbol mapper, and an enhanced layer bit interleaver. The buffer stores the output signal of the subtracter of the enhanced layer symbol extractor. The enhanced layer bit interleaver receives the whole bits (information bits+parity bits) of the enhanced layer BICM decoder, and performs enhanced layer bit interleaving that is the same as that of the transmitter. The enhanced layer symbol mapper generates enhanced layer symbols, which are the same as those of the transmitter, from the interleaved signal. The subtracter obtains extension layer symbols by subtracting the output signal of the enhanced layer symbol mapper from the signal stored in the buffer, and transfers the extension layer symbols to the extension layer BICM decoder660. 
In this case, the enhanced layer bit interleaver and the enhanced layer symbol mapper included in the extension layer symbol extractor650are the same as the enhanced layer bit interleaver and the enhanced layer symbol mapper illustrated inFIG.4. The extension layer BICM decoder660receives the extension layer symbols, and restores extension layer data. In this case, the extension layer BICM decoder660may include an extension layer symbol demapper, an extension layer bit deinterleaver, and an extension layer error correction decoder. The extension layer symbol demapper calculates LLR values related to the extension layer symbols, the extension layer bit deinterleaver strongly mixes the calculated LLR values with burst errors, and the extension layer error correction decoder corrects error occurring over a channel. In particular, each of the extension layer symbol extractor and the extension layer BICM decoder may include two or more extractors or decoders if the extension layers are two or more in number. That is, in the example illustrated inFIG.6, the extension layer error correction decoder of the extension layer BICM decoder660may output only information bits, and may output whole bits in which information bits have been combined with parity bits. In this case, the extension layer error correction decoder outputs only information bits as extension layer data, and may output whole bits in which information bits have been mixed with parity bits to the subsequent extension layer symbol extractor670. The configuration and operation of the extension layer symbol extractor670and the extension layer BICM decoder680can be easily understood from the configuration and operation of the above-described extension layer symbol extractor650and extension layer BICM decoder660. 
It can be seen that the signal demultiplexer illustrated inFIG.6restores core layer data first, restores enhanced layer data using the cancellation of core layer symbols, and restores extension layer data using the cancellation of enhanced layer symbols. Two or more extension layers may be provided, in which case restoration starts with an extension layer combined at a higher power level. Since the signal multiplexer illustrated inFIGS.3and4is configured such that two or more signals are combined at different power levels, it may be necessary to adjust the power levels after combination. That is, when a core layer signal and an enhanced layer signal are combined by a combiner, the power level of an obtained multiplexing signal may be higher than that of the core layer signal or enhanced layer signal before the combination, and thus a problem, such as the distortion of a signal, attributable to such an increase in power may occur during signal transmission/reception. FIG.7is a diagram showing an increase in power attributable to the combination of a core layer signal and an enhanced layer signal. Referring toFIG.7, it can be seen that when a multiplexed signal is generated by combining a core layer signal with an enhanced layer signal power reduced by an injection level, the power level of the multiplexed signal is higher than that of the core layer signal or enhanced layer signal. In this case, the injection level adjusted by the injection level controller illustrated inFIG.3or4may be adjusted from 3.0 dB to 10.0 dB in steps of 0.5 dB. When the injection level is 3.0 dB, the power of the enhanced layer signal is lower than that of the core layer signal by 3 dB. When the injection level is 10.0 dB, the power of the enhanced layer signal is lower than the power of the core layer signal by 10 dB. 
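The restoration order described above (the higher-power layer first, then cancellation, then the next layer) can be illustrated with a toy numerical model. The sketch below strips out FEC, interleaving, and soft LLRs, using hard-decision BPSK for both layers; it is an illustration of the cancellation principle only, not the patented decoder.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
core_bits = rng.integers(0, 2, n)
enh_bits = rng.integers(0, 2, n)

g = 10.0 ** (-6.0 / 20.0)          # 6 dB injection level (amplitude gain)
core_sym = 2.0 * core_bits - 1.0   # BPSK mapping: 0 -> -1, 1 -> +1
enh_sym = 2.0 * enh_bits - 1.0
received = core_sym + g * enh_sym + 0.05 * rng.standard_normal(n)

# Step 1: restore the higher-power core layer, treating the injected
# enhanced layer as additional noise.
core_hat = (received > 0).astype(int)

# Step 2: remap the restored core bits to symbols and cancel them
# from the received signal, leaving the enhanced layer plus noise.
residual = received - (2.0 * core_hat - 1.0)

# Step 3: restore the enhanced layer from the residual.
enh_hat = (residual > 0).astype(int)
```

Reversing the order would fail: sliced against the full-power core symbols, the half-power enhanced layer cannot be detected reliably, which is why restoration must start with the layer combined at the highest power level.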
This relationship applies not only between the core layer signal and the enhanced layer signal, but also between the enhanced layer signal and an extension layer signal, or between extension layer signals. FIG. 8 is a block diagram illustrating another example of the signal multiplexer illustrated in FIG. 1. Referring to FIG. 8, a signal multiplexer according to an embodiment of the present invention may include a core layer BICM unit 310, an enhanced layer BICM unit 320, an injection level controller 330, a combiner 340, a power normalizer 810, and a time interleaver 350. Generally, a BICM device includes an error correction encoder, a bit interleaver, and a symbol mapper. Each of the core layer BICM unit 310 and the enhanced layer BICM unit 320 illustrated in FIG. 8 may include an error correction encoder, a bit interleaver, and a symbol mapper. As illustrated in FIG. 8, the core layer data and the enhanced layer data pass through different respective BICM units, and are then combined by the combiner 340. That is, the core layer data passes through the core layer BICM unit 310, the enhanced layer data passes through the enhanced layer BICM unit 320 and then the injection level controller 330, and the core layer data and the enhanced layer data are combined by the combiner 340. In this case, the enhanced layer BICM unit 320 may perform BICM encoding different from that of the core layer BICM unit 310. That is, the enhanced layer BICM unit 320 may perform higher bit rate error correction encoding or symbol mapping than the core layer BICM unit 310. Furthermore, the enhanced layer BICM unit 320 may perform less robust error correction encoding or symbol mapping than the core layer BICM unit 310. For example, the core layer error correction encoder may exhibit a lower bit rate than the enhanced layer error correction encoder, and the enhanced layer symbol mapper may be less robust than the core layer symbol mapper.
The combiner 340 may be viewed as functioning to combine the core layer signal and the enhanced layer signal at different power levels. The core layer data uses a forward error correction (FEC) code having a low code rate in order to achieve robust reception, while the enhanced layer data uses an FEC code having a high code rate in order to achieve a high data transmission rate. That is, the core layer data may have broader coverage than the enhanced layer data in the same reception environment. The enhanced layer data having passed through the enhanced layer BICM unit 320 is adjusted in gain (or power) by the injection level controller 330, and is combined with the core layer data by the combiner 340. That is, the injection level controller 330 generates a power-reduced enhanced layer signal by reducing the power of the enhanced layer signal. In this case, the injection level controller 330 may control the injection level from 3.0 dB to 10.0 dB in steps of 0.5 dB. In this case, the combiner 340 may be viewed as generating a multiplexed signal by combining the core layer signal with the power-reduced enhanced layer signal. The signal obtained by the combination of the combiner 340 is provided to the power normalizer 810, which reduces the power of the signal by the amount of the increase in power caused by the combination of the core layer signal and the enhanced layer signal. That is, the power normalizer 810 reduces the power of the signal obtained by the multiplexing of the combiner 340 to a power level corresponding to the core layer signal. Since the level of the combined signal is higher than the level of either single-layer signal, the power normalizing of the power normalizer 810 is required in order to prevent amplitude clipping, etc. in the remainder of the broadcast signal transmission/reception system.
Assuming that the power levels of the core layer signal and the enhanced layer signal are each normalized to 1 when an enhanced layer signal S_E is injected into a core layer signal S_C at a preset injection level, the combined signal may be expressed by S_C + αS_E. In this case, α is the scaling factor corresponding to the chosen injection level; that is, the injection level controller 330 corresponds to the scaling factor. For example, when the injection level of the enhanced layer is 3 dB, the combined signal may be expressed by S_C + (1/√2)S_E. Since the power of the combined signal (the multiplexed signal) is increased compared to that of the core layer signal, the power normalizer 810 needs to mitigate this increase in power. The output of the power normalizer 810 may be expressed by β(S_C + αS_E), where β is the normalizing factor corresponding to the injection level of the enhanced layer. When the injection level of the enhanced layer is 3 dB, the power of the combined signal is increased by 50% compared to that of the core layer signal, and accordingly the output of the power normalizer 810 may be expressed by √(2/3)·(S_C + (1/√2)S_E). Table 1 below lists the scaling factors α and normalizing factors β for various injection levels (CL: Core Layer, EL: Enhanced Layer):

TABLE 1

EL injection level    Scaling      Normalizing
relative to CL        factor α     factor β
3.0 dB                0.7079458    0.8161736
3.5 dB                0.6683439    0.8314061
4.0 dB                0.6309573    0.8457262
4.5 dB                0.5956621    0.8591327
5.0 dB                0.5623413    0.8716346
5.5 dB                0.5308844    0.8832495
6.0 dB                0.5011872    0.8940022
6.5 dB                0.4731513    0.9039241
7.0 dB                0.4466836    0.9130512
7.5 dB                0.4216965    0.9214231
8.0 dB                0.3981072    0.9290819
8.5 dB                0.3758374    0.9360712
9.0 dB                0.3548134    0.9424353
9.5 dB                0.3349654    0.9482180
10.0 dB               0.3162278    0.9534626

That is, the power normalizer 810 corresponds to the normalizing factor, and reduces the power of the multiplexed signal by the amount by which the combiner 340 has increased the power. In this case, each of the normalizing factor and the scaling factor may be a rational number larger than 0 and smaller than 1.
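The relations above determine both factors directly from the injection level. As a quick illustration (a sketch, not part of the patent; the function name is our own), α = 10^(−IL/20) and β = 1/√(1 + α²) reproduce the entries of Table 1:

```python
import math

def injection_factors(injection_level_db):
    # Scaling factor alpha: amplitude reduction equivalent to the injection level.
    alpha = 10.0 ** (-injection_level_db / 20.0)
    # Normalizing factor beta: brings the combined power 1 + alpha^2 back to 1.
    beta = 1.0 / math.sqrt(1.0 + alpha ** 2)
    return alpha, beta

alpha, beta = injection_factors(3.0)
print(round(alpha, 7), round(beta, 7))  # 0.7079458 0.8161736
```

The same formulas reproduce the 10.0 dB row of Table 1 (0.3162278, 0.9534626), so the table can be regenerated for any step in the 3.0 dB to 10.0 dB range.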
In this case, the scaling factor may decrease as the reduction in power corresponding to the injection level controller 330 becomes larger, and the normalizing factor may increase as the reduction in power corresponding to the injection level controller 330 becomes larger. The power-normalized signal passes through the time interleaver 350 for distributing burst errors occurring over a channel, and is transmitted via the OFDM transmitter, which is robust to multi-path and Doppler phenomena. In this case, it can be seen that the time interleaver 350 performs interleaving that is applied to both the core layer signal and the enhanced layer signal. That is, the core layer and the enhanced layer share the time interleaver, thereby preventing the unnecessary use of memory and also reducing latency at the receiver. Although this will be described later in greater detail, the enhanced layer signal may correspond to enhanced layer data restored based on cancellation corresponding to the restoration of the core layer data corresponding to the core layer signal. The combiner 340 may combine one or more extension layer signals, having power levels lower than those of the core layer signal and the enhanced layer signal, with the core layer signal and the enhanced layer signal. FIG. 9 is a block diagram illustrating still another example of the signal multiplexer illustrated in FIG. 1. Referring to FIG. 9, it can be seen that the signal multiplexer multiplexes data corresponding to N (N is a natural number equal to or larger than 1) extension layers together, in addition to core layer data and enhanced layer data. That is, the signal multiplexer illustrated in FIG. 9 includes N extension layer BICM units 410, . . . , 430 and injection level controllers 440, . . . , 460, in addition to a core layer BICM unit 310, an enhanced layer BICM unit 320, an injection level controller 330, a combiner 340, a power normalizer 810, and a time interleaver 350.
The core layer BICM unit 310, enhanced layer BICM unit 320, injection level controller 330, combiner 340 and time interleaver 350 illustrated in FIG. 9 have been described in detail in conjunction with FIG. 3. Each of the N extension layer BICM units 410, . . . , 430 independently performs BICM encoding, and each of the injection level controllers 440, . . . , 460 performs the power reduction corresponding to its extension layer, thereby enabling a power-reduced extension layer signal to be combined with the other layer signals via the combiner 340. In particular, it is preferred that the reduction in power corresponding to each of the injection level controllers 440, . . . , 460 be higher than the reduction in power of the injection level controller 330. That is, a lower one of the injection level controllers 330, 440, . . . , 460 illustrated in FIG. 9 may correspond to a larger reduction in power. In the present invention, power adjustment may mean increasing or decreasing the power of an input signal, or increasing or decreasing the gain of an input signal. The power normalizer 810 mitigates the increase in power caused by the combination of the plurality of layer signals by the combiner 340. The time interleaver 350 performs interleaving equally applied to the signals of all the layers by interleaving the normalized signal. FIG. 10 is a block diagram illustrating still another example of the signal demultiplexer illustrated in FIG. 1. Referring to FIG. 10, a signal demultiplexer according to an embodiment of the present invention includes a time deinterleaver 510, a de-normalizer 1010, a core layer BICM decoder 520, an enhanced layer symbol extractor 530, a de-injection level controller 1020, and an enhanced layer BICM decoder 540. In this case, the signal demultiplexer illustrated in FIG. 10 may correspond to the signal multiplexer illustrated in FIG. 8.
The time deinterleaver 510 receives a signal from an OFDM receiver, which performs operations such as synchronization, channel estimation and equalization, and performs an operation related to the distribution of burst errors occurring over a channel. The de-normalizer 1010 corresponds to the power normalizer of the transmitter, and increases power by the amount by which the power normalizer has decreased it. Although the de-normalizer 1010 is illustrated as adjusting the power of the output signal of the time deinterleaver 510 in the example illustrated in FIG. 10, the de-normalizer 1010 may be located before the time deinterleaver 510 so that power adjustment is performed before deinterleaving in some embodiments. That is, the de-normalizer 1010 may be viewed as being located before or after the time deinterleaver 510 and amplifying the magnitude of the signal for the purpose of the LLR calculation of the core layer symbol demapper. The output of the time deinterleaver 510 (or the output of the de-normalizer 1010) is provided to the core layer BICM decoder 520, and the core layer BICM decoder 520 restores core layer data. In this case, the core layer BICM decoder 520 includes a core layer symbol demapper, a core layer bit deinterleaver, and a core layer error correction decoder. The core layer symbol demapper calculates LLR values related to the symbols, the core layer bit deinterleaver strongly mixes the calculated LLR values with burst errors, and the core layer error correction decoder corrects errors occurring over the channel. In particular, the core layer error correction decoder may output only information bits, or may output the whole bits in which information bits have been mixed with parity bits. In this case, the core layer error correction decoder may output only information bits as core layer data, and may output the whole bits in which information bits have been mixed with parity bits to the enhanced layer symbol extractor 530.
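As a minimal sketch of the demapper's LLR step (assuming BPSK over an AWGN channel for simplicity; the patent does not fix a particular constellation, and the function name is our own):

```python
def bpsk_llr(y, noise_var):
    # LLR of a bit mapped to s = +1/-1, observed as y = s + n with n ~ N(0, noise_var):
    # LLR = log[ p(y | s=+1) / p(y | s=-1) ] = 2*y / noise_var.
    # Positive values favor the bit mapped to +1; the magnitude conveys reliability.
    return 2.0 * y / noise_var

print(bpsk_llr(0.8, 0.5))  # 3.2
```

Higher-order constellations require per-bit LLRs computed over the constellation points, but the sign-plus-reliability interpretation passed to the bit deinterleaver is the same.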
The enhanced layer symbol extractor 530 is provided with the whole bits by the core layer error correction decoder of the core layer BICM decoder 520, and extracts enhanced layer symbols from the output signal of the time deinterleaver 510. In this case, the enhanced layer symbol extractor 530 includes a buffer, a subtracter, a core layer symbol mapper, and a core layer bit interleaver. The buffer stores the output signal of the time deinterleaver 510 or de-normalizer 1010. The core layer bit interleaver receives the whole bits (information bits + parity bits) of the core layer BICM decoder, and performs the same core layer bit interleaving as the transmitter. The core layer symbol mapper generates core layer symbols, which are the same as those of the transmitter, from the interleaved signal. The subtracter obtains enhanced layer symbols by subtracting the output signal of the core layer symbol mapper from the signal stored in the buffer, and transfers the enhanced layer symbols to the de-injection level controller 1020. In this case, the core layer bit interleaver and core layer symbol mapper included in the enhanced layer symbol extractor 530 may be the same as the core layer bit interleaver and the core layer symbol mapper illustrated in FIG. 8. The de-injection level controller 1020 receives the enhanced layer symbols, and increases the power of the input signal by the amount by which the injection level controller of the transmitter decreased it. That is, the de-injection level controller 1020 amplifies the input signal, and provides the amplified input signal to the enhanced layer BICM decoder 540. For example, if at the transmitter the power used to combine the enhanced layer signal is lower than the power used to combine the core layer signal by 3 dB, the de-injection level controller 1020 functions to increase the power of the input signal by 3 dB.
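The extractor's buffer/subtracter chain, together with the de-injection level controller, can be sketched in a few lines (a noiseless toy example with made-up symbols; it assumes the core layer was decoded without error, so the remapped core layer symbols equal the transmitted ones):

```python
alpha = 10 ** (-6.0 / 20)  # 6 dB injection level applied at the transmitter

core = [1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j]   # remapped core layer symbols
enh = [-1 + 1j, 1 + 1j, -1 - 1j, 1 - 1j]    # transmitted enhanced layer symbols

# Combiner output, as buffered by the enhanced layer symbol extractor 530.
received = [c + alpha * e for c, e in zip(core, enh)]

# Subtracter: cancel the core layer symbols, leaving power-reduced enhanced symbols.
residual = [r - c for r, c in zip(received, core)]

# De-injection level controller 1020: undo the transmitter's power reduction.
restored = [x / alpha for x in residual]

print(all(abs(r - e) < 1e-12 for r, e in zip(restored, enh)))  # True
```

With channel noise the residual would contain noise plus any core layer remapping error, which is why the text stresses that the remapped symbols must match those of the transmitter exactly.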
The enhanced layer BICM decoder 540 receives the enhanced layer symbols whose power has been increased by the de-injection level controller 1020, and restores the enhanced layer data. In this case, the enhanced layer BICM decoder 540 may include an enhanced layer symbol demapper, an enhanced layer bit deinterleaver, and an enhanced layer error correction decoder. The enhanced layer symbol demapper calculates LLR values related to the enhanced layer symbols, the enhanced layer bit deinterleaver strongly mixes the calculated LLR values with burst errors, and the enhanced layer error correction decoder corrects errors occurring over the channel. That is, the signal demultiplexer illustrated in FIG. 10 restores core layer data first, leaves only the enhanced layer symbols by cancelling the core layer symbols in the received signal symbols, and then restores enhanced layer data by increasing the power of the enhanced layer symbols. FIG. 11 is a block diagram illustrating still another example of the signal demultiplexer illustrated in FIG. 1. Referring to FIG. 11, a signal demultiplexer according to an embodiment of the present invention includes a time deinterleaver 510, a de-normalizer 1010, a core layer BICM decoder 520, an enhanced layer symbol extractor 530, an enhanced layer BICM decoder 540, one or more extension layer symbol extractors 650 and 670, one or more extension layer BICM decoders 660 and 680, and de-injection level controllers 1020, 1150 and 1170. In this case, the signal demultiplexer illustrated in FIG. 11 may correspond to the signal multiplexer illustrated in FIG. 9. The time deinterleaver 510 receives a signal from an OFDM receiver, which performs operations such as synchronization, channel estimation and equalization, and performs an operation related to the distribution of burst errors occurring over a channel. The de-normalizer 1010 corresponds to the power normalizer of the transmitter, and increases power by the amount by which the power normalizer has decreased it.
Although the de-normalizer 1010 is illustrated as adjusting the power of the output signal of the time deinterleaver 510 in the example illustrated in FIG. 11, the de-normalizer 1010 may be located before the time deinterleaver 510 so that power adjustment is performed before deinterleaving in some embodiments. That is, the de-normalizer 1010 may be viewed as being located before or after the time deinterleaver 510 and amplifying the magnitude of the signal for the purpose of the LLR calculation of the core layer symbol demapper. The output of the time deinterleaver 510 (or the output of the de-normalizer 1010) is provided to the core layer BICM decoder 520, and the core layer BICM decoder 520 restores core layer data. In this case, the core layer BICM decoder 520 includes a core layer symbol demapper, a core layer bit deinterleaver, and a core layer error correction decoder. The core layer symbol demapper calculates LLR values related to the symbols, the core layer bit deinterleaver strongly mixes the calculated LLR values with burst errors, and the core layer error correction decoder corrects errors occurring over the channel. In particular, the core layer error correction decoder may output only information bits, or may output the whole bits in which information bits have been mixed with parity bits. In this case, the core layer error correction decoder may output only information bits as core layer data, and may output the whole bits in which information bits have been mixed with parity bits to the enhanced layer symbol extractor 530. The enhanced layer symbol extractor 530 is provided with the whole bits by the core layer error correction decoder of the core layer BICM decoder 520, and extracts enhanced layer symbols from the output signal of the time deinterleaver 510. In this case, the enhanced layer symbol extractor 530 includes a buffer, a subtracter, a core layer symbol mapper, and a core layer bit interleaver. The buffer stores the output signal of the time deinterleaver 510 or de-normalizer 1010.
The core layer bit interleaver receives the whole bits (information bits + parity bits) of the core layer BICM decoder, and performs the same core layer bit interleaving as the transmitter. The core layer symbol mapper generates core layer symbols, which are the same as those of the transmitter, from the interleaved signal. The subtracter obtains enhanced layer symbols by subtracting the output signal of the core layer symbol mapper from the signal stored in the buffer, and transfers the enhanced layer symbols to the de-injection level controller 1020. In this case, the core layer bit interleaver and core layer symbol mapper included in the enhanced layer symbol extractor 530 may be the same as the core layer bit interleaver and the core layer symbol mapper illustrated in FIG. 9. The de-injection level controller 1020 receives the enhanced layer symbols, and increases the power of the input signal by the amount by which the injection level controller of the transmitter decreased it. That is, the de-injection level controller 1020 amplifies the input signal, and provides the amplified input signal to the enhanced layer BICM decoder 540. The enhanced layer BICM decoder 540 receives the enhanced layer symbols whose power has been increased by the de-injection level controller 1020, and restores the enhanced layer data. In this case, the enhanced layer BICM decoder 540 may include an enhanced layer symbol demapper, an enhanced layer bit deinterleaver, and an enhanced layer error correction decoder. The enhanced layer symbol demapper calculates LLR values related to the enhanced layer symbols, the enhanced layer bit deinterleaver strongly mixes the calculated LLR values with burst errors, and the enhanced layer error correction decoder corrects errors occurring over the channel. In particular, the enhanced layer error correction decoder may output only information bits, or may output the whole bits in which information bits have been combined with parity bits.
In this case, the enhanced layer error correction decoder may output only information bits as enhanced layer data, and may output the whole bits in which information bits have been mixed with parity bits to the extension layer symbol extractor 650. The extension layer symbol extractor 650 receives the whole bits from the enhanced layer error correction decoder of the enhanced layer BICM decoder 540, and extracts extension layer symbols from the output signal of the de-injection level controller 1020. In this case, the de-injection level controller 1020 may amplify the power of the output signal of the subtracter of the enhanced layer symbol extractor 530. In this case, the extension layer symbol extractor 650 includes a buffer, a subtracter, an enhanced layer symbol mapper, and an enhanced layer bit interleaver. The buffer stores the output signal of the de-injection level controller 1020. The enhanced layer bit interleaver receives the whole bits (information bits + parity bits) of the enhanced layer BICM decoder, and performs enhanced layer bit interleaving that is the same as that of the transmitter. The enhanced layer symbol mapper generates enhanced layer symbols, which are the same as those of the transmitter, from the interleaved signal. The subtracter obtains extension layer symbols by subtracting the output signal of the enhanced layer symbol mapper from the signal stored in the buffer, and transfers the extension layer symbols to the extension layer BICM decoder 660. In this case, the enhanced layer bit interleaver and the enhanced layer symbol mapper included in the extension layer symbol extractor 650 are the same as the enhanced layer bit interleaver and the enhanced layer symbol mapper illustrated in FIG. 9. The de-injection level controller 1150 increases power by the amount by which the injection level controller of the corresponding layer decreased it at the transmitter.
The extension layer BICM decoder 660 receives the extension layer symbols whose power has been increased by the de-injection level controller 1150, and restores extension layer data. In this case, the extension layer BICM decoder 660 may include an extension layer symbol demapper, an extension layer bit deinterleaver, and an extension layer error correction decoder. The extension layer symbol demapper calculates LLR values related to the extension layer symbols, the extension layer bit deinterleaver strongly mixes the calculated LLR values with burst errors, and the extension layer error correction decoder corrects errors occurring over the channel. In particular, each of the extension layer symbol extractor and the extension layer BICM decoder may include two or more extractors or decoders if two or more extension layers are present. That is, in the example illustrated in FIG. 11, the extension layer error correction decoder of the extension layer BICM decoder 660 may output only information bits, or may output the whole bits in which information bits have been combined with parity bits. In this case, the extension layer error correction decoder outputs only information bits as extension layer data, and may output the whole bits in which information bits have been mixed with parity bits to the subsequent extension layer symbol extractor 670. The configuration and operation of the extension layer symbol extractor 670, the extension layer BICM decoder 680 and the de-injection level controller 1170 can be easily understood from the configuration and operation of the above-described extension layer symbol extractor 650, extension layer BICM decoder 660 and de-injection level controller 1150. A lower one of the de-injection level controllers 1020, 1150 and 1170 illustrated in FIG. 11 may correspond to a larger increase in power.
That is, the de-injection level controller 1150 may increase power more than the de-injection level controller 1020, and the de-injection level controller 1170 may increase power more than the de-injection level controller 1150. It can be seen that the signal demultiplexer illustrated in FIG. 11 restores core layer data first, restores enhanced layer data using the cancellation of core layer symbols, and restores extension layer data using the cancellation of enhanced layer symbols. Two or more extension layers may be provided, in which case restoration starts with the extension layer combined at the higher power level. FIG. 12 is an operation flowchart illustrating a signal multiplexing method according to an embodiment of the present invention. Referring to FIG. 12, in the signal multiplexing method according to the present embodiment, BICM is applied to core layer data at step S1210. Furthermore, in the signal multiplexing method according to the present embodiment, BICM is applied to enhanced layer data at step S1220. The BICM applied at step S1220 may be different from the BICM applied at step S1210. In this case, the BICM applied at step S1220 may be less robust than the BICM applied at step S1210. In this case, the bit rate of the BICM applied at step S1220 may be higher than that of the BICM applied at step S1210. In this case, the enhanced layer signal may correspond to the enhanced layer data that is restored based on cancellation corresponding to the restoration of the core layer data corresponding to the core layer signal. Furthermore, in the signal multiplexing method according to the present embodiment, a power-reduced enhanced layer signal is generated by reducing the power of the enhanced layer signal at step S1230. In this case, at step S1230, the injection level may be changed from 3.0 dB to 10.0 dB in steps of 0.5 dB.
Furthermore, in the signal multiplexing method according to the present embodiment, a multiplexed signal is generated by combining the core layer signal and the power-reduced enhanced layer signal at step S1240. That is, at step S1240, the core layer signal and the enhanced layer signal are combined at different power levels so that the power level of the enhanced layer signal is lower than the power level of the core layer signal. In this case, at step S1240, one or more extension layer signals having lower power levels than the core layer signal and the enhanced layer signal may be combined along with the core layer signal and the enhanced layer signal. Furthermore, in the signal multiplexing method according to the present embodiment, the power of the multiplexed signal is reduced at step S1250. In this case, at step S1250, the power of the multiplexed signal may be reduced to the power of the core layer signal. In this case, at step S1250, the power of the multiplexed signal may be reduced by the amount by which the power was increased at step S1240. Furthermore, in the signal multiplexing method according to the present embodiment, interleaving applied to both the core layer signal and the enhanced layer signal is performed at step S1260. The signal multiplexing method illustrated in FIG. 12 may correspond to steps S240 and S250 illustrated in FIG. 2. As described above, with respect to the signal multiplexing apparatus and method according to the present invention, the configurations and operations of the above-described embodiments are not restrictively applied; rather, all or some of the embodiments may be selectively combined and configured so that the embodiments may be modified in various ways.
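Steps S1230 through S1250 can be condensed into a short sketch (our own naming, not the patent's; the symbols are toy QPSK-like points, and the α/β relations are those implied by Table 1):

```python
import math

def multiplex(core_syms, enh_syms, injection_db):
    alpha = 10 ** (-injection_db / 20)    # S1230: reduce enhanced layer power
    beta = 1 / math.sqrt(1 + alpha ** 2)  # normalizing factor for S1250
    # S1240: combine at different power levels; S1250: reduce the combined
    # signal's power back to the core layer's power level.
    return [beta * (c + alpha * e) for c, e in zip(core_syms, enh_syms)]

core = [1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j]
enh = [1 + 1j, 1 + 1j, 1 + 1j, 1 + 1j]
out = multiplex(core, enh, 3.0)

# Average output power matches the core layer's average power (2.0 here).
print(round(sum(abs(x) ** 2 for x in out) / len(out), 6))  # 2.0
```

Step S1260 (the shared time interleaving) would then operate on the normalized symbol stream as a whole, which is why a single interleaver suffices for both layers.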
11943091
DETAILED DESCRIPTION Overview Presented herein are techniques to perform call failure diagnostics in a more efficient and timely manner. A method includes receiving, at a network device, an indication of calls-of-interest, detecting, at the network device, a failure, including a total failure or, e.g., poor quality audio, of one of the calls-of-interest, triggering, in response to the detecting, at the network device, diagnostics data analysis of data associated with the failure of one of the calls-of-interest, determining, based on the diagnostics data analysis, a cause of the failure of the one of the calls-of-interest, and notifying, by the network device, a management system of the cause of the failure of the one of the calls-of-interest and of recent configuration changes on the network device that are related to the cause of the failure of the one of the calls-of-interest. In another embodiment, a device is provided. The device includes an interface configured to enable network communications, a memory, and one or more processors coupled to the interface and the memory, and configured to: receive an indication of calls-of-interest, detect a failure, including a total failure or, e.g., poor quality audio, of one of the calls-of-interest, trigger, in response to detecting the failure of one of the calls-of-interest, diagnostics data analysis of data associated with the failure of one of the calls-of-interest, determine, based on the diagnostics data analysis, a cause of the failure of the one of the calls-of-interest, and notify a management system of the cause of the failure of the one of the calls-of-interest and of recent configuration changes that are related to the cause of the failure of the one of the calls-of-interest. 
Example Embodiments The embodiments described herein provide a method to distribute the execution of diagnostics (sometimes referred to in the field as "Intellectual Capital") that detect a call routing failure, including a total failure or, e.g., poor quality audio, etc. The diagnostics analyze the root cause(s) of the failure and are performed in real time at collaboration network devices in order to minimize the need to send diagnostics and operational data to a centralized cloud function for analysis. In one possible implementation, peer network devices may exchange their diagnostics execution capabilities, and respective peer network devices may trigger the diagnostics execution on a peer network device on demand. The described methodology also includes approaches to keep the diagnostics (or diagnostics logic) up to date on the respective network devices. The diagnostics, though described herein in the context of call failures, may also be leveraged to run diagnostics related, more generally, to, e.g., routing, security problems in the network, etc. While the description below focuses on "calls-of-interest," those skilled in the art will appreciate that "calls" may be considered more broadly to include more general "connections" including, e.g., call control, an audio stream, a video stream, a TCP/UDP connection, etc. FIG. 1 shows several components of a calling system network topology 10, including route policy analyzer logic 112 hosted by an access cloud 110, and diagnostics logic 180 hosted on at least one of a call controller 130 and at least one session border controller 142, 144, 146, according to an example embodiment. More specifically, access cloud 110 may include one or more computing devices, servers, etc. configured to enable a network administrator to sign up for cloud calling services and perform other administrative tasks.
Access cloud 110 may more generally provide asset inventory, adoption insights, proactive insights such as lists of security advisories, field notices and bugs impacting the assets, best practices, software recommendation insights, and may perform solution-level troubleshooting, etc. Access cloud 110 is in communication with collaboration management cloud 120 (i.e., a management system), which may also include one or more computing devices, servers, etc. Collaboration management cloud 120 is configured to include, e.g., a user interface (not shown) that enables a customer, e.g., an enterprise or network administrator, to control telephone/calling functionality within a given enterprise 50. In some possible implementations, access cloud 110 and collaboration management cloud 120 may be hosted in a single cloud environment. Collaboration management cloud 120 is in communication with a call controller 130, which is configured to physically connect a hardwired telephone set 135, or perhaps to wirelessly connect with one or more wireless handsets (e.g., a mobile or smart phone, not shown), to, e.g., switch incoming and outgoing calls to/from such a telephone set 135 or wireless handset, and to provide calling functions such as hold, forward, conferencing, etc. Call controller 130 may be in communication with a session border controller (SBC) 142, which may operate as, e.g., a gateway of sorts to other networks or cloud infrastructure, such as collaboration cloud calling 150, which may be part of collaboration management cloud 120, a public switched telephone network (PSTN)/session initiation protocol (SIP) network (PSTN/SIP network 160), etc. Each of collaboration cloud calling 150 and PSTN/SIP network 160 is in communication with its own session border controller, namely SBC 144 and SBC 146, to provide connectivity to other networks, and ultimately to a distal endpoint. As shown in the figure, each of call controller 130, SBC 142, SBC 144, and SBC 146 may host diagnostics logic 180.
In the topology shown inFIG.1, call controller130, SBC142, SBC144, and SBC146have bidirectional connectivity to collaboration management cloud120and/or access cloud110. This network connectivity is used to provision features and send telemetry data to collaboration management cloud120. The telemetry data includes information such as call detail records (CDRs) (details of calls placed in the network), feature usage, critical events, etc. The embodiments described herein leverage this bi-directional connectivity to distribute updates for diagnostics logic180, activate execution of diagnostics logic180, and receive execution or diagnostics results from diagnostics logic180. With continued reference toFIG.1, and more specifically, access cloud110is in communication with collaboration management cloud120and receives, at101, call detail records (CDRs) periodically (e.g., every few minutes or hours). This information is used, by route policy analyzer logic112, to identify a list of “calls of interest” for selected users (e.g., enterprise customers). Such a list could be determined based on business needs such as top-ten frequently called numbers, Toll-Free numbers, calls placed by executives, calls to support teams, etc., and may be configurable by the enterprise administrator. As noted, network devices both on-premises and in the cloud, namely call controller130, SBC142, SBC144, and SBC146, are configured to run diagnostics logic180natively. In an embodiment, each such device comes with “pre-packaged” diagnostics logic180that is part of the device's underlying software or hardware. As the device is upgraded to, e.g., a new software version or components are upgraded, the device also receives a new version of diagnostics logic180that is updated to take into consideration any new features introduced in the device software version or hardware upgrade. 
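As one illustration of how route policy analyzer logic180's counterpart, route policy analyzer logic112, might derive a calls-of-interest list from periodic CDR batches, the following Python sketch selects the top-N frequently called numbers together with Toll-Free numbers. The CDR field names and Toll-Free prefixes are assumptions for illustration only; real CDR schemas and numbering plans vary by platform and region:

```python
from collections import Counter

def calls_of_interest(cdrs, top_n=10):
    # Each CDR is assumed to be a dict with "calling" and "called" keys.
    freq = Counter(cdr["called"] for cdr in cdrs)
    # Top-N most frequently called numbers in this CDR batch.
    top_called = {num for num, _ in freq.most_common(top_n)}
    # Toll-Free prefixes below are illustrative North American examples.
    toll_free = {num for num in freq
                 if num.startswith(("1800", "1888", "1877"))}
    return top_called | toll_free
```

An enterprise administrator could further union in executive numbers or support-team numbers, consistent with the configurability described above.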
This avoids a situation in which an older version of diagnostics logic180is run on the device but is unaware of relevant features recently introduced in the device. A good example is call routing failure analysis, which takes into consideration the call routing features available within the device in which diagnostics logic180is run. If the call routing failure analysis operating as part of diagnostics logic180is unaware of a new routing feature and that feature has been enabled and configured, then the diagnostics analysis result may be inaccurate. In an embodiment, diagnostics logic180on each device is either automatically enabled or manually activated by an enterprise administrator via, e.g., route policy analyzer logic112. To initiate execution of diagnostics logic180, at102, an activation request (Activate Call Failure Analysis {Calls of Interest}) is sent from access cloud110to collaboration management cloud120along with input parameters. In this example, the list of {Calls of Interest} (namely Calling numbers, Called numbers, and Device ID) is sent as the input parameters. The “Device ID” may, in one implementation, represent a collaboration endpoint (not shown) such that the system will, for example, monitor all calls from the collaboration endpoint used by an executive. Also, calls of interest could include one or more of these three parameters, but it is not required to always specify all three parameters. Based on the Device ID parameter, collaboration management cloud120identifies the set of network devices (e.g., call controller130, SBC142, SBC144, and/or SBC146) in which execution of diagnostics logic180is to occur, and may check whether the identified devices in the network have appropriate diagnostics execution capabilities and then, at103, activates diagnostics logic180in those devices so that they thereafter detect call failures and perform failure analysis. 
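The activation request at102can be pictured as a small payload carrying any subset of the three calls-of-interest parameters. A minimal sketch follows; the field names are hypothetical, as the disclosure does not fix a wire format:

```python
def build_activation_request(calling=None, called=None, device_id=None):
    # Any subset of the three parameters may be supplied, but at least
    # one is needed to scope the monitoring.
    if not any((calling, called, device_id)):
        raise ValueError("specify at least one call-of-interest parameter")
    calls = {}
    if calling:
        calls["callingNumbers"] = list(calling)
    if called:
        calls["calledNumbers"] = list(called)
    if device_id:
        calls["deviceID"] = device_id
    return {"action": "ActivateCallFailureAnalysis", "callsOfInterest": calls}
```

For example, a request scoped only by Device ID and a Toll-Free called number omits the calling-number parameter entirely, consistent with the text above.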
The activated diagnostics logic180in, e.g., call controller130starts to monitor calls and detects call failures specifically for the calls of interest provided as input. Diagnostics logic180may use several sources to perform its analysis, including system logs and CDR data to detect call failures, and analyze call signaling in the local logs to determine the root cause of the failure. To enable end-to-end call failure detection and analysis, diagnostics logic180, as shown inFIG.1, may also be deployed and enabled in edge devices (e.g., SBCs144,146) that are in communication with collaboration cloud calling150and PSTN/SIP network160. Diagnostics logic180takes different actions depending on the call disconnect reason or location, as described in the following scenarios, beginning first with the scenario illustrated byFIG.2. FIG.2illustrates a first scenario in which a call fails as a result of a local failure and how call controller130provides diagnostics results to collaboration management cloud120, according to an example embodiment. More specifically, at201a call that is within the calls of interest fails. Diagnostics logic180executing on call controller130detects the failure. As a result, at202, diagnostics logic180triggers diagnostics data analysis, and identifies that the source of the call disconnect is local and determines, e.g., that a “lack of route” is the root cause. Diagnostics logic180also analyzes recent and relevant configuration changes (e.g., deletion of a route pattern in call controller130). At203, diagnostics logic180sends diagnostics results220to the collaboration management cloud120. Diagnostics results220may include call details such as calling, called number, device, user that experienced the problem, etc., along with the recent configuration changes that contributed to the failure. 
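The local-failure flow just described (monitor calls-of-interest, detect a failure at201, analyze at202, report at203) can be sketched as a small pipeline. The callables stand in for device-specific behavior; this is an illustrative assumption, not an API prescribed by the disclosure:

```python
def run_diagnostics(calls_of_interest, detect_failure, analyze, notify):
    for call in calls_of_interest:           # monitor the calls-of-interest
        if detect_failure(call):             # 201: failure detected
            cause, changes = analyze(call)   # 202: root cause + config changes
            notify(call, cause, changes)     # 203: report to the management cloud
```

A device would supply log- and CDR-based implementations for `detect_failure` and `analyze`, and a telemetry sender for `notify`.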
In an embodiment, collaboration management cloud120, at204, passes diagnostics results220to access cloud110, which can take further actions such as informing an administrator through a visual display, suggesting remediation actions, or even triggering a configuration rollback after obtaining administrator approval. As an example, diagnostics results220may include the following information:

{
  Issue ID
  Issue type: Call failure
  Issue details {
    calling number: 5001
    called number: xxx193926009
    device ID: SEP1234abcd
    User ID: user1
  }
  Reason: No route
  Source: 10.1.1.21 (i.e., call controller130)
  Recent changes: {
    Config changes: {
      Route pattern 91.X
      Action: Deleted
      Timestamp: <date, time>
      UserID: <UserID>
    }
  }
}

FIG.3illustrates a second scenario in which a call fails as a result of a next hop failure and how session border controller142provides diagnostics results220to call controller130, which then passes the diagnostics results220to collaboration management cloud120, according to an example embodiment. More specifically, at301a call that is within the calls of interest fails. Diagnostics logic180executing on call controller130detects the failure. As a result, at302, diagnostics logic180triggers diagnostics data analysis, and identifies that the source of the call disconnect is a next hop device. Thus, at303, diagnostics logic180in call controller130triggers diagnostics execution at the next hop by requesting the peer device (SBC142) to run diagnostics of a specific type (e.g., call failure analysis), specifying a minimum version of diagnostics, and passing the input parameters such as call-specific details (calling, called number, session-ID, time of occurrence, etc.). SBC142receives and verifies the diagnostics task execution request. As appropriate, diagnostics logic180executing on SBC142downloads a newer version of the diagnostics if the current version on SBC142is lower than the minimum version specified in the request. 
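A minimal sketch of a data structure mirroring diagnostics results220, with field names chosen to match the example listing above (the class and attribute names are hypothetical, not part of the disclosure):

```python
from dataclasses import dataclass, field

@dataclass
class ConfigChange:
    item: str        # e.g. "Route pattern 91.X"
    action: str      # e.g. "Deleted"
    timestamp: str   # e.g. "<date, time>"
    user_id: str     # e.g. "<UserID>"

@dataclass
class DiagnosticsResult:
    calling_number: str
    called_number: str
    device_id: str
    user_id: str
    reason: str      # e.g. "No route"
    source: str      # address of the device that found the root cause
    recent_changes: list = field(default_factory=list)
```

The example results220above would populate an instance with reason "No route", source 10.1.1.21, and a single "Deleted" configuration change.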
Once verified, diagnostics logic180on SBC142, at304, executes the diagnostics task which analyzes the local diagnostics data, determines that the failure is due to “No route” and identifies relevant configuration changes (Route policy 919 deleted). At305, SBC142sends diagnostics results320to the requesting device, namely, in this case, call controller130. It is noted that the diagnostics task execution request, at303, and response, at305, can be exchanged across an existing secure call signaling connection between the network peers. Diagnostics logic180in call controller130then formats the diagnostics results320as appropriate. At306, call controller130sends diagnostics results320to collaboration management cloud120. Diagnostics results320may include call details such as calling, called number, device, user that experienced the problem, etc., along with the recent configuration changes that contributed to the failure. In an embodiment, collaboration management cloud120, at307, passes diagnostics results320to access cloud110, which can take further actions such as informing an administrator through a visual display, suggesting remediation actions, or even triggering a configuration rollback after obtaining administrator approval. As an example, diagnostics results320may include the following information:

{
  . . .
  Reason: No route
  Source: 10.1.3.21 (SBC142)
  Recent changes: {
    Config changes: {
      Route policy 919
      Action: Deleted
      Timestamp: <date, time>
      UserID: <UserID>
    }
  }
}

FIG.4illustrates a third scenario in which a call fails as a result of a remote hop failure and how session border controller146provides diagnostics results to a session border controller142, which passes the diagnostics results to call controller130, which then passes diagnostics results420to collaboration management cloud120, according to an example embodiment. More specifically, at401a call that is within the calls of interest fails. Diagnostics logic180executing on call controller130detects the failure. 
As a result, at402, diagnostics logic180triggers diagnostics data analysis, and identifies that the source of the call disconnect is a next hop device. Thus, at403, diagnostics logic180in call controller130triggers diagnostics execution at the next hop by requesting the peer device (SBC142) to run diagnostics of a specific type (e.g., call failure analysis), specifying a minimum version of diagnostics, and passing the input parameters such as call-specific details (calling, called number, session-ID, time of occurrence, etc.). SBC142receives and verifies the diagnostics task execution request. As appropriate, diagnostics logic180executing on SBC142downloads a newer version of the diagnostics if the current version on SBC142is lower than the minimum version specified in the request. Once verified, diagnostics logic180on SBC142, at404, executes the diagnostics task which analyzes the local diagnostics data and determines that the failure is at its own next hop. Accordingly, at405, diagnostics logic180on SBC142triggers diagnostics execution at SBC146by sending a diagnostics trigger message and passing the input parameters. Diagnostics logic180at SBC146receives and verifies the diagnostics task execution request. As appropriate, diagnostics logic180executing on SBC146downloads a newer version of the diagnostics if the current version on SBC146is lower than the minimum version specified in the request. Once verified, diagnostics logic180on SBC146, at406, executes the diagnostics task which analyzes the local diagnostics data, determines that the failure is due to “No route” and identifies relevant configuration changes (Route policy 919 deleted). At407, SBC146sends diagnostics results420to the requesting device, namely, in this case, SBC142, which then, at408, sends diagnostics results420to call controller130. 
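The hop-by-hop delegation in the second and third scenarios can be sketched as a recursive walk: each device runs its local analysis and, if the disconnect originated downstream, forwards the request to its next hop, which may delegate further. The device interface assumed here (`local_diagnosis`, `recent_changes`, `next_hop`, `addr`) is a hypothetical stand-in for the peer messaging described above:

```python
def analyze(device, call):
    # local_diagnosis is assumed to return (reason, is_local): the root
    # cause if found locally, or an indication that it lies downstream.
    reason, is_local = device.local_diagnosis(call)
    if is_local:
        # Root cause found here: report it like diagnostics results 220/320.
        return {"source": device.addr, "reason": reason,
                "recent_changes": device.recent_changes(call)}
    # Disconnect originated downstream: delegate to the next hop peer,
    # which may in turn delegate further (the FIG.4 scenario).
    return analyze(device.next_hop, call)
```

In scenario three, the walk passes from call controller130through SBC142to SBC146, and the result propagates back along the same chain.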
It is noted that the diagnostics task execution requests at403,405and responses at407,408can be exchanged across an existing secure call signaling connection between the network peers. Diagnostics logic180in call controller130then formats the diagnostics results420as appropriate. At409, call controller130sends diagnostics results420to collaboration management cloud120. Diagnostics results420may include call details such as calling, called number, device, user that experienced the problem, etc., along with the recent configuration changes that contributed to the failure. In an embodiment, collaboration management cloud120, at410, passes diagnostics results420to access cloud110, which can take further actions such as informing an administrator through a visual display, suggesting remediation actions, or even triggering a configuration rollback with optional administrator approval. Diagnostics results420may be similar to diagnostics results320. It is noted that a similar distributed diagnostics execution flow can occur if the source of the call disconnect is present within collaboration cloud calling150. In such a case, SBC142would trigger diagnostics execution at SBC144. FIG.5illustrates a diagnostics capabilities exchange between call controller130and SBC142, according to an example embodiment. In one possible implementation, call controller130, at510, may send a SIP OPTIONS request to SBC142to discover the media and diagnostics capabilities of that peer device. SBC142, in response at512, sends its diagnostics capabilities details (such as names and versions) in the form of a diagnostics list. SBC142at514may likewise send a SIP OPTIONS request to call controller130, and receive, at516, a similar diagnostics list. An example of such an exchange is set forth below:

Received:
OPTIONS sip:cube1.example.com:5060 SIP/2.0
Via: SIP/2.0/TCP 172.18.110.205:5060;branch=z9hG4bK7f3602e9b1
From: <sip:172.18.110.205>;tag=791723988
To: <sip:cube1.example.com>
Date: Sun, 24 Jun. 2022 16:02:34 GMT
Call-ID: [email protected]
User-Agent: CallController11.5
CSeq: 101 OPTIONS
Contact: <sip:172.1.1100.205:5060;transport=tcp>
Accept: application/sdp, application/json
Max-Forwards: 0
Content-Length: 0

39396538: Jun 24 16:02:34.087 UTC: //-1/xxxxxxxxxxxx/SIP/Msg/ccsipDisplayMsg:
Sent:
SIP/2.0 200 OK
Via: SIP/2.0/TCP 172.18.110.205:5060;branch=z9hG4bK7f3602e9b1
From: <sip:172.18.110.205>;tag=791723988
To: <sip:cube1.example.com>;tag=A7928DE5-25A5
Date: Sun, 24 Jun. 2022 16:02:34 GMT
Call-ID: [email protected]
Server: SIPGateway/IOS-16.6.1
CSeq: 101 OPTIONS
Allow: INVITE, OPTIONS, BYE, CANCEL, ACK, PRACK, UPDATE, REFER, SUBSCRIBE, NOTIFY, INFO, REGISTER
Allow-Events: telephone-event
Accept: application/sdp, application/json
Supported: timer, resource-priority, replaces, sdp-anat
Content-Type: multipart/mixed;boundary="boundary1"
Content-Length: 720

--boundary1
Content-type: application/sdp

v=0
o=SystemsSIP-GW-UserAgent 3801 2582 IN IP4 172.18.110.203
s=SIP Call
c=IN IP4 172.18.110.203
t=0 0
m=audio 0 RTP/AVP 18 0 8 9 4 2 15 3
c=IN IP4 172.18.110.203
m=image 0 udptl t38
c=IN IP4 172.18.110.203
a=T38FaxVersion:0
a=T38MaxBitRate:9600
a=T38FaxRateManagement:transferredTCF
a=T38FaxMaxBuffer:200
a=T38FaxMaxDatagram:320
a=T38FaxUdpEC:t38UDPRedundancy
--boundary1
Content-type: application/json

{
  "device": {
    "model": "ISR4451",
    "version": "IOSXE-16.6.1",
    "vendor": "Cisco"
  },
  "diagnosticsList": [
    {"name": "call-failure-analysis", "version": "1.0"},
    {"name": "routing-failure-analysis", "version": "1.0"}
  ]
}

FIG.6illustrates an exchange between call controller130and SBC142to trigger diagnostics on SBC142, according to an example embodiment. At610, call controller130sends an out-of-dialog SIP REFER request with the Refer-To header set to https://localhost/diagnostics/{diagnosticsTaskName}. The message body of the SIP REFER includes the diagnostics task input parameters and the minimum version of diagnostics logic180. 
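The application/json part of the OPTIONS exchange can be consumed programmatically. A minimal sketch that extracts the advertised diagnostics names and versions, assuming the JSON structure shown in the exchange above:

```python
import json

def parse_diagnostics_capabilities(body):
    # body: the application/json part of the multipart OPTIONS response.
    doc = json.loads(body)
    return {d["name"]: d["version"] for d in doc.get("diagnosticsList", [])}
```

A peer could compare this mapping against the diagnostics type and minimum version it intends to request before delegating a task.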
SBC142verifies and validates the SIP REFER request and, at612, sends 200 OK if it is valid and from a trusted next hop peer. At614, SBC142sends a SIP NOTIFY message with “100 Trying” in the message body to indicate that it is working on processing the request, to which, at616, call controller130responds with a 200 OK. After the diagnostics execution is complete, SBC142, at618, sends another SIP NOTIFY request with a message body containing “200 OK” indicating that the request processing is complete, along with the diagnostics results in, e.g., JSON format. Call controller130may respond, at620, with a corresponding 200 OK. If there were any errors while issuing the diagnostics results, relevant response codes would be returned. For example, if the diagnostics results were not present or unavailable on the device, a SIP 404 message may be returned. If the diagnostics execution engine of SBC142fails or encounters a fault in the diagnostics, a SIP “500 Internal Server Error” could indicate the failure. Should the IC execution system be offline or completely faulted, a SIP 503 “Service Unavailable” message would logically be returned in response to the IC request. FIG.7shows a series of operations that may be executed by diagnostics logic180, according to an example embodiment. At710, an operation includes receiving, at a network device, an indication of calls-of-interest. At712, an operation includes detecting, at the network device, a failure of one of the calls-of-interest. At714, an operation includes triggering, in response to the detecting, at the network device, diagnostics data analysis of data associated with the failure of the one of the calls-of-interest. At716, an operation includes determining, based on the diagnostics data analysis, a cause of the failure of the one of the calls-of-interest. 
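The error handling described for the REFER/NOTIFY flow amounts to a small mapping from diagnostics-engine outcomes to SIP response codes. The outcome labels below are illustrative stand-ins, not protocol elements; the SIP codes follow the description above:

```python
def sip_status_for(outcome):
    # Outcome labels are hypothetical names for the engine states
    # described in the text.
    table = {
        "results_ready": (200, "OK"),
        "results_missing": (404, "Not Found"),
        "engine_fault": (500, "Internal Server Error"),
        "engine_offline": (503, "Service Unavailable"),
    }
    return table[outcome]
```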
And, at718, an operation includes notifying, by the network device, a management system of the cause of the failure of the one of the calls-of-interest and of recent configuration changes on the network device that are related to the cause of the failure of the one of the calls-of-interest. FIG.8is a block diagram of a computing device, server, etc., that may be configured to host route policy analyzer logic or diagnostics logic and perform the techniques described herein, according to an example embodiment. In various embodiments, a computing device, such as computing device800or any combination of computing devices800, may be configured as any entity/entities as discussed for the techniques depicted in connection withFIGS.1-7in order to perform operations of the various techniques discussed herein. In at least one embodiment, the computing device800may include one or more processor(s)802, one or more memory element(s)804, storage806, a bus808, one or more network processor unit(s)810interconnected with one or more network input/output (I/O) interface(s)812, one or more I/O interface(s)814, and control logic820(which could include, for example, route policy analyzer logic112or diagnostics logic180). In various embodiments, instructions associated with logic for computing device800can overlap in any manner and are not limited to the specific allocation of instructions and/or operations described herein. In at least one embodiment, processor(s)802is/are at least one hardware processor configured to execute various tasks, operations and/or functions for computing device800as described herein according to software and/or instructions configured for computing device800. Processor(s)802(e.g., a hardware processor) can execute any type of instructions associated with data to achieve the operations detailed herein. In one example, processor(s)802can transform an element or an article (e.g., data, information) from one state or thing to another state or thing. 
Any of potential processing elements, microprocessors, digital signal processor, baseband signal processor, modem, PHY, controllers, systems, managers, logic, and/or machines described herein can be construed as being encompassed within the broad term ‘processor’. In at least one embodiment, memory element(s)804and/or storage806is/are configured to store data, information, software, and/or instructions associated with computing device800, and/or logic configured for memory element(s)804and/or storage806. For example, any logic described herein (e.g., control logic820) can, in various embodiments, be stored for computing device800using any combination of memory element(s)804and/or storage806. Note that in some embodiments, storage806can be consolidated with memory element(s)804(or vice versa) or can overlap/exist in any other suitable manner. In at least one embodiment, bus808can be configured as an interface that enables one or more elements of computing device800to communicate in order to exchange information and/or data. Bus808can be implemented with any architecture designed for passing control, data and/or information between processors, memory elements/storage, peripheral devices, and/or any other hardware and/or software components that may be configured for computing device800. In at least one embodiment, bus808may be implemented as a fast kernel-hosted interconnect, potentially using shared memory between processes (e.g., logic), which can enable efficient communication paths between the processes. In various embodiments, network processor unit(s)810may enable communication between computing device800and other systems, entities, etc., via network I/O interface(s)812(wired and/or wireless) to facilitate operations discussed for various embodiments described herein. 
In various embodiments, network processor unit(s)810can be configured as a combination of hardware and/or software, such as one or more Ethernet driver(s) and/or controller(s) or interface cards, Fibre Channel (e.g., optical) driver(s) and/or controller(s), wireless receivers/transmitters/transceivers, baseband processor(s)/modem(s), and/or other similar network interface driver(s) and/or controller(s) now known or hereafter developed to enable communications between computing device800and other systems, entities, etc. to facilitate operations for various embodiments described herein. In various embodiments, network I/O interface(s)812can be configured as one or more Ethernet port(s), Fibre Channel ports, any other I/O port(s), and/or antenna(s)/antenna array(s) now known or hereafter developed. Thus, the network processor unit(s)810and/or network I/O interface(s)812may include suitable interfaces for receiving, transmitting, and/or otherwise communicating data and/or information in a network environment. I/O interface(s)814allow for input and output of data and/or information with other entities that may be connected to computing device800. For example, I/O interface(s)814may provide a connection to external devices such as a keyboard, keypad, a touch screen, and/or any other suitable input and/or output device now known or hereafter developed. In some instances, external devices can also include portable computer readable (non-transitory) storage media such as database systems, thumb drives, portable optical or magnetic disks, and memory cards. In still some instances, external devices can be a mechanism to display data to a user, such as, for example, a computer monitor, a display screen, or the like. 
In various embodiments, control logic820can include instructions that, when executed, cause processor(s)802to perform operations, which can include, but not be limited to, providing overall control operations of computing device; interacting with other entities, systems, etc. described herein; maintaining and/or interacting with stored data, information, parameters, etc. (e.g., memory element(s), storage, data structures, databases, tables, etc.); combinations thereof; and/or the like to facilitate various operations for embodiments described herein. The programs described herein (e.g., control logic820) may be identified based upon application(s) for which they are implemented in a specific embodiment. However, it should be appreciated that any particular program nomenclature herein is used merely for convenience; thus, embodiments herein should not be limited to use(s) solely described in any specific application(s) identified and/or implied by such nomenclature. In various embodiments, entities as described herein may store data/information in any suitable volatile and/or non-volatile memory item (e.g., magnetic hard disk drive, solid state hard drive, semiconductor storage device, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM), application specific integrated circuit (ASIC), etc.), software, logic (fixed logic, hardware logic, programmable logic, analog logic, digital logic), hardware, and/or in any other suitable component, device, element, and/or object as may be appropriate. Any of the memory items discussed herein should be construed as being encompassed within the broad term ‘memory element’. Data/information being tracked and/or sent to one or more entities as discussed herein could be provided in any database, table, register, list, cache, storage, and/or storage structure: all of which can be referenced at any suitable timeframe. 
Any such storage options may also be included within the broad term ‘memory element’ as used herein. Note that in certain example implementations, operations as set forth herein may be implemented by logic encoded in one or more tangible media that is capable of storing instructions and/or digital information and may be inclusive of non-transitory tangible media and/or non-transitory computer readable storage media (e.g., embedded logic provided in: an ASIC, digital signal processing (DSP) instructions, software [potentially inclusive of object code and source code], etc.) for execution by one or more processor(s), and/or other similar machine, etc. Generally, memory element(s)804and/or storage806can store data, software, code, instructions (e.g., processor instructions), logic, parameters, combinations thereof, and/or the like used for operations described herein. This includes memory element(s)804and/or storage806being able to store data, software, code, instructions (e.g., processor instructions), logic, parameters, combinations thereof, or the like that are executed to carry out operations in accordance with teachings of the present disclosure. In some instances, software of the present embodiments may be available via a non-transitory computer useable medium (e.g., magnetic or optical mediums, magneto-optic mediums, CD-ROM, DVD, memory devices, etc.) of a stationary or portable program product apparatus, downloadable file(s), file wrapper(s), object(s), package(s), container(s), and/or the like. In some instances, non-transitory computer readable storage media may also be removable. For example, a removable hard drive may be used for memory/storage in some implementations. Other examples may include optical and magnetic disks, thumb drives, and smart cards that can be inserted and/or otherwise connected to a computing device for transfer onto another computer readable storage medium. 
VARIATIONS AND IMPLEMENTATIONS

Embodiments described herein may include one or more networks, which can represent a series of points and/or network elements of interconnected communication paths for receiving and/or transmitting messages (e.g., packets of information) that propagate through the one or more networks. These network elements offer communicative interfaces that facilitate communications between the network elements. A network can include any number of hardware and/or software elements coupled to (and in communication with) each other through a communication medium. Such networks can include, but are not limited to, any local area network (LAN), virtual LAN (VLAN), wide area network (WAN) (e.g., the Internet), software defined WAN (SD-WAN), wireless local area (WLA) access network, wireless wide area (WWA) access network, metropolitan area network (MAN), Intranet, Extranet, virtual private network (VPN), Low Power Network (LPN), Low Power Wide Area Network (LPWAN), Machine to Machine (M2M) network, Internet of Things (IoT) network, Ethernet network/switching system, any other appropriate architecture and/or system that facilitates communications in a network environment, and/or any suitable combination thereof. Networks through which communications propagate can use any suitable technologies for communications including wireless communications (e.g., 4G/5G/nG, IEEE 802.11 (e.g., Wi-Fi®/Wi-Fi6®), IEEE 802.16 (e.g., Worldwide Interoperability for Microwave Access (WiMAX)), Radio-Frequency Identification (RFID), Near Field Communication (NFC), Bluetooth™, mm.wave, Ultra-Wideband (UWB), etc.), and/or wired communications (e.g., T1 lines, T3 lines, digital subscriber lines (DSL), Ethernet, Fibre Channel, etc.). Generally, any suitable means of communications may be used such as electric, sound, light, infrared, and/or radio to facilitate communications through one or more networks in accordance with embodiments herein. 
Communications, interactions, operations, etc. as discussed for various embodiments described herein may be performed among entities that may be directly or indirectly connected utilizing any algorithms, communication protocols, interfaces, etc. (proprietary and/or non-proprietary) that allow for the exchange of data and/or information. Communications in a network environment can be referred to herein as ‘messages’, ‘messaging’, ‘signaling’, ‘data’, ‘content’, ‘objects’, ‘requests’, ‘queries’, ‘responses’, ‘replies’, etc. which may be inclusive of packets. As referred to herein and in the claims, the term ‘packet’ may be used in a generic sense to include packets, frames, segments, datagrams, and/or any other generic units that may be used to transmit communications in a network environment. Generally, a packet is a formatted unit of data that can contain control or routing information (e.g., source and destination address, source and destination port, etc.) and data, which is also sometimes referred to as a ‘payload’, ‘data payload’, and variations thereof. In some embodiments, control or routing information, management information, or the like can be included in packet fields, such as within header(s) and/or trailer(s) of packets. Internet Protocol (IP) addresses discussed herein and in the claims can include any IP version 4 (IPv4) and/or IP version 6 (IPv6) addresses. To the extent that embodiments presented herein relate to the storage of data, the embodiments may employ any number of any conventional or other databases, data stores or storage structures (e.g., files, databases, data structures, data or other repositories, etc.) to store information. Note that in this Specification, references to various features (e.g., elements, structures, nodes, modules, components, engines, logic, steps, operations, functions, characteristics, etc.) 
included in ‘one embodiment’, ‘example embodiment’, ‘an embodiment’, ‘another embodiment’, ‘certain embodiments’, ‘some embodiments’, ‘various embodiments’, ‘other embodiments’, ‘alternative embodiment’, and the like are intended to mean that any such features are included in one or more embodiments of the present disclosure, but may or may not necessarily be combined in the same embodiments. Note also that a module, engine, client, controller, function, logic or the like as used herein in this Specification, can be inclusive of an executable file comprising instructions that can be understood and processed on a server, computer, processor, machine, compute node, combinations thereof, or the like and may further include library modules loaded during execution, object files, system files, hardware logic, software logic, or any other executable modules. It is also noted that the operations and steps described with reference to the preceding figures illustrate only some of the possible scenarios that may be executed by one or more entities discussed herein. Some of these operations may be deleted or removed where appropriate, or these steps may be modified or changed considerably without departing from the scope of the presented concepts. In addition, the timing and sequence of these operations may be altered considerably and still achieve the results taught in this disclosure. The preceding operational flows have been offered for purposes of example and discussion. Substantial flexibility is provided by the embodiments in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings of the discussed concepts. 
As used herein, unless expressly stated to the contrary, use of the phrase ‘at least one of’, ‘one or more of’, ‘and/or’, variations thereof, or the like are open-ended expressions that are both conjunctive and disjunctive in operation for any and all possible combination of the associated listed items. For example, each of the expressions ‘at least one of X, Y and Z’, ‘at least one of X, Y or Z’, ‘one or more of X, Y and Z’, ‘one or more of X, Y or Z’ and ‘X, Y and/or Z’ can mean any of the following: 1) X, but not Y and not Z; 2) Y, but not X and not Z; 3) Z, but not X and not Y; 4) X and Y, but not Z; 5) X and Z, but not Y; 6) Y and Z, but not X; or 7) X, Y, and Z. Additionally, unless expressly stated to the contrary, the terms ‘first’, ‘second’, ‘third’, etc., are intended to distinguish the particular nouns they modify (e.g., element, condition, node, module, activity, operation, etc.). Unless expressly stated to the contrary, the use of these terms is not intended to indicate any type of order, rank, importance, temporal sequence, or hierarchy of the modified noun. For example, ‘first X’ and ‘second X’ are intended to designate two ‘X’ elements that are not necessarily limited by any order, rank, importance, temporal sequence, or hierarchy of the two elements. Further as referred to herein, ‘at least one of’ and ‘one or more of’ can be represented using the ‘(s)’ nomenclature (e.g., one or more element(s)). Each example embodiment disclosed herein has been included to present one or more different features. However, all disclosed example embodiments are designed to work together as part of a single larger system or method. This disclosure explicitly envisions compound embodiments that combine multiple previously-discussed features in different example embodiments into a single system or method. In sum, in one embodiment a method is provided. 
The method may include receiving, at a network device, an indication of calls-of-interest, detecting, at the network device, a failure of one of the calls-of-interest, triggering, in response to the detecting, at the network device, diagnostics data analysis of data associated with the failure of one of the calls-of-interest, determining, based on the diagnostics data analysis, a cause of the failure of the one of the calls-of-interest, and notifying, by the network device, a management system of the cause of the failure of the one of the calls-of-interest and of recent configuration changes on the network device that are related to the cause of the failure of the one of the calls-of-interest. In the method, the indication of the calls-of-interest may include at least one of a calling number and a called number. In the method, the network device may be a call controller. In the method, the network device may be a session border controller. In the method, the network device may be deployed within an enterprise network. In the method, the notifying may include sending at least one of a calling number, a called number, and a device ID to the management system. The method may further include determining, at the network device, that the failure of the one of the calls-of-interest occurred at another network device along a network path, and sending, from the network device to the another network device along the network path, a trigger command that is configured to cause the another network device to determine a cause of the failure of the one of the calls-of-interest. In the method, the network device may be a call controller and the another network device may be a session border controller. The method may further include the network device exchanging diagnostics data analysis capabilities with the another network device. The method may further include exchanging the diagnostics data analysis via session initiation protocol (SIP). A device may also be provided. 
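As a non-limiting illustration only, the sequence of operations recited above can be sketched in a few lines of Python; every class, method, and field name below is hypothetical and is not part of the claimed method:

```python
from dataclasses import dataclass, field

@dataclass
class NetworkDevice:
    """Hypothetical sketch of the recited flow: watch calls-of-interest,
    diagnose a failure, and notify a management system of the cause plus
    related recent configuration changes."""
    calls_of_interest: set = field(default_factory=set)    # calling/called numbers
    recent_config_changes: list = field(default_factory=list)
    notifications: list = field(default_factory=list)

    def receive_indication(self, numbers):
        self.calls_of_interest.update(numbers)

    def on_call_failure(self, number, diagnostics):
        if number not in self.calls_of_interest:
            return  # not a call-of-interest; no diagnostics triggered
        cause = self._analyze(diagnostics)                 # diagnostics data analysis
        related = [c for c in self.recent_config_changes
                   if c.get("area") == cause]              # changes related to the cause
        self.notifications.append(
            {"number": number, "cause": cause, "config_changes": related})

    def _analyze(self, diagnostics):
        # Placeholder analysis: pick the diagnostic area with the worst score.
        return max(diagnostics, key=diagnostics.get)

device = NetworkDevice()
device.receive_indication({"+15550100"})
device.recent_config_changes.append({"area": "sip-trunk", "change": "codec list edited"})
device.on_call_failure("+15550100", {"sip-trunk": 9, "media": 2})
print(device.notifications[0]["cause"])  # sip-trunk
```

The sketch keys the diagnostics trigger off membership in the calls-of-interest set, mirroring the recited order of operations; the `_analyze` step is a stand-in for whatever diagnostics data analysis a real network device would run.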
The device may include an interface configured to enable network communications, a memory, and one or more processors coupled to the interface and the memory, and configured to: receive an indication of calls-of-interest, detect a failure of one of the calls-of-interest, trigger, in response to detecting the failure of one of the calls-of-interest, diagnostics data analysis of data associated with the failure of one of the calls-of-interest, determine, based on the diagnostics data analysis, a cause of the failure of the one of the calls-of-interest, and notify a management system of the cause of the failure of the one of the calls-of-interest and of recent configuration changes that are related to the cause of the failure of the one of the calls-of-interest. In an embodiment the indication of the calls-of-interest may include at least one of a calling number and a called number. The device may be implemented as a call controller. The device may be implemented as a session border controller. The one or more processors may be further configured to notify by sending at least one of a calling number, a called number, and a device ID to the management system. The one or more processors may be further configured to determine that the failure of the one of the calls-of-interest occurred at a network device along a network path and to send, to the network device along the network path, a trigger command that is configured to cause the network device along the network path to determine a cause of the failure of the one of the calls-of-interest. 
In still another embodiment, there is provided one or more non-transitory computer readable storage media encoded with instructions that, when executed by a processor, cause the processor to: receive an indication of calls-of-interest, detect a failure of one of the calls-of-interest, trigger, in response to detecting the failure of one of the calls-of-interest, diagnostics data analysis of data associated with the failure of one of the calls-of-interest, determine, based on the diagnostics data analysis, a cause of the failure of the one of the calls-of-interest, and notify a management system of the cause of the failure of the one of the calls-of-interest and of recent configuration changes that are related to the cause of the failure of the one of the calls-of-interest. The indication of the calls-of-interest may include at least one of a calling number and a called number. The processor may be configured to operate as a call controller. The processor may be configured to operate as a session border controller. Each example embodiment disclosed herein has been included to present one or more different features. However, all disclosed example embodiments are designed to work together as part of a single larger system or method. This disclosure explicitly envisions compound embodiments that combine multiple previously discussed features in different example embodiments into a single system or method. One or more advantages described herein are not meant to suggest that any one of the embodiments described herein necessarily provides all of the described advantages or that all the embodiments of the present disclosure necessarily provide any one of the described advantages. 
Numerous other changes, substitutions, variations, alterations, and/or modifications may be ascertained by one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and/or modifications as falling within the scope of the appended claims.
DETAILED DESCRIPTION Through one or more of its various aspects, embodiments and/or specific features or sub-components of the present disclosure are intended to bring out one or more of the advantages as specifically described above and noted below. The examples may also be embodied as one or more non-transitory computer readable media having instructions stored thereon for one or more aspects of the present technology as described and illustrated by way of the examples herein. The instructions in some examples include executable code that, when executed by one or more processors, cause the processors to carry out steps necessary to implement the methods of the examples of this technology that are described and illustrated herein. As is traditional in the field of the present disclosure, example embodiments are described, and illustrated in the drawings, in terms of functional blocks, units and/or modules. Those skilled in the art will appreciate that these blocks, units and/or modules are physically implemented by electronic (or optical) circuits such as logic circuits, discrete components, microprocessors, hard-wired circuits, memory elements, wiring connections, and the like, which may be formed using semiconductor-based fabrication techniques or other manufacturing technologies. In the case of the blocks, units and/or modules being implemented by microprocessors or similar, they may be programmed using software (e.g., microcode) to perform various functions discussed herein and may optionally be driven by firmware and/or software. Alternatively, each block, unit and/or module may be implemented by dedicated hardware, or as a combination of dedicated hardware to perform some functions and a processor (e.g., one or more programmed microprocessors and associated circuitry) to perform other functions. 
Also, each block, unit and/or module of the example embodiments may be physically separated into two or more interacting and discrete blocks, units and/or modules without departing from the scope of the inventive concepts. Further, the blocks, units and/or modules of the example embodiments may be physically combined into more complex blocks, units and/or modules without departing from the scope of the present disclosure. FIG.1illustrates a computer system for implementing a cloud native dependency registration and impacted service notification (DRISN) system for building of a dependency chain across multiple heterogeneous platforms or environments present in an eco-system of an organization, and alerting of parties that may be affected by a failure of a resource included in the dependency chain in accordance with an exemplary embodiment. The system100is generally shown and may include a computer system102, which is generally indicated. The computer system102may include a set of instructions that can be executed to cause the computer system102to perform any one or more of the methods or computer-based functions disclosed herein, either alone or in combination with the other described devices. The computer system102may operate as a standalone device or may be connected to other systems or peripheral devices. For example, the computer system102may include, or be included within, any one or more computers, servers, systems, communication networks or cloud environment. Even further, the instructions may be operative in such cloud-based computing environment. In a networked deployment, the computer system102may operate in the capacity of a server or as a client user computer in a server-client user network environment, a client user computer in a cloud computing environment, or as a peer computer system in a peer-to-peer (or distributed) network environment. 
The computer system102, or portions thereof, may be implemented as, or incorporated into, various devices, such as a personal computer, a tablet computer, a set-top box, a personal digital assistant, a mobile device, a palmtop computer, a laptop computer, a desktop computer, a communications device, a wireless smart phone, a personal trusted device, a wearable device, a global positioning satellite (GPS) device, a web appliance, or any other machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single computer system102is illustrated, additional embodiments may include any collection of systems or sub-systems that individually or jointly execute instructions or perform functions. The term system shall be taken throughout the present disclosure to include any collection of systems or sub-systems that individually or jointly execute a set, or multiple sets, of instructions to perform one or more computer functions. As illustrated inFIG.1, the computer system102may include at least one processor104. The processor104is tangible and non-transitory. As used herein, the term “non-transitory” is to be interpreted not as an eternal characteristic of a state, but as a characteristic of a state that will last for a period of time. The term “non-transitory” specifically disavows fleeting characteristics such as characteristics of a particular carrier wave or signal or other forms that exist only transitorily in any place at any time. The processor104is an article of manufacture and/or a machine component. The processor104is configured to execute software instructions in order to perform functions as described in the various embodiments herein. The processor104may be a general-purpose processor or may be part of an application specific integrated circuit (ASIC). 
The processor104may also be a microprocessor, a microcomputer, a processor chip, a controller, a microcontroller, a digital signal processor (DSP), a state machine, or a programmable logic device. The processor104may also be a logical circuit, including a programmable gate array (PGA) such as a field programmable gate array (FPGA), or another type of circuit that includes discrete gate and/or transistor logic. The processor104may be a central processing unit (CPU), a graphics processing unit (GPU), or both. Additionally, any processor described herein may include multiple processors, parallel processors, or both. Multiple processors may be included in, or coupled to, a single device or multiple devices. The computer system102may also include a computer memory106. The computer memory106may include a static memory, a dynamic memory, or both in communication. Memories described herein are tangible storage mediums that can store data and executable instructions, and are non-transitory during the time instructions are stored therein. Again, as used herein, the term “non-transitory” is to be interpreted not as an eternal characteristic of a state, but as a characteristic of a state that will last for a period of time. The term “non-transitory” specifically disavows fleeting characteristics such as characteristics of a particular carrier wave or signal or other forms that exist only transitorily in any place at any time. The memories are an article of manufacture and/or machine component. Memories described herein are computer-readable mediums from which data and executable instructions can be read by a computer. 
Memories as described herein may be random access memory (RAM), read only memory (ROM), flash memory, electrically programmable read only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a cache, a removable disk, tape, compact disk read only memory (CD-ROM), digital versatile disk (DVD), floppy disk, blu-ray disk, or any other form of storage medium known in the art. Memories may be volatile or non-volatile, secure and/or encrypted, unsecure and/or unencrypted. Of course, the computer memory106may comprise any combination of memories or a single storage. The computer system102may further include a display108, such as a liquid crystal display (LCD), an organic light emitting diode (OLED), a flat panel display, a solid-state display, a cathode ray tube (CRT), a plasma display, or any other known display. The computer system102may also include at least one input device110, such as a keyboard, a touch-sensitive input screen or pad, a speech input, a mouse, a remote control device having a wireless keypad, a microphone coupled to a speech recognition engine, a camera such as a video camera or still camera, a cursor control device, a global positioning system (GPS) device, an altimeter, a gyroscope, an accelerometer, a proximity sensor, or any combination thereof. Those skilled in the art appreciate that various embodiments of the computer system102may include multiple input devices110. Moreover, those skilled in the art further appreciate that the above-listed, exemplary input devices110are not meant to be exhaustive and that the computer system102may include any additional, or alternative, input devices110. The computer system102may also include a medium reader112which is configured to read any one or more sets of instructions, e.g., software, from any of the memories described herein. The instructions, when executed by a processor, can be used to perform one or more of the methods and processes as described herein. 
In a particular embodiment, the instructions may reside completely, or at least partially, within the memory106, the medium reader112, and/or the processor104during execution by the computer system102. Furthermore, the computer system102may include any additional devices, components, parts, peripherals, hardware, software or any combination thereof which are commonly known and understood as being included with or within a computer system, such as, but not limited to, a network interface114and an output device116. The network interface114may include, without limitation, a communication circuit, a transmitter or a receiver. The output device116may be, but is not limited to, a speaker, an audio out, a video out, a remote control output, a printer, or any combination thereof. Each of the components of the computer system102may be interconnected and communicate via a bus118or other communication link. As shown inFIG.1, the components may each be interconnected and communicate via an internal bus. However, those skilled in the art appreciate that any of the components may also be connected via an expansion bus. Moreover, the bus118may enable communication via any standard or other specification commonly known and understood such as, but not limited to, peripheral component interconnect, peripheral component interconnect express, parallel advanced technology attachment, serial advanced technology attachment, etc. The computer system102may be in communication with one or more additional computer devices120via a network122. The network122may be, but is not limited to, a local area network, a wide area network, the Internet, a telephony network, a short-range network, or any other network commonly known and understood in the art. The short-range network may include, for example, Bluetooth, Zigbee, infrared, near field communication, ultra-wideband, or any combination thereof. 
Those skilled in the art appreciate that additional networks122which are known and understood may additionally or alternatively be used and that the exemplary networks122are not limiting or exhaustive. Also, while the network122is shown inFIG.1as a wireless network, those skilled in the art appreciate that the network122may also be a wired network. The additional computer device120is shown inFIG.1as a personal computer. However, those skilled in the art appreciate that, in alternative embodiments of the present application, the computer device120may be a laptop computer, a tablet PC, a personal digital assistant, a mobile device, a palmtop computer, a desktop computer, a communications device, a wireless telephone, a personal trusted device, a web appliance, a server, or any other device that is capable of executing a set of instructions, sequential or otherwise, that specify actions to be taken by that device. Of course, those skilled in the art appreciate that the above-listed devices are merely exemplary devices and that the device120may be any additional device or apparatus commonly known and understood in the art without departing from the scope of the present application. For example, the computer device120may be the same or similar to the computer system102. Furthermore, those skilled in the art similarly understand that the device may be any combination of devices and apparatuses. Of course, those skilled in the art appreciate that the above-listed components of the computer system102are merely meant to be exemplary and are not intended to be exhaustive and/or inclusive. Furthermore, the examples of the components listed above are also meant to be exemplary and similarly are not meant to be exhaustive and/or inclusive. In accordance with various embodiments of the present disclosure, the methods described herein may be implemented using a hardware computer system that executes software programs. 
Further, in an exemplary, non-limited embodiment, implementations can include distributed processing, component/object distributed processing, and an operation mode having parallel processing capabilities. Virtual computer system processing can be constructed to implement one or more of the methods or functionality as described herein, and a processor described herein may be used to support a virtual processing environment. FIG.2illustrates an exemplary diagram of a network environment with a cloud native dependency registration and impacted service notification system in accordance with an exemplary embodiment. According to exemplary embodiments, the above-described problems associated with conventional approaches may be overcome by implementing a dependency registration and impacted service notification (DRISN) system202as illustrated inFIG.2that may provide a platform for implementing a cloud native DRISN module(s) or component(s) that serve as a cloud based monitoring hub across multiple heterogeneous platforms/environments existing in an organization's eco-system. The DRISN system202provides consolidated or standard process automation tools as a service for any potential end-user without the need to learn any other advanced programming languages or concepts. Further, since the DRISN system202standardizes many processes while allowing configuration of rules and parameters, custom automation tools are often unnecessary, which reduces memory requirements for storing a large number of customized user tools. The DRISN system202may be the same or similar to the computer system102as described with respect toFIG.1. The DRISN system202may store one or more applications that can include executable instructions that, when executed by the DRISN system202, cause the DRISN system202to perform actions, such as to transmit, receive, or otherwise process network messages, for example, and to perform other actions described and illustrated below with reference to the figures. 
The application(s) may be implemented as modules or components of other applications. Further, the application(s) can be implemented as operating system extensions, modules, plugins, or the like. Even further, the application(s) may be operative in a cloud-based computing environment. The application(s) may be executed within or as virtual machine(s) or virtual server(s) that may be managed in a cloud-based computing environment. Also, the application(s), and even the DRISN system202itself, may be located in virtual server(s) running in a cloud-based computing environment rather than being tied to one or more specific physical network computing devices. Also, the application(s) may be running in one or more virtual machines (VMs) executing on the DRISN system202. Additionally, in one or more embodiments of this technology, virtual machine(s) running on the DRISN system202may be managed or supervised by a hypervisor. In the network environment200ofFIG.2, the DRISN system202is coupled to a plurality of server devices204(1)-204(n) that host a plurality of databases206(1)-206(n), and also to a plurality of client devices208(1)-208(n) via communication network(s)210. A communication interface of the DRISN system202, such as the network interface114of the computer system102ofFIG.1, operatively couples and communicates between the DRISN system202, the server devices204(1)-204(n), and/or the client devices208(1)-208(n), which are all coupled together by the communication network(s)210, although other types and/or numbers of communication networks or systems with other types and/or numbers of connections and/or configurations to other devices and/or elements may also be used. The communication network(s)210may be the same or similar to the network122as described with respect toFIG.1, although the DRISN system202, the server devices204(1)-204(n), and/or the client devices208(1)-208(n) may be coupled together via other topologies. 
Additionally, the network environment200may include other network devices such as one or more routers and/or switches, for example, which are well known in the art and thus will not be described herein. By way of example only, the communication network(s)210may include local area network(s) (LAN(s)) or wide area network(s) (WAN(s)), and can use TCP/IP over Ethernet and industry-standard protocols, although other types and/or numbers of protocols and/or communication networks may be used. The communication network(s)210in this example may employ any suitable interface mechanisms and network communication technologies including, for example, teletraffic in any suitable form (e.g., voice, modem, and the like), Public Switched Telephone Networks (PSTNs), Ethernet-based Packet Data Networks (PDNs), combinations thereof, and the like. The DRISN system202may be a standalone device or integrated with one or more other devices or apparatuses, such as one or more of the server devices204(1)-204(n), for example. In one particular example, the DRISN system202may be hosted by one of the server devices204(1)-204(n), and other arrangements are also possible. Moreover, one or more of the devices of the DRISN system202may be in the same or a different communication network including one or more public, private, or cloud networks, for example. The plurality of server devices204(1)-204(n) may be the same or similar to the computer system102or the computer device120as described with respect toFIG.1, including any features or combination of features described with respect thereto. For example, any of the server devices204(1)-204(n) may include, among other features, one or more processors, a memory, and a communication interface, which are coupled together by a bus or other communication link, although other numbers and/or types of network devices may be used. 
The server devices204(1)-204(n) in this example may process requests received from the DRISN system202via the communication network(s)210according to the HTTP-based protocol and/or JavaScript Object Notation (JSON) protocol, for example, although other protocols may also be used. According to a further aspect of the present disclosure, the user interface may be a Hypertext Transfer Protocol (HTTP) web interface, but the disclosure is not limited thereto. The server devices204(1)-204(n) may be hardware or software or may represent a system with multiple servers in a pool, which may include internal or external networks. The server devices204(1)-204(n) host the databases206(1)-206(n) that are configured to store metadata sets, data quality rules, and newly generated data. Although the server devices204(1)-204(n) are illustrated as single devices, one or more actions of each of the server devices204(1)-204(n) may be distributed across one or more distinct network computing devices that together comprise one or more of the server devices204(1)-204(n). Moreover, the server devices204(1)-204(n) are not limited to a particular configuration. Thus, the server devices204(1)-204(n) may contain a plurality of network computing devices that operate using a master/slave approach, whereby one of the network computing devices of the server devices204(1)-204(n) operates to manage and/or otherwise coordinate operations of the other network computing devices. The server devices204(1)-204(n) may operate as a plurality of network computing devices within a cluster architecture, a peer-to-peer architecture, virtual machines, or within a cloud architecture, for example. Thus, the technology disclosed herein is not to be construed as being limited to a single environment and other configurations and architectures are also envisaged. 
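For illustration only, a server device's handling of a JSON-encoded request might look like the following sketch; the payload shape, function name, and reply format are assumptions, not part of the disclosure:

```python
import json

def handle_request(body):
    """Hypothetical handler on a server device204(n): decode a JSON request,
    record the data-quality rule it carries, and acknowledge with a JSON reply."""
    request = json.loads(body)
    stored_rules = {request["key"]: request["value"]}  # stand-in for a database206(n)
    return json.dumps({"status": "ok", "stored": sorted(stored_rules)})

reply = json.loads(handle_request('{"key": "rule-42", "value": {"max": 10}}'))
print(reply["status"])  # ok
```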
The plurality of client devices208(1)-208(n) may also be the same or similar to the computer system102or the computer device120as described with respect toFIG.1, including any features or combination of features described with respect thereto. Client device in this context refers to any computing device that interfaces to communications network(s)210to obtain resources from one or more server devices204(1)-204(n) or other client devices208(1)-208(n). According to exemplary embodiments, the client devices208(1)-208(n) in this example may include any type of computing device that can facilitate the implementation of the DRISN system202that may efficiently provide a platform for implementing a cloud native DRISN module that serves as a cloud based alternative to standard process automation tools or user tools as a service for any potential end-user without the need to learn any other advanced programming languages or concepts, but the disclosure is not limited thereto. The client devices208(1)-208(n) may run interface applications, such as standard web browsers or standalone client applications, which may provide an interface to communicate with the DRISN system202via the communication network(s)210in order to communicate user requests. The client devices208(1)-208(n) may further include, among other features, a display device, such as a display screen or touchscreen, and/or an input device, such as a keyboard, for example. Although the exemplary network environment200with the DRISN system202, the server devices204(1)-204(n), the client devices208(1)-208(n), and the communication network(s)210are described and illustrated herein, other types and/or numbers of systems, devices, components, and/or elements in other topologies may be used. 
It is to be understood that the systems of the examples described herein are for exemplary purposes, as many variations of the specific hardware and software used to implement the examples are possible, as will be appreciated by those skilled in the relevant art(s). One or more of the devices depicted in the network environment200, such as the DRISN system202, the server devices204(1)-204(n), or the client devices208(1)-208(n), for example, may be configured to operate as virtual instances on the same physical machine. For example, one or more of the DRISN system202, the server devices204(1)-204(n), or the client devices208(1)-208(n) may operate on the same physical device rather than as separate devices communicating through communication network(s)210. Additionally, there may be more or fewer DRISN systems202, server devices204(1)-204(n), or client devices208(1)-208(n) than illustrated inFIG.2. According to exemplary embodiments, the DRISN system202may be configured to send code at run-time to remote server devices204(1)-204(n), but the disclosure is not limited thereto. In addition, two or more computing systems or devices may be substituted for any one of the systems or devices in any example. Accordingly, principles and advantages of distributed processing, such as redundancy and replication also may be implemented, as desired, to increase the robustness and performance of the devices and systems of the examples. The examples may also be implemented on computer system(s) that extend across any suitable network using any suitable interface mechanisms and traffic technologies, including by way of example only teletraffic in any suitable form (e.g., voice and modem), wireless traffic networks, cellular traffic networks, Packet Data Networks (PDNs), the Internet, intranets, and combinations thereof. 
FIG.3illustrates a system diagram for implementing a cloud native dependency registration and impacted service notification system in accordance with an exemplary embodiment. As illustrated inFIG.3, the system300may include a DRISN system302within which a group of API modules306is embedded, a server304, a database(s)312, a plurality of client devices308(1) . . .308(n), and a communication network310. According to exemplary embodiments, the DRISN system302including the API modules306may be connected to the server304, and the database(s)312via the communication network310. Although only one database has been illustrated, the disclosure is not limited thereto. Any number of databases may be utilized. The DRISN system302may also be connected to the plurality of client devices308(1) . . .308(n) via the communication network310, but the disclosure is not limited thereto. According to exemplary embodiments, the DRISN system302is described and shown inFIG.3as including the API modules306, although it may include other rules, policies, modules, databases, or applications, for example. According to exemplary embodiments, the database(s)312may be embedded within the DRISN system302. According to exemplary embodiments, the database(s)312may be configured to store configuration details data corresponding to a desired data to be fetched from one or more data sources, user information data etc., but the disclosure is not limited thereto. According to exemplary embodiments, the API modules306may be configured to receive real-time feed of data from the plurality of client devices308(1) . . .308(n) via the communication network310. According to exemplary embodiments, as will be described below, the API modules306may include, without limitation, request management API, automation template API, and execution API. The API modules306may provide a master catalog providing access to different APIs. 
In an example, upon authentication of a user, the API modules306may provide various automation request templates corresponding to an access setting of the user as specified in the user profile. The request management API may allow the user to create new automation requests, schedule the automation requests, associate requests with existing automation templates, associate requests with executions, search for automation requests, check status of an automation request, and trigger creation, submission, and completion notification. The automation template API may fetch automation template input form attributes, retrieve automation template variables, retrieve automation template rules/formats attributes, perform CRUD (create, read, update and delete) operations for reference data, and the like. The execution API may, without limitation, trigger data ingestion process, trigger automation processing (e.g., validation, transformation, calculation, and arrangement), trigger storage and/or transfer of resulting data and documents, and read/write logs. The API modules306may be configured to implement a user interface (UI) platform that is configured to enable DRISN as a service for a desired data processing scheme. The UI platform may include an input interface layer and an output interface layer. The input interface layer may request preset input fields to be provided by a user in accordance with a selection of an automation template. The UI platform may receive user input, via the input interface layer, of configuration details data corresponding to a desired data to be fetched from one or more data sources. The user may specify, for example, data sources, parameters, destinations, rules, and the like. 
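Purely as a sketch of how two of the API groups described above might be organized (every class and method name below is an assumption for illustration, not the actual API modules306):

```python
class RequestManagementAPI:
    """Sketch of the request management API: create, associate, and track
    automation requests."""
    def __init__(self):
        self.requests = {}

    def create_request(self, request_id, template_id):
        # Associate the new request with an existing automation template.
        self.requests[request_id] = {"template": template_id, "status": "created"}
        return self.requests[request_id]

    def check_status(self, request_id):
        return self.requests[request_id]["status"]


class ExecutionAPI:
    """Sketch of the execution API: ingest, process, then store/transfer results."""
    def run(self, request):
        for stage in ("ingested", "processed", "completed"):
            request["status"] = stage   # ingestion, automation processing, storage
        return request["status"]


# Master catalog providing access to the different APIs.
catalog = {"requests": RequestManagementAPI(), "execution": ExecutionAPI()}
req = catalog["requests"].create_request("r-1", "tmpl-7")
final = catalog["execution"].run(req)
print(final)  # completed
```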
The UI platform may further fetch the desired data from said one or more data sources based on the configuration details data to be utilized for the desired data processing scheme, automatically implement a transformation algorithm on the desired data corresponding to the configuration details data and the desired data processing scheme to output transformed data in a predefined format, and transmit, via the output interface layer, the transformed data to downstream applications or systems. The plurality of client devices 308(1) . . . 308(n) are illustrated as being in communication with the DRISN system 302. In this regard, the plurality of client devices 308(1) . . . 308(n) may be “clients” of the DRISN system 302 and are described herein as such. Nevertheless, it is to be known and understood that the plurality of client devices 308(1) . . . 308(n) need not necessarily be “clients” of the DRISN system 302, or any entity described in association therewith herein. Any additional or alternative relationship may exist between either or both of the plurality of client devices 308(1) . . . 308(n) and the DRISN system 302, or no relationship may exist. The first client device 308(1) may be, for example, a smart phone. Of course, the first client device 308(1) may be any additional device described herein. The second client device 308(n) may be, for example, a personal computer (PC). Of course, the second client device 308(n) may also be any additional device described herein. According to exemplary embodiments, the server 304 may be the same as or equivalent to the server device 204 as illustrated in FIG. 2. The process may be executed via the communication network 310, which may comprise plural networks as described above. For example, in an exemplary embodiment, one or more of the plurality of client devices 308(1) . . . 308(n) may communicate with the DRISN system 302 via broadband or cellular communication. Of course, these embodiments are merely exemplary and are not limiting or exhaustive. 
The computing device 301 may be the same as or similar to any one of the client devices 208(1)-208(n) as described with respect to FIG. 2, including any features or combination of features described with respect thereto. The DRISN system 302 may be the same as or similar to the DRISN system 202 as described with respect to FIG. 2, including any features or combination of features described with respect thereto. FIG. 4 illustrates a method for performing generation of a dependency chain, identification of impacted services, and transmission of a notification to the impacted services in accordance with an exemplary embodiment. In operation 401, an automated mining operation is performed for each platform/environment of multiple platforms/environments included in an organization's eco-system to obtain resource data of various resources included in the eco-system. The resource data may include identification information of resources, resource type information, and relationships/dependencies with respect to other resources. According to non-limiting aspects of the present application, the auto mining operation may be performed by performing a scan across each platform/environment included in the organization's ecosystem. When performing the scan, various resources included in the respective platform/environment may be identified. Along with the identification of the resources, a resource type may also be identified. Resource types may include, without limitation, hardware, applications, services, and the like. In addition, dependencies or relationships with other resources in a same or a different platform/environment may also be identified. In an example, relationships or dependencies between the identified resources may be determined based on load balancing, based on communications with other resources, interconnectivity, or the like. For example, in a cloud architecture, an internal cloud may be locked down to restrict access. 
In such a cloud architecture, specific ports may be required to be opened before any resource may connect to it. At least since a firewall has to be opened up to provide such a connection, information of the connecting resource may be obtained. Further, resource information may also be gathered based on user operations. The relationships or dependencies between the identified resources may be determined automatically during a scanning process. However, aspects of the present disclosure are not limited thereto, such that the relationships or dependencies may be further edited or modified after the auto mining operation is completed. Further, the relationships/dependencies may indicate the related/dependent resource as well as the environment/platform in which the related/dependent resource may reside. Further, the relationships/dependencies may indicate the nature of the relationship or dependency. For example, the relationship/dependency may indicate whether one application is a downstream or upstream application in relation to another application. If a related resource is determined to be a downstream resource of a mined resource, the related resource may be affected by a failure of the mined resource. In contrast, if the related resource is determined to be an upstream resource of the mined resource, the failure of the mined resource may not significantly impact performance of the upstream resource. However, aspects of the present application are not limited thereto, such that the dependency chain may be structured as a web, which indicates whether an application is related to another application or not. In operation 402, a dependency chain is generated or built based on the information gathered during the auto mining process in operation 401, and stored in a database. The gathered information may indicate the identification of the resource, the type of the resource, and dependencies/relationships with other resources. 
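As a non-limiting sketch, the resource records produced by the auto mining operation, including the downstream/upstream distinction described above, could be represented as follows. The field names and the `record_dependency` helper are illustrative assumptions, not the claimed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    # Hypothetical record produced by the auto mining scan:
    # identification, resource type, hosting platform, and relationships.
    resource_id: str
    resource_type: str   # e.g. "hardware", "application", "service"
    platform: str
    downstream: set = field(default_factory=set)  # resources depending on this one
    upstream: set = field(default_factory=set)    # resources this one depends on

def record_dependency(resources, provider_id, consumer_id):
    """Register that consumer_id receives information from provider_id.

    The consumer is downstream of the provider, so a failure of the
    provider may affect the consumer, but not vice versa.
    """
    resources[provider_id].downstream.add(consumer_id)
    resources[consumer_id].upstream.add(provider_id)
```

A later dependency-chain build can then consolidate these records across platforms/environments.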
For example, the dependencies of a target resource may indicate resource(s) that the target resource may transmit information to (e.g., downstream resources) and/or resource(s) that the target resource may receive information from (e.g., upstream resources). Further, the dependencies or relationship information may instead indicate the various resources with which the target resource communicates. Further, the dependencies/relationships with other resources may cut across different platforms/environments in the eco-system of the organization. In an example, one resource of a first platform/environment may have related/dependent resources at a second and/or third platform/environment. In an example, the dependency chain may also be generated in a graphical format or in a chart form for easier visibility to a user. Based on the dependency chain, a user may have a better overview of the various relationships between various resources and/or platforms/environments. Also, the dependency chain may indicate an amount of communication between certain resources and/or platforms/environments for identification of potential bottlenecks in network traffic. Further, in an example, the dependency chain may be modified based on environment settings or network configurations. In operation 403, each of the platforms/environments in the eco-system is monitored by one or more monitoring tools. Monitoring tools may include, for example, App Mon, GTI Bus, or the like. The monitoring tools may monitor performance and availability of a particular resource to detect a resource change event. In an example, performance may be monitored with respect to predetermined threshold values to ensure that the particular resource is performing within an expected level of service. When a performance metric of a resource drops below a predetermined or reference threshold value, a resource change event may be detected. 
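The threshold check that raises a resource change event in operation 403 can be sketched as below. This is a simplified stand-in for the monitoring tools named above (e.g., App Mon); the event fields are illustrative assumptions only.

```python
def detect_change_event(resource_id, metric_value, threshold):
    """Return a resource change event when a performance metric drops
    below the predetermined or reference threshold value, else None.
    A hypothetical simplification of a monitoring-tool check."""
    if metric_value < threshold:
        return {
            "resource": resource_id,
            "event": "performance_degraded",
            "metric": metric_value,
            "threshold": threshold,
        }
    return None
```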
In operation 404, an anomaly or failure is detected in one or more resources in the eco-system as a resource change event. The anomaly or failure may be a non-functioning status of a resource, a diminished performance thereof, or an inability to connect to the resource. Given the level of performance, it may be indicated as a fix that is to be addressed immediately, after hours, or during a scheduled maintenance. In an example, the level of performance may be based on a number of impacted resources or an importance of the respective resource. In operation 405, once the anomaly or failure is detected at a particular resource, the dependency chain is retrieved for determination of impacted resources. In an example, the impacted resources may include downstream resources, which depend on information from the particular resource for which the anomaly or failure is detected, but not upstream resources, which may provide information to the particular resource but do not depend on information from the particular resource. Alternatively, the impacted resources may include any resource that may communicate with the particular resource. In operation 406, a notification is transmitted to the impacted resources. In an example, the notification may be transmitted as an email, text message, voice mail, other notification medium, or a combination thereof. Although the present disclosure was described with respect to providing notification to impacted resources upon detection of an anomaly or failure of a resource within a dependency chain, aspects of the present application are not limited thereto. For example, the dependency chain may be utilized to notify dependent resources of a resource that may be scheduled to have maintenance or an upgrade performed thereon. FIG. 5 illustrates an eco-system of an organization including a dependency registration and impacted service notification system in accordance with an exemplary embodiment. 
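The determination of impacted resources in operation 405, following only downstream edges of the retrieved dependency chain, amounts to a graph traversal. The sketch below assumes the chain is stored as a mapping from each resource to the set of resources that consume its output; this representation is an assumption for illustration.

```python
from collections import deque

def impacted_resources(downstream, failed):
    """Walk the dependency chain from the failed resource and collect
    every transitive downstream dependent (breadth-first). Upstream
    resources are deliberately not visited, matching the example in
    which only dependents of the failed resource are impacted."""
    seen, queue = set(), deque([failed])
    while queue:
        node = queue.popleft()
        for dep in downstream.get(node, ()):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen
```

Under the alternative described above (any communicating resource is impacted), the same traversal would simply run over an undirected version of the chain.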
According to aspects of the present application, FIG. 5 exemplarily illustrates that an eco-system of an organization includes a DRISN system 500, an application team 510, a client terminal 520, a plurality of environments or platforms 530, a plurality of monitoring tools 540, and a mail host 550. In an example, the DRISN system 500 may scan each of the plurality of environments or platforms 530 to identify resources included in each of the environments or platforms 531, 532 and 533, and determine resource dependencies for each of the identified resources. The identified resources and corresponding dependencies may be utilized to build a dependency chain. In an example, a single aggregate dependency chain may be formed for all of the plurality of environments 530, or multiple dependency chains may be generated, one for each of the plurality of environments 530. Although a dependency chain including all of the identified resources is described above, aspects of the present application are not limited thereto, such that fewer than all of the identified resources may be included in the dependency chain, such that only the resources having other resource dependencies may be included in the dependency chain. In another example, only resources having dependencies residing in a different environment may be included in the dependency chain, such that resource dependencies residing in the same environment may not be included in the dependency chain. Further, although only three environments 531, 532 and 533 are illustrated, aspects of the present disclosure are not limited thereto, such that more or fewer environments may exist in an eco-system. In an example, the application team 510 may access the DRISN system 500 by utilizing the client terminal 520 to access the cloud network in which the DRISN system 500 resides by accessing its API 501. 
Once accessed, the application team 510 may have access to the dependency chain, which may be presented in a data format or in a graphical format for easier readability. Once the dependency chain(s) are generated, the application monitoring tools 540 perform monitoring of the network to determine an anomaly or failure occurring in any of the resources included in the dependency chain(s). In an example, the monitoring tools 540 include a first monitoring tool 541 and a second monitoring tool 542. Although only two monitoring tools are illustrated in FIG. 5, aspects of the present disclosure are not limited thereto, such that more or fewer monitoring tools may be utilized. Once the anomaly or failure is detected by the application monitoring tools 540, resource dependencies of the resource for which the anomaly or failure is detected are identified. Based on the resource dependencies, notifications will be transmitted to impacted users via the mail host 550. Although the mail host 550 is illustrated as the mechanism via which the notifications are to be delivered, aspects of the present disclosure are not limited thereto, such that other notification systems (e.g., text message, web based message, voice call, or the like) may be utilized. In an example, the impacted users may include users of dependent resources, and parties responsible for supporting or overseeing the dependent resources. FIG. 6 illustrates a method for generating a dependency chain in accordance with an exemplary embodiment. In operation 601, a DRISN system performs scanning of each of multiple platforms and environments included in an eco-system. In operation 602, resources included in the platforms and environments are identified. In an example, each resource in the eco-system may be identified and stored. In another example, only resources connected to other resources may be identified for inclusion in a dependency chain. In operation 603, for the identified resources, a determination of the number of connections with other resources is made. 
Further, a determination of the locations of the connected resources may also be made. Based on the number of connections and/or their respective locations, a determination of importance or a magnitude of impact may be made. For example, if resource A in platform/environment 1 is connected with resources B and C in platforms/environments 2 and 3, it may be determined that the failure of resource A may impact three environments, and resource A is more important than a resource that may impact only two platforms/environments. Further, a priority for addressing each of the one or more resources that are impacted by a change event may be set based on the number of connections with other resources for each of the one or more resources that are impacted by the change event. In an example, the urgency or level of attention required may be distinguished for resources that are determined to be more important than others. According to exemplary aspects, a failure of a more important resource may trigger immediate action(s) by a technical support team, whereas failures of a less egregious nature may be relegated to an after-hours response to minimize impact to users of the eco-system. In operation 604, all of the data collected in operations 602 and 603 for each of the platforms/environments is consolidated. In operation 605, one or more dependency chains are generated. In an example, a single dependency chain may be formed for all of the platforms or environments. In another example, multiple dependency chains may be formed. More specifically, each dependency chain may be formed for one or more of the platforms or environments. However, aspects of the present disclosure are not limited thereto, such that individual dependency chains may be generated for each of the platforms or environments, along with a master dependency chain for all of the platforms or environments. Further still, the generated dependency chains may be modified after generation to manually edit the dependencies. 
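The importance heuristic described above, ranking resources by the number of platforms/environments spanned and the number of connections, can be sketched as follows. The data layout (`connections` and `platforms` mappings) is a hypothetical representation chosen for illustration.

```python
def prioritize(impacted, connections, platforms):
    """Order impacted resources for remediation: a resource touching
    more distinct platforms/environments, and then more connected
    resources, is treated as more important (a sketch of the
    magnitude-of-impact determination above)."""
    def score(res):
        linked = connections.get(res, set())
        spanned = {platforms[r] for r in linked} | {platforms[res]}
        return (len(spanned), len(linked))
    return sorted(impacted, key=score, reverse=True)
```

For the example in the text, resource A (connected across three platforms/environments) would sort ahead of a resource impacting only two.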
In operation 606, the generated one or more dependency chains may be stored. In an example, the one or more dependency chains may be stored on a cloud network. However, aspects of the present disclosure are not limited thereto, such that such information may be stored at a remote server. FIG. 7 illustrates a method for providing select notification to impacted services in accordance with an exemplary embodiment. In operation 701, an anomaly or failure is detected as a change event at a resource included in a dependency chain for an eco-system of an organization. In an example, such a resource may be identified as an anomalous resource. In operation 702, impacted resources for the anomalous resource, for which the anomaly or failure is detected, are identified. The impacted resources may include resources that may communicate with the anomalous resource for which the anomaly or failure is detected. The impacted resources may include only resources that depend on the anomalous resource, and not upstream resources that provide information for the anomalous resource. Alternatively, the impacted resources may include any resource that is configured to communicate with the anomalous resource. In operation 703, a nature of the anomaly or failure (i.e., the change event) is identified. However, aspects of the disclosure are not limited thereto, such that additional attributes of the change event may be identified. In an example, the change event may indicate at least one of a nature of the anomaly, a severity of the anomaly, a nature of the impact and a severity of the impact. The nature of the detected anomaly or failure may be a slower service, a severity of slow service (e.g., speed, length, etc.), incorrect operation, a disconnected status, or the like. Also, the nature of the impact to related/dependent resources is identified. 
For example, the nature of the impact to related/dependent resources may indicate slower than normal service, a severity of slow service (e.g., speed, length, etc.), incorrect operation, an inability to connect, or the like. Based on such information, the content of the notification message is determined. For example, the notification may specify at least one of an identification of the target resource, a nature of the change event detected at the target resource, an identification of the one or more resources impacted by the change event, a degree of impact to the one or more resources impacted by the change event, an immediacy of resolution, an expected response action and an estimated schedule of the expected response action. In operation 704, users and/or groups for notification are identified. The users and/or groups for notification may include users of impacted resources, and one or more groups that may be responsible for overseeing or supporting the impacted resources. For example, users of the impacted resources may include users of the related/dependent resources and users of the resource for which the anomaly or failure is detected. Further, aspects of the present disclosure are not limited thereto, such that other users may request notification for any particular resource or resources. In operation 705, a mode of notification requested by the identified users and/or groups is identified. In an example, the identified users may have a preferred mode of receiving notification. For example, a normal application user may request to be notified by email, whereas technical support personnel may request to be notified by email, text message, voice message, or other wireless communication to provide an immediate response. Further, each of the users may modify the notification setting in the user's profile according to the respective user's needs. FIG. 8 illustrates a method for providing targeted resolution in accordance with an exemplary embodiment. 
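Operations 703-705, composing notification content and honoring each user's preferred delivery mode, can be sketched as below. The message schema and profile lookup are illustrative assumptions; the disclosure does not fix a specific format.

```python
def build_notifications(event, impacted, user_prefs):
    """Compose one notification per identified user, using the mode
    of notification from that user's profile (defaulting to email,
    per the example of a normal application user above)."""
    message = {
        "resource": event["resource"],          # identification of the target resource
        "nature": event["nature"],              # nature of the change event
        "impacted": sorted(impacted),           # resources impacted by the change event
        "severity": event.get("severity", "unknown"),
    }
    return [
        {"to": user, "mode": user_prefs.get(user, "email"), "body": message}
        for user in sorted(user_prefs)
    ]
```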
In operation 801, an anomaly or failure is detected at a resource included in a dependency chain for an eco-system of an organization. In operation 802, a nature of the anomaly or failure is identified. For example, the nature of the detected anomaly or failure may be a slower service, a severity of slow service (e.g., speed, length, etc.), incorrect operation, a disconnected status, or the like. In operation 803, a level of impact to a connected platform/environment and/or connected/related resources is determined. The level of impact may indicate anywhere from inefficient processing to inaccessibility. If the level of impact rises to inaccessibility of the impacted resource, such impact may be limited to the impacted resource or may cause an entire platform or environment to be non-functional. Based on the determined level of impact to the connected/related resource, and/or its corresponding platform/environment, an immediacy of resolution is determined in operation 804. If the impacted resource is determined to be an isolated resource that has little usage, resolution may be set to be performed after hours to avoid unnecessary downtime. On the other hand, if the impacted resource(s) are determined to be heavily utilized resources, or if the impacted resource(s) cause serious performance issues for a corresponding platform/environment, a request for an immediate fix may be deployed. Based on the determination of operation 804, an alert notification is sent to relevant parties (e.g., application users, technical support personnel, stakeholders, etc.). The alert notification may indicate an identity of the resource for which the anomaly or failure is detected, a nature of the impact, and/or a schedule of the planned resolution (e.g., immediate fix, after-hours fix, weekend fix, or the like). 
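The immediacy-of-resolution decision of operation 804 can be sketched as a small policy function. The labels and the exact mapping are illustrative assumptions; the document only requires that isolated, lightly used resources defer to after hours while heavily utilized or inaccessible resources trigger an immediate fix.

```python
def schedule_fix(impact_level, usage):
    """Map the determined level of impact and resource usage to a
    resolution window (hypothetical policy echoing operation 804)."""
    if impact_level == "inaccessible" or usage == "heavy":
        return "immediate"
    if impact_level == "degraded":
        return "after_hours"
    return "scheduled_maintenance"
```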
As noted above, the present application discloses performing auto-mining of resources for each resource onboarded in an organization's eco-system for identifying resources along with the dependencies they connect to. The dependencies may cut across various platforms. The identified dependencies or relationships may be utilized to build one or more dependency chains, which are then stored for later retrieval. Once one or more dependency chains are identified, a team member accessing an application in one of the dependency chains is notified when a failure occurs in another application in the dependency chain. More specifically, failures from various monitoring tools may be utilized to determine impacted services by using the dependency chain and to communicate the failures to all impacted service teams immediately. Monitoring tools may include an event bus, App Mon and the like. Based on use of the dependency chains, a failure at an application, a resource or a node in the dependency chain may trigger notification to all impacted services. Further, upgrades or maintenance may be accurately and timely communicated to impacted services and/or applications. Further, the scale of impact may be determined across the organization and across all platforms/environments at the organization. Failure and root cause may be shared across dependency chains. Further, although the invention has been described with reference to several exemplary embodiments, it is understood that the words that have been used are words of description and illustration, rather than words of limitation. Changes may be made within the purview of the appended claims, as presently stated and as amended, without departing from the scope and spirit of the present disclosure in its aspects. 
Although the invention has been described with reference to particular means, materials and embodiments, the invention is not intended to be limited to the particulars disclosed; rather the invention extends to all functionally equivalent structures, methods, and uses such as are within the scope of the appended claims. For example, while the computer-readable medium may be described as a single medium, the term “computer-readable medium” includes a single medium or multiple media, such as a centralized or distributed database, and/or associated caches and servers that store one or more sets of instructions. The term “computer-readable medium” shall also include any medium that is capable of storing, encoding or carrying a set of instructions for execution by a processor or that cause a computer system to perform any one or more of the embodiments disclosed herein. The computer-readable medium may comprise a non-transitory computer-readable medium or media and/or comprise a transitory computer-readable medium or media. In a particular non-limiting, exemplary embodiment, the computer-readable medium can include a solid-state memory such as a memory card or other package that houses one or more non-volatile read-only memories. Further, the computer-readable medium can be a random access memory or other volatile re-writable memory. Additionally, the computer-readable medium can include a magneto-optical or optical medium, such as a disk or tapes or other storage device to capture carrier wave signals such as a signal communicated over a transmission medium. Accordingly, the disclosure is considered to include any computer-readable medium or other equivalents and successor media, in which data or instructions may be stored. 
Although the present application describes specific embodiments which may be implemented as computer programs or code segments in computer-readable media, it is to be understood that dedicated hardware implementations, such as application specific integrated circuits, programmable logic arrays and other hardware devices, can be constructed to implement one or more of the embodiments described herein. Applications that may include the various embodiments set forth herein may broadly include a variety of electronic and computer systems. Accordingly, the present application may encompass software, firmware, and hardware implementations, or combinations thereof. Nothing in the present application should be interpreted as being implemented or implementable solely with software and not hardware. Although the present specification describes components and functions that may be implemented in particular embodiments with reference to particular standards and protocols, the disclosure is not limited to such standards and protocols. Such standards are periodically superseded by faster or more efficient equivalents having essentially the same functions. Accordingly, replacement standards and protocols having the same or similar functions are considered equivalents thereof. The illustrations of the embodiments described herein are intended to provide a general understanding of the various embodiments. The illustrations are not intended to serve as a complete description of all of the elements and features of apparatus and systems that utilize the structures or methods described herein. Many other embodiments may be apparent to those of skill in the art upon reviewing the disclosure. Other embodiments may be utilized and derived from the disclosure, such that structural and logical substitutions and changes may be made without departing from the scope of the disclosure. Additionally, the illustrations are merely representational and may not be drawn to scale. 
Certain proportions within the illustrations may be exaggerated, while other proportions may be minimized. Accordingly, the disclosure and the figures are to be regarded as illustrative rather than restrictive. One or more embodiments of the disclosure may be referred to herein, individually and/or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any particular invention or inventive concept. Moreover, although specific embodiments have been illustrated and described herein, it should be appreciated that any subsequent arrangement designed to achieve the same or similar purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all subsequent adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the description. The Abstract of the Disclosure is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, various features may be grouped together or described in a single embodiment for the purpose of streamlining the disclosure. This disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter may be directed to less than all of the features of any of the disclosed embodiments. Thus, the following claims are incorporated into the Detailed Description, with each claim standing on its own as defining separately claimed subject matter. 
The above disclosed subject matter is to be considered illustrative, and not restrictive, and the appended claims are intended to cover all such modifications, enhancements, and other embodiments which fall within the true spirit and scope of the present disclosure. Thus, to the maximum extent allowed by law, the scope of the present disclosure is to be determined by the broadest permissible interpretation of the following claims and their equivalents, and shall not be restricted or limited by the foregoing detailed description.
DETAILED DESCRIPTION Generally described, aspects of the present disclosure relate to an on-demand code execution system enabling rapid execution of code, which may be supplied by users of the on-demand code execution system. An on-demand code execution system may also be known as a “serverless” execution system or a request-driven code execution system. More specifically, embodiments of the present disclosure relate to facilitating recovery of stateful network connections of an execution environment on the on-demand code execution system when those connections are expected to be disrupted, such as by migration of the execution environment to another device or by a change in state of the execution environment (e.g., suspension or deactivation). As described in detail herein, the on-demand code execution system may provide a network-accessible service enabling users to submit or designate computer-executable code to be executed by virtual machine instances on the on-demand code execution system. Each set of code on the on-demand code execution system may define a “task,” and implement specific functionality corresponding to that task when executed on a virtual machine instance of the on-demand code execution system. Individual implementations of the task on the on-demand code execution system may be referred to as an “execution” of the task (or a “task execution”). The on-demand code execution system can further enable users to trigger execution of a task based on a variety of potential events, such as detecting new data at a network-based storage system, transmission of an application programming interface (“API”) call to the on-demand code execution system, or transmission of a specially formatted hypertext transfer protocol (“HTTP”) packet to the on-demand code execution system. 
Thus, users may utilize the on-demand code execution system to execute any specified executable code “on-demand,” without requiring configuration or maintenance of the underlying hardware or infrastructure on which the code is executed. Further, the on-demand code execution system may be configured to execute tasks in a rapid manner (e.g., in under 100 milliseconds [ms]), thus enabling execution of tasks in “real-time” (e.g., with little or no perceptible delay to an end user). The on-demand code execution system may implement a variety of technologies to enable rapid execution of code. Illustratively, the on-demand code execution system may be configured to maintain a number of execution environments, such as virtual machine instances, software containers, or the like, in which code of a task may be provisioned and executed. In some instances, an appropriate execution environment may be currently executing on the on-demand code execution system on a host device nearby to an end user, and thus execution of a task may require little more than provisioning the execution environment with code and executing that code. In other instances, these execution environments may not be executing, and thus execution of a task may also require initializing the environment (e.g., by launching a virtual machine instance, which may include loading an operating system, etc.). In general, maintaining an environment in an executing state can facilitate more rapid execution of a task, but also utilizes some amount of working computing resources of a host computing device, such as central processing unit (CPU) cycles and registers, random access memory (RAM), and the like. 
In contrast, maintaining an environment in a non-executing state (e.g., a shut-down virtual machine instance) can utilize fewer or no working resources, and can instead utilize non-working resources (e.g., resources not required for currently executing processes) such as long term memory storage provided by a hard disk drive (HDD). However, maintaining an environment in a non-executing state may require that the environment first be initialized prior to executing a task within the environment, thus potentially delaying execution of the task. To address this trade-off, embodiments of the on-demand code execution system can be configured to modify a memory state of an execution environment according to a next predicted execution of the task, such that the overall computing resource usage of the on-demand code execution system is reduced, and/or such that the on-demand code execution system can facilitate more executions of tasks within a given set of computing resources. Specifically, embodiments of the present disclosure enable the on-demand code execution system to utilize historical information regarding executions of tasks to predict, for a given task, when a next request to execute that task will occur. The on-demand code execution system may then place an execution environment for the task into a memory state based on that predicted next execution request. For example, where the next execution request is expected to occur shortly, the on-demand code execution system may maintain an environment for the task in an executing state, such as within RAM or other primary memory of a host device. Conversely, when the next execution request is expected to occur at a significantly later time, the on-demand code execution system may maintain the environment for the task in a non-executing state within a secondary memory, such as a hard disk drive.
Because secondary memory is generally more readily available within a host device than primary memory, and because maintaining an environment in a non-executing state generally minimizes or negates load on processing resources of a host (e.g., a CPU), maintaining an environment within secondary memory can reduce computing resource usage of a host device and free those resources for use in executing other tasks, thus improving the efficiency of the on-demand code execution system overall. In some instances, a host device of the on-demand code execution system may have access to multiple levels of secondary memory, in addition to a primary memory (e.g., RAM). For example, a host device may have access to very rapid non-volatile memory, such as 3D XPOINT™ memory, developed by Intel Corporation (which memory implements a specific technology architecture generally classified as a type of resistive random-access memory, or “ReRAM”), relatively less rapid flash storage (e.g., within a solid state disk (SSD) drive), relatively less rapid magnetic storage memory (e.g., within an HDD), and still relatively less rapid non-local storage (e.g., a network-attached storage, or “NAS,” which may be implemented by any of a variety of physical memory storage devices, including the preceding memory device types). In general, the speed of memory (e.g., in terms of bandwidth) can be expected to be inversely proportional to the amount of such memory available. Thus, 3D XPOINT memory is expected to be less available than flash storage, which is expected to be less available than magnetic storage, etc. As used herein, the term “lower tier” memory is generally intended (unless noted to the contrary) to refer to memory with lower speed but greater expected availability than a “higher tier” memory, which in turn is generally intended to refer to memory with higher speed and lower expected availability. 
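The memory hierarchy described above, in which speed is inversely proportional to expected availability, might be modeled as an ordered list of tiers. The bandwidth and capacity figures below are placeholders for illustration, not measurements of any real device.

```python
# Illustrative model of the memory tiers: each lower tier trades
# bandwidth for greater expected availability (capacity).

TIERS = [
    # (name, relative bandwidth, relative capacity) -- invented numbers
    ("RAM (primary)",    100.0,   1.0),
    ("3D XPoint",         40.0,   4.0),
    ("SSD (flash)",       10.0,  20.0),
    ("HDD (magnetic)",     1.0, 100.0),
    ("NAS (non-local)",    0.5, 500.0),
]

def lower_tiers(current_name):
    """Return the tiers slower (and larger) than the named tier."""
    names = [t[0] for t in TIERS]
    return TIERS[names.index(current_name) + 1:]
```

For example, `lower_tiers("SSD (flash)")` yields the HDD and NAS entries, i.e., the candidate "lower tier" memories for an environment currently held on flash storage.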
As such, transitioning an execution environment to a lower tier of memory is generally expected to reduce the resource usage of the on-demand code execution system in maintaining that environment, while at the same time increasing the time required to initialize the environment such that a task may be executed within the environment. As will be discussed below, the on-demand code execution system can therefore be configured to transition an environment to a lowest tier memory possible while still maintaining the ability of the on-demand code execution system to rapidly execute a task within the environment based on a predicted next request to execute the task. In one embodiment, a predicted next request to execute a task may be based on historical information regarding the task. Illustratively, if requests to execute a task have historically (e.g., over a past period of time, such as a day, week, month, year, etc.) occurred at a set frequency of once per minute, the on-demand code execution system may expect that a next request to execute the task will occur one minute after a prior request. This illustrative example may occur when an external system, such as a web service, has been configured to call to the on-demand code execution system for execution of a task at the set frequency. In some instances, the on-demand code execution system may be configured to calculate an expected range of time until a next request to execute a task based on historical information. For example, the on-demand code execution system may apply common statistical techniques to calculate a mean or median predicted time until a next execution, or a standard deviation of a distribution of times between requests to execute the code. 
As a further example, the on-demand code execution system may calculate an interquartile range of a historical distribution of times between calls to execute a task, and use this range (along with a known last call to execute the task) to predict when a next call to execute a task will occur. In other embodiments, the on-demand code execution system may utilize other inputs to predict a next expected request to execute a task. For example, where little or no history exists for a given task, the on-demand code execution system may instead utilize request histories of similar tasks to calculate the next expected request to execute the given task. Similar tasks may be identified, for example, based on length of source code for the task, functions called within the code, libraries utilized by the task, a type of environment (e.g., operating system or runtime environment) for the task, and the like. In some embodiments, the on-demand code execution system may have more specific knowledge of the next expected request to execute a task. For example, a creator of a task may explicitly ask that the on-demand code execution system execute the task at a set frequency, thus enabling the on-demand code execution system to determine with great accuracy when a next execution of the task will occur. In one embodiment, the on-demand code execution system is configured, on detecting an idle execution environment for a task in a primary memory (e.g., after execution of the task within the environment), to transition the environment to a lowest tier memory which would still enable the environment to be transitioned back to an executing state prior to a time of a next expected request to execute the task. 
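The interquartile-range technique described above can be sketched as follows: compute the quartiles of the historical gaps between requests and project them forward from the last observed request. This is a hedged sketch of one possible statistical approach, not the system's actual implementation.

```python
# Predict a window for the next request from historical inter-request
# gaps, using the interquartile range as described above.
import statistics

def predict_next_request(request_times):
    """Given ascending timestamps of past requests, return a (low, high)
    window in which the next request is predicted to fall."""
    gaps = [b - a for a, b in zip(request_times, request_times[1:])]
    q1, _, q3 = statistics.quantiles(gaps, n=4)  # quartiles of the gaps
    last = request_times[-1]
    return (last + q1, last + q3)

# Requests arriving roughly once per minute (timestamps in seconds):
times = [0, 60, 119, 181, 240, 301]
low, high = predict_next_request(times)
# The next request is predicted roughly one minute after the last one.
```

A mean- or standard-deviation-based predictor, also mentioned above, would follow the same shape with `statistics.mean` or `statistics.stdev` in place of the quartile computation.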
For example, where a next expected request to execute a task in an environment is one minute in the future, and transitioning an environment to a given tier of secondary memory and back into primary memory is expected to take 50 seconds of time (e.g., 25 seconds to halt execution and transition to secondary memory and the same amount of time to transition to primary memory and initialize the environment), the on-demand code execution system may be configured to conduct that transition, thus “freeing up” an amount of primary memory supporting execution of the environment for ten seconds of time. (The amount freed in practice would exceed this, as some amount of primary memory would be expected to become available even before transitioning of the environment out of primary memory completes, and some amount would be expected not to be utilized until transitioning of the environment back to primary memory completes.) While the above example may reduce overall usage of the primary memory, this example may not be desirable for the on-demand code execution system overall, as transitioning execution environments to secondary memory may also incur costs in terms of resource usage. Illustratively, in the example above, transitioning an environment from primary memory to secondary memory over a period of 25 seconds may utilize both the secondary memory and bandwidth of a communication bus between the primary and secondary memory. These other resources (the secondary memory and communication bus) may also support other operations of the on-demand code execution system. Thus, while transitioning an environment to secondary memory may free up a portion of the primary memory for a ten second period, it may also incur costs in terms of use of the secondary memory and the communication bus. 
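The selection rule illustrated by the one-minute example above can be sketched as: choose the lowest (slowest) tier whose round-trip transition time still fits before the predicted next request. The round-trip times below are invented for illustration.

```python
# Pick the lowest memory tier that can still be restored in time for the
# next predicted request. Times are illustrative placeholders.

TIER_ROUND_TRIP_SECONDS = [
    # (tier name, seconds to transition out of primary memory and back)
    ("RAM (stay resident)",  0),
    ("3D XPoint",            5),
    ("SSD",                 20),
    ("HDD",                 50),
    ("NAS",                120),
]

def choose_tier(seconds_until_next_request):
    """Lowest tier whose round trip fits before the next request."""
    chosen = TIER_ROUND_TRIP_SECONDS[0][0]
    for name, round_trip in TIER_ROUND_TRIP_SECONDS:
        if round_trip <= seconds_until_next_request:
            chosen = name  # keep taking lower tiers while they fit
    return chosen

# With the one-minute example from the text, the 50-second round trip fits:
print(choose_tier(60))  # HDD
```

With only ten seconds until the next request, the rule would instead keep the environment on a faster tier, matching the discussion of why a transition is not always worthwhile.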
To address this concern, embodiments of the present disclosure may calculate an expected cost (e.g., in terms of usage of computing resources) associated with maintaining an execution environment in a primary memory, as well as expected costs for transitioning the environment to each potential secondary memory (e.g., in terms of both usage of the secondary memory and usage of resources, such as a communication bus, to move the environment to the secondary memory). Each cost may be based at least partly on a timing of a next expected request to execute a task within the environment. The on-demand code execution system may thereafter transition the environment to a memory tier with a lowest expected overall cost to the on-demand code execution system. In one embodiment, the execution environments managed by the on-demand code execution system correspond to virtual machine instances. To transition such instances from an executing to non-executing state, the on-demand code execution system may utilize “snapshots” of such virtual machine instances. Snapshotting of virtual machines is a known technique, and thus will not be described in detail herein. However, in brief, snapshotting may generate a data file which stores a state of a virtual machine instance at a point in time, including state elements such as a content of CPU registers of the virtual machine instance, contents of RAM of the virtual machine instance, states of pages within RAM (e.g., as “dirty” or “clean”), and any other information required to return the virtual machine instance to its prior state at a later point in time. Thus, as will be described below, the on-demand code execution system may be configured to modify a memory state of a virtual machine instance from primary memory to secondary memory by snapshotting a current state of the instance, and placing that snapshot into secondary memory.
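The expected-cost comparison described above might be sketched as follows. The cost weights are invented for illustration; a real system would derive them from measured resource usage (primary-memory occupancy, secondary-memory occupancy, and bus bandwidth consumed by the transfer).

```python
# Score each candidate tier by the resources it would consume until the
# predicted next request, and pick the cheapest. All weights are invented.

def expected_cost(tier, seconds_until_next_request):
    """Cost of keeping an environment in `tier` until the next request:
    per-second occupancy cost plus a one-time transfer cost (bus
    bandwidth and time spent moving the snapshot)."""
    return (tier["occupancy_cost_per_s"] * seconds_until_next_request
            + tier["transfer_cost"])

def cheapest_tier(tiers, seconds_until_next_request):
    return min(tiers,
               key=lambda t: expected_cost(t, seconds_until_next_request))

TIER_COSTS = [
    {"name": "RAM", "occupancy_cost_per_s": 10.0, "transfer_cost":   0.0},
    {"name": "SSD", "occupancy_cost_per_s":  1.0, "transfer_cost": 200.0},
    {"name": "HDD", "occupancy_cost_per_s":  0.2, "transfer_cost": 500.0},
]

# Staying in RAM wins for a request 10 s away; HDD wins for one an hour away:
print(cheapest_tier(TIER_COSTS, 10)["name"])    # RAM
print(cheapest_tier(TIER_COSTS, 3600)["name"])  # HDD
```

This captures the trade-off in the text: a transition frees primary memory but incurs secondary-memory and bus costs, so it only wins when the idle period is long enough to amortize the transfer.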
The on-demand code execution system may further modify a memory state of a virtual machine instance from a secondary memory to primary memory by utilizing the snapshot to reinitialize the virtual machine instance in an executing state. In addition to transitioning a virtual machine instance between primary and secondary memory, embodiments of the present disclosure may further enable transitioning virtual machine instances between memories of different host computing devices. For example, an on-demand code execution system may be implemented by devices distributed across a number of geographic locations, such as multiple data centers, each of which includes a number of host devices. To facilitate rapid execution of code, the on-demand code execution system may be configured to attempt to execute a task requested by an end user within an environment on a host device nearby to the end user (e.g., geographically or in terms of network distance). Due to movement of users, changes in capacity, failure of machines, and the like, it may be desirable to move execution environments between host devices, such that when a request to execute a task is received from an end user, the execution environment is hosted on a host device nearby to an end user with sufficient capacity to execute the task. A number of techniques are known in the art to determine a host device to which an execution environment should be migrated, and these techniques will thus not be discussed in detail herein. After selecting a host device to which an execution environment should be migrated, embodiments described herein may be utilized to migrate the execution environment from a current host device to a destination host device. Illustratively, an execution environment may be migrated from an executing state on a first host device to a secondary memory of a second host device in accordance with embodiments of the present disclosure.
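The snapshot/restore cycle described above can be illustrated as follows. Real snapshotting captures CPU registers, RAM contents, and page states; in this hedged sketch a plain dictionary stands in for all of that state.

```python
# Illustrative snapshot/restore cycle: capture instance state into a
# record, hold it in (simulated) secondary memory, and later use it to
# reinitialize the instance in an executing state.

def snapshot(instance):
    """Capture instance state into a snapshot record."""
    return {"instance_id": instance["id"], "state": dict(instance["state"])}

def restore(snap):
    """Reinitialize an instance in an executing state from a snapshot."""
    return {"id": snap["instance_id"], "state": dict(snap["state"]),
            "executing": True}

vm = {"id": "vm-7", "state": {"pc": 4096, "ram_pages": 512}, "executing": True}
secondary_memory = [snapshot(vm)]          # demoted to secondary memory
revived = restore(secondary_memory.pop())  # later promoted back to primary
print(revived["state"]["pc"])  # 4096: prior state is recovered
```

The same record could equally be shipped to a second host device, which is the migration case discussed above.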
One potential issue raised by transitioning an execution environment into an inactive state, or transitioning an environment between host devices (e.g., migrating the environment) is a disruption of stateful network connections. Generally described, stateful network connections are communication sessions between two devices on a communication network, where each device maintains “state” information reflecting a state of the communication session. One example of a stateful network connection is a connection between two computing devices implementing the transport control protocol (TCP). In a TCP network connection, two devices maintain state information regarding a TCP session, and negotiate to establish the state of that session. The information maintained by each device generally includes a “5-tuple” (a set of five values that at least partly define the session) and a sequence number. The five values of the 5-tuple generally identify for the session a source internet protocol (IP) address and port number, a destination IP address and port number, and an identifier of the TCP protocol. The sequence number identifies an ordering of communications in the session. If any of the information maintained at a device changes, the TCP connection can be broken. For example, if an execution environment establishes a TCP connection to an external device and is then migrated from one host device to another, the environment may obtain a new IP address based on its new location. This may alter the 5-tuple of the TCP session, resulting in the external device rejecting the TCP connection if the environment attempts to utilize it. Stateful connections can also be broken due to inactivity, such as a timeout occurring at either device. Thus, if an execution environment is rendered inactive for a sufficient period of time, an external device may consider a TCP connection with the environment to be closed.
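The per-connection state just described can be sketched as a 5-tuple plus a sequence number. The sketch below (with invented example addresses) shows how a migration that changes the source IP address yields a tuple the peer no longer recognizes.

```python
# Sketch of TCP per-connection state: the 5-tuple plus a sequence
# number. Changing any 5-tuple field produces a different session
# identity, which the peer will reject.
from dataclasses import dataclass

@dataclass(frozen=True)
class FiveTuple:
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int
    protocol: str = "tcp"

before = FiveTuple("10.0.0.5", 41234, "203.0.113.9", 443)
# After migration the environment obtains a new IP address:
after = FiveTuple("10.0.1.7", 41234, "203.0.113.9", 443)
print(before == after)  # False: the session the peer knows no longer matches
```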
On reactivating the environment, the environment may not be aware of this change (since, from the point of view of the environment, the timeout might not have occurred), and may attempt to utilize the TCP connection, resulting in an error. While examples are provided herein with reference to TCP, similar issues may exist within any stateful or connection-oriented protocol, including protocols layered over otherwise stateless protocols (e.g., the User Datagram Protocol, or “UDP”). Thus, transitions in state or location of an execution environment can be detrimental to stateful network connections of the device. Embodiments of the present disclosure address these problems by providing a mechanism for gracefully halting and recovering stateful network connections on an execution environment that is transitioned in state or location. Specifically, in accordance with embodiments of the present disclosure, a host device hosting an execution environment may, prior to transitioning the environment, notify external devices (e.g., devices to which the environment has a stateful network connection) that the connection is ending. The host device may illustratively identify external devices based on a connection table maintained at the host device, and utilized to route network data from the execution environment to the external device. In one embodiment, the host device “masquerades” as the execution environment when notifying the external device that a connection is ending, such as by modifying data packets corresponding to the notification to appear to originate from the execution environment. Thus, the external device may recognize the connection as being terminated. The host device may further prevent communications from or to the execution environment, to prevent the environment from gaining knowledge that the external device believes the connection to be terminated (since such knowledge may cause the environment to attempt to reestablish the connection prematurely).
The host device may then transition the execution environment, either in state (e.g., to a secondary memory of the host device) or in location (e.g., to another host device). On reactivating the environment, the host device may notify the environment that each connection (believed by the environment to still be active) is terminating. In one embodiment, the host device “masquerades” as the external device when notifying the execution environment that a connection is ending, such as by modifying data packets corresponding to the notification to appear to originate from the external device. Thus, the environment may also recognize the connection as being terminated, synchronizing its knowledge of the connection with the external device. Because the above interactions enable both the external device and the execution environment to have the same knowledge of the connection (as terminated), the environment may then reestablish the network connection to the external device, if necessary, while avoiding any extraneous communications based on non-synchronized knowledge (which would be expected to result in errors). Any stateful connections of the environment can therefore be gracefully reestablished after transitioning of the environment. While alternative techniques may exist to gracefully maintain or reestablish stateful network connections on a transitioning execution environment, these techniques may not be preferable on an on-demand code execution system as disclosed herein. For example, during migration of a virtual machine, it may be possible to maintain network connections by utilizing virtual networking or tunneling technologies to redirect network traffic from a prior host device hosting an environment to a new host device. Illustratively, the prior host device may be configured to receive network packets directed to an environment from an external device, and to route the packets to the new host device for delivery to the environment.
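The two-sided masquerade described above might be sketched as follows: the host walks its connection table and emits termination notices, first to each external peer appearing to come from the environment (before the transition), then to the environment appearing to come from each peer (after reactivation). Packets are simplified to dictionaries here; a real host would craft protocol-level packets (e.g., TCP FIN or RST segments) with rewritten source addresses.

```python
# Illustrative two-sided masquerade driven by the host's connection table.

def notices_to_peers(connection_table):
    """Before transitioning: tell each external peer the connection is
    closing, masquerading as the environment (spoofed source)."""
    return [{"src": c["env_addr"], "dst": c["peer_addr"], "flag": "FIN"}
            for c in connection_table]

def notices_to_environment(connection_table):
    """After reactivation: tell the environment each connection is
    closed, masquerading as the external peer."""
    return [{"src": c["peer_addr"], "dst": c["env_addr"], "flag": "FIN"}
            for c in connection_table]

table = [{"env_addr": "10.0.0.5:41234", "peer_addr": "203.0.113.9:443"}]
print(notices_to_peers(table)[0]["src"])        # 10.0.0.5:41234
print(notices_to_environment(table)[0]["src"])  # 203.0.113.9:443
```

After both sets of notices are delivered, both endpoints regard every connection as closed, and the environment can reestablish connections as needed.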
However, network redirection generally incurs additional latency and overhead in terms of compute resources used to facilitate such redirection. This additional latency and overhead can be particularly problematic in production environments intended to operate at low latencies, like an on-demand code execution system. Moreover, such tunneling or rerouting would generally be insufficient to handle reestablishment of stateful network connections in the instance of an environment being transitioned to an inactive state for a relatively long duration of time (e.g., longer than a timeout value for the network connection). Thus, the embodiments disclosed herein for gracefully terminating and reestablishing network connections may be preferable to rerouting or tunneling techniques. As will be appreciated by one of skill in the art in light of the present disclosure, the embodiments disclosed herein improve the ability of computing systems, such as on-demand code execution systems, to execute code in an efficient manner. Specifically, embodiments of the present disclosure increase the efficiency of computing resource usage of such systems by enabling execution environments to be transitioned to lower tier memory, while maintaining the ability of such systems to execute code rapidly in response to requests to do so. Further, embodiments of the present disclosure decrease the occurrence of errors in on-demand code execution systems, by enabling graceful recovery of stateful network connections in environments hosted by such systems. Moreover, the presently disclosed embodiments address technical problems inherent within computing systems; specifically, the limited nature of computing resources with which to execute code, the inefficiencies caused by maintaining unutilized environments in an executing state, and the difficulties of maintaining stateful network connections during transitions of execution environments.
These technical problems are addressed by the various technical solutions described herein, including the selective transitioning of environments to lower tier memories based on a time until a next expected utilization of such an environment, and the graceful recovery of network connections for the environment by utilization of a host device to notify either or both an external device and the environment that each network connection has terminated. Thus, the present disclosure represents an improvement on existing data processing systems and computing systems in general. The general execution of tasks on the on-demand code execution system will now be discussed. As described in detail herein, the on-demand code execution system may provide a network-accessible service enabling users to submit or designate computer-executable source code to be executed by virtual machine instances on the on-demand code execution system. Each set of code on the on-demand code execution system may define a “task,” and implement specific functionality corresponding to that task when executed on a virtual machine instance of the on-demand code execution system. Individual implementations of the task on the on-demand code execution system may be referred to as an “execution” of the task (or a “task execution”). The on-demand code execution system can further enable users to trigger execution of a task based on a variety of potential events, such as detecting new data at a network-based storage system, transmission of an application programming interface (“API”) call to the on-demand code execution system, or transmission of a specially formatted hypertext transport protocol (“HTTP”) packet to the on-demand code execution system. Thus, users may utilize the on-demand code execution system to execute any specified executable code “on-demand,” without requiring configuration or maintenance of the underlying hardware or infrastructure on which the code is executed. 
Further, the on-demand code execution system may be configured to execute tasks in a rapid manner (e.g., in under 100 milliseconds [ms]), thus enabling execution of tasks in “real-time” (e.g., with little or no perceptible delay to an end user). To enable this rapid execution, the on-demand code execution system can include one or more virtual machine instances that are “pre-warmed” or pre-initialized (e.g., booted into an operating system and executing a complete or substantially complete runtime environment) and configured to enable execution of user-defined code, such that the code may be rapidly executed in response to a request to execute the code, without delay caused by initializing the virtual machine instance. Thus, when an execution of a task is triggered, the code corresponding to that task can be executed within a pre-initialized virtual machine in a very short amount of time. Specifically, to execute tasks, the on-demand code execution system described herein may maintain a pool of executing virtual machine instances that are ready for use as soon as a user request is received. Due to the pre-initialized nature of these virtual machines, delay (sometimes referred to as latency) associated with executing the user code (e.g., instance and language runtime startup time) can be significantly reduced, often to sub-100 millisecond levels. Illustratively, the on-demand code execution system may maintain a pool of virtual machine instances on one or more physical computing devices, where each virtual machine instance has one or more software components (e.g., operating systems, language runtimes, libraries, etc.) loaded thereon. 
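Selection from the pre-warmed pool just described might be sketched as follows, assuming each pooled instance advertises the runtimes it has loaded. All names and fields here are invented for illustration.

```python
# Sketch of pre-warmed pool selection: serve a request from any idle
# instance that already has the required runtime loaded.

def pick_instance(pool, required_runtime):
    """Return the first idle pre-warmed instance with the runtime loaded,
    or None if no match exists (a cold start would then be needed)."""
    for instance in pool:
        if instance["idle"] and required_runtime in instance["runtimes"]:
            return instance["id"]
    return None

pool = [
    {"id": "vm-1", "idle": False, "runtimes": {"python3.9"}},
    {"id": "vm-2", "idle": True,  "runtimes": {"nodejs14", "python3.9"}},
]
print(pick_instance(pool, "python3.9"))  # vm-2
```

Because the matching instance is already booted and provisioned, serving the request requires only loading the task code into it, which is what keeps latency at sub-100-millisecond levels.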
When the on-demand code execution system receives a request to execute the program code of a user (a “task”), which specifies one or more computing constraints for executing the program code of the user, the on-demand code execution system may select a virtual machine instance for executing the program code of the user based on the one or more computing constraints specified by the request and cause the program code of the user to be executed on the selected virtual machine instance. The program codes can be executed in isolated containers that are created on the virtual machine instances, or may be executed within a virtual machine instance isolated from other virtual machine instances acting as environments for other tasks. Since the virtual machine instances in the pool have already been booted and loaded with particular operating systems and language runtimes by the time the requests are received, the delay associated with finding compute capacity that can handle the requests (e.g., by executing the user code in one or more containers created on the virtual machine instances) can be significantly reduced. Because the number of different virtual machine instances that a host computing device may execute is limited by the computing resources of that host (and particularly by highly utilized resources such as CPU cycles and RAM), the number of virtual machine instances in a pool on the on-demand code execution system is similarly limited. Thus, in accordance with the embodiments of the present disclosure, the on-demand code execution system may generate execution environments for a large number of tasks (e.g., more environments than could be maintained as executing on the on-demand code execution system at a given point in time), and transition a subset (e.g., a majority) of those environments into lower tier memory storage, based on a next expected utilization of each environment. 
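The rotation described above, keeping imminently needed environments in primary memory and demoting the rest, can be sketched as a simple urgency sort. Capacities and timings are illustrative only.

```python
# Sketch of primary-memory rotation: keep the environments whose next
# predicted use is soonest, demote the rest to lower tier memory.

def plan_memory_placement(environments, primary_slots):
    """environments: list of (env_id, seconds_until_next_predicted_use).
    Returns (keep_in_primary, demote_to_secondary)."""
    by_urgency = sorted(environments, key=lambda e: e[1])
    keep = [env_id for env_id, _ in by_urgency[:primary_slots]]
    demote = [env_id for env_id, _ in by_urgency[primary_slots:]]
    return keep, demote

envs = [("a", 5), ("b", 3600), ("c", 30), ("d", 86400)]
keep, demote = plan_memory_placement(envs, primary_slots=2)
print(keep)    # ['a', 'c']
print(demote)  # ['b', 'd']
```

In practice the demoted environments would be assigned to specific lower tiers based on their individual predicted idle times, as discussed earlier, rather than treated uniformly.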
Thus, a primary memory of the on-demand code execution system can be expected to hold environments either being actively used or expected to be used in a very short amount of time. As environments within the primary memory become idle, the on-demand code execution system can transition the environments to secondary memory based on future expected use, and move into primary memory those environments which are next expected to be used. In this manner, the overall efficiency of primary memory within the on-demand code execution system is increased. As used herein, the term “virtual machine instance” is intended to refer to an execution of software or other executable code that emulates hardware to provide an environment or platform on which software may execute (an “execution environment”). Virtual machine instances are generally executed by hardware devices, which may differ from the physical hardware emulated by the virtual machine instance. For example, a virtual machine may emulate a first type of processor and memory while being executed on a second type of processor and memory. Thus, virtual machines can be utilized to execute software intended for a first execution environment (e.g., a first operating system) on a physical device that is executing a second execution environment (e.g., a second operating system). In some instances, hardware emulated by a virtual machine instance may be the same or similar to hardware of an underlying device. For example, a device with a first type of processor may implement a plurality of virtual machine instances, each emulating an instance of that first type of processor. Thus, virtual machine instances can be used to divide a device into a number of logical sub-devices (each referred to as a “virtual machine instance”). While virtual machine instances can generally provide a level of abstraction away from the hardware of an underlying physical device, this abstraction is not required. 
For example, assume a device implements a plurality of virtual machine instances, each of which emulates hardware identical to that provided by the device. Under such a scenario, each virtual machine instance may allow a software application to execute code on the underlying hardware without translation, while maintaining a logical separation between software applications running on other virtual machine instances. This process, which is generally referred to as “native execution,” may be utilized to increase the speed or performance of virtual machine instances. Other techniques that allow direct utilization of underlying hardware, such as hardware pass-through techniques, may be used as well. While a virtual machine executing an operating system is described herein as one example of an execution environment, other execution environments are also possible. For example, tasks or other processes may be executed within a software “container,” which provides a runtime environment without itself providing virtualization of hardware. Containers may be implemented within virtual machines to provide additional security, or may be run outside of a virtual machine instance. The foregoing aspects and many of the attendant advantages of this disclosure will become more readily appreciated as the same become better understood by reference to the following description, when taken in conjunction with the accompanying drawings. FIG.1is a block diagram of an illustrative operating environment100in which an on-demand code execution system110may operate based on communication with user computing devices102, auxiliary services106, and network-based data storage services108. By way of illustration, various example user computing devices102are shown in communication with the on-demand code execution system110, including a desktop computer, laptop, and a mobile phone.
In general, the user computing devices102can be any computing device such as a desktop, laptop or tablet computer, personal computer, wearable computer, server, personal digital assistant (PDA), hybrid PDA/mobile phone, mobile phone, electronic book reader, set-top box, voice command device, camera, digital media player, and the like. The on-demand code execution system110may provide the user computing devices102with one or more user interfaces, command-line interfaces (CLIs), application programming interfaces (APIs), and/or other programmatic interfaces for generating and uploading user-executable code (e.g., including metadata identifying dependency code objects for the uploaded code), invoking the user-provided code (e.g., submitting a request to execute the user codes on the on-demand code execution system110), scheduling event-based jobs or timed jobs, tracking the user-provided code, and/or viewing other logging or monitoring information related to their requests and/or user codes. Although one or more embodiments may be described herein as using a user interface, it should be appreciated that such embodiments may, additionally or alternatively, use any CLIs, APIs, or other programmatic interfaces. The illustrative environment100further includes one or more auxiliary services106, which can interact with the on-demand code execution system110to implement desired functionality on behalf of a user. Auxiliary services106can correspond to network-connected computing devices, such as servers, which generate data accessible to the on-demand code execution system110or otherwise communicate to the on-demand code execution system110.
For example, the auxiliary services106can include web services (e.g., associated with the user computing devices102, with the on-demand code execution system110, or with third parties), databases, really simple syndication (“RSS”) readers, social networking sites, or any other source of network-accessible service or data source. In some instances, auxiliary services106may be associated with the on-demand code execution system110, e.g., to provide billing or logging services to the on-demand code execution system110. In some instances, auxiliary services106actively transmit information, such as API calls or other task-triggering information, to the on-demand code execution system110. In other instances, auxiliary services106may be passive, such that data is made available for access by the on-demand code execution system110. For example, components of the on-demand code execution system110may periodically poll such passive data sources, and trigger execution of tasks within the on-demand code execution system110based on the data provided. While depicted inFIG.1as distinct from the user computing devices102and the on-demand code execution system110, in some embodiments, various auxiliary services106may be implemented by either the user computing devices102or the on-demand code execution system110. The illustrative environment100further includes one or more network-based data storage services108, configured to enable the on-demand code execution system110to store and retrieve data from one or more persistent or substantially persistent data sources. Illustratively, the network-based data storage services108may enable the on-demand code execution system110to store information corresponding to a task, such as code or metadata, to store additional code objects representing dependencies of tasks, to retrieve data to be processed during execution of a task, and to store information (e.g., results) regarding that execution. 
The network-based data storage services108may represent, for example, a relational or non-relational database. In another example, the network-based data storage services108may represent a network-attached storage (NAS), configured to provide access to data arranged as a file system. The network-based data storage services108may further enable the on-demand code execution system110to query for and retrieve information regarding data stored within the on-demand code execution system110, such as by querying for a number of relevant files or records, sizes of those files or records, file or record names, file or record creation times, etc. In some instances, the network-based data storage services108may provide additional functionality, such as the ability to separate data into logical groups (e.g., groups associated with individual accounts, etc.). While shown as distinct from the auxiliary services106, the network-based data storage services108may in some instances also represent a type of auxiliary service106. The user computing devices102, auxiliary services106, and network-based data storage services108may communicate with the on-demand code execution system110via a network104, which may include any wired network, wireless network, or combination thereof. For example, the network104may be a personal area network, local area network, wide area network, over-the-air broadcast network (e.g., for radio or television), cable network, satellite network, cellular telephone network, or combination thereof. As a further example, the network104may be a publicly accessible network of linked networks, possibly operated by various distinct parties, such as the Internet. In some embodiments, the network104may be a private or semi-private network, such as a corporate or university intranet. 
The network104may include one or more wireless networks, such as a Global System for Mobile Communications (GSM) network, a Code Division Multiple Access (CDMA) network, a Long Term Evolution (LTE) network, or any other type of wireless network. The network104can use protocols and components for communicating via the Internet or any of the other aforementioned types of networks. For example, the protocols used by the network104may include Hypertext Transfer Protocol (HTTP), HTTP Secure (HTTPS), Message Queue Telemetry Transport (MQTT), Constrained Application Protocol (CoAP), and the like. Protocols and components for communicating via the Internet or any of the other aforementioned types of communication networks are well known to those skilled in the art and, thus, are not described in more detail herein. The on-demand code execution system110is depicted inFIG.1as operating in a distributed computing environment including several computer systems that are interconnected using one or more computer networks (not shown inFIG.1). The on-demand code execution system110could also operate within a computing environment having a fewer or greater number of devices than are illustrated inFIG.1. Thus, the depiction of the on-demand code execution system110inFIG.1should be taken as illustrative and not limiting to the present disclosure. For example, the on-demand code execution system110or various constituents thereof could implement various Web services components, hosted or “cloud” computing environments, and/or peer to peer network configurations to implement at least a portion of the processes described herein. Further, the on-demand code execution system110may be implemented directly in hardware or software executed by hardware devices and may, for instance, include one or more physical or virtual servers implemented on physical computer hardware configured to execute computer executable instructions for performing various features that will be described herein. 
The one or more servers may be geographically dispersed or geographically co-located, for instance, in one or more data centers. In some instances, the one or more servers may operate as part of a system of rapidly provisioned and released computing resources, often referred to as a “cloud computing environment.” In the example ofFIG.1, the on-demand code execution system110is illustrated as connected to the network104. In some embodiments, any of the components within the on-demand code execution system110can communicate with other components of the on-demand code execution system110via the network104. In other embodiments, not all components of the on-demand code execution system110are capable of communicating with other components of the environment100. In one example, only the frontend120(which may in some instances represent multiple frontends120) may be connected to the network104, and other components of the on-demand code execution system110may communicate with other components of the environment100via the frontends120. InFIG.1, users, by way of user computing devices102, may interact with the on-demand code execution system110to provide executable code, and establish rules or logic defining when and how such code should be executed on the on-demand code execution system110, thus establishing a “task.” For example, a user may wish to run a piece of code in connection with a web or mobile application that the user has developed. One way of running the code would be to acquire virtual machine instances from service providers who provide infrastructure as a service, configure the virtual machine instances to suit the user's needs, and use the configured virtual machine instances to run the code. In order to avoid the complexity of this process, the user may alternatively provide the code to the on-demand code execution system110, and request that the on-demand code execution system110execute the code. 
The on-demand code execution system110can handle the acquisition and configuration of compute capacity (e.g., containers, instances, etc., which are described in greater detail below) based on the code execution request, and execute the code using the compute capacity. The on-demand code execution system110may automatically scale up and down based on the volume of code execution requests, thereby relieving the user from the burden of having to worry about over-utilization (e.g., acquiring too little computing resources and suffering performance issues) or under-utilization (e.g., acquiring more computing resources than necessary to run the codes, and thus overpaying). In accordance with embodiments of the present disclosure, and as described in more detail below, the on-demand code execution system110may configure the virtual machine instances with customized operating systems to execute the user's code more efficiently and reduce utilization of computing resources. To enable interaction with the on-demand code execution system110, the system110includes one or more frontends120. In an illustrative embodiment, the frontends120serve as a “front door” to the other services provided by the on-demand code execution system110, enabling users (via user computing devices102) to provide, request execution of, and view results of computer executable code. The frontends120include a variety of components to enable interaction between the on-demand code execution system110and other computing devices. For example, each frontend120may include a request interface providing user computing devices102with the ability to upload or otherwise communicate user-specified code to the on-demand code execution system110and to thereafter request execution of that code. In one embodiment, the request interface communicates with external computing devices (e.g., user computing devices102, auxiliary services106, etc.) 
via a graphical user interface (GUI), CLI, or API. The frontends120process the requests and make sure that the requests are properly authorized. For example, the frontends120may determine whether the user associated with the request is authorized to access the user code specified in the request. References to user code as used herein may refer to any program code (e.g., a program, routine, subroutine, thread, etc.) written in a specific program language. In the present disclosure, the terms “code,” “user code,” and “program code,” may be used interchangeably. Such user code may be executed to achieve a specific function, for example, in connection with a particular web application or mobile application developed by the user. As noted above, individual collections of user code (e.g., to achieve a specific function) are referred to herein as “tasks,” while specific executions of that code (including, e.g., compiling code, interpreting code, or otherwise making the code executable) are referred to as “task executions” or simply “executions.” Tasks may be written, by way of non-limiting example, in JavaScript (e.g., node.js), Java, Python, and/or Ruby (and/or another programming language). Tasks may be “triggered” for execution on the on-demand code execution system110in a variety of manners. In one embodiment, a user or other computing device may transmit a request to execute a task, which can generally be referred to as a “call” to execute the task. Such calls may include the user code (or the location thereof) to be executed and one or more arguments to be used for executing the user code. For example, a call may provide the user code of a task along with the request to execute the task. In another example, a call may identify a previously uploaded task by its name or an identifier. 
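The two call forms just described (inline code versus a reference to a previously uploaded task) might be sketched as payloads like the following; the field names here are illustrative assumptions, not the system's actual request format.

```python
# Hypothetical shapes for a task "call"; all field names are illustrative only.

# Form 1: the call carries the user code inline, plus arguments.
inline_call = {
    "code": "def handler(event):\n    return event['x'] * 2",
    "language": "python",
    "args": {"x": 21},
}

# Form 2: the call identifies a previously uploaded task by name or identifier.
reference_call = {
    "task_id": "task-1234",
    "args": {"x": 21},
}

def code_location(call):
    """Return where the task's code must be obtained from for this call."""
    return "inline" if "code" in call else "uploaded"
```

A frontend could use a check like `code_location` to vary its execution strategy based on where the code is available when the call is processed.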
In yet another example, code corresponding to a task may be included in a call for the task, as well as being uploaded in a separate location (e.g., storage of an auxiliary service106or a storage system internal to the on-demand code execution system110) prior to the request being received by the on-demand code execution system110. As noted above, the code for a task may reference additional code objects maintained at the on-demand code execution system110by use of identifiers of those code objects, such that the code objects are combined with the code of a task in an execution environment prior to execution of the task. The on-demand code execution system110may vary its execution strategy for a task based on where the code of the task is available at the time a call for the task is processed. A request interface of the frontend120may receive calls to execute tasks as Hypertext Transfer Protocol Secure (HTTPS) requests from a user. Also, any information (e.g., headers and parameters) included in the HTTPS request may also be processed and utilized when executing a task. As discussed above, any other protocols, including, for example, HTTP, MQTT, and CoAP, may be used to transfer the message containing a task call to the request interface122. A call to execute a task (which may also be referred to as a request to execute the task) may specify one or more third-party libraries (including native libraries) to be used along with the user code corresponding to the task. In one embodiment, the call may provide to the on-demand code execution system110a file containing the user code and any libraries (and/or identifications of storage locations thereof) corresponding to the task requested for execution. In some embodiments, the call includes metadata that indicates the program code of the task to be executed, the language in which the program code is written, the user associated with the call, and/or the computing resources (e.g., memory, etc.) 
to be reserved for executing the program code. For example, the program code of a task may be provided with the call, previously uploaded by the user, provided by the on-demand code execution system110(e.g., standard routines), and/or provided by third parties. Illustratively, code not included within a call or previously uploaded by the user may be referenced within metadata of the task by use of a URI associated with the code. In some embodiments, such resource-level constraints (e.g., how much memory is to be allocated for executing a particular user code) are specified for the particular task, and may not vary over each execution of the task. In such cases, the on-demand code execution system110may have access to such resource-level constraints before each individual call is received, and the individual call may not specify such resource-level constraints. In some embodiments, the call may specify other constraints such as permission data that indicates what kind of permissions or authorities the call invokes to execute the task. Such permission data may be used by the on-demand code execution system110to access private resources (e.g., on a private network). In some embodiments, individual code objects may also be associated with permissions or authorizations. For example, a third party may submit a code object and designate the object as readable by only a subset of users. The on-demand code execution system110may include functionality to enforce these permissions or authorizations with respect to code objects. In some embodiments, a call may specify the behavior that should be adopted for handling the call. In such embodiments, the call may include an indicator for enabling one or more execution modes in which to execute the task referenced in the call. 
For example, the call may include a flag or a header for indicating whether the task should be executed in a debug mode in which the debugging and/or logging output that may be generated in connection with the execution of the task is provided back to the user (e.g., via a console user interface). In such an example, the on-demand code execution system110may inspect the call and look for the flag or the header, and if it is present, the on-demand code execution system110may modify the behavior (e.g., logging facilities) of the container in which the task is executed, and cause the output data to be provided back to the user. In some embodiments, the behavior/mode indicators are added to the call by the user interface provided to the user by the on-demand code execution system110. Other features such as source code profiling, remote debugging, etc. may also be enabled or disabled based on the indication provided in a call. To manage requests for code execution, the frontend120can include an execution queue (not shown inFIG.1), which can maintain a record of requested task executions. Illustratively, the number of simultaneous task executions by the on-demand code execution system110is limited, and as such, new task executions initiated at the on-demand code execution system110(e.g., via an API call, via a call from an executed or executing task, etc.) may be placed on the execution queue124and processed, e.g., in a first-in-first-out order. In some embodiments, the on-demand code execution system110may include multiple execution queues, such as individual execution queues for each user account. For example, users of the on-demand code execution system110may desire to limit the rate of task executions on the on-demand code execution system110(e.g., for cost reasons). Thus, the on-demand code execution system110may utilize an account-specific execution queue to throttle the rate of simultaneous task executions by a specific user account. 
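The account-specific queueing and throttling described above can be sketched as follows; the class and its per-account cap are hypothetical illustrations, not the system's actual implementation.

```python
from collections import defaultdict, deque

class AccountExecutionQueues:
    """FIFO execution queues kept per user account, with a per-account
    cap on simultaneous task executions (a hypothetical throttling sketch)."""

    def __init__(self, max_concurrent_per_account):
        self.max_concurrent = max_concurrent_per_account
        self.queues = defaultdict(deque)   # account -> pending calls
        self.running = defaultdict(int)    # account -> in-flight count

    def submit(self, account, call):
        """Record a requested task execution for the account."""
        self.queues[account].append(call)

    def try_start(self, account):
        """Dequeue the oldest call if the account is under its cap."""
        if self.running[account] < self.max_concurrent and self.queues[account]:
            self.running[account] += 1
            return self.queues[account].popleft()
        return None

    def finish(self, account):
        """Mark one of the account's in-flight executions as complete."""
        self.running[account] -= 1
```

With a cap of one, a second call for the same account waits in the queue until the first execution finishes, throttling that account's rate of simultaneous executions.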
In some instances, the on-demand code execution system110may prioritize task executions, such that task executions of specific accounts or of specified priorities bypass or are prioritized within the execution queue. In other instances, the on-demand code execution system110may execute tasks immediately or substantially immediately after receiving a call for that task, and thus, the execution queue may be omitted. As noted above, tasks may be triggered for execution at the on-demand code execution system110based on explicit calls from user computing devices102(e.g., as received at the request interface). Alternatively or additionally, tasks may be triggered for execution at the on-demand code execution system110based on data retrieved from one or more auxiliary services106or network-based data storage services108. To facilitate interaction with auxiliary services106, the frontend120can include a polling interface (not shown inFIG.1), which operates to poll auxiliary services106or data storage services108for data. Illustratively, the polling interface may periodically transmit a request to one or more user-specified auxiliary services106or data storage services108to retrieve any newly available data (e.g., social network “posts,” news articles, files, records, etc.), and to determine whether that data corresponds to user-established criteria triggering execution of a task on the on-demand code execution system110. Illustratively, criteria for execution of a task may include, but are not limited to, whether new data is available at the auxiliary services106or data storage services108, the type or content of the data, or timing information corresponding to the data. In some instances, the auxiliary services106or data storage services108may function to notify the frontend120of the availability of new data, and thus the polling service may be unnecessary with respect to such services. 
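A single pass of such a polling interface might look like the sketch below, where the data source, criteria, and task trigger are all caller-supplied stand-ins for the services and interfaces described above.

```python
def poll_once(fetch_new_data, criteria, trigger_task):
    """One polling pass: retrieve newly available records from an auxiliary
    or data storage service and trigger a task execution for each record
    matching the user-established criteria. All three callables are
    illustrative stand-ins, not the system's actual interfaces."""
    triggered = []
    for record in fetch_new_data():
        if criteria(record):
            trigger_task(record)
            triggered.append(record)
    return triggered
```

In a real deployment this pass would run periodically against each user-specified service; services that push notifications instead would bypass it entirely, as the paragraph above notes.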
In addition to tasks executed based on explicit user calls and data from auxiliary services106, the on-demand code execution system110may in some instances operate to trigger execution of tasks independently. For example, the on-demand code execution system110may operate (based on instructions from a user) to trigger execution of a task at each of a number of specified time intervals (e.g., every 10 minutes). The frontend120can further include an output interface (not shown inFIG.1) configured to output information regarding the execution of tasks on the on-demand code execution system110. Illustratively, the output interface may transmit data regarding task executions (e.g., results of a task, errors related to the task execution, or details of the task execution, such as total time required to complete the execution, total data processed via the execution, etc.) to the user computing devices102or to auxiliary services106, which may include, for example, billing or logging services. The output interface may further enable transmission of data, such as service calls, to auxiliary services106. For example, the output interface may be utilized during execution of a task to transmit an API request to an external service106(e.g., to store data generated during execution of the task). In some embodiments, the on-demand code execution system110may include multiple frontends120. In such embodiments, a load balancer (not shown inFIG.1) may be provided to distribute the incoming calls to the multiple frontends120, for example, in a round-robin fashion. In some embodiments, the manner in which the load balancer distributes incoming calls to the multiple frontends120may be based on the location or state of other components of the on-demand code execution system110. For example, a load balancer may distribute calls to a geographically nearby frontend120, or to a frontend with capacity to service the call. 
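The round-robin distribution mentioned above can be illustrated with a minimal sketch; the class name is hypothetical, and a real load balancer would also weigh location and capacity as described.

```python
import itertools

class RoundRobinBalancer:
    """Distributes incoming calls across multiple frontends in a
    round-robin fashion (one simple strategy among those described)."""

    def __init__(self, frontends):
        # Cycle endlessly over the frontends in a fixed order.
        self._cycle = itertools.cycle(frontends)

    def route(self, call):
        """Assign the next frontend in rotation to this call."""
        frontend = next(self._cycle)
        return frontend, call
```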
In instances where each frontend120corresponds to an individual instance of another component of the on-demand code execution system, such as the active pool148described below, the load balancer may distribute calls according to the capacities or loads on those other components. Calls may in some instances be distributed between frontends120deterministically, such that a given call to execute a task will always (or almost always) be routed to the same frontend120. This may, for example, assist in maintaining an accurate execution record for a task, to ensure that the task executes only a desired number of times. For example, calls may be distributed to load balance between frontends120. Other distribution techniques, such as anycast routing, will be apparent to those of skill in the art. The on-demand code execution system further includes one or more worker managers140that manage the execution environments, such as virtual machine instances150(shown as VM instance150A and150B, generally referred to as a “VM”), used for servicing incoming calls to execute tasks, and that manage the memory states of execution environments. While the following will be described with reference to virtual machine instances150as examples of such environments, embodiments of the present disclosure may utilize other environments, such as software containers. In the example illustrated inFIG.1, each worker manager140manages an active pool148, which is a group (sometimes referred to as a pool) of virtual machine instances150executing on one or more physical host computing devices that are initialized to execute a given task (e.g., by having the code of the task and any dependency data objects loaded into the instance). The active pool148illustratively is implemented using primary memory (e.g., RAM) of host devices implementing or under control of the worker manager140. 
Although the virtual machine instances150are described here as being assigned to a particular task, in some embodiments, the instances may be assigned to a group of tasks, such that the instance is tied to the group of tasks and any tasks of the group can be executed within the instance. For example, the tasks in the same group may belong to the same security group (e.g., based on their security credentials) such that executing one task in a container on a particular instance150after another task has been executed in another container on the same instance does not pose security risks. As another example, the tasks of the group may share common dependencies, such that an environment used to execute one task of the group can be rapidly modified to support execution of another task within the group. Once a triggering event to execute a task has been successfully processed by a frontend120, the frontend120passes a request to a worker manager140to execute the task. In one embodiment, each frontend120may be associated with a corresponding worker manager140(e.g., a worker manager140co-located or geographically nearby to the frontend120) and thus, the frontend120may pass most or all requests to that worker manager140. In another embodiment, a frontend120may include a location selector configured to determine a worker manager140to which to pass the execution request. In one embodiment, the location selector may determine the worker manager140to receive a call based on hashing the call, and distributing the call to a worker manager140selected based on the hashed value (e.g., via a hash ring). Various other mechanisms for distributing calls between worker managers140will be apparent to one of skill in the art. Thereafter, the worker manager140may modify a virtual machine instance150(if necessary) and execute the code of the task within the instance150. 
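The hash-based location selection described above might be sketched as follows; a production hash ring would also tolerate worker managers joining and leaving, which this simplified modulo version does not.

```python
import hashlib

def select_worker(task_name, worker_managers):
    """Deterministically map a call to a worker manager by hashing an
    attribute of the call (here, the task's name), so the same task is
    (almost) always routed to the same worker manager. A simplified
    illustration: the digest is taken modulo the pool size rather than
    placed on a true hash ring."""
    digest = hashlib.sha256(task_name.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(worker_managers)
    return worker_managers[index]
```

Because the mapping depends only on the task name and the pool, repeated calls for the same task land on the same worker manager, which helps maintain an accurate execution record as noted above.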
As shown inFIG.1, respective instances150may have operating systems (OS)152(shown as OS152A and152B), language runtimes154(shown as runtime154A and154B), and user code156(shown as user code156A and156B). The OS152, runtime154, and user code156may collectively enable execution of the user code to implement the task. In some instances, each VM150may be associated with additional information, such as state information, maintained across individual executions of a task. For example, when initially created, a VM150may initialize the OS152, and each time the user code156is executed in the VM150, a state of the VM150may change. State of a VM150may be maintained, for example, within registers of a virtual CPU of the VM150, within RAM of the VM150, within a virtual disk drive of the VM150, or the like. In accordance with embodiments of the present disclosure, the worker manager140further includes a management unit142, configured to manage a state of the VMs150. Specifically, the management unit142may be configured to detect an idle VM150within the active pool148and to determine, based on an expected next use of that VM150, whether to transition the VM150to a secondary memory144. The secondary memories144can correspond to one or more lower tier memories, which are less rapid than primary memory, but which are illustratively greater in capacity. The secondary memories144can correspond, for example, to 3D XPOINT, flash memory, magnetic storage, or network-attached storage. Specifically, the management unit142can be configured to calculate a next expected use of a VM150(e.g., when a next request to execute a task within the VM150will be received), and to calculate both (i) a cost of maintaining the VM150within the active pool148until that next expected use and (ii) a cost of transitioning the VM150to one of the secondary memories144and back into the active pool148so that it is available at the time of the next expected use. 
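One way to sketch this two-cost calculation, assuming a deliberately simple linear cost model (fixed transfer costs plus per-second holding costs), neither of which is specified by the disclosure:

```python
def should_transition(idle_seconds_until_next_use,
                      keep_cost_per_second,
                      transition_out_cost,
                      transition_in_cost,
                      stored_cost_per_second):
    """Compare (i) the cost of keeping an idle VM in the active pool until
    its next expected use against (ii) the cost of moving it to secondary
    memory and back in time for that use. All parameters and the linear
    cost model are illustrative assumptions."""
    keep_cost = keep_cost_per_second * idle_seconds_until_next_use
    move_cost = (transition_out_cost + transition_in_cost
                 + stored_cost_per_second * idle_seconds_until_next_use)
    return move_cost < keep_cost
```

Intuitively, a long idle gap favors the cheap secondary memory despite the fixed transfer costs, while a short gap favors simply keeping the VM resident in primary memory.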
If the cost of transitioning the VM150into a secondary memory144is less than a cost of maintaining the VM150within the active pool148, the management unit142can transition the VM150into the secondary memory144. The management unit142can later transition the VM150back into the active pool148as required to service a next request to execute a task in the VM150. For example, the management unit142may time a transition of the VM150from the secondary memory144to the active pool148such that the transition completes at or just prior to an expected time of the request. In instances where a request is received before that expected time, the management unit142may transition the VM150earlier, such as immediately (e.g., on receipt of the request) or as space is available within the active pool148. To assist in transitioning of VMs150between active pool148and secondary memory144, the worker manager140ofFIG.1further includes a memory utilization monitor146, configured to monitor memory utilization of various VMs150on the worker manager140(e.g., on one or more host devices implementing the worker manager140). In one embodiment, the memory utilization monitor146monitors the memory utilization of a VM150within a primary memory of the worker manager140. In another embodiment, the memory utilization monitor146monitors an amount of difference (or “delta”) between a memory of a VM150in primary memory of the worker manager140and a representation of that VM150within secondary memory144. For example, in some embodiments, the worker manager140may maintain in secondary memory144a snapshot (or other data representation) of a VM150, even when that VM150is executing within the active pool148. Illustratively, the snapshot may have been previously utilized to generate the VM150within the active pool148. The memory state of the VM150within the active pool148and the snapshot of the VM150in the secondary memory144may include a substantial amount of information. 
For example, a VM150may be loaded with a runtime154which, on initialization, utilizes a given set of memory space (e.g., 100 megabytes). When executing user code156in the runtime154, some subset of that memory space may be modified based on execution of the user code (e.g., 10 megabytes). This memory utilized during execution of a task can represent the “working set” of memory of the task. Thus, if a snapshot of the VM150is taken just after initialization and prior to execution of user code156, the snapshot and a state of the VM150after execution of a task would be expected to overlap by 90 megabytes (the initial utilization minus the working set). Due to this overlap, a new snapshot of the VM150after execution of a task may be created in the secondary memory144based on transferring out the working set of the VM150within the active pool148, without requiring that all memory of the VM150be transferred from the active pool148. The remaining memory of the VM150may be identified by reference to the preexisting snapshot of the VM150. Such interdependent snapshots are generally referred to as “cascading snapshots.” Thus, the memory utilization monitor146in one configuration of the system110tracks the “delta” of data needed to transition a VM150from the active pool148to a secondary memory144. As noted above, the management unit142can be configured to determine when to transition a VM150to secondary memory144based on a next expected utilization of the VM150to execute a task. To facilitate determination of the next expected utilization, the system110further includes a call history data store164, which stores information regarding a history of calls to the system110to execute tasks. Illustratively, each time the frontend120receives a call to execute a task, the frontend120may update a record within the call history data store164(e.g., within a database) to reflect that call. 
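The working-set delta underlying such cascading snapshots can be illustrated with a toy model in which memory is a mapping of pages to contents; real snapshots operate on raw memory pages, not dictionaries.

```python
def snapshot_delta(base_snapshot, current_memory):
    """Pages (key -> contents) that differ from the pre-execution base
    snapshot: the task's "working set". Only this delta needs to be
    written to secondary memory; unchanged pages are recovered by
    reference to the preexisting (cascading) snapshot."""
    return {page: data
            for page, data in current_memory.items()
            if base_snapshot.get(page) != data}

def restore(base_snapshot, delta):
    """Reconstruct the VM's memory from the base snapshot plus the delta."""
    memory = dict(base_snapshot)
    memory.update(delta)
    return memory
```

Echoing the 100-megabyte example above: with 100 pages initialized and 10 pages dirtied by a task, only the 10-page delta must be transferred out, and the full state is recoverable from the base snapshot plus that delta.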
The management unit142may utilize the call history of a task to predict a next execution of that task or similar tasks. In some instances, the management unit142may generate statistical information regarding the call history of a task, such as a median or mean duration between calls, a standard deviation of that duration, an interquartile range of that duration, or the like. Such statistical information may also be stored within the call history data store164. The call history data store164may correspond to any persistent or substantially persistent data storage device, including (but not limited to) hard disk drives, solid state disk drives, network attached storage, etc., or any combination thereof. In addition, the management unit142may further be configured to facilitate graceful termination of stateful network connections of VM instances150in connection with transitioning of the VM instances150in memory state or location. Specifically, as will be described below, the management unit142may be configured to determine a set of stateful network connections of a VM instance150, such as by inspecting a connection table or data set maintained by a host device hosting the VM instance150. The management unit142may then transmit, to each “end” of each connection (e.g., the instance150and a device external to the instance, such as another instance150, an auxiliary service106, etc.) a termination signal indicating that the connection has terminated. Subsequent to reactivation of the instance150, a task executing on the instance150may then function to reestablish each required network connection, based on shared knowledge between the instance150and each external device that the respective connections have been terminated. In one embodiment, the management unit142notifies external devices of termination of each connection prior to or during deactivation of a VM instance150on a host device initially hosting the instance. 
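One plausible use of the statistical information described above is to project the median inter-call duration forward from the most recent call; this heuristic is an illustrative assumption, not the system's actual predictor.

```python
import statistics

def predict_next_call(call_times):
    """Estimate when the next call for a task will arrive from its call
    history: compute the durations between past calls and project the
    median duration forward from the most recent call."""
    gaps = [b - a for a, b in zip(call_times, call_times[1:])]
    return call_times[-1] + statistics.median(gaps)
```

A management unit could feed such an estimate into the keep-versus-transition cost comparison, and precompute the median, mean, or standard deviation into the call history data store rather than rescanning raw records on each decision.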
Notifying external devices of termination during or prior to deactivation may be beneficial, for example, in allowing external devices to close the network connections, rather than consider such connections “open” until a timeout event occurs. Moreover, when a VM instance150is transitioning locations (e.g., migrating from a first to a second host device), notifying external devices from the first host device may beneficially increase the likelihood that a termination signal is accepted by the external device as authentic, since the first host is the same physical computing device that has previously transmitted data to the external device on behalf of the VM instance150(and, for example, has been assigned the IP address previously used in communications with the external device). Conversely, the management unit142may be configured to notify the VM instance150of termination of each connection during re-activation of the VM instance150(e.g., as a final stage of migration to a second host, during re-inclusion in the active pool148, etc.). Illustratively, the VM instance150may be halted at a point at which it is executing code of a task, which code is configured on detection of a connection termination to attempt to reestablish the connection. Thus, if a termination signal were transmitted to the VM instance150prior to halting, a “race condition” might occur, where the VM instance150attempts to reestablish a network connection at the same time that the management unit142is attempting to transition the VM instance150. To avoid this, the management unit142may halt the VM instance150without notifying the instance150of any network connection termination. When the VM instance150is halted without receiving a termination of the connection, the code may assume that the connection is active. 
Thus, on reactivation of the VM instance150, the management unit142may notify the VM instance150that each connection has been terminated, thus causing the VM instance150to reestablish those connections as necessary. While illustrative timings for transmissions of termination signals are discussed above, these timings may be modified in embodiments of the present disclosure. For example, a management unit142may transmit termination signals to both external devices and a VM instance150prior to removing the instance150from an active pool148. In this embodiment (or additionally in the other embodiments disclosed herein), the management unit142may be configured to block communications between the VM instance150and external devices, to prevent the instance150or external device from reestablishing a network connection prior to transitioning the instance150. Additionally or alternatively, the management unit142may transmit termination signals to both external devices and a VM instance150after transitioning the instance150back into an active pool148. In some embodiments, the management unit142may be configured to transmit termination signals to each end of a stateful network connection of a VM instance150at least partly by masquerading as the opposing end of the connection. Specifically, the management unit142may execute to control operation of a hypervisor, “DOM 0,” or lower-level operating system of a host device hosting a VM instance150, which may necessarily (by virtue of hosting the instance150) act as a “middle man” between the VM instance150and each external device. Due to this position, the host device may generate packets for transmission to an external device that are similar or identical to those that an instance150would transmit to the external device. Similarly, the host device may generate packets for transmission to an instance150that are similar or identical to those that an external device would transmit to the instance150. 
The management unit142may thus control operation of a host device to “masquerade” as one party to a network connection when speaking with the other party, increasing the likelihood that the spoken-to party accepts a transmission as legitimate. While some functionalities are generally described herein with reference to an individual component of the on-demand code execution system110, other components or a combination of components may additionally or alternatively implement such functionalities. For example, while each worker manager140is depicted inFIG.1as including a management unit142, in some instances, one or more centralized state management units142may be provided which provide access to the above-described functionalities to multiple worker managers140. In instances where the worker manager140is implemented across multiple host devices, each host device may include a management unit142, or multiple devices may share access to a common management unit142(e.g., executed on a physically nearby device, such as a device on the same “rack” within a data center). In some instances, the call history data store164may be implemented locally within a worker manager140. For example, where each worker manager140is configured to handle a specific subset of tasks on the system110, each worker manager140may maintain a call history of that specific subset of tasks. Thus, the specific configuration of elements withinFIG.1is intended to be illustrative. FIG.2depicts a general architecture of a computing system implementing a worker manager140ofFIG.1. The general architecture of the worker manager140depicted inFIG.2includes an arrangement of computer hardware and software that may be used to implement aspects of the present disclosure. The hardware may be implemented on physical electronic devices, as discussed in greater detail below. The worker manager140may include many more (or fewer) elements than those shown inFIG.2.
It is not necessary, however, that all of these generally conventional elements be shown in order to provide an enabling disclosure. Additionally, the general architecture illustrated inFIG.2may be used to implement one or more of the other components illustrated inFIG.1. As illustrated, the worker manager140includes a processing unit290, a network interface292, a computer readable medium drive294, and an input/output device interface296, all of which may communicate with one another by way of a communication bus. The network interface292may provide connectivity to one or more networks or computing systems. The processing unit290may thus receive information and instructions from other computing systems or services via the network104. The processing unit290may also communicate to and from primary memory280and/or secondary memory298and further provide output information for an optional display (not shown) via the input/output device interface296. The input/output device interface296may also accept input from an optional input device (not shown). The primary memory280and/or secondary memory298may contain computer program instructions (grouped as units in some embodiments) that the processing unit290executes in order to implement one or more aspects of the present disclosure. These program instructions are shown inFIG.2as included within the primary memory280, but may additionally or alternatively be stored within secondary memory298. The primary memory280and secondary memory298correspond to one or more tiers of memory devices, including (but not limited to) RAM, 3D XPOINT memory, flash memory, magnetic storage, and the like. The primary memory280is assumed for the purposes of description to represent a main working memory of the worker manager140, with a higher speed but lower total capacity than secondary memory298. 
As noted above, the secondary memory298may include multiple tiers of memory, each lower tier representing a progressively lower speed but potentially higher capacity than a prior tier. The primary memory280illustratively includes the active pool148, which may store information regarding virtual machine instances that are actively executing on the worker manager140. While shown as part of primary memory280, use of the term “active pool” may in some instances also refer to a logical construct including the VMs150executing on a worker manager140. These VMs150within the active pool148may additionally utilize other resources of the worker manager140, such as the processing unit290, the network interface292, etc. Thus, inclusion of the “active pool148” within primary memory280is intended to visually represent a portion of the primary memory280utilized by VMs150within the active pool148, and not as an indication that such VMs150solely utilize the primary memory280. The primary memory280may further store an operating system284that provides computer program instructions for use by the processing unit290in the general administration and operation of the worker manager140. The memory280may further include computer program instructions and other information for implementing aspects of the present disclosure. For example, in one embodiment, the memory280includes a user interface unit282that generates user interfaces (and/or instructions therefor) for display upon a computing device, e.g., via a navigation and/or browsing interface such as a browser or application installed on the computing device. In addition, the memory280may include and/or communicate with one or more data repositories (not shown), for example, to access user program codes and/or libraries. In addition to and/or in combination with the user interface unit282, the memory280may include a virtual machine configuration unit162, management unit142, and memory utilization monitor146.
In one embodiment, the virtual machine configuration unit162, management unit142, and memory utilization monitor146individually or collectively implement various aspects of the present disclosure, e.g., generating virtual machine instances in which to execute code in response to requests for such execution, monitoring memory usage of such machines within the primary memory280, and selectively transitioning VMs150out of primary memory280and into secondary memory298based on a next expected use of the VM150to execute a task within the system110. The worker manager140ofFIG.2is one illustrative configuration of such a device, of which others are possible. For example, while shown as a single device, a worker manager140may in some embodiments be implemented as multiple physical host devices. Illustratively, a first device of such a worker manager140may implement “control plane” functions, such as receiving requests to execute tasks, instructing when to transition VMs150between memory states, and the like, while a second device may house the active pool148and implement “data plane” operations, such as executing code in response to instructions by the first device. In some embodiments, the worker manager140may further include components other than those illustrated inFIG.2. For example, the primary memory280may further include a container manager for managing creation, preparation, and configuration of containers within virtual machine instances. Further variations on functionalities of a worker manager140, any of which may be implemented by the worker manager140of the present disclosure, are described in more detail in U.S. Pat. No. 9,323,556, entitled “PROGRAMMATIC EVENT DETECTION AND MESSAGE GENERATION FOR REQUESTS TO EXECUTE PROGRAM CODE,” and filed Sep. 30, 2014 (the “'556 Patent”), the entirety of which is hereby incorporated by reference.
With reference toFIG.3, illustrative interactions are depicted for determining whether to transition the memory state of a VM150based on a next expected use of the VM150to execute a task. The interactions ofFIG.3begin at (1), where the management unit142detects an idle VM150. In the present description, an idle VM150is intended to refer to a VM150that is not actively servicing a request to execute a task, such as by executing code of that task. For ease of description, it will be assumed with reference toFIG.3that each VM150is dedicated to execution of a single task. However, the interactions ofFIG.3may be modified to include VMs150made available to execute multiple tasks (e.g., of the same user, utilizing the same libraries, etc.). Illustratively, the management unit142may detect an idle VM150by receiving an indication that the VM150has completed execution of a task, and determining that no outstanding requests for execution of that task exist on the worker manager140. Thereafter, at (2) and (3) (which interactions are described in sequence but may be implemented concurrently, in parallel, in the reverse order, etc.), the management unit142retrieves task call history information from the call history data store164, and VM memory usage information from the memory utilization monitor146, respectively. As noted above, the task call history information may include a record of requests to execute a task (or one or more similar tasks) and/or statistical information related to such a record. The VM memory usage information can include a current amount of primary memory utilized by a VM150, and/or a delta indicating how much information must be transitioned out of primary memory to transition the VM150's state. At (4), the management unit142determines a timing of a next expected call to execute a task within the VM150, based on the call history information.
In instances where the call history information shows variations in the durations between calls, the management unit142can use a statistical measurement to calculate an expected time until a next call to execute the task. In one embodiment, the expected time is calculated based on a probability that, if a next call conforms to the distribution of past calls, the next call will occur no earlier than a given point in time. Illustratively, a statistical analysis of the call history information may indicate that there is a 99% chance according to the historical distribution of calls that a next call occurs no earlier than 10 seconds from the current point in time, a 90% chance that the next call occurs no earlier than 30 seconds from the current point in time, a 50% chance that the next call occurs no earlier than 60 seconds from the current point in time, etc. As such, the worker manager140may be configured to utilize such a probability threshold to establish an expected timing of a next call. For example, the worker manager140may be configured to utilize an expected timing such that there is a predicted n % chance that the next call occurs no earlier than the expected timing. In some instances, the specific percentage may vary based on a user associated with the task executed within the VM150at issue. For example, specific users may require very low latency completion of requests, and thus may establish a high percentage value for calculating a next expected call. This can generally reduce the expected timing of a next call, and inhibit transferring of a VM150out of the active pool148. Conversely, other users may not require low latency completion of requests, and may thus establish a low percentage value for calculating a next expected call, leading to more aggressive transferring of a VM150to secondary memory144.
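One way to realize the n % threshold described above is as an order statistic over the historical gaps: choose the largest idle period T such that at least n % of past inter-call gaps were no shorter than T. The following is a hedged sketch with hypothetical names, not the disclosure's implementation:

```python
def expected_idle_period(gaps, confidence_pct):
    """Return T such that roughly confidence_pct% of historical
    inter-call gaps are >= T; i.e., a next call conforming to the
    historical distribution occurs no earlier than T with ~n% chance.
    """
    ordered = sorted(gaps)
    # Integer arithmetic avoids float rounding at percentile edges.
    index = (100 - confidence_pct) * len(ordered) // 100
    return ordered[min(index, len(ordered) - 1)]

gaps = [10, 30, 35, 60, 60, 65, 90, 120, 150, 300]  # seconds
expected_idle_period(gaps, 99)  # low-latency user: conservative estimate
expected_idle_period(gaps, 50)  # latency-tolerant user: aggressive estimate
```

A high percentage yields a short predicted idle period (discouraging a transition to secondary memory), while a low percentage yields a long one, matching the per-user tuning described in the text.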
In either instance, should a request to execute a task come before the next expected utilization, the worker manager140can transition the VM150out of secondary memory144at the time of the request. Thus, such requests may still be serviced, but may incur additional latency due to the need to transition the VM150out of secondary memory144at the time of the request. At (5), the management unit142calculates expected costs for maintaining the VM150within the active pool148, as well as for transitioning the VM150into a secondary memory144. Generally described, the cost of maintaining a VM150within the active pool can be calculated as a sum of the resources of the worker manager140to maintain the VM150within the active pool148for the period of time until a next expected call to execute a task within the VM150. For example, where an executing VM150within the active pool utilizes CPU cycles and RAM, the cost of maintaining the VM150within the active pool148can represent a combination of total CPU cycles expected to be utilized until the next expected call and a metric reflecting RAM usage over time (e.g., “megabyte-seconds”). The worker manager140can be configured to weight each metric according to a relative scarcity of the corresponding resource, and combine the metrics via a weighted linear equation. For example, where a worker manager140has excess CPU cycles but limited RAM, the metric of RAM usage over time may be weighted more highly than CPU cycles. A combination of these two metrics (potentially along with other metrics reflecting other resources used by an executing VM150) can represent the cost of maintaining the VM150within the active pool. Similarly, the cost of transitioning a VM150to secondary memory144can reflect a weighted combination of the resources needed to transition the VM150to the secondary memory144from the active pool, and later to transition the VM150back from the secondary memory144to the active pool to service a next expected request.
Such resources can include (but are not limited to) the amount of primary memory used over time to transition the VM150in or out of the primary pool, the amount of a communication bus to transition the VM150(e.g., the bandwidth over time, which may in some instances be represented as a total amount of data transferred over the bus), and the amount of secondary memory144used over time to store the VM150within the secondary memory144. Each resource may be weighted according to the relative scarcity of the resource within the worker manager140. The total expected use of each resource may further be based on the expected timing of each phase of a transition of a VM150into secondary memory144, including a transition-out phase (e.g., using resources of a primary memory, a secondary memory144, and a communication bus between the two to transition a VM150from the active pool148to the secondary memory144), an idle phase (e.g., using resources of the secondary memory144to maintain the VM150in the secondary memory144), and a transition-in phase (e.g., using resources of a primary memory, a secondary memory144, and a communication bus between the two to transition a VM150from the secondary memory144to the active pool148). The worker manager140can determine an expected timing of transition-in or transition-out phases based on a theoretical or observed timing of transfer of data between primary and secondary memories, as well as the amount of data needed to be transferred to transfer the VM150. For example, where 10 megabytes of data is required to be moved between a primary memory and a secondary memory144to transfer a VM, and a communication bus between those two memories has an expected speed of 1 gigabyte per second (GB/s) (which may be determined either based on specification of the bus or based on historical observation of bus speed with respect to VM data), the expected transfer-out time may be one one-hundredth of a second (10 MB divided by 1 GB/s).
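The weighted linear cost comparison and the transfer-time arithmetic above can be sketched as follows. The weights and resource metrics here are made-up placeholders, chosen only to illustrate the structure of the calculation, not values from the disclosure:

```python
def transfer_seconds(delta_mb, bus_mb_per_s):
    """Expected time to move a VM's state delta over the memory bus."""
    return delta_mb / bus_mb_per_s

def keep_active_cost(ram_mb, idle_s, cpu_cycles, w_ram=1.0, w_cpu=1e-9):
    """Weighted cost of leaving an idle VM in the active pool:
    RAM in megabyte-seconds plus (down-weighted) CPU cycles."""
    return w_ram * ram_mb * idle_s + w_cpu * cpu_cycles

def transition_cost(delta_mb, snapshot_mb, idle_s, bus_mb_per_s,
                    w_bus=0.5, w_store=0.1):
    """Weighted cost of the transition-out, idle, and transition-in
    phases of moving a VM to a secondary memory tier."""
    move_s = transfer_seconds(delta_mb, bus_mb_per_s)
    idle_phase_s = max(idle_s - 2 * move_s, 0.0)  # excludes both transfers
    return w_bus * 2 * delta_mb + w_store * snapshot_mb * idle_phase_s

# The example from the text: 10 MB moved over a 1 GB/s (1000 MB/s) bus
# takes one one-hundredth of a second.
transfer_seconds(10, 1000)
```

In practice the weights would track the relative scarcity of each resource on the worker manager, as the text describes for RAM-constrained hosts.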
As noted above, the total amount of data required to transition out a VM150can be reduced by maintaining a prior snapshot of a VM150within secondary memory144, and transitioning only a “delta” representing a difference of a currently executing version of that VM150. Such a technique is generally not suitable for reducing the amount of data required to transition a VM150back into the active pool148, as a goal of the current technique is to reduce resource usage of the active pool148. However, in many instances, it may not be necessary for all data of a VM150to be transitioned from secondary memory144to primary memory in order for the VM150to execute within the active pool148. Rather, only a minimum portion of that data (e.g., a state of CPU registers) may be transitioned initially in order to facilitate execution of the VM150. The remaining data may be transitioned “on read” of the data during execution of a task. For example, as a VM150attempts to read a portion of data in virtualized RAM of the VM150, that portion of data may be moved from secondary memory144to a primary memory of the worker manager140, and passed to the VM150. Using this technique, the cost of transitioning a VM150from secondary memory144to the active pool148may be reduced considerably. For example, embodiments of the present disclosure may assume, for the purposes of calculating a cost of transitioning a VM150into the active pool148, only the minimum resources needed to facilitate execution of the VM150. The worker manager140may further predict a timing of this “transfer-in” transition as a latest point, before the expected next request, at which the phase-in can occur such that at the time of the expected next request, the VM150is executing within the active pool148and ready to execute the task.
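The “on read” transition described above can be pictured as a lazy restore: only a minimal portion (e.g., CPU register state) is copied eagerly, and remaining data is faulted in from the snapshot on first access. The following toy model is purely illustrative (a real implementation would operate on memory pages via the hypervisor, not a dictionary):

```python
class LazyRestore:
    """Toy model of restoring a VM snapshot on an as-needed basis."""

    def __init__(self, snapshot_pages, eager_keys=("cpu_registers",)):
        self._snapshot = snapshot_pages  # stands in for secondary memory
        self._primary = {k: snapshot_pages[k] for k in eager_keys}
        self.pages_moved = len(self._primary)  # eager transfer count

    def read(self, key):
        if key not in self._primary:  # "fault": move the page on read
            self._primary[key] = self._snapshot[key]
            self.pages_moved += 1
        return self._primary[key]

snap = {"cpu_registers": b"regs", "heap_0": b"aa", "heap_1": b"bb"}
vm = LazyRestore(snap)   # only the minimum moved eagerly
vm.read("heap_0")        # first read pulls the page from the snapshot
```

This is why the transfer-in cost can be modeled as only the minimum resources needed to begin execution: the bulk of the data transfer is deferred and amortized over the task's actual reads.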
The resource cost during the “idle” phase of a transition to secondary memory144can be calculated as the amount of secondary memory144needed to maintain the VM150in an inactive state during the period until the next expected request, subtracting the time expected to be required for the transfer in and transfer out phases. This idle time cost can be weighted according to the resources used and combined with weighted representations of the transfer in and transfer out phase costs to result in a calculated cost of transitioning the VM150into secondary memory144. In the instance that the worker manager140includes multiple tiers of secondary memory144, the above-noted costs can be calculated for each available tier of secondary memory144. In general, the cost of resources of each lower tier of secondary memory144is expected to be lower (given the expected greater availability of such resources), but the time required to transition in and out a VM150from such memory is expected to be higher. Thus, larger, more frequently used VMs150may incur minimal costs when transitioned to higher tier memories144, while smaller, less frequently used VMs150may incur minimal costs when transitioned to lower tier memories144. At (6), the management unit142compares the previously calculated costs, and determines a desired memory state of a VM150based on a minimal calculated cost. Illustratively, where the cost of keeping a VM150idle within the active pool148is lower than the costs of transitioning the VM150to a secondary memory144, the management unit142can determine that the VM150should remain within the active pool148. For the purposes of description, it is assumed that at (6) the management unit142determines that a cost of transitioning the VM150to a secondary memory144is less than the cost of keeping the VM150idle within the active pool148. Thus, at (6), the management unit142determines that the VM should be transitioned out of the active pool148and into secondary memory.
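Interaction (6) then reduces to choosing the memory state with the minimal calculated cost across the available tiers. A short sketch, with hypothetical tier names and costs:

```python
def choose_memory_state(active_cost, tier_costs):
    """Pick the cheapest state for an idle VM.

    active_cost: cost of keeping the VM idle in the active pool.
    tier_costs: mapping of secondary-memory tier name -> transition cost.
    """
    best_tier, best_cost = min(tier_costs.items(), key=lambda kv: kv[1])
    if active_cost <= best_cost:
        return "active-pool"  # staying resident is cheapest
    return best_tier

state = choose_memory_state(
    active_cost=3000.0,
    tier_costs={"ram-disk": 1200.0, "ssd": 800.0, "hdd": 950.0},
)
```

Because each tier's cost already folds in transfer time and storage, a large, frequently used VM naturally lands in a faster tier and a small, rarely used VM in a slower one, matching the behavior described above.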
Illustrative interactions for transitioning a VM150from the active pool148to a secondary memory144A, and back into the active pool148to service an expected customer request, are depicted withinFIG.4. The interactions ofFIG.4may illustratively occur subsequent to those ofFIG.3, after the management unit142determines that a VM150should be transitioned from the active pool148to a secondary memory144. The interactions ofFIG.4begin at (1), where the management unit142transmits instructions to the active pool148to begin transitioning a VM150from the active pool148to the secondary memory144. In the context ofFIG.4, the transmission of instructions may correspond, for example, to software implementing the management unit142transmitting a request to a hypervisor or other “DOM0” or lower-level operating system to initiate snapshotting of the VM150. At (2), the active pool148transitions the VM150to secondary memory144. Transitioning of the VM150can include, for example, storing a snapshot of the state of the VM150within the secondary memory144. As noted above, the snapshot may be “cascading” and thus reference a prior snapshot of the VM150within the secondary memory144in order to minimize the data transfer required from a primary memory. Thereafter, the VM150can remain in secondary memory144until required (or expected to be required), reducing the overall computing resource usage of the worker manager140. Thereafter, at (3), the management unit142can detect or predict a call to execute a task within the VM150. Detecting such a call can for example correspond to receiving a request to execute the task, which request may have been transmitted by a user or system external to the on-demand code execution system110.
Predicting a call can for example correspond to detecting that the expected time of a next call (e.g., as used to determine whether to transition the VM150to secondary memory144, illustrative interactions for which are described above with reference toFIG.3) is imminently approaching, such that a transition-in phase for the VM150should begin to ensure that the VM150is ready to execute the task at the expected time of the next call. In response to detecting or predicting a next call, the management unit142at (4) instructs the active pool148to transition the VM150from the secondary memory144to the active pool148. In one embodiment, such instructions correspond to instructing a hypervisor or other host operating system to initialize the VM150based on a snapshot of the VM150within the secondary memory144. At (5), the active pool148transitions the VM150to the active pool148, placing the VM150in an executing state. Thus, the VM150can be rendered ready to receive a call to execute the task at the expected time of that call, enabling total resources used on the worker manager140to be reduced relative to maintaining the VM150in the active pool148in an idle state, while minimally or not impacting responsiveness of the VM150to requests to execute the task. With reference toFIG.5, one illustrative routine500for managing a memory state of an execution environment, such as a VM150, based on an expected next use of the environment to execute a task on the on-demand code execution system110will be described. The routine500may be carried out, for example, by the worker manager140ofFIG.1(e.g., by use of the management unit142). The routine500begins at block502, where the worker manager140detects an idle environment associated with a task. Detection of an idle environment can correspond, for example, to detecting that an execution of the task has completed, and that no additional requests to execute the task are pending.
The routine500then continues to block504, where the worker manager140predicts an idle period for the environment, corresponding to an amount of time until a next request to execute the task within the environment. As discussed above, the idle period may be predicted based on statistical analysis of call history information for a task. Illustratively, the idle period may be calculated from a past distribution of times between calls. For example, the idle period may be calculated such that there is an n % chance that a call is not received before the expiration of the idle period. In some instances, additional statistical analysis, such as a regression analysis, may be applied to determine an expected idle period based on other factors, such as a time-of-day. For example, call history may indicate that a given task is called frequently during business hours but infrequently during nighttime hours. A regression analysis of historical information can thus be used to determine a relationship between current time and a period between calls, which can be applied to a current time (e.g., a time of implementing the routine500) to determine an expected next call to execute a task. In some instances, such statistical analysis may not be required, and other information may be used to predict a time of a next request to execute a task. For example, where the system110is configured to execute a task with a set periodicity (e.g., every minute), the idle period can be calculated from that periodicity, without requiring statistical analysis of historical calls. The routine500continues to block506, where the worker manager140determines an expected resource cost of maintaining the environment in an executing state within primary memory. As noted above, the resource cost may be based on the “resource time” of each resource used to maintain the environment in an idle state (e.g., as a time-factored metric).
For example, the RAM usage of an environment may be measured in “megabyte-seconds,” or the amount of megabytes used to maintain the environment in an idle state multiplied by the seconds during which the megabytes would be used (e.g., assuming the predicted idle period is correct). As another example, CPU usage may be measured in CPU cycles, bandwidth may be measured in total data transferred, etc. In one embodiment, each resource time is weighted according to the value or scarcity of the resource and used to calculate an overall (e.g., resource agnostic) cost for maintaining the environment in an idle state. Similarly, at block508, the worker manager140determines an expected resource cost for transitioning the environment to secondary memory during an idle period, and back into primary memory at a time of a request to utilize the environment, or just prior to that request being expected. As with the cost of block506, the transition cost may be calculated as a weighted combination of resource time for each resource used to transition the environment. Illustratively, the transition cost accounts for resources used to transfer out an environment from primary memory, maintain the environment in secondary memory, and transfer in the environment back into primary memory to execute code within the environment. Block508may be repeated for each potential secondary memory. As noted above, because the speed of each tier of secondary memory is expected to vary, the costs of transferring an environment to each tier are also expected to vary. However, these costs may not vary with any particular aspect of an environment, but rather with a combination of numerous factors, including the amount of data needed to be transferred for an environment (e.g., the “working set”) and the expected time until next use of that environment.
At block510of the routine500, the worker manager140conducts a comparison of the costs calculated in the above-noted blocks506and508, and determines whether the cost to transition the environment to a secondary memory is less than the cost of maintaining the environment idle within primary memory. If block510evaluates to false, the routine500proceeds to block518and ends. This scenario illustratively corresponds to an instance in which a next call to utilize the environment is expected to come relatively quickly, and/or where the cost of transitioning the environment to a secondary memory is very high (e.g., the environment has a large working set of memory). In such an instance, maintaining the environment within primary memory may represent the most efficient use of resources on the system110. Alternatively, if block510evaluates to true, the routine500continues to block512, where the worker manager140transitions the environment to the secondary memory associated with a lowest expected cost. Where the environment is a virtual machine, such a transition can include snapshotting the environment and storing the snapshot in the secondary memory. Where the environment is a software container or other software construct, such a transition can include checkpointing or “freezing” the environment and storing the checkpoint within the secondary memory. Thereafter, at block514, the worker manager140detects or predicts a call to execute the task associated with the environment. Detecting the call can correspond to receiving an instruction or request to execute the task. Predicting a call can correspond to determining that the predicted idle period of the environment (e.g., as determined at block504) is nearing completion, such that a transition of the environment into primary memory should begin to enable the environment to be executing at the time when the predicted idle period ends. 
Thus, at block516, the worker manager140transitions the environment back into primary memory, and sets the environment to executing within the active pool148. Where the environment is a virtual machine, transitioning the environment may include recreating the virtual machine from the snapshot. Where the environment is a software container or other construct, transitioning the environment may include restoring the container or construct from a checkpoint. In either instance, transitioning the environment may rely on functionality to transition data of the environment on an as-needed basis, minimizing the amount of data that must be transmitted between the secondary memory and the primary memory to place the environment in a working state. The environment can then facilitate execution of a task within the environment. Thus, by implementation of the routine500, the overall computing resource usage of the system110with respect to an environment during an idle period can be reduced, while maintaining the ability of the system110to rapidly service requests to execute code within the environment. Indeed, where a subsequent request to utilize an environment is received at or after the expected next use of the environment, the total time of the system110to begin execution of a task is expected to be unchanged. However, because at least a portion of the idle time of the environment is spent stored in secondary memory, primary memory of the system110can be freed during that idle time to facilitate other executions of tasks. Thus, the overall capacity of the system110to execute tasks can be increased. With reference toFIGS.6A and6B, illustrative interactions will be described for gracefully recovering network connections after transitioning of a VM instance150between memory states or host devices.
Specifically,FIG.6Adepicts illustrative interactions undertaken on a first worker manager140A, representing at least one first host device, to notify external devices that each stateful network connection to a VM instance150A is terminated.FIG.6Bdepicts illustrative interactions to migrate the VM instance150A to a second worker manager140B representing at least a second host device, and to notify the VM instance150A that each stateful network connection has been terminated, enabling the VM instance150A to reestablish such stateful network connections. While the interactions ofFIGS.6A and6Bare depicted as occurring in conjunction with migration of a VM instance150to a different host device, similar interactions could be undertaken on a single host device to facilitate transition of the VM instance150between memories (e.g., to facilitate halting the VM instance150for a substantial period of time likely to cause timeouts). The interactions ofFIG.6Abegin at (1), where the management unit142determines that the VM instance150A is to be transitioned. In some instances, the management unit142may determine that such a transition is to occur based on the interactions described above (e.g., based on determining that a next expected use of the instance150exceeds a given value). In other instances, the management unit142may determine that such a transition is to occur based on receiving instructions from another device or component. For example, the management unit142may receive instructions from an administrator or control plane device that the instance150should be migrated to another worker manager140. At (2), the management unit142retrieves from a connection state store602of the worker manager140A information identifying current stateful network connections of the VM instance150A.
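The connection state store602consulted at interaction (2) can be pictured as a table of stateful connections keyed by their endpoints, as a hypervisor or virtual switch might maintain to route traffic to guest VMs. The record layout below is an illustrative assumption, not the actual store format.

```python
# Illustrative stand-in for the connection state store: a table of
# stateful connections, queried for the connections belonging to one VM
# without consulting the VM itself. The schema is assumed for
# illustration only.

from collections import namedtuple

Conn = namedtuple("Conn", "proto src_ip src_port dst_ip dst_port seq")

STATE_TABLE = [
    Conn("tcp", "10.0.0.5", 40001, "203.0.113.9", 443, 1001),
    Conn("tcp", "10.0.0.5", 40002, "198.51.100.2", 5432, 2002),
    Conn("tcp", "10.0.0.7", 40003, "203.0.113.9", 443, 3003),
]

def open_connections(state_table, vm_ip):
    """Return the stateful connections of one VM, independent of any
    communication with the VM (interaction (2) of FIG. 6A)."""
    return [c for c in state_table if c.src_ip == vm_ip]

for conn in open_connections(STATE_TABLE, "10.0.0.5"):
    print(conn.dst_ip, conn.dst_port)
```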
The connection state store602may be implemented within memory of the worker manager140A (e.g., a primary memory and/or secondary memory144), and reflect network information used by the worker manager140A to handle network traffic to and from the VM instance150A. Illustratively, the connection state store602may reflect a “state table” or other table maintained by a hypervisor, “DOM-0,” lower-level operating system, virtual switch, or the like, which reflects how traffic is routed from a physical network connection of a host device to various execution environments on the host device. Because the host device maintains that physical network connection, it may need to operate as a middle-man between each VM instance150and a physical network, thus enabling the host device to maintain knowledge of network connections of each VM instance150to external devices (e.g., external to the VM instance150). Thus, by inquiry to the connection state store602, the management unit142can receive information identifying open connections of the VM instance150A. In some embodiments, this information may be received independent of communications with the VM instance150A. Thereafter, at (3), the management unit142transmits, for each open connection of the VM instance150A, a termination signal to the other party of the connection. Illustratively, if the VM instance150A has an active TCP connection to a given auxiliary service106, the management unit142may transmit a termination signal (e.g., a TCP reset packet) to the auxiliary service106. Illustratively, the management unit142may instruct a hypervisor or other low-level operating system to transmit the signal, or may generate and transmit the signal itself. In one embodiment, the management unit142“masquerades” as the VM instance150A in order to send the termination signal, such as by altering aspects of the termination signal to match what would be expected from such a signal if generated at the VM instance150A.
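A "masqueraded" termination signal of the kind described above amounts to a TCP segment with the RST flag set, carrying the VM's source port and the connection's current sequence number. The sketch below builds such a 20-byte TCP header with Python's standard `struct` module; it is a simplified illustration (the IP header and TCP checksum are omitted, and actually emitting such a packet would require a raw socket and elevated privileges).

```python
# Minimal sketch of a masqueraded termination signal: a raw TCP header
# with only the RST flag set and the connection's current sequence
# number, so the external peer treats it as coming from the VM.
# Checksum and IP header are omitted for brevity.

import struct

def build_tcp_rst(src_port, dst_port, seq):
    """Return a 20-byte TCP header with only the RST flag set."""
    offset_flags = (5 << 12) | 0x004          # data offset 5 words; RST bit
    return struct.pack("!HHIIHHHH",
                       src_port, dst_port,
                       seq,                    # masqueraded sequence number
                       0,                      # ack number (unused for RST)
                       offset_flags,
                       0,                      # window
                       0,                      # checksum (left zero here)
                       0)                      # urgent pointer

hdr = build_tcp_rst(40001, 443, seq=1001)
print(len(hdr))                                # 20-byte TCP header
print(struct.unpack("!H", hdr[12:14])[0] & 0x1FF == 0x004)  # only RST set
```

Matching the peer's expected sequence number matters in practice: a reset whose sequence number falls outside the receive window is ignored by a conforming TCP implementation, which is why the connection state store's current sequence number is used.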
Illustratively, the management unit142may alter a source network address to match an address of the VM instance150A, or modify a sequence number to match a current sequence number for the connection (e.g., as indicated in the connection state store602). Thus, each other party to a stateful network connection of the VM instance150A will understand that such connection has terminated. In one embodiment, the interactions ofFIG.6Amay occur without notification to the VM instance150A. Thus, from the point of view of the VM instance150A, each network connection will be understood to remain open. Beneficially, this may prevent the VM instance150A from attempting to reestablish the connections. In some instances, the management unit142may be configured to prevent the VM instance150A from receiving further communications from an external device after sending a termination signal to the external device, to prevent the external device from prematurely reestablishing the network connection. Thus, via the interactions ofFIG.6A, each external device with a stateful network connection to a VM instance150A may be notified that such connection has terminated. The interactions ofFIG.6Aare continued inFIG.6B, which depicts illustrative interactions for migrating a VM instance150A to a worker manager140B, and for notifying the VM instance150A that stateful network connections to external devices have been terminated, in order to enable the instance150A to gracefully recover those connections. The interactions ofFIG.6Billustratively occur subsequently to those ofFIG.6A, and the numbering ofFIG.6Ais therefore continued inFIG.6B. However, the interactions of these figures may in some embodiments be implemented separately. With reference toFIG.6B, at (4), the VM instance150A is migrated from the worker manager140A to the worker manager140B. A number of mechanisms for virtual machine migration are known in the art, and therefore will not be discussed in detail herein. 
Any such suitable migration may be utilized in accordance with embodiments of the present disclosure. Additionally or alternatively, migration of a VM instance150A may include transfer of a representation of the VM instance150A, such as a snapshot, from a memory of the worker manager140A to a memory of the worker manager140B. In some instances, transfer of such representation may include transfer of a “delta” between a snapshot of the VM instance150A on the worker manager140A and a related snapshot maintained in a memory of worker manager140B. As noted above, various snapshots in the on-demand code execution system110may be cascading or interrelated. Thus, where the VM instance150A was generated based on a snapshot also maintained at the worker manager140B (or related to a snapshot maintained at the worker manager140B), migration of the VM instance150A may require transfer to the worker manager140B of only the differences between the snapshot of the VM instance150A on the worker manager140A and a related snapshot on the worker manager140B. In some instances, related snapshots (e.g., representing “ancestor” states of various VM instances150) may be distributed among worker managers140in the system110to facilitate later rapid migration of VM instances150, if required. In addition to information representing the VM instance150A, the worker manager140A transfers to the worker manager140B, at (4), information representing stateful network connections understood by the VM instance150A to have been maintained at the time that the instance150A was halted. The information may correspond, for example, to that retrieved from the connection state store602at interaction (2) ofFIG.6A, above. Thereafter, at (5), the management unit142determines that the VM instance150A should be placed into the active pool148B on the worker manager140B.
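The "delta" transfer between related snapshots described above can be sketched at the level of memory pages: if the destination already holds an ancestor snapshot, only pages that differ need to cross the network. Dict-of-pages snapshots are an illustrative stand-in for real snapshot formats, and the sketch ignores page removal.

```python
# Sketch of delta-based snapshot transfer: compute the pages in the
# current snapshot that differ from a shared ancestor snapshot, transfer
# only those, and reconstruct the current snapshot at the destination.
# Page-keyed dicts are an illustrative simplification.

def snapshot_delta(ancestor, current):
    """Pages present or changed in `current` relative to `ancestor`."""
    return {page: data for page, data in current.items()
            if ancestor.get(page) != data}

def apply_delta(ancestor, delta):
    """Rebuild the current snapshot from the shared ancestor + delta."""
    restored = dict(ancestor)
    restored.update(delta)
    return restored

ancestor = {0: b"base", 1: b"libs", 2: b"heap-v1"}
current  = {0: b"base", 1: b"libs", 2: b"heap-v2", 3: b"stack"}

delta = snapshot_delta(ancestor, current)
print(sorted(delta))   # only the changed and new pages are transferred
```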
The decision to place the VM instance150into the active pool148B may be based, for example, on a predicted request to execute a task within the instance150A. As noted above, the VM instance150A is assumed to have been halted at a point where it understands one or more stateful network connections to be active. However, migration of the VM instance150A may impair such connections, for example because an external device understands a timeout to have occurred on the connection, or because a change in location of the VM instance150A has changed a parameter of the connection (such as a source IP address). To reduce the likelihood of errors occurring due to the VM instance150A attempting to use impaired stateful connections, the management unit142, at (6), transmits to the VM instance150A termination signals for each stateful network connection that the VM instance150A understands to be active (e.g., as indicated in the connection state information received from the worker manager140A). Transmission may include, for example, causing a host device for the VM instance150A to generate and transmit termination signals to the VM instance150A. A termination signal may correspond, for example, to a TCP reset packet. In one embodiment, the termination signals may be generated so as to appear to originate from the external devices to which the network connections were made. For example, the management unit142may modify a source IP address of each termination signal to match an IP address of the relevant external device, and may include within the termination signal a next sequence number for the connection. Thus, the VM instance150A can be expected to process the termination signal as if it originated from the external device. The termination signals may illustratively be transmitted during initialization of the VM instance150A, before a task execution begins. Alternatively, termination signals may be transmitted after execution of code begins.
Prior to transmission of termination signals, the management unit142may in some instances inhibit communication between the VM instance150A and external devices to which the network connections were made, to prevent errors related to attempted use of such connections. Because both the VM instance150A and the relevant external devices have received termination signals related to stateful network connections, these endpoints can be expected to have shared, synchronized knowledge of the state of those network connections. As such, at (7), the VM instance150A can gracefully reestablish network connections required for further processing on the VM instance150A. Illustratively, the VM instance150A may be configured to execute code that detects termination of stateful network connections, and attempts to reestablish those connections. As such, on reception of the termination signals from the management unit142, the VM instance150A can communicate with the relevant external device (e.g., auxiliary services106) to reestablish stateful network connections. While the interactions ofFIGS.6A and6Bdepict notifying both parties to a stateful network connection of termination of that connection, some embodiments of the present disclosure may notify only one such party. For example, where a VM instance150is the initiator of such a connection, it may be sufficient to notify only the VM instance150that the connection has terminated to cause the VM instance150to reestablish the connection. However, notification of the external device as well may in some instances mitigate problems such as an abundance of open connections at the external device, which might otherwise prohibit reestablishment of a connection to the VM instance150at a later time or from another location. One illustrative routine700for ending stateful connections of a VM instance150in connection with transitioning of the VM instance150is depicted inFIG.7.
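The guest-side behavior described above, code that treats a connection reset as a signal to reestablish the connection rather than as a fatal error, can be sketched as a small retry wrapper. The function name and retry policy are illustrative assumptions.

```python
# Sketch of graceful reestablishment inside the VM: on a connection
# reset (e.g., one provoked by a migration), reconnect and retry instead
# of failing. Names and the retry policy are illustrative.

def run_with_reconnect(connect, use, max_attempts=3):
    """Call `use(conn)` with a connection from `connect()`, reconnecting
    whenever the connection has been reset."""
    for attempt in range(max_attempts):
        conn = connect()
        try:
            return use(conn)
        except ConnectionResetError:
            continue  # connection was torn down; reestablish and retry
    raise RuntimeError("could not reestablish connection")

# Simulate a connection that is reset once (as during a VM transition)
# and then works after reestablishment.
state = {"resets_left": 1}

def fake_connect():
    return state

def fake_use(conn):
    if conn["resets_left"] > 0:
        conn["resets_left"] -= 1
        raise ConnectionResetError
    return "ok"

print(run_with_reconnect(fake_connect, fake_use))   # "ok"
```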
The routine700may be implemented in one or more contexts, identified inFIG.7as Context A and Context B. In one embodiment, Context A and B correspond to different host devices. Thus, routine700may be implemented when migrating the VM instance150. In another embodiment, Context A and B are the same host device. Thus, routine700may be implemented when transitioning the VM instance150between memory states (e.g., when the VM instance150is expected or could potentially be inactive for a period of time that causes stateful network connections to be broken). The routine700may be carried out, for example, by one or more management units142(e.g., by controlling a host device hosting a VM instance150). Illustratively, a management unit142of a first host device representing Context A may implement blocks702through706of the routine700, while a management unit142of a second host device representing Context B may implement blocks710through714of the routine. The management units142of each context may cooperate to implement block708, which as discussed below represents transition of the VM instance150between contexts. Where Context A and B are the same context, one or more management units142of that context may implement the routine700in its entirety. The routine700begins at block702, where the management unit142determines that the VM instance150A is to be transitioned. In some instances, the management unit142may determine that such a transition is to occur based on the interactions described above (e.g., based on determining that a next expected use of the instance150exceeds a given value). In other instances, the management unit142may determine that such a transition is to occur based on receiving instructions from another device or component. For example, the management unit142may receive instructions from an administrator or control plane device that the instance150should be migrated to another worker manager140.
At block704, the management unit142identifies stateful network connections maintained by the VM instance150. The stateful network connections may be determined, for example, by interrogation of a host device hosting the VM instance150, which may maintain information related to open network connections of the VM instance150in order to facilitate routing of network data packets to the VM instance150. At block706, the management unit142, for each open network connection, transmits to an external device that is party to the connection a termination signal. The termination signal may illustratively correspond to a TCP reset data packet (e.g., a TCP formatted packet with a reset bit flag set to a true value). In one embodiment, the management unit142“masquerades” as the VM instance150in order to send the termination signal, such as by altering aspects of the termination signal to match what would be expected from such a signal if generated at the VM instance150. Illustratively, the management unit142may alter a source network address to match an address of the VM instance150, or modify a sequence number to match a current sequence number for the connection (e.g., as indicated in the connection state store602). Thus, the other parties to the stateful network connections of the VM instance150will understand that such connections have terminated. At block708, the management unit142transitions the VM instance150from Context A to Context B. In one embodiment, implementation of block708may include migration of the VM instance150from a first host device to a second host device. In another embodiment, implementation of block708may include transitioning the VM instance150from an active state to an inactive state, and back to an active state at a later point in time (e.g., to save resources associated with maintaining the VM instance150in an active state). At block712, the management unit142again determines the network connections that the VM instance150understands to be open.
Where Context A and Context B are the same host device, block712may be duplicative and thus omitted. Where Context A and Context B are different host devices, block712may include a management unit142of a host device corresponding to Context B receiving a listing of open connections from a management unit142of a host device corresponding to Context A. At block714, the management unit142transmits to the VM instance150termination signals for each stateful network connection that the VM instance150understands to be open. In one embodiment, the management unit142may generate the termination signals so as to appear to originate from the external devices to which the network connections were made. For example, the management unit142may modify a source IP address of each termination signal to match an IP address of the relevant external device, and may include within the termination signal a next sequence number for the connection. Thus, the VM instance150can be expected to process the termination signal as if it originated from the external device. The routine700then ends at block716. As the VM instance150may be assumed to be executing code that operates to reestablish network connections on reception of a termination signal for those connections (as necessary), implementation of the routine700can thus provoke the VM instance150into reestablishing those connections. Moreover, because the routine700notifies both an external device and the VM instance150of termination of network connections, the parties to the connection are unlikely to attempt to utilize the connection during transition of the VM instance150, which may be beneficial as such use may result in errors. As will be appreciated by one skilled in the art, the routine700may in some embodiments be implemented by a host device hosting a VM instance150, without requiring control of VM instance150or external devices to which VM instances150communicate.
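The shape of routine700can be condensed into a short sketch: resets are sent to each external peer while masquerading as the VM (Context A), the VM is transitioned, and mirrored resets are then sent to the VM while masquerading as each peer (Context B). The data model and function names below are illustrative assumptions, not the routine itself.

```python
# Condensed sketch of routine 700. `send_reset(to, appear_from, seq)` is
# a hypothetical primitive standing in for the masqueraded TCP reset
# transmission described in the text.

def routine_700(vm_addr, connections, send_reset):
    # Context A (blocks 704-706): masquerade as the VM toward each peer.
    for conn in connections:
        send_reset(to=conn["peer"], appear_from=vm_addr, seq=conn["seq"])
    # Block 708: transition the VM between contexts (migration or a
    # change of memory state); not modeled here.
    # Context B (blocks 712-714): masquerade as each peer toward the VM,
    # so the VM also learns the connections are gone and can reestablish.
    for conn in connections:
        send_reset(to=vm_addr, appear_from=conn["peer"], seq=conn["seq"])

sent = []
routine_700("10.0.0.5",
            [{"peer": "203.0.113.9", "seq": 1001}],
            lambda to, appear_from, seq: sent.append((to, appear_from)))
print(sent)   # reset to the peer "from" the VM, then to the VM "from" the peer
```

Because both endpoints receive a reset, neither side attempts to use the stale connection mid-transition, which is the error the routine is designed to avoid.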
Such operation may be desirable to reduce or eliminate the need to customize code executing on VM instances150or external devices. All of the methods and processes described above may be embodied in, and fully automated via, software code modules executed by one or more computers or processors. The code modules may be stored in any type of non-transitory computer-readable medium or other computer storage device. Some or all of the methods may alternatively be embodied in specialized computer hardware. Conditional language such as, among others, “can,” “could,” “might” or “may,” unless specifically stated otherwise, is otherwise understood within the context as used in general to present that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. Disjunctive language such as the phrase “at least one of X, Y or Z,” unless specifically stated otherwise, is otherwise understood within the context as used in general to present that an item, term, etc., may be either X, Y or Z, or any combination thereof (e.g., X, Y and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y or at least one of Z to each be present. Unless otherwise explicitly stated, articles such as ‘a’ or ‘an’ should generally be interpreted to include one or more described items. Accordingly, phrases such as “a device configured to” are intended to include one or more recited devices.
Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, “a processor configured to carry out recitations A, B and C” can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C. Any routine descriptions, elements or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or elements in the routine. Alternate implementations are included within the scope of the embodiments described herein in which elements or functions may be deleted, or executed out of order from that shown or discussed, including substantially synchronously or in reverse order, depending on the functionality involved as would be understood by those skilled in the art. It should be emphasized that many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
11943094
DETAILED DESCRIPTION Referring toFIG.1A, data networks, such as those used by large corporate enterprises and services providers, are modeled as eight network layers to describe the methods and systems of the present invention described herein. Within this framework, Layer 1 is the physical layer120consisting of basic networking hardware and data transmission infrastructure. Networking hardware may be deployed within a network across a plurality of physical sites, including but not limited to a data center128, such as a distributed cloud computing network, and branch152locations that are connected to the data center128. Data center128type as used herein includes, but is not limited to, a private cloud (e.g., Amazon or Google), a scientific community (e.g., a national laboratory), co-location centers, in-house data centers, or some other type of data center. Data centers128may be classified as Tier I, Tier II, Tier III and Tier IV. Branches152may connect to data center128through physical or logical connectivity150. Physical connectivity148may include, but is not limited to, a T1 link, such as that from a branch152to multiprotocol label switching (MPLS144), or connectivity through a cable, or connectivity through DSL, or connectivity through wireless and cellular networks. Logical connectivity150may include, but is not limited to, a private wide area network (WAN) or a virtual private network (VPN). Connectivity may also be a hybrid, for example, a physical connectivity148, through a T1 link, to the Internet142that includes elements of logical constructs. In an example, hardware, such as a multi-tenant controller122that controls switches, routers162, hub devices, spoke devices, configurable devices124, or some other hardware type may be deployed within a data center128. A multi-tenant controller122as the term is used herein may refer to a multi-tenant, single-tenant, and/or plurality of associated single-tenant or multi-tenant controllers. 
A spoke device as the term is used herein may include, but is not limited to a branch device, each of which may be a configurable device124, as described herein. A hub device as the term is used herein may refer to a hub device that is routed, in-path, distributed and load balanced, and/or distributed, load-balanced and routed. Switch as used herein includes, but is not limited to, a managed switch (e.g., a smart switch or enterprise-managed switch), an unmanaged switch, a stackable switch, or some other switch type. Device as the term is used herein may include, but is not limited to, a hardware device, software device, physical device, virtual device or logical device. Branches152may route data, data packets and other information to and from a data center128through, for example the Internet142or MPLS144. Branches152may also route data, data packets and other information to and from Applications148, Ad Networks134and Content Delivery Networks138through, for example the Internet142or MPLS144. The Internet142may include a private cloud and be less expensive, but also less secure, than MPLS144. MPLS144may include a private WAN. Examples of branch152types include, but are not limited to, a retail store, a bank store front, a data input point, such as a point-of-sale device, remote offices of an enterprise, and distributed computing sites. Layer 2 of a data network is referred to as the data link layer118. This layer includes the functional elements that enable the transfer of data, such as data units consisting of frames, from one network entity to a second network entity. Physical addressing of network elements also resides within Layer 2. The Media Access Control (MAC) sub layer controls how a computer on the network gains access to the data and permission to transmit it and the Logical Link Control (LLC) layer controls frame synchronization, flow control and error checking. Layer 3 of a data network is referred to as the network layer114. 
It is at this layer that functionality such as packet forwarding occurs, as well as elements of routing protocol management. Forwarding at this layer is packet-based, as opposed to being based on, for example, data flows occurring within an application session, as described herein. Routing protocols may include, but are not limited to, static routes, Border Gateway Protocol (BGP), Intermediate System-to-Intermediate System Protocol (IS-IS), Enhanced Interior Gateway Routing Protocol (EIGRP), and Open Shortest Path First (OSPF). Routing protocols provide a mapping of reachable network endpoints (IP Addresses) and the available network interfaces and paths available to reach them. Layer 4 of a data network is referred to as the policy driven flow and session forwarding layer112. Logical addressing, multiplexing, data flow control, packet fragmentation and reassembly may occur at this layer. Data forwarding and path selection at this layer may be based on Policy. A Policy specifies the treatment that should be offered to the application flow or application session, including choice of network paths. Policies thus provide a filtered set of network paths from Layer 3 available to an application or user at a network device. Data forwarding and path selection at this layer may be based upon an application session. As used herein, a “session” refers to one or more flows identified as belonging to a client512accessing a specific application. As used herein, “client device” refers to a laptop computer, desktop computer, tablet computer, phone, smart phone, server, PDA, smart watch, wearable device, or some other type of client device capable of computing and/or interacting with a data network, as described herein. As used herein, a “flow” refers to a stream of packets (bidirectional) associated with a single session. As used herein, “packets” refer to the basic unit of data transfer in a network.
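The packet, flow, and session definitions above can be made concrete with a small sketch: packets are grouped into bidirectional flows by their 5-tuple, the usual first step before flows can be collated into an application session. The packet field names are assumptions for illustration.

```python
# Illustrative sketch: grouping packets into bidirectional flows by a
# canonical 5-tuple, so both directions of a connection map to the same
# flow. Packet fields are assumed for illustration only.

def flow_key(pkt):
    """Canonical bidirectional 5-tuple for a packet."""
    a = (pkt["src_ip"], pkt["src_port"])
    b = (pkt["dst_ip"], pkt["dst_port"])
    return (pkt["proto"],) + tuple(sorted([a, b]))

def group_flows(packets):
    flows = {}
    for pkt in packets:
        flows.setdefault(flow_key(pkt), []).append(pkt)
    return flows

packets = [
    {"proto": "tcp", "src_ip": "10.1.1.2", "src_port": 50000,
     "dst_ip": "52.0.0.1", "dst_port": 443},
    {"proto": "tcp", "src_ip": "52.0.0.1", "src_port": 443,
     "dst_ip": "10.1.1.2", "dst_port": 50000},   # reply: same flow
    {"proto": "tcp", "src_ip": "10.1.1.2", "src_port": 50001,
     "dst_ip": "52.0.0.1", "dst_port": 443},     # a second flow
]

flows = group_flows(packets)
print(len(flows))   # 2 flows: the reply joins the first flow
```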
As used herein, “session-flow” refers to any flow belonging to an identified session. Current solutions at this layer may be limited because they define applications by, for example, IP addresses or port numbers. Port numbers are not workable in cloud environments. For example, an application like Microsoft Exchange (Outlook email) may be served out of port135. When an email is delivered as a service from a cloud-computing environment, such as Gmail, Yahoo and the like, the client device164used may primarily connect over HTTPS (port443). It is likely that many cloud-based applications are delivered through the same HTTPS port443. Thus, the port information alone is not sufficient to define the application. The concept of an “application session,” and the ability to define and identify an application session, have utility because existing solutions are imprecise in determining the data flows that are associated with a given application. Existing solutions typically measure performance on those flows, which includes jitter, bandwidth (BW) and other metrics. “Flow” and “data flow” as used herein may include, but are not limited to a connection, TCP connection, transaction, application transaction, 3-way handshake, connections per second or CPS, session, application session, bi-directional exchange, bi-directional flow, unidirectional flow (e.g., in UDP based data exchanges), mixed flows, channels, netflow, concurrent flows, or some other type of data flow. A flow may consist of multiple packets, and a session may consist of multiple flows. Various parameters may be used to categorize data flows including, but not limited to, transactional, transactional real time and the like. According to the methods and systems of the present invention described herein, a flow may be detected as belonging to an application, but also as belonging to an application session.
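Collating flows into an application session, as introduced above, can be sketched as grouping flows from the same client to sites known to be related (mirroring the Office365 example that follows). The related-site list and field names are illustrative assumptions.

```python
# Sketch of collating flows into an application session: flows from the
# same client to related sites are marked as one session, even though
# individually they might look like different applications. The
# related-site grouping is an illustrative assumption.

RELATED_SITES = {
    "office365": {"portal.office.com", "sharepoint.com", "lync.com"},
}

def collate_sessions(flows):
    """Map (client, application) -> list of flows in that session."""
    sessions = {}
    for flow in flows:
        app = next((name for name, sites in RELATED_SITES.items()
                    if flow["server"] in sites), flow["server"])
        sessions.setdefault((flow["client"], app), []).append(flow)
    return sessions

flows = [
    {"client": "10.1.1.2", "server": "portal.office.com"},
    {"client": "10.1.1.2", "server": "sharepoint.com"},
    {"client": "10.1.1.2", "server": "example.org"},
]

sessions = collate_sessions(flows)
print(len(sessions[("10.1.1.2", "office365")]))   # 2 flows, one session
```

With the flows collated this way, performance measurements can be applied cohesively to every flow in the session rather than to each flow in isolation.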
An application session may collate different flows originating from the same source and mark them as belonging to a session. Some of these flows may appear to be completely different applications, when in fact they are not. Performance measurements may be cohesively used on all the flows belonging to a session, thus improving actual application performance and control. For example, some applications may not be impacted by packet loss. In another example, round-trip latency might not be relevant to applications that require multiple round trips for proper application performance. Latency issues may include, but are not limited to, geographical diversity of Internet connectivity, network traffic volume, the distance between a client512and a server, and packet loss. For example, packet loss may be caused by natural “lossiness” of the link/connectivity or due to network volume that causes congestion and thus increases packet loss. In an embodiment, an application session may be defined as a time frame, for example data derived from Time1to Time2. In another embodiment, an application session may be defined as the time occurring between a user168logging in to use an application, and logging out to discontinue use of the application. In another example, an application session may be defined as data derived from activities that are considered related applications or sites, such as websites. For example, the Office365 application may include data flows to an Office365 portal, an Office365 Sharepoint, Lync or some other related service or site. Layer 5 of a data network is referred to as the application reachability forwarding layer110.
Application-based forwarding may occur at this layer, and may be based on “reachability.” Reachability refers to a point of differentiation within forwarding where the issue is not “can I reach a particular IP address through a specific path/connectivity,” but instead “can I reach an application and can I do so in a manner that is consistent with a policy regarding the application.” Enterprises may use multiple connectivity choices. For example, an enterprise may use MPLS144plus a Comcast Internet connection (i.e., a hybrid connectivity) for data routing. Layer 5 thus provides a filtered set of network paths from Layer 4 available to an application based on reachability. Layer 6 of a data network is referred to as the application performance forwarding layer108. Application-based forwarding may be based on performance. A user's168Internet point of presence (POP) might be geographically diverse. This can create a latency problem (slower performance). A configured device and controller may monitor applications of interest to a user168and adjust selectable paths and forward traffic based on application performance. Controller as the term is used herein refers to a multi-tenant controller. Forwarding decisions taken may be carried out consistent with a policy, with connectivity permissions, or some other rule or criterion. Layer 6 thus provides a filtered set of network paths from Layer 5 available to an application based on application performance on one or more network paths. Layer 7 of a data network is referred to as the application and user layer104. This is where applications and users operating over the network reside. The traffic from user to application and from application to user enters at this layer and, based on the final filtered set of network paths for that application and user, an appropriate path is chosen for forwarding. Layer 8 of a data network is referred to as the operator layer102.
This is where the network operators reside, such as an IT administrator that is responsible for setting user-based policies, generating reports, or some other administrative function. In accordance with exemplary and non-limiting embodiments, and referring to FIG.1B, a multi-tenant controller 122 may be associated with a configured device within a data network. The multi-tenant controller 122 and the configured device may have physical or logical connectivity 150 with a plurality of facilities, including but not limited to, a data center 128 or a plurality of data centers 128, an ad network 134, a content delivery network 138, and third party data and applications 140. Data centers 128 within the network in this simplified example, and with which the multi-tenant controller 122 may communicate, may have hub devices, additional configured devices and/or applications 132, with which the multi-tenant controller 122, and/or a configured device associated with the multi-tenant controller 122, may communicate. Ad networks 134, content delivery networks 138, and third party data and applications 140 may provide data into the network, such as pairing an advertisement with a web content delivery to a client device 164, the data of which may be recognized, tracked and recorded by the multi-tenant controller 122 and/or the configured device that is associated with the multi-tenant controller 122. Data within the network may derive from a branch 152 of the network, such as a physical office or store of an enterprise. A branch 152 may have network facilities, including but not limited to, a shaper 154, a device switch 158, an application 132, a configured device, a server 160, a router 162, or some other network facility or device. The branch 152 may communicate with other facilities within the network, such as the data center 128 or the multi-tenant controller 122, through the Internet 142 or MPLS 144, using physical or logical connectivity 150.
A user 168 of the network may interact with a client device 164 to access the network; for example, a user 168 may use a computing tablet, such as an iPad, to interact with a bank branch 152 in order to transfer money from one bank account to another. As described herein, a multi-tenant controller 122, including in conjunction with a configured device, may monitor the network activity and record and store such activity for the purpose of optimizing the flow of traffic over the network so that it is consistent with a policy, such as a policy indicating that a certain type of data flow (e.g., "sensitive banking data") be given preferential routing relative to less important data (e.g., "Internet banner advertisements"), or some other type of policy. In traditional networks, applications used in enterprises have different levels of need from a wide area network (WAN). Available WAN solutions forward traffic primarily based on reachability information and at best do so utilizing data indicative of link-level conditions such as packet error rates and packet loss rates. Legacy networks, especially WANs, connect locations or sites. These sites may be remote offices or branches 152 and data centers 128. Typically, applications reside in these data centers 128 and users 168 access these applications through the network. If a specific application needs differential treatment, either for quality of service (QoS) or routing (e.g., wanting to send certain application traffic through a certain location), the only way to achieve this has been to manually translate that application to a network identity, typically an IP address or IP address range, and then manipulate routers 162 and routing protocols to achieve the intent (differential treatment). The complexity surrounding such manipulation and achievement of the business intent can be enormous, and mostly involves a "change and pray" approach. Policy-based security/firewalls and policy-based routing have been in place for a while.
Firewalls mostly use policies to allow/deny traffic, and, in some cases, allowed traffic may be given forwarding/path constraints. Typically, router devices 162 use "routemaps," which allow certain packets matched by IP address and packet length to be classified and treated according to some policy specifications. These policy specifications allow for choice of the next-hop destination for each packet. Therefore, there is a need for methods and systems for determining the location-based distribution of applications and related definitions, path characterizations and the dynamic characterization of applications for determining optimal, policy-based path selection of application traffic flows across the available network assets. In accordance with exemplary and non-limiting embodiments, there is provided a method by which the WAN may adjust to yield to the needs of applications in order to provide an optimal or near-optimal experience to the users 168. The production of such an experience begins with the routing of application flows on the most suitable link present at a branch office 152, meeting the needs of an application. Once an application flow is placed on an appropriate link, sufficient network resources may be guaranteed in accordance with a policy specification for the application flow. With reference to FIG.1C, there is illustrated a method according to an exemplary and non-limiting embodiment. First, at step 100C, applications are identified, as are the network requirements of the applications. Then, at step 102C, application flows are forwarded appropriately on the various available links, based on the current state of the network 528. These forwarding decisions constantly evolve based on the dynamic state of the network 528.
In accordance with exemplary and non-limiting embodiments, configurable spoke devices perform the traffic forwarding decisions and are provided the necessary pieces of information: a) application identification details and b) application traffic profiles. Each configurable device 124 also maintains the path characterization information. Path characterization may include information on bandwidth availability on a link, latency profiles, traffic volume caps on a link and the like. The configurable device 124 may then choose a link meeting all the constraints that is most optimal or most nearly optimal for an application at that time instant. In accordance with exemplary embodiments, application intelligence and unique path characterization details are utilized to choose the most suitable link. The multi-tenant controller 122 effectively "understands" what an application is, beyond its IP address, and configurable devices 124 at a branch 152 may parse a flow and detect whether the specific flow represents a specific application or not. As a result, a simple policy model may be implemented by which one can specify a business intent directly, and the multi-tenant controller 122 and device together make it happen in the network. Such an expression of business intent may be, for example: "For Application Salesforce, give it Gold priority, send it through Atlanta DC". With reference to FIG.1D, there is illustrated a method according to an exemplary and non-limiting embodiment. At step 100D, application-specific research yields a signature unique to every application.
In embodiments, an application signature may include a combination of one or more of the following: (a) domain name(s), (b) URLs, (c) a Server Network Identity (SNI), (d) a source and/or destination IP address, (e) a source and/or destination port, (f) any sequence of bytes at static positions in encrypted or unencrypted flow transactions, (g) any sequence of bytes following a matching previous sequence of bytes in encrypted or unencrypted flow transactions, (h) size/transmission/frequency patterns of encrypted flow transactions, and (i) the order and pattern of flow establishment and data relative to other flows seen. In accordance with an exemplary and non-limiting embodiment, a method comprises determining a signature for each of a plurality of applications based, at least in part, on at least one of a domain name, a URL, a Server Network Identity (SNI), a source IP address, a destination IP address, a source port, a destination port, any sequence of bytes at static or dynamic positions in encrypted or unencrypted flow transactions, any sequence of bytes following a matching previous sequence of bytes in encrypted or unencrypted flow transactions, size, transmission and frequency patterns of encrypted flow transactions, the order and pattern of flow establishment and data relative to other flows seen, and HTTP metadata; transmitting each of the signatures to a branch device 152; and transmitting a policy string 202 defining a business policy to be applied to a data flow exhibiting the signature to the branch device 152.
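As a rough illustration of how the signature components listed above could be checked against a flow, the following is a minimal sketch; it is not the patented implementation, and the names (AppSignature, matches) and flow dictionary keys are assumptions:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AppSignature:
    """Illustrative application signature (field names are hypothetical)."""
    app_id: str
    domains: list = field(default_factory=list)   # e.g. ["salesforce.com"]
    snis: list = field(default_factory=list)      # TLS Server Name Indication values
    byte_pattern: Optional[bytes] = None          # bytes expected at a static offset
    byte_offset: int = 0

def matches(sig: AppSignature, flow: dict) -> bool:
    """Return True if any signature component matches the observed flow."""
    if flow.get("sni") in sig.snis:
        return True
    host = flow.get("host", "")
    if any(host == d or host.endswith("." + d) for d in sig.domains):
        return True
    if sig.byte_pattern is not None:
        payload = flow.get("payload", b"")
        start = sig.byte_offset
        if payload[start:start + len(sig.byte_pattern)] == sig.byte_pattern:
            return True
    return False
```

In this sketch, a device would evaluate `matches` as each flow starts; a flow carrying host "na1.salesforce.com" would match a signature whose domain list contains "salesforce.com".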
In accordance with an exemplary and non-limiting embodiment, a method comprises receiving a signature for each of a plurality of applications based, at least in part, on at least one of a domain name, a URL, a Server Network Identity (SNI), a source IP address, a destination IP address, a source port, a destination port, any sequence of bytes at static or dynamic positions in encrypted or unencrypted flow transactions, any sequence of bytes following a matching previous sequence of bytes in encrypted or unencrypted flow transactions, size, transmission and frequency patterns of encrypted flow transactions, the order and pattern of flow establishment and data relative to other flows seen, and HTTP metadata; receiving at least one policy string 202 defining a business policy; analyzing at least one network data flow to detect the presence of the signature; and applying the business policy to the at least one network data flow comprising the detected signature. In accordance with an exemplary and non-limiting embodiment, a centrally controllable multi-tenant controller 122 for controlling a plurality of assets across a plurality of distributed computing environments is configured to determine a signature for each of a plurality of applications based, at least in part, on at least one of a domain name, a URL, a Server Network Identity (SNI), a source IP address, a destination IP address, a source port, a destination port, any sequence of bytes at static or dynamic positions in encrypted or unencrypted flow transactions, any sequence of bytes following a matching previous sequence of bytes in encrypted or unencrypted flow transactions, size, transmission and frequency patterns of encrypted flow transactions, the order and pattern of flow establishment and data relative to other flows seen, and HTTP metadata, transmit each of the signatures to a branch device 152, and transmit a policy string 202 defining a business policy to be applied to a data flow exhibiting the signature to the
branch device 152. In accordance with an exemplary and non-limiting embodiment, a networked branch device 152 is configured to receive a signature for each of a plurality of applications based, at least in part, on at least one of a domain name, a URL, a Server Network Identity (SNI), a source IP address, a destination IP address, a source port, a destination port, any sequence of bytes at static or dynamic positions in encrypted or unencrypted flow transactions, any sequence of bytes following a matching previous sequence of bytes in encrypted or unencrypted flow transactions, size, transmission and frequency patterns of encrypted flow transactions, the order and pattern of flow establishment and data relative to other flows seen, and HTTP metadata, receive at least one policy string 202 defining a business policy, analyze at least one network data flow to detect the presence of the signature and apply the business policy to the at least one network data flow comprising the detected signature. In accordance with various embodiments, these application signatures are generated and published to the multi-tenant controller 122. In practice, the customer may use whatever method he/she is naturally inclined to utilize. In some embodiments, customers may specify the signature as domain names/URLs, as applications are increasingly becoming web oriented. Next, at step 102D, the multi-tenant controller 122 may push the application signatures to configurable branch devices 124. Every configurable device 124 has the capability to parse every flow that goes through the configurable device 124 and detect the presence of the application signature as the flow starts. Once matched, additional functions can be applied to the flow.
Finally, at step 104D, the multi-tenant controller 122 exposes an API through which the network admin/user can define a POLICY STRING 202 to express the Business Intent as follows:

<APP>, <IDENTITY>-<PRIORITY>, <SERVICE>, <VXWAN>, <PATH>

where:

APP: Application name/id, provided by the provider of configured devices or custom defined by the network admin/user
IDENTITY: User name/id or Device id, optional
PRIORITY: Business Priority for the <APP, IDENTITY> combination
SERVICE: Service(s) that should be provided for the <APP, IDENTITY> combination, optional
VXWAN: Virtual WAN isolation identifier for the <APP, IDENTITY> combination, optional. Examples include, but are not limited to, a network isolation identifier.
PATH: Path constraints for the <APP, IDENTITY> combination, optional

As used herein, "policy string" 202 defines, generally, who can connect to what application with what kind of priority. "Business Priority" refers to the abstraction of quality of service (QoS) into categories. In exemplary embodiments, a standardized policy string 202 format may be utilized. In accordance with an exemplary and non-limiting embodiment, a centrally controllable multi-tenant controller 122 for controlling a plurality of assets across a plurality of distributed computing environments may be configured to allocate a network-isolation identifier to each of a plurality of policies that contains the same VXWAN directive and transmit each of the plurality of policies with that network-isolation identifier to one or more devices in a network. A network-isolation identifier as used herein includes, but is not limited to, a virtual network identifier (VNI), a segment identifier, and/or a network identifier. A network-isolation identifier may be or may include numeric data, alphabetic data, and/or alphanumeric data. A policy identifier may include a network-isolation identifier.
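As a rough illustration of the `<APP>, <IDENTITY>-<PRIORITY>, <SERVICE>, <VXWAN>, <PATH>` layout, the following sketch parses a policy string into its fields; the exact delimiter rules are an assumption, and the names PolicyString and parse_policy_string are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PolicyString:
    app: str
    identity: Optional[str]   # optional
    priority: str             # Business Priority
    service: Optional[str]    # optional
    vxwan: Optional[str]      # optional virtual-WAN isolation identifier
    path: Optional[str]       # optional path constraint

def parse_policy_string(text: str) -> PolicyString:
    """Split '<APP>, <IDENTITY>-<PRIORITY>, <SERVICE>, <VXWAN>, <PATH>'.
    Empty trailing fields stand for omitted optional directives."""
    lhs, rhs = text.split("-", 1)           # first '-' separates the identity part
    app, _, identity = (p.strip() for p in lhs.partition(","))
    fields = [p.strip() or None for p in rhs.split(",")]
    fields += [None] * (4 - len(fields))    # pad omitted optional fields
    priority, service, vxwan, path = fields[:4]
    return PolicyString(app, identity or None, priority, service, vxwan, path)
```

For example, parsing "Salesforce, alice - Gold, , vxwan-7, Atlanta DC" yields priority "Gold", no SERVICE, VXWAN "vxwan-7" and path "Atlanta DC"; a hyphen inside APP would confuse this simple split, which is acceptable for a sketch.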
With the capabilities described above, every branch configurable device 124 may look for the application signature associated with an application and, when detected, enforce the Policy as follows:

PRIORITY: Enforces priority queues, similar to classic QoS, within the device and specific to the interface on which the flow is exiting and entering the device. The enforcement happens in a bidirectional manner, affecting the download and upload directions, thus causing the flow, and thus the application, to operate within the bounds of the PRIORITY intent.
SERVICE: Forwards the flow, and thus the application, to a specific service. A SERVICE, which is defined separately, could be reached from the device through normal L3 forwarding, or through an L3 tunnel, or be defined as a recipient of a copy of the flow. In the latter case, the configured device makes a copy of every packet of the flow in both directions and sends it to the service, wherever it may exist in the network. The other two services are expected to be in the path of the flow, and thus the flow is forwarded towards that service through normal L4 -> L3 forwarding.
VXWAN: As the flow is forwarded, the configured device can optionally add a "VXWAN" label/encapsulation, which is a Virtual WAN Identifier. This VXWAN 204, similar to VXLAN in the data center 128, has applications beyond just encapsulation.
PATH: Path directive; if present, constrains the paths that the flow for the APP and/or IDENTITY specified in the policy is allowed to take as the flow exits the configured device.

VXWAN 204 allows customers to create several small, isolated overlay tunnels over the same physical underlay. Each tunnel may be encrypted with a different set of keys, such that even if the keys of one of the tunnels are compromised, the others are not exposed. From a customer's perspective, VXWAN 204 allows customers to create isolated network segments end to end (with VLANs in the branch 152 and DCs, and VXWAN 204 in the WAN).
This helps in setting operational boundaries between the various teams in the enterprise, securing the various parts of the network infrastructure independently, and meeting various regulatory compliance requirements, such as PCI. Advantages associated with these embodiments include a policy string 202 used for business intent and not just for choosing the next hop or security treatment. Further, currently implemented solutions lack the concept of VXWAN 204. In addition, while certain security products have used APP and IDENTITY for the purposes of local security enforcement and have separately provided a way to direct traffic on specific interfaces, the described method goes beyond this such that the APP and IDENTITY are used to specify the network treatment of the flow across the entire WAN. From a routing perspective, a server 160 or a next hop may be reachable even though an application is not available on the path. For example, an HTTP proxy sitting on a path may have performance issues and may be blocking connections. From a routing perspective, this blocking will not be seen as a failure, and packets will continue to be forwarded to the proxy. It is further possible that the latency on a path is so high that an application is effectively unusable, while operation of the application on a different path would be fine. In embodiments, an application may also reside in the Internet, such as possibly with an Ad Network 134, Content Delivery Network 138, or Third Party Data Applications 140. In accordance with exemplary and non-limiting embodiments, application-level probing ensures that an application is reachable and usable by a user 168 at all times. Configurable spoke devices perform "app probing," where the configurable spoke device 124 periodically checks the health of the applications on all of the paths. The configurable device 124 may then rate the links from best path to worst path and provide this information to the multi-tenant controller 122.
Upon receiving the information, the multi-tenant controller 122 may use the information to route application flows. Rating of the links may be as simple as ordering by the lowest network latency, or may be as sophisticated as considering application-level transaction latencies. In accordance with exemplary embodiments, active probing and rating of the network paths may be performed in the context of each application (e.g., box.com) or sub-application (e.g., office365-lync, office365-sharepoint) for SaaS services. In the case of enterprise applications, active probing may be performed on the specific servers 160 of an application, as there can be many application delivery end points in an enterprise. For example, for Server Message Block (SMB) (file share) applications, there may be many servers 160 that serve an application with different content. As described herein, active probing differs from generic probing as typically practiced. When employing generic probing, a typical ping to a known server may get through (establishing L3 connectivity) even when a connection to the server 160 cannot be established. By establishing a session with the server 160 via active probing, present embodiments establish application-level connectivity. In accordance with various embodiments, a server-level check may be performed in the context of applications. For example, some applications like SMB have a service end point per server 160, unlike a SaaS service where there can be many servers 160 backing a single HTTP service. The applications in the former category are internal enterprise applications. For this category of applications, one may monitor at a server level. In some scenarios it is even possible that a single server may host multiple services, like SMB and HTTP. In that case, the same server will be probed for each service end point. In other words, the probing is done at the actual service delivery endpoint to make sure that the application is accessible.
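The best-to-worst link rating described above might look like the following sketch, where each probe result records whether an application session could be established and the observed application-level latency; the field names are assumptions for illustration:

```python
def rate_paths(probe_results):
    """Order paths best to worst: paths where the app session succeeded
    sort by application-level transaction latency; unreachable paths go last."""
    usable = sorted((r for r in probe_results if r["session_ok"]),
                    key=lambda r: r["app_latency_ms"])
    unusable = [r for r in probe_results if not r["session_ok"]]
    return [r["path"] for r in usable + unusable]
```

A device would report this ordering to the controller; for example, probes over MPLS (30 ms), Internet (80 ms) and an unreachable LTE link would rank MPLS first and LTE last.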
In some scenarios, the location from which an application is served is less than optimal. This is often the case with SaaS apps, where applications may be delivered from multiple locations, e.g., the western United States, the eastern United States, the European Union, Latin America, etc. In such cases, it is often best to send such application flows initiated by a user 168 at a branch 152 to the closest possible app delivery location. In practice, this may not happen due to, for example, (1) the location of DNS servers and query points, (2) sub-optimal policies and (3) the unavailability of information related to application delivery locations. With reference to FIG.1E, there is illustrated a method according to an exemplary and non-limiting embodiment. First, at step 100E, according to exemplary embodiments, SaaS application delivery locations may be identified via (1) manual information collection and feeding the system, (2) triangulation based on sourcing DNS queries from various locations across the globe and (3) information collected through app probing as described above. After collecting such information, at step 102E, the system may apply analytics by a component of the multi-tenant controller 122 on all observed traffic flows and extract the following three pieces of information: (1) where the user 168 is located (based on site information), (2) where the closest application delivery location is (using the above set of data) and (3) where the user flow was serviced from. Then, at step 104E, the system may (1) aggregate instances where the application was delivered from a sub-optimal location (based on originating geographic location), (2) recognize patterns and (3) if there is significant and consistent sub-optimal use, report such use to a user, such as the administrator, of the multi-tenant controller 122. In an exemplary and non-limiting embodiment, the methods and systems described herein may be applied to address and remediate instances of sub-optimal use.
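The aggregate-and-report step at step 104E can be sketched as follows; the flow fields, the site-to-closest-location map and the reporting threshold are all assumptions for illustration:

```python
from collections import Counter

def report_suboptimal(flows, closest_by_site, threshold=0.2):
    """Aggregate, per site, the fraction of flows served from somewhere other
    than the closest known delivery location, and report sites where the
    sub-optimal fraction is significant and consistent (>= threshold)."""
    bad, total = Counter(), Counter()
    for f in flows:
        total[f["site"]] += 1
        if f["served_from"] != closest_by_site[f["site"]]:
            bad[f["site"]] += 1
    return {site: bad[site] / total[site]
            for site in total if bad[site] / total[site] >= threshold}
```

A controller component running something of this shape over observed flows would surface, for example, a New York site consistently served from a European delivery location.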
As a result, there is provided (1) automatic application service delivery location learning, (2) a geo-IP based approach to determine the actual application delivery location and (3) identification and reporting of sub-optimal selection of the service delivery location. In accordance with exemplary embodiments, the definition of applications may be performed on a per-site basis based, at least in part, on geography. In such instances, application detection profiles may be defined by system operators/administrators and/or customers. These profiles may then be propagated from the multi-tenant controller 122 to configurable devices 124 so that the configurable devices 124 can detect applications. SaaS applications 604 typically have service/application offering locations distributed around the globe. When a user 168 accesses an application, the user is directed to the closest service offering location. Because different service offering locations have different HTTP domain names, application definitions need to include the domain names. In accordance with exemplary embodiments, instead of uniformly distributing an application definition with all of the domain names to all of the configurable devices 124, the application definition may be customized based on the location of the configurable device 124 to which it is distributed. For example, one may send the application definition "eu.xforce.com" to a device in Europe, and "na.xforce.com" to a device in North America. Definitions sent to configurable devices 124 are customized based, at least in part, on the location of the configurable device 124. In accordance with exemplary and non-limiting embodiments, each configurable spoke device 124 may collect and report statistics from the site where it is located to the multi-tenant controller 122. The multi-tenant controller 122 may then send back a traffic profile to the configurable device 124 applicable to the site.
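Customizing the definition per device location, as in the eu.xforce.com / na.xforce.com example above, can be sketched like this; the region map and function name are illustrative, not the product's API:

```python
# Hypothetical mapping from region to that region's service domain names,
# following the eu.xforce.com / na.xforce.com example in the text.
DOMAINS_BY_REGION = {
    "eu": ["eu.xforce.com"],
    "na": ["na.xforce.com"],
}

def definition_for_device(app_name, device_region):
    """Build the application definition pushed to a configurable device,
    carrying only the domains for the device's region; an unknown region
    falls back to every known domain."""
    domains = DOMAINS_BY_REGION.get(device_region)
    if domains is None:
        domains = [d for ds in DOMAINS_BY_REGION.values() for d in ds]
    return {"app": app_name, "domains": domains}
```

A controller would call something of this shape once per device at distribution time, so a device in Europe only ever parses flows against "eu.xforce.com".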
For new sites, for which a traffic profile may not exist, the system may choose the most common traffic profile for distribution and then customize it for the site soon after. In some embodiments, measurements of throughput from a server 160 to an end point may be normalized by subtracting out the server 160 contribution and normalizing the time in flight based on the size of a transaction. Thus, there is provided a method for the dynamic and on-going characterization of applications on a per-site basis and adjusting the behavior of the network in response thereto. In accordance with exemplary embodiments, the methods described herein may be applied to transactional applications including, but not limited to, chat programs, VoIP, video conferencing and screen sharing programs. In accordance with various exemplary embodiments, methods and systems for network traffic characterization described above may utilize physical connectivity 148, logical connectivity 150 or hybrid connectivity as described elsewhere herein. Furthermore, such methods and systems may be applied to application sessions as described above. Such methods and systems as described herein may be utilized by and for all manner of data center 128 applications as described herein. In accordance with various exemplary embodiments, methods and systems for network traffic characterization described above may utilize dynamic automated QoS definition as described herein. Exemplary embodiments may utilize file transfer applications and QoS standardized controls as described herein. Other exemplary embodiments may utilize parameters for categorizing data flows as described herein.
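The throughput normalization mentioned above (subtract out the server's contribution, then normalize by the size of the transaction) can be read as the following sketch; treating the server's processing time as a directly measurable input is an assumption:

```python
def normalized_throughput(total_time_s, server_time_s, txn_bytes):
    """Network-only throughput for one transaction, in bytes per second:
    remove the server's processing contribution from the measured time,
    then divide the transaction size by the remaining time in flight."""
    time_in_flight = total_time_s - server_time_s
    if time_in_flight <= 0:
        raise ValueError("server time must be less than total measured time")
    return txn_bytes / time_in_flight
```

For example, a 1,500,000-byte transaction measured at 2.0 s end to end with 0.5 s of server processing normalizes to 1,000,000 bytes per second of network throughput, making transactions of different sizes comparable.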
The following are illustrative clauses demonstrating non-limiting embodiments of the inventions described herein:

A method comprising:
determining a network requirement for at least one application;
dynamically determining a link suitable for data transmission in accordance with a policy based at least in part on a current network condition to meet the network requirement; and
routing one or more application network data flows associated with the at least one application over the link.

A method comprising:
determining a network requirement for at least one application;
determining a link suitable to meet the network requirement based, at least in part, on at least one of a plurality of application identification details and an application traffic profile; and
routing one or more application network data flows associated with the at least one application over the link.

Any of the clauses above wherein the application is operating at a node.
Any of the clauses above wherein the application is operating at a branch.
Any of the clauses above wherein dynamically determining the link is performed by a configurable spoke device.
Any of the clauses above wherein the configurable spoke device maintains path configuration information.
Any of the clauses above wherein path configuration information is selected from the group consisting of bandwidth availability on a link, a latency profile and a traffic volume cap.
A method comprising:
determining a signature for each of a plurality of applications based, at least in part, on at least one of a domain name, a URL, a Server Network Identity (SNI), a source IP address, a destination IP address, a source port, a destination port, any sequence of bytes in one of a static position and a dynamic position in encrypted or unencrypted flow transactions, any sequence of bytes following a matching previous sequence of bytes in encrypted or unencrypted flow transactions, at least one of a size, a transmission and a frequency pattern of encrypted flow transactions, an order and a pattern of flow establishment and data relative to other flows seen and HTTP metadata;
transmitting each of the signatures to a branch device; and
transmitting a policy string defining a business policy to be applied to a data flow exhibiting the signature to the branch device.

A method comprising:
receiving a signature for each of a plurality of applications based, at least in part, on at least one of a domain name, a URL, a Server Network Identity (SNI), a source IP address, a destination IP address, a source port, a destination port, any sequence of bytes at one of a static position and a dynamic position in encrypted or unencrypted flow transactions, any sequence of bytes following a matching previous sequence of bytes in encrypted or unencrypted flow transactions, at least one of a size, a transmission and a frequency pattern of encrypted flow transactions, an order and a pattern of flow establishment and data relative to other flows seen and HTTP metadata;
receiving at least one policy string defining a business policy;
analyzing at least one network data flow to detect the presence of the signature; and
applying the business policy to the at least one network data flow comprising the detected signature.
A centrally controllable multi-tenant controller for controlling a plurality of assets across a plurality of distributed computing environments wherein the controller is configured to:
determine a signature for each of a plurality of applications based, at least in part, on at least one of a domain name, a URL, a Server Network Identity (SNI), a source IP address, a destination IP address, a source port, a destination port, any sequence of bytes at one of a static position and a dynamic position in encrypted or unencrypted flow transactions, any sequence of bytes following a matching previous sequence of bytes in encrypted or unencrypted flow transactions, at least one of a size, a transmission and a frequency pattern of encrypted flow transactions, an order and a pattern of flow establishment and data relative to other flows seen and HTTP metadata;
transmit each of the signatures to a branch device; and
transmit a policy string defining a business policy to be applied to a data flow exhibiting the signature to the branch device.

A networked branch device configured to:
receive a signature for each of a plurality of applications based, at least in part, on at least one of a domain name, a URL, a Server Network Identity (SNI), a source IP address, a destination IP address, a source port, a destination port, any sequence of bytes at one of a static position and a dynamic position in encrypted or unencrypted flow transactions, any sequence of bytes following a matching previous sequence of bytes in encrypted or unencrypted flow transactions, at least one of a size, a transmission and a frequency pattern of encrypted flow transactions, an order and a pattern of flow establishment and data relative to other flows seen and HTTP metadata;
receive at least one policy string defining a business policy;
analyze at least one network data flow to detect the presence of the signature; and
apply the business policy to the at least one network data flow comprising the detected signature.
Any of the clauses above wherein the policy string comprises elements selected from the group consisting of a user identity, a SERVICE, a path and a VXWAN.

A method comprising:
checking at a predetermined interval a status of one or more network paths to a remotely located application;
rating the relative status of each of the one or more network paths, wherein the rating is based at least in part on at least one of network latency and an application level transaction latency; and
utilizing the rating to route an application data flow associated with the application.

A networked branch device configured to:
check at a predetermined interval a status of one or more network paths to a remotely located application;
rate the relative status of each of the one or more network paths, wherein the rating is based at least in part on at least one of network latency and an application level transaction latency; and
utilize the rating to route an application data flow associated with the application.

Any of the clauses above wherein the rating of the network paths is in the context of an operability of an application.
Any of the clauses above wherein the rating of the network paths is further based, at least in part, on a bandwidth of the path, a MOS score, a packet loss and jitter.

A method comprising:
determining a delivery location of an application based, at least in part, upon at least one of manual information collection, triangulation based on source DNS queries and application probing;
analyzing at least one network traffic flow associated with the application to extract data comprising at least one of a user location, a closest application delivery location and a source of user flow service;
aggregating the extracted data to identify a plurality of sub-optimal use instances wherein the application was delivered from a sub-optimal location; and
reporting the sub-optimal use instances to a user of the application.
Any of the clauses above further comprising taking an action to at least partially mitigate the sub-optimal use instance. Any of the clauses above wherein the user is an administrator. A centrally controllable multi-tenant controller for controlling a plurality of assets across a plurality of distributed computing environments wherein the controller is configured to:determine a delivery location of an application based, at least in part, upon at least one of manual information collection, triangulation based on source DNS queries and application probing;analyze at least one network traffic flow associated with the application to extract data comprising at least one of a user location, a closest application delivery location and a source of user flow service;aggregate the extracted data to identify a plurality of sub-optimal use instances wherein the application was delivered from a sub-optimal location; andreport sub-optimal use instances to a user of the application. Any of the clauses above configured to take an action to at least partially mitigate the sub-optimal use instance. Any of the clauses above wherein the user is an administrator. A method comprising:determining a location of one or more service offering locations each hosting a plurality of applications each having an associated application definition;determining a location of a device configured to access at least one of the plurality of applications; andtransmitting to the device an application definition for each of the applications to which the device has access, wherein the application definition is based, at least in part, on the location of one or more service offering locations and the location of the device. 
A centrally controllable multi-tenant controller for controlling a plurality of assets across a plurality of distributed computing environments wherein the controller is configured to:determine a location of one or more service offering locations each hosting a plurality of applications each having an associated application definition;determine a location of a device configured to access at least one of the plurality of applications; andtransmit to the device an application definition for each of the applications to which the device has access, wherein the application definition is based, at least in part, on the location of one or more service offering locations and the location of the device. A centrally controllable multi-tenant controller for controlling a plurality of assets across a plurality of distributed computing environments wherein the controller is configured to:receive application data flow information from a spoke device within a network that is associated with an enterprise site; andtransmit a traffic profile to the enterprise site based at least in part on a current network data flow characteristic. Any of the clauses above wherein a data transfer control over the network is at least one of application based and policy based. Any of the clauses above further comprising applying time-based application domain classification and mapping to determining steps. Any of the clauses above wherein the network comprises connectivity selected from the group consisting of hybrid, physical and logical. Any of the clauses above wherein the application analysis is performed on a per session basis. Any of the clauses above wherein at least one application comprises a data center application. Any of the clauses above wherein the policy string format is standardized. Any of the clauses above wherein routing is based, at least in part, on a dynamic and automated QoS definition. 
Any of the clauses above wherein applications are selected from the group consisting of transactional applications and file transfer applications. Any of the clauses above further comprising employing QoS standardized controls selected from the group consisting of shaping, policing, random early discard, tail drop, low latency queues, anomaly queues, fair queues and buffers. Any of the clauses above further utilizing parameters to categorize data flows. In traditional networking, adequately partitioning network traffic can be problematic. Partitioning may be done based at least in part on, but not limited to, routing uniqueness, security criteria, or network isolation. Using existing techniques, partitioning is typically implemented by virtual local area networks (VLANs) and virtual routing and forwarding (VRFs). VLANs and VRFs may be linked manually, where the manual linkage is at the branches152and data centers128. This manual linkage may provide network isolation across a wide area network (WAN), but there exists a need for controlling data flows in a policy-based manner that is application-specific and/or identity specific, where the implementation of the policy, and related data flow rules, is embodied in a device that is centrally controlled by a multi-tenant controller122. The multi-tenant controller122may be further used to establish extranet IPSEC VPN tunnels between separate enterprises (and associated networks). According to the methods and systems of the present invention described herein, a software defined enterprise WAN (SDEwan) may be used to create a network that is embodied in an application network driven model. A business intent of the model may be expressed through a policy string202. A policy string202may allow a specific application or identity to be specified that provides control over data flows occurring over the network. 
Control may be per-application, per-identity, a combination of application and identity, or based on some other parameter. In an embodiment, as part of a policy string202, a virtual extensible WAN (VXWAN)204may be specified that causes a device to encapsulate a specific set of flows identified by that policy in a unique VXWAN204. A VXWAN204may comprise a network-isolation identifier or label that may be put on the wire as data/packets belonging to the flows captured by the policy are forwarded between devices. A VXWAN204may comprise an optional unique encryption afforded only to packets belonging to the flows captured by the policy. A VXWAN204may comprise a way to insert a VXWAN network-isolation identifier at an entry point in the network and remove the network-isolation identifier at the exit in the network for the flow in a specific direction. A VXWAN204may be bidirectional in that the same VXWAN network-isolation identifier can be used for both directions of the traffic. A VXWAN network-isolation identifier name space may be global and allocated and managed by a multi-tenant controller122, as defined herein. In embodiments, a network administrator may specify a policy for an application or identity, or application-identity combination. This may include a VXWAN directive and may also include the type of path to take, including but not limited to the Internet142, a private WAN, a wireless route, and, for example, whether to use encryption or not. Encryption as used herein includes, but is not limited to, standard encryption methods, differentiated encryption methods, advanced encryption standard, OpenPGP, CipherSaber, or Triple Data Encryption Algorithm. A policy may also specify a priority level to be given to an identity, application or identity-application combination. For example, a policy may provide that all PCI applications, irrespective of the VLAN on which they arrive, on the LAN side, should be placed on a PCI VXWAN. 
In another example, a policy may provide that all traffic from the voice VLAN be placed on a voice VXWAN. Examples of identities include, but are not limited to, a user group, an executive user group, a specific individual (e.g., Tim Jones the CEO), a specific enterprise department (e.g., Finance), or some other identity. According to the methods and systems of the present invention described herein, and referring toFIG.2, a multi-tenant controller122may be provided and used to implement a policy, policy string202, or some other rule-based operation governing data flows over a network. A multi-tenant controller122may allocate a numeric network-isolation identifier or label from a namespace it manages for a tenant. The controller may send policies to devices in the network, and such policies may have VXWAN204specified and include any network-isolation identifiers allocated by the multi-tenant controller122. As a device identifies the application and identity of data flows entering and exiting the device, it may apply the policies it received from the multi-tenant controller122. If a VXWAN network-isolation identifier is present for a policy, the data path software on the device may insert the label on the wire as the packets of the flow exit the device. If the VXWAN204is an encrypted VXWAN, then a policy-specific encryption may be applied before the packets of the flow exit the device. At the other end of the data flow, the receiving device may de-encapsulate the traffic: the sending device places the VXWAN header, carrying the VXWAN network-isolation identifier, on the packet, and the receiving device may remove the VXWAN header and its network-isolation identifier and pass the packet to either a specific virtual routing and forwarding (VRF) instance on the core router178, or to a global VRF on the core router178.
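The encapsulation and de-encapsulation flow described above can be sketched briefly. The header layout (a 4-byte big-endian label prepended to the payload) and the policy-table key are illustrative assumptions; the patent does not specify a wire format for the VXWAN network-isolation identifier.

```python
# Sketch of VXWAN tagging: the sending device looks up the policy for a
# flow, prepends a network-isolation identifier, and the receiving device
# strips it before handing the packet onward. The 4-byte big-endian header
# layout is an assumption for illustration only.
import struct

def encap(payload, vxwan_id):
    """Prepend the VXWAN network-isolation identifier to the packet."""
    return struct.pack("!I", vxwan_id) + payload

def decap(packet):
    """Remove the VXWAN header, returning (identifier, original payload)."""
    (vxwan_id,) = struct.unpack("!I", packet[:4])
    return vxwan_id, packet[4:]

# Hypothetical (application, identity) -> network-isolation identifier map,
# as allocated by the multi-tenant controller from its tenant namespace.
policy_table = {("PCI", "any-user"): 1001}

vxwan_id = policy_table[("PCI", "any-user")]
wire_packet = encap(b"card-data", vxwan_id)
recovered_id, payload = decap(wire_packet)
```

The identifier is inserted at the entry point and removed at the exit point for a given direction, and because the same identifier can serve both directions, the VXWAN is bidirectional as described above.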
Examples of policy strings202include, but are not limited to, the examples provided in the following table:

Business Context | App | User | Network Paths Allowed | Priority | VXWAN (possibly including, without limitation, a network-isolation identifier)
Guest WiFi | Any | Any | direct Internet142 | P4 | None
Any | PCI applications | Any | VPN, private WAN | P1 | PCI
Voice VLAN | Any | Any | VPN, private WAN | P1 | voice
Any | Any | CEO | VPN, private WAN | P1 | CEO

In embodiments, a multi-tenant controller122may be used to establish extranet IPSEC VPN tunnels between two separate enterprises, sites or networks. The multi-tenant controller122may be used to indicate with which other tenant and to which site of the other tenant they want IPSEC VPN tunnels established. The multi-tenant controller122may also specify which prefixes in each of a plurality of sites can communicate with each other. In an example, the IT administrator of a tenant may indicate to the multi-tenant controller122the partner tenant to which they want to establish a VPN, to which site of the partner they want the VPN to occur, their own site which should participate in the VPN, the source prefixes or specific IPs in their site from which application traffic is allowed to go over the IPSEC VPN tunnels, and the destination prefixes or specific IPs for the application traffic which can go over the IPSEC VPN tunnels. Once the IT administrator of the partner tenant agrees to the request to allow creation of IPSEC VPN tunnels between the enterprises, the multi-tenant controller122may instruct the devices on each of the sites of the two enterprises to establish an IPSEC data tunnel between the two sites. This may further restrict the enterprises to allow traffic only between those source and destination prefixes that were specified by the administrators of the two tenants. In embodiments, the most restrictive subset of the allowed prefixes may be used on both sides. Examples of prefixes include, but are not limited to, 10.1.1.0/20 and 192.168.2.1/24.
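One reading of "the most restrictive subset of the allowed prefixes may be used on both sides" is that where one tenant's allowed prefix contains the other's, only the more specific (longer-mask) prefix survives. That interpretation, and the function below, are assumptions for illustration; the standard-library `ipaddress` module supplies the containment test.

```python
# Sketch: computing the most restrictive subset of prefixes allowed by two
# tenants. Where one side's prefix contains the other's, only the more
# specific prefix is kept. This interpretation is an assumption.
import ipaddress

def most_restrictive(prefixes_a, prefixes_b):
    """Return the overlapping, most specific prefixes from the two lists."""
    result = set()
    for a in map(ipaddress.ip_network, prefixes_a):
        for b in map(ipaddress.ip_network, prefixes_b):
            if a.subnet_of(b):
                result.add(str(a))      # a is the narrower prefix
            elif b.subnet_of(a):
                result.add(str(b))      # b is the narrower prefix
    return sorted(result)

# Hypothetical allowed prefixes from the two tenants' administrators.
allowed = most_restrictive(["10.1.0.0/16", "192.168.2.0/24"],
                           ["10.1.1.0/24", "172.16.0.0/12"])
```

Here only 10.1.1.0/24 survives: it is the narrower of the overlapping pair, and the remaining prefixes on each side have no counterpart on the other, so no traffic between them would be permitted over the extranet tunnel.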
In accordance with various exemplary embodiments, methods and systems for application and policy based network data transfer control described above may utilize physical connectivity148, logical connectivity150or hybrid connectivity as described elsewhere herein. Furthermore, such methods and systems may be applied to application sessions as described above and may employ a standardized policy string202format and encryption. The following are illustrative clauses demonstrating non-limiting embodiments of the inventions described herein: A method comprising:allocating an identifier to each of a plurality of policies each comprising a network-isolation identifier associated with a VXWAN directive; andtransmitting each of the plurality of policies to one or more devices in a network. A centrally controllable multi-tenant controller for controlling a plurality of assets across a plurality of distributed computing environments wherein the controller is configured to:allocate an identifier to each of a plurality of policies each comprising a network-isolation identifier associated with a VXWAN directive; andtransmit each of the plurality of policies to one or more devices in a network. A method comprising:receiving each of a plurality of policies at a networked device wherein each policy comprises a network-isolation identifier associated with a VXWAN directive; andinserting the network-isolation identifier into one or more packets transmitted by the networked device. A networked branch device configured to:receive each of the plurality of policies wherein each policy comprises a network-isolation identifier associated with a VXWAN directive; andinsert the network-isolation identifier into one or more transmitted packets. 
A method comprising:receiving from a first network tenant an indication of a partner network tenant with which to establish an IPSEC VPN tunnel, at least one of a site associated with the partner network tenant to which an IPSEC VPN tunnel is to be established, source prefixes at a tenant site from which application traffic is allowed to travel over the IPSEC VPN tunnel and destination prefixes for application traffic that can travel over the IPSEC VPN tunnel;receiving an indication from the partner network tenant of agreement to establish the IPSEC VPN tunnel; andinstructing one or more devices of the first network and the partner network to establish an IPSEC data tunnel between themselves. A centrally controllable multi-tenant controller for controlling a plurality of assets across a plurality of distributed computing environments wherein the controller is configured to:receive from a first network tenant an indication of a partner network tenant with which to establish an IPSEC VPN tunnel, at least one of a site associated with the partner network tenant to which an IPSEC VPN tunnel is to be established, source prefixes at a tenant site from which application traffic is allowed to travel over the IPSEC VPN tunnel and destination prefixes for application traffic that can travel over the IPSEC VPN tunnel;receive an indication from the partner network tenant of agreement to establish the IPSEC VPN tunnel; andinstruct one or more devices of the first network and the partner network to establish an IPSEC data tunnel between themselves. Any of the clauses above wherein the network comprises connectivity selected from the group consisting of hybrid, physical and logical. Any of the clauses above wherein the application analysis is performed on a per session basis. Any of the clauses above wherein the policy string format is standardized. Any of the clauses above wherein encryption is employed.
In traditional networks, devices make next-hop routing decisions for data at the packet level. Routing protocols are used to exchange routing information between devices. However, local networks oftentimes need to be manually configured via network statements or redistribution for each routing protocol so that prefixes can be advertised to other routing peers. In accordance with various exemplary and non-limiting embodiments, there is provided a method for reducing and/or eliminating asymmetric session flows using a flow table to determine a forwarding path based, at least in part, on the direction of incoming traffic in order to eliminate asymmetric flows (each direction of a flow taking two different paths) and/or packet looping. As described below, flow tables may be maintained at the configurable device124. Network failures may cause transient or semi-permanent asymmetry after an application flow has started. In the present instance, “asymmetry” refers to the different paths traffic takes in the forward and reverse directions as it is sent and received. In some instances, network asymmetry can adversely impact the performance of feedback-based transport protocols such as Transmission Control Protocol (TCP). Even if the network path in the direction of session flow is uncongested, congestion in the opposite direction can disrupt the flow of feedback, leading to poor performance. Furthermore, troubleshooting the underlying causes of congestion may pose a problem, as one may need to consult the routing tables at each device to see which path is taken by each direction of session flow traffic. Asymmetry at the “TCP flow” level may cause significant performance issues and visibility challenges for security devices like firewalls. Such devices need to see all the traffic on the network to function properly, but when asymmetry occurs not all the traffic passes through the device due to rerouting.
As a result, firewalls, for example, may only “see” one side of the traffic. With reference toFIG.3A, there is illustrated an example of network flow between two configured devices labeled as “A” and “D” as the session flows amongst various nodes “B”, “C”, “E” and “F”. As illustrated, network flow proceeds from A→B→C→D where A and D are configured devices. With reference toFIG.3B, there is illustrated an example wherein the network link between nodes C and D is broken. In such an instance, the flow may take an alternate path from C→F→D. Consequently, the reverse direction flow from node D to node A may instead take path D→F→E→A. In this scenario, at node A, the configurable device124observes that the flow was sent to node D through node B but came back from node D via node E. In response, the configurable device124at node A may automatically make a path switch decision and commence to send the flow in both directions through node E on its way to node D thus preserving/enforcing symmetry. Note that it is possible for a transition period to exist where the packets on a flow may be received on both paths. In that case the packets on the old path are still received and forwarded to the client, while the new packets in both directions take the new path. Also, a hold down period is employed after switching paths before another path switching decision is taken. This helps in avoiding flaps due to packets in transit and stale packets. In accordance with exemplary and non-limiting embodiments, configurable devices124maintain flow-based tables. Flow-based tables are similar to routing tables and forwarding information base (FIB) tables used by legacy routers162to forward packets. As used herein, “flow-based tables” are tables that allow configurable devices124to determine which interface a flow was forwarded on in one direction. As a result, the configurable device124can detect an anomaly arising when a detected reverse path is not the same. 
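The path-switch behavior at node A above, including the hold-down period that guards against flaps from in-transit and stale packets, can be sketched as a small tracker. The class shape, the hold-down duration, and the interface names are assumptions for illustration only.

```python
# Sketch of per-flow symmetry enforcement: the device remembers the
# interface each flow was sent on, and when return traffic arrives on a
# different interface it switches the forward path, honoring a hold-down
# period before any further switch. Timings and names are assumptions.

class FlowSymmetryTracker:
    HOLD_DOWN = 5.0  # seconds to wait before another path switch

    def __init__(self):
        self.flow_table = {}   # flow_key -> interface the flow is sent on
        self.last_switch = {}  # flow_key -> time of last path switch

    def record_forward(self, flow_key, interface):
        self.flow_table[flow_key] = interface

    def on_return(self, flow_key, interface, now):
        """Switch the forward path if the reverse arrived asymmetrically."""
        if self.flow_table.get(flow_key) == interface:
            return False  # symmetric, nothing to do
        if now - self.last_switch.get(flow_key, float("-inf")) < self.HOLD_DOWN:
            return False  # inside hold-down, keep the current path
        self.flow_table[flow_key] = interface
        self.last_switch[flow_key] = now
        return True

tracker = FlowSymmetryTracker()
tracker.record_forward("flowA", "if-B")                   # sent toward D via B
switched = tracker.on_return("flowA", "if-E", now=10.0)   # came back via E
```

After the switch, both directions of the flow use the interface toward node E, restoring symmetry; a second asymmetric arrival inside the hold-down window is deliberately ignored.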
Received packets may be identified as being associated with a given flow based on the five tuples of a packet, specifically, (1) protocol, (2) source IP address, (3) source port, (4) destination IP address, and (5) destination port. Every packet may be identified using the above parameters as belonging to a flow. The configurable device124may next proceed to make decisions for the detected anomalous flow to move the forward direction of the flow to a new path, thus preventing any further asymmetry for the flow. An anomalous flow, for example that arises from an application session, may be detected, mapped, monitored and stored at a configurable device, as described herein, and/or associated with an identity, such as a user identity or group identity. An anomalous flow, for example that arises from an application session, may be detected, mapped, monitored and stored at a multi-tenant controller, as described herein, and/or associated with an identity, such as a user identity or group identity. Note that configurable devices124maintaining flow-based tables and detecting asymmetry is separate from the manner in which the multi-tenant controller122detects asymmetry. Multi-tenant controller122based asymmetry operates on a larger time scale and can detect network level asymmetry rather than interface (configurable device) level asymmetry. Network level asymmetry may happen when, for example, prefixes assigned to a site are “moved” to a new site and traffic flows take inefficient and sometimes asymmetric paths. In accordance with exemplary embodiments, sites may be assigned predetermined prefixes, and each configurable device124may choose a set of paths to forward a given packet based on the destination IP address on the packet and based on the prefixes that have been assigned to a site. 
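The five-tuple identification above can be sketched as a key function. Normalizing the endpoint order so that both directions of a session map to one flow-table entry is an assumption introduced for the example (it is what lets a device notice that the reverse path differs), not a detail stated in the text.

```python
# Sketch: identifying packets as belonging to a flow by the five-tuple
# (protocol, source IP, source port, destination IP, destination port),
# normalized so that the forward and reverse directions of the same
# session share one flow-table key. The normalization is an assumption.

def flow_key(protocol, src_ip, src_port, dst_ip, dst_port):
    """Order the two endpoints canonically so A->B and B->A share a key."""
    a = (src_ip, src_port)
    b = (dst_ip, dst_port)
    return (protocol,) + (a + b if a <= b else b + a)

forward = flow_key("tcp", "10.0.0.1", 51000, "93.184.216.34", 443)
reverse = flow_key("tcp", "93.184.216.34", 443, "10.0.0.1", 51000)
```

With this key, the flow table lookups described below ("forward all of a flow's packets on the interface the flow was originally received from") need only one entry per session regardless of direction.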
One factor giving rise to asymmetry in such instances is that the prefixes assigned to a site may be incorrect, or were correct at the time of assignment but became incorrect due to changes made in the physical underlay network. When this happens, the device may choose a wrong set of paths, resulting in asymmetry. By observing flows end-to-end, the multi-tenant controller122may assess which flows take these inefficient paths, whether they are due to prefix issues, and whether sufficient evidence, in strength or duration, exists to alert the IT administrator. In an exemplary embodiment, an IT administrator may configure the multi-tenant controller122to take automatic action. The multi-tenant controller122may then reconfigure the prefixes to reflect the new site they are believed to belong to. In embodiments, configurable devices124at nodes make forwarding decisions at the L4 (TCP/UDP) flow level. As a result, each flow that crisscrosses a network including configurable devices124enters and exits configurable devices124that forward a specified flow on a specified path in both the forward and reverse directions. As a result, flow symmetry is preserved. In exemplary embodiments, L4 flow-based tables may be implemented. Each flow entering a node may be detected when a synchronize packet (SYN) (for TCP flows) or a new UDP session (a previously unknown server-client user datagram protocol (UDP) transaction) is seen. Upon a successful decision to forward a flow on a specific interface on a network path, the originating configurable device124remembers its decision at the flow level. Where the traffic belonging to the flow goes through a hub device within the enterprise network irrespective of the traffic's eventual destination, the hub device similarly remembers, in its flow table, the interface on which the traffic arrived before forwarding the traffic to its destination server160.
When the return flow traffic from server160arrives at the configurable device124, instead of blindly choosing the next hop for a packet as a typical router178would do, the hub device may look up the flow in its flow table and forward the flow and all of its packets on the same interface that it originally received the flow from. With the above, the return traffic to the configurable device124should arrive on the same network the forward traffic was sent and thus on the same interface the forward traffic was sent. The configurable device124may look up the flow in its flow table and forward the flow and all of its packets on the same interface that it originally received the flow from the Client. As a result, symmetry is preserved. Where the traffic belonging to the flow was sent directly to other destinations, including but not limited to, Ad Networks134, Content Delivery Network138and Third-Party Data And Applications140, or a network path that is beyond the access of a configurable device, the traffic may not flow through the configurable device124. In such cases, the return traffic to the configurable device124should arrive on the same network the forward traffic was sent and thus on the same interface the forward traffic was sent as long as there are no network issues in the external network. The configurable device124may look up the flow in its flow table and forward the flow and all of its packets on the same interface that it originally received the flow from the Client. As a result, symmetry is preserved. 
The following are illustrative clauses demonstrating non-limiting embodiments of the inventions described herein: A method comprising:detecting a data flow and an associated originating interface on a network;determining a first link over which to forward the data flow;transmitting the data flow over the determined link;receiving a return data flow; andmoving a forward direction of the return data flow to a new path if the return data flow arrived via a link other than the first link;wherein all packets following a first packet on the flow are forwarded on the same path as the first packet. Any of the clauses above further comprising storing the determined first link at an L4 level in a flow table. Any of the clauses above wherein the data flow is detected via a SYN or a detected UDP session. A networked branch device configured to:detect a data flow and an associated originating interface on a network;determine a first link over which to forward the data flow;transmit the data flow over the determined link;receive a return data flow; andmove a forward direction of the return data flow to a new path if the return data flow arrived via a link other than the first link;wherein all packets following a first packet on the flow are forwarded on the same path as the first packet. Any of the clauses above further configured to store the determined first link at an L4 level in a flow table. Any of the clauses above wherein the data flow is detected via a SYN or a detected UDP session. Any of the clauses above further comprising applying time-based application domain classification and mapping to determining steps. Any of the clauses above further comprising modeling application sessions and predicting bandwidth requirements. Any of the clauses above wherein data flows are employed via SYN. Typical Enterprise wide area networks (WANs) are implemented via routing protocols. 
Existing software-defined networking (SDN) techniques are commonly employed in data centers128(non-WAN) to pull “network intelligence” into a central controller, leaving simple forwarding instructions on data path devices. Typically, one cannot employ SDN techniques in a WAN environment because the WAN does not allow for safe network behavior and scale. Specifically, unlike SDN controller based solutions where there is high-capacity and resilient connectivity between controller and devices, the WAN is notoriously non-resilient and bandwidth availability is problematic. As a result, (a) having controllers deliver fine-grained but “simple” instructions to devices all the time is not practical and (b) devices have to work even in the absence of connectivity between the controller and themselves, sometimes for an extended period of time. Typical routing protocols exchange and use three different types of information: (1) network segments and addresses, (2) information indicative of how the network segments are connected and (3) which external network segments are not controlled by the routing instance. Using such information, typical router devices162set up packet forwarding tables812to allow packets to move from an incoming interface to an outgoing interface in the direction of their destination. In accordance with exemplary and non-limiting embodiments, a multi-tenant controller122may be utilized to algorithmically determine and establish a loop-free data forwarding model that may include the creation and management of forwarding tables812. As described more fully below, this algorithm may account for network failures and adjust the loop-free data forwarding model to account for asset loss within the network. As described herein, a centrally configured multi-tenant controller122may be both the creator and the source of (1) network segments and addresses and (2) information indicative of how the network segments are connected.
The central multi-tenant controller122may then discover what external network segments are not controlled by the routing instance via a DC or configurable device124. For example, a DC/configurable device124may run routing protocols like Border Gateway Protocol (BGP) to peer with other routing devices/entities in the DC. These other routing entities may provide information about the rest of the network and how they are connected or how to reach network addresses and segments. The DC/configurable device124may export this data to the multi-tenant controller122periodically if there is a change, which the multi-tenant controller122may save in a network DB. The same network DB may also store the network description that the multi-tenant controller122created using configurable devices124and instances. Merging the two, the multi-tenant controller has a view of the entire network without running a routing protocol itself. The multi-tenant controller122may directly set up forwarding tables812on configured devices so that traffic is forwarded on to the correct interface. The result is a loop-free forwarding model wherein various rules are enforced. For example, one rule may require that spokes are never transit points. As a result, traffic cannot be forwarded between configured VPNs or between Private WAN and configured VPNs. In a traditional network, any available interface or circuit is a candidate to forward traffic on to a destination if routing tables allow. This requires that (a) there are no misconfigurations and (b) during network changes, there exist no alternate paths that might accidentally cause a loop. Traditional networking attempts to prevent such a situation by configuring “route redistribution rules” and other carefully selected manual constraints that prevent random interfaces or exit points, such as spokes, from being used to forward traffic. 
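The merge step above, combining the controller-authored network description with the BGP-learned prefixes exported by DC/configurable devices, can be sketched with simple dictionaries. The data shapes and the rule that controller-authored entries win on overlap are assumptions for illustration.

```python
# Sketch of the network DB merge: the controller's own description (segments
# it created) is combined with externally learned prefixes exported by
# DC/configurable devices, giving a whole-network view without the
# controller running a routing protocol itself. Data shapes are assumed.

def merge_network_db(controller_segments, learned_exports):
    """Union the controller-authored and device-learned prefix maps.

    controller_segments: {prefix: site}
    learned_exports: {device: {prefix: next_hop_info}}
    """
    network_db = {p: {"source": "controller", "site": s}
                  for p, s in controller_segments.items()}
    for device, prefixes in learned_exports.items():
        for prefix, info in prefixes.items():
            # Controller-authored entries take precedence on overlap.
            network_db.setdefault(prefix, {"source": "learned",
                                           "via": device,
                                           "info": info})
    return network_db

db = merge_network_db(
    {"10.1.0.0/16": "branch-1"},
    {"dc-device": {"172.16.0.0/12": "bgp-peer-1",
                   "10.1.0.0/16": "bgp-peer-2"}},
)
```

From this merged view the controller can compute forwarding tables directly, which is what allows it to push loop-free entries to configured devices without itself peering in any routing protocol.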
In accordance with the embodiments described herein, branch devices152may be restricted from allowing traffic to transit through them, thus eliminating one cause of loops. A second rule may require that the following transits are allowed at a configurable device124: (1) Spoke←→Hub←→Spoke and (2) Spoke←→Hub←→DC/DC Transit/Private WAN. Once a branch/spoke is prevented from forwarding traffic between network segments as described above, rules are established for the DC/configurable device124. Since the DC is indeed a transit point for traffic, there is a need to allow transit traffic. In this manner, traffic may be forwarded between (a) system controlled network segments and (b) system controlled and non-system controlled network segments. A third rule may require that hub to hub transit is not permitted. While a HUB device needs to allow transit traffic as described above, allowing direct transit of traffic between two DC/configurable devices124may also result in loops. This problem may be addressed by using classic routing protocols such as, for example, BGP, between the DC/configurable devices124. However, it may also be addressed by preventing direct forwarding between DC/configurable devices124and instead requiring an external non-system controlled network or non-system controlled network segment between the DC/configurable devices124. Lastly, a fourth rule may require that each hub has at most one path for any given IP prefix at all times. This rule operates to ensure that traffic destined to a prefix takes a pre-calculated path and not one of many equally available paths. As a result, the multi-tenant controller122automatically creates and manages forwarding tables812in medium to very large networks in a scalable and reliable manner. In accordance with various exemplary embodiments, configurable device124connectivity may exhibit physical connectivity148, logical connectivity150or hybrid connectivity as described elsewhere herein.
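The loop-free rules above can be sketched as small checks a controller might run before installing forwarding entries. The role names, segment labels, and data shapes are assumptions; only the rules themselves (spokes never transit, no hub-to-hub transit, one path per prefix per hub) come from the text.

```python
# Sketch enforcing the loop-free forwarding rules described above on
# candidate forwarding entries: spokes never transit, hub-to-hub transit
# is forbidden, and each hub keeps at most one path per prefix.
# Role and segment names are assumptions for illustration.

def transit_allowed(device_role, in_segment, out_segment):
    """Rules 1 and 3: spokes never transit; hubs transit, but not hub-to-hub."""
    if device_role == "spoke":
        return False
    if device_role == "hub":
        return not (in_segment == "hub" and out_segment == "hub")
    return False

def add_hub_route(forwarding_table, prefix, path):
    """Rule 4: a hub keeps at most one path for any given prefix."""
    forwarding_table[prefix] = path   # replaces any previously installed path
    return forwarding_table

table = add_hub_route({}, "10.2.0.0/16", "path-via-dc1")
table = add_hub_route(table, "10.2.0.0/16", "path-via-dc2")
```

Because the single-path rule replaces rather than accumulates entries, traffic destined to a prefix always takes one pre-calculated path instead of one of many equally available paths, which is the property the fourth rule exists to guarantee.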
In accordance with various exemplary embodiments, algorithmic determinations and the establishment of a loop-free data forwarding model described above may take into consideration latency issues as well as various forms of asymmetry and various possible failure conditions described herein. The following are illustrative clauses demonstrating non-limiting embodiments of the inventions described herein: A method comprising:determining a plurality of network segments comprising a network;determining a manner in which the plurality of segments are connected;determining network segments and how segments are connected, at least in part, without a routing protocol;discovering a plurality of external network segments via a hub device associated with the network; andutilizing the plurality of network segments comprising the network, the manner in which the plurality of segments are connected and the plurality of external network segments to establish at least one forwarding rule. A centrally controllable multi-tenant controller for controlling a plurality of assets across a plurality of distributed computing environments wherein the controller is configured to:determine a plurality of network segments comprising a network;determine a manner in which the plurality of segments are connected;determine network segments and how segments are connected, at least in part, without a routing protocol;discover a plurality of external network segments via a hub device associated with the network; andutilize the plurality of network segments and addresses comprising the network, the manner in which the plurality of segments are connected and the plurality of external network segments to establish at least one forwarding rule.
A system comprising:
a multi-tenant controller configured to create a network description of a network; and
a HUB device forming a part of the network and configured to execute a routing protocol with at least one other device resulting in network information and to communicate the network information to the multi-tenant controller,
wherein the multi-tenant controller is configured to merge the network information with the network description.

Any of the clauses above wherein the network topology identification, simulation and load testing is controlled by the multi-tenant controller. Any of the clauses above further comprising detecting asymmetric network data traffic and associated network devices. Any of the clauses above wherein the network comprises connectivity selected from the group consisting of hybrid, physical and logical. Any of the clauses above wherein there are employed routing protocols selected from the group consisting of BGP, IS-IS, EIGRP and OSPF. Any of the clauses above wherein routing is based, at least in part, on a network prefix type.

Asymmetry in network data flows can cause performance issues and impact users 168 that access those applications. Asymmetry may also cause flow-aware functions, like firewalls, to fail, since they might be able to detect only one direction of the flow. In legacy network operations, when performance issues arise, network and IT administrators must troubleshoot the issue. If the issue is caused by asymmetry, the administrator typically must evaluate the network, device by device, to identify the specific devices that might have been in the path of the affected data flows, inspect the routing and forwarding tables 812, and then rectify the likely routing configuration that might have caused the asymmetry. This manual process is slow and may disrupt network usage while being performed. 
What is needed are methods and systems for monitoring application and network data flows, such as may be collected by a multi-tenant controller 122, and using such data to automatically identify asymmetric flows and identify the network devices through which the asymmetric data flows likely traversed. According to the methods and systems of the present invention described herein, a multi-tenant controller 122 may maintain a network topology, at least in the form of a network topology database in which the entry and exit points for each device in a network are maintained, and periodically run an algorithm using this network data to determine asymmetric data flows and the devices associated with such asymmetric data flows. Asymmetry as the term is used herein includes, but is not limited to, bandwidth asymmetry, media access asymmetry, path asymmetry, and loss-rate asymmetry (e.g., the structure of a network topology may create greater packet loss in data flows traveling one direction relative to an alternate or reverse direction). In embodiments, a configured device may collect monitoring data, such as information relating to data flows transiting through a device. Such data flow records may carry important information relating to the entry and exit points on a device. As shown in FIG. 5, millions of these data flow records may be exported from devices throughout the network to the multi-tenant controller 122. The multi-tenant controller 122 may store information including, but not limited to, a network topology database that includes information on the entry and exit points of data flows occurring on each network device and how the data flows are interconnected, and data flow records on a single data flow from multiple devices in the network. 
In embodiments, the multi-tenant controller 122 may use the stored network topology 508 and related data flow information to periodically run an "asymmetric detection" algorithm 504 that, in essence, identifies the existence of a plurality of data flows that exited a device through one path and return data flows that re-entered the device on a different path (i.e., asymmetry). In a simplified example as shown in FIG. 5, a Branch B 520 may be associated with a spoke device. A Data Center 1 128 may have a Hub Device 1 522, and a Data Center 2 128 may have a Hub Device 2 524. Continuing the example, a connection that is initiated from Branch B 520, intended for a server 160 sitting in Data Center 1, may end up taking a path to Data Center 2, and from Data Center 2 go to Data Center 1 (e.g., due to legacy routing reasons, or due to misconfiguration of IP prefixes available at the two data centers 128), and the return traffic may then go directly from Data Center 1 to Branch B 520. Branch B 520, upon receiving the traffic from Data Center 1, may immediately correct so that the forward direction of traffic is sent directly to Data Center 1. This correction may be made on demand, and outside of the multi-tenant controller 122. The change may be reported, for example by a spoke device, to the multi-tenant controller 122. The multi-tenant controller 122 may identify asymmetric flows, see which data center devices participated in the transmission of the flow (which in this example are Hub Device 1 522 and Hub Device 2 524), deduce that the flow is incorrectly going to Hub Device 2 524, and report this. The multi-tenant controller 122 may identify the IP address ranges on which this asymmetry is happening, aggregate them and report them to a user, such as an IT administrator, so that the IP prefix configuration may be corrected on the sites. 
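The core of the asymmetry check described above can be sketched as follows. The record format and field names are illustrative assumptions, not the patented algorithm; the idea is simply to flag flows whose return traffic re-enters a device on a different path than the forward traffic left it:

```python
from collections import defaultdict

def detect_asymmetric_flows(flow_records):
    """Illustrative sketch of the 'asymmetric detection' idea.

    Each record is (flow_id, device, direction, interface), where direction
    is 'egress' for the forward leg and 'ingress' for the return leg as
    observed at that device. A (flow, device) pair is flagged when the
    return traffic re-enters on a different interface than the forward
    traffic exited.
    """
    paths = defaultdict(dict)  # (flow_id, device) -> {direction: interface}
    for flow_id, device, direction, interface in flow_records:
        paths[(flow_id, device)][direction] = interface
    asymmetric = []
    for (flow_id, device), legs in paths.items():
        if "egress" in legs and "ingress" in legs and legs["egress"] != legs["ingress"]:
            asymmetric.append((flow_id, device))
    return asymmetric
```

In practice the controller would weight such detections against other devices and applications, as the next paragraph notes, before declaring asymmetry.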
To prevent false positives, the algorithm may use additional weighting factors, including other devices and different applications and their data flows, before declaring the existence of asymmetry in the network. The multi-tenant controller 122 may read data flow tables in order to determine how the data traffic egresses (e.g., data deriving from a branch 152). The asymmetry may be detected, and the multi-tenant controller 122 may then notify configured devices, as shown in FIG. 5, and the configured devices may take forwarding actions to ameliorate the data flow asymmetry. In accordance with exemplary embodiments, such methods and systems for application session modeling and prediction of bandwidth requirements described herein may be applied to application sessions as described above and may address various forms of asymmetry described herein. The following are illustrative clauses demonstrating non-limiting embodiments of the inventions described herein:

A method comprising:
monitoring a plurality of network devices to collect network performance data comprising data flow records of each device on the network and to create a network topology database;
accessing the network topology database comprising information on an entry and exit point of each device, the manner in which the devices are connected and a plurality of data flow records on a single data flow from multiple devices on the network; and
utilizing the network topology database to perform an asymmetric detection algorithm to identify one or more data flows that exited a device via a first path and returned to the device via a different second path.

Any of the clauses above wherein a controller is configured to receive information and detect asymmetry. Any of the clauses above wherein detection is based, at least in part, on latency attributes. Any of the clauses above wherein the application analysis is performed on a per session basis. 
Any of the clauses above wherein asymmetry is selected from the group consisting of bandwidth asymmetry, media access asymmetry and loss rate asymmetry.

Traditionally, applications and application data have been defined using the domain name of an application. However, this is less than satisfactory, because there can be many domain name variants used as an entry point to an application, and not knowing all variants may interfere with correctly identifying applications. For example, a software-as-a-service (SaaS) application webpage may embed widgets that invoke HTTP application programming interface (API) calls to multiple domain names, such as those of content delivery network 138 (CDN) provider domains, including but not limited to Akamai, Microsoft Azure, and Amazon CloudFront. These widgets may generate additional traffic that should ideally also be associated with the SaaS application 604 that embedded them. Context can also impact which data flows are considered part of an application. For example, an email application that is opened within a web browser may have windows that create data flows by accessing the mail servers, but there might also be data flows created by ad servers sending content to the email application. It is more representative to include such ad traffic flows within what is considered part of the email application, since a policy intended to govern traffic from the email application will underestimate the data flow volume if other flows like ad traffic are not included. Therefore, there is a need for methods and systems that enable the use of time-based cohesiveness as a determinant of application-based data classification, and of time-clustered data flows to identify domains that are associated with an application, including an anchor domain that is a high-traffic, or frequently used, entry point for usage of an application. 
According to the methods and systems of the present invention described herein, a multi-tenant controller 122 may receive data from configurable devices 124 indicating which domains are associated with an anchor domain, and assign these domains to a SaaS application 604 so that a fuller set of entry points to the SaaS application 604 is known and stored by the multi-tenant controller 122. This may have advantages in measuring, monitoring and routing the true data flows associated with a SaaS application 604, whereas traditional techniques may underestimate the traffic associated with a SaaS application 604 by not recognizing the full set of domains that are actually generating traffic in association with an application. Domain types may include, but are not limited to, top-level domains 608 (e.g., ".com"), geographic domains (e.g., country domains), and infrastructure domains (e.g., ".arpa"). Anchor domains as the term is used herein typically refer to entry point domains for SaaS applications 604. Anchor domains may be thought of as the top-level domain 608 and may be used to find sub-domains that are associated with SaaS applications 604. For example, domains accessed following accessing an anchor domain may be correlated with the anchor domain. In an example, Office365 may have an entry point domain but have many related domain names (e.g., server1.office365.com). In another example, an email application like Yahoo mail may use multiple mail servers and also have frames that are serving ads to the browser from other servers 160. The multi-tenant controller 122 may store these anchor domain-domain associations, and a time-distance approach may be used to discover the domains that are associated with the anchor domain. In an example, and referring to FIG. 6, the domains that are accessed within a configurable time period after the same client 512 accesses an anchor domain may be correlated with the anchor domain. 
The correlation technique may identify some domains that are not closely related to the anchor domain (e.g., domains accessed from a different tab in the browser window) and associate these with the anchor domain. The domains identified to be associated with the anchor domain may be sent to the multi-tenant controller 122, and at the multi-tenant controller 122, further processing may be performed to eliminate the false positives and distill them to a corrected set of domains. A configurable device 124 within the network may detect an application's anchor domain and mark the source of the data and the time of detection. Following this time of detection, ongoing data flows from the same traffic source may be marked as belonging to the application that is associated with the anchor domain. The multi-tenant controller 122 may continue to receive data regarding the traffic source from the configurable devices 124 and classify the data flows, for example, into one of the following: i) Applications B and C that might belong to Application A, ii) unknown applications without a known definition that might qualify as belonging to A, or iii) Applications D and E that might not qualify as belonging to A. The time-based cohesiveness of the data flows may be used to cluster traffic and data flows that appear to have a common underlying SaaS or other application. This time-based cohesiveness may allow ancillary data traffic to be detected and associated with the application, such as ad traffic that is presented within or in conjunction (e.g., interstitial ads) with an application. In accordance with exemplary embodiments, such methods and systems for time-based application domain classification and mapping may be applied to application sessions as described above. 
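The time-distance correlation described above might be sketched as follows. The record format, window length, and field names are illustrative assumptions; real processing would also include the controller-side pruning of false positives described in the text:

```python
def correlate_with_anchor(accesses, anchor_domain, window_seconds=30):
    """Sketch of time-window anchor-domain correlation.

    'accesses' is a time-ordered list of (timestamp, client, domain)
    tuples. Domains a client touches within `window_seconds` after that
    same client touches the anchor domain are tentatively associated
    with the anchor; the controller would later prune false positives.
    """
    anchor_last_seen = {}  # client -> last time it accessed the anchor
    associated = set()
    for ts, client, domain in accesses:
        if domain == anchor_domain:
            anchor_last_seen[client] = ts
        elif client in anchor_last_seen and ts - anchor_last_seen[client] <= window_seconds:
            associated.add(domain)
    return associated
```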
The following are illustrative clauses demonstrating non-limiting embodiments of the inventions described herein:

A method comprising:
detecting at a device an application having an anchor domain;
marking the application with a traffic source having an entry point domain that accessed the application and a time of detection; and
designating network flows from the traffic source within a predetermined time from the time of detection as belonging to the application.

A networked branch device configured to:
detect an application having an anchor domain;
mark the application with a traffic source having an entry point domain that accessed the application and a time of detection; and
designate network flows from the traffic source within a predetermined time from the time of detection as belonging to the application.

Any of the clauses above further comprising modeling application sessions and predicting bandwidth requirements. Any of the clauses above further comprising performing anomalous session identification, quarantine and restriction. Any of the clauses above wherein the application analysis is performed on a per session basis.

Planning for enterprise bandwidth requirements based on application usage, and developing associated policies for the handling of data flows associated with applications, can be difficult and inaccurate, based at least in part on the fact that the same application, within the same enterprise, might be utilized in very different ways by the user groups within the enterprise. For example, the same application may have different operational modes that create different data flow types. As a simple example, User Group 1 might use Application Z to download summary financial reports, while User Group 2 might use Application Z to create these reports. In this simplified example, User Group 2 is a heavier user of Application Z in terms of time used, but also in terms of the nature and volume of the data flows that are associated with Application Z. 
If such differences are not discovered and used as part of creating application profiles, then such profiles can miscalculate the network resources needed to run an application. When aggregated across applications, the problem may be compounded and can lead to enterprise bandwidth estimates that are unreliable for predicting actual future network bandwidth requirements. In another example, personnel in a Sales and Marketing Dept. may have a different usage profile for the same application than when the application is used by personnel from the Engineering Dept. Therefore, there is a need for a process of identifying usage differences that may assist in planning for bandwidth capacity needs and making informed policy decisions; for modeling application session(s) to determine network bandwidth, where application sessions include transactional and bulk applications and those of mixed use; and for using data mining and other statistical techniques to predict network requirements and potential points of limited capacity that could adversely impact data flows, or other network weaknesses. Such a solution may identify anomalies in the usage patterns and apply necessary checks, select appropriate WAN paths, and allocate appropriate bandwidth capacity (based on what an application requires). According to the methods and systems of the present invention described herein, statistical and data mining techniques may be applied for dynamic and on-going characterization of application usage, by session, for performance and predictive planning purposes, which may improve network performance. Big data-type analytic techniques may be used to characterize flows that are associated with application usage. Configured spoke devices may collect such flow data and send it to the multi-tenant controller 122. 
Analysis of this data may improve an administrator's ability to summarize the resource needs of applications along at least the following dimensions:
bandwidth usage in both directions;
network response times;
application response times;
number of active and idle application sessions; and
maximum number of concurrent application sessions.

The results of analyzing the flow data may be reported to users, such as IT administrators of the network, and provided to spoke devices in the form of traffic profile summaries. The spoke devices may in turn use the traffic profiles to monitor bandwidth usage and application performance. Data sent to a spoke device may include, but is not limited to, i) ingress and egress bandwidths determined based on analytics performed at the controller, and ii) application response times. A spoke device may use this information to determine which one of the various available paths for an application is best suited to the application. Modeling and other analytic techniques, as described herein, may also be used to characterize data beyond flow data that are associated with application usage. Parameters used in modeling may include, but are not limited to, application response time, ingress/egress bandwidth, server response time, transaction start and end times, size of transactions, or some other parameter. Applications may broadly be classified into three categories: transactional applications, bulk applications, and media applications. Transactional applications may be described by the size of the transactions processed by an application, and the frequency of transactions, which can be modeled and characterized. Data mining techniques like centrality and classification may also be applied in combination to fine-tune the model. The resulting models may be better able to predict the size and frequency of transactions for a typical application session, as well as the frequency of application sessions. 
Bulk applications by nature consume any amount of bandwidth offered to them for bulk data transfer and are often file transfer applications. These applications may also be modeled, but only to predict the frequency of these application sessions. Applications that fall into both transactional and bulk categories may be modeled to identify the transactional versus bulk periods of usage, and the frequency of transactional and bulk transactions. Media applications are unique in their own respects. These applications have the following types of characteristics: a) they are sensitive to latency and jitter, b) they may be unidirectional, and c) when bi-directional, they will appear to be two unidirectional flows with different properties. These applications are further characterized by the type of codec negotiated and used on a unidirectional basis. Some example codecs include H.264 and MP4 for video, and G.711 and G.729 for audio. These codecs dictate how much bandwidth is needed in each direction, and what the latency and jitter requirements are. In embodiments, the modeling and analytic methods used to characterize an application session and the corresponding network and bandwidth requirements may also be used to monitor a network in real time for bandwidth requirements at the sub-channel level (transactional and bulk application types) and to identify and predict trends in bandwidth requirements. Because traffic flow characteristics associated with an application are not static, and a single data flow's characteristics may change over the course of a session, accurate measurement and prediction of application bandwidth needs may require real time monitoring, such as that performed by a configurable device 124, as described herein, to detect usage and bandwidth changes. For example, within a data flow for a single session of an application's usage, different behaviors may be monitored, such as application behavior moving from a transactional-type data flow to a bulk type. 
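A shift from transactional to bulk behavior within a monitoring window might be detected with a heuristic along these lines. The thresholds and metric names are invented purely for illustration and are not drawn from the patent:

```python
def classify_window(bytes_per_sec, duty_cycle):
    """Toy heuristic classifier for one monitoring window of a flow.

    duty_cycle is the fraction of the window during which the flow was
    actively sending (an assumed metric). Bulk flows tend to send
    continuously at high rate; transactional flows are bursty with idle
    gaps. The cutoffs below are illustrative, not measured values.
    """
    if duty_cycle > 0.8 and bytes_per_sec > 1_000_000:
        return "bulk"
    if duty_cycle < 0.5:
        return "transactional"
    return "mixed"
```

A device monitoring successive windows could then flag the session when consecutive windows change class, e.g. from "transactional" to "bulk".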
Further, different human users 168 may cause an application session to have a different bandwidth profile. Application versioning may also impact bandwidth (i.e., a recent upgrade may offer new features utilizing bandwidth differently than prior versions). Monitored data is sent from configurable devices 124 to the multi-tenant controller 122 for processing. In order to develop an accurate application profile, a longer time window of an application session may better allow a multi-tenant controller 122 to detect the plurality of characteristics present that have an impact on bandwidth and network resource requirements. Within a single application session there may be a plurality of traffic types detected, such as: transactional (e.g., a human-to-human chat session over the internet 142); real-time, bi-directional (e.g., a VoIP call (low bandwidth) or a video call, like Skype (high bandwidth)); and/or bulk transfers (e.g., large file sharing applications). Traffic may be directed over transmission control protocol (TCP) or user datagram protocol (UDP) data flows. Bandwidth needs may be determined for an application and/or sub-application as a historical trend, using statistical models. Baseline application transaction times may be computed in the context of each site and each application as a historical trend using statistical models. Packet loss, jitter, and MOS scores may also be computed for media applications as a historical trend using statistical models. Based on application type detection, a multi-tenant controller 122 may analyze application session data that it receives from a configurable device 124 at the branch 152 level to determine what amount of bandwidth is necessary for proper application performance. Application performance scores may be used to record and adjust the bandwidth profile for an application, where the profile states the performance and bandwidth criteria needed (i.e., a rule) for the type(s) of data flows detected during application sessions. 
In an example, the application traffic quality metric (ATQM) is one type of performance score, and it is a measure of the times taken for completing application transactions. To measure the ATQM, according to the methods and systems described herein, a configured device may track the requests and responses between client 512 and server 160 (for transactional applications) and determine the time taken, from a user's 168 perspective, for the full transaction to complete. For transactional data flows, the peak sustained bandwidth needed during the period of the transaction, in each direction, may be determined. This determination may be based on historical monitoring measures. The bandwidth may then be recorded in a traffic profile. For transactional applications, the size of the transactions and the frequency of the transactions may be modeled with various distributions such as normal, Weibull, or a long-tailed distribution such as Pareto. Such distributions may be characterized using parameters, such as the average/peak bandwidth over short time intervals (e.g., a few hundred milliseconds to a couple of seconds), the frequency of transactions, and spatial and temporal localities identified, and/or data mining techniques like centrality and classification may also be applied in combination to fine-tune the model. Such models may be used to predict the size and frequency of the transactions for one application session and thereby for a combination of 'n' sessions. In an example, at the start of a transaction, it may be determined that there is a need for 3 MB per second based on the codecs being used. By peering into the session handshake, the configurable device 124 may be able to determine, based on the network traffic for example, which codec is used and at what quality it is being negotiated. 
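The transaction-time measurement underlying an ATQM-style score could be sketched minimally as follows. The record format (request start, response completion) is an assumption for illustration, not the patented metric definition:

```python
def application_transaction_quality(transactions):
    """Minimal sketch of an ATQM-style score: the mean time, from the
    user's perspective, for full request/response transactions to
    complete.

    'transactions' is a list of (request_start_ts, response_done_ts)
    pairs in seconds, an assumed record format paired from the
    request/response tracking described above.
    """
    durations = [done - start for start, done in transactions]
    return sum(durations) / len(durations)
```

A real device would compute this per application and per site as a historical trend, alongside peak sustained bandwidth per direction.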
The configurable device 124 may use this information to place data traffic on the appropriate data path, for example, to avoid the placement of a bulk download on a high speed "lane." Because malware often performs data flows that are inappropriate for the data lane on which they occur, such identification may assist security implementations and analyses of suspect data flows. Frequently in data networks, a single bulk data flow for an individual application can potentially consume all available bandwidth, but in practice, due to the TCP behavior of client 512 and server 160, socket buffers at either end and the bandwidth delay product (BDP) of the flow, there is a maximal bandwidth that can be less than the total link capacity. The monitoring information received by a multi-tenant controller 122 can be used to determine the range of minimum, average, and maximum bandwidth seen for an application flow based on data flow type, which can be computed and added to the traffic profile for the application. The multi-tenant controller 122 may recognize that a transactional flow has become a bulk flow and vice versa, based at least in part on the traffic classification profile, and applications may be classified into categories based at least on the profiles developed. This may assist with appropriately categorizing applications with a plurality of functional capacities. For example, a single application may enable chat and video conferencing, the first being transactional and low bandwidth, and the second transactional, real time and high bandwidth. The user type, the time of day, or some other factor may in part determine which aspect of the application is more likely to be used. Users 168 of the applications can also be classified and incorporated into the application profiles (tellers vs. loan officers, etc.). 
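Folding monitoring samples into the per-flow-type minimum/average/maximum range described above is straightforward to sketch; the sample format and type labels here are illustrative assumptions:

```python
from collections import defaultdict

def build_traffic_profile(samples):
    """Sketch: fold (flow_type, bps) monitoring samples into the
    min/average/max bandwidth range per data-flow type, which the
    controller is described as adding to an application's traffic
    profile. Field names are illustrative.
    """
    by_type = defaultdict(list)
    for flow_type, bps in samples:
        by_type[flow_type].append(bps)
    return {
        t: {"min": min(v), "avg": sum(v) / len(v), "max": max(v)}
        for t, v in by_type.items()
    }
```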
Once the spoke devices in a network have available to them the application profiles from the multi-tenant controller 122, they can better adapt by altering data traffic treatment and link capacity usage. In accordance with exemplary embodiments, such methods and systems for application session modeling and prediction of bandwidth requirements described herein may be applied to application sessions as described above. Further, such methods and systems may utilize dynamic automated QoS definitions as described herein. In accordance with exemplary embodiments, the methods described herein may be applied to applications as described herein. Exemplary embodiments may utilize file transfer applications as described herein. The methods and systems described herein may employ modeling and analytics as described herein. Other exemplary embodiments may utilize parameters for categorizing data flows as described herein. The following are illustrative clauses demonstrating non-limiting embodiments of the inventions described herein:

A method comprising:
receiving from a networked spoke device information describing network flows to and from an application;
analyzing the information to characterize the application in at least one dimension selected from the group consisting of bi-directional bandwidth usage, network response times, application response times, a number of idle and active application sessions and a maximum number of concurrent application sessions; and
transmitting the dimensions to at least one networked spoke device as traffic profile information.

Any of the clauses above wherein the information describes L4 level network flows. 
A centrally controllable multi-tenant controller for controlling a plurality of assets across a plurality of distributed computing environments, wherein the controller is configured to:
receive from a networked spoke device information describing network flows to and from an application;
analyze the information to characterize the application in at least one dimension selected from the group consisting of bi-directional bandwidth usage, network response times, application response times, a number of idle and active application sessions and a maximum number of concurrent application sessions; and
transmit the dimensions to networked spoke devices as traffic profile information.

Any of the clauses above wherein the information describes L4 level network flows.

A method comprising:
modeling a transactional application operating in a networked environment by a size of transactions and a frequency of transactions with a set distribution to produce one or more modeled parameters; and
analyzing the modeled parameters to predict a data flow size and frequency of one or more transactional application sessions.

Any of the clauses above wherein the set distribution is selected from the group consisting of normal, Weibull and Pareto.

A method comprising:
modeling a bulk application operating in a networked environment by a size of transactions and a frequency of transactions with a set distribution; and
analyzing the modeled parameters to predict the frequency of one or more bulk application sessions.

Any of the clauses above wherein the set distribution is selected from the group consisting of normal, Weibull and Pareto. 
A method comprising:
modeling an application operating in a networked environment by a size of transactions and a frequency of transactions with a set distribution to produce one or more modeled parameters;
detecting, based at least in part on the modeling, a hybrid application traffic characteristic, wherein the hybrid application traffic characteristic comprises a first data traffic type that is correlated with at least one data traffic characteristic of a transactional application and a second data traffic type that is correlated with at least one data traffic characteristic of a bulk application; and
analyzing the modeled parameters to predict a data flow size and frequency of one or more hybrid application sessions.

Any of the clauses above wherein the set distribution is selected from the group consisting of normal, Weibull and Pareto.

A method comprising:
sending network usage data relating to an application usage session from a configurable device to a multi-tenant controller, wherein the network usage data comprises a first type of data flow associated with an application and a second type of data flow associated with the application;
identifying a bandwidth requirement for each of the first and second types of data flows associated with the application based, at least in part, on historical data relating to the performance of the application and an associated data flow;
storing within the multi-tenant controller the bandwidth requirement as an application bandwidth profile;
sending the application bandwidth profile to at least one spoke device; and
using the application bandwidth profile to route data flows associated with the application.

Any of the clauses above wherein the application analysis is performed on a per session basis. Any of the clauses above wherein routing is based, at least in part, on a dynamic and automated QoS definition. 
Any of the clauses above wherein applications are selected from the group consisting of transactional applications and file transfer applications. Any of the clauses above further comprising employing modeling and analytics selected from the group consisting of Pareto and Weibull. Any of the clauses above further utilizing parameters to categorize data flows.

In traditional networks, the attribute indicating the amount of bandwidth that is available is often specified manually and is often incorrect and/or out of date. This may negatively impact factors such as routing, traffic engineering, and quality of service (QoS) controls. What is needed are mechanisms that utilize link bandwidth detection for active and idle links that can be automated and updated in real time. Traditional networks often use QoS standardized controls at an individual interface level to manage network performance, and to determine such characteristics as prioritization of traffic flows. QoS standardized controls include, but are not limited to, shaping, policing, random early discard (RED), weighted RED, tail drop, low latency queues, fair queues 818, and buffers attached to a plurality of queue types. These controls may be used to classify traffic and provide, for example, a weight-based allocation of bandwidth according to traffic classification. In some embodiments, QoS may be defined in a dynamic and automated manner by the system that does not require user input and operates to adjust QoS to network conditions and requirements. However, enterprise needs are most often expressed in terms of business priorities that may differ from network priorities, and when in conflict, the traditional controls may give greater preference to a network priority over a business priority. For example, a real-time application may have lower business priority than a bulk application; however, current technologies will still often give preference to the real-time application over the bulk application. 
Traditional networks may also not allow or facilitate dynamically changing the number of queues, buffers, schedulers and shapers 154, which are fixed. Thus, if new applications need to be added and they need a higher number of low-latency queues, it will not be possible to serve those applications and achieve the needed performance. What is needed are methods and systems for using business priorities that are expressed as network policies that give preference to, and enforce, business priorities over network priorities, and that may scale by dynamically increasing or reducing QoS resources (e.g., queues, buffers, shapers 154, and schedulers) to ensure that business priority needs are met as policies change. In traditional networks, hybrid applications having transactional and bulk functionalities and related data flows have varying levels of bandwidth requirements that traditional networks are often poor at recognizing and adjusting to accommodate for optimal application performance. What is needed are methods and systems for identifying the current functional capacity of an application (e.g., in a transactional or bulk mode of operation), and adjusting the network bandwidth available to the application based at least in part on the current mode of operation of the application, and in accordance with a business policy. Further, anomalous traffic detection is needed in order to characterize a typical application session's bandwidth requirement for a given application and restrict usage of the bandwidth that is allocated to the application to a level appropriate to allow the application to properly function. Constraining the bandwidth may be advantageous, for example, in preventing denial of service (DoS) attacks during which bandwidth needs will exceed the permitted bandwidth threshold specified by a policy. Traditionally, link capacity has been determined using a test-response measurement and/or active-traffic based estimation.
Test-response may be performed when links are idle and typically happens at the "beginning" of a site's operation. Specific requests may be sent to servers 160 hosted by an enterprise or a third party, and the available bandwidth measured. Active-traffic based estimation refers to a method by which the system estimates available bandwidth while there is active traffic going through the links. However, when active traffic is flowing through the links, test-response measurement can be disruptive and could yield wrong results. Active-traffic based estimation relies on three key behaviors of TCP connections between a client 512 and a server 160: 1) if there is congestion anywhere on the path of the TCP connection, there will be packet loss; 2) a client 512 that initiates a relatively long-lived TCP connection that has more than just a few packets exchanged in each direction will attempt to continually increase the rate at which it transmits data. The server 160, if it sends a lot of data, will do the same. For a very long-lived connection, the client 512 and/or server 160 may use the entire link's capacity should network conditions allow. And, 3) after performing behavior 2, above, if network utilization has reached its limit, TCP connections will correctly settle on their share of the link before performing behavior 2 again. Referring to FIG. 7, a configured device located within a network branch 152 may employ a shaper 154 on every link connecting to a network 705 in each direction (to and from). A shaper, as the term is used herein, may be a component of a configurable device or a component that is independent of a configurable device. A shaper 154 is a network artifact that limits the amount of data to a prescribed value. For example, if a shaper 154 is set to 10 Mbps in one direction, it will not allow more than 10 Mbps in that direction irrespective of how much is being attempted in that direction.
If more than 10 Mbps is attempted to be sent, the shaper 154 will drop packets to keep the rate at 10 Mbps. Congestion control algorithms (e.g., Reno and binary increase congestion control (BIC)) may be used for bandwidth estimation. For idle links, the estimation may be performed in a manner that is similar to an Internet speed test. For active links, link capacity may be determined through continuous monitoring and correction. In an example, links may have an ingress and an egress shaper 154, as described herein, that controls how much traffic can be put on the link or received from the link. Any traffic exceeding the shaper's 154 limits may be dropped. A shaper's limit may be set by bandwidth estimation logic. At the start, this limit may be set to 'unlimited' or a very large value. Bandwidth estimation may periodically consider two inputs for every shaper 154: (a) a count of dropped packets at the shaper 154 [DROP-SH], and (b) a count of dropped packets due to congestion on the collection of flows on the link [DROP-CNG]. Bandwidth estimation logic may "hunt" for proper link capacity as depicted in FIG. 8A. If no DROP-CNG, but DROP-SH, is detected at the shaper 154 for two consecutive minutes, the result, in one example, may be to increase link bandwidth by 5%. In another example, if DROP-CNG is detected for 2 consecutive minutes, the link bandwidth may be set at 95% of the current link bandwidth (one exception to this may be in a startup/initialization phase, where the link bandwidth is set to 80% of the current link bandwidth to allow for faster convergence). In another example, if nothing is detected for more than two hours, the link bandwidth may be increased by 5%. According to the methods and systems described herein, these drops may be detected, and if these drops are present over a period of time (e.g., 120 seconds), the system may declare that it has detected actionable drops at the shaper 154 (DROP-SH).
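The periodic correction rules in the examples above may be sketched as follows. This is an illustrative, non-limiting Python sketch; the function name, units and argument names are hypothetical, and the percentages follow the examples in the text.

```python
def adjust_link_bandwidth(limit_kbps, drop_sh, drop_cng,
                          idle_hours=0.0, startup=False):
    """Sketch of the correction rules above (names hypothetical).

    drop_sh:  actionable drops seen at the shaper over a period (DROP-SH)
    drop_cng: congestion drops seen on the link's flows (DROP-CNG)
    """
    if drop_cng:
        # Congestion detected for 2 consecutive minutes: back off to 95%
        # of the current limit (80% during startup/initialization, to
        # allow for faster convergence).
        return limit_kbps * (0.80 if startup else 0.95)
    if drop_sh:
        # The shaper, not the network, is the bottleneck: probe up by 5%.
        return limit_kbps * 1.05
    if idle_hours > 2:
        # Nothing detected for more than two hours: probe up by 5%.
        return limit_kbps * 1.05
    return limit_kbps
```

Applied periodically to every shaper 154, such a rule would "hunt" toward the proper link capacity as described above.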
The configured device at the branch 152 may also monitor packet retransmits that might occur on TCP connections. Packet retransmits may indicate packet loss (hence the retransmission). However, packet loss may happen due to many conditions, some of which are not due to general network congestion. For example, TCP connections to a specific server 160 in Kansas might see packet retransmits while others do not. This is very likely due to a server 160 or network issue close to the server 160 in Kansas. To avoid false positives, the systems and methods described herein may monitor packet retransmits across all connections holistically, assess the number of applications seeing packet retransmits, and perform such operations over a moving time window of, in one hypothetical example, 180 seconds, processed every 10 seconds. Thus, the 180-second time window advances to the "right" by 10 seconds every 10 seconds. In another example, if twelve such consecutive measurements indicate that, for example, more than two-thirds of currently active applications at that device are seeing packet retransmits, then the system may declare that it has detected actionable congestion in the network 528 on that link. Such congestion-caused packet loss (DROP-CNG) may cause packet retransmits.
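The holistic retransmit check described above may be sketched as follows. This is an illustrative, non-limiting Python sketch; the class name is hypothetical, and it implements only the consecutive-measurement threshold from the example (measurements every 10 seconds; congestion declared after twelve consecutive readings above two-thirds).

```python
class CongestionDetector:
    """Hypothetical sketch of the holistic retransmit check above:
    one measurement arrives every 10 s; actionable congestion
    (DROP-CNG) is declared only after twelve consecutive measurements
    show more than two-thirds of active applications retransmitting."""

    REQUIRED_CONSECUTIVE = 12
    THRESHOLD = 2 / 3

    def __init__(self):
        self.streak = 0

    def record(self, apps_with_retransmits, active_apps):
        """Feed one 10-second measurement; return True on DROP-CNG."""
        if active_apps and apps_with_retransmits / active_apps > self.THRESHOLD:
            self.streak += 1
        else:
            # A loss episode local to one server does not persist across
            # all applications, so the streak resets (avoiding the
            # false positive in the Kansas example above).
            self.streak = 0
        return self.streak >= self.REQUIRED_CONSECUTIVE
```

A per-application loss (the Kansas server example) would rarely push more than two-thirds of applications over the threshold for twelve consecutive readings, which is what makes the check holistic.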
With DROP-SH and DROP-CNG, the system may, for example, perform the following:

Hunting State | DROP-SH | DROP-CNG | Inference | Action
------------- | ------- | -------- | --------- | ------
Init 802 | Present | Absent | Shaper 154 is too restrictive and there could be more capacity on the link | Increase shaper 154 limit by 5%. Go to Hunting state
Init 802 | Don't care | Present | Network is seeing congestion | Decrease shaper 154 limit to 80% of current value. Go to Hunting state
Init 802 | Absent | Absent | If this combination persists for 5 minutes, we have achieved STABLE 808 condition | Go to Stable 808 state
Hunting | Present | Absent | Shaper 154 is too restrictive and there could be more capacity on the link | Increase shaper 154 limit by 5%
Hunting | Absent | Present | Network is seeing congestion | Decrease shaper 154 limit to 95% of current value
Hunting | Present | Present | Network is seeing congestion | Decrease shaper 154 limit to 95% of current value
Hunting | Absent | Absent | If this combination persists for 5 minutes, after having adjusted shaper 154 limit, we have achieved STABLE 808 condition | Go to Stable state
Stable 808 | Absent | Absent | If this combination persists for 2 hours in the presence of near-capacity link usage, time to see if more capacity is available in the network | Increase shaper 154 limit by 5%. Go to Hunting state
Stable 808 | Present | Absent | Shaper 154 is too restrictive and there could be more capacity on the link | Increase shaper 154 limit by 5%. Go to Hunting state
Stable 808 | Don't care | Present | Network is seeing congestion | Decrease shaper 154 limit to 95% of current value. Go to Hunting state

If there is no congestion in the network and there is demand for more utilization, as seen by the drops at the shaper 154, then an attempt to increase link capacity may be performed by easing the shaper 154 limit. If there is congestion in the network, then a decrease in the rate of data pushed on the link may be implemented by decreasing the shaper 154 limit. The result may be for each TCP connection to adjust to the new settings and, after a set period of time, settle on the new available capacity at the shaper 154.
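One transition of the hunting state machine above may be sketched as follows. This is an illustrative, non-limiting Python sketch; the function and state names are hypothetical, and `calm_minutes` counts how long neither drop type has been observed.

```python
def hunt_step(state, drop_sh, drop_cng, limit, calm_minutes=0):
    """Sketch of one transition of the hunting table above.

    state: one of 'init', 'hunting', 'stable' (names hypothetical).
    Returns the (next_state, next_limit) pair.
    """
    if drop_cng:
        # Congestion dominates regardless of DROP-SH ("don't care"):
        # cut to 80% from Init, 95% otherwise, and (re)hunt.
        factor = 0.80 if state == "init" else 0.95
        return "hunting", limit * factor
    if drop_sh:
        # Shaper is too restrictive; there may be more link capacity.
        return "hunting", limit * 1.05
    # Neither drop type present.
    if state != "stable" and calm_minutes >= 5:
        return "stable", limit
    if state == "stable" and calm_minutes >= 120:
        # Quiet for ~2 hours near link capacity: probe for more.
        return "hunting", limit * 1.05
    return state, limit
```

Run periodically per shaper 154, this would converge on the Stable 808 condition and re-enter Hunting when drops or long quiet periods indicate the limit should move.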
According to the methods and systems of the present invention described herein, shapers 154 may be added or deleted based on the level of business priority associated with the traffic. This may be performed dynamically, for example, based on application demand, where the dynamic performance is carried out in compliance with a policy maintained at a multi-tenant controller 122. Performance-based routing may be accomplished by estimating link capacity automatically and dynamically, in near-real-time. Network traffic may be classified based at least in part on a business priority, and bandwidth allocation to a plurality of applications running on a network may be determined based at least in part on assigned business priorities. A network administrator may create multiple business priority classes and assign each a priority order and/or ranking. In an example, a lower number for priority order may be considered a higher priority. For example, an administrator may create 4 classes, "Platinum", "Gold", "Silver" and "Bronze", and give them priority order 10, 20, 30 and 40. Important applications like Office365 may get a Platinum, high-priority ranking, whereas recreational applications, such as YouTube, may be ranked lowest priority. An administrator may also provide a percent allocation of bandwidth between the priority classes (e.g., 50% to Platinum, 30% to Gold, and so forth). A configured device may first detect the network type of each application data flow, for example, real-time, bulk or transactional. The configured device may create multiple queues for each business priority (e.g., strict priority 814, fair 818 and anomaly 820). A strict priority queue 814, for example, may be a queue for sending latency-sensitive real-time traffic (e.g., voice). If traffic is present in the queue, it may be serviced before other queues (e.g., fair 818 or anomaly 820) are serviced.
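The administrator's percent allocation across business priority classes may be sketched as follows. This is an illustrative, non-limiting Python sketch; the class names and percentages follow the example above, and the function name is hypothetical.

```python
def allocate_by_class(link_kbps, shares_pct):
    """Sketch: split link bandwidth across business priority classes
    by the administrator's percent allocation (names hypothetical)."""
    assert sum(shares_pct.values()) <= 100, "shares cannot exceed the link"
    return {cls: link_kbps * pct / 100 for cls, pct in shares_pct.items()}

# The four classes from the example above (priority order 10/20/30/40;
# a lower number means a higher priority), on a hypothetical 100 Mbps link:
plan = allocate_by_class(
    100_000, {"Platinum": 50, "Gold": 30, "Silver": 15, "Bronze": 5})
```

Each class's share could then back the per-class shaper 154 described below.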
Fair priority 818 queue examples include, but are not limited to, queues that carry transactional or bulk traffic, such as web applications or file downloads. These queues may be serviced in round robin fashion along with the anomaly queues 820. Anomaly queues 820 are queues that carry traffic for applications showing anomalous bandwidth consumption behavior (e.g., a chat application that rarely consumes more than 10 Kbps starts consuming 5× its regular consumption). An anomaly queue 820 may be used for those flows that exceed their traffic-profile-based bandwidth by a large margin. A shaper 154 may be used in front of the anomaly queues 820 to ensure that the anomalous flows do not impact bandwidth available for other flows. Anomaly queues may have a shaper 154 that restricts the maximum bandwidth allowed to such anomalously behaving applications. The number of queues and the amount of buffering for each may be based on a formula that uses the current number of data flows and their network types. Fair queues 818 may be split into fair queues 818 for transactional traffic versus bulk traffic. In embodiments, real-time traffic flows may be assigned to real-time queues, transactional flows to transactional fair queues and bulk flows to bulk fair queues 818, and directed by a multi-tenant controller 122, as described herein. A flow that changes from transactional to bulk, and vice versa, may be dynamically moved to the right queue. A configured device may use a shaper 154 per business priority class. This configured shaper 154 may be set to the sum of the total estimated bandwidth across application flows that are currently queued against the strict 814, fair 818 and anomaly 820 queues. This configured shaper 154 may be used if the administrator has given a percentage bandwidth allocation across the business priority classes. The administrator can add or delete a business priority class dynamically. This, in turn, may be used to dynamically add or delete business priority shapers 154 and queues.
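The queue-selection rule described above may be sketched as follows. This is an illustrative, non-limiting Python sketch; the function names, queue labels and the anomaly margin are hypothetical (the 5× margin mirrors the chat-application example above).

```python
def is_anomalous(observed_kbps, profile_kbps, margin=5.0):
    """Hypothetical sketch: a flow exceeding its traffic-profile-based
    bandwidth by a large margin (cf. the 5x chat example above)."""
    return observed_kbps > profile_kbps * margin

def assign_queue(flow_type, observed_kbps, profile_kbps):
    """Sketch of the queue selection above: anomalous flows are
    quarantined behind the shaped anomaly queue; otherwise each flow
    goes to the queue matching its detected network type."""
    if is_anomalous(observed_kbps, profile_kbps):
        return "anomaly"
    return {"real-time": "strict-priority",
            "transactional": "transactional-fair",
            "bulk": "bulk-fair"}[flow_type]
```

A flow that changes type (e.g., transactional to bulk) would simply be reclassified on its next evaluation and land in the right queue.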
A business priority scheduler may "drain" traffic. As used in this context, "drain" refers to removing a packet sitting in a queue and sending it out of an interface, across the queues of a plurality of business priority classes. The scheduler may use an algorithm that uses a combination of the business priority and network type of the queues to decide how to drain them. This algorithm may ensure that, if the interface has sufficient bandwidth, then lower business priority traffic having a real-time network type will continue to have satisfactory performance while not impacting the performance of real-time, bulk or transactional application flows belonging to a higher business priority class. If sufficient bandwidth is not available for both business classes, then the business priority scheduler algorithm may give preference to flows of higher business priority. The algorithm may use a similar method when there is more than one business priority class. In embodiments, and referring to FIG. 8B, the bandwidth that is considered sufficient for a business priority class may be the sum of real-time flow bandwidth amounts, sustained peak bandwidth for transactional flows and minimal sustained bandwidth of bulk flows. This information may be presented in the traffic profile of an application. Within a business priority class, real-time flows may be given a higher priority, followed by transactional flows, followed by bulk flows. The priority between transactional and bulk may not be strict 814 but may instead be weight-based, with a higher weight provided for transactional flows but a non-zero weight for bulk flows. Real-time flows, however, may have strict 814 priority and get drained first. An algorithm may take into consideration the dynamically changing link bandwidth of Internet links and also the percentage allocation specification set by an administrator between business classes. The dynamic, algorithmic monitoring may be used bidirectionally for each WAN interface.
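The drain order within one business priority class may be sketched as follows. This is an illustrative, non-limiting Python sketch; the function name and the specific 3:1 weights are hypothetical, chosen only to show a higher but non-zero weight for transactional over bulk, with strict priority for real-time, as described above.

```python
from itertools import cycle

def drain_class(queues, weights=None):
    """Sketch of the drain order within one business priority class:
    real-time packets drain first (strict priority); transactional and
    bulk then share service in a weighted round robin, each with a
    non-zero weight (weights hypothetical)."""
    weights = weights or {"transactional": 3, "bulk": 1}
    # Real-time flows have strict priority and get drained first.
    drained = list(queues.get("real-time", []))
    queues["real-time"] = []
    # Expand weights into a repeating service pattern, e.g. t, t, t, b.
    pattern = [q for q, w in weights.items() for _ in range(w)]
    turn = cycle(pattern)
    while queues.get("transactional") or queues.get("bulk"):
        q = next(turn)
        if queues.get(q):
            drained.append(queues[q].pop(0))
    return drained
```

A full scheduler would apply this per class, visiting classes in business-priority order and honoring the per-class shapers 154.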
It may be applied against the egress direction on a WAN interface for WAN traffic going towards the WAN. It may also be applied against the egress direction towards the LAN for WAN traffic that is coming from the WAN, and the configured device may be able to monitor the bandwidth usage in a passive manner and without introducing additional data traffic in order to test the bandwidth (i.e., the configured device is not adding to congestion). In embodiments, configured spoke devices may adjust the bandwidth queues at reasonable or set time periods to ensure that the bandwidth offered to all sessions of an application matches the need of the application sessions, while honoring the business priorities. For example, if there are 10 sessions of a P1 application, and each session needs 10 Kbps, then the bandwidth queues may be adjusted such that 100 Kbps is offered to that application, assuming bandwidth is available for the rest of the P1 applications, and the total share of bandwidth used by all P1 applications doesn't exceed a certain percentage. If the number of sessions of the same application increased to 50, then the bandwidth queues may be adjusted such that 500 Kbps is offered to that application, while still meeting the other constraints. Configured devices may detect anomalous application sessions depending on the amount of bandwidth consumed, and may move those sessions to an anomaly queue 820, where the bandwidth offered to such application sessions is restricted. In accordance with exemplary embodiments, such methods and systems for anomalous application session identification, quarantine and restriction described herein may be applied to application sessions as described above. Further, such methods and systems may utilize dynamic automated QoS definitions as described herein. The methods and systems described herein may employ modeling and analytics and use QoS standardized controls as described herein.
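The session-scaling example above may be sketched as follows. This is an illustrative, non-limiting Python sketch; the function name and the 50% class share are hypothetical, standing in for "a certain percentage" in the text.

```python
def offered_bandwidth_kbps(sessions, per_session_kbps,
                           link_kbps, class_share=0.5):
    """Sketch of the example above: offer sessions x per-session need,
    capped so the class's applications never exceed their share of the
    link (class_share hypothetical)."""
    return min(sessions * per_session_kbps, link_kbps * class_share)
```

With 10 sessions needing 10 Kbps each, 100 Kbps is offered; at 50 sessions, 500 Kbps, until the class's share cap binds.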
Other exemplary embodiments may utilize parameters for categorizing data flows as described herein. The following are illustrative clauses demonstrating non-limiting embodiments of the inventions described herein:

A method comprising: issuing a tuned request on a specified active link having an ingress shaper and an egress shaper to a server utilizing a link capacity to an extent in both a forward path and a return path; determining a link capacity for the active link; monitoring the active link; and dropping any traffic traveling via the active link when a limit of at least one of the ingress shaper and the egress shaper is exceeded.

A method comprising: issuing a request on a specified active link having an ingress shaper and an egress shaper to a server utilizing a link capacity to an extent in both a forward path and a return path; determining a link capacity for the active link; monitoring the active link; determining a limit of the ingress shaper and the egress shaper based, at least in part, on bandwidth estimation logic; and dropping any traffic traveling via the active link when the limit is exceeded.

A method comprising: receiving a plurality of business priority classes; detecting a network type associated with each of a plurality of application data flows in a network, wherein the network data flow types are selected from the group consisting of real-time, bulk and transactional; creating a plurality of strict priority queues, fair queues and anomaly queues for each of the plurality of business priority classes; and assigning real-time network data flows to real-time queues, transactional flows to fair queues and bulk flows to bulk fair queues.
A method comprising: monitoring a bandwidth of an application comprising an application session at a configured spoke device at a predetermined time interval; and adjusting the bandwidth such that the bandwidth matches a requirement of the application session while meeting the requirements of one or more defined business priorities.

Any of the clauses above further employing a multi-tenant controller based mapping of a user or device identity to a network level identity.

Any of the clauses above wherein the application analysis is performed on a per session basis.

Any of the clauses above wherein routing is based, at least in part, on a dynamic and automated QoS definition.

Any of the clauses above further comprising employing modeling and analytics selected from the group consisting of Pareto and Weibull.

Any of the clauses above further comprising employing QoS standardized controls selected from the group consisting of shaping, policing, random early discard, tail drop, low latency queues, fair queues, anomaly queues and buffers.

Any of the clauses above further utilizing parameters to categorize data flows.

Traditionally, network traffic engineering is based on destination IP addresses and/or a source IP address, but not on higher-level user or device identity. The creation of usage-based policy may allow an enterprise to enforce network policy using the identity of a user 168 that is accessing an application. For example, Active Directory (AD) 904 may be used as a source of user 168 and group identity information. What is needed are methods and systems for enforcing a user 168 or user-based policy by mapping IP-to-user events 912, 914 to a spoke site, where such events derive from multiple elements which may be present in multiple other sites, and enforcing policy at the network level using higher-level identity. To do so, an administrator needs to map higher-level user 168 and/or device identity to network level identity.
Network level identity, as used herein, may be in terms of L2 identity (MAC address, VLAN tag) and/or L3 identity (IP address) and/or L4 identity (e.g., TCP port number, UDP port number). According to the methods and systems of the present invention described herein, a central multi-tenant controller 122 may be used to collect the identity information relating to users 168 of an application running on a network, and use a horizontally scalable algorithm to keep track of changing identity information so that the multi-tenant controller 122 can keep a policy updated, enforce user 168 based network policy and provide accurate user 168 based networking and application monitoring data. In embodiments, a multi-tenant controller 122 may be used to collect the IP-to-user mapping events 912, 914 and may use a horizontally scalable algorithm within the multi-tenant controller 122 to decide which spoke site needs an event. Based on this information, the multi-tenant controller 122 may then distribute the mapping to the spoke device in the correct spoke site. A spoke device may use the IP-to-user mappings 912, 914 to identify to which user 168 or user group a particular application flow belongs. It may then enforce a user- or user group-based policy on that data flow. Spoke devices and hub devices support roles, including but not limited to discovery of AD using DNS [DNS ROLE], and tracking changes in user 168 and group identities using lightweight directory access protocol (LDAP) communication with AD [LDAP ROLE]. In embodiments, a multi-tenant controller 122 may run horizontally scalable services called Identity Management (IDM) Services 902. For a given tenant, the multi-tenant controller 122 IDM service may select a set of elements and instruct them to run the DNS ROLE and discover all AD domains and list the AD servers in the enterprise. The DNS ROLE may continuously monitor additions and deletions of AD domains, additions/deletions of AD servers, and changes in the IP addresses of AD servers.
The DNS ROLE may keep the multi-tenant controller 122 IDM service 902 informed of these changes. The multi-tenant controller 122 may present the list of AD domains and AD servers to the tenant administrator and request credentials to communicate with the AD servers using LDAP and Windows Management Instrumentation (WMI). WMI is a communication protocol supported by Microsoft AD servers. Devices external to Windows AD servers can use this protocol to communicate with a Windows AD server and obtain information on events (including login and logout events) on the Windows AD server. For each AD domain, the multi-tenant controller 122 IDM service may select an element at a hub site that will run the LDAP ROLE to communicate using LDAP with one or more AD servers for the AD domain. The multi-tenant controller 122 may use a specific algorithm to decide which hub site is the optimal hub site whose elements should run the LDAP ROLE, and also determine whether there are multiple elements in the hub site communicating with the AD servers and which AD servers are involved. The algorithm may minimize load on the AD servers, achieve fair load distribution across hub device elements, and contain LDAP communication to the LAN rather than the WAN. The element running the LDAP ROLE may derive basic information such as full name, AD numerical ID and AD string ID for users 168 and groups, or some other type of identifying information. It may do this for those users 168 and user groups that the multi-tenant controller 122 wants to track for the purpose of enforcing a policy (e.g., the user 168 or user group is used in a policy's identity field) and for the purpose of monitoring and troubleshooting (e.g., the user 168 is present in a spoke site with a configurable element). In embodiments, a multi-tenant controller 122 may make available an existing user 168 or user group identity to an IT administrator so that the administrator can use them in a policy.
The multi-tenant controller 122 may generate a separate controller-based ID for each AD identity and further keep a mapping of its ID in AD. The multi-tenant controller 122 may send a policy to each element and, in lieu of the AD identity, send down the ID it has locally generated. This ID may then be used by each element to enforce policy. The multi-tenant controller 122 IDM service 902 may also make the AD information available to a statistics and analytics service so that it can display the full user name and AD information about the user 168 for the purpose of reporting and monitoring. The benefits of these methods and systems of identity-based policy may include, but are not limited to: 1) the use of user identity in policy may improve WAN traffic engineering for performance and compliance; 2) the multi-tenant controller 122 may optimize mapping of element-to-AD preferred communication paths for the purpose of monitoring user/group identity additions, deletions and changes (the multi-tenant controller 122 may accomplish this at scale across hundreds of AD sources and hundreds of elements per enterprise, and across thousands of such enterprises); and 3) the multi-tenant controller 122's use of algorithms may allow for a horizontally scalable controller service that specifically minimizes LDAP traffic on the WAN. Each configurable element may support a role for tracking of security login events using Windows Management Instrumentation (WMI) communication with AD [WMI ROLE]. The security login events may include information relating to user name and IP address. In embodiments, a multi-tenant controller 122 may run a horizontally scalable IP to Site Mapping (ISM) Service 903. For a given tenant, the multi-tenant controller 122 IDM service may select a set of elements and instruct them to run the DNS ROLE and discover all AD domains and the list of AD servers in the enterprise.
The DNS ROLE may continuously monitor additions and deletions of AD domains, additions/deletions of AD servers and changes in the IP addresses of AD servers, and keep the controller IDM service 902 informed of these changes. The multi-tenant controller 122 may present the list of AD domains and AD servers to the tenant administrator and request credentials to communicate with the AD servers' WMI. The multi-tenant controller 122 may run an algorithm to decide which element (from all the spoke and hub elements) will run the WMI ROLE that will contact specific AD instances. The algorithm may use information about the subnets/prefixes served by each AD, and the local and discovered prefix sets of a site, and select a spoke or hub element to run the WMI ROLE so that it can get the necessary security login events. The algorithm may identify the correct AD, contain WMI communication to the LAN and minimize it over the WAN, and take care of failures of the AD. In embodiments, the configurable elements running WMI roles using WMI with AD may receive specific AD login security events indicating the IP address, AD user string ID and AD user numerical ID, and send an IP-to-user mapping event 912, 914 to the multi-tenant controller 122's IP-to-Site Mapping (ISM) service 903. In an example, a WMI ROLE on a configurable element may receive an AD security event (login event) from AD using WMI. This event may include the IP address and the AD ID and name of the user 168 who has logged into AD. The WMI ROLE may then form a message using this information (this message may be called an IP-to-user mapping) and send the message to the multi-tenant controller 122's ISM service 903. The ISM service 903 may run an algorithm that builds a data-based lookup method that uses the local and discovered prefix sets 915 of sites and the IP address in the IP-to-user mapping event 912, 914 to generate an IP-to-Site mapping 916.
It uses this IP-to-Site mapping 916 to decide to which spoke site a given IP-to-user mapping event 912, 914 should be sent. The ISM 903 may change the AD user ID to a multi-tenant controller 122 user ID (which matches the user ID in a policy of interest) in the IP-to-user mapping event 912, 914. It may also add group IDs to the event for all groups to which the user belongs. The ISM 903 may send the enriched IP-to-user mapping event 912, 914 to the correct spoke site and also record this IP as belonging to the user in the multi-tenant controller 122 database. Each database entry may have a validity period of 30 days (or some other timeframe), after which the entry is purged. In embodiments, spoke elements may use the IP-to-user event 912, 914 obtained from the ISM service 903 to build an IP-to-userID and IP-to-groupID cache. For each incoming application flow, this cache may be used to determine the user 168 and group ID for the flow, and this ID may be used to look up and enforce policy. When the spoke elements generate flow statistics for the flow, they may include the user 168 and group IDs as part of the flow statistic record and send it to a multi-tenant controller statistics service. The multi-tenant controller statistics and analytics service may store the statistical records with the user 168 and group ID information, and provide monitoring and troubleshooting information for a particular user 168 or group. The multi-tenant controller 122 may collect all the IP-to-user events 912, 914 from multiple elements and map them to the final site having an element that needs to consume that event for enforcing policy on the data path. The multi-tenant controller 122 may map which element should communicate with which AD as an IP-to-user event source. The multi-tenant controller 122 may scale across hundreds of AD sources and hundreds of elements per enterprise, and across thousands of such enterprises.
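The ISM lookup described above may be sketched as follows. This is an illustrative, non-limiting Python sketch; it interprets the "data-based lookup method" as a longest-prefix match over each site's local and discovered prefix sets, and all site names and prefixes shown are hypothetical.

```python
import ipaddress

def site_for_ip(ip, site_prefix_sets):
    """Sketch: resolve the IP in an IP-to-user mapping event to a spoke
    site by longest-prefix match over each site's prefix sets (an
    assumed interpretation of the lookup described above)."""
    addr = ipaddress.ip_address(ip)
    best_site, best_len = None, -1
    for site, prefixes in site_prefix_sets.items():
        for prefix in prefixes:
            net = ipaddress.ip_network(prefix)
            # Prefer the most specific prefix that contains the address.
            if addr in net and net.prefixlen > best_len:
                best_site, best_len = site, net.prefixlen
    return best_site

# Hypothetical local and discovered prefix sets for two spoke sites:
prefix_sets = {"branch-east": ["10.1.0.0/16"],
               "branch-west": ["10.1.2.0/24", "192.168.5.0/24"]}
```

An enriched IP-to-user event would then be forwarded to the element at whichever spoke site this lookup returns.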
The use of algorithms may allow for a horizontally scalable multi-tenant controller service that eliminates WMI traffic on WAN and reduces the amount of IP-to-user mapping traffic on WAN. The following are illustrative clauses demonstrating non-limiting embodiments of the inventions described herein: A method comprising:executing at a controller a horizontally scalable service Identity Definitions Manager (IDM) Service;mapping active directory (AD) domains to WAN network elements DNS ROLE and LDAP ROLE;instructing a plurality of network elements associated with a tenant to discover a plurality of AD domains and AD servers in an enterprise using the DNS ROLE;receiving from the plurality of network elements running DNS ROLE information indicative of changes to network attributes selected from the group consisting of AD domains, additions and subtractions of AD servers and changes in an IP address of AD servers;transmitting the received AD domains and AD servers to a tenant administrator and requesting credentials to communicate with added AD servers using LDAP; andexecuting an algorithm to determine which element will contact specific AD instances tominimize lightweight directory access protocol (LDAP) traffic volume occurring on the WAN and to ensure AD instances can still be reached in case of failure of any one network element;monitoring in Active Directory servers changes in at least one identity (user or group) of a network user by using the LDAP ROLE on the network elements; andupdating a policy, based at least in part on the mapping of user identity in AD domains, at a multi-tenant controller, wherein the tracking of changing identity information is implemented as a horizontally scalable service the Identity Definitions Manager Service A centrally controllable multi-tenant controller for controlling a plurality of assets across a plurality of distributed computing environments wherein the controller is configured to:running at a controller a horizontally 
scalable service Identity Definitions Manager (IDM) Service;mapping active directory (AD) domains to WAN network elements DNS ROLE and LDAP ROLE;instructing a plurality of network elements associated with a tenant to discover a plurality of AD domains and AD servers in an enterprise using the DNS ROLE;receiving from the plurality of network elements running DNS ROLE information indicative of changes to network attributes selected from the group consisting of AD domains, additions and subtractions of AD servers and changes in an IP address of AD servers;transmitting the received AD domains and AD servers to a tenant administrator and requesting credentials to communicate with added AD servers using LDAP; andexecuting an algorithm to determine which element will contact specific AD instances tominimize lightweight directory access protocol (LDAP) traffic volume occurring on the WAN and to ensure AD instances can still be reached in case of failure of any one network element;receiving from the LDAP ROLE of network element changes in Active Directory servers in at least one identity (user or group) of a network; andupdating a policy, based at least in part on the mapping of user identity in AD domains, at a multi-tenant controller, wherein the tracking of changing identity information is implemented as a horizontally scalable service the Identity Definitions Manager Service A network element executing DNS ROLE and LDAP ROLE configured to:obtain instructions from a central multi-tenant controller IDM server on which domains to discover AD servers in using DNS role Discovering using DNS ROLE information indicative of changes to network attributes selected from the group consisting of AD domains, additions and subtractions of AD servers and changes in an IP address of AD servers, and transmitting this information to the multi-tenant controller IDM service;receiving from the multi-tenant controller IDM service the credentials to communicate with discovered AD servers using 
LDAP; andmonitoring in Active Directory servers changes in at least one identity (user or group) of a network user by using the LDAP ROLE on the network elements and transmitting these changes to the multi-tenant controller IDM service. A method comprising:executing at a controller a horizontally scalable service IP to Site Mapping (ISM) Service;instructing a plurality of network elements associated with a tenant to discover a plurality of AD domains and AD servers in an enterprise;receiving from the plurality of network elements information indicative of changes to network attributes selected from the group consisting of AD domains, additions and subtractions of AD servers and changes in an IP address of AD servers;transmitting the received AD domains and AD servers to a tenant administrator and requesting credentials to communicate with added AD servers using WMI;executing an algorithm to determine which element will contact specific AD instances in order to contain WMI communication over LAN and minimize WMI communication over WAN;monitoring, using the WMI role on the network elements, the AD servers' security login events comprising an IP address, a user AD ID and a user name;converting the login events to IP-to-user events and transmitting these to the ISM service in the controller;using the ISM service to map these IP-to-user events to the right spoke site;sending the events with enriched information comprising one or more group IDs for the user to the element in the spoke site; andusing the enriched IP to user event at the spoke site to enforce policy based on user and group IDs and to enrich flow and application statistics with user and group information.
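The ISM lookup described in the clause above — mapping an IP-to-user login event to the right spoke site — can be sketched as a longest-prefix match, assuming each spoke site is known by the LAN prefixes behind it. The site names, prefix table, and field names below are illustrative assumptions, not the actual product schema.

```python
import ipaddress

# Hypothetical prefix table: each spoke site is assumed to be reachable
# behind a set of LAN prefixes known to the controller.
SITE_PREFIXES = {
    "branch-ny": ["10.1.0.0/16"],
    "branch-sf": ["10.2.0.0/16", "10.2.128.0/17"],  # overlapping, more-specific prefixes allowed
}

def map_ip_to_site(ip):
    """Return the spoke site whose most-specific prefix contains `ip`, else None."""
    addr = ipaddress.ip_address(ip)
    best_site, best_len = None, -1
    for site, prefixes in SITE_PREFIXES.items():
        for prefix in prefixes:
            net = ipaddress.ip_network(prefix)
            if addr in net and net.prefixlen > best_len:
                best_site, best_len = site, net.prefixlen
    return best_site

def enrich_event(event, group_ids_by_user):
    """Turn a raw IP-to-user event into the enriched event sent to the spoke element."""
    site = map_ip_to_site(event["ip"])
    return {**event, "site": site,
            "group_ids": group_ids_by_user.get(event["user_ad_id"], [])}
```

The enriched event carries the site and the user's group IDs, so the spoke element can enforce group-based policy and tag flow statistics without querying AD itself.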
A centrally controllable multi-tenant controller for controlling a plurality of assets across a plurality of distributed computing environments wherein the controller is configured to:execute at a controller a horizontally scalable service IP to Site Mapping (ISM) Service;instruct a plurality of network elements associated with a tenant to discover a plurality of AD domains and AD servers in an enterprise;transmit the received AD domains and AD servers to a tenant administrator and request credentials to communicate with added AD servers using WMI;transmit the credentials to the network elements running WMI ROLE;execute an algorithm to determine which element will contact specific AD instances in order to contain WMI communication over LAN and minimize WMI communication over WAN;receive from the elements a plurality of AD server login events which contain user IP address, user AD ID and user name;convert these login events to IP-to-user events and transmit these to the ISM service in the controller;use the ISM service to map these IP-to-user events to the right spoke site;send these events with enriched information comprising a group ID for the user to the element in the spoke site; anduse the enriched IP to user event at the spoke site to enforce policy based on user and group IDs and to enrich flow and application statistics with user and group information.
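The clause above has the controller run an algorithm deciding which network element contacts which AD instance so WMI stays on the LAN where possible. A minimal greedy sketch of such an assignment, under assumed field names, prefers an element at the AD server's own site and otherwise falls back to the least-loaded element reachable over the WAN:

```python
# Illustrative sketch only: `name`/`site` fields and the load metric are
# assumptions, not the actual assignment algorithm from the text.
def assign_ad_servers(ad_servers, elements):
    """ad_servers, elements: lists of {'name': str, 'site': str} dicts.
    Returns {ad_server_name: element_name}."""
    load = {e["name"]: 0 for e in elements}
    assignment = {}
    for server in ad_servers:
        # Prefer same-site elements so WMI traffic stays on the LAN.
        local = [e for e in elements if e["site"] == server["site"]]
        candidates = local or elements  # WAN only when no local element exists
        chosen = min(candidates, key=lambda e: load[e["name"]])
        assignment[server["name"]] = chosen["name"]
        load[chosen["name"]] += 1
    return assignment
```

A production version would also weigh reachability and failover (so each AD instance stays reachable if any one element fails), which this sketch omits.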
A method comprising:obtaining instructions from a central multi-tenant controller's IDM service to discover a plurality of AD domains and AD servers in an enterprise using DNS ROLE;transmitting information to a multi-tenant controller IDM indicative of changes to network attributes selected from the group consisting of AD domains, additions and subtractions of AD servers and changes in an IP address of AD servers;receiving credentials from the multi-tenant controller to communicate with added AD servers using WMI;monitoring, using the WMI role, the AD servers' security login events comprising an IP address, a user AD ID and a user name; andconverting the login events to IP-to-user events and transmitting the login events to an ISM service in a controller. A method comprising:receiving an enriched IP to user event at a spoke site from a multi-tenant controller's IP to Site Mapping Service to enforce policy based, at least in part, on a user and a group ID; andenriching flow statistics and application statistics with user and group information. Any of the clauses above further utilizing a network identity type. With traditional networks, changes to the network may comprise a major event. Typically, effort is made to implement planned changes during planned "outage" windows. Further, changes are often made blindly, with personnel left to 'pray' that the outcome is acceptable. Regardless, such changes are necessary. For example, a new application needs to be rolled out and the network has to be adjusted to allow, accommodate, control and route the application's traffic. Change planning and execution in a network is more art than science due to (1) the complexity of the network, (2) the lack of an end-end view and control of the network and (3) the lack of understanding of the application's needs and demands. Such network planning is not new. However, such planning is typically beyond the capabilities of the network management tools that manage the devices in the network.
Such planning is typically performed in isolation from the real network and is complex, works with stale data and requires translation from a planned view to effecting the changes in the network. In accordance with exemplary and non-limiting embodiments, there is provided a multi-tenant controller122to centrally view and control the network topology, the configurable devices124in the topology and the application of policies to solve the operational needs of network administration. A multi-tenant controller122may be used to identify and map the topology of a network and use the network view to determine optimal policy and device usage of network assets. As described more fully herein, there is implemented a three part solution. First, the network528is simplified from a user interaction perspective and there are no routing protocols and routing translations to configure. Second, an end-end view of the controlled network528is always available. Lastly, the described solution is built, in part, on an understanding of the applications, and thus enabling the network for an application is a native capability. In accordance with exemplary embodiments, the multi-tenant controller records data indicative of where all devices and applications are located. As described more fully below, this provides for global knowledge and allows for the provision of what-if scenarios. As a result, a customer may query the multi-tenant controller122directly to see which devices and applications are resident on the network704. For example, a network528supports 450 sites and wishes to add a 451st site. How will this change affect the routing topology and the flow of traffic? In accordance with exemplary embodiments, when a new site is added, the changes to the network topology are updated at the multi-tenant controller122.
Because the multi-tenant controller122has access to the HTTP flow, the multi-tenant controller122may be queried to, for example, show all paths between sites and simulate network function. The multi-tenant controller122may have access to bandwidth and links in its topology graph of the network. Utilization of each link can also be gathered and stored for access by the multi-tenant controller122. In accordance with exemplary and non-limiting embodiments, the multi-tenant controller122may simulate the effects of changes to a network528. The multi-tenant controller122may keep track of all configurations (devices, interfaces, capacities, policies and more) for all the devices in the network528. Configuration management may be in real or near-real time to ensure minimal data latency. The multi-tenant controller122may also keep large amounts of monitoring data on traffic types, traffic load and application statistics. The multi-tenant controller122may also allow for the creation of ‘empty’ or hypothetical sites, specifically, ones that are not real but are, for example, being planned. In an exemplary embodiment, the effect of adding a new ‘site’ or device to the network528may be simulated through the multi-tenant controller122with the ability to view projected traffic flow, capacity needs on links and DC devices, and business impact through traffic offload from traditional high cost network connectivity. In accordance with such embodiments, the network administrator may provide (a) the type of site, (b) planned connectivity to the site, (c) planned policies for the site, and (d) an anticipated number of end users168at the site. As used herein, the type of site refers, generally, to a spoke (remote branch152), hub (data center128) or service (where certain network services exist but these sites are unlike a traditional DC or branch152). 
Using these inputs and the additional data to which the multi-tenant controller122has access from the network528, the multi-tenant controller122may provide an integrated view of the new planned site plus all the existing sites already in the network528, as well as assess the routing effects, traffic flow effects and the policy effects of the new planned site as if it were already in the network528. To estimate an impact on applications and their traffic, the multi-tenant controller122may use any available existing statistics and monitoring data collected from similar sites in the customer's network already controlled and managed. Examples of such data include, but are not limited to, (1) the type and nature of applications, (2) the number of users168, or other proxies for user count such as, for example, the number of flows/sec and the total simultaneous application sessions and flows, (3) the peak and average amount of data entering and exiting such a site and the periodicity of such metrics, (4) the measured application performance, network connectivity latencies and packet losses seen by applications from a site with geographical and connectivity similarity and (5) the configured and measured available bandwidth on links/connectivity, and diurnal or other periodic variation in available bandwidth on such links from similar providers (since a customer typically involves the same provider across many sites). As described more fully below, application profiles are accessible based on information collected from various types of sites. An application profile includes bandwidth, latency and jitter requirements. In addition, there may be stored and accessible user (group) profiles based on information collected from various sites. User profiles may include information such as (a) which applications a typical user168belonging to a group consumes and (b) the typical times at which a user168accesses those applications.
Using this information, one may estimate what the connectivity needs will be for a new site if the administrator can tell what types of user groups are going to be at the site, how many users there are likely to be, and the like. The effect of adding or changing a policy in the network can be simulated using the multi-tenant controller122with the ability to view projected traffic flow, capacity needs on links and DC devices, and business impact through traffic offload from traditional high cost network connectivity. When a policy is planned to be added or changed, the network administrator provides (a) the policy to be added/changed and (b) the sites that he/she intends to apply the change to. With these inputs and the additional data the multi-tenant controller122has from the network528, the multi-tenant controller122may provide an integrated view of the new policy plus all the existing policies and the sites they apply to and assess the routing effects and traffic flow effects as if it were already in the network528. To estimate the impact on applications and their traffic, the multi-tenant controller122may use any available existing statistics and monitoring data collected from similar sites in the customer's network already controlled and managed. In addition to the statistics described above, the multi-tenant controller122may further consider an estimate of how moving flows away from or to a link or destination frees up or congests the links and removes load from or adds load to a site. For example, if traffic to office365.com is going through a DC and then to the Internet142, adding a policy at a site to send such traffic directly to office365.com without routing it via the DC may illustrate how load and congestion on the links at the DC would change.
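The office365.com offload example above can be reduced to a toy what-if calculation. The link names and Mbps figures below are assumptions for illustration; the point is only that traffic moved to a direct internet breakout is subtracted from both legs of the DC path:

```python
# Hedged sketch: link names ("branch-to-dc", etc.) are illustrative, not the
# controller's actual topology model.
def simulate_direct_breakout(link_loads_mbps, app_bw_mbps):
    """Project link loads after an app's traffic, currently hairpinned through
    the DC, is instead sent straight to the internet from the branch."""
    projected = dict(link_loads_mbps)
    projected["branch-to-dc"] -= app_bw_mbps        # DC-bound WAN link frees up
    projected["dc-to-internet"] -= app_bw_mbps      # DC internet egress frees up
    projected["branch-to-internet"] = projected.get("branch-to-internet", 0) + app_bw_mbps
    return projected
```

Running this over every affected site's links gives the projected congestion relief at the DC that the what-if view would display.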
In accordance with other exemplary embodiments, the effect of adding new bandwidth/capacity with additional network connectivity (like a new Internet link) can be simulated through the multi-tenant controller122with the ability to view projected traffic flow, capacity needs at DC devices, and business impact through traffic offload from traditional high cost network connectivity. When a new link or network capacity is planned to be added to a site or set of sites, the network admin provides (a) the site and (b) the network links that will be added to the site. With these inputs and the additional data it has from the network528, the multi-tenant controller122may provide an integrated view of the new planned network connectivity plus all the existing links already available at the site and assess the routing effects, traffic flow effects and the policy effects of the new planned network connectivity as if it were already at the site. To estimate impact on applications and their traffic, the multi-tenant controller122may use any available existing statistics and monitoring data collected from similar sites in the customer's network already controlled and managed. As described herein with reference to various exemplary embodiments, estimation may be performed to predict the impact of network changes on applications and related network traffic. In all of these embodiments, various inputs may be utilized. For example, a first input that may be utilized includes the topology of a network. As described elsewhere herein, the topology may be synthesized at the multi-tenant controller122. The topology of a network may provide a tree-graph representation of configurable devices124, connectivity through MPLS144(Private WAN), Internet142(Public WAN) and IPSEC based VPN methods over Private WAN and Public WAN. 
The multi-tenant controller122may also discern reachability to other parts of the network (where configurable devices124may not exist) based on information exported from the configurable device124at the branch. A second input may comprise a type of network, e.g., Private WAN, Public WAN, Internet VPN and Private VPN. A third input may comprise application definitions—both system-provided and customer-defined—maintained at the multi-tenant controller122. Identity/User definitions described elsewhere herein may be part of the definitions used in policies. User/Identity definitions may be received, processed and interpreted at the multi-tenant controller122as specified elsewhere herein. A fourth input may comprise policies that prescribe which application is allowed to use which type of connectivity at a specific site. A fifth input may comprise monitoring data (telemetry) from existing active sites or sites in monitor mode on some or all applications. The telemetry from each site for every application may include the following: (a) bandwidth consumed over time over every network type including average and peak data, (b) data transferred over time over every network type, (c) number of simultaneous flows over time and (d) maximum flows/sec at the site and application level. A sixth input may comprise monitoring data (telemetry) from existing active sites or sites in monitor mode on media applications. The telemetry from each site for any media application present includes the following additional items: (a) jitter seen over time for audio and/or video including average and peak values, (b) packet loss seen over time for audio and/or video including average and peak values and (c) an MOS Score over time for audio and/or video.
A seventh input may comprise site parameters such as: (a) a type of site, e.g., branch152or DC, (b) network types and number of such connectivity, (c) a list of applications or a pointer to another site that is similar, (d) a policy choice or a pointer to another site that has the needed policy and (e) a number of users168or a pointer to another site that is similar. In accordance with the exemplary embodiments described herein, what-if use cases and processing may be performed. When determining which paths a specific application should take from a site, one may take the topology of the network, apply the policy for the application, and reduce the topology to allowed connectivity based, at least in part, on a policy. The result is a set of network links that can carry the proposed application traffic. When seeking to determine which paths traffic from a specific user168and/or application take from a site, one may take the topology of the network, apply the policy for the application and user168and reduce the topology to allowed connectivity based, at least in part, on a policy. The result is a set of network links that can carry the application traffic for a specific user168or user group. When seeking to determine what a new branch152may look like in terms of device and network capacity, one may proceed as follows. First, the multi-tenant controller122may calculate normalized average and peak bandwidth utilization and maximum flows/sec at a site for every application on a per site-size basis, categorized appropriately. Site-size may be decided by simultaneous flows at existing sites. Then, the multi-tenant controller122may derive applications and policies at the new site from user input or by reference to another site. The multi-tenant controller122may derive simultaneous flows at the new site by reference to another site or by taking the number of users168and multiplying by, for example, 20 with a floor value of 200.
The multi-tenant controller122may then build a sample "site model" that is a mix of estimated bandwidth (BW) utilization and estimated maximum flows/sec as follows: Average-BW_est = the average of the sum of every app's average BW, scaled to the new site's size; Peak-BW_est = the peak of every app's peak BW, scaled to the new site's size; Max-FLOW/SEC_est = the maximum flows/sec at the site, scaled to the new site's size; and Simul-FLOWS_est = the direct value as described above. The resulting Simul-FLOWS_est and Max-FLOW/SEC_est may be used to recommend a device type and capacity, while Average-BW_est and Peak-BW_est may be used to suggest network type changes to accommodate the average and peak bandwidth needs. When seeking to determine how an application's traffic may change in a network due to a new proposed site, one may proceed as follows. First, a multi-tenant controller122may derive applications and policies at the proposed site from user input or by reference to another site. Next, one may take the topology of the network and add the proposed site along with its network connectivity and types. Next, one may apply the policy for the application and reduce the topology to allowed connectivity based on policy. For every DC site, the multi-tenant controller122may estimate the contribution of traffic (BW-INCR_est) from sites similar to the proposed site. Similarity may be estimated by site-size and policy similarity to account for the amount and type of application traffic that comes in to the DC from the proposed site. The result is a set of network links that can carry the traffic for every application from and to the proposed site, with a bandwidth increase of BW-INCR_est at each DC site due to the addition of the proposed site. In accordance with exemplary embodiments, data center128applications may include an array of applications including, but not limited to, Exchange, SFDC and SAP. Such methods and systems as described herein may utilize and be utilized by all manner of data center applications.
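The site-model estimates above can be transcribed directly into code. The per-app statistics schema and the scale factor are assumptions for illustration; the formulas (sum of averages, peak of peaks, scaled maximum flows/sec, and the users×20 flow count with a floor of 200) follow the text:

```python
def build_site_model(similar_site_apps, size_scale, users):
    """Sketch of the sample 'site model'.
    similar_site_apps: per-app stats from a comparable existing site,
    e.g. [{'avg_bw': 10, 'peak_bw': 30, 'max_fps': 100}] (assumed schema);
    size_scale: new site's size relative to the reference site."""
    return {
        # sum of every app's average BW, scaled to the new site's size
        "avg_bw_est": sum(a["avg_bw"] for a in similar_site_apps) * size_scale,
        # peak of every app's peak BW, scaled to the new site's size
        "peak_bw_est": max(a["peak_bw"] for a in similar_site_apps) * size_scale,
        # maximum flows/sec at the reference site, scaled to the new site's size
        "max_fps_est": max(a["max_fps"] for a in similar_site_apps) * size_scale,
        # simultaneous flows: 20 per user, with a floor value of 200
        "simul_flows_est": max(20 * users, 200),
    }
```

As the text notes, the flow estimates drive the device-type recommendation, while the bandwidth estimates drive suggested network-type changes.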
In accordance with exemplary embodiments, such methods and systems for controller-based network topology identification may be applied to application sessions as described above. Further, such methods and systems may be applied to various branch152types as described herein. The following are illustrative clauses demonstrating non-limiting embodiments of the inventions described herein: A method comprising:receiving information describing an addition of a first site comprising at least one application to an existing network wherein the information is selected from the group consisting of type of site, planned connectivity to the site and planned policies for the site; andestimating an impact on the operation of the at least one application and associated network traffic using statistical analysis of monitored data collected from a second site similar to the first site. A method comprising:receiving information describing a change to a policy for an existing network wherein the information is selected from the group consisting of the policy to be changed and the network sites to which the policy is to apply; andestimating an impact on the operation of the at least one application and associated network traffic using statistical analysis of monitored data collected from a second site similar to the first site. A method comprising:receiving information describing a new network link to be added to an existing network site wherein the information is selected from the group consisting of the site and the network link to be added; andestimating an impact on the operation of the at least one application and associated network traffic using statistical analysis of monitored data collected from a second site similar to the first site. Any of the clauses above wherein the application analysis is performed on a per session basis. Any of the clauses above wherein a branch type is selected from the group consisting of retail store, bank, POS device, and distributed computing site.
Any of the clauses above wherein at least one application comprises a data center application. Any of the clauses above wherein applications are selected from the group consisting of transactional applications and file transfer applications. Any of the clauses above further comprising performing modeling and analytics selected from the group consisting of Pareto and Weibull. Any of the clauses above wherein a network type is selected from the group consisting of wired and wireless. Any of the clauses above further utilizing parameters to categorize data flows. In traditional networking, SKUs are often used to account for product features, scale and capacity. For example, a 100-node network might be handled by one SKU while a 1000-node network might need a "bigger" SKU. Network availability may be provided by the deployment of redundant units. For example, a reliable network may have key points in the network protected by two units of the same SKU so that if one should fail the other one takes over. This is referred to as "vertical scaling." In a web services architecture, every server160or unit of system component is equally capable and can handle a certain amount of load or transactions. A load balancer, typically sitting in front of these components, distributes the load to one or more instances of the system component, called a cluster. This model may provide both redundancy and scale: a single instance failure means that instance is out of the cluster and the other instances take on the load. As the load on the system increases, adding more instances may increase the capacity of the cluster.
This is referred to as "horizontal scaling." There is a need for methods and systems enabling high network availability by running an assignment algorithm at a multi-tenant controller122in order to assign branch sites152(and associated branch devices152) to at least two configurable devices124at each data center128within a network, with branch sites152mapped to hub pairs. At each data center128, a configurable branch device152may select, from the hub pair assigned to it, one hub device as primary and one hub device as secondary. Traffic may be preferentially sent to the primary hub device. Upon failure of the primary hub device, the configurable branch device152may switch data flows to the second of the devices in the hub device pairing and stop traffic over the failed primary hub device. There may be several hub devices in a given data center128. This hub array in the data center128may be horizontally-scaled and the mapping of a pair of hub devices to a branch site152may be dynamically changed by a multi-tenant controller122. This change may be fully automated depending on a dynamically increasing or decreasing number of branch sites152and/or based on the dynamic addition or reduction of capacity in terms of hub devices in the data center128. According to the methods and systems of the present invention described herein, a horizontal scaling solution may be used to reduce or eliminate the network scaling problem. In embodiments, each device used at the data center128(the "hub device") may handle a certain network traffic load coming from remote offices/branches. Hub devices may be clustered together and the data load may be balanced/distributed across the elements of the cluster through the multi-tenant controller122. In this network architecture a stateless hub device is enabled, and if one hub device fails, another hub device is able to take over without requiring any state exchange between the devices.
Any hub device designated by the controller may be able to serve the traffic load needs of the remote offices/branches152up to the hub device's load limits. The multi-tenant controller122may compute and estimate the load from devices at each of a network's remote offices or branches152that send traffic to a hub device cluster. In embodiments, the multi-tenant controller122can assign hub devices to serve the load coming from remote offices/branches152, and dynamically add or delete hub devices from the cluster. A network employing a multi-tenant controller122, as described herein, may include one or more remote offices or branches152that have one or two branch devices152each. The network may also have one or more data centers128that have one or more hub devices in a cluster, and a multi-tenant controller122either in the data center128or in the cloud that controls and orchestrates the network. Referring to FIG.11, when a new remote office is created and branch devices152are added to the network by configuring them at the multi-tenant controller122, the multi-tenant controller122may run an "assignment algorithm" and assign the remote office devices to two specific hub devices at every data center128. The algorithm may consider in its derivation of load and assignment whether a remote office device was ever used in monitor mode. Monitor mode refers to a special mode for a branch device152where the device is able to see all flows moving in and out of the branch152but doesn't alter anything on the flows. Data transfer statistics information collected from the branch152may be used as an input to estimate traffic load. Internet link bandwidth estimation conducted on each branch device152may be used as input to estimate traffic load.
The algorithm may further consider the current known capacity limits of each hub device, the current allocated branches152to each hub device in the cluster and the sum of their estimated load on the hub device, the current allocated branches152to each hub device in the cluster and the actual load on the hub device, the availability of total capacity of the hub cluster, the network reachability of the new branch152to all the elements of the hub cluster, or some other factor. In embodiments, the branch devices152may establish Secure IPSec/VPN Tunnels to each of their assigned hub devices at the data center128. A branch device152may choose one of the hub devices at the data center128(of the two that were allocated to the device) as the primary. If the Secure IPSec/VPN Tunnel to the primary hub device fails, the branch device152may switch its traffic destined towards the data center128to the secondary hub device at that data center128. In an embodiment, the branch device152may perform a seamless transfer from a primary hub device to a secondary hub device in the following manner: in the IPSec/VPN control plane communication, a branch device152may issue, as an example, a "START" command to the secondary hub device. The secondary hub device may acknowledge the START command and update the local routing information at the data center128. On receiving the acknowledgement, the branch device152may start sending the traffic to the secondary hub device on its IPSec/VPN tunnels. The branch device152may stop sending traffic over the IPSec/VPN tunnels to the primary hub device and attempt to send a "STOP" command to the primary hub device. The primary hub device, if alive, may stop advertising routes at the data center128for the branch152on one or more of the following: i) on receiving the "STOP" command, ii) after a certain time of non-receipt of any traffic from a branch152, and/or iii) if bidirectional forwarding detection (BFD) detects loss of communication to the branch152.
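The seamless transfer sequence above can be sketched as a small state machine on the branch device. The "START" and "STOP" command names come from the text; the class, method names, and message log are illustrative assumptions, not the actual control-plane protocol:

```python
class BranchFailover:
    """Sketch of which hub device a branch's data-center traffic uses."""

    def __init__(self, primary_hub, secondary_hub):
        self.primary = primary_hub
        self.secondary = secondary_hub
        self.active = primary_hub
        self.sent = []  # control-plane messages, in order

    def on_primary_tunnel_failure(self):
        # 1. Issue START to the secondary; it acks and updates DC routing.
        self.sent.append(("START", self.secondary))
        # 2. On the ack, begin sending traffic over the secondary's tunnels.
        self.active = self.secondary
        # 3. Stop traffic to the primary and attempt a best-effort STOP
        #    (the dead primary may never receive it; BFD or route aging
        #    handles that case).
        self.sent.append(("STOP", self.primary))
        return self.active
```

Because the branch alone decides which tunnel carries traffic, the hubs need no shared state, which is what makes the stateless hub design work.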
If the primary hub is not alive, routes advertised by the primary hub device may be automatically aged out at the data center128. Since a hub device's handling of traffic from a branch device152is decided by the branch device152and its view of the IPSec/VPN Tunnel states to the hub device, there is no need for hub devices to keep any state and exchange states between them. As capacity needs on the hub cluster increase (e.g., due to more and more branches152coming online), new hub devices may be introduced to the cluster and the multi-tenant controller122may start using them for the new branches152. If a hub device is permanently removed from the cluster (e.g., due to permanent failure), the multi-tenant controller122may allocate a new or replacement hub device to the affected branch devices152that were previously using the failed hub device. If the load generated by a branch device152starts increasing and is no longer safely within the parameters originally used (e.g., in the "assignment algorithm") for allocating the branch152to a specific pair of hub devices in the cluster, the multi-tenant controller122may re-allocate a new pair of hub devices that can handle the revised load from the branch152. In such re-assignments, the controller may i) allocate a new pair of hub devices, ii) update the branch device152with a new secondary hub device (while keeping the current active hub device unchanged; hub devices may be connected in a series), iii) force the branch device152to "failover" or "switchover" to the new secondary hub device that has more capacity, and iv) update the branch device152with a new primary hub device. The result may be to successfully move the branch152from an older pair of hub devices to a new pair of hub devices that can handle higher loads.
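The capacity-driven allocation and re-allocation described above can be sketched as a headroom-greedy selection: pick the two cluster hubs with the most remaining capacity that can still absorb the branch's estimated load. The dict schema and the headroom metric are assumptions for illustration, not the actual assignment algorithm:

```python
def assign_hub_pair(hubs, branch_load):
    """Sketch: return (primary_name, secondary_name) for a branch.
    hubs: [{'name': str, 'capacity': float, 'allocated': float}] (assumed
    schema); raises when the cluster cannot fit the branch on two hubs."""
    eligible = [h for h in hubs if h["capacity"] - h["allocated"] >= branch_load]
    if len(eligible) < 2:
        raise RuntimeError("hub cluster lacks capacity; add hub devices")
    # Most headroom first, so load spreads across the cluster.
    eligible.sort(key=lambda h: h["capacity"] - h["allocated"], reverse=True)
    primary, secondary = eligible[:2]
    for hub in (primary, secondary):
        hub["allocated"] += branch_load  # both reserve room so failover is safe
    return primary["name"], secondary["name"]
```

Re-running this when a branch's measured load outgrows its reservation yields the new pair used in the controller's staged switchover.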
In accordance with exemplary and non-limiting embodiments, methods and systems for hub high availability and network load and scaling may utilize various hub device types and spoke devices as described herein and may be enabled, at least in part, by one or more data center128types as described herein. The following are illustrative clauses demonstrating non-limiting embodiments of the inventions described herein: A method comprising:receiving at a branch device an assigned first hub device and an assigned second hub device associated with a data center;establishing a VPN data tunnel to the assigned first and second hub devices;designating the first hub device as a primary device;designating the second hub device as a secondary device; andswitching traffic destined for the primary device to the secondary device based, at least in part, on a cost of a link. A method comprising:receiving at a branch device an assigned first hub device and an assigned second hub device associated with a data center;establishing a VPN data tunnel to the assigned first and second hub devices;designating the first hub device as a primary device;designating the second hub device as a secondary device;switching traffic destined for the primary device to the secondary device if a failure of the VPN data tunnel to the primary device is detected; andswitching traffic to the primary device and the secondary device in a revertive mode and non-revertive mode. A networked branch device configured to:receive at a branch device an assigned first hub device and an assigned second hub device associated with a data center;establish a VPN data tunnel to the assigned first and second hub devices;designate the first hub device as a primary device;designate the second hub device as a secondary device; andswitch traffic destined for the primary device to the secondary device if a failure of the VPN data tunnel to the primary device is detected. 
Any of the clauses above further comprising switching traffic destined for the primary device to the secondary device based, at least in part, on a cost of a link. Any of the clauses above wherein the first hub device and second hub device are assigned via an algorithm based, at least in part, on:data transfer statistics information collected from the branch device used as an input to estimate traffic load;internet link bandwidth estimation performed at the branch device used as an input to estimate traffic load;the current known capacity limits of each hub device;the current allocated branch devices to each hub device in a cluster and the sum of their estimated load on the hub device;the current allocated branch devices to each hub device in a cluster and the actual load on the hub device;availability of total capacity of a hub cluster;network reachability of the new branch device to all the elements of the hub cluster;ability to use all branch circuits when communicating with the hub device;and ability to use lower cost circuits when communicating with the hub device. Any of the clauses above wherein configurable devices comprise spoke devices. Any of the clauses above wherein a data center type is selected from the group consisting of private cloud, scientific communities and co-location centers. In traditional branch routing architectures, typically multiple branch routers162are deployed in parallel to provide high availability towards both the LAN and WAN side. On the WAN side, private WAN links or internet links are distributed across the multiple branch routers. On the LAN side, the switches158are connected to the multiple branch routers162involving complex cabling schemes. On top of the complex cabling, various L2 (HSRP/VRRP) and L3 (OSPF/EIGRP) protocols are used to realize high availability with the parallel deployment.
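The assignment-algorithm factors recited in the clauses above (estimated branch load, known hub capacity limits, current allocations, reachability) can be sketched as a scoring function. The dictionary field names and the "most headroom" preference are assumptions of this sketch, not a definitive implementation.

```python
# Illustrative hub-pair selection for a new branch, loosely following the
# assignment factors in the clauses above. Field names are assumptions.
def pick_hub_pair(hubs, branch_load):
    """Return the two reachable hubs with the most spare capacity for the branch."""
    candidates = [
        h for h in hubs
        if h["reachable"] and h["capacity"] - h["allocated_load"] >= branch_load
    ]
    # prefer hubs with the most remaining headroom
    candidates.sort(key=lambda h: h["capacity"] - h["allocated_load"], reverse=True)
    if len(candidates) < 2:
        raise RuntimeError("hub cluster lacks capacity; a new hub device is needed")
    return candidates[0], candidates[1]
```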
Further, complex traffic flows also happen where the packets first go to one of the routers, and then get routed back on the LAN side to another router, which is both inefficient and hard to troubleshoot when problems happen. In accordance with various exemplary and non-limiting embodiments, there is disclosed a branch routing architecture in which configurable devices124are serially-placed in order to enable the bypass of data flows around a failed device to maintain access to all the available WAN circuits that terminate at a branch152of interest. As illustrated with reference toFIG.12, two or more devices124are deployed in series. Both switches connect to a single configurable device1202which connects to a second configurable device1204in series. Private WAN and internet circuits terminate on the second configurable device1204. As a result, cabling and L2/L3 topologies remain the same and simple whether there is one configurable device124or two. This allows customers to switch between one configurable device124(simple topologies) and two configurable devices124(highly available topologies) without making any change to the switch158or to the WAN connectivity, while at the same time being able to use the capacity on all the available WAN links. As illustrated, deploying two configurable devices1202,1204in series protects against a failure of a single configurable device124at a branch152and provides high availability. In accordance with various exemplary and non-limiting embodiments, configurable devices124deployed in series communicate with each other using a redundancy protocol to exchange state information comprising (1) which device is active and which is in a standby mode of operation and (2) characteristics of various paths sent by an active configurable device124to a standby configurable device124. This protocol is automatically employed by the configurable devices124without any action needed from the customers.
When an active configurable device fails, the standby configurable device may recognize the failure via the loss of one or more protocol keepalives and switch to an active state. A keepalive (KA) is a message sent by one device to another device to check that the link between the two is operating. When the configurable device124fails or becomes inactive, the configurable device124automatically short circuits (called ‘bypass’) the interface connections such that the other (active) configurable device124has a direct physical connection148to the circuits. In accordance with exemplary embodiments, when both devices are operating in a healthy manner, they may self-select which one will be active (based on an algorithm, e.g., the lower serial number or a controller-given initial directive). If the self-selected active device then has a failure and goes into bypass mode, the backup device will become the new active device as it will detect the loss of heartbeat. As a result, the current active configurable device may directly handle the private WAN and Internet connections without any routing protocols and other complexities. Further, in accordance with the state syncing described above, the standby configurable device is already primed to become active and therefore is able to continue traffic forwarding functions within a short amount of time. Note that, unlike traditional networks, the links are available to both configurable devices124. With the hardware ‘bypass’ feature described above, the configurable device124that became active after the failure of the other has direct access to all the circuits. In accordance with various exemplary embodiments, the described bypass functionality may be implemented in either software or hardware or a combination thereof. In accordance with various exemplary embodiments, both the active and standby configurable devices124may exhibit physical connectivity148as described elsewhere herein.
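The keepalive-driven failover described above can be sketched as a small state machine on the standby device: it promotes itself to active once keepalives from its peer stop arriving. The miss threshold and class shape are illustrative assumptions, not part of the described redundancy protocol.

```python
# Minimal sketch of keepalive-loss detection on a standby device.
# The miss_threshold value is an illustrative assumption.
class StandbyDevice:
    def __init__(self, miss_threshold: int = 3):
        self.state = "standby"
        self.missed = 0
        self.miss_threshold = miss_threshold

    def on_keepalive(self) -> None:
        # active peer is alive; reset the miss counter
        self.missed = 0

    def on_keepalive_timeout(self) -> None:
        self.missed += 1
        if self.missed >= self.miss_threshold:
            # loss of heartbeat: the peer is assumed failed (and in hardware
            # bypass), so take over as the active device for all circuits
            self.state = "active"
```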
Further, such methods and systems as described herein may be applied to various branch types as described herein as well as various branch components152including, for example, switches and routers162. The configurable devices124may further comprise spoke devices as described herein. The methods and systems described herein may further employ various switch types as described herein. The following are illustrative clauses demonstrating non-limiting embodiments of the inventions described herein: A method comprising:deploying in series a plurality of configurable devices configured to communicate with one another via a protocol for exchanging state information wherein at least one of the plurality of configurable devices is in an active state and at least one of the plurality of devices is in a standby state;detecting, by the at least one of the plurality of configurable devices in a standby state, a failure of a configurable device in an active state via a protocol; andswitching the at least one configurable device in a standby state to an active state. A system comprising:a plurality of configurable devices deployed in series and configured to communicate with one another via a protocol for exchanging state information wherein at least one of the plurality of configurable devices is in an active state and at least one of the plurality of devices is in a standby state; wherein the at least one of the plurality of configurable devices in a standby state is adapted to detect a failure of a configurable device in an active state via a protocol and in response thereto switch the at least one configurable device in a standby state to an active state. 
A method comprising:deploying in series a plurality of configurable devices configured to communicate with one another via a protocol for exchanging state information wherein at least one of the plurality of configurable devices is self-selected to be in an active state and at least one of the plurality of devices is in a standby state;detecting, by the at least one of the plurality of configurable devices in a standby state, a failure of a configurable device in an active state via a protocol; andswitching the at least one configurable device in a standby state to an active state. Any of the clauses above wherein connectivity is physical. Any of the clauses above wherein branch components are selected from the group consisting of switches and routers. Any of the clauses above wherein configurable devices comprise spoke devices. Any of the clauses above wherein switch types are selected from the group consisting of managed switch, unmanaged switch and stackable switch. Internet Protocol Security (IPSEC) is a protocol suite for securing Internet Protocol (IP) communications by authenticating and encrypting each IP packet of a communication session. Enterprise Site-to-Site IPSEC Virtual Private Network (VPN) over Wide Area Networks (WAN) uses a suite of encapsulation, data encryption and data authentication for data path tunnels (e.g. encapsulating security payload (ESP) and Authentication Header (AH)) and a separate control channel protocol such as, for example, internet key exchange (IKE) and IKEv2 for the derivation of key exchange and for decisions related to what traffic to encrypt between the two gateways in each site. Existing control methods typically bring in significant complexity for Information Technology (IT) with regards to the management of and the configuration for key exchange. For example, for a set of N sites there can be potentially on the order of N² IPSEC tunnels between the N sites.
In order to provide for the strongest levels of security, IT generally is required to manually configure rules for key generation on a per-peer basis, which substantially increases configuration overhead. In instances where N is in the thousands, as when a networked environment comprises thousands of nodes, this may become a prohibitive task. As a result, IT typically will not deploy full mesh networks even though such a mesh is required to enable branch-to-branch communication. For the strongest level of security, each gateway at each site has N−1 tunnels, one to each peer site, and is required to perform extremely CPU intensive and expensive key exchange computations so that both gateways can derive a common shared secret, which is then used to derive cryptographic session keys to secure the tunnel. In order to maintain perfect forward secrecy, this key exchange is typically repeated periodically for each tunnel, resulting in thousands of key exchange operations. In several scenarios, multiple tunnels are needed between the same pair of sites (e.g., VPN transport high availability, multiple WAN and Internet circuits, or secure segmentation on the WAN using virtual WAN (VXWAN204)). In such instances, the number of key exchanges may be increased by a factor of 2 to 10 times. However, using software-only methods, the number of such key exchange operations is limited and may not be able to meet the required scale. This often results in limited mesh configurations wherein IT cannot build fully meshed networks with thousands to tens of thousands of tunnels per site. Even the utilization of existing hardware based solutions may be unable to scale to the number of tunnels that are needed for full mesh and for full meshes per VXWAN based WAN segment.
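The scaling problem above is easy to quantify: a full mesh of N sites has N(N−1)/2 site pairs, each gateway terminates N−1 tunnels, and the count multiplies again by the number of tunnels per site pair (HA circuits, VXWAN segments). A back-of-the-envelope sketch:

```python
# Tunnel count for a full mesh of n_sites, with tunnels_per_pair accounting
# for multiple circuits or VXWAN segments between the same pair of sites.
def full_mesh_tunnels(n_sites: int, tunnels_per_pair: int = 1) -> int:
    return n_sites * (n_sites - 1) // 2 * tunnels_per_pair

# e.g. 1,000 sites with 4 tunnels per site pair:
# full_mesh_tunnels(1000, 4) -> 1,998,000 tunnels to key and periodically rekey
```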
Existing methods such as GROUP VPN offer scalable implementations that address some of these concerns but compromise security as they use the same session keys for communication between all devices across all encrypted data channels. In such instances, even a single compromise may expose the entire network. In accordance with exemplary and non-limiting embodiments, there is provided a method to replace traditional key exchange for Internet Protocol Security (IPSEC) data encryption by using a multi-tenant controller122to create multiple time-limited shared secrets, where the multi-tenant controller122generates the shared secrets for each IPSEC encrypted channel and sends them over a bi-directional certificate authenticated channel to the two IPSEC data channel end points. The end points then use the shared secret and nonces to derive session keys. The shared secret and session key may be refreshed at an interval specified by a policy. The multiple shared secret mechanism ensures that even when faced with loss of communication with the multi-tenant controller122, the IPSEC end points can continue to refresh the time-limited shared secrets and ensure perfect forward secrecy. In accordance with exemplary and non-limiting embodiments, there is disclosed a multi-tenant controller122driven key exchange model that is a highly scalable and granular Kerberos-like model for per-pair scalable key generation and distribution over unsecure networks. Such a model provides a zero-configuration model for key exchange, requiring little or no manual configuration by IT. In addition, there is provided a multi-tenant controller122generated separate shared secret per tunnel whereby granularity may be increased up to and including a separate shared secret for each encrypted VXWAN segment. There is further provided the ability to horizontally scale in order to generate the hundreds of millions of shared secrets needed for fully meshed branch networks.
In addition, embodiments disclose a software-only solution that does not require the use of custom hardware to achieve the necessary scale. Further, there is provided a solution that utilizes several sources of entropy for random number generation, which increases the strength of the shared secret key material generated by the multi-tenant controller122. Methods for random number generation may be selected and employed based, at least in part, on available processing power. As described more fully below, such a solution utilizes a shared secret from the multi-tenant controller122along with random information from each communicating element for a given data tunnel to generate session keys for encryption and authentication of the data tunnel. With regard toFIG.13, there is illustrated a flowchart according to an exemplary and non-limiting embodiment. First, at step1300, when the multi-tenant controller122makes a determination that a link should be established between two sites, the multi-tenant controller122generates a common shared secret using the following algorithm. First, the multi-tenant controller122generates X bits using, for example, a National Institute of Standards and Technology (NIST) approved cryptographically secure pseudo-random number generator (PRNG). The value of X may vary depending on the type of NIST PRNG that is used. The random number generator may be seeded with a high level of entropy using one or more of the following mechanisms: (1) using the operating system's /dev/random, which relies on several entropy pools, (2) using FIPS-approved Intel RDRAND instructions and (3) harvesting element-generated random numbers. In accordance with various exemplary embodiments, continuous invocations of the PRNG are used to derive multiples of X bits, which are concatenated to generate a shared secret from 512 to 8192 bits in length.
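The concatenation step at step1300 can be sketched as follows. This sketch substitutes Python's `secrets` module for the NIST-approved PRNG named above, and does not show the entropy-pool seeding; the parameter values are illustrative assumptions.

```python
# Sketch of shared-secret generation: repeated X-bit CSPRNG outputs are
# concatenated into a secret between 512 and 8192 bits long.
import secrets

def generate_shared_secret(x_bits: int = 256, target_bits: int = 2048) -> bytes:
    """Concatenate X-bit PRNG outputs to form a target_bits shared secret."""
    assert 512 <= target_bits <= 8192 and target_bits % x_bits == 0
    chunks = [secrets.token_bytes(x_bits // 8) for _ in range(target_bits // x_bits)]
    return b"".join(chunks)
```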
Next, at step1302, the multi-tenant controller122sends the shared secret to each of the elements, such as, for example, IPSEC data channel end points, over a transport layer112security (TLS) secured (with data encryption and data authentication), bi-directional certificate authenticated channel that it has with each channel end point. Next, at step1304, for each shared secret, the multi-tenant controller122assigns a wall clock based start and end validity period. The multi-tenant controller122generates multiple (N) shared secrets at the same time and assigns to each of them consecutive wall clock validity periods. The multi-tenant controller122sends these N shared secrets at the same time to the channel end points. Next, at step1306, each channel end point may then generate 32 bytes of "nonce" comprising additional random data. The channel end points may exchange these nonce values and may then use, for example, the RFC 5996 specified key material generation method to generate four session keys per channel in order to (1) encrypt data to send to the other element, (2) authenticate and attach a cryptographically secure message digest for data sent to the other channel end point, (3) decrypt data received from the other channel end point and (4) validate that the message digest attached in the received data is correct. In accordance with various embodiments, the channel end points encrypt data using ciphers that are specified by the multi-tenant controller122. At a regular interval determined by the multi-tenant controller122, the channel end points refresh the four session keys in a rekey operation using the given shared secret, following the four steps described above. Next, at step1308, prior to expiration of the shared secret validity period, the channel end points may use the next shared secret (from the set of N shared secrets sent by the multi-tenant controller122) and follow the four steps described above to generate a new session key.
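The derivation of the four per-channel session keys from the controller-issued shared secret and the exchanged nonces can be sketched as below. RFC 5996 specifies a particular prf+ construction; a plain HMAC-SHA256 chained expansion is substituted here for brevity, so this is a hedged sketch of the idea rather than the specified method.

```python
# Sketch: derive the four session keys (encrypt, authenticate-send, decrypt,
# validate-receive) from a shared secret plus both end points' nonces.
import hmac
import hashlib

def derive_session_keys(shared_secret: bytes, nonce_i: bytes, nonce_r: bytes) -> dict:
    """HKDF-style expansion standing in for the RFC 5996 prf+ construction."""
    seed = nonce_i + nonce_r
    keys, prev = {}, b""
    for i, purpose in enumerate(["encrypt", "auth_send", "decrypt", "auth_recv"], 1):
        prev = hmac.new(shared_secret, prev + seed + bytes([i]), hashlib.sha256).digest()
        keys[purpose] = prev
    return keys
```

Because both end points hold the same shared secret and see both nonces, they derive identical keys; fresh nonces at each rekey yield fresh keys from the same secret.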
Even if the multi-tenant controller122is not reachable for the entire duration of the N shared secrets' validity period, the freshness of the shared secret is maintained. Once a single shared secret has expired, the channel end points continuously receive new shared secrets from the multi-tenant controller122and the multi-tenant controller122issues up to N total fresh secrets to the two channel end points. In accordance with the preceding exemplary embodiments, perfect forward secrecy may be achieved by making the rekey interval the same as the shared secret validity period. Doing so helps to ensure that a fresh shared secret is used in each rekey. In various embodiments, stored shared secrets are sufficient to ensure network operation for a default period of time without a refresh of new ones from the controller122. A typical default period is, for example, three days. In exemplary embodiments, customers or users168of the network may set the default time period. In accordance with one exemplary embodiment, session keys may be rotated on configurable devices124, such as channel end points, on an hourly basis using the same shared secret but new nonces for added security. In accordance with an exemplary embodiment, the multi-tenant controller122may operate as a certificate authority. When a configurable device124is manufactured, it may connect to the multi-tenant controller122for issuance of a manufacturing certificate. The configurable devices124may use this certificate for a bidirectional authenticated TLS channel for secure communication with the multi-tenant controller122. When a configurable device124is shipped to a customer and the customer manually authorizes the configurable device124, an additional claim certificate may be installed by the multi-tenant controller122on the configurable device124.
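Selecting the active secret from the pre-issued batch of N by wall clock, so rekeying continues while the controller is unreachable, can be sketched as:

```python
# Sketch: pick the shared secret whose [start, end) validity window covers
# the current wall-clock time. Field names are assumptions of this sketch.
import time

def current_secret(secret_list, now=None):
    """Return the secret valid at `now`; raise once the whole batch has expired."""
    now = time.time() if now is None else now
    for s in secret_list:
        if s["start"] <= now < s["end"]:
            return s["secret"]
    raise RuntimeError("all pre-issued secrets expired; controller refresh required")
```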
Post authorization, the configurable device124may use this certificate for bidirectionally authenticated TLS based communication with the multi-tenant controller122. The shared secrets may be sent securely over this channel from the multi-tenant controller122to the two channel end points. In accordance with various embodiments, encryption algorithms may be changed or updated at the multi-tenant controller122without impacting the end points. In addition, the multi-tenant controller122may revoke a device's certificate and tell other configurable devices124and entities not to trust it. In accordance with various exemplary embodiments, the generation of shared secrets need not be either entirely software or hardware based. Rather, a hybrid software/hardware process may perform the generation of shared secrets. The following are illustrative clauses demonstrating non-limiting embodiments of the inventions described herein: A method comprising:generating at a multi-tenant controller a common shared secret for establishing a link between a first site and a second site;transmitting the shared secret to each of the first site and the second site over a secured channel;assigning a wall clock based start and end validity period for the shared secret;sending the shared secret with a future validity to allow secure link communication to continue if one or more elements in both sites cannot communicate with the multi-tenant controller; andusing a separate shared secret per link per VXWAN. A method comprising:receiving at a first channel end point a shared secret;generating a plurality of nonce values;exchanging the nonce values with a second channel end point;generating a plurality of session keys; andrefreshing the plurality of session keys at a predetermined time interval using the shared secret.
A system comprising:a first site;a second site in communication with the first site via a network; anda multi-tenant controller configured to generate a common shared secret for establishing a link between the first site and the second site and to transmit the shared secret to each of the first site and the second site over a secured channel wherein the multi-tenant controller is configured to assign a wall clock based start and end validity period for the shared secret. Any of the clauses above wherein shared secret generation is performed by software, hardware and/or a software and hardware hybrid. In accordance with exemplary and non-limiting embodiments, there is provided a method by which the WAN may adjust to yield to the needs of applications in order to provide an optimal or near-optimal experience to the users168. Such optimization may be based at least in part on a policy. A policy may be based at least in part on an application operating on the network, such as a policy relating to the handling of data flows to ensure proper functioning of the application, the routing of application flows on the most suitable link present at a branch office152, or some other criteria. Once an application flow is placed on an appropriate link, sufficient network resources may be guaranteed in accordance with a policy specification for the application flow. A policy string202may allow a specific application or identity to be specified that provides control over data flows occurring over the network. Control may be per-application, per-identity, a combination of application and identity, or based on some other parameter. In an embodiment, as part of a policy string202, a virtual extensible WAN (VXWAN)204may be specified that causes a device to encapsulate a specific set of flows identified by that policy in a unique VXWAN204.
A VXWAN204may comprise an identifier or label that may be put on the wire as data/packets belonging to the flows captured by the policy are forwarded between devices. A VXWAN204may comprise an optional unique encryption afforded only to packets belonging to the flows captured by the policy. A VXWAN204may comprise a way to insert a VXWAN label at an entry point in the network and remove the label at the exit in the network for the flow in a specific direction. A VXWAN204may be bidirectional in that the same VXWAN label can be used for both directions of the traffic. A VXWAN name space may be global and allocated and managed by a multi-tenant controller122, as described herein. In an example, configurable spoke devices may perform traffic forwarding decisions, and be provided information, including but not limited to: application identification details and/or application traffic profiles. Configurable devices124, as described herein, may maintain path characterization information. Path characterization may include information on bandwidth availability on a link, latency profiles, traffic volume caps on a link and the like. The configurable device124may then choose a link meeting all the constraints that is most optimal or most nearly optimal for an application at that time instant. Rating of the links may be by the lowest network latency, or considering application level transaction latencies, or some other criterion. According to the methods and systems of the present invention described herein, a multi-tenant controller122may maintain a network topology at least in the form of a network topology database in which the entry and exit points for each device in a network are maintained. 
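The VXWAN label handling described above, where a label is inserted at the network entry point and removed at the exit for flows captured by a policy, can be sketched as a simple encapsulation. The 4-byte label framing is an assumption of this sketch; the patent does not specify an on-wire format.

```python
# Illustrative VXWAN label insertion (entry point) and removal (exit point).
import struct

def vxwan_encapsulate(label: int, payload: bytes) -> bytes:
    """Prefix the payload with a 4-byte network-order VXWAN label."""
    return struct.pack("!I", label) + payload

def vxwan_decapsulate(packet: bytes) -> tuple:
    """Strip the label at the exit and return (label, payload)."""
    (label,) = struct.unpack("!I", packet[:4])
    return label, packet[4:]
```

Because the same label is read back at decapsulation, one bidirectional label suffices for both directions of the traffic, matching the description above.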
A multi-tenant controller122may also receive data from configurable devices124indicating which domains are associated with an anchor domain, and assign these domains to a SaaS application604so that a fuller set of entry points to the SaaS application604are known and stored by the multi-tenant controller122. A configured device may collect monitoring data, such as information relating to data flows that are associated with an application and/or domain that is associated with an application, such as an application in use at a network branch152. Such data flow records may carry important information relating to the entry and exit points on a device. Millions of these data flow records may be exported from devices throughout the network to the multi-tenant controller122. The multi-tenant controller122may store information, including, but not limited to, a network topology database that includes information on the entry and exit points of data flows occurring on each network device and how the data flows are interconnected, data flow records on a single data flow from multiple devices in the network, and anchor domains and other domains as described herein that are associated with the usage of an application on the network. In embodiments, the multi-tenant controller122may use the stored network topology508, domain and related data flow information to periodically run an "asymmetric detection" algorithm504that in essence identifies the existence of a plurality of data flows that exited a device through one path and return data flows that re-entered the device on a different path (i.e., asymmetry). Analytic techniques may also be used to characterize Layer 4 data flows that are associated with application usage. Configured spoke devices may collect and send to the multi-tenant controller122such Layer 4 level data flow records.
Data flow asymmetries that are associated with specific applications and/or anchor domains or other domains that are associated with an application may be detected and corrected, as described herein. This correction may be made on demand, and outside of the multi-tenant controller122. The change may be reported, for example by a spoke device, to the multi-tenant controller122. The multi-tenant controller122may identify asymmetric flows associated with application usage on the network, see which data center128devices participated in the transmission of the flow and report this. The multi-tenant controller122may identify the IP address ranges on which this asymmetry is happening, aggregate them and report them to a user168or administrator, so that the IP prefix configuration may be corrected on the sites. To prevent false positives, the algorithm may use additional weighting factors, including other devices and different applications and their data flows, before declaring the existence of asymmetry in the network. The multi-tenant controller122may read data flow tables in order to allow the multi-tenant controller122to determine how the data traffic egresses (e.g., data deriving from an anchor domain or other domain). The asymmetry may be detected, and the multi-tenant controller122may then notify configured devices and the configured devices may take routing actions to ameliorate the data flow asymmetry for a given application's usage. In accordance with exemplary and non-limiting embodiments, a multi-tenant controller122may be used to centrally view and control the network topology, configurable devices124in the topology and the application of policies to solve the operational needs of network administration. A multi-tenant controller122may record data indicative of where all devices and applications are located, and identify and map the topology of a network and use the network view to determine optimal policy and device usage of network assets.
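The core of the "asymmetric detection" idea above, identifying flows that exited a device through one path but re-entered it on a different path, can be sketched over a batch of flow records. The record fields and grouping are assumptions of this toy version; the described algorithm additionally applies weighting factors to suppress false positives, which are omitted here.

```python
# Toy asymmetry detector over exported data flow records.
# Each record is assumed to carry a flow id, the device, and the path used.
from collections import defaultdict

def detect_asymmetry(flow_records):
    """Report flow ids seen on more than one path at the same device."""
    paths = defaultdict(set)
    for rec in flow_records:
        paths[rec["flow_id"]].add((rec["device"], rec["path"]))
    asymmetric = []
    for flow_id, seen in paths.items():
        by_device = defaultdict(set)
        for device, path in seen:
            by_device[device].add(path)
        # a flow using two different paths on one device exited one way
        # and returned another
        if any(len(p) > 1 for p in by_device.values()):
            asymmetric.append(flow_id)
    return asymmetric
```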
The multi-tenant controller122may, for example, monitor and record when a new site is added; the changes to the network topology may then be updated. Because the multi-tenant controller122has access to the HTTP flow, the multi-tenant controller122may be queried to, for example, show all paths between sites and simulate network function. The multi-tenant controller122may have access to bandwidth and links in its topology graph of the network. Utilization of each link can also be gathered and stored for access by the multi-tenant controller122. The multi-tenant controller122may simulate the effects of changes to a network. The multi-tenant controller122may keep track of all configurations (devices, interfaces, capacities, policies and more) for all the devices in the network. Configuration management may be in real or near-real time to ensure minimal data latency. The multi-tenant controller122may also keep large amounts of monitoring data on traffic types, traffic load and application statistics, and may also allow for the creation of ‘empty’ or hypothetical sites, specifically, ones that are not real but are, for example, being planned. A multi-tenant controller122may be utilized to algorithmically determine and establish a loop-free data-forwarding model that may include the creation and management of forwarding tables812. This algorithm may account for network failures and adjust the loop-free data-forwarding model to account for asset loss within the network. A multi-tenant controller122may maintain a network topology at least in the form of a network topology database in which the entry and exit points for each device in a network are maintained, and may periodically run an algorithm using this network data to determine asymmetric data flows and the devices associated with such asymmetric data flows.
As described herein, a centrally configured multi-tenant controller122may be both the creator and the source of (1) network segments and addresses and (2) information indicative of how the network segments are connected. The central multi-tenant controller122may then discover what external network segments are not controlled by the routing instance via a data center128or hub device and directly set up forwarding tables812on configured devices so that traffic is forwarded on to the correct interface, resulting in a loop-free forwarding model wherein various rules are enforced. In accordance with exemplary and non-limiting embodiments, a configured device may further collect monitoring data, such as information relating to data flows transmitting through a device. Such data flow records may carry important information relating to the entry and exit points on a device. The multi-tenant controller122may store information, including, but not limited to, a network topology database that includes information on the entry and exit points of data flows occurring on each network device and how the data flows are interconnected, and data flow records on a single data flow from multiple devices in the network. The multi-tenant controller122may identify asymmetric flows, including asymmetric flows that are associated with an application or anchor domain, as described herein, see which data center devices participated in the transmission of the flow, deduce that the flow is incorrectly going to, for example, a hub device, and report this. The multi-tenant controller122may identify the IP address ranges on which this asymmetry is happening, aggregate them and report them to a user168or administrator, so that the IP prefix configuration may be corrected on the sites.
According to the methods and systems of the present invention described herein, a multi-tenant controller122may receive data from configurable devices124indicating which domains are associated with an anchor domain, and assign these domains to a SaaS application604so that a fuller set of entry points to the SaaS application604are known and stored by the multi-tenant controller122. This may have advantages in measuring, monitoring and routing the true data flows associated with a SaaS application604, whereas traditional techniques may underestimate the traffic associated with a SaaS application604by not recognizing the full set of domains that are actually generating traffic in association with an application. Domain types may include, but are not limited to, top-level domains608(e.g., “.com”), geographic domains (e.g., country domains), and infrastructure domains (e.g., “.arpa”). The domains identified to be associated with the anchor domain may be sent to the multi-tenant controller122, and at the multi-tenant controller122, further processing may be performed to eliminate the false positives and distill them to a corrected set of domains. A configurable device124within the network may detect an application's anchor domain and mark the source of the data and the time of detection. Following this time of detection, ongoing data flows from the same traffic source may be marked as belonging to the application that is associated with the anchor domain. The multi-tenant controller122may continue to receive data regarding the traffic source from the configurable devices124and classify the data flows. In accordance with exemplary and non-limiting embodiments, anchor domain and other domain data may be analyzed using statistical and data mining techniques, as described herein, for the characterization of application usage, by session, for performance and predictive planning purposes to improve network performance. 
Data mining techniques like centrality and classification may be applied to fine-tune models describing data flows deriving from an anchor domain and/or the usage of an application. The resulting models may be better able to predict the size and frequency of transactions for a typical application session, as well as the frequency of application sessions. The modeling and analytic methods used to characterize network activity associated with an anchor domain and/or an application session, and the corresponding network and bandwidth requirements, may be used to monitor a network in real time for bandwidth requirements at the sub-channel level (transactional and bulk application types) and to identify and predict trends in bandwidth requirements. Based on anchor domain and application type detection, using real-time data flows, a multi-tenant controller 122 may analyze application session data that it receives from configurable devices 124 at the branch level to determine what amount of bandwidth is necessary for proper application performance. Application performance scores may be used to record and adjust the bandwidth profile for an application, where the profile states the performance and bandwidth criteria (i.e., a rule) needed for the type(s) of data flows detected during application sessions. In embodiments, a shaper 154, as described herein, may be used to limit data that is associated with, for example, a given anchor domain and/or application to a prescribed value. Shapers 154 may be added or deleted based on the level of business priority associated with the traffic, such as traffic that is associated with an anchor domain and/or application session. This may be performed dynamically, for example, based on application demand, where the dynamic behavior is carried out in compliance with a policy maintained at a multi-tenant controller 122. 
Performance-based routing may be accomplished by estimating link capacity automatically and dynamically, in real time or near-real time. Congestion control algorithms, as described herein, may be used for bandwidth estimation. For idle links, the estimation may be performed in a manner that is similar to an Internet speed test. For active links, link capacity may be determined through continuous monitoring and correction. In an example, links may have an ingress and egress shaper 154, as described herein, that controls how much traffic can be put on the link or received from the link. Any traffic exceeding the shaper's 154 limits may be dropped. A shaper's 154 limit may be set by bandwidth estimation logic. In embodiments, network traffic may be classified based at least in part on a business priority, and bandwidth allocation to a plurality of applications running on a network may be determined based at least in part on assigned business priorities. A network administrator may create multiple business priority classes and assign each a priority order and/or ranking. A configured device may first detect the network type of each application data flow, for example media, bulk or transactional, and create multiple queues for each business priority, for example strict priority 814, fair 818 and anomaly 820, as described herein. A shaper 154 may be used in front of the anomaly queues to ensure that the anomaly flows do not impact bandwidth available for other flows. A multi-tenant controller 122 may be used for automated prefix discovery and propagation. A typical network may be made of many network segments, with each segment represented by a range of Internet Protocol (IP) addresses, typically written with a network address mask like so: 10.10.10.0/24, where the “/24” is the mask. This range may represent about 254 IP addresses. 
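The shaper 154 behavior described here, where any traffic exceeding a limit set by bandwidth estimation logic is dropped, can be approximated with a standard token bucket. This is a minimal sketch under that assumption; the class and parameter names are illustrative, not the disclosed implementation:

```python
import time


class TokenBucketShaper:
    """Minimal token-bucket shaper: a packet is admitted only if enough
    byte credit has accrued at the configured rate; excess is dropped."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # refill rate in bytes per second
        self.capacity = burst_bytes     # maximum accumulated credit
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def admit(self, packet_bytes):
        now = time.monotonic()
        # accrue credit for the elapsed time, capped at the burst size
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False  # exceeding traffic is dropped, per the shaper model
```

Placing such a shaper in front of the anomaly queues bounds how much credit anomaly flows can ever consume, which is the isolation property the text describes.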
When such network segments are presented by a summarizing router device 162, these ranges may be aggregated under a higher “mask”: for example, the two contiguous segments 10.10.10.0/24 and 10.10.11.0/24 can be aggregated to 10.10.10.0/23. Sometimes a router device 162 will do this automatically, but many times it is done manually by configuration. In embodiments, a multi-tenant controller 122 may gather network segment information and thus be able to programmatically find the most aggregated higher-level network address and mask. This process may be called supernetting and, in one disclosed non-limiting embodiment, may be instrumental to scale and may improve the automatic learning and control of the network. For example, in the case of data plane learning, once a flow arrives at a configurable device 124, source IP information may be extracted from the packet header and sent to the multi-tenant controller 122 along with the interface Virtual Local Area Network (VLAN) information on which the packet was received by the device. Data packets, however, may not carry subnet mask information. Therefore, a multi-tenant controller 122 may not be able to immediately identify the exact length of the prefix if it receives information about a single flow or a handful of flows from clients 512 on branch local segments. In this case, the multi-tenant controller 122 may try to find the best-fit length of the prefix. For example, if the multi-tenant controller 122 receives information about two flows with source IP addresses of, say, 192.168.1.1 and 192.168.1.2 belonging to VLAN 10, it may generate a prefix of 192.168.1.0/30 to start with and advertise it to other sites. Over time, if it receives information about another packet on the same segment with a source address of, for example, 192.168.1.29, it may refine the prefix length to 192.168.1.0/27. 
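The best-fit refinement walked through above can be sketched with Python's standard ipaddress module: widen the candidate prefix one bit at a time until it covers every observed source address. The function name is an illustrative assumption:

```python
import ipaddress


def best_fit_prefix(addresses):
    """Return the smallest prefix covering every observed source IP,
    mirroring the controller's refinement of a learned branch segment."""
    nets = [ipaddress.ip_network(a) for a in addresses]  # each a /32 host
    prefix = nets[0]
    for net in nets[1:]:
        while not prefix.supernet_of(net):
            prefix = prefix.supernet()  # widen by one bit
    return prefix


print(best_fit_prefix(["192.168.1.1", "192.168.1.2"]))                  # 192.168.1.0/30
print(best_fit_prefix(["192.168.1.1", "192.168.1.2", "192.168.1.29"]))  # 192.168.1.0/27

# Supernetting of contiguous segments, as in the /23 aggregation example:
nets = [ipaddress.ip_network("10.10.10.0/24"), ipaddress.ip_network("10.10.11.0/24")]
print(list(ipaddress.collapse_addresses(nets)))  # [IPv4Network('10.10.10.0/23')]
```

The two printed refinements reproduce the /30 and /27 results from the text; `collapse_addresses` performs the adjacent-segment aggregation a summarizing router would do.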
Note that in the data plane learning method, the configurable device 124 may choose to aggregate the source IPs and partly perform what the multi-tenant controller 122 does, to aid the multi-tenant controller 122 in converging quickly. The refinement process may continue until the multi-tenant controller 122 determines the exact length of the prefix as it learns information about the rest of the prefixes. Note that there may be no traffic loss during the learning process. Also, if the local segment goes down and this is detected by the device, it may notify the multi-tenant controller 122 so that routes may be withdrawn from all the remote sites. Parameters such as the minimum length of the prefix, how frequently the multi-tenant controller 122 should generate the prefix, and the like may be user 168 configurable. Data plane learning may be referred to as slow learning, as the multi-tenant controller 122 may have to wait until it has enough information to determine the exact prefix. Unlike with data plane learning, a device may learn the exact prefix length much faster if there are routing devices behind the configurable device 124, whereby exact prefix information may be extracted from the control plane traffic before the information is sent to the multi-tenant controller 122. Automated learning of the network prefixes without routing protocols and complex configurations for zero touch deployment of devices may be advantageous. Automatic removal of prefixes, and withdrawal of routes with those prefixes when the local segment is down, may be advantageous. Configurable devices may be used for automated, routing-protocol-less network prefix discovery. In traditional networks, routing protocols may be used to exchange routing information between devices. Local networks, however, may be required to be manually configured via network statements or redistribution for each routing protocol so that prefixes can be advertised to other routing peers. 
Configurable devices 124 in the branch 152 may not run any routing protocols. Prefix discovery instead may be automated via data path learning at each branch site 152. Each branch device may send the prefix information to a multi-tenant controller 122, which systematically may find the most aggregated higher-level network address and mask via the supernetting algorithm mentioned herein. A configurable device 124 may be configured to learn prefix information passively for the local network segments. A prefix may be learned in multiple ways, such as via Dynamic Host Configuration Protocol (DHCP), Address Resolution Protocol (ARP), snooping routing protocol traffic, actual data plane traffic and the like. In traditional networks, routing protocols are typically used to exchange reachable prefixes between routing peer devices. The source of the reachable prefixes is either manual configuration via network statements or redistribution from one routing protocol to another. For seamless deployment of configurable devices 124 in the data centers 128, a multi-tenant controller 122 may learn about the local data center prefixes automatically, without requiring an administrator to manually configure the prefixes, and distribute those prefixes throughout the network for use in traffic forwarding. In the data center 128, configurable hub devices may border gateway protocol (BGP) peer with the WAN-E router 178 and may learn all the prefixes, including local data center prefixes as well as remote branch routes learned via the private WAN. Configurable devices 124 may send all the prefixes learned from the private WAN to the multi-tenant controller 122. A multi-tenant controller 122 may run a local algorithm to identify and separate the local prefixes from the remote ones. BGP-learned prefixes carry AS-PATH information, which is an ordered list of ASes that tells the receiving BGP peer about the ASes a given prefix has traversed. 
The multi-tenant controller 122 knows the BGP AS numbers configured for a local configurable device 124 and for the traditional router 178 within the data center 128 that it is peered with. In one embodiment, the multi-tenant controller 122 may identify the prefixes whose AS-PATHs contain only the AS numbers present in the data center 128, and mark those prefixes as local to the data center 128. The multi-tenant controller 122 may employ other methods, like BGP Autonomous System (AS) filtering, to further refine the identified local prefixes in the data center. Note that, if routes are withdrawn or any changes are made, a local device may detect these withdrawn routes via BGP updates and update the multi-tenant controller 122 accordingly. The multi-tenant controller 122 may apply the algorithms to the updated information and adjust the local prefixes in the data center as necessary, allowing the network to correct and react to any changes automatically. This may simplify the deployment of configurable devices 124 in the data center and may save network administrators the time needed to configure and remove prefixes whenever a change is made within the data center 128. The following examples in FIGS. 14 and 15 illustrate how prefix x (local) and prefix y (remote) may be learned with AS-PATH information. Note that even when there are multiple ASes within the data center 128, the order of AS numbers in the AS path list that the hub device is peering with (AS 2 and AS 3 in the examples) may provide direction and may appropriately identify whether the prefix is local or remote. Automated learning and withdrawal of the network prefixes for zero touch deployment of devices in the data center 128 may be advantageous. Multi-tenant controller 122-associated overlay networks may be integrated with legacy systems. Using a private MPLS 144 WAN, enterprise network sites may be connected to the MPLS service provider network in a point-to-cloud type of model. 
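The local-versus-remote decision described above, marking a prefix local when its AS-PATH contains only data center AS numbers, can be sketched as a set-containment check. The prefixes and AS numbers below are hypothetical examples, not values from the disclosure:

```python
def classify_prefixes(bgp_routes, data_center_asns):
    """Mark a prefix 'local' when its AS-PATH contains only AS numbers
    known to be inside the data center; otherwise mark it 'remote'."""
    result = {}
    for prefix, as_path in bgp_routes.items():
        # subset test: every AS the prefix traversed is a data center AS
        result[prefix] = "local" if set(as_path) <= data_center_asns else "remote"
    return result


routes = {
    "10.1.0.0/24": [2],            # originated inside the data center
    "172.16.5.0/24": [2, 65001],   # traversed an external AS: remote branch
}
print(classify_prefixes(routes, data_center_asns={2, 3}))
```

When a BGP update withdraws or changes a route, re-running this classification over the updated table yields the adjusted local-prefix set automatically.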
Using the L3VPN service provider terminology, a customer edge (CE) router 180 may be connected to a provider edge (PE) router 176. All the remote site routers 162 may access the applications located in the data center 128 via the CE device 180 in the data center 128. It may therefore be critical for seamless integration that there is no interruption to the existing network, especially the data center core 178 and the CE (a.k.a. WAN-E or WAN Edge) device 180 in the data center 128. Referring to FIG. 16, a configurable device 124 may be deployed off path in the data center 128 without interrupting the physical connectivity of the existing network, as illustrated herein. A configurable device 124 may be physically or logically connected to the CE device 180 in the data center 128 via separate interfaces. A configurable device 124 may BGP peer with core 178 and CE 180 routers and the like. Peering with the CE 180 may be done in such a way that a configurable device 124 may only receive routes but may not send any routes to the CE. This may ensure no disruption to the existing operation of the CE device. Peering with the core routers 178 may be done in such a way that a configurable hub device selectively advertises only the remote branch routes. Route advertisement to the core routers 178 may be done by attaching a “no-advertise” well-known community value. The well-known community value may instruct the core router 178 not to advertise configurable-hub-injected routes to any other peers, thereby requiring no special filters to prevent accidental advertisement of routes to any other peers within the network. This may ensure that a configurable device 124 attracts only select traffic destined to branches 152. This may also ensure that there is no impact on the traffic destined to non-controller-associated sites, improving the deployment and integration with an existing network. Seamless integration without any sort of disruption to the CE router 180 is advantageous. No disruption to non-controller-associated sites is advantageous. 
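The no-advertise tagging described above can be sketched as follows. The route dictionaries and function names are illustrative assumptions; only the community value itself (0xFFFFFF02, the well-known NO_ADVERTISE community from RFC 1997) is taken from standard BGP:

```python
NO_ADVERTISE = 0xFFFFFF02  # well-known BGP community (RFC 1997)


def advertise_to_core(branch_prefixes):
    """Tag each branch route with no-advertise before sending it to the
    core, so the core installs it but never re-advertises it onward."""
    return [{"prefix": p, "communities": [NO_ADVERTISE]} for p in branch_prefixes]


def core_readvertise(received_routes):
    """A compliant core router suppresses routes carrying no-advertise
    when building its own advertisements to other peers."""
    return [r for r in received_routes if NO_ADVERTISE not in r["communities"]]


injected = advertise_to_core(["10.20.0.0/24"])
print(core_readvertise(injected))  # nothing leaks to other peers
```

Because suppression is carried in the route itself, no per-peer filters are needed on the core router, which is the operational simplification the text claims.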
Controlled traffic handling for non-controller-associated sites is advantageous. No filtering requirements are advantageous. A multi-tenant controller 122-associated environment may include zero touch multicast. In a traditional network, enterprise customers may buy multicast VPN service at a premium from MPLS providers so that they may send multicast traffic between different sites in the network. In addition, a customer may have a backup VPN via the Internet 142. Typically, enterprises either choose not to send multicast traffic across a public VPN, due to performance issues and configuration complexity, or deploy proprietary solutions, investing in expensive equipment. However, such customers are not able to use both the private and public VPN based on performance criteria. As illustrated in FIG. 17, configurable devices 124 may intercept multicast traffic, specifically Internet Group Management Protocol (IGMP) traffic, passing through the branch device 152 while simultaneously forwarding the traffic to the private WAN towards the next hop routers 162, e.g. the CE. A configurable device 124 in the branch 152 may forward the IGMP report information to the multi-tenant controller 122. A multi-tenant controller 122 may maintain a multicast group database based on the received IGMP protocol traffic, and may distribute the group member information to the configurable hub device located in the data center 128. Upon receiving the multicast group member information from the multi-tenant controller 122, the hub device may create multicast state and add the VPN links to the branches 152 that are members of the group. A hub device may then reconstruct the IGMP join and send it to the core router 178, which may add the interface to the configurable hub in its forwarding table 812. 
When a source located in the DC starts sending packets, a configurable hub device may receive a copy of the packet and replicate the packet on the VPN links to the various remote branches 152 that are members of the multicast group, as shown in FIG. 17. Note that the source of multicast traffic may also be present at a branch 152; in such cases, the multi-tenant controller 122 sends the multicast group information to the configurable device 124 present at the branch 152, and this device performs the packet replication to the various member configurable devices 124 over the VPN links. The configurable device 124 may also choose, based on performance, to use the private WAN for multicast traffic if multicast is supported in the private WAN, instead of replicating traffic on the individual VPN links. Zero configuration multicast traffic delivery may be advantageous. Performance-based path selection for multicast traffic may be advantageous. A configurable hub device may provide a bridge/transit point between legacy and hybrid WAN. In a traditional enterprise using a private IP/MPLS WAN, remote branches 152 and data centers 128 may be connected to each other via a service provider MPLS 144 network. For redundancy, enterprise customers may choose two service providers and dual home their critical sites to both providers. In case of failure in one MPLS provider, the sites may continue to reach each other via the second provider network. However, dual-homing all the sites may not be cost effective. Therefore, enterprises may choose to dual home only critical sites and leave the other sites single homed when possible. This, however, may create operational challenges for the network administrator in managing the routing and related policies to ensure traffic from single homed sites is allowed to transit through the appropriate dual homed site(s) in steady state as well as upon failure conditions. 
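The hub-side fan-out just described, one inbound copy replicated onto the VPN link of every member branch, can be sketched as follows. The class and link names are illustrative assumptions:

```python
class MulticastState:
    """Hub-side multicast state: group -> member branch VPN links, built
    from the IGMP membership reports relayed by the controller."""

    def __init__(self):
        self.members = {}  # group address -> set of VPN link identifiers

    def join(self, group, branch_link):
        self.members.setdefault(group, set()).add(branch_link)

    def leave(self, group, branch_link):
        self.members.get(group, set()).discard(branch_link)

    def replicate(self, group, packet):
        # one inbound copy fans out to every member branch's VPN link
        return [(link, packet) for link in sorted(self.members.get(group, ()))]


state = MulticastState()
state.join("239.1.1.1", "vpn-branch-a")
state.join("239.1.1.1", "vpn-branch-b")
print(state.replicate("239.1.1.1", b"payload"))
```

A branch-sourced stream works the same way with the roles reversed: the branch device holds the member set and performs the replication.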
Routing policies related to making any given site a transit point in traditional networks may not be trivial and may require careful planning and complex configuration to avoid routing loops. For example, in FIG. 18, Site 3 may be single homed and advertise a prefix which may be propagated to all the sites connected to MPLS provider 1. Site 1 and Site 2 may be dual homed to both providers. Site 2 may advertise the prefix to MPLS provider 2, and the prefix may arrive at Site 1. Site 1 may have appropriate filters not to advertise the prefix back to MPLS provider 1, which otherwise may create loops, especially if the AS path information was lost due to route redistribution at any transit site. Similarly, Site 2 may not advertise the prefix received from MPLS provider 2 back to MPLS provider 1. Additionally, appropriate configuration may need to be in place to allow traffic from Site 3 destined to Site 1 to transit through Site 2 when Site 1 loses its connectivity to provider 1. A multi-tenant controller 122 based approach may provide seamless integration with the enterprise's existing network infrastructure. Enterprise customers may not need any complex configuration or routing policies to ensure proper traffic flow between sites during steady state or failure conditions. Upon failure of the MPLS WAN 144, a configurable hub may be used as a transit point to seamlessly repair the broken part of the WAN. Similarly, during the initial deployment phase, when not all of the sites may have been migrated to the multi-tenant controller 122 associated environment, a configurable hub may provide a transit point to facilitate seamless deployment in the rest of the network, as illustrated in FIG. 19. With this deployment model, configurable hub and branch devices 152 may build a secure and full-mesh overlay across a broadband Internet connection 142. 
A configurable hub may BGP peer with the core and WAN-edge routers 178 in the enterprise data center 128 and may receive the entire set of network routes, including the routes from non-controller-associated sites. For example, as shown in FIG. 20, Site 1 may reach Site 2 only via VPN due to losing connectivity to an MPLS provider 144. When Site 1 tries to send traffic to Site 3, where the multi-tenant controller 122 may not be deployed yet, traffic may be sent to a configurable hub at Site 2 first. Since a configurable hub has knowledge of the entire network, it may do an IP lookup and determine that the site is not reachable via the multi-tenant controller 122; hence it may forward the traffic towards a WAN-edge router 178, which then routes the traffic to the appropriate destination. A configurable hub may provide an easy point of transition between the VPN and a legacy network without requiring complex routing policies. Seamless routing between VPN and legacy networks without explicit and complex routing may be advantageous. High availability for single homed sites upon failure may be advantageous. A multi-tenant controller 122 may build an end-to-end topology map incorporating the MPLS WAN 144 topology using various BGP attributes, including but not limited to BGP community, and provide a complete private WAN topology view to network administration via a graphical user interface. In a traditional network, a common choice of protocol between the enterprise and the MPLS service provider, a.k.a. the PE-CE routing protocol, may be BGP. Enterprises typically may run a different BGP Autonomous System number (AS number) per site. As a result, when routes are exchanged from one site to another via the MPLS service provider network, an AS path attribute containing all the ASes transited during the route propagation may be carried along with the prefix. In the central hub location, a configurable device 124 may peer with the enterprise WAN router 178 via BGP and may receive the remote site routes with the BGP path information intact. 
A hub site may forward this information to a multi-tenant controller 122. The topology builder component of the multi-tenant controller 122 may leverage this information to identify local hub site (e.g. data center) routes and remote site routes and may build a graphical topological map. In addition, the multi-tenant controller 122 may also have full knowledge of the VPN topology. Combining the two, the topology builder may produce a complete end-to-end view of the entire enterprise WAN network. The multi-tenant controller 122 associated approach may provide a dynamic and complete view of the enterprise network combining all the possible WANs in the network, as well as the ability to identify local vs. remote routes and the like. State transitions that may result from a failure in the MPLS WAN 144 and/or a BGP route withdrawal on the private network may be reflected in real time. Bidirectional Forwarding Detection (BFD) may be used for the detection of private and public link failures. Using a private MPLS WAN 144, enterprise network sites may be connected to the MPLS service provider network in a point-to-cloud type of model. Using the L3VPN service provider terminology, a customer edge (CE) router 180 may be connected to a provider edge (PE) router 176. To meet customer Service Level Agreements (SLAs) for faster convergence upon failure, BFD may be used on the PE-CE link along with a routing protocol. However, using BFD in this manner may allow failure detection only for the local link or node between a customer site and a provider edge router 176. Hence, a BFD session may need to be configured at every PE-CE link in the enterprise network. Due to the fact that BFD may be running only on the local segment, it may not be sufficient to detect end-to-end failures. 
An enterprise network may not detect soft or hard failures within a service provider network, which could take hours and in some cases days to troubleshoot and pinpoint to a root cause, possibly increasing the Mean Time To Repair (MTTR) and reducing the availability of the enterprise network. In addition, provider edge (PE) routers typically may support only a limited number of BFD sessions and may not necessarily be capable of supporting BFD on each and every PE-CE link. As a result, BFD may be available only to high profile customers or may be offered as a premium service to enterprise customers. Some enterprises may resort to establishing an overlay across the MPLS network 144 and running BFD over the overlay network to detect brownout and other failures within the service provider network. This may add complexity and defeat the purpose of the MPLS VPN 144, which may have been meant to outsource the WAN network while simplifying network operation for the enterprise network administrator. BFD may be leveraged in a novel fashion to monitor the health of the paths end-to-end between any two given sites without requiring complex configuration. As a result, end-to-end traffic convergence upon failures, especially in the case of a brownout, may be very quick, increasing the overall availability and the user experience in the enterprise network. In traditional networks, enterprise customers may desire to use BFD on PE-CE links along with BGP or one of the Interior Gateway Protocols (IGPs), including STATIC, Routing Information Protocol (RIP), Open Shortest Path First (OSPF) and Enhanced Interior Gateway Routing Protocol (EIGRP) and the like, for fast failure detection. Failure detection by itself may be only a small part of the problem, the ultimate goal being to recover the traffic as soon as possible. 
Therefore, to accelerate route convergence and restore traffic on other available paths, routing protocols such as BGP, OSPF and Intermediate System to Intermediate System (ISIS) and the like may register with the BFD process. Upon link failure detection, routing protocols may be immediately notified about such failures so that they can recalculate their routing tables and find alternative paths to reach the desired destinations. Since routing protocols are the consumers of BFD, the existence of routing protocols may be a prerequisite for BFD to be effective. The multi-tenant controller 122 associated approach leverages BFD to detect end-to-end failures across the private IP/MPLS WAN 144 in the enterprise network. Instead of the traditional process of registering BFD with routing protocols, this approach may establish an end-to-end BFD session between configurable devices 124 at every enterprise site across the private WAN. Each configurable device 124 may actively monitor the reachability to every other site via BFD without explicitly establishing end-to-end BGP or configuring any other routing protocol across the private WAN. Upon detecting anomalies along the path, a configurable device 124 dynamically steers the traffic away from the affected path without manual intervention and without a need to notify routing protocols to re-compute the alternative paths. This may provide the ability not only to detect local PE-CE link or node failures quickly, but may also enable enterprise customers to detect brownout and other failures in the service provider network. 
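The steering behavior described above, declaring a path down after consecutive missed BFD hellos and then moving traffic to another path without involving a routing protocol, can be sketched as follows. The detection-multiplier semantics follow standard BFD; the class and function names are illustrative assumptions:

```python
class PathMonitor:
    """Declares a path down after `detect_mult` consecutive missed BFD
    hellos, mirroring standard BFD detection-time semantics."""

    def __init__(self, detect_mult=3):
        self.detect_mult = detect_mult
        self.missed = {}  # path -> consecutive missed hellos

    def hello_received(self, path):
        self.missed[path] = 0  # any hello resets the counter

    def hello_missed(self, path):
        self.missed[path] = self.missed.get(path, 0) + 1

    def is_up(self, path):
        return self.missed.get(path, 0) < self.detect_mult


def select_path(paths, monitor):
    """Steer traffic to the first healthy path in preference order,
    without waiting for a routing protocol to reconverge."""
    for path in paths:
        if monitor.is_up(path):
            return path
    return None
```

Because the session runs end-to-end between site devices, a brownout anywhere inside the provider cloud shows up as missed hellos, not just a local PE-CE link fault.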
Unlike traditional BFD, the effectiveness of this approach may not be limited to local PE-CE link/node failures but may be much broader in scope. It may enable enterprise customer networks to automatically detect brownout and other failures in the service provider network. Furthermore, this approach may alleviate the need for the service provider to offer PE-CE level BFD, thereby freeing the valuable resources on the PE routers 176 for other critical functions. Hub high availability (HA) and switchover may be directed by multi-factor spoke devices. Each spoke site device may be assigned two hub elements in each hub site for the purpose of HA, to ensure that the spoke site may continue to have VPN tunnels to the hub site upon the failure of a hub device. Using BGP or OSPF, the primary hub device may inject prefixes belonging to the spoke into the core switch and specify the primary hub device as the preferred next hop for traffic in the hub site to spoke site direction. The determination of which of the hub devices to utilize may be decided by the spoke site device. This invention describes how the spoke site may decide which hub element is its primary hub. A spoke element may use multiple factors to determine the preferred hub element within the hub element pair that should be the primary hub for the spoke site. The spoke may have multiple private and public WAN circuits. These circuits often have differing pricing models: the pricing may be bandwidth based or volume based. Volume-based pricing may follow a cell-phone-plan-like pricing model, with some amount of allowed volume per month for a fixed monthly cost and a cost per megabyte of consumption for volume exceeding the allowance. For each private and public WAN circuit, a spoke device may establish an Internet Protocol Security (IPSEC or IPSec) VPN tunnel to both hub elements in the hub element HA pair assigned to it. The spoke device may use an algorithm to decide which hub element in the pair should be its primary. 
The algorithm may take as input the bandwidth per private and public WAN circuit, the pricing model per circuit, the health of the individual IPSEC VPN tunnels to both hub elements over each of the public and private WAN circuits, the routing reachability of the hub devices to the hub core router 178 and the like. The goal of the algorithm may be to optimize application performance while keeping volume-based costs low. The algorithm may allow for a default HA behavior and also may allow an administrator to specify rules about which of the circuits in a branch may be allowed to be default active, which ones are backup except on failure of the default active circuits, and when backup circuits may not be used. The algorithm may use the notion of protection groups to achieve its results. The spoke device 124 may decide to pick one hub device 178 as its primary from the HA hub pair assigned to it because it is able to use a lower cost branch circuit. If the primary hub device becomes reachable only over a higher cost, volume-based circuit, but the spoke can reach the second device in the HA pair using the lower cost circuit, then the spoke may make the second hub device its primary. Spoke-driven HA decisions for a hub site may be advantageous. Multi-factor determination of the primary HA element based on application performance, cost considerations and over-ride rules may be advantageous. Device authentication for device security may be provided prior to the creation of secure tunnels, where such device authentication is enterprise-specific and/or specific to a configurable network utilizing a multi-tenant controller 122. Enterprise site-to-site IPSEC VPN over WAN networks may require that the two gateways or elements establishing a tunnel are mutually authenticated based on enterprise requirements. The requirements may ensure that only trusted and authenticated devices may exchange private enterprise data over private or public networks using IPSEC VPN. 
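One way to sketch the multi-factor primary-hub choice described above is a lexicographic score: core reachability first, then preference for a healthy flat-rate (bandwidth-priced) circuit over a volume-priced one to keep usage costs low, then raw bandwidth. The data layout, keys and weights below are illustrative assumptions, not the disclosed algorithm:

```python
def choose_primary_hub(hubs):
    """Pick the primary hub element from an HA pair by ranking
    (reachable, has healthy flat-rate circuit, best circuit bandwidth)."""
    def score(hub):
        if not hub["core_reachable"]:
            return (0, 0, 0)  # a hub that cannot reach the core is unusable
        healthy = [c for c in hub["circuits"] if c["tunnel_up"]]
        if not healthy:
            return (0, 0, 0)  # no working IPSEC tunnel to this hub
        flat = [c for c in healthy if c["pricing"] == "bandwidth"]
        best = max(flat or healthy, key=lambda c: c["bandwidth_mbps"])
        return (1, 1 if flat else 0, best["bandwidth_mbps"])
    return max(hubs, key=score)["name"]
```

In this sketch, a hub reachable only over a fast volume-priced circuit loses to one reachable over a slower flat-rate circuit, matching the cost-driven switchover example in the text.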
Existing authentication methods may follow one of two approaches. The first is the use of manually configured pre-shared keys. This approach may require IT to provision a separate pre-shared key per peer. For 1000 peers, this may mean manually generating 1000 pre-shared keys and configuring them roughly one million times for a fully meshed network. The alternative is to use a single key, but the compromise of even a single router 162 may then expose the entire network, leading to weaker security. The second approach, in lieu of pre-shared keys, is for customers to use PKI (Public Key Infrastructure). This may require them to have a full PKI infrastructure to manage certificates and keys, and to use the offline CRL (Certificate Revocation List) distribution mechanism to manage certificate revocation. They may have to ensure high security and availability for their CA (Certificate Authority) server(s). They may still have to manually provision each gateway device to use PKI. A VPN authentication disclosed non-limiting embodiment may use inbuilt PKI with tiered levels of certificate authentication, which may eliminate nearly all manual configuration, may provide very high authentication security and may still give administrators control over the authentication of the elements. 
The disclosed non-limiting embodiment may provide for one or more of the following:
- Three-tier, PKI-based authentication that may upgrade authentication in stages for higher security.
- A first tier may be a Manufacturing Installed Certificate (MIC) and private key that may provide for device authenticity when shipped from authorized manufacturing sites.
- Manual claim by a customer, which may ensure IT is authorizing each device before it is admitted in the network and may participate in establishing VPNs and activating policy.
- A second tier may be a per-tenant Claim Installed Certificate (CIC), which may bind the element to a tenant following manual authorization.
- Inbuilt and fully automated PKI infrastructure, with separate Certificate Authorities for the MIC and CIC, which may manage the full lifecycle of the certificates.
- Customer-controlled device authorization revocation (declaim). A customer may also control whether a given claimed device can or cannot participate in VPN with other devices.
- Integration with an existing customer PKI infrastructure by allowing a customer to also install a Tenant Installed Certificate (TIC), which may have been issued by the customer's own Certificate Authority.
At manufacturing time, prior to an element leaving a trusted manufacturing site, the device may be installed with a signed certificate and private key. The private key may be kept in a TPM (Trusted Platform Module) module and the like, and the certificate may be signed by a proprietary, manufacturing-only Certificate Authority (termed the MIC CA). During manufacturing, the element may contact the multi-tenant controller 122 using a secure TLS encrypted channel and the like to get its certificate signed. The multi-tenant controller 122 may check to ensure that the request is coming from a manufacturing location and that the hardware ID/serial number is valid. The multi-tenant controller 122 may bind the hardware ID to a default tenant in the certificate and may sign the certificate. 
An element that has been manufactured but not allocated to any tenant may be marked as “available”. The use of the TPM and signed certificate may ensure that when a tenant receives an element it is a genuine element and can be trusted. The multi-tenant controller 122 may host the MIC CA, which may issue signed certificates. The multi-tenant controller 122 may also maintain the binding of the certificate to the specific hardware element. It also may maintain a revocation list of which hardware elements are no longer trusted and the like. Hardware elements that are reported to have been compromised may be marked as revoked by the CA. The CA may use this information to ensure that a tenant may claim only non-revoked trusted elements. When hardware is shipped to a tenant it may be marked “allocated” to that specific tenant. When an element which only has a MIC and has been marked as “allocated” contacts the multi-tenant controller 122, the element may be allowed only to be “claimed” by a tenant. The element may not be allowed to set up VPNs. The communication between the multi-tenant controller 122 and the element may be TLS encrypted and PKI authenticated. The element to multi-tenant controller 122 communication may use bi-directional certificate authentication, with the element using the MIC as the client certificate, and both parties may validate that their certificates are signed by the proprietary CA. When a tenant's administrator logs into the multi-tenant controller 122 they may see all “available” elements allocated to them that are currently online. The administrator may validate that the element hardware ID matches the one that they had ordered and that the element is installed in the right customer location. They may then manually “claim” the element. This provides the multi-tenant controller 122 the authorization from the administrator that this element may now fully participate in the network and can establish VPNs.
The multi-tenant controller 122 may issue a new Claim Installed Certificate (CIC) to the element with an element ID separate from the hardware ID, and it may bind the element to the tenant in the CIC and sign the certificate. The multi-tenant controller 122 may use a separate CIC CA for signing this certificate. The multi-tenant controller 122 and element may then break the existing communication channel based on the MIC and the like and establish a new communication channel with the CIC. This may be a TLS encrypted and bi-directional certificate authenticated channel, with the element using the CIC as the client certificate, where both sides may validate that their certificates are signed by a trusted CA. On validating that the element is in communication with a CIC certificate, the multi-tenant controller 122 may allow the element to establish a VPN with elements at other sites in the customer network. Thus only a manually authorized element with a tenant specific claim certificate may establish VPNs. This may represent the second PKI tier. After an element has been claimed, if a tenant has their own PKI infrastructure, then on an administrator's request the multi-tenant controller 122 may use signed certificates from the tenant's CA and may install them on the element. These signed certificates may be either manually provisioned by the administrator through their login, or the multi-tenant controller 122 may use protocols such as SCEP (Simple Certificate Enrollment Protocol) to acquire them. These certificates may be called “Tenant Installed Certificates” (TIC) and may be signed by a TIC CA and the like which is part of the tenant's own PKI infrastructure. After installation of the TIC, the element and multi-tenant controller 122 may again break the existing communication channel established with the CIC and then re-establish it with the element again using the CIC or TIC as the client certificate.
At this time they both may do certificate validation similar to the CIC-based communication as described herein. In addition, the multi-tenant controller 122 may validate that the element (if it connects with a CIC) also has a signed TIC, which may have been signed by the tenant's TIC CA. This may represent the third PKI tier. In addition to the MIC, CIC and TIC PKI based authentication and authorization, the tenant administrator also may have controls to temporarily disable a particular claimed element from participating in VPN. This may not revoke the certificates but may only suspend the ability of that element to establish VPNs. The administrator may enable a suspended element to re-establish VPN at a time in the future. If a MIC or CIC is near expiry or is damaged, then the multi-tenant controller 122 may transparently allocate a new signed certificate for it without administrator intervention. For TIC certificates the multi-tenant controller 122 may transparently use SCEP or other protocols to get new signed certificates on behalf of the element. A customer may also manually install TIC certificates in lieu of old expiring or damaged ones. When a customer no longer wants to use an element and wants to dispose of it or return it for RMA (Return Merchandise Authorization) reasons, they may “declaim” the element. When an administrator declaims an element, the multi-tenant controller 122 may instruct the element to wipe its TIC and CIC and associated private keys. The multi-tenant controller 122 may also mark the element as being declaimed and no longer allocated to the customer. The multi-tenant controller 122 further may mark the CIC as revoked and instruct the Tenant CA to mark the TIC as revoked. The element thus enters the “available” state and may no longer be able to participate in establishing VPNs, since it does not have a CIC and is not in the “claimed” state.
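The tiered authorization described above can be sketched as a small decision function. This is a hedged illustration only: the record fields, tier names as strings, and the operation sets are assumptions for the sketch, not the actual controller's data model.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical certificate record; field names are illustrative only.
@dataclass
class ElementCert:
    tier: str                  # "MIC", "CIC", or "TIC"
    revoked: bool
    tenant_id: Optional[str]   # None for a MIC bound only to the default tenant

def allowed_operations(cert: ElementCert) -> set:
    """Operations a connecting element may perform, by certificate tier:
    a MIC may only be claimed; a tenant-bound CIC/TIC may establish VPNs."""
    if cert.revoked:
        return set()                    # revoked hardware may not even be claimed
    if cert.tier == "MIC":
        return {"claim"}                # first tier: claim only, no VPN
    if cert.tier in ("CIC", "TIC") and cert.tenant_id is not None:
        return {"claim", "vpn"}         # second/third tier: VPN permitted
    return set()
```

A declaimed element, having wiped its CIC, would present only the MIC again and fall back to the claim-only state, matching the lifecycle described above.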
This three-tier authentication and authorization may provide in-depth security, and a PKI infrastructure for the first two tiers that may be multi-tenant and may manage a full certificate lifecycle for the first two tiers. The third tier may have the ability to integrate with existing PKI infrastructure in a tenant's environment, acquire signed certificates from a tenant's CA and use them for authorization for VPN establishment. This may give administrators full control and the ability to manually authorize an element (claim the element) before it is admitted to the network, and also to de-authorize the element when they want to dispose of it and the like. The conversion of this authorization to a new tenant-bound certificate/key, or the removal of the tenant-bound certificate/private key, may be handled transparently. For a claimed element, further granular VPN level authorization may also be provided in an easy to use manner. Site-to-site networking model and underlying VPN links/tunnels creation may be based on dynamic rules. The multi-tenant controller 122 based networking model may use logical links that may then be realized using IPSEC tunnels (also called VPNLinks or vpnlinks) over public WAN (same as Internet 142) networks or private WAN (same as MPLS 144) networks for communication between sites. The multi-tenant controller 122 may use a rules-based approach to dynamically bring up and tear down links between sites. The rules based approach may use one or more of the following to decide when to bring up and tear down tunnels:
Knowledge of site type (hub or spoke).
Amount of data volume between spoke sites.
Application performance between spoke sites.
Tenant administrator given directives.
When a Site is associated with a WAN Network, the multi-tenant controller 122 may create a logical “Site WAN Interface” and the like.
For every pair of Site-WAN Interfaces belonging to two separate sites (whether hub or spoke) the multi-tenant controller 122 may create a link, which may be a logical connection between the two sites. A link may be put into categories such as:
Hub to Spoke (H2S) links.
Spoke to Spoke (S2S) links.
[DYNAMIC RULE 1 - DEFAULT] For every Hub to Spoke (H2S) link the multi-tenant controller 122 may instruct the right elements in both sites to establish an IPSEC tunnel to activate the link. In the spoke, the right element may be the currently active element in a pair. In the hub, the right element may be the designated primary hub element for that spoke site.
[ADMINISTRATOR RULE 1] The multi-tenant controller 122 may provide control to the administrator to disallow a specific H2S link from being established. The multi-tenant controller 122 may use this directive and override the default rule.
[DYNAMIC RULE 2] The multi-tenant controller 122 may monitor the total volume of traffic between each pair of spoke sites that is travelling either over MPLS 144 between the sites or via a hub site which is acting as a transit site for traffic between the two spokes over IPSEC tunnels (i.e. traffic goes from Spoke 1 to Hub Site A over an IPSEC tunnel and from Hub Site A to Spoke 2 over a second IPSEC tunnel) and the like. If the traffic volume exceeds a certain threshold over a certain time period based on a formula, the multi-tenant controller 122 may decide that the link between the spokes should be activated. It may instruct the elements in the two spoke sites to directly establish an IPSEC tunnel between them so that the traffic between the spoke sites can use the direct IPSEC tunnel.
[DYNAMIC RULE 3] For links activated using [DYNAMIC RULE 2], the multi-tenant controller 122 may continue to monitor the total volume of traffic between the two spoke sites.
If the total volume over a certain time period drops below a particular threshold then the multi-tenant controller 122 may decide to de-activate the link. It may instruct the two elements in the two sites to destroy the IPSEC tunnel.
[DYNAMIC RULE 4] The multi-tenant controller 122 may monitor application performance between each pair of spoke sites for applications whose data is travelling either over MPLS 144 between the sites or via a hub site which is acting as a transit site for traffic between the two spokes over IPSEC tunnels (i.e. traffic goes from Spoke 1 to Hub Site A over an IPSEC tunnel and from Hub Site A to Spoke 2 over a second IPSEC tunnel) and the like. If the application performance is poor, then based on historical latency and bandwidth information of IPSEC VPN directly between the two sites, or based on modeling and predictive analysis and the like, the multi-tenant controller 122 may decide to activate the link between the two spoke sites.
[ADMINISTRATOR RULE 2] The multi-tenant controller 122 may provide control to the tenant administrator to permanently allow a specific Spoke-to-Spoke link to be established. The multi-tenant controller 122 may use this directive as an override to the rules.
[ADMINISTRATOR RULE 3] The multi-tenant controller 122 may provide control to the tenant administrator to disallow a specific Spoke-to-Spoke link from being established. The multi-tenant controller 122 may use this directive as an override to the rules.
Traditional IPSEC VPNs may have one of the following approaches:
A manually-configured spoke specified on each router 162 to indicate which other routers 162 it should VPN with; or
Specifying that the spoke router 162 may contact a hub router 162 for all traffic, and if the hub router 162 decides that the traffic is meant for another spoke site then it may tell the originating spoke router 162 to establish an IPSEC VPN directly with the destination spoke router 162.
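The interplay of the dynamic volume rules and the administrator overrides above can be sketched as a single decision function. The thresholds, parameter names, and the three-valued override flag are hypothetical simplifications; the actual controller bases activation on a formula over a time period rather than a single sample.

```python
def s2s_link_should_be_up(volume_bytes, currently_active,
                          activate_threshold, deactivate_threshold,
                          admin_override=None):
    """Sketch of DYNAMIC RULES 2/3 combined with ADMINISTRATOR RULES 2/3.
    admin_override: True = permanently allow (ADMIN RULE 2),
    False = permanently disallow (ADMIN RULE 3),
    None = fall through to the dynamic volume rules."""
    if admin_override is not None:
        return admin_override
    if currently_active:
        # DYNAMIC RULE 3: tear down only when volume drops below the floor.
        return volume_bytes >= deactivate_threshold
    # DYNAMIC RULE 2: bring up when volume exceeds the activation threshold.
    return volume_bytes > activate_threshold
```

Using separate activation and deactivation thresholds gives hysteresis, so a link hovering near a single threshold does not flap up and down.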
The current disclosed non-limiting embodiment may provide one or more of the following new functionalities:
Provides a multi-tenant controller 122 driven central control for the establishment of IPSEC across all configurable devices.
Provides dynamic establishment of spoke to hub IPSEC tunnels as the default with zero configuration.
Provides an application performance driven IPSEC tunnel establishment between sites.
Provides a data volume driven IPSEC tunnel establishment between sites.
Provides for a tenant administrator given directives approach to permanently allow two sites to always communicate with each other using an IPSEC tunnel over public WAN or private WAN.
Provides for a tenant administrator given directives approach to permanently disallow two sites from communicating with each other using an IPSEC tunnel over public WAN or private WAN.
Existing centralized key generation schemes (Kerberos and GROUP VPN and the like) may require that the multi-tenant controller 122 be available at all times. These may operate within a highly reliable LAN or WAN environment, due to which the network reachability of the central key generator may be assured with a very high SLA. However, for a centralized multi-tenant controller 122 that may be reachable only over Internet circuits, it may be possible that due to multiple ISPs there may be a disruption in multi-tenant controller 122 reachability. These outages may happen due to natural disasters, breaking of intercontinental submarine cables, equipment failure at core peering points and the like. These disruptions may last from a period of a few hours to a few days and the like. In such a scenario the key generation may not provide high availability and resiliency for the encrypted data channels. According to the methods and systems of the present disclosure, VPN high availability may be provided upon a failure to reach a multi-tenant controller 122.
In the multi-tenant controller 122 driven model with centralized generation of shared secrets by a multi-tenant controller 122 for two sites to establish IPSEC tunnels, it may be possible that, due to the network connectivity issues explained earlier or issues in a customer's internal network, the multi-tenant controller 122 may not be reachable for certain extended periods. Since shared secrets may expire, the multi-tenant controller 122 may generate new shared secrets on demand by a set of elements; however, if the elements cannot contact the multi-tenant controller 122 then they may not get new shared secrets, which will eventually cause the IPSEC tunnel to be brought down, thereby disrupting data path traffic. This disclosed non-limiting embodiment provides a mechanism to allow the data path to operate in a headless (devices actively functioning while unable to communicate with the multi-tenant controller 122) mode for an extended period of time. The multi-tenant controller 122 may allocate a fixed validity period based on wall clock for a shared secret. For each IPSEC data tunnel the multi-tenant controller 122 may issue multiple shared secrets with start and end validity periods in the future. The number of shared secrets issued may cover the full headless period. If two elements that have established an IPSEC tunnel cannot reach the multi-tenant controller 122, they still may have sufficient shared secrets to rekey the IPSEC data tunnel over the headless period. Based on security considerations and standardized best practices, a multi-tenant controller 122 may determine a maximum validity period for using a specific shared secret. When the multi-tenant controller 122 decides to activate a link using an IPSEC data tunnel, the multi-tenant controller 122 may generate a shared secret for the two elements and set the shared secret's start and end validity times based on wall clock.
The start time may be a time before the current wall clock time and the end time may be a time beyond the current wall clock. The determination of the start time may be based on a specific distribution algorithm that attempts a fair distribution of start times. The multi-tenant controller 122 may issue new shared secrets for millions of VPN links, and the fair distribution ensures that the multi-tenant controller 122 may be evenly loaded and does not suffer peaks of re-issue load. The multi-tenant controller 122 may have a default headless period value. The value may be changed by a tenant administrator within a multi-tenant controller 122 specified minimum and maximum bound and the like. The multi-tenant controller 122 may use the headless period value and divide it by the maximum shared secret validity period to obtain the maximum number of shared secrets it should generate to cover the headless expiry period. The multi-tenant controller 122 may send to both elements the full list of pre-allocated shared secrets. The elements may use the first shared secret and, a certain period before it is to expire, the elements may notify the multi-tenant controller 122 to request a new shared secret to ensure that they continue to have a filled pipeline of shared secrets covering the headless period. The multi-tenant controller 122 may respond to the first element that contacts it and issue a single new shared secret to both elements that have the IPSEC data tunnel. The multi-tenant controller 122 may generate multiple shared secrets to ensure that the total period from the current time to the maximum headless period is fully covered with a sufficient number of shared secrets. If an element cannot reach the multi-tenant controller 122 to send the notification, it may still continue to use the next available shared secret from the pre-allocated list. If the element's peer element can notify the multi-tenant controller 122 then the multi-tenant controller 122 may generate a new shared secret.
If both elements cannot reach the multi-tenant controller 122 then the multi-tenant controller 122 may not issue a new shared secret. If both elements cannot reach the multi-tenant controller 122 and multiple shared secrets have expired, and one or both of them are able to reach the multi-tenant controller 122 after that, then the multi-tenant controller 122 may follow the same method and may issue multiple shared secrets to cover the headless period from the current time. If only one element can reach the multi-tenant controller 122, it may continue to request new shared secrets from the multi-tenant controller 122. If one (or both) elements cannot reach the multi-tenant controller 122 and it has (or they have) consumed all shared secrets, it means that the element (or both elements) may be operating beyond the allowed maximum headless period. In this condition the element may bring down the IPSEC tunnel. The element's peer, if it is able to talk with the multi-tenant controller 122, may continue to request new shared secrets from the multi-tenant controller 122 even though it is not able to use them. Thus the shared secrets refresh mechanism may operate as long as any one element can reach the multi-tenant controller 122. This disclosed non-limiting embodiment may allow for the use of a cloud based multi-tenant controller 122 only reachable via the Internet 142 that issues centralized keys but still ensures that, in cases of temporary disruption in reaching the multi-tenant controller 122, there is zero impact to existing encrypted data channels (IPSEC), while ensuring that the security of the encryption is preserved. A centrally managed shim IP address may be used to optimize IPSEC security policy. A spoke with multiple IPSEC VPN tunnels over multiple public WAN or private WAN interfaces may need to decide which egress VPN tunnel it should use based on policy and the like. The spoke may maintain a database of currently active IPSEC VPN tunnel Security Associations (SA) and the like.
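The pre-allocated shared-secret pipeline can be illustrated with two small helpers: one computing how many secrets are needed to cover a headless period (the division described above), and one selecting the secret valid at the current wall-clock time. The tuple layout for a secret is an assumption made for the sketch.

```python
import math

def secrets_needed(headless_period_s, max_validity_s):
    # Headless period divided by the maximum per-secret validity period
    # gives the number of pre-allocated secrets required to cover it.
    return math.ceil(headless_period_s / max_validity_s)

def current_secret(secrets, now):
    """secrets: list of (start, end, key) tuples with wall-clock validity
    windows.  Returns the key valid at `now`, or None when the pipeline is
    exhausted, i.e. the element is beyond the allowed headless period and
    must bring the IPSEC tunnel down."""
    for start, end, key in secrets:
        if start <= now < end:
            return key
    return None
```

For example, a 7-day headless period with a 24-hour maximum validity per secret would require 7 pre-allocated secrets; an element that cannot reach the controller simply walks forward through the list as each window expires.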
A tunnel may consist of a pair of SAs, one in the inbound and another in the outbound direction. Each tunnel may be identified in the kernel using one or more of the following parameters: a 32-bit index called the Security Parameter Index (SPI), the destination IPv4 or IPv6 address of the IPSEC peer and the like. Each application flow may be mapped to the pair of IPSEC SAs, and this mapping may be achieved in the kernel using a Security Policy Database (SPD). This database may consist of classification rules which may have fields based on header fields of the application flow. For each packet of an application flow the fields may be extracted to form a key and a lookup is done against the classification rules in the SPD. The number of entries in the SPD may become very large. As an example, a spoke with 1000 users, each with two devices, each device having 20 to 30 application flows, may result in 40,000 to 60,000 entries in the SPD. This may slow down the SPD lookup and may add latency to packet processing. Furthermore, each time there is a new application flow or when an existing application flow dies, the SPD may have to be modified, and while it is being modified it may not be used for packet processing, which may further delay packet processing. Addition or deletion of application flows may be a frequent activity. This may be a problem even for smaller branches 152 with a few thousand SPD entries. The disclosed non-limiting embodiment may provide the use of an additional shim IP header which may use a shim IP address per IPSEC tunnel. This shim IP header may be attached to the application flow before the SPD lookup is done. The use of the shim IP header may considerably reduce the number of entries in the SPD. Since the shim IPs are tied to the IPSEC tunnel, and the creation/deletion of tunnels may happen at a significantly lower rate compared to application flow creation/deletion, the SPD may be updated at a very low rate. Both these factors may improve packet processing performance.
Each IPSEC tunnel may be given a Path-ID, which is a 64-bit number, and for each such Path-ID the multi-tenant controller 122 may assign a pair of shim IP addresses for the two sides of the IPSEC tunnel. The data path, after doing a policy lookup, may determine a Path-ID and then may attach a shim IP header including the pair of shim IP addresses for the Path-ID. The security policy database may then be looked up using only the shim IP address. For a spoke with 1000 IPSEC tunnels this may result in an SPD which has only 1000 entries, compared to 40,000 to 60,000 for a branch 152 with 1000 users. For smaller branches 152 with 50 users and 2000 to 3000 flows, the rate of change of IPSEC tunnels may be lower by a factor of X compared to the rate of change of application flows. This may result in a very low rate of update of the SPD compared to what would happen with an SPD based on application flow header fields. The multi-tenant controller 122 may use a database driven algorithm to manage the IP address space. The algorithm may be highly scalable and may manage the shim IPs for hundreds of millions of flows across tens of thousands of tenants. The use of a shim IP header to improve SPD lookup performance may be advantageous. The use of a multi-tenant controller 122 driven, highly scalable, multi-tenant shim IP address management may be advantageous. Inline BFD may be used for IPSEC VPN tunnel health management and control negotiation. The traditional IPSEC control protocol may use a method called “Dead Peer Detection” (DPD), which may be a control channel method of detecting if a peer is down. Since the traditional IPSEC control channel may be out of band, the control channel may be healthy but the data channel may still be down. For example, the control channel may run on UDP port 500, which may be allowed by a firewall, but the data channel IPSEC ESP may be blocked.
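A rough illustration of the SPD size reduction: keying the database by per-tunnel shim IP instead of per-flow header fields. The numbers mirror the example in the text; the dictionary is a simplification of the kernel SPD, and the shim addresses and SA names are hypothetical.

```python
def spd_entry_counts(users, devices_per_user, flows_per_device, tunnels):
    """Compare a per-flow SPD with a shim-IP-keyed SPD."""
    per_flow = users * devices_per_user * flows_per_device  # one entry per flow
    per_shim = tunnels                                      # one entry per tunnel
    return per_flow, per_shim

# Shim-IP-keyed SPD: a constant-time lookup, updated only when tunnels
# are created or destroyed rather than on every flow add/delete.
spd = {"10.255.0.1": "sa-outbound-17"}

def spd_lookup(shim_ip):
    return spd.get(shim_ip)
```

With 1000 users, 2 devices each, and 25 flows per device, the per-flow SPD holds 50,000 entries (inside the 40,000 to 60,000 range cited above), while the shim-keyed SPD for 1000 tunnels holds only 1000.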
If dead peer detection detects that the control channel is down, it may tear down the data channel and the peers may use IKE or IKEv2 and the like to establish a new data channel. Thus control channel signaling may be used to determine the health of a data channel. However, it may be possible that the data channel is working well and the issue is only control channel related. This may cause unnecessary disruption in data channel traffic. Instead of using DPD, the current disclosed non-limiting embodiment may use in-band signaling within the data channel by sending Bidirectional Forwarding Detection (BFD) messages between the two ends of the IPSEC VPN tunnel. This may ensure that data tunnel failures are quickly detected. If BFD indicates a data tunnel is down, it may then trigger the control channel to begin negotiation to determine the cause of failure. The control channel may not immediately destroy the IPSEC SAs but instead may keep them up until hard expiry. This may ensure that if the disruption in the data tunnel is temporary the tunnel may not be destroyed but may be used as soon as the disruption goes away. The multi-tenant controller 122 associated approach may treat each IPSEC data tunnel over public WAN or private WAN as a direct path between two elements. BFD may be used to ensure that this path is healthy and available. This may be similar to how routing would use BFD to detect the health of a physical circuit that directly connects two adjacent routers 162 and the like. The two sides may use BFD as an application that sends BFD messages on both sides using the shim IP addresses as the source and destination IP addresses. When the IPSEC VPN control daemon establishes an IPSEC data tunnel, it may signal the BFD daemon to create a BFD session. It may provide the source and destination IPs for the BFD packets and also the Path-ID for that session. The BFD daemon may send BFD packets per RFC 5880. The BFD message interval may be kept at 750 ms.
This may result in a 2.25 second failure detection period. If the BFD daemon detects that the tunnel is down, it may signal the IPSEC VPN control daemon and provide it the Path-ID of the tunnel. The IPSEC control daemon may then start control channel negotiation with the relevant peer to find the reason for the failure and to take corrective action. The BFD daemon may continue to send echo packets and, if it finds that the data tunnel is up, then it may again signal the VPN control daemon using the Path-ID to indicate that the IPSEC tunnel is up. The use of BFD as in-band signaling to indicate the health of an IPSEC data tunnel may be advantageous. The use of data tunnel health failure to drive IPSEC control channel negotiation, instead of the other way round, may be advantageous. According to the methods and systems of the present disclosure, VPN tunnel health management may be optimized for bandwidth. As explained herein, the configurable device 124 may use BFD for monitoring the health of IPSEC data tunnels. In order to support fast failover from an active to a backup IPSEC data tunnel and minimize traffic disruption, in one disclosed non-limiting embodiment, the configurable device 124 may send BFD echo packets as frequently as every 750 ms. When there is a large number of IPSEC tunnels (such as in the thousands) from a branch 152 to other data centers 128 and branches 152, the amount of BFD traffic can be large. In order to reduce the bandwidth consumption of BFD traffic but still do quick failure detection, this invention may use the encrypted application traffic received on a tunnel as an indication of the tunnel being up and suppress the BFD echo packets. In the absence of application traffic, BFD packets may be used to check the health of the tunnel. In a variant of this approach, application traffic packets may also be used to carry the BFD signaling information. NAT may be integrated for VPN tunnels, along with IP address change management for VPN tunnels with and without NAT.
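The 750 ms interval and 2.25 second detection figure above follow from RFC 5880's detection multiplier, commonly 3: a session is declared down after that many consecutive intervals pass with no packet received.

```python
def bfd_detection_time_ms(tx_interval_ms, detect_mult=3):
    # RFC 5880 semantics: the session is declared down after
    # detect_mult consecutive missed packets, so
    # detection time = detection multiplier x transmit interval.
    return detect_mult * tx_interval_ms
```

With the 750 ms interval used above, 3 x 750 ms = 2250 ms, i.e. the 2.25 second failure detection period stated in the text.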
The devices in two sites that need to establish a VPN link may be behind a Network Address Translator (NAT). The NAT can be a destination IP NAT or a source NAT with PAT (Port Address Translation) type of NAT and the like. The NAT devices could restart and, under reboot, they may also undergo an IP address change of their WAN facing interfaces. Only one side of the IPSEC tunnel may be behind a NAT. The configurable devices 124 themselves may have static IP addresses or DHCP IP addresses that may undergo change. The NAT devices may be a firewall NAT with rules which can move from allowing IPSEC to disallowing IPSEC traffic and vice versa. If they disallow traffic then they may keep a cached deny entry which needs to time out; otherwise, even if the policy is moved to allow, IPSEC sessions may still get denied due to a cached deny IPSEC entry. The disclosed non-limiting embodiment described herein may ensure that the IPSEC data tunnel is kept up in most of the aforementioned scenarios, and special handling is required only if there is a NAT device in front of one of the configurable devices 124. If there is a NAT device in front of both configurable devices 124 then one of them should be a destination NAT device. This disclosed non-limiting embodiment may use the notion of initiator and responder and the use of the multi-tenant controller 122 to tell each side the IP address of the peer. If there is a destination NAT in front of one of the parties then it may require the multi-tenant controller 122 to specify the IP address of only that NAT as the peer destination IP. The disclosed non-limiting embodiment described herein may use NAT Traversal. For example, it may send IPSEC packets over UDP. Both parties may send request messages in which they may carry specific payloads about the original IP address and port number of the originating device and the IP address and port to which it may be sending the packet.
The receiving device may compare this information with the outermost NAT Traversal UDP/IP header's IP and port. Using this, each party may find out whether it is behind a static or dynamic NAT or whether the peer is behind a static or dynamic NAT and the like. Based on this, they both may derive their role as initiator or responder. Once this role is established they may complete their control channel communication to establish an IPSEC data tunnel. The two parties may also track dynamic changes of IP addresses on either side and transparently update their remote addresses while minimizing traffic disruption. If, due to a change of IP or NAT role, NAT reboots, NAT policy changes and the like, the data tunnel comes down, then the control tunnel may try to re-establish the notion of initiator and responder and may allow the roles to change if necessary to keep the control channel up. If there is a firewall NAT device which denies the control channel communication due to a cached denial entry whose timeout does not expire because of continuous control channel negotiation packets, then the control channel on the configurable devices 124 may use an exponential back-off timeout method to decide when to transmit the next control channel packet. Using this, they may wait for a few seconds to a few minutes before sending the next control channel packet. This may allow the firewall NAT to expire its cached denial entry. Updated policy on the firewall NAT may allow the control channel communication, and once the cached denial entry is deleted due to expiry a new allow entry may be inserted by the firewall NAT. Due to this, control channel communication between configurable devices 124 may eventually succeed. The ability to track the change of IP address of the configurable devices 124 with and without NAT for site-to-site IPSEC VPN may be advantageous. The ability to track the change of IP address of source NAT devices may be advantageous.
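The exponential back-off for control-channel retransmission can be sketched as below. The base delay, growth factor, and cap are assumed values chosen only to land in the few-seconds-to-few-minutes range the text describes; the actual timers are not specified in the source.

```python
def backoff_schedule(base_s=2.0, factor=2.0, cap_s=300.0, attempts=8):
    """Delays (in seconds) before each successive control-channel
    retransmit, growing exponentially but capped, so that the firewall
    NAT's cached denial entry has a chance to expire between attempts."""
    delays, delay = [], base_s
    for _ in range(attempts):
        delays.append(delay)
        delay = min(delay * factor, cap_s)
    return delays
```

With the defaults the schedule is 2, 4, 8, 16, 32, 64, 128, 256 seconds: early retries are quick, while later gaps of a few minutes let the cached deny entry time out.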
The ability to use exponential back-off times to send the next control packet may be advantageous. Applications may be detected across tenants and across sites within a single tenant. As described in the other embodiments above, the methods and systems described herein may bring innovative approaches to detecting applications dynamically. The application identification information that is dynamically learned from one or more sites may also be learned by the multi-tenant controller 122 and be pushed to other sites of the same tenant. Further, since the multi-tenant controller 122 is delivered as a cloud-delivered, multi-tenant service, the information learned at the multi-tenant controller 122 may be aggregated and sanitized, and this aggregated information may be distributed to the sites of other tenants as well. Both of the above may allow for the collective betterment of application identification across sites and tenants. Without this, the information learned at one site may remain localized at that site, and every site may have to go through the learning process independently. This collection and dissemination may allow for very quick convergence of the application definition. For example, if a device at a site learned that srv1.lync-int.com is associated with office365, all the other sites may come to know about this very quickly. Spoke devices may detect applications, more specifically the SaaS applications 604, using the anchor domains based approach described in the embodiments herein. In all the mentioned approaches, the application definitions may get constantly refined as the traffic starts to flow through the spoke devices. These refinements are pushed to the multi-tenant controller 122, where they may be aggregated and sanitized through an analytics platform, and may be distributed to the other sites of the same tenant and those of other tenants. This is another example of a slow feedback loop that involves the devices and the multi-tenant controller 122.
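The controller-side aggregation and sanitization of dynamically learned domain-to-application mappings might look like the sketch below, where a mapping is kept only when independently reported by multiple sites. The `min_sites` rule and the tuple shape of a report are illustrative stand-ins for the analytics platform, which the source does not detail.

```python
from collections import defaultdict

def aggregate_app_mappings(site_reports, min_sites=2):
    """site_reports: iterable of (site_id, domain, app) observations.
    Returns {domain: app} for mappings confirmed by at least `min_sites`
    distinct sites; a crude stand-in for the controller's sanitization
    step before redistribution to other sites and tenants."""
    confirmations = defaultdict(set)
    for site_id, domain, app in site_reports:
        confirmations[(domain, app)].add(site_id)
    return {domain: app
            for (domain, app), sites in confirmations.items()
            if len(sites) >= min_sites}
```

In this sketch a mapping seen at only one site (a possible misclassification) is held back, while one confirmed by several sites, like the srv1.lync-int.com example above, is pushed out quickly.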
Currently available solutions for WANs involve appliances and routers162that may be either manually provisioned or provisioned through a central management station which stops at just managing the devices and the like. The disclosed non-limiting embodiment described herein provides for a new multi-tenant controller122based approach towards learning and dissemination of application definitions in the wide area networking space. The multi-tenant controller122associated approach may, in one disclosed non-limiting embodiment, involve all the HTTP domain names that the spoke devices were not able to associate with an application, and may identify, analyze and categorize them to enhance the application definitions, in order, for example, to determine relevancy and irrelevancy. Configurable devices124may inspect all the traffic going in and out of a branch152and collect the set of HTTP domain names, trying to associate them with known applications dynamically, as stated in the above embodiments. All the domain names that have not been found to be related to a known application may be gathered and sent to the multi-tenant controller122for further analysis. At the multi-tenant controller122, these domain names may be analyzed using data mining techniques and the like, and they may get further categorized into one or more of the following categories: new applications, leading to new application definitions; existing applications, resulting in the enhancement of existing applications; and uninteresting, and therefore ignored. This may help in improving the overall system's effectiveness in identifying applications. The disclosed non-limiting embodiment herein describes a WAN network that may constantly and automatically evolve and adapt based on the traffic observed on a customer's network. As described herein, a multi-tenant controller122allows for planning and visualizing changes to the network before the changes are actually made (i.e., the versioning). 
In such a “preview” mode, the network administrator is able to make proposed changes and see their effects in different ways. When such changes and their effects are deemed satisfactory, the administrator user may want to commit the changes to the network, thus converting the planned changes to committed changes. This may allow the “network” to be treated like a database: a collection of assets, configurations, states and statistics. Databases are versioned, which allows tracking changes to the database as a sequence of commit transactions. Just as a source code control and versioning system allows its various software components to be versioned and tracked through a series of commits and merges, the ‘network’ may likewise be versioned. The administrator user ‘previews’ the changes and their effects based on a version of the network, and once the administrator is satisfied and ready to commit the changes, the multi-tenant controller122creates a new version of the network. More than one version of the network can be tracked and kept by the multi-tenant controller122so that if any change causes harm to the network, the administrator can go back to a previous ‘safe’ version of the network. The multi-tenant controller122may allow for two different versions to be compared, which will highlight the configuration and asset inventory changes. The multi-tenant controller122may keep all the configuration, assets inventory and state information, in addition to statistics collected from the network. A version of the network is a snapshot of configuration, assets inventory and state at the time of creation of the version. For statistics, versioning is a version record added to the time series of statistics data collected from the network. To compare two versions, the multi-tenant controller122does a ‘diff’ of the two snapshots representing the two versions. Statistics are not compared in the same way. 
Instead, statistics may be analyzed in the context of the versions and their respective times of existence. As mentioned in other embodiments, a multi-tenant controller122is delivered from the cloud and administrative users get to interact with the system at one central place, and may comprise a semi-distributed control plane cache (with corresponding statistics). However, this does not mean that all the elements deployed in a customer's network need to come to this single central place to retrieve their provisioning info and perform their functions. Instead the multi-tenant controller122functions are dis-aggregated and some of the latency-sensitive functions are distributed across the globe to yield better response times to the elements. Elements in a customer's network may be deployed across wide geographies. These elements may need to fetch their provisioning info and push the fine granular statistics to the multi-tenant controller122. If all of the multi-tenant controller122functions are centralized like the network portal and API gateway, the elements may suffer high latencies in communicating with the multi-tenant controller122and effectively performing their functions. To mitigate this, some of the key latency-sensitive multi-tenant controller122functions are dis-aggregated and distributed across the globe while the rest remain centralized. All this happens in the backend; the customer is unaware of this and continues to come to one place to interact with the system. Distribution of these functions across wide geographies, while still functioning as a centralized multi-tenant controller122from the customer's perspective, may be advantageous. Multi-stage, multi-factor classification and policy enforcement for performance, path selection, SERVICE, and VXWAN selection may be provided at the data path layer. The policy may use higher level constructs related to Application, Identity, Performance, Path, SERVICE and VXWAN based isolation. 
The information that is available in incoming flow packets, though, is at a lower network layer114. Incoming flows and packets may need to be classified by the data path at both a network layer and at a higher policy layer112to find which policy rule they match. This involves the use of a two-stage classification. Following this two-stage classification, the data path may need to select the best path for the application based on VXWAN204, SERVICE, Performance and Path directives in policy. The data path may use a specific mechanism that ensures that policy is satisfied with the best possible performance108for the application and reachability110of the application. A policy set may consist of policy rules. Each policy rule may specify classification columns including an Application and an Identity. An application may be specified using higher-level L7 constructs and potentially multiple L7 regular expression based rules. An application may itself include a group of applications. Similarly, identity may be specified using a string-based name. Each identity can specify an individual user168or device, or can specify a group of users168or devices, and in some cases the groups may identify other groups. Each policy rule may also specify the action to take if the traffic matches the classification columns. The action is in terms of performance, path directive, SERVICE and VXWAN Isolation. There may be multiple policy rules, which form part of a policy set. A policy set may apply to multiple sites. A multi-tenant controller122sends all the policy rules of a policy set to the data path of the element in the site. In each policy rule a multi-tenant controller122may convert the application specification into an Application ID or a list of Application IDs and the identity specification into a list of Identity IDs. 
The multi-tenant controller122may convert the path directives into a list of Path IDs, SERVICE into a SERVICE ID, VXWAN204into a network-isolation identifier and Performance into a performance ID. It then may form the policy rule using these IDs and send the policy rule to the element data path. It also provides an ID for each policy rule. A data path may use a multi-stage classification method to classify traffic. In the first stage it may take application traffic and identify the traffic as belonging to a specific application ID or a list of application IDs and as belonging to an identity ID or a list of identity IDs. As part of the first stage of classification a data path may use a set of helper modules that map the application and identity IDs to network level constructs such as IP address, TCP and UDP ports, IP DSCP (Differentiated Services Code Point) fields, layer 2 VLAN tag, layer 2 VLAN priority, etc. In the second stage it may take the IDs coming out of the first stage and do another classification lookup to identify the policy rule that the traffic matches. The data path may use a specific algorithm for disambiguation if multiple policy rules match the incoming traffic. It may use both the application and identity fields as well as the Path, SERVICE, VXWAN204and Performance fields for disambiguation. After finding a policy rule match, the data path may use the action fields in the policy in order to enforce policy. For this, it may use helper modules to map the path ID to data path level next hop interfaces or tunnels, to map the performance ID to a data path level BW control queue and to map the network-isolation identifier to a network path level encapsulation shim. The data path may use a specific algorithm that orders the path selection based on the VXWAN directive, SERVICE directive, path directive rules and performance directive needs. 
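The two-stage classification described above might be sketched as follows. This is a minimal sketch under assumed data shapes: the packet fields, rule structures, and lookup keys are illustrative assumptions, not the element's actual data structures.

```python
def classify(packet, l7_rules, identity_map, policy_rules):
    """Two-stage classification sketch.

    Stage 1: map network-level fields of a packet to an application ID
    (via hypothetical protocol/port rules) and an identity ID (via source IP).
    Stage 2: find the first policy rule matching both IDs.
    """
    # Stage 1: network-level constructs -> higher-level IDs.
    app_id = l7_rules.get((packet["proto"], packet["dport"]))
    identity_id = identity_map.get(packet["src_ip"])
    # Stage 2: ID pair -> policy rule carrying the action fields.
    for rule in policy_rules:
        if app_id in rule["app_ids"] and identity_id in rule["identity_ids"]:
            return rule
    return None
```

In the real element, the stage-1 helper modules would cover many more constructs (DSCP, VLAN tag, VLAN priority, L7 regular expressions), and the stage-2 lookup would apply the disambiguation algorithm when multiple rules match.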
Once a final policy rule has matched, the data path may add a flow entry for the incoming flow, and subsequent packets belonging to the flow use the flow entry rather than undergoing full multi-stage classification. While application and/or identity have been used at the data path layer for the purpose of security, the use of both application and identity for path selection for application performance and isolation at the data path level may be advantageous, and the selection of the best path based on a combination of path directives, isolation needs and performance needs may be advantageous. The methods and systems described herein may be deployed in part or in whole through a machine that executes computer software, program codes, and/or instructions on a processor. The processor may be part of a server160, client512, network infrastructure, mobile computing platform, stationary computing platform, or other computing platform. A processor may be any kind of computational or processing device capable of executing program instructions, codes, binary instructions and the like. The processor may be or include a signal processor, digital processor, embedded processor, microprocessor or any variant such as a co-processor (math co-processor, graphic co-processor, communication co-processor and the like) and the like that may directly or indirectly facilitate execution of program code or program instructions stored thereon. In addition, the processor may enable execution of multiple programs, threads, and codes. The threads may be executed simultaneously to enhance the performance of the processor and to facilitate simultaneous operations of the application. By way of implementation, methods, program codes, program instructions and the like described herein may be implemented in one or more threads. 
The thread may spawn other threads that may have assigned priorities associated with them; the processor may execute these threads based on priority or any other order based on instructions provided in the program code. The processor may include memory that stores methods, codes, instructions and programs as described herein and elsewhere. The processor may access a storage medium through an interface that may store methods, codes, and instructions as described herein and elsewhere. The storage medium associated with the processor for storing methods, programs, codes, program instructions or other type of instructions capable of being executed by the computing or processing device may include but may not be limited to one or more of a CD-ROM, DVD, memory, hard disk, flash drive, RAM, ROM, cache and the like. A processor may include one or more cores that may enhance the speed and performance of a multiprocessor. In embodiments, the processor may be a dual-core processor, quad-core processor, other chip-level multiprocessor and the like that combine two or more independent cores (called a die). The methods and systems described herein may be deployed in part or in whole through a machine that executes computer software on a server160, client512, firewall, gateway, hub, router162, or other such computer and/or networking hardware. The software program may be associated with a server160that may include a file server, print server, domain server, internet server, intranet server and other variants such as secondary server, host server, distributed server and the like. The server160may include one or more of memories, processors, computer readable media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other servers160, clients512, machines, and devices through a wired or a wireless medium, and the like. The methods, programs or codes as described herein and elsewhere may be executed by the server160. 
In addition, other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the server160. The server160may provide an interface to other devices including, without limitation, clients512, other servers160, printers, database servers, print servers, file servers, communication servers, distributed servers and the like. Additionally, this coupling and/or connection may facilitate remote execution of programs across the network. The networking of some or all of these devices may facilitate parallel processing of a program or method at one or more locations without deviating from the scope of the invention. In addition, all the devices attached to the server160through an interface may include at least one storage medium capable of storing methods, programs, code and/or instructions. A central repository may provide program instructions to be executed on different devices. In this implementation, the remote repository may act as a storage medium for program code, instructions, and programs. The software program may be associated with a client512that may include a file client, print client, domain client, internet client, intranet client and other variants such as secondary client, host client, distributed client and the like. The client512may include one or more of memories, processors, computer readable media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other clients, servers160, machines, and devices through a wired or a wireless medium, and the like. The methods, programs or codes as described herein and elsewhere may be executed by the client512. In addition, other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the client512. 
The client512may provide an interface to other devices including, without limitation, servers160, other clients512, printers, database servers, print servers, file servers, communication servers, distributed servers and the like. Additionally, this coupling and/or connection may facilitate remote execution of programs across the network. The networking of some or all of these devices may facilitate parallel processing of a program or method at one or more locations without deviating from the scope of the invention. In addition, all the devices attached to the client512through an interface may include at least one storage medium capable of storing methods, programs, applications, code and/or instructions. A central repository may provide program instructions to be executed on different devices. In this implementation, the remote repository may act as a storage medium for program code, instructions, and programs. The methods and systems described herein may be deployed in part or in whole through network infrastructures. The network infrastructure may include elements such as computing devices, servers160, routers162, hubs, firewalls, clients512, personal computers, communication devices, routing devices and other active and passive devices, modules and/or components as known in the art. The computing and/or non-computing device(s) associated with the network infrastructure may include, apart from other components, a storage medium such as flash memory, buffer, stack, RAM, ROM and the like. The processes, methods, program codes, instructions described herein and elsewhere may be executed by one or more of the network infrastructural elements. The methods, program codes, and instructions described herein and elsewhere may be implemented on a cellular network having multiple cells. The cellular network may be either a frequency division multiple access (FDMA) network or a code division multiple access (CDMA) network. 
The cellular network may include mobile devices, cell sites, base stations, repeaters, antennas, towers, and the like. The methods, program codes, and instructions described herein and elsewhere may be implemented on or through mobile devices. The mobile devices may include navigation devices, cell phones, mobile phones, mobile personal digital assistants, laptops, palmtops, netbooks, pagers, electronic book readers, music players and the like. These devices may include, apart from other components, a storage medium such as a flash memory, buffer, RAM, ROM and one or more computing devices. The computing devices associated with mobile devices may be enabled to execute program codes, methods, and instructions stored thereon. Alternatively, the mobile devices may be configured to execute instructions in collaboration with other devices. The mobile devices may communicate with base stations interfaced with servers160and configured to execute program codes. The mobile devices may communicate on a peer to peer network, mesh network, or other communications network. The program code may be stored on the storage medium associated with the server160and executed by a computing device embedded within the server160. The base station may include a computing device and a storage medium. The storage device may store program codes and instructions executed by the computing devices associated with the base station. 
The computer software, program codes, and/or instructions may be stored and/or accessed on machine readable media that may include: computer components, devices, and recording media that retain digital data used for computing for some interval of time; semiconductor storage known as random access memory (RAM); mass storage typically for more permanent storage, such as optical discs, forms of magnetic storage like hard disks, tapes, drums, cards and other types; processor registers, cache memory, volatile memory, non-volatile memory; optical storage such as CD, DVD; removable media such as flash memory (e.g. USB sticks or keys), floppy disks, magnetic tape, paper tape, punch cards, standalone RAM disks, Zip drives, removable mass storage, off-line, and the like; other computer memory such as dynamic memory, static memory, read/write storage, mutable storage, read only, random access, sequential access, location addressable, file addressable, content addressable, network attached storage, storage area network, bar codes, magnetic ink, and the like. The methods and systems described herein may transform physical and/or intangible items from one state to another. The methods and systems described herein may also transform data representing physical and/or intangible items from one state to another. The elements described and depicted herein, including in flow charts and block diagrams throughout the figures, imply logical boundaries between the elements. However, according to software or hardware engineering practices, the depicted elements and the functions thereof may be implemented on machines through computer executable media having a processor capable of executing program instructions stored thereon as a monolithic software structure, as standalone software modules, or as modules that employ external routines, code, services, and so forth, or any combination of these, and all such implementations may be within the scope of the present disclosure. 
Examples of such machines may include, but may not be limited to, personal digital assistants, laptops, personal computers, mobile phones, other handheld computing devices, medical equipment, wired or wireless communication devices, transducers, chips, calculators, satellites, tablet PCs, electronic books, gadgets, electronic devices, devices having artificial intelligence, computing devices, networking equipment, servers160, routers162and the like. Furthermore, the elements depicted in the flow chart and block diagrams or any other logical component may be implemented on a machine capable of executing program instructions. Thus, while the foregoing drawings and descriptions set forth functional aspects of the disclosed systems, no particular arrangement of software for implementing these functional aspects should be inferred from these descriptions unless explicitly stated or otherwise clear from the context. Similarly, it will be appreciated that the various steps identified and described above may be varied, and that the order of steps may be adapted to particular applications of the techniques disclosed herein. All such variations and modifications are intended to fall within the scope of this disclosure. As such, the depiction and/or description of an order for various steps should not be understood to require a particular order of execution for those steps, unless required by a particular application, or explicitly stated or otherwise clear from the context. The methods and/or processes described above, and steps thereof, may be realized in hardware, software or any combination of hardware and software suitable for a particular application. The hardware may include a dedicated computing device or specific computing device or particular aspect or component of a specific computing device. 
The processes may be realized in one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors or other programmable device, along with internal and/or external memory. The processes may also, or instead, be embodied in an application specific integrated circuit, a programmable gate array, programmable array logic, or any other device or combination of devices that may be configured to process electronic signals. It will further be appreciated that one or more of the processes may be realized as a computer executable code capable of being executed on a machine readable medium. The computer executable code may be created using a structured programming language such as C, an object oriented programming language such as C++, or any other high-level or low-level programming language (including assembly languages, hardware description languages, and database programming languages and technologies) that may be stored, compiled or interpreted to run on one of the above devices, as well as heterogeneous combinations of processors, processor architectures, or combinations of different hardware and software, or any other machine capable of executing program instructions. Thus, in one aspect, each method described above and combinations thereof may be embodied in computer executable code that, when executing on one or more computing devices, performs the steps thereof. In another aspect, the methods may be embodied in systems that perform the steps thereof, and may be distributed across devices in a number of ways, or all of the functionality may be integrated into a dedicated, standalone device or other hardware. In another aspect, the means for performing the steps associated with the processes described above may include any of the hardware and/or software described above. All such permutations and combinations are intended to fall within the scope of the present disclosure. 
While the invention has been disclosed in connection with the preferred embodiments shown and described in detail, various modifications and improvements thereon will become readily apparent to those skilled in the art. Accordingly, the spirit and scope of the present invention is not to be limited by the foregoing examples, but is to be understood in the broadest sense allowable by law.
11943095
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS In general, application or network reactions are triggered in the NF consumer when analytics are received. When there are multiple NF consumers for the same analytics filter information and analytics target period, potentially multiple NF consumer reactions may be triggered in parallel in an uncoordinated way for the same event. In this disclosure, “coordination” of actions of NF consumers can be defined as the methods and tools that allow several NF consumers to cooperate towards a common goal. Uncoordinated reaction in multiple NF consumers is not always desirable, as the effect of such incoordination may not be foreseen, with potential undesired effects both in the network and in the application. The invention therefore addresses the issue of coordinating multiple NF consumers subscribing to the same analytics. The coordination of multiple NF consumers according to the invention is based on an inventive list of NF consumers subscribing to an analytics event and their respective notification status associated with the analytics event. The list of NF consumers is managed by a first network node and a second network node and enables the second network node to suppress an analytics event in notifications related to the analytics event for specific NF consumers. FIG.1shows a first network node100according to an embodiment of the invention. In the embodiment shown inFIG.1, the first network node100comprises a processor102, a transceiver104and a memory106. The processor102is coupled to the transceiver104and the memory106by communication means108known in the art. The first network node100may be configured for both wireless and wired communications in wireless and wired communication systems, respectively. 
The wireless communication capability may be provided with an antenna or antenna array110coupled to the transceiver104, while the wired communication capability may be provided with a wired communication interface112coupled to the transceiver104. That the first network node100is configured to perform certain actions can in this invention be understood to mean that the first network node100comprises suitable means, such as e.g. the processor102and the transceiver104, configured to perform said actions. According to embodiments of the invention the first network node100is configured to transmit a get list request message502to a second network node300, wherein the get list request message502indicates a request for a list of NF consumers subscribing to an analytics event. NF consumers subscribing to an analytics event can be understood to mean NF consumers which have subscribed to the analytics events from the second network node300. The first network node100is further configured to receive a get list response message504from the second network node300, wherein the get list response message504indicates a list of NF consumers and their respective notification status associated with the analytics event, wherein each NF consumer in the list is set to a first status in which the NF consumer receives a first notification type indicating an action is needed to be performed by the NF consumer in respect of the analytics event or to a second status in which the NF consumer receives a second notification type inhibiting an action to be performed by the NF consumer in respect of the analytics event. The order of the NF consumers in the list of NF consumers corresponds to the order in which the second network node300transmits notifications to the NF consumers. FIG.2shows a flow chart of a corresponding method200which may be executed in a first network node100, such as the one shown inFIG.1. 
The method200comprises transmitting202a get list request message502to a second network node300, wherein the get list request message502indicates a request for a list of NF consumers subscribing to an analytics event. The method200further comprises receiving204a get list response message504from the second network node300, wherein the get list response message504indicates a list of NF consumers and their respective notification status associated with the analytics event, wherein each NF consumer in the list is set to a first status in which the NF consumer receives a first notification type indicating an action is needed to be performed by the NF consumer in respect of the analytics event or to a second status in which the NF consumer receives a second notification type inhibiting an action to be performed by the NF consumer in respect of the analytics event. FIG.3shows a second network node300according to an embodiment of the invention. In the embodiment shown inFIG.3, the second network node300comprises a processor302, a transceiver304and a memory306. The processor302is coupled to the transceiver304and the memory306by communication means308known in the art. The second network node300may be configured for both wireless and wired communications in wireless and wired communication systems, respectively. The wireless communication capability may be provided with an antenna or antenna array310coupled to the transceiver304, while the wired communication capability may be provided with a wired communication interface312coupled to the transceiver304. That the second network node300is configured to perform certain actions can in this invention be understood to mean that the second network node300comprises suitable means, such as e.g. the processor302and the transceiver304, configured to perform said actions. 
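The ordered list of NF consumers and their two possible notification statuses described above might be represented as follows. This is a sketch under assumed data shapes: the status values and notification-type names are illustrative, not terms from the specification.

```python
FIRST_STATUS = "first"    # consumer receives a notification indicating action is needed
SECOND_STATUS = "second"  # consumer receives a notification inhibiting action

def notifications_for(consumer_list, event):
    """Given the ordered list of (NF consumer, status) pairs, return the
    notification each consumer receives for an analytics event. The list
    order is the order in which notifications are transmitted."""
    out = []
    for consumer, status in consumer_list:
        kind = "action-needed" if status == FIRST_STATUS else "action-inhibited"
        out.append((consumer, kind, event))
    return out
```

The get list response message would carry exactly this kind of ordered (consumer, status) information from the second network node to the first.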
According to embodiments of the invention the second network node300is configured to receive a get list request message502from a first network node100, wherein the get list request message502indicates a request for a list of NF consumers subscribing to an analytics event. NF consumers subscribing to an analytics event can be understood to mean NF consumers which have subscribed to the analytics events from the second network node300. The second network node300is further configured to determine a list of NF consumers and their respective notification status associated with the subscribed analytics event, wherein each NF consumer in the list is set to a first status in which the NF consumer receives a first notification type indicating an action is needed to be performed by the NF consumer in respect of the analytics event or to a second status in which the NF consumer receives a second notification type inhibiting an action to be performed by the NF consumer in respect of the analytics event. Furthermore, the second network node300is configured to transmit a get list response message504to the first network node100, wherein the get list response message504indicates the determined list of NF consumers and their respective notification status associated with the subscribed analytics event. FIG.4shows a flow chart of a corresponding method400which may be executed in a second network node300, such as the one shown inFIG.3. The method400comprises receiving402a get list request message502from a first network node100, wherein the get list request message502indicates a request for a list of NF consumers subscribing to an analytics event. 
The method400further comprises determining404a list of NF consumers and their respective notification status associated with the subscribed analytics event, wherein each NF consumer in the list is set to a first status in which the NF consumer receives a first notification type indicating an action is needed to be performed by the NF consumer in respect of the analytics event or to a second status in which the NF consumer receives a second notification type inhibiting an action to be performed by the NF consumer in respect of the analytics event. Furthermore, the method400comprises transmitting406a get list response message504to the first network node100, wherein the get list response message504indicates the determined list of NF consumers and their respective notification status associated with the subscribed analytics event. FIG.5shows a communication system500according to embodiments of the invention. The communication system500comprises the first network node100and the second network node300configured to communicate with each other. The first network node100and the second network node300may e.g. exchange information related to NF consumers and their notification status with each other. The second network node300is further configured to provide network data analytics to NF consumers such as e.g. QoS sustainability analytics. In the embodiment shown inFIG.5, the communication system500further comprises a first NF consumer320, a second NF consumer340, and a third NF consumer360. The NF consumers320,340,360may request and/or subscribe to the same network data analytics or different network data analytics. In embodiments, the first network node100may be a policy control function (PCF) and the second network node300may be a network data analytics function (NWDAF) as defined in the 3GPP standard. In this case, the NF consumers may e.g. 
be session management functions (SMFs), access and mobility management functions (AMFs), policy control functions (PCFs), application functions (AFs), radio access networks (RANs), or user equipments (UEs) as defined in the 3GPP standard. Some of those NF consumers may not yet have a specified interface to the NWDAF. For example, the RAN currently does not have a reference point to the NWDAF for requesting or subscribing to analytics. Furthermore, some NF consumers such as the AF may not have a direct interface to the NWDAF. The AF may instead request or subscribe to analytics via a network exposure function (NEF). When the first network node 100 is a PCF and the second network node 300 is a NWDAF, there is no need for any additional interface/reference point for the solution according to the invention. The relevant services on the first network node 100 can be exposed via the existing Npcf reference point for the PCF and the services on the second network node 300 can be exposed via the existing Nnwdaf reference point for the NWDAF. The NF consumers may subscribe to one or more analytics events from the second network node 300. An analytics event is an event that triggers the second network node 300 to send a notification to NF consumers that have subscribed to the analytics event. FIG. 6 shows an example of triggering of analytics events for QoS sustainability analytics. The analytics event is related to a threshold value for a QoS key performance indicator (KPI). An analytics event is triggered for every specific interval in which the QoS KPI crosses the threshold value in a predefined critical direction. The critical direction may depend on the QoS KPI. For example, for throughput the critical direction is when the QoS KPI is lower than the threshold value, while for latency the critical direction is when the latency is higher than the threshold value. Over an analytics target period there may be several analytics events for a specific threshold.
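The threshold-crossing logic described above can be sketched as follows. This is a minimal illustration, not part of the claimed embodiments: the KPI names, the sample representation, and the one-event-per-crossing-interval behaviour are assumptions made for the sketch.

```python
def critical_crossing(kpi_name: str, value: float, threshold: float) -> bool:
    """Return True if `value` is past `threshold` in the critical direction.

    For throughput the critical direction is below the threshold;
    for latency it is above the threshold.
    """
    if kpi_name == "throughput":
        return value < threshold
    if kpi_name == "latency":
        return value > threshold
    raise ValueError(f"unknown KPI: {kpi_name}")


def detect_events(kpi_name: str, samples: list, threshold: float) -> list:
    """Return the time instances (sample indices) at which an analytics
    event is triggered, i.e. the KPI enters the critical region."""
    in_critical = False
    events = []
    for t, value in enumerate(samples):
        crossed = critical_crossing(kpi_name, value, threshold)
        if crossed and not in_critical:
            events.append(t)  # one event per interval in the critical region
        in_critical = crossed
    return events
```

For a predicted throughput of [10, 4, 3, 8, 2] against threshold 5, two events are triggered (at indices 1 and 4), mirroring the two events at t1 and t2 in FIG. 6.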
With reference to FIG. 6, the predicted QoS KPI (illustrated with a dashed line) is compared to a first threshold Th1 during an analytics target period Ta and an analytics event is triggered when the predicted QoS KPI is lower than the first threshold Th1. Thus, two analytics events are triggered during the analytics target period Ta, one at a first time instance t1 and one at a second time instance t2. According to embodiments of the invention procedures for suppressing an analytics event in the notification related to the analytics event for specific NF consumers are introduced. The procedures are performed between the first network node 100 and the second network node 300, where the first network node 100 may be seen as a controlling function and the second network node 300 may be seen as a controlled function. The procedures are based on a list of NF consumers subscribing to an analytics event and their respective notification status associated with the analytics event. Depending on the order and status of the NF consumers in the list of NF consumers, the second network node 300 may transmit or suppress an analytics event in notifications associated with the analytics event to the NF consumers. FIG. 7 shows a flow chart of a method 700 according to embodiments of the invention. The method 700 may be performed in the second network node 300 with a list of NF consumers subscribing to an analytics event and their respective notification status associated with the analytics event. In step 702, the second network node 300 detects an analytics event. Upon detecting the analytics event, the second network node 300 checks in step 704 the notification status associated with the analytics event for the NF consumer from the list of NF consumers.
The notification status may be a first status in which the NF consumer receives a first notification type indicating an action is needed to be performed by the NF consumer in respect of the analytics event, or a second status in which the NF consumer receives a second notification type inhibiting an action to be performed by the NF consumer in respect of the analytics event. If the status of the NF consumer is the first status, the second network node 300 transmits the first notification type to the NF consumer in step 706. On the other hand, if the status of the NF consumer is the second status, the second network node 300 transmits the second notification type to the NF consumer in step 708. The method 700 is repeated for each NF consumer in the list of NF consumers. Thus, the second network node 300 transmits the first notifications and the second notifications to the NF consumers according to the list of NF consumers. In this way, the NF consumers set to the first status in the list of NF consumers receive the first notification type indicating an action is needed to be performed by the NF consumer in respect of the analytics event, while the NF consumers set to the second status in the list of NF consumers receive the second notification type inhibiting an action to be performed by the NF consumer in respect of the analytics event. In embodiments, the first notification type may indicate the analytics event and the second notification type may not indicate the analytics event, i.e. the analytics event is suppressed in the second notification type. Both the first notification type and the second notification type may further comprise additional information such as e.g. analytics information as defined in the 3GPP standard. According to embodiments of the invention the NF consumers in the list may further be set to a third status. Each NF consumer in the list may hence be set to the first status, the second status, or the third status.
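The per-consumer dispatch of method 700 can be sketched as a loop over the list of NF consumers. This is a hedged illustration: the status labels, tuple layout, and notification records are assumptions made for the sketch, not a definitive implementation of the embodiments.

```python
FIRST_STATUS = "first"    # action needed; analytics event is indicated
SECOND_STATUS = "second"  # action inhibited; analytics event is suppressed


def notify_consumers(consumer_list: list, event: str) -> list:
    """For one detected analytics event, return the notifications sent,
    one per NF consumer, in list order.

    consumer_list: list of (consumer_id, status) tuples.
    """
    notifications = []
    for consumer, status in consumer_list:
        if status == FIRST_STATUS:
            # first notification type: the analytics event is indicated
            notifications.append((consumer, "N1", event))
        else:
            # second notification type: the event is suppressed (None);
            # additional analytics information could still be included
            notifications.append((consumer, "N2", None))
    return notifications
```

A consumer set to the first status thus sees the event, while a consumer set to the second status receives a notification with the event withheld.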
In the third status the NF consumer may receive the first notification type and may perform or initiate an action upon an acknowledgement of an action performed by a previous NF consumer in the list of NF consumers. The acknowledgement of the action performed by the previous NF consumer may be indicated in a message received by the second network node 300 from the previous NF consumer. With the third status a sequential and conditional order of actions among the NF consumers can be achieved. For example, a NF consumer set to the third status may receive the first notification type if a previous NF consumer provides an acknowledgement and may receive the second notification type if the previous NF consumer provides no acknowledgement or a negative acknowledgement. In embodiments, the NF consumers may be ordered in a chronological order in the list of NF consumers for receiving notifications. The sequential order of notifications and actions may hence be determined according to the chronological order in the list of NF consumers. In contrast, the first status may be seen as a parallel mode, i.e. the first type of notifications is sent to all the NF consumers which are set to the first status at essentially the same time. FIG. 8 shows signaling between the second network node 300, a first NF consumer 320, and a second NF consumer 340 according to embodiments where the first NF consumer 320 is set to the first status and the second NF consumer 340 is set to the third status. In steps Ia and Ib in FIG. 8, the first NF consumer 320 and the second NF consumer 340 subscribe to an analytics event, respectively. The first NF consumer 320 and the second NF consumer 340 subscribe to the same analytics event. In step II in FIG. 8, the second network node 300 derives the analytics and monitors for the analytics event. Upon detecting an analytics event, the second network node 300 transmits a first notification N1 to the first NF consumer 320, as shown in step III in FIG. 8.
The second network node 300 transmits the first notification N1 based on the status of the first NF consumer in the list of NF consumers, i.e. based on that the first NF consumer 320 is set to the first status. The first notification N1 indicates that an action is needed to be performed by the first NF consumer 320 in respect of the analytics event. The first NF consumer 320 receives the first notification N1 indicating the action that the first NF consumer 320 needs to perform. Based on the correct reception and processing by the first NF consumer 320 of the first notification N1, the first NF consumer 320 transmits an acknowledgement ACK to the second network node 300 in step IV. Upon receiving the acknowledgement ACK from the first NF consumer 320, the second network node 300 checks the content of the acknowledgement ACK and transmits a first notification N1 to the second NF consumer 340, as shown in step V in FIG. 8. The second network node 300 transmits the first notification N1 based on the status of the second NF consumer 340 in the list of NF consumers, i.e. based on that the second NF consumer 340 is set to the third status, and further based on the acknowledgement ACK from the first NF consumer 320. If the acknowledgement ACK indicates that an action is needed by a further NF consumer, the first notification N1 may be of the first notification type and hence indicates that an action is needed to be performed by the second NF consumer 340 in respect of the analytics event. If the acknowledgement ACK indicates that no action is needed by any further NF consumers, the first notification N1 may be of the second notification type and hence inhibits an action to be performed by the NF consumer in respect of the analytics event.
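The acknowledgement-gated sequence of FIG. 8 can be sketched as follows. This is a simplified illustration under stated assumptions: the acknowledgement is represented as a string returned by a callback, and a third-status consumer only receives an action-indicating notification when the previous acknowledgement indicates that a further action is needed.

```python
def notify_with_ack_chain(consumers: list, get_ack) -> list:
    """Walk the ordered list of (name, status) consumers for one event.

    get_ack(name) models the ACK from a notified consumer and returns
    "action_needed", "no_action", or None (no/negative acknowledgement).
    Returns the list of (name, notification_type) actually sent.
    """
    sent = []
    prev_ack = None
    for name, status in consumers:
        if status == "first":
            # first status: notified unconditionally with the first type
            sent.append((name, "first_type"))
            prev_ack = get_ack(name)
        elif status == "third":
            # third status: conditional on the previous consumer's ACK
            if prev_ack == "action_needed":
                sent.append((name, "first_type"))
                prev_ack = get_ack(name)
            else:
                sent.append((name, "second_type"))
        else:
            # second status: the event is always suppressed
            sent.append((name, "second_type"))
    return sent
```

With a first-status consumer A acknowledging "action_needed", a third-status consumer B receives the first notification type; if A reports that no further action is needed, B's notification inhibits the action instead.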
In the same way as for the first NF consumer 320, the second NF consumer 340 receives the first notification N1 and transmits an acknowledgement ACK to the second network node 300 based on the correct reception and processing by the second NF consumer 340 of the first notification N1, as shown in step VI in FIG. 8. FIG. 9 shows notifications related to QoS sustainability analytics from the second network node 300 to a first NF consumer 320 and a second NF consumer 340 according to embodiments of the invention. The first NF consumer 320 and the second NF consumer 340 have subscribed to the second network node 300 for the same analytics event. In the embodiment shown in FIG. 9, the analytics event is based on a single threshold Th related to a QoS KPI, where an analytics event is detected when the QoS KPI is predicted to be below the threshold Th. However, only the first NF consumer 320 is supposed to react to the analytics event. Therefore, the first NF consumer 320 is set to the first status, while the second NF consumer 340 is set to the second status. The status and the order of the NF consumers in the list of NF consumers may be set by the first network node 100, e.g. using the signaling described with reference to FIG. 13, or by the second network node 300 according to a policy provided by the first network node 100. With reference to FIG. 9, the QoS KPI is monitored or predicted during an analytics target period Ta. At a first time instance t1, the second network node 300 determines that the QoS KPI will be below the threshold Th. Based on the statuses of the NF consumers 320, 340 in the list of NF consumers, the second network node 300 notifies the first NF consumer 320 about the detected analytics event at the first time instance t1 but suppresses the detected analytics event in the notification to the second NF consumer 340, as illustrated in FIG. 9 where the QoS KPI reported to the first NF consumer 320 and the second NF consumer 340, respectively, is shown.
The second network node 300 may hence send analytics such as the predicted QoS KPI to both the NF consumers 320, 340 but the notification of the detected analytics event only to the first NF consumer 320. FIG. 10 shows notifications related to QoS sustainability analytics from the second network node 300 to a first NF consumer 320 and a second NF consumer 340 according to embodiments of the invention where multiple thresholds are used. A QoS KPI is monitored or predicted against a first threshold Th1, a second threshold Th2, and a third threshold Th3 during an analytics target period Ta. The third threshold Th3 is further configured as the standard threshold. In FIG. 10, an analytics event is detected when the QoS KPI is detected or predicted to cross any of the defined thresholds (Th1, Th2 or Th3) in the critical direction during the analytics target period (or time interval) Ta. Thus, an analytics event is detected at a first time instance t1 in FIG. 10. The second network node 300 transmits a notification of the analytics event depending on the status of the NF consumers in the list of NF consumers. If the status of the NF consumer is set to the first status, the second network node 300 sends the first notification type to the NF consumer in which the analytics event is not suppressed. Therefore, the QoS KPI is reported to the NF consumer to be below the third threshold Th3 and above the second threshold Th2, as it is actually predicted or detected by the second network node 300. If the status of the NF consumer is set to the second status, the analytics event is suppressed in the notification that is sent to the NF consumer by the second network node 300, i.e. the second network node 300 sends the second notification type to the NF consumer. Therefore, the QoS KPI is reported to be within the third threshold Th3 (in the critical direction), which is configured as the standard threshold.
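The reporting behaviour of FIG. 10 can be sketched as follows for a lower-is-critical KPI such as throughput. This is an illustrative assumption, not the claimed implementation: suppression is modelled as clamping the reported KPI so that it stays within the standard threshold in the critical direction.

```python
def reported_kpi(predicted: float, standard_threshold: float, status: str) -> float:
    """Return the KPI value reported to an NF consumer.

    For a lower-is-critical KPI (e.g. throughput):
    - first status: the actual prediction is reported, so the consumer
      can see the event (the KPI dipping below the standard threshold);
    - second status: the event is suppressed by reporting the KPI as
      still within the standard threshold.
    """
    if status == "first":
        return predicted
    # second status: clamp the report at the standard threshold
    return max(predicted, standard_threshold)
```

With a standard threshold Th3 of 10.0 and a predicted KPI of 7.0, a first-status consumer is told 7.0 (event visible) while a second-status consumer is told 10.0 (event suppressed).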
Procedures for establishing and updating the list of NF consumers between the first network node 100 and the second network node 300 are introduced. FIG. 11 shows signaling between the first network node 100 and the second network node 300 for establishing a list of NF consumers according to embodiments of the invention. In step I in FIG. 11, the first network node 100 transmits a get list request message 502 to a second network node 300. The get list request message 502 indicates a request for a list of NF consumers subscribing to an analytics event. The analytics event may be indicated with an analytics ID, analytics filter information, and/or other information such as e.g. parameters associated with an analytics subscription. The analytics ID may indicate the type of analytics. In case of QoS sustainability analytics, the analytics filter information may indicate information such as e.g. any one or more of QoS requirements, location information, observation period, threshold(s), and single network slice selection assistance information (S-NSSAI). The second network node 300 receives the get list request message 502 indicating the request for the list of NF consumers subscribing to an analytics event from the first network node 100. Based on the get list request message 502, the second network node 300 determines a list of NF consumers and their respective notification status associated with the subscribed analytics event in step II in FIG. 11. As previously described, each NF consumer in the list may be set to the first status, the second status, or the third status. The NF consumers in the list may further be ordered in a chronological order for receiving notifications. The second network node 300 may further include a unique identifier for each NF consumer in the list. The unique identifier may be used by the first network node 100 to determine both the type of NF consumer and also the specific NF instance.
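The content of the get list exchange can be sketched with illustrative data structures. The field names below are assumptions chosen for the sketch; the description only specifies that the request carries an analytics ID and analytics filter information, and that the response carries an ordered list of NF consumers, each with a unique identifier and a notification status.

```python
from dataclasses import dataclass, field


@dataclass
class GetListRequest:      # models get list request message 502
    analytics_id: str      # type of analytics, e.g. QoS sustainability
    analytics_filter: dict  # e.g. location, observation period, threshold(s)


@dataclass
class ConsumerEntry:
    nf_id: str             # unique identifier (NF type + specific instance)
    status: str            # "first", "second" or "third"


@dataclass
class GetListResponse:     # models get list response message 504
    analytics_id: str
    consumers: list = field(default_factory=list)  # chronological order
```

The first network node can then read the order of the consumers directly from `consumers` and their statuses from the individual entries.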
In step III in FIG. 11, the second network node 300 transmits a get list response message 504 to the first network node 100. The get list response message 504 indicates the determined list of NF consumers and their respective notification status associated with the subscribed analytics event. The first network node 100 receives the get list response message 504 from the second network node 300 and may hence obtain the list of NF consumers and their respective notification status associated with the analytics event indicated in the get list response message 504. From the list of NF consumers the first network node 100 may obtain the order in which the NF consumers receive the notifications according to their respective status. In embodiments the list of NF consumers may comprise NF consumers who have subscribed to the second network node 300 with different analytics filter information but with overlapping analytics event sets. An event set may be defined as a class of analytics events that may originate during the lifetime of an analytics subscription of an NF consumer. FIG. 12 shows two such overlapping analytics event sets, a first event set A and a second event set B. Three NF consumers have subscribed to QoS sustainability analytics with the same QoS requirements, S-NSSAI, and threshold(s), but with different locations and observation periods. Two NF consumers have subscribed to the first event set A and one NF consumer has subscribed to the second event set B. The first event set A is associated with a first location L1 and a first observation period T1 and the second event set B is associated with a second location L2 and a second observation period T2. With reference to FIG. 12, the second location L2 overlaps the first location L1 and the second observation period T2 overlaps the first observation period T1 such that the first event set A is a subset of the second event set B. Table 1 shows further details related to the overlapping first event set A and second event set B.
TABLE 1

Event set | Observation period            | Location                           | NF Consumers
A         | T1                            | L1                                 | 2
B         | T2 (excluding interval of T1) | L1                                 | 1
B         | T2 (excluding interval of T1) | L2 (excluding region marked as L1) | 1
B         | T1                            | L2 (excluding region marked as L1) | 1

According to embodiments of the invention the first network node 100 may modify the list of NF consumers for a specific analytics event, e.g. identified with a specific set of analytics filter information. The first network node 100 may for each NF consumer change the status and/or the order in the list of NF consumers. However, the first network node 100 may in embodiments not remove and/or add NF consumers from and/or to the list of NF consumers, as this should be done only via the subscription mechanism between the second network node 300 and the NF consumers. FIG. 13 shows signaling between the first network node 100 and the second network node 300 for modifying a list of NF consumers according to embodiments of the invention. In step I in FIG. 13, the first network node 100 transmits a set list request message 506 to the second network node 300. The set list request message 506 may indicate a chronological order of NF consumers in the list for receiving notifications and their respective status. The set list request message 506 may further indicate the status of the NF consumers in the list. As for the get list request message 502, the set list request message 506 may indicate the analytics event with an analytics ID and/or analytics filter information which allows the second network node 300 to identify the analytics event. In embodiments, the first network node 100 may compile the list of NF consumers based on a list of NF consumers and their respective notification status associated with the analytics event indicated in a get list response message 504 previously received from the second network node 300. The NF consumers in the set list may in this case be identified using identifiers that were retrieved from the get list response message 504.
The second network node 300 receives the set list request message 506 from the first network node 100 and hence may obtain the chronological order of NF consumers for receiving notifications and their respective status indicated in the set list request message 506. In step II in FIG. 13, the second network node 300 determines the list of NF consumers based on the set list request message 506 and further determines an acknowledgement or a negative acknowledgement response based on the determined list of NF consumers. The second network node 300 may further determine the list of NF consumers based on the subscriptions received from the NF consumers. The second network node 300 may determine a negative acknowledgement if the list of NF consumers received in the set list request message 506 is invalid, e.g. if the list of NF consumers received in the set list request message 506 does not match the list of NF consumers in the second network node 300. A potential reason for such a mismatch could be that one or more of the NF consumers in the list provided by the first network node 100 has cancelled its subscription, or new NF consumers had subscribed for the same or overlapping analytics event before the first network node 100 transmitted the set list request message 506. In step III in FIG. 13, the second network node 300 transmits a set list response message 508 to the first network node 100. The set list response message 508 indicates the acknowledgement or the negative acknowledgement response to the set list request message 506. Thus, the first network node 100 receives the set list response message 508 from the second network node 300, wherein the set list response message 508 indicates an acknowledgement or a negative acknowledgement response to the set list request message 506. If the first network node 100 receives the negative acknowledgement response, the first network node 100 may update the NF consumers list from the second network node 300.
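The validation in step II of FIG. 13 can be sketched as a comparison between the proposed list and the current subscribers. Comparing the two lists by identifier set is an assumption made for the sketch; the description only requires a negative acknowledgement when the received list does not match the list held by the second network node.

```python
def validate_set_list(proposed_ids: list, subscribed_ids: list) -> str:
    """Return 'ACK' if the proposed list contains exactly the NF consumers
    that currently hold a subscription, 'NACK' otherwise (e.g. because a
    consumer cancelled its subscription, or a new consumer subscribed,
    after the get list response was sent)."""
    if set(proposed_ids) == set(subscribed_ids):
        return "ACK"
    return "NACK"
```

On a 'NACK', the first network node would re-fetch the list with a new get list request before retrying the set list request.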
According to embodiments of the invention a policy is used which allows the second network node 300 to update the list of NF consumers automatically. The policy is associated with a list of NF consumers and may be a set of rules that the second network node 300 uses to set the order of the NF consumers in the list and their respective notification status. Table 2 shows an example of a structure of a policy associated with a list of NF consumers.

TABLE 2

Information name            | Description                                                            | Category  | The first network node 100 permitted to modify in an event set | Scope
NF consumer type precedence | Determines the order of the NF consumer in the list of NF consumers.   | Mandatory | Yes                                                            | Event set
NF consumer type status     | Determines the status of the NF consumer in the list of NF consumers.  | Mandatory | Yes                                                            | Event set
Time window                 | The time window where the policy is applicable.                        | Optional  | Yes                                                            | Event set
Location Criteria           | The location where the policy is applicable.                           | Optional  | Yes                                                            | Event set

FIG. 14 shows signaling between the first network node 100 and the second network node 300 for establishing a policy in the second network node 300 according to embodiments of the invention. In the embodiment shown in FIG. 14, the second network node 300 initiates the procedure for establishing the policy. In step I in FIG. 14, the second network node 300 determines to establish a policy associated with a list of NF consumers. The second network node 300 may determine to establish the policy e.g. if the second network node 300 has not yet obtained a policy for the list of NF consumers or the policy for the list of NF consumers is no longer valid. The determination may be based on local policies in the second network node 300. In step II in FIG. 14, the second network node 300 transmits a policy request message 510 to the first network node 100, wherein the policy request message 510 indicates a request for a policy associated with the list of NF consumers. In embodiments, the policy request message 510 may include at least one of analytics ID and analytics filter information.
In this way, specific policies for an analytics ID or for a specific analytics event set according to the supplied analytics filter information can be requested. If the policy request message 510 does not contain analytics ID and analytics filter information, a system wide policy may be requested. The first network node 100 receives the policy request message 510 from the second network node 300, wherein the policy request message 510 indicates the request for a policy associated with the list of NF consumers. Based on the received policy request message 510, the first network node 100 establishes a policy associated with the list of NF consumers in step III in FIG. 14. In step IV in FIG. 14, the first network node 100 transmits a policy response message 512 to the second network node 300, wherein the policy response message 512 indicates the established policy associated with the list of NF consumers. The policy response message 512 may comprise a policy association ID and policy information associated with the list of NF consumers. The second network node 300 receives the policy response message 512 from the first network node 100, wherein the policy response message 512 indicates the policy associated with the list of NF consumers. In step V in FIG. 14, the second network node 300 determines the list of NF consumers and their respective notification status based on the policy associated with the list of NF consumers. The second network node 300 may further consider the subscriptions received from the NF consumers when determining the list of NF consumers and their respective notification status. In embodiments, an established policy associated with a list of NF consumers may be updated using an update mechanism. The update mechanism may be initiated by the first network node 100 or the second network node 300. FIG. 15 shows signaling between the first network node 100 and the second network node 300 for updating of a policy, when the updating of the policy is initiated by the first network node 100.
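Step V of FIG. 14, where the policy is combined with the current subscriptions to produce the ordered list, can be sketched as follows. The policy shape mirrors Table 2 (per-NF-consumer-type precedence and status) but its concrete representation, and the fallback for unknown NF consumer types, are assumptions made for the sketch.

```python
def build_consumer_list(subscriptions: list, policy: dict) -> list:
    """Apply a policy to the current subscriptions.

    subscriptions: list of (nf_id, nf_type) for subscribed NF consumers.
    policy: {nf_type: (precedence, status)}; lower precedence value means
            earlier in the list (Table 2: "NF consumer type precedence"
            and "NF consumer type status").
    Returns [(nf_id, status)] ordered by precedence.
    """
    # assumption: unknown NF consumer types go last with the event suppressed
    default = (999, "second")
    ordered = sorted(subscriptions,
                     key=lambda sub: policy.get(sub[1], default)[0])
    return [(nf_id, policy.get(nf_type, default)[1])
            for nf_id, nf_type in ordered]
```

This lets the second network node rebuild the list automatically whenever subscriptions change, without a new set list request from the first network node.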
In step I in FIG. 15, the first network node 100 determines to update a policy associated with a list of NF consumers. The first network node 100 may determine to update the policy based on a local decision that the policy should be modified. For example, the first network node 100 may determine to update the policy because the operator has defined new information that may trigger a policy update or because of internal triggering in the first network node 100. The update of the policy may e.g. update rules to determine the relative priority/order of NF consumers in receiving the analytics notification, as well as rules to determine to which status the NF consumers can be set. In step II in FIG. 15, the first network node 100 transmits a policy update message 514 to the second network node 300, wherein the policy update message 514 indicates the updated policy associated with the list of NF consumers. The policy update message 514 may indicate the updated policy with a policy association ID and may further indicate policy information associated with the list of NF consumers. The second network node 300 receives the policy update message 514 from the first network node 100, wherein the policy update message 514 indicates the updated policy. In step III in FIG. 15, the second network node 300 determines the list of NF consumers and their respective notification status based on the updated policy. The second network node 300 may further consider the subscriptions received from the NF consumers when determining the list of NF consumers and their respective notification status. The policy may refer to one or more analytics events and the policy update may trigger updating of all the lists in all the impacted analytics events. FIG. 16 shows signaling between the first network node 100 and the second network node 300 for updating of a policy, when the updating of the policy is initiated by the second network node 300.
In step I in FIG. 16, the second network node 300 determines to update a policy associated with a list of NF consumers. The second network node 300 may determine to update the policy based on a local decision that the policy should be modified. For example, the second network node 300 may determine to update the policy because a new type of NF consumer subscribes to a specific analytics event and the second network node 300 needs to know the priority for serving the new type of NF consumer in relation to other NF consumers, as well as whether the relevant analytics events shall be suppressed or not for the new type of NF consumer for proper coordination. In step II in FIG. 16, the second network node 300 transmits a policy update request message 516 to the first network node 100, wherein the policy update request message 516 indicates a request for an update of the policy associated with the list of NF consumers. The policy update request message 516 may indicate the policy which the second network node 300 requests to be updated with a policy association ID. The first network node 100 receives the policy update request message 516 from the second network node 300, wherein the policy update request message 516 indicates the request for the policy update. Based on the received policy update request message 516, the first network node 100 updates the policy associated with the list of NF consumers, in step III in FIG. 16. The update of the policy may e.g. trigger a change in the priority of different types of NF consumers or their status in the list of NF consumers. In step IV in FIG. 16, the first network node 100 transmits a policy update response message 518 to the second network node 300, wherein the policy update response message 518 indicates the updated policy associated with the list of NF consumers.
The second network node 300 receives the policy update response message 518 from the first network node 100, wherein the policy update response message 518 indicates the updated policy associated with the list of NF consumers. In step V in FIG. 16, the second network node 300 deploys the policy. This may trigger updating of the list of NF consumers and their respective notification status based on the updated policy. The second network node 300 may further consider the subscriptions received from the NF consumers when determining the list of NF consumers and their respective notification status. According to embodiments of the invention an established policy associated with a list of NF consumers may further be terminated. FIG. 17 shows signaling between the first network node 100 and the second network node 300 for termination of a policy. In step I in FIG. 17, the second network node 300 determines to terminate a policy associated with a list of NF consumers. The second network node 300 may determine to terminate the policy e.g. based on that the first network node 100 intends to create a new policy, that some analytics events or a NF consumer type are no longer supported, or that the analytics service is no longer provided. In step II in FIG. 17, the second network node 300 transmits a terminate policy request message 520 to the first network node 100, wherein the terminate policy request message 520 indicates termination of the policy associated with the list of NF consumers. The terminate policy request message 520 may indicate the policy which the second network node 300 requests to be terminated with a policy association ID. The first network node 100 receives the terminate policy request message 520 from the second network node 300, wherein the terminate policy request message 520 indicates termination of the policy associated with the list of NF consumers.
Based on the received terminate policy request message 520, the first network node 100 terminates the policy associated with the list of NF consumers in step III in FIG. 17. The termination may include the first network node 100 removing the policy association with the analytics event. In step IV in FIG. 17, the first network node 100 transmits a terminate policy response message 522 to the second network node 300, wherein the terminate policy response message 522 indicates the termination of the policy associated with the list of NF consumers. The terminate policy response message 522 may e.g. be an acknowledgment indicating that the termination of the policy in the first network node 100 has been successful. The second network node 300 receives the terminate policy response message 522 from the first network node 100, wherein the terminate policy response message 522 indicates termination of a policy associated with the list of NF consumers. Based on the received terminate policy response message 522, the second network node 300 terminates the policy associated with the list of NF consumers, in step V in FIG. 17. The termination may include the second network node 300 removing the policy association for the relevant analytics events and further deleting the subscription to detected analytics events requested for that policy association. The first network node 100 herein may be denoted as a policy control function (PCF) or a policy and charging rules function (PCRF). The PCF or PCRF may be a function configured for communication in 3GPP related LTE and LTE-Advanced, in WiMAX and its evolution, and in fifth generation wireless technologies, such as new radio (NR). The second network node 300 herein may be denoted as a network data analytics function (NWDAF). The NWDAF may be a function configured for communication in 3GPP related LTE and LTE-Advanced, in WiMAX and its evolution, and in fifth generation wireless technologies, such as new radio (NR).
The NF consumer herein may be denoted as a session management function (SMF), an access and mobility management function (AMF), a policy control function (PCF), an application function (AF), a radio access network (RAN), or a user equipment (UE). These functions or nodes may be configured for communication in 3GPP related LTE and LTE-Advanced, in WiMAX and its evolution, and in fifth generation wireless technologies, such as new radio (NR). Furthermore, any method according to embodiments of the invention may be implemented in a computer program, having code means, which when run by processing means causes the processing means to execute the steps of the method. The computer program is included in a computer readable medium of a computer program product. The computer readable medium may comprise essentially any memory, such as a ROM (Read-Only Memory), a PROM (Programmable Read-Only Memory), an EPROM (Erasable PROM), a Flash memory, an EEPROM (Electrically Erasable PROM), or a hard disk drive. Moreover, it is realized by the skilled person that embodiments of the first network node100and the second network node300comprise the necessary communication capabilities in the form of e.g., functions, means, units, elements, etc., for performing the solution. Examples of other such means, units, elements and functions comprise: processors, memory, buffers, control logic, encoders, decoders, rate matchers, de-rate matchers, mapping units, multipliers, decision units, selecting units, switches, interleavers, de-interleavers, modulators, demodulators, inputs, outputs, antennas, amplifiers, receiver units, transmitter units, DSPs, MSDs, TCM encoder, TCM decoder, power supply units, power feeders, communication interfaces, communication protocols, etc. which are suitably arranged together for performing the solution. 
Especially, the processor(s) of the first network node100and the second network node300may comprise, e.g., one or more instances of a Central Processing Unit (CPU), a processing unit, a processing circuit, a processor, an Application Specific Integrated Circuit (ASIC), a microprocessor, or other processing logic that may interpret and execute instructions. The expression “processor” may thus represent a processing circuitry comprising a plurality of processing circuits, such as, e.g., any, some or all of the ones mentioned above. The processing circuitry may further perform data processing functions for inputting, outputting, and processing of data comprising data buffering and device control functions, such as call processing control, user interface control, or the like. Finally, it should be understood that the invention is not limited to the embodiments described above, but also relates to and incorporates all embodiments within the scope of the appended independent claims.
The figures herein are for illustrative purposes only and are not necessarily drawn to scale. These and other aspects, objects, features, and advantages of the example embodiments will become apparent to those having ordinary skill in the art upon consideration of the following detailed description of example embodiments. Overview The technology may be utilized in environments such as a SAN environment that includes fiber channel switches, host bus adapters, storage arrays, and fiber channel cable used to connect the components. Components of the network perform functions as transceivers that transmit and receive light signals via the fiber cable. Each transceiver has sensors to measure the light received via the cable. The system receives data from the transceivers related to the light measurements received. The system stores the received data. When data is recognized as a fault, failure, or other disruption in the normal communication process, an event is identified. The system logs each received event, such as alarm light levels, alarm light level missing FLOGI, warning light levels, warning light level missing FLOGI, or other events. The system converts all measurements into a common unit of measurement so that all data sources may be compared and analyzed. The system analyzes aggregated data from all of the transceivers to detect patterns and trends. For example, the system may use a series of algorithms, thresholds, or data management software applications to determine when a fiber cable, a transceiver, or other system component is in danger of developing a fault. For example, the algorithm may recognize that a light level is flapping or fluctuating. The algorithm may recognize that the flapping is increasing in frequency or magnitude and determine that the flapping is likely to result in a fault. Any other determination may be made from the analyzed data. 
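The flapping check described above can be sketched as counting direction reversals in successive windows of optic power readings. This is an assumed heuristic for illustration only, not the patent's actual algorithm:

```python
# Illustrative flapping heuristic (assumed, not the patent's algorithm):
# count direction reversals (rise->fall or fall->rise) in a window of
# readings; a rising reversal count suggests worsening flapping.

def flap_count(readings):
    """Number of direction reversals in a series of power readings (dBm)."""
    deltas = [b - a for a, b in zip(readings, readings[1:]) if b != a]
    return sum(1 for a, b in zip(deltas, deltas[1:]) if (a > 0) != (b > 0))

def flapping_worsens(older_window, newer_window):
    """True when the newer window reverses direction more often than the older."""
    return flap_count(newer_window) > flap_count(older_window)
```

A monitoring loop could apply `flapping_worsens` to consecutive windows and raise a warning before a hard fault is reached.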
In another example, data may be fed as inputs into a machine learning process to monitor the received data and provide real time fault predictions. For example, the machine learning process may recognize subtle trends or patterns in the data, and use that recognition to predict potential failures. The system uses the received data to create a graphical user interface (“GUI”) that represents the health of the network. The GUI may present data, such as a number of events for each type of fault or other event. The GUI may be color coded to provide an overview of the system that allows a user to instantly recognize a current or potential fault. The overall health of the network may be captured in an overview visual of the system. When faults or potential faults are recognized, a user or technician may be directed to make repairs or changes to the network. In conventional systems, a light measurement tool requires a technician to tap into a fiber cable to intercept light signals. To get an accurate optic power level reading, the external tool must disrupt the signal. The improved method is non-intrusive and is less disruptive to the network. Further, this non-intrusive approach costs less in labor and equipment than conventional systems. This invention represents an advance in computer engineering and a substantial advancement over existing practices. The monitoring of the power levels allows faults to be detected in real time to prevent downtime. Further, the analysis of the aggregated data allows the system to predict potential faults to allow repairs or adjustment to the system to prevent faults before they are realized. This method and system provides a non-conventional, technical, real-world output and benefit that is not obtainable with conventional systems. All computing devices on the system operate more efficiently and with greater usage time when the optic power levels are measured and monitored. 
Humans could not perform the processes described herein because the amount of data received in real time from every device in a network could not be processed in the mind. Further, the analysis of the data is performed by algorithm, such as a machine learning algorithm, and the subtle inferences that identify potential faults are not performable in a human mind. Example System Architectures Turning now to the drawings, in which like numerals represent like (but not necessarily identical) elements throughout the figures, example embodiments are described in detail. FIG.1is a block diagram depicting a system100to provide optic power level monitoring. A user associated with a user computing device110installs an application, or makes a feature selection to obtain the benefits of the techniques described herein. As depicted inFIG.1, the system100includes network computing devices/systems110,120,130,140, and150that are configured to communicate with one another via one or more networks105or via any suitable communication technology. Each network105includes a wired or wireless telecommunication means by which network devices/systems (including devices110,120,130,140, and150) can exchange data, such as a fiber optic cable network. For example, each network105can include any of those described herein such as the network2080described inFIG.9or any combination thereof or any other appropriate architecture or system that facilitates the communication of signals and data. Throughout the discussion of example embodiments, the terms “data” and “information” are used interchangeably herein to refer to text, images, audio, video, or any other form of information that can exist in a computer-based environment. The communication technology utilized by the devices/systems110,120,130,140, and150may be similar networks to network105or an alternative communication technology. 
Each network computing device/system110,120,130,140, and150includes a computing device having a communication module capable of transmitting and receiving data over the network105or a similar network. For example, each network device/system110,120,130,140, and150can include any computing machine2000described herein and found inFIG.9or any other wired or wireless, processor-driven device. In the example embodiment depicted inFIG.1, the network devices/systems110,120,130,140, and150are operated by users, optic power monitoring system operators, storage array operators, server operators, and/or fiber channel switch operators, respectively. The user computing device110can use the communication application112, which may be, for example, a web browser, application, or a stand-alone module, to view, download, upload, or otherwise access documents or web pages through a user interface via the network105. The user computing device110can interact with the web servers or other computing devices connected to the network105, including the optic power monitoring system120or any other component of the network105. In another example embodiment, the user computing device110communicates with devices in the network105via any other suitable technology, including the example computing system described below. In an example, the communication application112is an application for receiving analyzed data from the optic power monitoring system120, such as a recommendation to repair a network fault. For example, the communication application112may be used to review faults, display graphical user interfaces, communicate with the optic power monitoring system120, interrogate the server140, or perform any other suitable tasks. The user computing device110also includes a data storage unit113accessible by the communication application112, or other applications. The example data storage unit113can include one or more tangible computer-readable storage devices. 
The data storage unit113can be stored on the user computing device110or can be logically coupled to the user computing device110. For example, the data storage unit113can include on-board flash memory and/or one or more removable memory accounts or removable flash memory. In another example embodiment, the data storage unit113may reside in a cloud-based computing system. An example optic power monitoring system120comprises a data storage unit123and an optic power monitoring server125. The data storage unit123can include any local or remote data storage structure accessible to the optic power monitoring system120suitable for storing information. The data storage unit123can include one or more tangible computer-readable storage devices, or the data storage unit123may be a separate system, such as a different physical or virtual machine or a cloud-based storage service. In one aspect, the optic power monitoring server125performs any of the processing or communication functions associated with the optic power monitoring system120described herein. For example, the optic power monitoring server125communicates with the user computing device110, the server140, the fiber channel switch150, the storage array system130and/or any other computing system to acquire data or to transmit requested data. The data may include any suitable data used for any purposes described herein, such as detecting faults or training a machine learning algorithm. The optic power monitoring server125may perform the functions described herein, such as to collect, log, aggregate, and analyze data received from the other components. The optic power monitoring server125may perform the analysis using rules, algorithms, software functions, machine learning, artificial intelligence, or any other suitable process to analyze the data. A storage array system130may include a data storage unit137. 
The storage array system130sends and receives data from the server140, the fiber channel switch150, or any other component or device via a fiber cable. The fiber cable may transmit light optically to or from the storage array system130, delivering packets or streams of data. The storage array system130communicates with the user computing device110and/or the optic power monitoring system120to request and receive data. The data may comprise measurements or other characteristics of the light signal received or transmitted along the fiber cable as described herein. The data storage unit137can include any local or remote data storage structure accessible to the storage array system130suitable for storing information. The data storage unit137can include one or more tangible computer-readable storage devices, or the data storage unit137may be a separate system, such as a different physical or virtual machine or a cloud-based storage service. The server140may be any device that performs functions for the network system to respond to user interactions. For example, the server140may be a “host” for applications or other functions accessed by users or customers. The server140may perform other functions of the network system, such as managing data flows, interacting with operators, receiving inputs of data from third parties, or performing any other suitable functions. The server140utilizes host bus adaptors141. The host bus adaptor141may be a circuit board or other type of hardware or software that connects a host system, such as server140, to a network105or to any components of the network105. A host bus adaptor141also provides input/output processing to reduce the load on the microprocessor of the server140when storing and retrieving data. The fiber channel switch150allows the creation of a fiber channel fabric, a core component of the SAN system. The fabric is a network of fiber channel components that allow for communications along the network. 
The fiber channel switches150are used to connect storage array systems130to servers140or other components. This connection provides point-to-point, switched and loop interfaces to deliver in-order and lossless raw block data. It will be appreciated that the network connections shown are examples, and other means of establishing a communications link between the computers and devices can be used. Moreover, those having ordinary skill in the art having the benefit of the present disclosure will appreciate that the user computing device110, optic power monitoring system120, the storage array system130, the server140, and the fiber channel switch150illustrated inFIG.1can have any of several other suitable computer system configurations. For example, a user computing device110embodied as a mobile phone or handheld computer may not include all the components described above. FIG.2is a block diagram depicting an example SAN system using fiber cable communications. This example SAN system illustrates the connections of a fiber cable system between servers140, fiber channel switches150, and storage arrays130. The interconnecting lines represent fiber cables between the components of the SAN system. For example, the SAN switch A150is connected to both storage arrays130and both ESX servers140by fiber cables. The switch is thus able to create a fabric A that allows communications to flow from or to any of the components. In example embodiments, the network computing devices and any other computing machines associated with the technology presented herein may be any type of computing machine such as, but not limited to, those discussed in more detail with respect toFIG.9. Furthermore, any modules associated with any of these computing machines, such as modules described herein or any other modules (scripts, web content, software, firmware, or hardware) associated with the technology presented herein may be any of the modules discussed in more detail with respect toFIG.9. 
The computing machines discussed herein may communicate with one another as well as other computer machines or communication systems over one or more networks, such as network105. The network105may include any type of data or communications network, including any of the network technology discussed with respect toFIG.9. Example Processes The example methods illustrated inFIG.3are described hereinafter with respect to the components of the example architecture100. The example methods also can be performed with other systems and in other architectures including similar elements. Referring toFIG.3, and continuing to refer toFIG.1for context, a block flow diagram illustrates methods300to predict and diagnose optic power faults, in accordance with certain examples of the technology disclosed herein. In block310, the optic power monitoring system120receives a topology of a fiber optic network system and the components of the system. The topology may include the locations of each component, the connections between components, the functions of each component, the model numbers and manufacturers of each component, the types of communication technologies between each component, and any other suitable data. The components may include components described herein, such as an optic power monitoring system120, a storage array system130, a server140, and a fiber channel switch150, or any other suitable components of a network system105. The functions of the optic power monitoring system120may be performed by an optic power monitoring server125that is part of the optic power monitoring system120. The optic power monitoring system120may monitor communications or other interactions between the components to determine how the components are connected. The optic power monitoring system120may receive inputs from an operator that describe or illustrate the topology of the system. Any other process or method may be used to receive the topology. 
In block320, the optic power monitoring system120receives real time inputs from system components of light measurements in the fiber cables. Each component, such as server140, may have measurement devices or tools that monitor the power or level of light received over an input fiber cable. The fiber cable delivers data to the component via light signals transmitted over the fiber cable. In an example, when the level of light is low, the component may not be able to interpret the data. The measurement device on the component may be provided by the manufacturer of the component as an onboard function of the component. In other examples, the measurement device may be a separate tool that is attached to the component by the manufacturer or by an operator of the network105. The measurements from the components may include any related data, such as real time optic power levels, historical optic power levels, device types, units of measurement, device serial numbers, fiber cable types, fiber signal sources, or any other suitable data. The optic power levels may vary based on multiple factors. In certain examples, the optic power level may vary naturally based on the source of the signal. In other examples, problems or faults in the optic cable or other components of the network105may cause the optic power levels to vary or disappear. For example, the optic power level may be affected by distance. That is, the farther the signal travels, the more the signal may degrade, and the lower the power level at the receiving end of the fiber cable. In another example, splices and connectors can cause signal loss, especially if the splices and connectors are not professionally installed or maintained. In another example, bending of a fiber optic cable too tightly can cause the signal to weaken, which can lead to low power levels. In another example, dust and dirt can accumulate on the connector end faces, causing attenuation and reflection. 
In another example, a connector, such as a GBIC or transceiver on the end of the fiber optic cable, can become dirty or damaged, which can cause signal loss and low power levels. In another example, extreme temperatures can cause the fiber to expand or contract, causing signal loss and low power levels. In another example, the fiber cable may degrade over time, causing signal loss and low power levels. Any other condition may cause a weakening or loss of the optic power level of a signal. When the optic power monitoring system120receives the real time inputs from the components, the data may be in any suitable format or unit of measurement. For example, data from one component may be provided in one data packet format that is standard for the manufacturer of that component while another component provides data in a different data packet format. For example, one data packet may be provided in CSV format, while another is provided with a JSON format. The inputs may be received continuously, periodically, or at regular intervals. The inputs may be a reading of the optic power level entering or leaving the component along the fiber cable. An alarm signal generated by the component that recognizes a fault may also be communicated in the data packet. In one example, the data is received from a host bus adaptor, such as the host bus adaptor141that is illustrated in association with the server140. In certain examples of the technology, the host bus adaptor141does not provide optic power levels; only distributed components of the network105, such as the storage array system130and the fiber channel switches150, provide them. In block330, the optic power monitoring system120logs events of low optical power levels including total events, alarm light levels, alarm light level missing Fabric Login (“Flogi”), warning light levels, or warning light level missing Flogi. 
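Normalizing the heterogeneous inputs (CSV from one vendor, JSON from another) into one record shape can be sketched as below. The field names (`device`, `power_dbm`) are invented for the illustration and are not defined by the patent:

```python
# Sketch of normalizing heterogeneous component readings into one record shape.
# Field names ("device", "power_dbm") are assumed for illustration only.
import csv
import io
import json

def from_csv(text):
    """Parse CSV readings into a list of uniform records."""
    return [{"device": r["device"], "power_dbm": float(r["power_dbm"])}
            for r in csv.DictReader(io.StringIO(text))]

def from_json(text):
    """Parse JSON readings into the same record shape as from_csv."""
    return [{"device": d["device"], "power_dbm": float(d["power_dbm"])}
            for d in json.loads(text)]
```

Once every source lands in the same record shape, the aggregation and analysis steps can treat all components uniformly regardless of vendor format.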
Any other suitable events may be recorded, such as any event that indicates a loss or pending loss or interruption of the signal. The optic power monitoring system120may recognize the events or the events may be flagged by the component providing the data. For example, the component may include a notification that an event has occurred or is occurring, and the event is represented in the data. In another example, the optic power monitoring system120recognizes the event, such as a reading of optic power level below a configured threshold. In block340, the optic power monitoring system120converts all measurements into common units of measurement. In an example, the light level signal detected by certain switch ports is measured in dBm, while the light level detected from the storage ports is measured in milliwatts. The unit dBm represents decibels referenced to one milliwatt. Measurements of optical power may be expressed in units of dBm. The “m” in dBm refers to the reference power, which is 1 mW. A milliwatt is a unit of power equal to one thousandth of a watt. A watt is used to specify the rate at which electrical energy is dissipated, or the rate at which electromagnetic energy is radiated, absorbed, or dissipated. In the example, a measurement received in milliwatts is converted to dBm to have consistent measurement across each component for storage and analysis. Any other units of measurement provided by a component may be converted to a standard unit. When measurements are stored in a single unit of measure, direct comparison of data may be performed during an analysis without a need for further programming of unit conversions. In block350, the optic power monitoring system120analyzes aggregated logged data to detect patterns and trends. The optic power monitoring system120may receive the data from any number of components of the network105. For example, in a large network, thousands of components may provide data, such as 1,000 or 10,000 components. 
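The milliwatt-to-dBm conversion used in block340is the standard logarithmic relation, dBm = 10·log10(P / 1 mW), so 1 mW corresponds to 0 dBm and 0.5 mW to about −3 dBm:

```python
# Standard optical power unit conversion used when normalizing readings.
import math

def mw_to_dbm(milliwatts):
    """Convert optical power in milliwatts to dBm: 10 * log10(P / 1 mW)."""
    return 10 * math.log10(milliwatts)

def dbm_to_mw(dbm):
    """Inverse conversion: dBm back to milliwatts."""
    return 10 ** (dbm / 10)
```

Converting every reading with `mw_to_dbm` before storage gives the single unit of measure the analysis step relies on.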
Each component may provide data continuously or in discrete data packets. The data packets may include, for example, ten seconds or ten minutes of optic power levels. When communicating continuously, the data may be a continuous reading of the optic power level at the component. The optic power monitoring system120may aggregate the data for any number of the components, such as for all 10,000 of the components. For example, the optic power monitoring system120may place all of the data in a database, average all of the data, list all of the data in a spreadsheet format, or save the data in any suitable format. The optic power monitoring system120may compare inputs from similar devices. For example, the optic power monitoring system120may save the inputs to servers140that are being used in a particular application or are from the same manufacturer. The aggregated data from these similar components may provide additional information when compared or analyzed. The optic power monitoring system120analyzes the data. The analysis may be based on a set of rules or algorithms to detect patterns or trends, identify events or failures, or to create models of component behaviors. Each location, node, component, or other subsection of the network105may include a number of metrics that may be used to perform the analysis. For example, a list of metrics may include inventory data for each component such as vendor, model, type, serial number, name, and geographical location. The metrics may include a present measure of light expressed in dBm or milliwatts. The metrics may include a fiber channel status for the host paths, a list of servers140connected to the SAN infrastructure, a breakdown of the SAN paths, an application ID from an inventory catalog for servers140, a list of storage arrays130, and a list of switches150. 
For example, the optic power monitoring system120may map the network105and note the expected optic power levels at each location of the network105. By logging the optic power level at each location, the optic power monitoring system120may determine when levels are lower than expected, when the levels are trending downward, when the levels are cycling, or when the levels are displaying any other indications of a fault or a potential fault. Based on a history of optic power levels at each component, the optic power monitoring system120recognizes when a pattern that leads to a fault or failure is repeating. For example, if an optic power level at a particular server140is experiencing short periods with no signal, and the periods of no signal are becoming more frequent, then the optic power monitoring system120is able to predict when a failure is likely. The prediction may be based on data from previous failures that followed a similar pattern. In another example, the optic power monitoring system120identifies that an optic power level is cycling. That is, the optic power level is rising and falling throughout a given time period. The optic power monitoring system120is able to determine based on historic data that when the optic power level cycles in this manner, a failure is likely to occur. In another example, the optic power monitoring system120identifies that an optic power level is drifting lower. The optic power monitoring system120is able to predict that the optic power level will continue to drift lower until a failure is realized. The optic power monitoring system120determines a threshold at which corrective actions must be taken. In an example, the analysis may be performed using one or more machine learning algorithms hosted by the optic power monitoring system120, such as on the optic power monitoring server125. 
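The downward-drift prediction described above can be sketched with a least-squares fit: estimate the slope of recent readings and project when the trend would cross an alarm threshold. This is an assumed illustration, not the patent's actual prediction algorithm:

```python
# Illustrative drift check (assumed, not the patent's algorithm): fit a line
# to recent dBm readings and estimate when the trend crosses a threshold.

def linear_fit(ys):
    """Least-squares slope and intercept for y over x = 0, 1, 2, ..."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def samples_until_threshold(ys, threshold_dbm):
    """Estimated samples until a falling trend reaches the threshold.

    Returns None when the readings are not trending downward.
    """
    slope, intercept = linear_fit(ys)
    if slope >= 0:
        return None
    return max(0.0, (threshold_dbm - intercept) / slope)
```

A monitoring loop could raise a warning when the projected crossing falls within a maintenance window, giving technicians time to act before the failure is realized.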
As described in the Machine Learning section below, the data stored by the optic power monitoring system120is used to train the machine learning algorithm to create predictive models of the actions, systems, locations, devices, or any other characteristic of the fiber cable, the light signals being transmitted, and the components of the network105. The machine learning function imports or otherwise receives the data from the optic power monitoring system120and the components. The data is fed into the machine learning algorithm or otherwise provided as training data to the machine learning algorithm. As described herein, the data is processed and categorized by the machine learning algorithm. For example, optic power levels, device types, and system topology are used to model the network105and identify potential faults. Further, the machine learning algorithm may also extract non-trivial characteristics linking the data. For example, the input data to a neural network eventually becomes weights used by equations in nodes (e.g. one or more hidden layers transform the input data into values to which the weights are applied and adjusted according to an output value). Therefore, the neural network “learns” indescribable patterns between inputs. These unseen connections, trends, and tendencies may not alone be indicative of an imminent failure, but a pattern that is not realistically observable by human analysis may provide a basis for a need for an investigation or alteration to the network105. In an example, the machine learning algorithm determines relationships between optical power levels, actions of the components, and failure events. As described herein, different machine learning algorithms may be tested to determine which type of machine learning algorithm provides the most accurate predictions or results. 
For example, the results may be tested by comparing predicted results of applying different rule sets to historical data in which fault events or failures were identified. The machine learning algorithm may deploy the model or algorithm selected for usage in real time based on the results of the testing. The results of the usage may be fed back to the machine learning algorithm to allow for adjustments to the selected algorithm. As additional results are received from user interactions, the model may be adjusted to better fit the data and thus make more accurate predictions. The details of the method to train the machine learning algorithm are described in greater detail in the Machine Learning section below. In the examples herein, different machine learning algorithms utilized may include, but are not limited to, support vector machines, naive Bayes, neural networks, gradient boosting machines, random forests, and an ensemble of multiple algorithms. These algorithms are discussed in greater detail below. In block360, the optic power monitoring system120creates a graphical user interface400with representations of the aggregated data. The graphical user interface400may present a dashboard providing an overview of the network105, and/or specific details of each component of the network105. The graphical user interface400may include a unique visual representation of the light/power levels to take preventive actions or make business decisions as to the effectiveness of the devices monitored. The graphical user interface400may display color codes to represent ranges of conformity or non-conformity to established thresholds. The graphical user interface400may display customized thresholds based on performance metrics from history data. The graphical user interface400may display daily counts of low optic power levels to help determine the persistency of issues. 
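The selection step described above, comparing candidate rule sets against historical data in which faults were identified, can be sketched with simple rule functions scored by accuracy. The rules and data here are invented for illustration; they stand in for whatever models or algorithms the system actually evaluates:

```python
# Hedged sketch of the model-selection step: score candidate rules against
# historical windows labeled with whether a fault followed, keep the best.
# The rules and thresholds below are illustrative assumptions.

def rule_low_level(window, threshold=-6.0):
    """Predict a fault when any reading in the window drops below threshold."""
    return min(window) < threshold

def rule_downward(window):
    """Predict a fault when the window ends lower than it started."""
    return window[-1] < window[0]

def accuracy(rule, history):
    """Fraction of (window, fault_followed) pairs the rule predicts correctly."""
    return sum(rule(w) == fault for w, fault in history) / len(history)

def select_rule(rules, history):
    """Deploy the rule with the best accuracy on the historical data."""
    return max(rules, key=lambda r: accuracy(r, history))
```

As new outcomes arrive, they can be appended to `history` and the selection re-run, mirroring the feedback loop in which usage results adjust the deployed model.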
This display allows a user to prioritize issues according to their persistence because persistent issues or reports of low power levels may be indicative of a looming problem. The graphical user interface400display of daily counts of low optic power levels helps to avoid false alarms/warnings. The graphical user interface400provides a less disruptive means to troubleshoot a problem's root cause due to low optic power level. The graphical user interface400provides a graphical federated view of data collected from the various components and aspects of the SAN infrastructure from different vendors for each of the components. The graphical user interface400provides options for monitoring of data, patterns, and behaviors to predict future problems and viable solutions. The graphical user interface400allows a user an easy process to maintain and observe inventory data of storage arrays130and switches150. The graphical user interface400allows a user to monitor components from multiple different vendors in a consistent, uniform format. The different types of components may be displayed together on a single graphical user interface400. Examples of variations of the graphical user interface400are described below inFIG.4-FIG.7. In block370, the optic power monitoring system120recommends to a technician or user to troubleshoot a component or a fiber cable. Based on a determination from the analysis that a power level input is indicative of a failure or a likely failure, the optic power monitoring system120communicates a notification to a user via the graphical user interface400or any other suitable notification technology, such as a text or an email. In the example, when the analysis indicates that an optic power level is cycling, dropping, or providing any other indication of failure, the optic power monitoring system120may display the fault on the graphical user interface400in red to indicate the failure. 
The display may be on the dashboard of the graphical user interface400to provide the notice to the user in a location that is likely to be seen. The recommendation may be based on the optic power level actually being in a fault condition, such as being below a threshold. In another example, the recommendation may be based on a subtle inference recognized by an algorithm, such as the machine learning algorithm, that indicates that a failure may be likely or imminent, as described herein. Any other basis for the recommendation may be used. InFIG.4, an overview of the system is displayed on a dashboard of the graphical user interface400. Specific features of the dashboard are described in greater detail inFIG.5-FIG.9. FIG.5is an example graphical user interface500depicting fiber cable power events. The graphical user interface500includes a numerical representation of Total Events501, Alarm Light Level502, Alarm Light Level Missing Flogi503, Warning Light Level504, and Warning Light Level Missing Flogi505. Three of the event totals are displayed in a bar graph506. The bar graph506provides a display that is easy for a user to view and assess to allow the user to determine the state of the network105. Any other type of fault or potential fault may be displayed in the graphical user interface500. For example, events such as power drift, power cycling, or power spikes may be displayed. FIG.6is an example graphical user interface600depicting fiber cable power statuses. The graphical user interface600displays details of a status of five different devices. The device names601are displayed for identification. An indication of the optic power level of signals being received602by the device601and an indication of the optic power level of signals being transmitted603are displayed. The receiving signal level602is displayed in red to indicate that the device601input is in an alarm state. 
The transmitting signal level603is displayed in green to indicate that the device601output is not in an alarm state. A total number of daily events604is displayed to allow a user a visual indication of the number of events occurring on this day compared to previous days. Other data is displayed on graphical user interface600, such as the region of the device, the port used, the last connected device, the type of alarm issue, and the date and time of the last event. Any other suitable display may be presented on the graphical user interface600. FIG.7is an example graphical user interface700depicting fiber cable power statuses. The graphical user interface700allows a user to specify the data in the display. For example, if a user desires to view only the servers connected to the network105, the display appears as inFIG.7. The status of the servers is displayed using a format as described with respect toFIG.6, except only the servers are displayed. FIGS.8aand8bare example flow diagrams depicting a method to determine display options. The flow chart800represents an example logic process to determine whether input data describes an instance of faults in the measured optic power. In process1, the method starts. In process2, the optic power monitoring system120gets light metrics from one or more of the network components, such as the storage array system130or the fiber channel switch150. In process3, the optic power monitoring system120converts the light metrics from milliwatts (“mW”) to decibel-milliwatts (“dBm”). Process2and process3may be performed as described in the method300ofFIG.3. In process4, the optic power monitoring system120filters events with a threshold. Events that are above or below a threshold, or are between two thresholds, are filtered such that different actions can be taken with the data. In process5, the optic power monitoring system120pushes events to a web interface, such as the graphical user interface600, every two hours. 
Any suitable amount of time for data to be pushed to the graphical user interface600may be used, as described herein. In process6, the optic power monitoring system120filters by array name, serial number, event type, or other identifiers. As described in the method300inFIG.3, the optic power monitoring system120captures data from each component in the network and sorts the data to a storage location for that component or for the identified type of data. In process7, the optic power monitoring system120determines a color for the graphical user interface600to display for event totals. The optic power monitoring system120determines the statistics for the process, such as the key index and the time that has elapsed. The optic power monitoring system120counts the events that have occurred since the time period began. In examples, the time period is the previous 24 hours or the previous week. If the counted events total from 0-1, then the display is instructed to indicate GREEN. If the counted events total from 2-6, then the display is instructed to indicate YELLOW. If the counted events total a number greater than 6, then the display is instructed to indicate RED. Any suitable number of events may be used as the scale to indicate a color. For example, in a very large network with millions of nodes or users, then the optic power monitoring system120may indicate RED only when the total is over 100 or 1000 events. In process8, the optic power monitoring system120determines a color for the graphical user interface600to display for optic power levels being transmitted. The optic power monitoring system120logs the communicated measurements of optic power as the measurements are received. If the communicated measurements are in a particular range, then a color is assigned to the display on the graphical user interface600. In the example, when the communicated power level is below −13.00 dBm, then the display is instructed to indicate RED. 
If the communicated power level is between −12.99 and −9.00 dBm, then the display is instructed to indicate YELLOW. If the communicated power level is between −8.99 and −1.31 dBm, then the display is instructed to indicate GREEN. If the communicated power level is between −1.30 and 1.69 dBm, then the display is instructed to indicate YELLOW. If the communicated power level is greater than 1.69 dBm, then the display is instructed to indicate RED. Any suitable optical power level may be used as the scale to indicate a color. In process9, the optic power monitoring system120determines a color for the graphical user interface600to display for optic power levels being received by a component. In the example, when the communicated power level is below −15.90 dBm, then the display is instructed to indicate RED. If the communicated power level is between −15.89 and −11.90 dBm, then the display is instructed to indicate YELLOW. If the communicated power level is between −11.89 and −0.01 dBm, then the display is instructed to indicate GREEN. If the communicated power level is between 0.0 and 2.99 dBm, then the display is instructed to indicate YELLOW. If the communicated power level is greater than 2.99 dBm, then the display is instructed to indicate RED. Any suitable optical power level may be used as the scale to indicate a color. In process10, the optic power monitoring system120determines a color for the graphical user interface600to display for optic power levels being transmitted and received by a component in mW. In the example, when the communicated power level is below 0.05 mW, then the display is instructed to indicate RED. If the communicated power level is between 0.05 and 0.075 mW, then the display is instructed to indicate GREEN. If the communicated power level is greater than 0.075 mW, then the display is instructed to indicate RED. Any suitable optical power level may be used as the scale to indicate a color. In process11, the method800ends. 
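The conversion in process3and the color selections in processes7,8, and9can be sketched in the following manner (a minimal illustration of the described thresholds; the function names are hypothetical and not part of the method800):

```python
import math

def mw_to_dbm(mw: float) -> float:
    """Convert an optical power reading from milliwatts to decibel-milliwatts (process3)."""
    return 10.0 * math.log10(mw)

def event_total_color(count: int) -> str:
    """Color for the counted events in the time period (process7)."""
    if count <= 1:
        return "GREEN"
    if count <= 6:
        return "YELLOW"
    return "RED"

def tx_power_color(dbm: float) -> str:
    """Color for optic power levels being transmitted, in dBm (process8)."""
    if dbm < -13.00:
        return "RED"
    if dbm <= -9.00:
        return "YELLOW"
    if dbm <= -1.31:
        return "GREEN"
    if dbm <= 1.69:
        return "YELLOW"
    return "RED"

def rx_power_color(dbm: float) -> str:
    """Color for optic power levels being received by a component, in dBm (process9)."""
    if dbm < -15.90:
        return "RED"
    if dbm <= -11.90:
        return "YELLOW"
    if dbm <= -0.01:
        return "GREEN"
    if dbm <= 2.99:
        return "YELLOW"
    return "RED"
```

As described above, any suitable optical power level or event count may be substituted into this scale to indicate a color.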
Machine Learning Machine learning is a field of study within artificial intelligence that allows computers to learn functional relationships between inputs and outputs without being explicitly programmed. The term “Artificial Intelligence” refers to a quantitative method, system, or approach (“techniques”) that emulates human intelligence via computer programs. These can be used to make estimates, predictions, recommendations, or decisions in manners that go beyond classical, statistical, mathematical, econometric, or financial approaches. Machine learning is the subset of AI that derives representations or inferences from data without explicitly programming every parameter representation or computer step (for example, Random Forest or Artificial Neural Network based algorithm approaches). In contrast, AI techniques that are not members of the machine learning subset include techniques such as fuzzy logic and complex dependency parsing techniques for natural language processing. Machine learning involves a module comprising algorithms that may learn from existing data by analyzing, categorizing, or identifying the data. Such machine-learning algorithms operate by first constructing a model from training data to make predictions or decisions expressed as outputs. In example embodiments, the training data includes data for one or more identified features and one or more outcomes, for example using optic power levels for components of the network105to identify that a failure is likely. Although example embodiments are presented with respect to a few machine-learning algorithms, the principles presented herein may be applied to other machine-learning algorithms. Data supplied to a machine learning algorithm can be considered a feature, which can be described as an individual measurable property of a phenomenon being observed. The concept of feature is related to that of an independent variable used in statistical techniques such as those used in linear regression. 
The performance of a machine learning algorithm in pattern recognition, classification and regression is highly dependent on choosing informative, discriminating, and independent features. Features may comprise numerical data, categorical data, time-series data, strings, graphs, or images. In general, there are two categories of machine learning problems: classification problems and regression problems. Classification problems, also referred to as categorization problems, aim at classifying items into discrete category values. Training data teaches the classifying algorithm how to classify. In example embodiments, features to be categorized may include interaction data, which can be provided to the classifying machine learning algorithm and then placed into categories of, for example, interactions with provider X, requesting data type Y, from user computing device Z, at geolocation A, or using application B. Regression algorithms aim at quantifying and correlating one or more features. Training data teaches the regression algorithm how to correlate the one or more features into a quantifiable value. Embedding In one example, the machine learning module may use embedding to provide a lower dimensional representation, such as a vector, of features to organize them based on their respective similarities. In some situations, these vectors can become massive. In the case of massive vectors, particular values may become very sparse among a large number of values (e.g., a single instance of a value among 50,000 values). Because such vectors are difficult to work with, reducing the size of the vectors, in some instances, is necessary. A machine learning module can learn the embeddings along with the model parameters. In example embodiments, features such as geolocation can be mapped to vectors implemented in embedding methods. In example embodiments, embedded semantic meanings are utilized. Embedded semantic meanings are values of respective similarity. 
For example, the distance between two vectors, in vector space, may imply that two values located elsewhere with the same distance are categorically similar. Embedded semantic meanings can be used with similarity analysis to rapidly return similar values. In example embodiments, the methods herein are developed to identify meaningful portions of the vector and extract semantic meanings within that space. Training Methods In example embodiments, the machine learning module can be trained using techniques such as unsupervised, supervised, semi-supervised, reinforcement learning, transfer learning, incremental learning, curriculum learning techniques, and/or learning to learn. Training typically occurs after selection and development of a machine learning module and before the machine learning module is operably in use. In one aspect, the training data used to teach the machine learning module can comprise input data such as optic power levels for components of the network105. Unsupervised and Supervised Learning In an example embodiment, unsupervised learning is implemented. Unsupervised learning can involve providing all or a portion of unlabeled training data to a machine learning module. The machine learning module can then determine one or more outputs implicitly based on the provided unlabeled training data. In an example embodiment, supervised learning is implemented. Supervised learning can involve providing all or a portion of labeled training data to a machine learning module, with the machine learning module determining one or more outputs based on the provided labeled training data, and the outputs are either accepted or corrected depending on their agreement with the actual outcome of the training data. In some examples, supervised learning of machine learning system(s) can be governed by a set of rules and/or a set of labels for the training input, and the set of rules and/or set of labels may be used to correct inferences of a machine learning module. 
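The similarity analysis between embedding vectors described above can be sketched with a cosine similarity measure (a generic illustration; the function and vectors are hypothetical and not part of the embodiments):

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors; values near 1.0
    imply the embedded items are categorically similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

In such a sketch, two feature vectors embedded close together in vector space return a similarity near 1.0, allowing similar values to be rapidly returned.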
Semi-Supervised and Reinforcement Learning In one example embodiment, semi-supervised learning is implemented. Semi-supervised learning can involve providing all or a portion of training data that is partially labeled to a machine learning module. During semi-supervised learning, supervised learning is used for a portion of labeled training data, and unsupervised learning is used for a portion of unlabeled training data. In one example embodiment, reinforcement learning is implemented. Reinforcement learning can involve first providing all or a portion of the training data to a machine learning module and as the machine learning module produces an output, the machine learning module receives a “reward” signal in response to a correct output. Typically, the reward signal is a numerical value and the machine learning module is developed to maximize the numerical value of the reward signal. In addition, reinforcement learning can adopt a value function that provides a numerical value representing an expected total of the numerical values provided by the reward signal over time. Transfer Learning In one example embodiment, transfer learning is implemented. Transfer learning techniques can involve providing all or a portion of a first training data to a machine learning module, then, after training on the first training data, providing all or a portion of a second training data. In example embodiments, a first machine learning module can be pre-trained on data from one or more computing devices. The first trained machine learning module is then provided to a computing device, where the computing device is intended to execute the first trained machine learning model to produce an output. Then, during the second training phase, the first trained machine learning model can be additionally trained using additional training data, where the training data can be derived from kernel and non-kernel data of one or more computing devices. 
This second training of the machine learning module and/or the first trained machine learning model using the training data can be performed using either supervised, unsupervised, or semi-supervised learning. In addition, it is understood that transfer learning techniques can involve one, two, three, or more training attempts. Once the machine learning module has been trained on at least the training data, the training phase can be completed. The resulting trained machine learning model can be utilized as at least one trained machine learning module. Incremental and Curriculum Learning In one example embodiment, incremental learning is implemented. Incremental learning techniques can involve providing a trained machine learning module with input data that is used to continuously extend the knowledge of the trained machine learning module. Another machine learning training technique is curriculum learning, which can involve training the machine learning module with training data arranged in a particular order, such as providing relatively easy training examples first, then proceeding with progressively more difficult training examples. As the name suggests, difficulty of training data is analogous to a curriculum or course of study at a school. Learning to Learn In one example embodiment, learning to learn is implemented. Learning to learn, or meta-learning, comprises, in general, two levels of learning: quick learning of a single task and slower learning across many tasks. For example, a machine learning module is first trained and comprises a first set of parameters or weights. During or after operation of the first trained machine learning module, the parameters or weights are adjusted by the machine learning module. This process occurs iteratively on the success of the machine learning module. 
In another example, an optimizer, or another machine learning module, is used wherein the output of a first trained machine learning module is fed to an optimizer that constantly learns and returns the final results. Other techniques for training the machine learning module and/or trained machine learning module are possible as well. Contrastive Learning In an example embodiment, contrastive learning is implemented. Contrastive learning is a self-supervised model of learning in which the training data is unlabeled; it is considered a form of learning in between supervised and unsupervised learning. This method learns by contrastive loss, which separates unrelated (i.e., negative) data pairs and connects related (i.e., positive) data pairs. For example, to create positive and negative data pairs, more than one view of a datapoint, such as rotating an image or using a different time-point of a video, is used as input. Positive and negative pairs are learned by solving a dictionary look-up problem. The two views are separated into query and key of a dictionary. A query has a positive match to a key and a negative match to all other keys. The machine learning module then learns by connecting queries to their keys and separating queries from their non-keys. A loss function, such as those described herein, is used to minimize the distance between positive data pairs (e.g., a query to its key) while maximizing the distance between negative data points. See e.g., Tian, Yonglong, et al. “What makes for good views for contrastive learning?” Advances in Neural Information Processing Systems 33 (2020): 6827-6839. Pre-Trained Learning In example embodiments, the machine learning module is pre-trained. A pre-trained machine learning model is a model that has been previously trained to solve a similar problem. The pre-trained machine learning model is generally pre-trained with similar input data to that of the new problem. 
Further training a pre-trained machine learning model to solve a new problem is generally referred to as transfer learning, which is described herein. In some instances, a pre-trained machine learning model is trained on a large dataset of related information. The pre-trained model is then further trained and tuned for the new problem. Using a pre-trained machine learning module provides the advantage of building a new machine learning module with input neurons/nodes that are already familiar with the input data and are more readily refined to a particular problem. See e.g., Diamant N, et al. Patient contrastive learning: A performant, expressive, and practical approach to electrocardiogram modeling. PLoS Comput Biol. 2022 Feb. 14; 18(2):e1009862. In some examples, after the training phase has been completed but before producing predictions expressed as outputs, a trained machine learning module can be provided to a computing device where a trained machine learning module is not already resident; in other words, after the training phase has been completed, the trained machine learning module can be downloaded to a computing device. For example, a first computing device storing a trained machine learning module can provide the trained machine learning module to a second computing device. Providing a trained machine learning module to the second computing device may comprise one or more of communicating a copy of the trained machine learning module to the second computing device, making a copy of the trained machine learning module for the second computing device, providing access to the trained machine learning module to the second computing device, and/or otherwise providing the trained machine learning system to the second computing device. In example embodiments, a trained machine learning module can be used by the second computing device immediately after being provided by the first computing device. 
In some examples, after a trained machine learning module is provided to the second computing device, the trained machine learning module can be installed and/or otherwise prepared for use before the trained machine learning module can be used by the second computing device. After a machine learning model has been trained it can be used to output, estimate, infer, predict, generate, produce, or determine; for simplicity, these terms will collectively be referred to as results. A trained machine learning module can receive input data and operably generate results. As such, the input data can be used as an input to the trained machine learning module for providing corresponding results to kernel components and non-kernel components. For example, a trained machine learning module can generate results in response to requests. In example embodiments, a trained machine learning module can be executed by a portion of other software. For example, a trained machine learning module can be executed by a result daemon to be readily available to provide results upon request. In example embodiments, a machine learning module and/or trained machine learning module can be executed and/or accelerated using one or more computer processors and/or on-device co-processors. Such on-device co-processors can speed up training of a machine learning module and/or generation of results. In some examples, a trained machine learning module can be trained, reside, and execute to provide results on a particular computing device, and/or otherwise can make results for the particular computing device. Input data can include data from a computing device executing a trained machine learning module and/or input data from one or more computing devices. In example embodiments, a trained machine learning module can use results as input feedback. A trained machine learning module can also rely on past results as inputs for generating new results. 
In example embodiments, input data can comprise optic power levels for components of the network105, and, when provided to a trained machine learning module, results in output data such as that a failure is likely. The output can then be provided to a user to inspect a component or fiber cable. Algorithms Different machine-learning algorithms have been contemplated to carry out the embodiments discussed herein. For example, linear regression (LiR), logistic regression (LoR), Bayesian networks (for example, naive Bayes), random forest (RF) (including decision trees), neural networks (NN) (also known as artificial neural networks), matrix factorization, a hidden Markov model (HMM), support vector machines (SVM), K-means clustering (KMC), K-nearest neighbor (KNN), a suitable statistical machine learning algorithm, and/or a heuristic machine learning system may be used for classifying or evaluating whether one or more rules would be useful to identify a likely failure. The methods described herein can be implemented with more than one machine learning method. The machine learning system can use a combination of machine learning algorithms. The machine learning algorithms may be of the same type or of different types. For example, a first machine learning algorithm may be trained for a first type of result, while a second machine learning algorithm may be trained for a second type of result. In certain examples, the first type of result may be an input into the second machine learning algorithm, while in other examples, the two results are combined to produce a third result. In certain examples, the first and second types of results are both inputs into a third machine learning algorithm that produces the third result. Linear Regression (LiR) In one example embodiment, linear regression machine learning is implemented. LiR is typically used in machine learning to predict a result through the mathematical relationship between an independent and dependent variable. 
A simple linear regression model would have one independent variable (x) and one dependent variable (y). A representation of an example mathematical relationship of a simple linear regression model would be y=mx+b. In this example, the machine learning algorithm tries variations of the tuning variables m and b to optimize a line that best fits the given training data. The tuning variables can be optimized, for example, with a cost function. A cost function takes advantage of the minimization problem to identify the optimal tuning variables. The minimization problem presupposes that the optimal tuning variables will minimize the error between the predicted outcome and the actual outcome. An example cost function may comprise summing all the squared differences between the predicted and actual output values and dividing by the total number of input values, which results in the average squared error. To select new tuning variables to reduce the cost function, the machine learning module may use, for example, gradient descent methods. An example gradient descent method comprises evaluating the partial derivative of the cost function with respect to the tuning variables. The sign and magnitude of the partial derivatives indicate whether the choice of a new tuning variable value will reduce the cost function, thereby optimizing the linear regression algorithm. A new tuning variable value is selected depending on a set threshold. Depending on the machine learning module, a steep or gradual negative slope is selected. Both the cost function and gradient descent are well known in the art and are applicable to other machine learning algorithms and modules mentioned throughout; for the sake of brevity, they may not be described with the same detail again. LiR models may have many levels of complexity comprising one or more independent variables. 
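The cost function and gradient descent described above can be sketched for the simple y=mx+b case in the following manner (a minimal illustration; the learning rate, iteration count, and data are hypothetical):

```python
def fit_simple_linear(xs, ys, lr=0.01, epochs=2000):
    """Fit y = m*x + b by gradient descent on the average squared error."""
    m, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Partial derivatives of the cost function with respect to m and b.
        grad_m = sum(2 * (m * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (m * x + b - y) for x, y in zip(xs, ys)) / n
        # Step each tuning variable against the sign of its partial derivative.
        m -= lr * grad_m
        b -= lr * grad_b
    return m, b
```

For example, fitting training data drawn from the line y=2x+1 returns tuning variables near m=2 and b=1.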
Furthermore, in an LiR function with more than one independent variable, each independent variable may have the same one or more tuning variables or each, separately, may have their own one or more tuning variables. The number of independent variables and tuning variables will be understood by one skilled in the art for the problem being solved. In example embodiments, optic power levels for components of the network105are used as the independent variables to train a LiR machine learning module, which, after training, is used to estimate, for example, whether a failure is likely. Logistic Regression (LoR) In one example embodiment, logistic regression machine learning is implemented. Logistic Regression, often considered a LiR type model, is typically used in machine learning to classify information, such as optic power levels for components of the network105, into categories such as whether a failure is likely. LoR takes advantage of probability to predict an outcome from input data. However, what makes LoR different from a LiR is that LoR uses a more complex logistic function, for example a sigmoid function. In addition, the cost function can be a sigmoid function limited to a result between 0 and 1. For example, the sigmoid function can be of the form ƒ(x)=1/(1+e^(−x)), where x represents some linear representation of input features and tuning variables. Similar to LiR, the tuning variable(s) of the cost function are optimized (typically by taking the log of some variation of the cost function) such that the result of the cost function, given variable representations of the input features, is a number between 0 and 1, preferably falling on either side of 0.5. As described in LiR, gradient descent may also be used in LoR cost function optimization, as one example of the process. 
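The sigmoid function described above can be sketched in the following manner (a minimal illustration; the weights, bias, and 0.5 threshold follow the description of results falling on either side of 0.5 and are hypothetical):

```python
import math

def sigmoid(x: float) -> float:
    """Logistic function f(x) = 1 / (1 + e^(-x)); the result is bounded between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

def classify(features, weights, bias, threshold=0.5):
    """Classify a linear representation of input features and tuning variables;
    probabilities on either side of the threshold fall into one of two categories."""
    z = sum(w * f for w, f in zip(weights, features)) + bias
    return sigmoid(z) >= threshold
```

In such a sketch, a linear combination of input features is mapped through the sigmoid to a probability, which is then compared against the threshold to yield a category such as whether a failure is likely.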
In example embodiments, optic power levels for components of the network105are used as the independent variables to train a LoR machine learning module, which, after training, is used to estimate, for example, whether a failure is likely. Bayesian Network In one example embodiment, a Bayesian Network (BN) is implemented. BNs are used in machine learning to make predictions through Bayesian inference from probabilistic graphical models. In BNs, input features are mapped onto a directed acyclic graph forming the nodes of the graph. The edges connecting the nodes contain the conditional dependencies between nodes to form a predictive model. For each connected node the probability of the input features resulting in the connected node is learned and forms the predictive mechanism. The nodes may comprise the same, similar or different probability functions to determine movement from one node to another. Each node of a Bayesian network is conditionally independent of its non-descendants given its parents, thus satisfying the local Markov property. This property affords reduced computations in larger networks by simplifying the joint distribution. There are multiple methods to evaluate the inference, or predictability, in a BN but only two are mentioned for demonstrative purposes. The first method involves computing the joint probability of a particular assignment of values for each variable. The joint probability can be considered the product of each conditional probability and, in some instances, comprises the logarithm of that product. The second method is Markov chain Monte Carlo (MCMC), which can be implemented when the sample size is large. MCMC is a well-known class of sample distribution algorithms and will not be discussed in detail herein. The assumption of conditional independence of variables forms the basis for Naïve Bayes classifiers. This assumption implies there is no correlation between different input features. 
As a result, the number of computed probabilities is significantly reduced as well as the computation of the probability normalization. While independence between features is rarely true, this assumption exchanges reduced computations for less accurate predictions, however the predictions are reasonably accurate. In example embodiments, optic power levels for components of the network105are mapped to the BN graph to train the BN machine learning module, which, after training, is used to estimate whether a failure is likely. Random Forest In one example embodiment, random forest (“RF”) is implemented. RF consists of an ensemble of decision trees producing individual class predictions. The prevailing prediction from the ensemble of decision trees becomes the RF prediction. Decision trees are branching flowchart-like graphs comprising the root, nodes, edges/branches, and leaves. The root is the first decision node from which feature information is assessed and from it extends the first set of edges/branches. The edges/branches contain the information of the outcome of a node and pass the information to the next node. The leaf nodes are the terminal nodes that output the prediction. Decision trees can be used for both classification and regression and are typically trained using supervised learning methods. Training of a decision tree is sensitive to the training data set. An individual decision tree may become over- or under-fit to the training data and result in a poor predictive model. Random forest compensates by using multiple decision trees trained on different data sets. In example embodiments, optic power levels for components of the network105are used to train the nodes of the decision trees of a RF machine learning module, which, after training, is used to estimate whether a failure is likely. Gradient Boosting In an example embodiment, gradient boosting is implemented. 
Gradient boosting is a method of strengthening the evaluation capability of a decision tree node. In general, a tree is fit on a modified version of an original data set. For example, a decision tree is first trained with equal weights across its nodes. The decision tree is allowed to evaluate data to identify nodes that are less accurate. Another tree is added to the model and the weights of the corresponding underperforming nodes are then modified in the new tree to improve their accuracy. This process is performed iteratively until the accuracy of the model has reached a defined threshold or a defined limit of trees has been reached. Less accurate nodes are identified by the gradient of a loss function. Loss functions must be differentiable, such as linear or logarithmic functions. The modified node weights in the new tree are selected to minimize the gradient of the loss function. In an example embodiment, a decision tree is implemented to determine optic power levels for components of the network105and gradient boosting is applied to the tree to improve its ability to accurately determine whether a failure is likely.

Neural Networks

In one example embodiment, Neural Networks are implemented. NNs are a family of statistical learning models influenced by biological neural networks of the brain. NNs can be trained on a relatively large dataset (e.g., 50,000 or more samples) and used to estimate, approximate, or predict an output that depends on a large number of inputs/features. NNs can be envisioned as so-called "neuromorphic" systems of interconnected processor elements, or "neurons", that exchange electronic signals, or "messages". Similar to the so-called "plasticity" of synaptic neurotransmitter connections that carry messages between biological neurons, the connections in NNs that carry electronic "messages" between "neurons" are provided with numeric weights that correspond to the strength or weakness of a given connection.
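The iterative process described above, where each new tree is fit to correct what the current ensemble gets wrong, can be sketched with one-split decision stumps and a squared loss, whose negative gradient is simply the residual. The stump learner and the toy one-dimensional data are illustrative assumptions:

```python
def stump_fit(xs, residuals):
    """Fit a one-split decision stump minimizing squared error (illustrative)."""
    best = None
    for split in xs:
        lvals = [r for x, r in zip(xs, residuals) if x <= split]
        rvals = [r for x, r in zip(xs, residuals) if x > split]
        lmean = sum(lvals) / len(lvals) if lvals else 0.0
        rmean = sum(rvals) / len(rvals) if rvals else 0.0
        err = sum((r - (lmean if x <= split else rmean)) ** 2
                  for x, r in zip(xs, residuals))
        if best is None or err < best[0]:
            best = (err, split, lmean, rmean)
    _, split, lmean, rmean = best
    return lambda x: lmean if x <= split else rmean

def boost(xs, ys, rounds=10, lr=0.5):
    """Each new stump is fit to the negative gradient of the squared loss
    (the residuals), and the ensemble prediction is the weighted sum."""
    pred = [0.0] * len(xs)
    model = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        tree = stump_fit(xs, residuals)
        model.append(tree)
        pred = [p + lr * tree(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * t(x) for t in model)

predict = boost([0.0, 1.0, 2.0, 3.0], [0.0, 0.0, 1.0, 1.0])
```

Each round shrinks the remaining residual by the learning-rate factor, so the ensemble converges toward the step function in the training data.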
The weights can be tuned based on experience, making NNs adaptive to inputs and capable of learning. For example, an NN for optic power levels for components of the network105is defined by a set of input neurons that can be given input data such as optic power levels for components of the network105. The input neurons weigh and transform the input data and pass the result to other neurons, often referred to as "hidden" neurons. This is repeated until an output neuron is activated. The activated output neuron produces a result. In example embodiments, optic power levels for components of the network105are used to train the neurons in a NN machine learning module, which, after training, is used to estimate whether a failure is likely.

Convolutional Autoencoder

In example embodiments, a convolutional autoencoder (CAE) is implemented. A CAE is a type of neural network and comprises, in general, two main components: first, a convolutional operator that filters an input signal to extract features of the signal; second, an autoencoder that learns a set of signals from an input and reconstructs the signal into an output. By combining these two components, the CAE learns the optimal filters that minimize reconstruction error, resulting in an improved output. CAEs are trained to learn only filters capable of feature extraction that can be used to reconstruct the input. Generally, convolutional autoencoders implement unsupervised learning. In example embodiments, the convolutional autoencoder is a variational convolutional autoencoder. In example embodiments, features from optic power levels for components of the network105are used as an input signal into a CAE, which reconstructs that signal into an output such as whether a failure is likely.

Deep Learning

In example embodiments, deep learning is implemented. Deep learning expands the neural network by including more layers of neurons.
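The forward pass described above, in which each neuron weighs its inputs, applies a transformation, and passes the result onward until the output neuron is activated, can be sketched as follows. The two-input, two-hidden-neuron topology and all weight values are illustrative assumptions; in practice the weights are tuned during training:

```python
import math

def sigmoid(z):
    """A common neuron activation squashing any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(inputs, hidden_w, output_w):
    """Each neuron weighs its inputs, applies a nonlinearity, and passes the
    result forward until the output neuron produces a result."""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs))) for ws in hidden_w]
    return sigmoid(sum(w * h for w, h in zip(output_w, hidden)))

# Hypothetical inputs (e.g., two normalized optic power readings) and weights.
score = forward([0.5, -1.2],
                hidden_w=[[0.8, -0.4], [0.3, 0.9]],
                output_w=[1.5, -2.0])
```

The output can be read as a score in (0, 1) for whether a failure is likely.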
A deep learning module is characterized as having three "macro" layers: (1) an input layer, which takes in the input features and fetches embeddings for the input; (2) one or more intermediate (or hidden) layers, which introduce nonlinear neural net transformations to the inputs; and (3) a response layer, which transforms the final results of the intermediate layers into the prediction. In example embodiments, optic power levels for components of the network105are used to train the neurons of a deep learning module, which, after training, is used to estimate whether a failure is likely.

Convolutional Neural Network (CNN)

In an example embodiment, a convolutional neural network is implemented. CNNs are a class of NNs that further attempt to replicate biological neural networks, in this case those of the animal visual cortex. CNNs process data with a grid pattern to learn spatial hierarchies of features. Whereas NNs are highly connected, sometimes fully connected, CNNs are connected such that neurons corresponding to neighboring data (e.g., pixels) are connected. This significantly reduces the number of weights and calculations each neuron must perform. In general, input data, such as optic power levels for components of the network105, comprises a multidimensional vector. A CNN typically comprises three layers: convolution, pooling, and fully connected. The convolution and pooling layers extract features and the fully connected layer combines the extracted features into an output, such as whether a failure is likely. In particular, the convolutional layer comprises multiple mathematical operations, chiefly linear operations, a specialized type being the convolution. The convolutional layer calculates the scalar product between the weights and the region connected to the input volume of the neurons. These computations are performed on kernels, which are reduced dimensions of the input vector. The kernels span the entirety of the input.
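The scalar product between kernel weights and each region of the input, described above, can be sketched as a plain valid-mode 2-D convolution (strictly a cross-correlation, as commonly implemented in CNN libraries). The 3x3 input grid and 2x2 kernel values are illustrative:

```python
def conv2d(image, kernel):
    """Slide the kernel over the input and take the scalar product between the
    kernel weights and each region it covers (valid mode, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(kernel[a][b] * image[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# Illustrative 3x3 input grid and a 2x2 kernel spanning it region by region.
feature_map = conv2d([[1, 2, 3],
                      [4, 5, 6],
                      [7, 8, 9]],
                     [[1, 0],
                      [0, 1]])
```

Each output element is one neuron's activation; the kernel weights are shared across all regions, which is what reduces the number of weights relative to a fully connected layer.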
An activation layer, such as the rectified linear unit (ReLU) or sigmoid function, applies an elementwise activation function on the kernels. CNNs can be optimized with hyperparameters. In general, three hyperparameters are used: depth, stride, and zero-padding. Depth controls the number of neurons within a layer. Reducing the depth may increase the speed of the CNN but may also reduce the accuracy of the CNN. Stride determines the overlap of the neurons. Zero-padding controls the border padding in the input. The pooling layer down-samples along the spatial dimensionality of the given input (i.e., the convolutional layer output), reducing the number of parameters within that activation. As an example, kernels are reduced to dimensionalities of 2×2 with a stride of 2, which scales the activation map down to 25%. The fully connected layer uses inter-layer-connected neurons (i.e., neurons are only connected to neurons in other layers) to score the activations for classification and/or regression. Extracted features may become hierarchically more complex as one layer feeds its output into the next layer. See O'Shea, K.; Nash, R. An Introduction to Convolutional Neural Networks. arXiv 2015, and Yamashita, R., et al. Convolutional neural networks: an overview and application in radiology. Insights Imaging 9, 611-629 (2018).

Recurrent Neural Network (RNN)

In an example embodiment, a recurrent neural network is implemented. RNNs are a class of NNs that further attempt to replicate the biological neural networks of the brain. RNNs apply delay differential equations to sequential data or time series data to replicate the processes and interactions of the human brain. RNNs have "memory", wherein the RNN can take information from prior inputs to influence the current output. RNNs can process variable-length sequences of inputs by using their "memory" or internal state information.
Where NNs may assume inputs are independent from the outputs, the outputs of RNNs may be dependent on prior elements within the input sequence. For example, input such as optic power levels for components of the network105is received by a RNN, which determines whether one or more rules would be useful to identify a likely failure. See Sherstinsky, Alex. "Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network." Physica D: Nonlinear Phenomena 404 (2020): 132306.

Long Short-Term Memory (LSTM)

In an example embodiment, a Long Short-Term Memory is implemented. LSTMs are a class of RNNs designed to overcome vanishing and exploding gradients. In RNNs, long-term dependencies become more difficult to capture because the parameters or weights either do not change with training or fluctuate rapidly. This occurs when the RNN gradient exponentially decreases to zero, resulting in no change to the weights or parameters, or exponentially increases to infinity, resulting in large changes in the weights or parameters. This exponential effect is dependent on the number of layers and the multiplicative gradient. LSTMs overcome the vanishing/exploding gradients by implementing "cells" within the hidden layers of the NN. The "cells" comprise three gates: an input gate, an output gate, and a forget gate. The input gate reduces error by controlling relevant inputs to update the current cell state. The output gate reduces error by controlling relevant memory content in the present hidden state. The forget gate reduces error by controlling whether prior cell states are put in "memory" or forgotten. The gates use activation functions to determine whether the data can pass through the gates. While one skilled in the art would recognize the use of any relevant activation function, example activation functions are sigmoid, tanh, and ReLU. See Zhu, Xiaodan, et al. "Long short-term memory over recursive structures." International Conference on Machine Learning. PMLR, 2015.
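The three-gate cell described above can be sketched as a single scalar LSTM step. Real implementations use weight matrices and bias terms over vectors; the scalar weights here are illustrative assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM cell step: the input, forget, and output gates control what
    enters the cell state, what is kept in "memory", and what is emitted."""
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev)    # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev)    # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev)    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev)  # candidate cell update
    c = f * c_prev + i * g                         # new cell ("memory") state
    h = o * math.tanh(c)                           # new hidden state
    return h, c

# Hypothetical trained weights; one step on a fresh (zeroed) cell.
weights = {"wi": 0.5, "ui": 0.1, "wf": 0.7, "uf": 0.2,
           "wo": 0.6, "uo": 0.3, "wg": 0.9, "ug": 0.4}
h, c = lstm_step(x=1.0, h_prev=0.0, c_prev=0.0, w=weights)
```

Because the cell state `c` is updated additively through the forget gate rather than repeatedly multiplied, gradients along it neither vanish nor explode as quickly as in a plain RNN.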
Matrix Factorization

In example embodiments, Matrix Factorization is implemented. Matrix factorization machine learning exploits inherent relationships between two entities drawn out when multiplied together. Generally, the input features are mapped to a matrix F, which is multiplied with a matrix R containing the relationship between the features and a predicted outcome. The resulting dot product provides the prediction. The matrix R is constructed by assigning random values throughout the matrix. In this example, two training matrices are assembled. The first matrix X contains training input features and the second matrix Z contains the known output of the training input features. First, the dot product of R and X is computed and the mean square error, as one example method, of the result is estimated. The values in R are modulated and the process is repeated in a gradient descent style approach until the error is appropriately minimized. The trained matrix R is then used in the machine learning model. In example embodiments, optic power levels for components of the network105are used to train the relationship matrix R in a matrix factorization machine learning module. After training, multiplying the relationship matrix R with the input matrix F, which comprises vector representations of optic power levels for components of the network105, results in the prediction matrix P comprising whether a failure is likely.

Hidden Markov Model

In example embodiments, a hidden Markov model is implemented. An HMM takes advantage of the statistical Markov model to predict an outcome. A Markov model assumes a Markov process, wherein the probability of an outcome is solely dependent on the previous event. In the case of an HMM, it is assumed an unknown or "hidden" state is dependent on some observable event. An HMM comprises a network of connected nodes. Traversing the network is dependent on three model parameters: the start probability, the state transition probabilities, and the observation probability.
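The training loop described above, where the randomly initialized values in R are modulated by gradient descent until the mean square error of X·R against the known outputs Z is minimized, can be sketched with a single-column R for brevity. The toy training matrices, learning rate, and epoch count are illustrative assumptions:

```python
import random

def train_relationship(X, Z, lr=0.1, epochs=2000, seed=0):
    """Learn R so that X . R approximates the known outputs Z by gradient
    descent on the mean square error (single-column R for brevity)."""
    rng = random.Random(seed)
    d = len(X[0])
    R = [rng.random() for _ in range(d)]  # random initialization, as described
    for _ in range(epochs):
        for k in range(d):
            # Gradient of mean square error with respect to R[k].
            grad = sum(2 * (sum(xi * ri for xi, ri in zip(x, R)) - z) * x[k]
                       for x, z in zip(X, Z)) / len(X)
            R[k] -= lr * grad
    return R

# Hypothetical training data: two features whose known outputs follow R = [1, 2].
R = train_relationship(X=[[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
                       Z=[1.0, 2.0, 3.0])
```

After training, the dot product of a new feature vector with R yields the prediction.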
The start probability is a variable that governs, from the input node, the most plausible consecutive state. From there, each node i has a state transition probability to node j. Typically, the state transition probabilities are stored in a matrix Mij, wherein the sum of each row, representing the probabilities of state i transitioning to each state j, equals 1. The observation probability is a variable containing the probability of output o occurring. These too are typically stored in a matrix Noj, wherein the probability of output o is dependent on state j. To build the model parameters and train the HMM, the state and output probabilities are computed. This can be accomplished with, for example, an inductive algorithm. Next, the state sequences are ranked on probability, which can be accomplished, for example, with the Viterbi algorithm. Finally, the model parameters are modulated to maximize the probability of a certain sequence of observations. This is typically accomplished with an iterative process wherein the neighborhood of states is explored, the probabilities of the state sequences are measured, and the model parameters are updated to increase the probabilities of the state sequences. In example embodiments, optic power levels for components of the network105are used to train the nodes/states of the HMM machine learning module, which, after training, is used to estimate whether one or more rules would be useful to identify a likely failure.

Support Vector Machine

In example embodiments, support vector machines are implemented. SVMs separate data into classes defined by n-dimensional hyperplanes (n-hyperplanes) and are used in both regression and classification problems. Hyperplanes are decision boundaries developed during the training process of a SVM. The dimensionality of a hyperplane depends on the number of input features.
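The ranking of state sequences by probability, mentioned above as a use of the Viterbi algorithm, can be sketched as follows. The two hidden states, the observation alphabet, and all probability values are hypothetical, illustrative assumptions:

```python
def viterbi(observations, states, start_p, trans_p, emit_p):
    """Return the most probable hidden-state sequence for the observations."""
    # best[s] = (probability, path) of the best sequence ending in state s.
    best = {s: (start_p[s] * emit_p[s][observations[0]], [s]) for s in states}
    for obs in observations[1:]:
        best = {
            s: max(
                ((p * trans_p[prev][s] * emit_p[s][obs], path + [s])
                 for prev, (p, path) in best.items()),
                key=lambda t: t[0],
            )
            for s in states
        }
    _, path = max(best.values(), key=lambda t: t[0])
    return path

# Hypothetical model: a component is "healthy" or "degraded"; the observable
# event is whether its optic power reading is "normal" or "low".
states = ["healthy", "degraded"]
start_p = {"healthy": 0.8, "degraded": 0.2}
trans_p = {"healthy": {"healthy": 0.9, "degraded": 0.1},   # rows sum to 1
           "degraded": {"healthy": 0.2, "degraded": 0.8}}
emit_p = {"healthy": {"normal": 0.9, "low": 0.1},
          "degraded": {"normal": 0.3, "low": 0.7}}
path = viterbi(["normal", "low", "low"], states, start_p, trans_p, emit_p)
```

Two consecutive low readings pull the most probable sequence into the degraded state even though it started healthy.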
For example, a SVM with two input features will have a linear (1-dimensional) hyperplane while a SVM with three input features will have a planar (2-dimensional) hyperplane. A hyperplane is optimized to have the largest margin, or spatial distance, from the nearest data point of each data type. In the case of simple linear regression and classification, a linear equation is used to develop the hyperplane. However, when the features are more complex, a kernel is used to describe the hyperplane. A kernel is a function that transforms the input features into higher-dimensional space. Kernel functions can be linear, polynomial, a radial distribution function (or Gaussian radial distribution function), or sigmoidal. In example embodiments, optic power levels for components of the network105are used to train the linear equation or kernel function of the SVM machine learning module, which, after training, is used to estimate whether one or more rules would be useful to identify a likely failure.

K-Means Clustering

In one example embodiment, K-means clustering is implemented. KMC assumes data points have implicit shared characteristics and "clusters" data within a centroid, or "mean", of the clustered data points. During training, KMC adds a number k of centroids and optimizes their positions around clusters. This process is iterative, where each centroid, initially positioned at random, is re-positioned towards the average point of a cluster. The process concludes when the centroids have reached an optimal position within a cluster. Training of a KMC module is typically unsupervised. In example embodiments, optic power levels for components of the network105are used to train the centroids of a KMC machine learning module, which, after training, is used to estimate whether a failure is likely.

K-Nearest Neighbor

In one example embodiment, K-nearest neighbor is implemented. On a general level, KNN shares similar characteristics to KMC.
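The iterative centroid re-positioning described above for KMC can be sketched in one dimension, with each centroid repeatedly moved to the average of the points assigned to it. The sample values, standing in for optic power readings, are illustrative assumptions:

```python
import random

def kmeans(points, k, iterations=100, seed=0):
    """Iteratively re-position each centroid at the average of its cluster
    (1-D points for brevity; unsupervised, as described above)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initial positions chosen at random
    for _ in range(iterations):
        # Assign every point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Move each centroid to the average point of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

centroids = kmeans([1.0, 1.1, 0.9, 10.0, 10.2, 9.8], k=2)
```

The two centroids settle on the means of the two implicit groups, which could then be labeled (e.g., as "healthy" and "failure-prone" power levels).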
For example, KNN assumes data points near each other share similar characteristics and computes the distance between data points to identify those similar characteristics, but instead of k centroids, KNN uses k number of neighbors. The k in KNN represents how many neighbors will assign a data point to a class, for classification, or an object property value, for regression. Selection of an appropriate number k is integral to the accuracy of KNN. For example, a large k may reduce random error associated with variance in the data but increase error by ignoring small but significant differences in the data. Therefore, a careful choice of k is selected to balance overfitting and underfitting. To conclude whether some data point belongs to some class or property value, the distance between neighbors is computed. Common methods to compute this distance are Euclidean, Manhattan or Hamming distance, to name a few. In some embodiments, neighbors are given weights depending on the neighbor distance to scale the similarity between neighbors, reducing the error of edge neighbors of one class "out-voting" near neighbors of another class. In one example embodiment, k is 1 and a Markov model approach is utilized. In example embodiments, optic power levels for components of the network105are used to train a KNN machine learning module, which, after training, is used to estimate whether a failure is likely. To perform one or more of its functionalities, the machine learning module may communicate with one or more other systems. For example, an integration system may integrate the machine learning module with one or more email servers, web servers, one or more databases, or other servers, systems, or repositories. In addition, one or more functionalities may require communication between a user and the machine learning module. Any one or more of the modules described herein may be implemented using hardware (e.g., one or more processors of a computer/machine) or a combination of hardware and software.
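The k-neighbor majority vote described above for KNN classification can be sketched in one dimension. The labeled training points, standing in for optic power readings tagged "fail" or "ok", are illustrative assumptions:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify by majority vote of the k nearest training points
    (absolute difference as the 1-D Euclidean distance)."""
    neighbors = sorted(train, key=lambda item: abs(item[0] - query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical labeled readings in dBm.
train = [(-31.0, "fail"), (-30.5, "fail"),
         (-25.0, "ok"), (-24.5, "ok"), (-24.0, "ok")]
```

With k=3, a new reading is labeled by whichever class supplies more of its three nearest neighbors; distance-weighted votes would further discount the farther neighbors.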
For example, any module described herein may configure a hardware processor (e.g., among one or more hardware processors of a machine) to perform the operations described herein for that module. In some example embodiments, any one or more of the modules described herein may comprise one or more hardware processors and may be configured to perform the operations described herein. In certain example embodiments, one or more hardware processors are configured to include any one or more of the modules described herein. Moreover, any two or more of these modules may be combined into a single module, and the functions described herein for a single module may be subdivided among multiple modules. Furthermore, according to various example embodiments, modules described herein as being implemented within a single machine, database, or device may be distributed across multiple machines, databases, or devices. The multiple machines, databases, or devices are communicatively coupled to enable communications between the multiple machines, databases, or devices. The modules themselves are communicatively coupled (e.g., via appropriate interfaces) to each other and to various data sources, to allow information to be passed between the applications so as to allow the applications to share and access common data.

Multimodal Translation

In an example embodiment, the machine learning module comprises multimodal translation (MT), also known as multimodal machine translation or multimodal neural machine translation. MT comprises a machine learning module capable of receiving multiple (e.g., two or more) modalities. Typically, the multiple modalities comprise information connected to each other. In example embodiments, the MT may comprise a machine learning method further described herein. In an example embodiment, the MT comprises a neural network, deep neural network, convolutional neural network, convolutional autoencoder, recurrent neural network, or an LSTM.
For example, input data comprising multiple modalities is embedded as further described herein. The embedded data is then received by the machine learning module. The machine learning module processes the embedded data (e.g., encoding and decoding) through the multiple layers of the architecture and then determines the modalities comprising the input. The machine learning methods further described herein may be engineered for MT, wherein the inputs described herein comprise multiple modalities. See, e.g., Sulubacak, U., Caglayan, O., Grönroos, S A. et al. Multimodal machine translation through visuals and speech. Machine Translation 34, 97-147 (2020), and Huang, Xun, et al. "Multimodal unsupervised image-to-image translation." Proceedings of the European Conference on Computer Vision (ECCV). 2018. The ladder diagrams, scenarios, flowcharts and block diagrams in the figures and discussed herein illustrate the architecture, functionality, and operation of example embodiments and various aspects of systems, methods, and computer program products of the present invention. Each block in the flowchart or block diagrams can represent the processing of information and/or transmission of information corresponding to circuitry that can be configured to execute the logical functions of the present techniques. Each block in the flowchart or block diagrams can represent a module, segment, or portion of one or more executable instructions for implementing the specified operation or step. In example embodiments, the functions/acts in a block can occur out of the order shown in the figures, and nothing requires that the operations be performed in the order illustrated. For example, two blocks shown in succession can be executed concurrently or essentially concurrently. In another example, blocks can be executed in the reverse order.
Furthermore, variations, modifications, substitutions, additions, or reduction in blocks and/or functions may be used with any of the ladder diagrams, scenarios, flow charts and block diagrams discussed herein, all of which are explicitly contemplated herein. The ladder diagrams, scenarios, flow charts and block diagrams may be combined with one another, in part or in whole. Coordination will depend upon the required functionality. Each block of the block diagrams and/or flowchart illustration, as well as combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the aforementioned functions/acts or carry out combinations of special purpose hardware and computer instructions. Moreover, a block may represent one or more information transmissions and may correspond to information transmissions among software and/or hardware modules in the same physical device and/or hardware modules in different physical devices. The present techniques can be implemented as a system, a method, a computer program product, digital electronic circuitry, and/or in computer hardware, firmware, software, or in combinations of them. The system may comprise distinct software modules embodied on a computer readable storage medium; the modules can include, for example, any or all of the appropriate elements depicted in the block diagrams and/or described herein, by way of example and not limitation, any one, some or all of the modules/blocks and/or sub-modules/sub-blocks described. The method steps can then be carried out using the distinct software modules and/or sub-modules of the system, as described above, executing on one or more hardware processors.

Example Computing Device

FIG.9depicts a block diagram of a computing machine2000and a module2050in accordance with certain examples.
The computing machine2000may comprise, but is not limited to, remote devices, work stations, servers, computers, general purpose computers, Internet/web appliances, hand-held devices, wireless devices, portable devices, wearable computers, cellular or mobile phones, personal digital assistants (PDAs), smart phones, smart watches, tablets, ultrabooks, netbooks, laptops, desktops, multi-processor systems, microprocessor-based or programmable consumer electronics, game consoles, set-top boxes, network PCs, mini-computers, and any machine capable of executing the instructions. The module2050may comprise one or more hardware or software elements configured to facilitate the computing machine2000in performing the various methods and processing functions presented herein. The computing machine2000may include various internal or attached components such as a processor2010, system bus2020, system memory2030, storage media2040, input/output interface2060, and a network interface2070for communicating with a network2080. The computing machine2000may be implemented as a conventional computer system, an embedded controller, a laptop, a server, a mobile device, a smartphone, a set-top box, a kiosk, a router or other network node, a vehicular information system, one or more processors associated with a television, a customized machine, any other hardware platform, or any combination or multiplicity thereof. The computing machine2000may be a distributed system configured to function using multiple computing machines interconnected via a data network or bus system. The processor2010may be configured to execute code or instructions to perform the operations and functionality described herein, manage request flow and address mappings, and to perform calculations and generate commands. Such code or instructions could include, but is not limited to, firmware, resident software, microcode, and the like.
The processor2010may be configured to monitor and control the operation of the components in the computing machine2000. The processor2010may be a general purpose processor, a processor core, a multiprocessor, a reconfigurable processor, a microcontroller, a digital signal processor (“DSP”), an application specific integrated circuit (“ASIC”), tensor processing units (TPUs), a graphics processing unit (“GPU”), a field programmable gate array (“FPGA”), a programmable logic device (“PLD”), a radio-frequency integrated circuit (RFIC), a controller, a state machine, gated logic, discrete hardware components, any other processing unit, or any combination or multiplicity thereof. In example embodiments, each processor2010can include a reduced instruction set computer (RISC) microprocessor. The processor2010may be a single processing unit, multiple processing units, a single processing core, multiple processing cores, special purpose processing cores, co-processors, or any combination thereof. According to certain examples, the processor2010along with other components of the computing machine2000may be a virtualized computing machine executing within one or more other computing machines. Processors2010are coupled to system memory and various other components via a system bus2020. The system memory2030may include non-volatile memories such as read-only memory (“ROM”), programmable read-only memory (“PROM”), erasable programmable read-only memory (“EPROM”), flash memory, or any other device capable of storing program instructions or data with or without applied power. The system memory2030may also include volatile memories such as random-access memory (“RAM”), static random-access memory (“SRAM”), dynamic random-access memory (“DRAM”), and synchronous dynamic random-access memory (“SDRAM”). Other types of RAM also may be used to implement the system memory2030. The system memory2030may be implemented using a single memory module or multiple memory modules. 
While the system memory2030is depicted as being part of the computing machine2000, one skilled in the art will recognize that the system memory2030may be separate from the computing machine2000without departing from the scope of the subject technology. It should also be appreciated that the system memory2030is coupled to the system bus2020and can include a basic input/output system (BIOS), which controls certain basic functions of the processor2010and/or operates in conjunction with a non-volatile storage device such as the storage media2040. In example embodiments, the computing device2000includes a graphics processing unit (GPU)2090. The graphics processing unit2090is a specialized electronic circuit designed to manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display. In general, a graphics processing unit2090is efficient at manipulating computer graphics and image processing and has a highly parallel structure that makes it more effective than general-purpose CPUs for algorithms where processing of large blocks of data is done in parallel. The storage media2040may include a hard disk, a floppy disk, a compact disc read only memory ("CD-ROM"), a digital versatile disc ("DVD"), a Blu-ray disc, a magnetic tape, a flash memory, other non-volatile memory device, a solid state drive ("SSD"), any magnetic storage device, any optical storage device, any electrical storage device, any electromagnetic storage device, any semiconductor storage device, any physical-based storage device, any removable and non-removable media, any other data storage device, or any combination or multiplicity thereof.
A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any other data storage device, or any combination or multiplicity thereof. The storage media2040may store one or more operating systems, application programs and program modules such as module2050, data, or any other information. The storage media2040may be part of, or connected to, the computing machine2000. The storage media2040may also be part of one or more other computing machines that are in communication with the computing machine2000such as servers, database servers, cloud storage, network attached storage, and so forth. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. The module2050may comprise one or more hardware or software elements, as well as an operating system, configured to facilitate the computing machine2000with performing the various methods and processing functions presented herein. The module2050may include one or more sequences of instructions stored as software or firmware in association with the system memory2030, the storage media2040, or both. 
The storage media2040may therefore represent examples of machine or computer readable media on which instructions or code may be stored for execution by the processor2010. Machine or computer readable media may generally refer to any medium or media used to provide instructions to the processor2010. Such machine or computer readable media associated with the module2050may comprise a computer software product. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. It should be appreciated that a computer software product comprising the module2050may also be associated with one or more processes or methods for delivering the module2050to the computing machine2000via the network2080, any signal-bearing medium, or any other communication or delivery technology. The module2050may also comprise hardware circuits or information for configuring hardware circuits such as microcode or configuration information for an FPGA or other PLD. The input/output (“I/O”) interface2060may be configured to couple to one or more external devices, to receive data from the one or more external devices, and to send data to the one or more external devices. Such external devices along with the various internal devices may also be known as peripheral devices. The I/O interface2060may include both electrical and physical connections for coupling in operation the various peripheral devices to the computing machine2000or the processor2010. The I/O interface2060may be configured to communicate data, addresses, and control signals between the peripheral devices, the computing machine2000, or the processor2010. 
The I/O interface2060may be configured to implement any standard interface, such as small computer system interface (“SCSI”), serial-attached SCSI (“SAS”), fiber channel, peripheral component interconnect (“PCI”), PCI express (PCIe), serial bus, parallel bus, advanced technology attached (“ATA”), serial ATA (“SATA”), universal serial bus (“USB”), Thunderbolt, FireWire, various video buses, and the like. The I/O interface2060may be configured to implement only one interface or bus technology. Alternatively, the I/O interface2060may be configured to implement multiple interfaces or bus technologies. The I/O interface2060may be configured as part of, all of, or to operate in conjunction with, the system bus2020. The I/O interface2060may include one or more buffers for buffering transmissions between one or more external devices, internal devices, the computing machine2000, or the processor2010. The I/O interface2060may couple the computing machine2000to various input devices including cursor control devices, touch-screens, scanners, electronic digitizers, sensors, receivers, touchpads, trackballs, cameras, microphones, alphanumeric input devices, any other pointing devices, or any combinations thereof. The I/O interface2060may couple the computing machine2000to various output devices including video displays (for example, a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, a cathode ray tube (CRT), or any other display capable of displaying graphics or video), audio generation devices, printers, projectors, tactile feedback devices, automation control, robotic components, actuators, motors, fans, solenoids, valves, pumps, transmitters, signal emitters, lights, and so forth. The I/O interface2060may couple the computing device2000to various devices capable of input and output, such as a storage unit.
The devices can be interconnected to the system bus2020via a user interface adapter, which can include, for example, a Super I/O chip integrating multiple device adapters into a single integrated circuit. The computing machine2000may operate in a networked environment using logical connections through the network interface2070to one or more other systems or computing machines across the network2080. The network2080may include a local area network (“LAN”), a wide area network (“WAN”), an intranet, the Internet, a mobile telephone network, a storage area network (“SAN”), a personal area network (“PAN”), a metropolitan area network (“MAN”), a wireless network (“WiFi”), wireless access networks, a wireless local area network (“WLAN”), a virtual private network (“VPN”), a cellular or other mobile communication network, Bluetooth, near field communication (“NFC”), ultra-wideband, wired networks, telephone networks, optical networks, copper transmission cables, or combinations thereof or any other appropriate architecture or system that facilitates the communication of signals and data. The network2080may be packet switched, circuit switched, of any topology, and may use any communication protocol. The network2080may comprise routers, firewalls, switches, gateway computers and/or edge servers. Communication links within the network2080may involve various digital or analog communication media such as fiber optic cables, free-space optics, waveguides, electrical conductors, wireless links, antennas, radio-frequency communications, and so forth. Information for facilitating reliable communications can be provided, for example, as packet/message sequencing information, encapsulation headers and/or footers, size/time information, and transmission verification information such as cyclic redundancy check (CRC) and/or parity check values.
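The transmission verification information mentioned above (CRC and parity check values) can be illustrated with a short sketch. The packet layout and function names below are assumptions for illustration only, not part of the disclosed system.

```python
import zlib

def protect_payload(payload: bytes) -> dict:
    """Attach transmission-verification fields (a CRC-32 and an even-parity
    bit) to a payload, as one example of the check values mentioned above."""
    crc = zlib.crc32(payload) & 0xFFFFFFFF
    parity = sum(bin(b).count("1") for b in payload) % 2
    return {"payload": payload, "crc32": crc, "parity": parity}

def verify_payload(packet: dict) -> bool:
    """Recompute the checks on receipt; a mismatch signals corruption."""
    crc_ok = (zlib.crc32(packet["payload"]) & 0xFFFFFFFF) == packet["crc32"]
    parity_ok = sum(bin(b).count("1") for b in packet["payload"]) % 2 == packet["parity"]
    return crc_ok and parity_ok

packet = protect_payload(b"hello")
assert verify_payload(packet)
# A payload corrupted in transit no longer matches the stored checks.
corrupted = dict(packet, payload=b"hellp")
assert not verify_payload(corrupted)
```

In practice the CRC polynomial, field widths, and framing would be dictated by the link-layer protocol in use; this sketch only shows the verify-on-receipt pattern.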
Communications can be made encoded/encrypted, or otherwise made secure, and/or decrypted/decoded using one or more cryptographic protocols and/or algorithms, such as, but not limited to, Data Encryption Standard (DES), Advanced Encryption Standard (AES), a Rivest-Shamir-Adleman (RSA) algorithm, a Diffie-Hellman algorithm, a secure sockets protocol such as Secure Sockets Layer (SSL) or Transport Layer Security (TLS), and/or Digital Signature Algorithm (DSA). Other cryptographic protocols and/or algorithms can be used as well or in addition to those listed herein to secure and then decrypt/decode communications. The processor2010may be connected to the other elements of the computing machine2000or the various peripherals discussed herein through the system bus2020. The system bus2020represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. For example, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus. It should be appreciated that the system bus2020may be within the processor2010, outside the processor2010, or both. According to certain examples, any of the processor2010, the other elements of the computing machine2000, or the various peripherals discussed herein may be integrated into a single device such as a system on chip (“SOC”), system on package (“SOP”), or ASIC device. Examples may comprise a computer program that embodies the functions described and illustrated herein, wherein the computer program is implemented in a computer system that comprises instructions stored in a machine-readable medium and a processor that executes the instructions.
However, it should be apparent that there could be many different ways of implementing examples in computer programming, and the examples should not be construed as limited to any one set of computer program instructions. Further, a skilled programmer would be able to write such a computer program to implement an example of the disclosed examples based on the appended flow charts and associated description in the application text. Therefore, disclosure of a particular set of program code instructions is not considered necessary for an adequate understanding of how to make and use examples. Further, those ordinarily skilled in the art will appreciate that one or more aspects of examples described herein may be performed by hardware, software, or a combination thereof, as may be embodied in one or more computing systems. Moreover, any reference to an act being performed by a computer should not be construed as being performed by a single computer as more than one computer may perform the act. The examples described herein can be used with computer hardware and software that perform the methods and processing functions described herein. The systems, methods, and procedures described herein can be embodied in a programmable computer, computer-executable software, or digital circuitry. The software can be stored on computer-readable media. For example, computer-readable media can include a floppy disk, RAM, ROM, hard disk, removable media, flash memory, memory stick, optical media, magneto-optical media, CD-ROM, etc. Digital circuitry can include integrated circuits, gate arrays, building block logic, field programmable gate arrays (FPGA), etc. A “server” may comprise a physical data processing system (for example, the computing device2000as shown inFIG.3) running a server program. A physical server may or may not include a display and keyboard. A physical server may be connected, for example by a network, to other computing devices. 
Servers connected via a network may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a distributed (e.g., peer-to-peer) network environment. The computing device2000can include clients and servers. For example, a client and server can be remote from each other and interact through a network. The relationship of client and server arises by virtue of computer programs in communication with each other, running on the respective computers. The example systems, methods, and acts described in the examples and described in the figures presented previously are illustrative, not intended to be exhaustive, and not meant to be limiting. In alternative examples, certain acts can be performed in a different order, in parallel with one another, omitted entirely, and/or combined between different examples, and/or certain additional acts can be performed, without departing from the scope and spirit of various examples. Plural instances may implement components, operations, or structures described as a single instance. Structures and functionality that may appear as separate in example embodiments may be implemented as a combined structure or component. Similarly, structures and functionality that may appear as a single component may be implemented as separate components. Accordingly, such alternative examples are included in the scope of the following claims, which are to be accorded the broadest interpretation to encompass such alternate examples. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
11943097
DETAILED DESCRIPTION To understand features and technical contents of embodiments of the present disclosure in more detail, implementations of the embodiments of the present disclosure will be described in detail below with reference to accompanying drawings. The accompanying drawings are used for reference only and are not intended to limit embodiments of the present disclosure. Before describing embodiments of the present disclosure in detail, a V2X system will be briefly explained. A V2X system is a Sidelink (SL) transmission technology based on Device-to-Device (D2D) communication. Different from a way in which communication data is received or sent through a base station in a conventional LTE system, communication data in the V2X system is directly communicated from a terminal to a terminal, thus having a higher spectrum efficiency and a lower transmission latency. In Release 14 (Rel-14) of 3GPP, V2X is standardized, and two transmission modes are defined: mode 3 and mode 4. As for mode 3, a procedure of data transmission is as shown inFIG.1. Sidelink transmission resource of a vehicle-mounted terminal is allocated by a base station. The vehicle-mounted terminal sends data on a sidelink according to the resource allocated by the base station. The base station may allocate, to the terminal, a resource for a single transmission or a resource for a semi-static transmission. As for mode 4, a procedure of data transmission is as shown inFIG.2. A vehicle-mounted terminal adopts a transmission mode of sensing and reservation. The vehicle-mounted terminal acquires a set of available transmission resources in a resource pool by sensing, and the terminal randomly selects a resource from the set to perform a sidelink data transmission. 
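The mode-4 behavior described above (sensing a set of available transmission resources in a resource pool and randomly selecting one) can be sketched as follows. The resource-index representation and function name are assumptions for illustration, not the standardized procedure.

```python
import random

def select_sidelink_resource(resource_pool, sensed_reserved):
    """Mode-4 style selection (illustrative): keep only the resources the
    terminal sensed as unreserved, then pick one of them at random."""
    available = [r for r in resource_pool if r not in sensed_reserved]
    if not available:
        # No free resource; a real UE would apply further reselection rules.
        return None
    return random.choice(available)

pool = list(range(10))   # hypothetical resource indices in the pool
reserved = {1, 3, 5}     # resources other UEs announced as reserved
chosen = select_sidelink_resource(pool, reserved)
assert chosen is not None and chosen not in reserved
```

The actual 3GPP procedure additionally ranks candidates by measured energy and applies thresholds; this sketch only captures the sense-then-randomly-select pattern.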
Since services in the V2X system have a periodic character, the terminal usually adopts a semi-static transmission mode, that is, after selecting a transmission resource, the terminal will persistently use the resource in a plurality of transmission periods, thus reducing probabilities of a resource re-selection and a resource conflict. The terminal will carry information of a resource reserved for next transmission in control information of a current transmission, so that other terminals may determine whether the resource is reserved and used by a user by detecting control information of the user, thus achieving a purpose of reducing resource conflicts. UE-UE control is introduced into NR-V2X. However, there is no effective solution regarding how to achieve UE-UE control, such as how to trigger capability interaction and mutual control of UE-UE, in what granularity to perform interaction control, how to decide a controlling end and a controlled end, and how to deal with abnormal situations. Therefore, a parameter configuration method is provided by the present application. The parameter configuration methods of embodiments of the present application may be applied to various communication systems, such as a Global System of Mobile communication (GSM) system, a Code Division Multiple Access (CDMA) system, a Wideband Code Division Multiple Access (WCDMA) system, a General Packet Radio Service (GPRS) system, a Long Term Evolution (LTE) system, a LTE Frequency Division Duplex (FDD) system, a LTE Time Division Duplex (TDD) system, a Universal Mobile Telecommunication System (UMTS), a Worldwide Interoperability for Microwave Access (WiMAX) communication system, or a 5G system, etc. Illustratively, a communication system100applied in an embodiment of the present application is as shown inFIG.3.
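The reservation mechanism described above, where control information of a current transmission announces the resource reserved for the next transmission so that sensing terminals can avoid it, can be sketched as follows. The field names are assumptions for illustration, not the actual sidelink control information (SCI) format.

```python
def build_sci(resource_index: int, period_ms: int, reselect_counter: int) -> dict:
    """Illustrative sidelink control information carrying a semi-static
    reservation: while the reselection counter is positive, the UE keeps
    the same resource in the next transmission period, and other UEs that
    decode this SCI can exclude that resource from their candidate sets."""
    return {
        "resource": resource_index,
        "reservation_period_ms": period_ms,
        "keeps_resource_next_period": reselect_counter > 1,
    }

sci = build_sci(resource_index=4, period_ms=100, reselect_counter=5)
assert sci["keeps_resource_next_period"] is True
```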
The communication system100may include a network device110, and the network device110may be a device that communicates with a terminal device120(or referred to as a communication terminal, or a terminal). The network device110may provide communication coverage for a specific geographical area, and may communicate with terminal devices located within the coverage area. Optionally, the network device110may be a Base Transceiver Station (BTS) in a GSM system or CDMA system, a NodeB (NB) in a WCDMA system, an Evolutional Node B (eNB or eNodeB) in a LTE system, or a radio controller in a Cloud Radio Access Network (CRAN), or the network device may be a network side device in a mobile switch center, a relay station, an access point, a vehicle-mounted device, a wearable device, a hub, a switch, a bridge, a router, or a 5G network, or a network device in a future evolved Public Land Mobile Network (PLMN), etc. The communication system100also includes at least one terminal device120located within the coverage area of the network device110. As used herein, the term “terminal device” includes, but is not limited to, a device configured to receive/send a communication signal via a wired circuit, for example, via a Public Switched Telephone Network (PSTN), a Digital Subscriber Line (DSL), a digital cable, a direct cable; and/or another data connection/network; and/or via a wireless interface, for instance, for a cellular network, a Wireless Local Area Network (WLAN), a digital television network such as a Digital Video Broadcasting-Handheld (DVB-H) network, a satellite network, and an AM-FM broadcast transmitter; and/or another communication terminal; and/or an Internet of Things (IoT) device. A terminal device configured to communicate via a wireless interface may be referred to as a “wireless communication terminal”, a “wireless terminal” or a “mobile terminal”.
Examples of the mobile terminal include, but are not limited to, a satellite or cellular telephone, a Personal Communication System (PCS) terminal that can be combined with a cellular wireless telephone and data processing, faxing, and data communication abilities, a Personal Digital Assistant (PDA) that may include a radio telephone, a pager, an internet/intranet access, a Web browser, a memo pad, a calendar, and/or a Global Positioning System (GPS) receiver, and a conventional laptop and/or palmtop receiver or another electronic apparatus including a radio telephone transceiver. The terminal device may be referred to as an access terminal, a User Equipment (UE), a subscriber unit, a subscriber station, a mobile station, a mobile platform, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent, or a user apparatus. The access terminal may be a cellular phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, a Personal Digital Assistant (PDA), a handheld device with a wireless communication function, a computing device, or another processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, a terminal device in a 5G network, or a terminal device in a future evolved Public Land Mobile Network (PLMN), or the like. Optionally, Device to Device (D2D) communication may be performed between the terminal devices120. Optionally, the 5G system or 5G network may be referred to as a New Radio (NR) system or a NR network. FIG.3illustratively shows one network device and two terminal devices as an example. Optionally, the communication system100may include a plurality of network devices, and other quantities of terminal devices may be included within a coverage area of each network device, and this is not limited in the embodiments of the present application.
Optionally, the communication system100may include other network entities such as a network controller, and a mobile management entity, which is not limited in the embodiments of the present application. It should be understood that, a device with a communication function in a network/system in embodiments of the present application may be referred to as a communication device. Taking the communication system100shown inFIG.3as an example, the communication devices may include a network device110and a terminal device120which have communication functions. The network device110and the terminal device120may be specific devices described above, which will not be described here again. The communication devices may also include other devices in the communication system100, such as network controllers, mobile management entities, and other network entities, which is not limited in the embodiments of the present application. As shown inFIG.4, an optional processing flow of a parameter configuration method provided by an embodiment of the present disclosure includes the following step S201. In step S201, a second terminal device sends a first parameter combination to a first terminal device. In an embodiment of the present disclosure, the first parameter combination is used for indicating sidelink data sending parameters of the second terminal device, or the first parameter combination is used for indicating sidelink data receiving parameters of the second terminal device, or the first parameter combination is used for indicating sidelink data sending parameters and sidelink data receiving parameters of the second terminal device. The second terminal device and the first terminal device are different terminal devices. 
In some embodiments, the second terminal device may send the first parameter combination to the first terminal device by a Physical Sidelink Control Channel (PSCCH), a Physical Sidelink Shared Channel (PSSCH), a sidelink feedback channel, or a Physical Sidelink Discovery Channel (PSDCH), etc. In some embodiments, the first parameter combination is transmitted through higher layer signaling, that is, the second terminal device sends the first parameter combination to the first terminal device through higher layer signaling. Optionally, the first parameter combination is transmitted by higher layer signaling through a container. In other embodiments, the first parameter combination is transmitted through Radio Resource Control (RRC) signaling, that is, the second terminal device sends the first parameter combination to the first terminal device through RRC signaling. In a specific implementation, that the second terminal device sends the first parameter combination to the first terminal device may be triggered by a higher layer message of the second terminal device or a first message of the first terminal device. The higher layer message is carried within RRC signaling and sent to the first terminal device simultaneously with the first parameter combination. The higher layer message includes at least one of the following: a connection establishment request message DIRECT_COMMUNICATION_REQUEST, a security parameter configuration message DIRECT_SECURITY_MODE_COMMAND, a security parameter configuration complete message DIRECT_SECURITY_MODE_COMPLETE, and a connection establishment complete message DIRECT_COMMUNICATION_ACCEPT. In a specific implementation, that the second terminal device sends the first parameter combination to the first terminal device may be triggered by an access layer of the second terminal device. 
As an example, the second terminal device triggers the second terminal device to send the first parameter combination to the first terminal device when a sidelink data sending capability of the second terminal device and/or a sidelink data receiving capability of the second terminal device changes. It may be understood that the higher layer message of the second terminal device triggers the second terminal device to send the first parameter combination to the first terminal device, or a first message sent by the first terminal device triggers the second terminal device to send the first parameter combination to the first terminal device. Regardless of whether the sending of the first parameter combination is implemented through triggering by the higher layer message of the second terminal device or through triggering by the first message of the first terminal device, the triggering may be periodic. That is, the higher layer of the second terminal device periodically triggers the second terminal device to send the first parameter combination to the first terminal device. Or the first terminal device periodically sends a first message to trigger the second terminal device to send the first parameter combination to the first terminal device. Here, the first message is used for indicating a type of a first parameter in the first parameter combination that the second terminal needs to send. For example, the first parameter in the first parameter combination includes different dimensions, such as a Quality of Service (QoS) attribute, a propagation mode, an identifier of the second terminal device, a target address, a source address, a provider service identifier (PSID), an intelligent transportation system-application identification (ITS-AID), a radio access type (RAT), and parameter configuration information issued by the higher layer of the second terminal device. 
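The first message's role described above, indicating which types (dimensions) of the first parameter the second terminal needs to send, can be sketched as a simple filter. The parameter names and values below are illustrative assumptions, not signaling field names from the disclosure.

```python
# Hypothetical parameter set held by the second terminal device; the keys
# mirror the dimensions listed above (QoS, propagation mode, addresses, RAT).
SECOND_UE_PARAMS = {
    "qos": {"priority": 3, "latency_ms": 20},
    "propagation_mode": "unicast",
    "source_address": "ue2",
    "target_address": "ue1",
    "rat": "NR",
}

def on_first_message(requested_types):
    """On receiving the first message, respond with a first parameter
    combination restricted to the requested dimensions (illustrative)."""
    return {k: v for k, v in SECOND_UE_PARAMS.items() if k in requested_types}

combo = on_first_message({"qos", "rat"})
assert combo == {"qos": {"priority": 3, "latency_ms": 20}, "rat": "NR"}
```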
Optionally, the types of the first parameter in embodiments of the present disclosure are different dimensions of the first parameters. Herein, propagation modes include unicast, multicast and broadcast, and RATs include access in a NR system and access in an LTE system. The first message is further used for indicating the sidelink data sending parameters of the first terminal device and/or the sidelink data receiving parameters of the first terminal device. Optionally, the first message indicates that the sidelink data sending parameters that the second terminal needs to transmit are sending parameters compatible with the receiving parameters of the first terminal. The first message indicates that the receiving parameters of the sidelink data that the second terminal needs to send are receiving parameters compatible with the sending parameters of the first terminal. The first message is a PC5-signaling message, an RRC message, a connection establishment request message, a security parameter configuration message, a security parameter configuration complete confirmation message, or a connection establishment complete confirmation message. In some embodiments, when sending by the second terminal device the first parameter combination to the first terminal device is triggered by the first message sent by the first terminal device, the method further includes: sending, by the second terminal device, a third message to the first terminal device, wherein the third message is used for indicating a type of a third parameter in a third parameter combination that the first terminal device needs to send. The third message and the first parameter combination are transmitted using a same RRC message. The first parameter is configured according to different situations by the first parameter combination sent by the second terminal device to the first terminal device. 
The first parameter includes a sidelink data sending parameter of the second terminal device, a sidelink data receiving parameter of the second terminal device, or a sidelink data sending parameter and a receiving parameter of the second terminal device. The situations include at least one of the following: Quality of Service (QoS), a propagation mode, an identifier of the second terminal device, a target address, a source address, a Provider Service Identifier (PSID), an intelligent transportation systems application-identifier (ITS-AID), a Radio Access Type (RAT), and parameter configuration information issued by a higher layer of the second terminal device. When the first parameter combination includes a QoS attribute, the first parameter combination is used for indicating that the second terminal device requires initiating a service with the QoS attribute. Or, when the first parameter combination includes a Quality of Service (QoS) attribute, the first parameter combination is used for indicating that the second terminal device requires releasing the service with the QoS attribute. Therefore, according to embodiments of the present disclosure, based on any one or more of a propagation mode, an identifier of the second terminal device, a target address, a source address, PSID and ITS-AID, the sidelink data sending parameters and/or the sidelink data receiving parameters of the second terminal device are configured. In an embodiment of the present disclosure, the first parameter combination includes at least one of the following: sending resources of the second terminal device, receiving resources of the second terminal device, related information of sending resource pool of the second terminal device, related information of receiving resource pool of the second terminal device, and related information of transmission power of the second terminal device. 
Optionally, the first parameter combination includes at least one of the following: a QoS attribute, a propagation mode, an identifier of the second terminal device, a target address, a source address, a PSID, an ITS-AID, a Radio Access Type (RAT), and parameter configuration information issued by a higher layer of the second terminal device. Herein, propagation modes include unicast, multicast and broadcast, and RATs include access in an NR system and access in an LTE system. In some embodiments, the sidelink data receiving parameters of the second terminal device include at least one of the following: a Reference Signal Receiving Power (RSRP) of the sidelink of the second terminal device, a Reference Signal Receiving Quality (RSRQ) of the sidelink of the second terminal device, and a Received Signal Strength Indication (RSSI) of the sidelink of the second terminal device. The sidelink of the second terminal device includes the sidelink between the second terminal device and the first terminal device and/or the sidelink between the second terminal device and the third terminal device. The first terminal device is different from the third terminal device. In an embodiment of the present disclosure, the first parameter combination is determined by the second terminal device according to an existing service, or the first parameter combination is determined by the second terminal device according to a newly generated service. Another optional processing flow of the parameter configuration method provided by an embodiment of the present disclosure, as shown inFIG.5, includes the following step S301. In step S301, the first terminal device sends a second parameter combination to the second terminal device.
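The contents of the first parameter combination enumerated above (resources, resource-pool information, transmission power, and sidelink measurements such as RSRP/RSRQ/RSSI) can be modeled as a plain container. All field names here are assumptions for the sketch, not signaling field names.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FirstParameterCombination:
    """Illustrative container for the fields enumerated above; any field
    may be absent, matching the 'at least one of the following' wording."""
    sending_resources: Optional[list] = None
    receiving_resources: Optional[list] = None
    tx_power_dbm: Optional[float] = None
    # Receiving-side measurements of the second terminal's sidelink:
    rsrp_dbm: Optional[float] = None
    rsrq_db: Optional[float] = None
    rssi_dbm: Optional[float] = None

combo = FirstParameterCombination(sending_resources=[2, 7], rsrp_dbm=-95.0)
assert combo.rsrp_dbm == -95.0 and combo.receiving_resources is None
```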
In an embodiment of the present disclosure, the second parameter combination is used to indicate the sidelink data sending parameters of the second terminal device, or the second parameter combination is used to indicate the sidelink data receiving parameters of the second terminal device, or the second parameter combination is used to indicate the sidelink data sending parameters and receiving parameters of the second terminal device. The second terminal device and the first terminal device are different terminal devices. In some embodiments, the first terminal device may send the second parameter combination to the second terminal device through PSCCH, PSSCH, a sidelink feedback channel, or PSDCH, etc. In some embodiments, the second parameter combination is transmitted through higher layer signaling, that is, the first terminal device sends the second parameter combination to the second terminal device through higher layer signaling. Optionally, the second parameter combination is transmitted by higher layer signaling through a container. In other embodiments, the second parameter combination is transmitted through RRC signaling, that is, the first terminal device sends the second parameter combination to the second terminal device through RRC signaling. In a specific implementation, that the first terminal device sends the second parameter combination to the second terminal device can be triggered by a higher layer message of the first terminal device or triggered by a second message of the second terminal device. The higher layer message is carried in RRC signaling, and is sent to the second terminal device simultaneously with the second parameter combination. The higher layer message includes at least one of the following: a connection establishment request message, a security parameter configuration message, a security parameter configuration complete message, and a connection establishment complete message.
In some embodiments, that the first terminal device sends the second parameter combination to the second terminal device can be triggered by the access layer of the first terminal device. In a specific implementation, the first terminal device receives a fifth parameter combination sent by the second terminal device, wherein the fifth parameter combination is used for indicating the sidelink data sending parameters of the first terminal device and/or the sidelink data receiving parameters of the second terminal device. The first terminal device configures the fifth parameter combination, and triggers the first terminal device to send the second parameter combination to the second terminal device when the first terminal device successfully configures the fifth parameter combination. It can be understood that successful configuration of the fifth parameter combination by the first terminal device is the event by which the access layer of the first terminal device triggers the first terminal device to send the second parameter combination to the second terminal device. It can be understood that the higher layer of the first terminal device triggers the first terminal device to send the second parameter combination to the second terminal device, or the second message of the second terminal device triggers the first terminal device to send the second parameter combination to the second terminal device. Whether it is triggered by the higher layer of the first terminal device or triggered by the second message of the second terminal device, the sending of the second parameter combination can be triggered periodically.
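The configure-then-trigger behavior described above, where the first terminal device sends the second parameter combination only after successfully configuring the received fifth parameter combination, can be sketched as follows. The function names and the shape of the returned message are illustrative assumptions.

```python
def on_fifth_parameter_combination(fifth_combo, apply_config):
    """Sketch: the first terminal device attempts to configure the received
    fifth parameter combination; only on success does its access layer
    trigger sending the second parameter combination back (assumed flow)."""
    if apply_config(fifth_combo):  # configuration succeeded
        return {"second_parameter_combination": {"ack_of": fifth_combo}}
    return None  # configuration failed: no trigger, nothing is sent

# Success path: a second parameter combination is produced and sent.
sent = on_fifth_parameter_combination({"tx_power_dbm": 17}, lambda c: True)
assert sent is not None
# Failure path: the trigger does not fire.
assert on_fifth_parameter_combination({"tx_power_dbm": 17}, lambda c: False) is None
```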
That is, the higher layer of the first terminal device periodically triggers the first terminal device to send the second parameter combination to the second terminal device, or the second terminal device periodically sends the second message to the first terminal device to trigger the first terminal device to send the second parameter combination to the second terminal device. Herein, the second message is a PC5-signaling message, an RRC message, a connection establishment request message, a security parameter configuration message, a security parameter configuration confirmation message, or a connection establishment complete confirmation message. The second parameter is configured according to different situations by the second parameter combination sent by the first terminal device to the second terminal device. The second parameter includes a sidelink data sending parameter of the second terminal device, or a sidelink data receiving parameter of the second terminal device, or the second parameter includes a sidelink data sending parameter and a receiving parameter of the second terminal device. The situations include at least one of the following: a QoS attribute, a propagation mode, an identifier of the second terminal device, a target address, a source address, a PSID, an ITS-AID, a RAT, and parameter configuration information issued by a higher layer of the second terminal device. Therefore, according to the embodiment of the present disclosure, based on any one or more of a propagation mode, an identifier of the second terminal device, a target address, a source address, a PSID and an ITS-AID, the sidelink data sending parameters and/or receiving parameters of the second terminal device are configured.
In an embodiment of the present disclosure, the second parameter combination includes at least one of the following: sending resources of the second terminal device, receiving resources of the second terminal device, related information of a sending resource pool of the second terminal device, related information of a receiving resource pool of the second terminal device, and related information of a transmission power of the second terminal device. In some embodiments, the description of the second parameter combination is the same as that of the aforementioned first parameter combination, and will not be repeated here. The description of the sidelink of the second terminal device is the same as that of the sidelink of the second terminal device mentioned in the above embodiments of the present disclosure, and will not be repeated here. According to another optional processing flow of a parameter configuration method provided by an embodiment of the present disclosure, step S302 is added on the basis of the above step S301. In step S302, the first terminal device receives third information sent by the second terminal device. In an embodiment of the present disclosure, the third information indicates that the second terminal device has completed a configuration of the second parameter combination, or that the second terminal device refuses to configure the second parameter combination, or that the second terminal device fails to configure the second parameter combination. When the third information indicates that the second terminal device fails to configure the second parameter combination, the third information indicates a third parameter combination selected by the second terminal device, or a fourth parameter combination being used by the second terminal device. Alternatively, the third information is used for indicating that the second terminal device requires reestablishing a sidelink connection with the first terminal device.
In an optional embodiment, before the second terminal device sends the third information, the second terminal device continues to use the previous configuration of the second parameter combination, or the second terminal device stops at least one radio bearer. The third parameter combination selected by the second terminal device refers to a third parameter combination provided by the second terminal device or a third parameter combination suggested by the second terminal device. In some embodiments, the method further includes: receiving, by the first terminal device, a sixth parameter combination sent by the second terminal device, wherein the sixth parameter combination is used for indicating at least one of the following: sidelink data sending parameters of the first terminal device and sidelink data receiving parameters of the first terminal device. The sixth parameter combination and the third information are transmitted using a same RRC message. In some embodiments, the information included in the third parameter combination and the fourth parameter combination is the same as the information included in the aforementioned first parameter combination, which will not be described in detail here. An embodiment of the present disclosure further provides a second terminal device, wherein a schematic diagram of the composition structure of the second terminal device 400 is shown in FIG. 6. The second terminal device includes: a first sending unit 401, configured to send a first parameter combination to a first terminal device, wherein the first parameter combination is used for indicating sidelink data sending parameters of the second terminal device and/or sidelink data receiving parameters of the second terminal device. In the embodiment of the present disclosure, the first parameter combination is transmitted through higher layer signaling.
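The handling of the third information at the first terminal device, across the complete/refuse/fail/reestablish outcomes described above, can be sketched as follows. The dictionary layout and the outcome strings are assumptions made for this illustration, not a normative message format.

```python
def handle_third_information(info, current_config, previous_config):
    """Return the parameter configuration the second terminal device is
    understood to be using after it reports the third information.

    info: dict with an 'outcome' key; on failure it may also carry
    'selected' (a third parameter combination provided or suggested by the
    second terminal device) or 'in_use' (the fourth parameter combination
    the second terminal device is currently using).
    """
    outcome = info["outcome"]
    if outcome == "complete":
        return current_config          # second parameter combination applied
    if outcome == "refuse":
        return previous_config         # peer kept its previous configuration
    if outcome == "fail":
        # On failure, the third information indicates either a third
        # parameter combination selected by the peer or the fourth
        # parameter combination it is still using.
        return info.get("selected") or info.get("in_use") or previous_config
    if outcome == "reestablish":
        return None                    # sidelink connection must be re-set up
    raise ValueError("unknown outcome: %r" % outcome)
```

For example, a failure report carrying a suggested third parameter combination makes the first terminal device fall back to that combination rather than the one it proposed.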
Optionally, the first parameter combination is transmitted by higher layer signaling through a container, or the first parameter combination is transmitted through RRC signaling. In the embodiment of the present disclosure, the sending of the first parameter combination by the first sending unit 401 to the first terminal device is triggered by a higher layer of the second terminal device or triggered by a first message of the first terminal device. The first message is used for indicating at least one of the following: the sidelink data sending parameters of the first terminal device and the sidelink data receiving parameters of the first terminal device. In the embodiment of the present disclosure, the sending of the first parameter combination by the first sending unit 401 to the first terminal device is implemented by periodic triggering. In the embodiment of the present disclosure, the first message is a PC5-signaling message, an RRC message, a connection establishment request message, a security parameter configuration message, a security parameter configuration complete confirmation message, or a connection establishment complete confirmation message. In the embodiment of the present disclosure, the first parameter includes sidelink data sending parameters of the second terminal device and/or sidelink data receiving parameters of the second terminal device. The situations include at least one of the following: a QoS attribute, a propagation mode, an identifier of the second terminal device, a target address, a source address, a PSID, an ITS-AID, a RAT, and parameter configuration information issued by the higher layer of the second terminal device. In some embodiments, the first sending unit 401 is further configured to send a third message to the first terminal device, wherein the third message is used for indicating a type of a third parameter in a third parameter combination that the first terminal device needs to send.
The third message and the first parameter combination are transmitted using a same RRC message. Herein, the related description of the first parameter combination is the same as that of the first parameter combination mentioned in the above embodiments of the present disclosure, and will not be repeated herein. The description of a sidelink of the second terminal device is the same as that of a sidelink of the second terminal device mentioned in the above embodiments of the present disclosure, and will not be repeated herein. Related descriptions for a higher layer message and the first message are the same as those in the parameter configuration method performed by the second terminal device, and will not be repeated herein. An embodiment of the present disclosure also provides a first terminal device, and a schematic diagram of the composition structure of the first terminal device 500 is shown in FIG. 7. The first terminal device includes: a second sending unit 501, configured to send a second parameter combination to a second terminal device, wherein the second parameter combination is used for indicating sidelink data sending parameters of the second terminal device and/or sidelink data receiving parameters of the second terminal device. In the embodiment of the present disclosure, the second parameter combination is transmitted through higher layer signaling. Optionally, the second parameter combination is transmitted by higher layer signaling through a container. In the embodiment of the present disclosure, the second parameter combination is transmitted through RRC signaling. In the embodiment of the present disclosure, the sending of the second parameter combination by the second sending unit 501 to the second terminal device is triggered by a higher layer of the first terminal device. The higher layer message is carried in RRC signaling and sent to the second terminal device simultaneously with the second parameter combination.
In the embodiment of the present disclosure, the higher layer message includes at least one of the following: a connection establishment request message, a security parameter configuration message, a security parameter configuration complete message and a connection establishment complete message. In the embodiment of the present disclosure, the sending of the second parameter combination by the second sending unit 501 to the second terminal device is triggered by a second message from the second terminal device. In the embodiment of the present disclosure, the sending of the second parameter combination by the second sending unit 501 to the second terminal device is implemented by periodic triggering. Herein, the second message is a PC5-signaling message, an RRC message, a connection establishment request message, a security parameter configuration message, a security parameter configuration complete confirmation message, or a connection establishment complete confirmation message. In the embodiment of the present disclosure, the second parameter combination is used for configuration of the second parameter for different situations. The second parameter includes a sidelink data sending parameter of the second terminal device and/or a sidelink data receiving parameter of the second terminal device. The situations include at least one of the following: a propagation mode, an identifier of the second terminal device, a target address, a source address, a PSID and an ITS-AID. In the embodiment of the present disclosure, the sending of the second parameter combination by the second sending unit 501 to the second terminal device is triggered by an access layer of the first terminal device.
When the sending of the second parameter combination by the second sending unit 501 to the second terminal device is triggered by the access layer of the first terminal device, the second sending unit 501 is configured to receive a fifth parameter combination sent by the second terminal device, wherein the fifth parameter combination is used for indicating sidelink data sending parameters of the first terminal device and/or sidelink data receiving parameters of the second terminal device, and the sending is triggered when the first terminal device successfully configures the fifth parameter combination. In some embodiments, the first terminal device further includes a first receiving unit 502, configured to receive third information sent by the second terminal device. The third information indicates that the second terminal device completes configuring the second parameter combination, or that the second terminal device refuses to configure the second parameter combination, or that the second terminal device fails to configure the second parameter combination. When the third information indicates that the second terminal device fails to configure the second parameter combination, the third information indicates a third parameter combination selected by the second terminal device or a fourth parameter combination being used by the second terminal device. In some embodiments, the first terminal device further includes: a second receiving unit 503, configured to receive a sixth parameter combination sent by the second terminal device, wherein the sixth parameter combination is used for indicating at least one of the following: sidelink data sending parameters of the first terminal device; and sidelink data receiving parameters of the first terminal device. The sixth parameter combination and the third information are transmitted using a same RRC message.
In embodiments of the present disclosure, the related descriptions of the second parameter combination, the third parameter combination and the fourth parameter combination are the same as those of the second parameter combination, the third parameter combination and the fourth parameter combination mentioned in the above embodiments of the present disclosure, and will not be repeated herein. The description of a sidelink of the second terminal device in the embodiment of the present disclosure is the same as that of a sidelink of the second terminal device mentioned in the above embodiments of the present disclosure, and will not be repeated here. An embodiment of the present disclosure also provides a terminal device, including a processor and a memory configured to store a computer program which is capable of being run on the processor, wherein the processor is configured to execute the acts of the above method when running the computer program. FIG. 8 is a schematic diagram of the hardware composition structure of a terminal device (a first terminal device and a second terminal device) of an embodiment of the present disclosure. The terminal device 700 includes at least one processor 701, a memory 702 and at least one network interface 704. The various components in the terminal device 700 are coupled together by a bus system 705. It may be understood that the bus system 705 is used for implementing connection and communication between these components. In addition to a data bus, the bus system 705 includes a power bus, a control bus, and a status signal bus. However, for clarity, all kinds of buses are uniformly referred to as the bus system 705 in FIG. 8. It can be understood that the memory 702 may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
The non-volatile memory may be a ROM, a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferromagnetic random access memory (FRAM), a Flash Memory, a magnetic surface memory, a compact disk, or a Compact Disc Read-Only Memory (CD-ROM). The magnetic surface memory may be a magnetic disk memory or a magnetic tape memory. The volatile memory may be a Random Access Memory (RAM) which serves as an external cache. By way of exemplary but not restrictive illustration, many forms of RAMs are available, such as a Static Random Access Memory (SRAM), a Synchronous Static Random Access Memory (SSRAM), a Dynamic Random Access Memory (DRAM), a Synchronous Dynamic Random Access Memory (SDRAM), a Double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), an Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), a SyncLink Dynamic Random Access Memory (SLDRAM), and a Direct Rambus Random Access Memory (DRRAM). The memory 702 described in the embodiment of the present disclosure is intended to include, but is not limited to, these and any other suitable types of memories. The memory 702 in embodiments of the present disclosure is configured to store various types of data to support the operations of the terminal device 700. Examples of such data include any computer program for operating on the terminal device 700, such as an application program 7022. A program for implementing the method of embodiments of the present disclosure may be included in the application program 7022. The methods disclosed in the above embodiments of the present disclosure may be applied in the processor 701 or implemented by the processor 701. The processor 701 may be an integrated circuit chip with a signal processing capability.
In an implementation process, the acts of the methods described above may be accomplished by integrated logic circuits of hardware in the processor 701 or by instructions in the form of software. The above-mentioned processor 701 may be a general-purpose processor, a Digital Signal Processor (DSP), or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc. The processor 701 may implement or execute the disclosed methods, acts and logical block diagrams in the embodiments of the present disclosure. The general-purpose processor may be a microprocessor or any conventional processor or the like. The acts of the methods disclosed in combination with the embodiments of the present disclosure may be directly embodied as being executed and accomplished by a hardware decoding processor or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium, and the storage medium is located in the memory 702. The processor 701 reads the information in the memory 702 and accomplishes the acts of the aforementioned methods in combination with its hardware. In an exemplary embodiment, the terminal device 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), FPGAs, general-purpose processors, controllers, MCUs, MPUs or other electronic components, for executing the aforementioned methods. An embodiment of the present disclosure further provides a storage medium configured to store a computer program. Optionally, the storage medium may be applied to the terminal device in the embodiments of the present disclosure, and the computer program enables the computer to execute the corresponding flow in each method of the embodiments of the present disclosure, which will not be repeated here for brevity.
The present disclosure is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products of embodiments of the present disclosure. It should be understood that each flow and/or block in the flowchart and/or block diagram, and combinations of flows and/or blocks in the flowchart and/or block diagram may be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, a special purpose computer, an embedded processor or other programmable data processing apparatus to produce a machine, such that the instructions which are executed by the processor of the computer or other programmable data processing apparatus produce means for implementing functions specified in one or more flows in the flow charts and/or one or more blocks in the block diagrams. These computer program instructions may alternatively be stored in a computer-readable memory that may direct a computer or other programmable data processing apparatus to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram. These computer program instructions may alternatively be loaded onto a computer or other programmable data processing apparatus, such that a series of operational steps are performed on the computer or other programmable device to produce a computer-implemented process, such that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram. 
The above descriptions are only preferred embodiments of the present disclosure and are not intended to limit the scope of protection of the present disclosure. Any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the present disclosure shall fall within the scope of protection of the present disclosure.
11943098
DETAILED DESCRIPTION
For simplicity and illustrative purposes, the present embodiments are described by referring mainly to an exemplary embodiment thereof. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present embodiments. However, it will be readily apparent to one of ordinary skill in the art that the present embodiments may be practiced without limitation to these specific details. In this description, well-known methods and structures have not been described in detail so as not to unnecessarily obscure the present embodiments. Turning now to the drawings, FIG. 1 is a functional block diagram illustrating a system model 10 having a plurality of Network Resource Management (NRM) model fragments or "objects" according to one embodiment of the present disclosure. Particularly, as seen in FIG. 1, model 10 comprises a ManagedElement (ME) object 12, a ManagedFunction (MF) object 14, an FMControl object 16, a SupportedFaultTypeList object 18, and a currentFaultList object 20. Two of the objects, i.e., ME object 12 and MF object 14, are known. These objects, both of Information Object Class (IOC), are defined, e.g., in the specifications listed in the Background section. Briefly, and without limitation, the MF object 14, together with the ME object 12, represents a system (e.g., network equipment, a network, a subnetwork, etc.) manufactured by a particular vendor and in operation in an operator network. In one embodiment, an ME object 12 corresponds to a network node or other piece of equipment (i.e., a device hereinafter referred to as "managed equipment"), and comprises one or more MF objects 14. Each MF object 14 corresponds to distinct operations which the managed equipment may perform.
These functions include, but are not limited to, detecting faults for a network management system, producing corresponding fault reports, reporting the fault reports to one or more addresses of interested nodes, and logging the fault reports to a log file. According to one embodiment of the present disclosure, an MF object 14 may be implemented as a class and contain only itself. That is, the given MF object 14 specifies the variables, classes, and functions that it needs to detect faults, report faults, and log faults, but does not contain the variables, classes, and functions associated with any other MF objects 14. The present disclosure also provides three new objects: the FMControl object 16, the supportedFaultTypeList object 18, and the currentFaultList object 20. As seen in FIG. 1, each of these objects is contained in MF object 14. More particularly, each MF object 14 contains one FMControl object 16, which represents the capabilities of MF object 14 to generate fault reports, report those fault reports to other interested nodes, and log the fault reports. The FMControl object 16 may be altered at runtime by both the ME object 12 and by the network system management, as explained herein, and includes several attributes. As seen in the following table, some attributes control the administrative and operational states of the FMControl object 16 (and hence, the fault management procedures), while other attributes specify how the generated fault reports are to be logged and communicated to the network system management.
TABLE 1 - FMControl Attributes

Attribute Name: administrativeState
Values: UNLOCKED (set), LOCKED (reset)
Written By: Network Management System
Description: The administrativeState attribute indicates the administrative state of the FMControl object, and is set and reset to start and cease, respectively, fault detection, reporting, and logging.

Attribute Name: operationalState
Values: ENABLED (set), DISABLED (reset)
Written By: MF object
Description: The operationalState attribute indicates the operational state of the FMControl object.

Attribute Name: faultReportLog
Values: URL of fault report log file
Written By: MF object
Description: The faultReportLog attribute specifies a path to a file system where the reported faults are logged.

Attribute Name: faultReportTarget
Values: URL of management node(s)
Written By: Network Management System
Description: The faultReportTarget attribute is a list of addresses to which MF object 14 is to send fault reports.

In one embodiment, the administrativeState attribute is set or reset by the Network Management System to effect fault detection, reporting, and logging by MF object 14. Particularly, the Network Management System sets this attribute to UNLOCKED or LOCKED using a WRITE command. Setting the attribute to UNLOCKED indicates to MF object 14 that it should begin detecting, reporting, and logging faults. Setting the attribute to LOCKED indicates to MF object 14 that fault detection, reporting, and recording are no longer needed, and thus, MF object 14 should suspend or cease detecting, reporting, and recording faults. The operationalState attribute indicates the state of the FMControl object 16. In particular, MF object 14 sets this attribute to ENABLED or DISABLED. Setting the attribute to ENABLED indicates that MF object 14 has adequate resources to begin detecting faults and generating and logging the resultant fault reports. Resetting this attribute to DISABLED indicates that MF object 14 does not have sufficient resources to detect, report, or record faults, or to send fault reports to faultReportTarget. The faultReportLog attribute identifies the file system where the fault reports are to be logged.
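The attributes of Table 1 can be modeled as a small data class. This is an illustrative sketch only: the field names mirror the NRM attribute names, but the class, its default (reset) values, and the example URL are assumptions, not a normative data model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FMControl:
    # Written by the Network Management System: UNLOCKED starts fault
    # detection, reporting, and logging; LOCKED ceases it.
    administrativeState: str = "LOCKED"
    # Written by the MF object: ENABLED when it has adequate resources to
    # detect faults and generate/log fault reports.
    operationalState: str = "DISABLED"
    # Path to the file system where reported faults are logged
    # (written by the MF object, not by network system management).
    faultReportLog: str = ""
    # Addresses to which the MF object sends fault reports
    # (written by the Network Management System).
    faultReportTarget: List[str] = field(default_factory=list)
```

A newly created instance starts in the reset states, matching the idea that fault management is inactive until the Network Management System unlocks it.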
The size and location of the specified file system are determined by the MF object 14. In one embodiment, the file system is a circular log file. In situations where there is insufficient space to log new fault reports, MF object 14 deletes or overwrites the oldest fault reports to make room for the new fault reports. According to the present disclosure, MF object 14 is configured to log a fault report when: a fault is detected; and the operationalState attribute is ENABLED; and the administrativeState attribute is UNLOCKED. Additionally, according to the present disclosure, the Network System Management does not set faultReportLog. However, the network system management can read the logged fault reports from the file system identified in faultReportLog as long as: the operationalState attribute is ENABLED; and the administrativeState attribute is UNLOCKED. The faultReportTarget attribute identifies the addresses of one or more nodes that are interested in knowing about the faults detected and logged by MF object 14. The Network Management System writes these addresses, and as seen in more detail later, the MF object 14 sends the fault reports to each of these addresses. Returning to FIG. 1, the supportedFaultTypeList object 18 comprises a list that identifies the various supported types of faults that MF object 14 is capable of detecting, reporting, and logging. However, unlike the data in the FMControl object 16, which can be modified at runtime, the information in the supportedFaultTypeList object 18 cannot. Rather, the information included in the supportedFaultTypeList object 18 is created when MF object 14 is created or updated. According to the present embodiments, a single supportedFaultTypeList object 18 can be related to one or more MF objects 14, and one MF object 14 can use one or more supportedFaultTypeList objects 18. The currentFaultList object 20 comprises a list of current fault information. More particularly, the currentFaultList object 20 is a list of all current fault reports.
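The circular log-file behavior described above (overwrite the oldest fault reports when space runs out) can be sketched with a fixed-capacity buffer. A bounded in-memory deque stands in for the real file system here; the class name and capacity are assumptions for the example.

```python
from collections import deque

class CircularFaultLog:
    """Illustrative circular fault-report log: when full, the oldest
    report is discarded to make room for the new one."""

    def __init__(self, capacity: int):
        # deque with maxlen silently drops the oldest entry on overflow,
        # mirroring the delete/overwrite-oldest rule.
        self._entries = deque(maxlen=capacity)

    def write(self, report: str) -> None:
        self._entries.append(report)

    def read_all(self):
        # Oldest surviving report first.
        return list(self._entries)
```

Writing a third report into a two-slot log evicts the first, so readers always see the most recent reports up to the configured capacity.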
An occurrence of a fault report in the currentFaultList object 20 implies that the same fault report information has been reported to the one or more addresses in the list specified in the faultReportTarget attribute, and recorded in the log file specified in the faultReportLog attribute. As seen in FIG. 1, MF object 14 contains only itself. That is, MF object 14 is implemented as a class having the variables, classes, and functions that it needs to detect faults, report faults, and log faults for itself, but not those of any other MF objects 14. However, those of ordinary skill in the art should appreciate that the present disclosure is not so limited. In other embodiments, such as the embodiment seen in FIG. 2, MF object 14 can contain itself and one or more other MF objects 22, 24, and 26, also implemented as classes. In such embodiments, there is still an FMControl object 16, a supportedFaultTypeList object 18, and a currentFaultList object 20. However, the FMControl object 16, the supportedFaultTypeList object 18, and the currentFaultList object 20 are used not only in connection with the functions of MF object 14, but also in connection with the functions of MF objects 22, 24, and 26. For example, as seen in FIG. 2, the MF objects 14, 22, 24, and 26 form a tree with the ME object 12 forming the root of the tree. The top-level MF object, i.e., MF object 14, is configured to detect, report, and log the faults that it is associated with, as well as those of all other MF objects 22, 24, 26 of the tree, based on the information configured in FMControl object 16. The supportedFaultTypeList object 18 would comprise a list identifying the types of faults supported by MF objects 14, 22, 24, and 26, and the currentFaultList object 20 would comprise a list of current fault information for all MF objects 14, 22, 24, and 26. FIG. 3 is a signaling diagram illustrating a fault management operation according to one embodiment of the present disclosure.
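The containment tree of FIG. 2, where the top-level MF object's supportedFaultTypeList covers itself and every contained MF object, can be sketched as a recursive union over a tree. The node class and its fields are assumptions made for this illustration, not the IOC definitions themselves.

```python
class MFNode:
    """Illustrative MF object in the FIG. 2 containment tree."""

    def __init__(self, name, fault_types, children=()):
        self.name = name
        self.fault_types = set(fault_types)   # faults this MF itself supports
        self.children = list(children)        # contained MF objects

    def supported_fault_type_list(self):
        """Union of fault types over this MF object and all MF objects it
        contains, mirroring the top-level supportedFaultTypeList."""
        types = set(self.fault_types)
        for child in self.children:
            types |= child.supported_fault_type_list()
        return types
```

With MF object 14 at the top and MF objects 22 and 24 beneath it, the top-level list is simply the union of all three nodes' fault types.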
As seen in FIG. 3, a Network Management System (MS) 30 reads the supportedFaultTypeList object 18 of MF object 14 to determine the types of faults that MF object 14 is capable of detecting, reporting, and logging (line 50). This may be accomplished, for example, using a READ command. So informed, MS 30 sets (or resets) the attributes of the FMControl object 16 to effect fault detection, reporting, and logging (line 52). Particularly, MS 30 sets the administrativeState attribute of the FMControl object 16 to UNLOCKED. This indicates to MF object 14 that it should begin detecting, reporting, and logging faults, as MF object 14 will perform these functions only when this attribute is set to UNLOCKED. Alternatively, MS 30 may reset the administrativeState attribute of the FMControl object 16 to LOCKED, as previously described, to indicate to MF object 14 that fault detection, reporting, and recording are no longer needed. When this attribute is reset, MF object 14 suspends or ceases detecting, reporting, and recording faults. Additionally, MS 30 may use a WRITE command to write the addresses of the management nodes to the faultReportTarget attribute of the FMControl object 16, thereby indicating to MF object 14 where it should send the fault reports. Whenever MF object 14 detects a fault, it generates a fault report and uses a SEND command to send the fault report to the one or more addresses listed in the faultReportTarget attribute 16a of FMControl object 16 (line 54). Additionally, MF object 14 utilizes a WRITE command to log the fault report to the file system identified in the faultReportLog attribute 16b of FMControl object 16 (line 56), and writes the fault report to the currentFaultList object 20 (line 58).
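The fault-handling sequence of lines 54 through 58 can be sketched as follows: on detecting a fault, the MF object sends the report to every address in faultReportTarget, logs it, and records it in currentFaultList. The transport is stubbed out with an in-memory dict of per-address inboxes; the function and its parameters are assumptions for this sketch, not the actual SEND/WRITE commands.

```python
def handle_fault(report, fault_report_target, inboxes, log, current_fault_list):
    """Illustrative handling of one detected fault by an MF object.

    report: the generated fault report.
    fault_report_target: addresses of the interested nodes (attribute 16a).
    inboxes: dict mapping each address to the reports delivered to it,
             standing in for the SEND transport.
    log: stands in for the file system named by faultReportLog (16b).
    current_fault_list: stands in for the currentFaultList object.
    """
    for addr in fault_report_target:
        inboxes.setdefault(addr, []).append(report)   # SEND to each node
    log.append(report)                                # WRITE to the log file
    current_fault_list.append(report)                 # record as current fault
```

Note the invariant this preserves: a report appearing in current_fault_list has also been sent to every target address and written to the log, matching the implication stated for the currentFaultList object.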
Thereafter, the MS 30 can issue one or more READ commands, for example, to read the historical fault report information from the log file identified in the faultReportLog attribute 16b of FMControl object 16 (line 60), and obtain information on current faults by reading the fault reports written to the currentFaultList object 20 (line 62). FIG. 4 is a flow diagram illustrating a method 70 of performing a Network Resource Management (NRM) Fault Management (FM) procedure according to one embodiment of the present disclosure. It should be noted here that method 70 is described in terms of a single MF object 14. However, this is for illustrative purposes only. Method 70 is also applicable to embodiments where multiple MF objects 14, 22, 24, 26 exist, such as in the embodiment of FIG. 2. As seen in FIG. 4, method 70 calls for maintaining an NRM Information Object Class (IOC) ME object 12 comprising one or more NRM IOC MF objects 14, with each MF object 14 configured to detect, report, and log faults (box 72). Method 70 also calls for maintaining, for each MF object 14, an NRM IOC FMControl object 16, one or more NRM supportedFaultTypeList objects 18, and an NRM currentFaultList object 20 (box 74). As previously described, the FMControl object 16 comprises a plurality of attributes including an administrative state attribute, an operational state attribute, and a faultReportTarget attribute identifying the one or more addresses where MF object 14 is to send fault reports. These attributes can be set and reset to control whether the MF object 14 detects, reports, and logs faults. Method 70 continues with the MS 30 verifying the types of faults that MF object 14 is capable of detecting, reporting, and logging, and setting the administrativeState attribute of the FMControl object 16 to control the MF object 14 to detect faults and send the fault reports (box 76).
These functions can be respectively accomplished by MS 30 utilizing a READ command to read the supportedFaultTypeList object 18, and a WRITE command to set the administrativeState attribute of FMControl object 16 to UNLOCKED. In embodiments where MF object 14 is to suspend or cease detecting, reporting, and logging faults, MS 30 would reset the administrativeState attribute of FMControl object 16 to LOCKED, as previously described. In this embodiment, however, MF object 14 is to begin detecting, reporting, and logging faults. Therefore, responsive to MS 30 setting the administrativeState attribute of FMControl object 16, and if the operationalState attribute of FMControl object 16 is set to ENABLED, MF object 14 begins the processes of detecting, reporting, and logging the faults (box 78). Once logged, MF object 14 sends the fault reports to the one or more addresses specified in the faultReportTarget attribute of FMControl object 16 (box 80). FIG. 5 is a flow diagram illustrating a method 90, implemented by network equipment, of writing the fault reports to the log file according to one embodiment of the present disclosure. As seen in FIG. 5, method 90 begins with checking the values of the administrativeState and operationalState attributes of FMControl object 16 (box 92). Responsive to determining that the operationalState attribute is set to ENABLED, and that the administrativeState attribute is set to UNLOCKED, MF object 14 will write information associated with a detected fault to the location specified in the faultReportLog attribute of FMControl object 16 (box 94). Additionally, as stated above, setting these attributes indicates to MF object 14 to detect, report, and log the faults (box 96).
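The gating rule of method 90, where a detected fault is written to the log only when the operationalState attribute is ENABLED and the administrativeState attribute is UNLOCKED, can be sketched as a small predicate. The function name and the dict representation of FMControl are assumptions for this example.

```python
def maybe_log_fault(fmcontrol: dict, report: str, log: list) -> bool:
    """Write the fault report to the log only under method 90's conditions.

    Returns True when the report was logged, False when fault handling is
    currently disabled or locked (boxes 98/100: no write, handling ceases).
    """
    if (fmcontrol.get("operationalState") == "ENABLED"
            and fmcontrol.get("administrativeState") == "UNLOCKED"):
        log.append(report)   # stands in for the faultReportLog file system
        return True
    return False
```

Resetting either attribute is enough to stop further writes, which matches the LOCKED/DISABLED branch of the flow diagram.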
However, responsive to determining that the operationalState attribute is set to DISABLED and that the administrativeState attribute is set to LOCKED, MF object14will not write information associated with a detected fault to the location specified in the faultReportLog attribute of FMControl object16(box98) and will cease to detect, report, and log the faults (box100). FIG.6is a flow diagram illustrating a method110, implemented by network equipment, of indicating whether the network equipment has sufficient resources to detect faults, generate fault reports, and log the fault reports according to one embodiment of the present disclosure. Method110begins with MF object14determining whether sufficient space exists at the log file specified in the faultReportLog attribute of FMControl object16to write a fault report associated with a currently detected fault (box112). If sufficient space does not exist, MF object14deletes or overwrites the oldest fault reports stored in the log file to make room to write the new fault reports (box114). When sufficient space does exist, however, MF object14writes the fault report to the log file (box116). FIG.7is a flow diagram illustrating a method120, implemented at a network management node, of performing a NRM FM procedure according to one embodiment of the present disclosure. Specifically, method120is performed by MF object14to indicate whether it does or does not have a sufficient amount of resources to perform the FM process. As seen inFIG.7, method120begins with MF object14checking to determine whether it has sufficient resources to detect faults, produce fault reports to send to other nodes, and to log the fault reports (box122). If sufficient resources exist, MF object14sets the operationalState attribute of FMControl object16to ENABLED (box124). If sufficient resources do not exist, however, MF object14resets the operationalState attribute of FMControl object16to DISABLED (box124). 
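The overwrite-oldest behavior of method 110 maps naturally onto a bounded queue. The sketch below is one possible realization, assuming a fixed report capacity and using Python's `collections.deque`, which silently evicts the oldest entry when full; the real log file and its capacity accounting are not specified in the disclosure.

```python
from collections import deque

def write_report(log: deque, report: dict, capacity: int) -> None:
    """Write a fault report; when the log is full, the oldest report is
    discarded to make room (deque with maxlen drops from the left)."""
    assert log.maxlen == capacity
    log.append(report)   # evicts the oldest entry automatically when full

log = deque(maxlen=3)
for i in range(5):
    write_report(log, {"fault_id": i}, capacity=3)
# Only the three newest reports remain: fault_ids 2, 3, 4
```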
So set, a Network Management System, such as MS30, can determine whether MF object14does or does not have the resources it needs to perform the FM procedures simply by reading the operationalState attribute of the FMControl object16. FIG.8is a flow diagram illustrating a method, implemented by network equipment, of detecting faults and writing fault reports according to one embodiment of the present disclosure. As seen inFIG.8, method130calls for maintaining an IOC ME object12comprising one or more NRM IOC MF objects14. Each MF object14is configured to detect, report, and log faults (box132). Additionally, each MF object14is associated with a NRM IOC FMControl object16, one or more NRM faultTypeList objects18, and a NRM currentFaultList object20, as previously described. The Network Management System, such as MS30, verifies the types of faults that MF object14is capable of detecting, reporting, and logging by reading the faultTypeList object18(box134). MS30then indicates to MF object14that it can detect faults, generate fault reports, and send the fault reports to specified destination addresses by setting the administrativeState attribute of the FMControl object16to a first predefined value (e.g., UNLOCKED) (box136). MS30also writes the one or more addresses of the nodes interested in receiving the fault reports to the faultReportTarget attribute of FMControl object16to instruct the MF object14to send the fault reports to those addresses (box138). If MS30decides to instruct MF object14to suspend or cease detecting, reporting, and logging faults, MS30will reset the administrativeState attribute of the FMControl object16to a second predefined value (e.g., LOCKED) (box140). 
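The management-side setup sequence (boxes 134 through 140) uses only standardized READ and WRITE accesses. The sketch below models that sequence with a plain dictionary standing in for the FMControl object; the helper names `read`/`write`, the example fault types, and the addresses are all illustrative assumptions.

```python
def read(obj: dict, attr: str):
    return obj[attr]

def write(obj: dict, attr: str, value) -> None:
    obj[attr] = value

fm_control = {"administrativeState": "LOCKED",
              "operationalState": "ENABLED",
              "faultReportTarget": [],
              "faultReportLog": "/var/log/fm/faults.log"}  # hypothetical path
fault_type_list = ["linkDown", "overTemperature"]          # hypothetical types

# Verify which fault types the MF can detect, report, and log (box 134).
supported = read({"faultTypeList": fault_type_list}, "faultTypeList")

# Enable fault handling and name the report destinations (boxes 136, 138).
write(fm_control, "administrativeState", "UNLOCKED")
write(fm_control, "faultReportTarget", ["10.0.0.5", "10.0.0.6"])

# To suspend fault handling later, reset the attribute (box 140):
# write(fm_control, "administrativeState", "LOCKED")
```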
Regardless, however, if the administrativeState attribute of FMControl object16is set to the first predefined value (e.g., UNLOCKED), and the operationalState attribute of FMControl object16is set to a third predefined value (e.g., ENABLED), MS30can read the fault reports from the log file specified in the faultReportLog attribute of the FMControl object16(box142). FIG.9illustrates a representative network150comprising an instance of network equipment160, a network management node180, such as MS30, and one or more other network nodes190communicatively connected to network equipment160and network management node180. The other network nodes190may include, for example, a device configured as the file storage location specified in the faultReportLog attribute of FMControl object16, and one or more devices having the addresses listed in the faultReportTarget attribute of FMControl object16. The network equipment160, which in one embodiment executes MF object14, includes processing circuitry162(e.g., one or more general and/or special purpose microprocessors etc.), memory164, communication circuitry166, and in some embodiments, fault detection circuitry168operative to detect faults and report information associated with the faults to processing circuitry162. Although the memory164is depicted as being separate from the processing circuitry162, those of skill in the art understand that the present disclosure is not so limited. In one embodiment, processing circuitry162includes memory164as internal memory, such as a cache memory. Those of skill in the art additionally understand that virtualization techniques allow some functions nominally executed by the processing circuitry162to actually be executed by other hardware, perhaps remotely located (e.g., in the so-called “cloud”). 
The memory164is operative to store, and the processing circuitry162is operative to execute, software that implements the NRM FM procedure described herein to detect faults, generate fault reports about the faults, and send the fault reports to one or more destination addresses. In particular, the processing circuitry162is operative to perform any of the methods previously described and claimed herein. To accomplish communication, network equipment160may additionally have components or circuits not depicted inFIG.9, such as a wireless communication transceiver or other dedicated network hardware, a user interface, and the like. The network management node180(e.g., MS30) includes processing circuitry182, memory184, and communication circuitry186. As above, memory184and processing circuitry182are illustrated as comprising separate, independent components. However, those of skill in the art understand that the present embodiments are not so limited. In at least one embodiment, processing circuitry182includes memory184as internal memory, such as a cache memory. Those of skill in the art additionally understand that virtualization techniques allow some functions nominally executed by the processing circuitry182to actually be executed by other hardware, perhaps remotely located (e.g., in the so-called “cloud”). The memory184is operative to store, and the processing circuitry182is operative to execute, software that facilitates fault detection, generating fault reports, sending those reports to one or more destination addresses, and logging those fault reports according to a NRM FM procedure as described herein. In particular, the processing circuitry182is operative to perform any of the methods previously described and claimed herein. Additionally, network management node180may have other components or circuits not specifically shown inFIG.9. 
In all embodiments, the processing circuitry162,182may comprise any sequential state machine operative to execute machine instructions stored as machine-readable computer control programs170,188in memory164,184, respectively. For example, such processing circuitry162,182may comprise one or more hardware-implemented state machines (e.g., in discrete logic, FPGA, ASIC, etc.); programmable logic together with appropriate firmware; one or more stored-program, general-purpose processors, such as a microprocessor or Digital Signal Processor (DSP), or any combination of the above. In all embodiments, the memory164,184may comprise any non-transitory machine-readable media known in the art or that may be developed, including but not limited to magnetic media (e.g., floppy disc, hard disc drive, etc.), optical media (e.g., CD-ROM, DVD-ROM, etc.), solid state media (e.g., SRAM, DRAM, DDRAM, ROM, PROM, EPROM, Flash memory, solid state disc, etc.), or the like. In all embodiments, the communication circuits166,186may comprise a receiver and transmitter interface used to communicate with one or more other nodes over a communication network according to one or more communication protocols known in the art or that may be developed, such as Ethernet, TCP/IP, SONET, ATM, IMS, SIP, or the like. The communication circuits166,186implement receiver and transmitter functionality appropriate to the communication network links (e.g., optical, electrical, and the like). The transmitter and receiver functions may share circuit components and/or software, or alternatively may be implemented separately. Embodiments of the present disclosure present numerous advantages over the prior art. By way of example only, conventional FM procedures require a Network Management System such as MS30to implement a subscribe/unsubscribe mechanism with MF object14at runtime so that MF object14could send alarm notifications using the prior art notifyNewAlarm function. 
Embodiments of the present disclosure, however, negate the need for such subscription mechanisms. This is because the present embodiments configure MS30to identify the one or more addresses where fault reports are to be sent in the faultReportTarget attribute of FMControl object16. The MF object14would then send the fault reports to those addresses whenever a fault report is produced and recorded, but only if the administrativeState attribute of the FMControl object16is set to a predetermined value, such as UNLOCKED. Additionally, the present embodiments reduce complexity by utilizing standardized configuration management operations and functions (i.e., READ, WRITE, SEND) to implement FM procedures rather than the specialized operations and functions needed to set up (i.e., subscribe) and unsubscribe the prior art subscription mechanisms. This is especially beneficial because the standardized functions are already implemented in FM systems and would replace the specialized functions and operations. Thus, no new functions or operations are required. In particular, the MS30WRITES the administrativeState attribute to be UNLOCKED or LOCKED, as well as the addresses in the faultReportTarget attribute of FMControl object16. MS30also READS the operationalState attribute of FMControl object16. As for the MF object14, it WRITES the operationalState attribute to be ENABLED or DISABLED, and READS the addresses in the faultReportTarget attribute of FMControl object16, as well as the path to the file system specified in the faultReportLog attribute for logging the fault reports. Moreover, prior art systems require the runtime implementation of a complex system of operations to establish an understanding of the types of faults that can be reported from each MF object14or group of MF objects14. With the present embodiments, however, these functions are implemented when the system is set up, and further managed on a per-MF object basis. 
Additionally, MS30and MF object14can modify certain attributes of an FMControl object16at runtime, where prior art systems cannot. The present embodiment may, of course, be carried out in other ways than those specifically set forth herein without departing from essential characteristics of the embodiment. The present embodiments are to be considered in all respects as illustrative and not restrictive, and all changes coming within the meaning and equivalency range of the appended claims are intended to be embraced therein.
26,113
11943099
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS In this specification, the claims, and the accompanying drawings of this application, terms “first”, “second”, “third”, “fourth”, and the like (if existent) are intended to distinguish between similar objects but do not necessarily indicate a specific order or sequence. It should be understood that data termed in such a way is interchangeable in appropriate circumstances, so that embodiments of this application described herein can be implemented in other orders than the order illustrated or described herein. In addition, terms such as “include”, “correspond”, and any variations thereof are intended to cover non-exclusive inclusions, for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to those clearly listed steps or units, but may include other steps or units that are not clearly listed or inherent to such a process, method, product, or device. In embodiments of this application, terms “example”, “for example”, and the like are used to represent giving an example, an illustration, or a description. Any embodiment or design scheme described as an “example” or “for example” in embodiments of this application should not be explained as being more preferred or having more advantages than another embodiment or design scheme. Exactly, use of the terms “example”, “for example”, and the like is intended to present a related concept in a specific manner. In a related technology, when a head node encapsulates an iFIT measurement header in a packet, if a tail node does not have an iFIT capability, the tail node cannot strip the iFIT measurement header from the packet. As a result, the packet is discarded because the packet cannot be identified, which affects normal packet forwarding. Therefore, before encapsulating the measurement header, the head node needs to confirm that the tail node has the iFIT capability. 
This usually requires operation and maintenance personnel to manually configure iFIT-related information on the head node, to ensure that the head node can determine, based on an iFIT capability of a related device, whether to encapsulate the measurement header before encapsulating the measurement header. However, manual configuration is inefficient and complex. For example, in a network, a network device1and a network device2that are connected to each other are included, and the network device2is connected to an external network. The network device1and the network device2may be respectively a head node and a tail node of an iFIT measurement domain. After the network device1obtains a to-be-sent packet, the network device1encapsulates an iFIT measurement header into the packet, and sends, to the network device2, a packet into which the iFIT measurement header is encapsulated. The network device2performs data measurement and reporting based on the iFIT measurement header in the received packet, strips the iFIT measurement header from the packet, and forwards a data packet from which the iFIT measurement header is stripped to the external network. Before the network device1needs to encapsulate the measurement header, whether the network device2supports iFIT and a specific capability of supporting iFIT needs to be determined. Otherwise, if the network device2does not support iFIT or the corresponding capability, after the network device1encapsulates the measurement header and sends the measurement header to the network device2, an error may occur in packet processing or sending because the network device2does not support processing of the measurement header. In this case, the iFIT capability supported by the network device2is usually manually configured on the network device1by operation and maintenance personnel. This results in problems such as complex configuration and low configuration efficiency. 
Further, after the network is changed, the network device2may be changed from a device capable of iFIT processing to a device not capable of iFIT processing. Because the related configuration of the network device2on the network device1is manually configured by the operation and maintenance personnel, the network device1cannot perceive a change on the network device2. In this way, after the network device1receives the packet, the network device1continues to encapsulate the iFIT measurement header into the packet, and forwards, to the network device2, the packet into which the iFIT measurement header is encapsulated. However, because the network device2does not have the iFIT capability, the network device2cannot identify the iFIT measurement header in the packet, and consequently directly forwards the packet to the external network. The external network cannot identify the iFIT measurement header in the packet, and consequently discards the packet, which affects normal packet forwarding. In view of the foregoing problem, embodiments of this application provide a capability notification method and a related device. A first network device sends a packet to a second network device, to notify a support status of an iFIT capability of the first network device. In this way, the second network device can determine, based on the iFIT capability of the first network device, whether to encapsulate a measurement header, to avoid affecting normal packet forwarding because the first network device cannot correctly process the measurement header in the packet. FIG.1is a schematic flowchart of a capability notification method100according to an embodiment of this application. As shown inFIG.1, the capability notification method100provided in this embodiment of this application includes the following steps. 
Step101: A first network device sends a first packet to a second network device, where the first packet includes first indication information, and the first indication information indicates a support status of an in-situ flow information telemetry (iFIT) capability corresponding to the first network device or a first service module included in the first network device. In this embodiment, the first network device and the second network device may be, for example, physical devices such as routers, switches, or gateways, or may be virtual devices that support packet forwarding. Specific types of the first network device and the second network device are not limited in this embodiment. The first network device may include a plurality of service modules. The first service module may be a service module that is in the first network device and that may be configured to perform packet forwarding or packet processing, for example, a VPN processing module. The first packet may be, for example, a packet such as a border gateway protocol (BGP) packet or an interior gateway protocol (IGP) packet. In an example, the first packet is a BGP update packet used to advertise a route. For example, the first network device advertises a route to the second network device by using the BGP update packet that carries the support status of the iFIT capability, where the route may be a service route, a tunnel route, an SRv6 route such as an SRv6 locator, or another route; and the second network device may normally receive and identify the iFIT capability carried in the packet. For an address range indicated by the BGP packet, the second network device may serve as an Encap node, and the first network device may serve as a Decap node. In a possible implementation, the support status of the iFIT capability may be whether the first network device or the first service module is capable of iFIT processing. 
For example, the first indication information may include a bit, and a value of the bit may be “0” or “1”. When the value of the bit is “0”, it indicates that the first network device or the first service module is not capable of iFIT processing. When the value of the bit is “1”, it indicates that the first network device or the first service module is capable of iFIT processing. In an example, before the first network device sends the first packet to the second network device, the first network device may further indicate, by sending another packet to the second network device, or negotiating with the second network device in advance, a specific iFIT capability or specific iFIT capabilities that is or are supported. In another possible implementation, the support status of the iFIT capability may alternatively be a status that the first network device or the first service module is capable of iFIT processing or not capable of iFIT processing. For example, the iFIT capability may include one or more of a path tracking in-situ operation, administration, and maintenance capability, an end-to-end in-situ operation, administration, and maintenance capability, a data direct exporting in-situ operation, administration, and maintenance capability, or an enhanced alternate marking in-situ operation, administration, and maintenance capability. That is, the first indication information may indicate that the first network device or the first service module is capable of and/or not capable of one or more of the four types of iFIT processing. Optionally, in a possible example, the path tracking in-situ operation, administration, and maintenance capability may further include a pre-allocated path tracking in-situ operation, administration, and maintenance capability and an incremental path tracking in-situ operation, administration, and maintenance capability. 
To be specific, the iFIT capability may include one or more of the four capabilities: the path tracking in-situ operation, administration, and maintenance capability, the end-to-end in-situ operation, administration, and maintenance capability, the data direct exporting in-situ operation, administration, and maintenance capability, or the enhanced alternate marking in-situ operation, administration, and maintenance capability. Alternatively, the iFIT capability may include one or more of the five capabilities: the pre-allocated path tracking in-situ operation, administration, and maintenance capability, the incremental path tracking in-situ operation, administration, and maintenance capability, the end-to-end in-situ operation, administration, and maintenance capability, the data direct exporting in-situ operation, administration, and maintenance capability, or the enhanced alternate marking in-situ operation, administration, and maintenance capability. For ease of description, the following uses an example in which the iFIT capability includes the four iFIT capabilities. In a possible example, the first indication information may include a first TLV, the first TLV includes a first field, and the first field indicates an iFIT capability supported and/or not supported. In other words, the first network device may add a TLV to the first packet, and carry, by using a field in the TLV, an iFIT capability supported and/or not supported by the first network device. In a possible example, the first field includes one or more bits, and each bit in the first field indicates that the first network device is capable of or not capable of one of the foregoing four types of iFIT processing. For example, when the iFIT capability includes the foregoing four iFIT capabilities, the first field includes four bits. 
The four bits may be, for example, a bit “T”, a bit “E”, a bit “D”, and a bit “A”, and respectively indicate support statuses corresponding to the path tracking in-situ operation, administration, and maintenance capability, the end-to-end in-situ operation, administration, and maintenance capability, the data direct exporting in-situ operation, administration, and maintenance capability, and the enhanced alternate marking in-situ operation, administration, and maintenance capability. For details, refer to Table 1. Table 1 is an example in which the iFIT capability is indicated by using four bits.

TABLE 1

Bit   Definition of the iFIT capability

T     Indicates the support status of the “path tracking in-situ operation, administration, and maintenance capability”:
      T = 0, indicating that the path tracking in-situ operation, administration, and maintenance capability is not supported
      T = 1, indicating that the path tracking in-situ operation, administration, and maintenance capability is supported

E     Indicates the support status of the “end-to-end in-situ operation, administration, and maintenance capability”:
      E = 0, indicating that the end-to-end in-situ operation, administration, and maintenance capability is not supported
      E = 1, indicating that the end-to-end in-situ operation, administration, and maintenance capability is supported

D     Indicates the support status of the “data direct exporting in-situ operation, administration, and maintenance capability”:
      D = 0, indicating that the data direct exporting in-situ operation, administration, and maintenance capability is not supported
      D = 1, indicating that the data direct exporting in-situ operation, administration, and maintenance capability is supported

A     Indicates the support status of the “enhanced alternate marking in-situ operation, administration, and maintenance capability”:
      A = 0, indicating that the enhanced alternate marking in-situ operation, administration, and maintenance capability is not supported
      A = 1, indicating that the enhanced alternate marking in-situ operation, administration, and maintenance capability is supported

It may be learned from Table 1 that the support statuses of the foregoing four iFIT capabilities may be effectively indicated by using different values of the four bits. It may be understood that, the foregoing descriptions are provided by using an example in which the quantity of bits included in the first field is four. In an actual case, there may alternatively be another quantity of bits included in the first field. For example, the first field may include one bit, two bits, five bits, and the like. When the quantity of bits in the first field is less than four, the quantity of bits in the first field corresponds to a quantity of iFIT capabilities that can be indicated by the first field. For example, the first field including only one bit may indicate only a support status of any one of the foregoing four iFIT capabilities. When the quantity of bits in the first field is greater than four, on the basis of indicating the foregoing four iFIT capabilities, a specific bit may be further reserved as a reserved bit, to facilitate subsequent extended use. For another example, when the iFIT capability includes the foregoing five iFIT capabilities, the first field may include five bits. The five bits may be, for example, a bit “P”, a bit “I”, a bit “E”, a bit “D”, and a bit “A”, and respectively indicate support statuses corresponding to the pre-allocated path tracking in-situ operation, administration, and maintenance capability, the incremental path tracking in-situ operation, administration, and maintenance capability, the end-to-end in-situ operation, administration, and maintenance capability, the data direct exporting in-situ operation, administration, and maintenance capability, and the enhanced alternate marking in-situ operation, administration, and maintenance capability. 
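The per-bit scheme of Table 1 can be sketched as a small flag decoder. The T/E/D/A semantics come from the table, but the specific bit positions chosen below are an assumption for illustration; the disclosure does not fix which physical bit carries which flag.

```python
# One flag per iFIT capability, per Table 1 (bit positions are illustrative).
FLAG_T = 0b1000  # path tracking
FLAG_E = 0b0100  # end-to-end
FLAG_D = 0b0010  # data direct exporting
FLAG_A = 0b0001  # enhanced alternate marking

def decode_first_field(first_field: int) -> dict:
    """Map the four-bit first field to the support status of each capability."""
    return {"path_tracking": bool(first_field & FLAG_T),
            "end_to_end": bool(first_field & FLAG_E),
            "direct_export": bool(first_field & FLAG_D),
            "alternate_marking": bool(first_field & FLAG_A)}

caps = decode_first_field(0b1010)  # T and D set: path tracking + direct export
```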
In another possible example, the first field includes a plurality of values, and each value in the first field indicates that the first network device is capable of or not capable of one or more of the foregoing four types of iFIT processing. For example, the first field includes four bits, and the four bits may indicate 16 different values such as “0000” to “1111”, that is, the first field may include 16 values. In this way, different values of the four bits may indicate any one or any combination of the four iFIT capabilities. For example, “0000” indicates that the foregoing four iFIT capabilities are not supported. “0001” to “0100” may respectively indicate that any one of the foregoing four iFIT capabilities is supported and that the other three of the foregoing four iFIT capabilities are not supported. “0101” to “1010” respectively indicate that any two of the foregoing four iFIT capabilities are supported and that the other two of the foregoing four iFIT capabilities are not supported. “1011” to “1110” respectively indicate that any three of the foregoing four iFIT capabilities are supported and that the other one of the foregoing four iFIT capabilities is not supported. “1111” indicates that the foregoing four capabilities are supported. It may be understood that, a quantity of values included in the first field may alternatively be determined based on an actual situation. For example, the first field may include four values, and each of the four values indicates that one of the foregoing four iFIT capabilities is supported. This embodiment does not limit how the first field indicates the support status of the iFIT capability by using different values. In this embodiment, there may be a plurality of cases in which the first network device is triggered to send the first packet to the second network device. Case 1: The iFIT capability of the first network device is changed. 
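The value-based scheme can be sketched as a lookup table in which each value of the first field names one agreed combination of capabilities. The enumeration below (no capabilities, then every single, pair, and triple combination, then all four) mirrors the grouping described in the text, but the exact value assigned to each combination within a group is an assumption for illustration.

```python
from itertools import combinations

CAPS = ("path_tracking", "end_to_end", "direct_export", "alternate_marking")

# 16 combinations, ordered by how many capabilities are supported:
# index 0 -> none, 1-4 -> singles, 5-10 -> pairs, 11-14 -> triples, 15 -> all.
VALUE_TABLE = [frozenset(combo)
               for size in range(len(CAPS) + 1)
               for combo in combinations(CAPS, size)]

def decode_value(value: int) -> frozenset:
    """Map a four-bit first-field value to its set of supported capabilities."""
    return VALUE_TABLE[value]
```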
For example, when the first network device obtains instructions for changing the iFIT capability, the first network device may determine that the iFIT capability of the first network device is changed. The first network device may generate the first packet based on the changed iFIT capability, send the first packet to the second network device, and notify the second network device of a support status of the changed iFIT capability. For example, the instructions for changing the iFIT capability may be delivered by operation and maintenance personnel to the first network device by entering a command line. Case 2: The first network device is a device newly deployed in the network. For example, the first network device may be a device directly deployed in the network in a network deployment process (for example, in an early stage of network deployment). Alternatively, the first network device may be a device newly deployed in the network by replacing an original device in the network in a network maintenance process. In this way, after the newly deployed first network device is powered on and started, the first network device may automatically trigger to send the first packet to the second network device, to notify the second network device of the support status of the iFIT capability of the first network device. It may be understood that, in addition to the foregoing cases, the first network device may alternatively be triggered to send the first packet to the second network device in another case. For example, a periodicity may be set to trigger the first network device to periodically send the first packet to the second network device. A case in which the first network device is triggered to send the first packet to the second network device is not specifically limited in this embodiment of this application. 
Step102: The second network device determines, based on the first packet, the support status of the iFIT capability corresponding to the first network device or the first service module. In correspondence to the first indication information carried in the first packet sent by the first network device, the second network device determines, based on the first indication information, the support status of the iFIT capability corresponding to the first network device or the first service module. In this embodiment, after determining the support status of the iFIT capability corresponding to the first network device or the first service module, the second network device may determine whether a measurement header can be encapsulated into a data packet. For example, after the second network device determines that the first network device supports the path tracking in-situ operation, administration, and maintenance capability, the second network device may encapsulate the measurement header into the data packet, and send, to the first network device, a data packet into which the measurement header is encapsulated, where the measurement header in the data packet indicates the first network device to perform path tracking in-situ operation, administration, and maintenance. For another example, after the second network device determines that the first network device does not support the path tracking in-situ operation, administration, and maintenance capability, the second network device directly forwards, to the first network device, the data packet into which the measurement header is not encapsulated. In this embodiment, the first network device sends the packet to the second network device, to notify the support status of the iFIT capability of the first network device. 
In this way, the second network device can determine, based on the iFIT capability of the first network device, whether to encapsulate the measurement header, to avoid affecting normal packet forwarding because the first network device cannot correctly process the measurement header in the packet.

The foregoing describes a process in which the first network device notifies the second network device of the support status of the iFIT capability of the first network device. The following describes in detail how the first network device notifies the support status of the iFIT capability of the first network device by using the first packet.

In a possible embodiment, the first packet may include a BGP packet, and the BGP packet may be, for example, a BGP update packet. The BGP packet includes a first attribute, and the first indication information is carried in the first attribute. For example, the first attribute may include a next-hop capability attribute, an extended community attribute, or a next-hop path attribute. A TLV is newly added to the first attribute, so that the support status of the iFIT capability corresponding to the first network device or the first service module may be carried.

It may be understood that, in addition to the foregoing first attribute, the BGP packet sent by the first network device to the second network device further includes a next hop, and the next hop in the BGP packet is an address of the first network device. In this way, after the second network device receives the BGP packet sent by the first network device, the second network device may determine, based on that the next hop in the BGP packet is the first network device, that a device indicated by the first indication information in the first attribute is the first network device, that is, determine that the iFIT capability of the first network device is the iFIT capability carried in the first attribute.
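The encapsulate-or-forward decision from step102can be sketched as follows; packets are modeled as plain dictionaries, and the field names are illustrative rather than taken from the specification:

```python
# Hedged sketch of the head-node decision: encapsulate the measurement
# header only when the first network device advertised the path tracking
# capability; otherwise return the data packet unchanged for direct
# forwarding. Dict keys are hypothetical.
def prepare_data_packet(data_packet, peer_supports_path_tracking):
    if peer_supports_path_tracking:
        return {"measurement_header": {"mode": "path_tracking"}, **data_packet}
    return data_packet
```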
In a possible example, the first attribute may further carry an originating IP address, and the originating IP address includes the address of the first network device. The originating IP address may be used for enhanced validity check, and the address of the first network device (namely, an address of an originating network device that sends the first packet) is indicated, to check whether the next hop carried in the first packet is the first network device that actually sends the first packet. Optionally, the originating IP address may also be referred to as next-hop information, that is, the first attribute may carry the next-hop information, and the next-hop information includes the address of the first network device.

It may be understood that, in some cases, the first network device and the second network device may not be directly connected, that is, another device may be further connected between the first network device and the second network device. For example, a route reflector is further connected between the first network device and the second network device. In this case, the first network device sends the first packet to the route reflector, and then the route reflector forwards the first packet to the second network device. In a process in which the route reflector forwards the first packet, the route reflector may modify the next hop in the first packet to an address of the route reflector (that is, modify the next hop from the first network device to the route reflector). As a result, the second network device cannot determine, based on the next hop in the first packet, a device corresponding to the iFIT capability carried in the first attribute.

For example,FIG.2is a schematic diagram of a packet sending procedure according to an embodiment of this application. As shown inFIG.2, in step201, a first network device sends a first packet to a route reflector, where a next hop in the first packet is an address of the first network device.
The first packet includes a first attribute used to carry an iFIT capability of the first network device, the first attribute includes next-hop information, and the next-hop information is also the address of the first network device. In step202, the route reflector modifies the next hop in the first packet to an address of the route reflector, but does not modify the first attribute in the first packet. In step203, the route reflector sends, to a second network device, the first packet in which the next hop is modified. In step204, the second network device determines, based on the next-hop information in the first attribute, that a device capable of iFIT processing is the first network device instead of the route reflector indicated in the next hop in the first packet. In other words, the next-hop information is carried in the first attribute in the first packet, so that when there is another network device between the first network device and the second network device, it can be ensured that the second network device can determine the network device corresponding to the iFIT capability carried in the first attribute.

In a possible example, the first attribute includes a next-hop path attribute, and the next-hop path attribute may carry support statuses of iFIT capabilities corresponding to a plurality of network devices. For example, before the first network device sends the first packet to the second network device, the method may further include: The first network device receives a second packet sent by a third network device, where the second packet includes second indication information, and the second indication information indicates a support status of an iFIT capability corresponding to the third network device. The first network device sends the first packet to the second network device based on the second indication information in the second packet, where the first packet further includes the second indication information.
In other words, the first packet sent by the first network device includes the first indication information and the second indication information, respectively indicating the support statuses of the iFIT capabilities corresponding to the first network device and the third network device. In this way, after the second network device receives the first packet, the second network device may learn of, based on the first packet, the support statuses of the iFIT capabilities corresponding to the first network device and the third network device, that is, the second network device may learn of, based on a same packet, support statuses of iFIT capabilities corresponding to a plurality of network devices.

The first indication information and the second indication information may be carried in the next-hop path attribute. For example, the next-hop path attribute may include a plurality of TLVs indicating iFIT capabilities, and the plurality of TLVs respectively carry iFIT capabilities corresponding to different network devices.

In a possible example, the next-hop path attribute may further include addresses of the first network device and the third network device, so that the second network device can determine a network device corresponding to the iFIT capability indicated in the next-hop path attribute. For example, the next-hop path attribute includes a TLV 1 and a TLV 2, the TLV 1 includes the address of the first network device and the support status of the corresponding iFIT capability, and the TLV 2 includes the address of the third network device and the support status of the corresponding iFIT capability. In this way, the second network device may determine, based on an address of a network device in a TLV, a support status of an iFIT capability corresponding to each network device.
For ease of understanding, the following describes, with reference to a specific example, a specific implementation in which the first network device notifies the support status of the iFIT capability of the first network device by using the attribute in the packet.

Example 1: A Next-Hop Capability Attribute in a BGP Packet Carries the Support Status of the iFIT Capability

For example,FIG.3Ais a schematic diagram of a packet format in which a next-hop capability attribute carries an iFIT capability, namely, first indication information, according to an embodiment of this application. As shown inFIG.3A, the next-hop capability attribute may include a capability code, a capability length, and a capability value. The capability code may indicate that the next-hop capability attribute carries the iFIT capability. The capability length may indicate a length of the capability value. The capability value may be used to carry the iFIT capability.

For example,FIG.3Bis a schematic diagram of a packet format of a capability value field according to an embodiment of this application. As shown inFIG.3B, the capability value may include the iFIT capability. Optionally, the capability value may further include an originating IP address. The iFIT capability may include a plurality of bits or a plurality of values, to indicate that the iFIT capability is supported and/or not supported. For details, refer to step101. Details are not described herein again. The originating IP address may indicate an IPv4 address or an IPv6 address of the first network device, to enhance validity check.

Example 2: An Extended Community Attribute in a BGP Packet Carries the iFIT Capability

For example, a possible format of an internet protocol version 4 (IPv4) extended community attribute in the BGP packet may be shown in Table 2.
TABLE 2

Type 1 | Sub-Type 1
iFIT Capability
Originating IPv4 Address

The type 1 and the sub-type 1 may indicate that the IPv4 extended community attribute is used to carry the iFIT capability. The iFIT capability field is used to carry a support status of the iFIT capability. The iFIT capability field is similar to the foregoing capability value, and details are not described herein again. Optionally, the IPv4 extended community attribute may further include an originating IPv4 address, and the originating IPv4 address may be used to carry an originating IP address of a network device that sends a packet, namely, an IPv4 address of the first network device.

A possible format of an IPv6 extended community attribute in the BGP packet may be shown in Table 3.

TABLE 3

Type 1 | Sub-Type 2
iFIT Capability
Originating IPv6 Address
Originating IPv6 Address (continued)
Originating IPv6 Address (continued)
Originating IPv6 Address (continued)

Similarly, the type 1 and the sub-type 2 may indicate that the IPv6 extended community attribute is used to carry the iFIT capability. Optionally, the IPv6 extended community attribute may further include a next-hop IPv6 field, and the next-hop IPv6 field may be used to carry IPv6 next-hop information.

Example 3: A Next-Hop Path Attribute in a BGP Packet Carries the iFIT Capability

For example,FIG.4is a schematic diagram of a format of a TLV in a next-hop path attribute according to an embodiment of this application. As shown inFIG.4, the TLV includes a type, a length, and a value. The value part in the TLV may include a reserved bit, an originating IP address, and iFIT capabilities. The TLV may indicate a support status of an iFIT capability corresponding to a network device. In the TLV, "Type" indicates a type of the TLV; "Length" indicates length information of the TLV; "Reserved" is reserved for extended use; "Originating IP Address" indicates an IP address of the network device that sends the TLV; and "iFIT Capability" is used to carry the iFIT capability.
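Under illustrative assumptions about field sizes (the text names the TLV fields but not their on-wire widths), the TLV of FIG.4can be packed as follows; the two-byte type and length, the one-byte reserved field, and the T/E/D/A bit positions are all hypothetical choices for the sketch:

```python
import ipaddress
import struct

# Hypothetical bit assignments for the four capability flags seen in the
# tables of this description (T = path tracking, E = end to end; D and A
# are carried but not further defined in the text).
FLAGS = {"T": 0x8, "E": 0x4, "D": 0x2, "A": 0x1}

def encode_ifit_tlv(tlv_type, originating_ip, capabilities):
    """Build the TLV of FIG. 4: Type, Length, Reserved, Originating IP
    Address, iFIT Capability. Field sizes are assumptions."""
    flags = 0
    for name, bit in FLAGS.items():
        if capabilities.get(name):
            flags |= bit
    value = (b"\x00"                                       # reserved byte
             + ipaddress.IPv4Address(originating_ip).packed  # originating IP
             + bytes([flags]))                              # capability flags
    return struct.pack("!HH", tlv_type, len(value)) + value

# ABR-a1 from Table 4: supports only path tracking (T = 1).
tlv = encode_ifit_tlv(1, "10.1.3.1", {"T": 1, "E": 0, "D": 0, "A": 0})
```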
In a possible example, in a process of advertising a route by using the BGP packet, the network device may include the foregoing TLV in the next-hop path attribute in the BGP packet, to indicate the support status of the corresponding iFIT capability. In a process of forwarding the BGP packet hop by hop, another network device responsible for forwarding the BGP packet may newly add a TLV to the next-hop path attribute in the BGP packet, to carry a support status of a corresponding iFIT capability. In other words, the next-hop path attribute in the BGP packet may carry a plurality of TLVs in a route advertisement sequence, to indicate support statuses of iFIT capabilities corresponding to different network devices.

For example,FIG.5is a schematic diagram of route advertisement according to an embodiment of this application. As shown inFIG.5, a provider edge router (PE)2advertises a route by using a BGP packet, and the route sequentially passes through an area border router (ABR)-c2, an ABR-b2, an ABR-b1, and an ABR-a1, and finally reaches a PE1. In a process of forwarding the BGP packet hop by hop, each ABR along the path adds an iFIT capability corresponding to the ABR to a next-hop path attribute in the BGP packet by using the TLV shown inFIG.4. Finally, the PE1may obtain iFIT capability information supported by each device in the BGP packet advertisement path. In a possible example, the iFIT capability information obtained by the PE1may be shown in Table 4.

TABLE 4

Originating IP address    iFIT capabilities
10.1.3.1 (ABR-a1)         T = 1, E = 0, D = 0, A = 0
10.1.4.1 (ABR-b1)         T = 1, E = 0, D = 0, A = 0
10.1.6.1 (ABR-b2)         T = 0, E = 1, D = 0, A = 0
10.1.8.1 (ABR-c2)         T = 0, E = 1, D = 0, A = 0
10.1.8.1 (PE 2)           T = 1, E = 1, D = 0, A = 0

It can be learned from Table 4 that each network device that sends the BGP packet along the path sequentially adds an IP address and an iFIT capability of the network device to the next-hop path attribute in the BGP packet.
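The hop-by-hop accumulation shown in FIG.5, together with a head node's subsequent filtering of eligible tail nodes, can be sketched minimally; tuples stand in for the on-wire TLVs, only the T and E flags are modeled, and the addresses follow Table 4:

```python
# Each device along the advertisement path appends its own
# (originating IP, iFIT capability) entry to the next-hop path attribute.
def append_ifit_tlv(next_hop_path_attr, originating_ip, capabilities):
    next_hop_path_attr.append((originating_ip, capabilities))

# Advertisement path of FIG. 5, from PE 2 toward PE 1.
attr = []
append_ifit_tlv(attr, "10.1.8.1", {"T": 1, "E": 1})  # PE 2 originates
append_ifit_tlv(attr, "10.1.8.1", {"T": 0, "E": 1})  # ABR-c2
append_ifit_tlv(attr, "10.1.6.1", {"T": 0, "E": 1})  # ABR-b2
append_ifit_tlv(attr, "10.1.4.1", {"T": 1, "E": 0})  # ABR-b1
append_ifit_tlv(attr, "10.1.3.1", {"T": 1, "E": 0})  # ABR-a1

# The head node keeps only devices whose flag for the wanted measurement
# mode is set: "T" for path tracking, "E" for end to end.
def eligible_tail_nodes(next_hop_path_attr, mode):
    return [ip for ip, caps in next_hop_path_attr if caps.get(mode) == 1]
```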
In this way, the PE1may determine, based on the obtained iFIT capability information, a specific network device that is to be used as a tail node to initiate iFIT measurement. In addition, the PE1may determine a sending sequence of the BGP packet based on an addition sequence of the iFIT capability in the BGP packet. For example, the PE1may determine, based on the addition sequence of the iFIT capability, that an IP address originally carried in the BGP packet is an originating IP address of the network device that originally advertises the BGP packet, that is, determine that 10.1.8.1 is the originating IP address of the network device that originally advertises the BGP packet.

For example, the PE1determines, based on the iFIT capability information shown in Table 4, that the iFIT capability of the ABR-a1 is: T=1, E=0, D=0, A=0, that is, the ABR-a1 supports only the path tracking in-situ operation, administration, and maintenance capability. Therefore, the PE1may use the PE1as a head node and use the ABR-a1 as a tail node, to initiate path tracking in-situ operation, administration, and maintenance. For another example, the PE1determines, based on the iFIT capability information shown in Table 4, that the iFIT capability of the PE2is: T=1, E=1, D=0, A=0, that is, the PE2supports both the path tracking in-situ operation, administration, and maintenance capability and the end-to-end in-situ operation, administration, and maintenance capability. Therefore, the PE1may use the PE1as a head node and use the PE2as a tail node, to initiate path tracking in-situ operation, administration, and maintenance or end-to-end in-situ operation, administration, and maintenance.

Example 4: A Service Module of the First Network Device Notifies the Support Status of an iFIT Capability

In an example, the service module may be an SR module, a tunnel module, or another service module that needs to notify an iFIT capability.
This is not specifically limited in this application. This solution is applied to a network that supports segment routing (SR). The foregoing first packet is a BGP packet, and may carry segment routing over internet protocol version 6 (SRv6) virtual private network (VPN) information, to notify the iFIT capability. For example, a TLV, for example, a locator iFIT TLV, is newly added to the BGP packet, so that the iFIT capability may be carried.

FIG.6is a schematic diagram of a format of a locator iFIT TLV according to an embodiment of this application. As shown inFIG.6, the locator iFIT TLV includes a plurality of fields, which are respectively a locator-prefix, a locator-mask, and iFIT capabilities. A length of the locator-prefix is 128 bits, and a value of the locator-mask ranges from 1 to 128. Based on the locator-prefix and the locator-mask, for example, after an "AND" operation is performed on the locator-prefix and the locator-mask, a value of a locator may be obtained, and the value is an IPv6 address network segment. In an SR network, in this form, a head node may monitor traffic by using an address network segment, to implement end-to-end overall traffic performance detection.

In the foregoing descriptions, when the first network device notifies the support status of the iFIT capability by using the service module of the first network device, the second network device may perform iFIT detection on only the corresponding service module. This prevents a tail node from receiving excessive iFIT packets and being attacked.

The foregoing describes a process in which the first network device notifies, by using the attribute in the packet, the support status of the iFIT capability of the first network device. The following describes a process in which the second network device initiates iFIT measurement after the first network device notifies the second network device of the iFIT capability supported by the first network device.
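Before moving on, the locator derivation in Example 4 (ANDing the 128-bit locator-prefix with a mask of locator-mask leading one bits) can be sketched as an illustrative computation; the prefix value used below is an example address, not one from this description:

```python
import ipaddress

def locator_from_prefix(locator_prefix, locator_mask_len):
    """Derive the locator (an IPv6 address network segment) by ANDing the
    128-bit locator-prefix with a mask of locator_mask_len leading ones,
    as described for the locator iFIT TLV."""
    prefix = int(ipaddress.IPv6Address(locator_prefix))
    mask = ((1 << locator_mask_len) - 1) << (128 - locator_mask_len)
    return ipaddress.IPv6Address(prefix & mask)

segment = locator_from_prefix("2001:db8:1:2:3::", 64)
```

Here any bits of the prefix beyond the mask length are zeroed, which is what yields a network segment the head node can use to monitor traffic.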
FIG.7is a schematic flowchart of an iFIT measurement method700according to an embodiment of this application. As shown inFIG.7, based on the capability notification method shown inFIG.1, this embodiment of this application further provides the iFIT measurement method700. The iFIT measurement method includes the following steps.

Step701: A second network device obtains a data packet. The data packet may be a packet to be forwarded after the second network device receives the data packet, and the second network device needs to forward the data packet to a first network device, to ensure that the data packet can be correctly forwarded to a destination.

Step702: The second network device encapsulates an iFIT measurement header into the data packet based on a support status of an iFIT capability corresponding to the first network device or a first service module. In this embodiment, the first network device or the first service module has a supported iFIT capability (for example, one or more of the foregoing four iFIT capabilities), and the second network device encapsulates the corresponding iFIT measurement header into the data packet based on the iFIT capability supported by the first network device or the first service module, to initiate corresponding iFIT measurement. The second network device may use the second network device as a head node and the first network device as a tail node. For example, the second network device encapsulates an identifier of the first network device into the data packet, so that the first network device can determine that the first network device is the tail node.

Step703: The second network device sends, to the first network device, a data packet into which the iFIT measurement header is encapsulated.

Step704: The first network device performs iFIT measurement based on the iFIT measurement header.

Step705: The first network device strips the iFIT measurement header from the data packet.
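The steps above can be sketched end to end as follows; packets and the measurement header are modeled as plain dictionaries, and the identifiers are illustrative:

```python
def head_node_send(data_packet, tail_supports_ifit, tail_id):
    """Steps 702-703: encapsulate the measurement header (carrying the
    tail node's identifier) only when the tail supports iFIT, then send."""
    if tail_supports_ifit:
        data_packet["ifit_header"] = {"tail": tail_id}
    return data_packet

def tail_node_process(data_packet, my_id):
    """Steps 704-705: measure if the header names this device as the tail
    node, then strip the header and forward the bare packet onward."""
    header = data_packet.pop("ifit_header", None)
    measured = header is not None and header.get("tail") == my_id
    return data_packet, measured
```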
In this embodiment, after the first network device receives the data packet into which the iFIT measurement header is encapsulated, the first network device may identify the iFIT measurement header encapsulated into the data packet, to perform data measurement and reporting based on the iFIT measurement header. In addition, the first network device may identify that the first network device is the tail node for iFIT measurement. Therefore, after stripping the iFIT measurement header from the data packet, the first network device continues to forward the data packet from which the iFIT measurement header is stripped.

The method100and the method700in embodiments of this application are described above. Network devices in embodiments of this application are described below. The network devices described below have any function of the first network device or the second network device in the method100or the method700.

FIG.8is a schematic diagram of a structure of a network device800according to an embodiment of this application. As shown inFIG.8, the network device800includes: a sending unit801, configured to perform step101or703; a processing unit802, configured to perform step102,702,704, or705; and a receiving unit803, configured to perform step701. The network device800corresponds to the first network device or the second network device in the foregoing method embodiments. The units in the network device800and the foregoing other operations and/or functions are respectively used to implement various steps and methods that are performed by the first network device or the second network device in the method embodiments. For specific details, refer to the method100or the method700. For brevity, details are not described herein again.

When the network device800processes a packet, division into the foregoing functional modules is used as an example for description.
During actual application, the foregoing functions may be allocated to different functional modules as required, that is, an internal structure of the network device800is divided into different functional modules, to implement all or a part of the functions described above. In addition, the network device800provided in the foregoing embodiment belongs to a same concept as the method in the embodiment corresponding toFIG.1orFIG.7. For a specific implementation process of the network device800, refer to the method100or the method700. Details are not described herein again.

In correspondence to the method embodiments and the virtual apparatus embodiment provided in this application, an embodiment of this application further provides a network device. The following describes a hardware structure of the network device. The network device900or the network device1000described below corresponds to the first network device or the second network device in the foregoing method embodiments. Hardware and modules in the network device900or the network device1000and the foregoing other operations and/or functions are respectively used to implement various steps and methods that are performed by the first network device or the second network device in the method embodiments. For a detailed procedure of how the network device900or the network device1000performs capability notification, refer to the foregoing method embodiments. For brevity, details are not described herein again.

The steps of the method100or the method700are completed by using an integrated logic circuit of hardware in a processor of the network device900or the network device1000or instructions in a form of software. The steps of the method disclosed with reference to embodiments of this application may be directly performed by a hardware processor, or may be performed by a combination of hardware and software modules in the processor.
The software module may be located in a mature storage medium in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads information in the memory and completes the steps in the foregoing methods in combination with hardware of the processor. To avoid repetition, details are not described herein again.

The network device900or the network device1000corresponds to the network device800in the foregoing virtual apparatus embodiment, and each functional module in the network device800is implemented by using software of the network device900or the network device1000. In other words, the functional module included in the network device800is generated after the processor of the network device900or the network device1000reads program code stored in the memory.

FIG.9is a schematic diagram of a structure of a network device900according to an example embodiment of this application. The network device900may be configured as the first network device or the second network device. The network device900may be implemented by a general bus architecture. The network device900includes at least one processor901, a communication bus902, a memory903, and at least one communication interface904.

The processor901may be a general-purpose CPU, an NP, or a microprocessor, or may be one or more integrated circuits configured to implement the solutions of this application, for example, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field programmable gate array (FPGA), a generic array logic (GAL), or any combination thereof.

The communication bus902is configured to transmit information between the foregoing components.
The foregoing communication bus902may be classified into an address bus, a data bus, a control bus, and the like. For ease of representation, the bus is indicated by using only one bold line in the figure. However, this does not mean that there is only one bus or only one type of bus.

The memory903may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, or may be an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or another optical disk storage, an optical disc storage (including a compact disc, a laser disc, an optical disc, a digital versatile disc (DVD), a Blu-ray disc, or the like), a magnetic disk storage medium or another magnetic storage device, or any other medium that can be used to carry or store expected program code in a form of instructions or a data structure and that can be accessed by a computer. However, the memory903is not limited thereto. The memory903may exist independently, and is connected to the processor901through the communication bus902. Alternatively, the memory903may be integrated with the processor901.

The communication interface904uses any apparatus such as a transceiver, and is configured to communicate with another device or a communication network. The communication interface904includes a wired communication interface, and may further include a wireless communication interface. The wired communication interface may be, for example, an Ethernet interface. The Ethernet interface may be an optical interface, an electrical interface, or a combination thereof. The wireless communication interface may be a wireless local area network (WLAN) interface, a cellular network communication interface, or a combination thereof.
During specific implementation, in an embodiment, the processor901may include one or more CPUs such as a CPU0and a CPU1inFIG.9.

During specific implementation, in an embodiment, the network device900may include a plurality of processors, such as the processor901and a processor905shown inFIG.9. Each of these processors may be a single-core processor (single-CPU) or a multi-core processor (multi-CPU). The processor herein may refer to one or more devices, circuits, and/or processing cores configured to process data (for example, computer program instructions).

During specific implementation, in an embodiment, the network device900may further include an output device906and an input device907. The output device906communicates with the processor901, and may display information in a plurality of manners. For example, the output device906may be a liquid crystal display (LCD), a light emitting diode (LED) display device, a cathode ray tube (CRT) display device, or a projector. The input device907communicates with the processor901, and may receive a user input in a plurality of manners. For example, the input device907may be a mouse, a keyboard, a touchscreen device, or a sensor device.

In some embodiments, the memory903is configured to store program code910for executing the solutions of this application, and the processor901may execute the program code910stored in the memory903. In other words, the network device900may implement the method100or the method700provided in the method embodiments through the processor901and the program code910in the memory903. The network device900in this embodiment of this application may correspond to the first network device or the second network device in the foregoing method embodiments.
In addition, the processor901, the communication interface904, and the like in the network device900may implement functions of the first network device or the second network device and/or various steps and methods that are performed by the first network device or the second network device in the foregoing method embodiments. For brevity, details are not described herein again. The sending unit801and the receiving unit803in the network device800are equivalent to the communication interface904in the network device900. The processing unit802in the network device800may be equivalent to the processor901in the network device900.

FIG.10is a schematic diagram of a structure of a network device1000according to an example embodiment of this application. The network device1000may be configured as the first network device or the second network device in the method100or the method700in the foregoing method embodiments. The network device1000includes a main control board1010and an interface board1030.

The main control board1010is also referred to as a main processing unit (MPU) or a route processor card. The main control board1010is configured to control and manage components in the network device1000, including functions of route calculation, device management, device maintenance, and protocol processing. The main control board1010includes a central processing unit1011and a memory1012.

The interface board1030is also referred to as a line processing unit (LPU), a line card, or a service board. The interface board1030is configured to provide various service interfaces and implement data packet forwarding. The service interface includes but is not limited to an Ethernet interface, a packet over SONET/SDH (POS) interface, and the like. The Ethernet interface is, for example, a flexible Ethernet client (FlexE Client). The interface board1030includes a central processing unit1031, a network processor1032, a forwarding entry memory1034, and a physical interface card (PIC)1033.
The central processing unit1031on the interface board1030is configured to control and manage the interface board1030and communicate with the central processing unit1011on the main control board1010.

The network processor1032is configured to implement packet forwarding processing. A form of the network processor1032may be a forwarding chip. Specifically, the network processor1032is configured to: forward a received packet based on a forwarding table stored in the forwarding entry memory1034; and if a destination address of the packet is an address of the network device1000, upload the packet to a CPU (for example, the central processing unit1011) for processing; or if a destination address of the packet is not an address of the network device1000, search for a next hop and an outbound interface corresponding to the destination address from the forwarding table based on the destination address, and forward the packet to the outbound interface corresponding to the destination address. Uplink packet processing includes packet inbound interface processing and forwarding table searching, and downlink packet processing includes forwarding table searching, and the like.

The physical interface card1033is configured to implement a physical layer interconnection function. Original traffic enters the interface board1030, and a packet obtained after processing is sent out from the physical interface card1033. The physical interface card1033is also referred to as a subcard, may be installed on the interface board1030, and is responsible for converting an optical/electrical signal into a packet, performing validity check on the packet, and then forwarding the packet to the network processor1032for processing. In some embodiments, the central processing unit may also perform a function of the network processor1032, for example, implement software forwarding based on a general-purpose CPU, so that the network processor1032is not required in the physical interface card1033.
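The forwarding decision described for the network processor1032can be sketched as follows; the addresses and table entries below are illustrative:

```python
def forward_packet(dst, device_addresses, forwarding_table):
    """Sketch of the network-processor decision: a packet whose destination
    is an address of this device is uploaded to the CPU; any other packet
    is looked up in the forwarding table for its next hop and outbound
    interface."""
    if dst in device_addresses:
        return ("to_cpu", None)
    next_hop, outbound_if = forwarding_table[dst]
    return ("forward", outbound_if)
```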
Optionally, the network device1000includes a plurality of interface boards. For example, the network device1000further includes an interface board1040. The interface board1040includes a central processing unit1041, a network processor1042, a forwarding entry memory1044, and a physical interface card1043. Optionally, the network device1000further includes a switching board1020. The switching board1020may also be referred to as a switch fabric unit (SFU). When the network device has a plurality of interface boards1030, the switching board1020is configured to implement data exchange between the interface boards. For example, the interface board1030and the interface board1040may communicate with each other through the switching board1020. The main control board1010is coupled to the interface board1030. For example, the main control board1010, the interface board1030, the interface board1040, and the switching board1020are connected to a system backplane through a system bus for interconnection. In a possible implementation, an inter-process communication (IPC) channel is established between the main control board1010and the interface board1030, and communication is performed between the main control board1010and the interface board1030through the IPC channel. Logically, the network device1000includes a control plane and a forwarding plane. The control plane includes the main control board1010and the central processing unit1031. The forwarding plane includes components that perform forwarding, such as the forwarding entry memory1034, the physical interface card1033, and the network processor1032. The control plane performs functions such as route calculation, forwarding table generation, signaling and protocol packet processing, and device status configuration and maintenance. The control plane delivers a generated forwarding table to the forwarding plane.
On the forwarding plane, the network processor1032searches for and forwards, based on the forwarding table delivered by the control plane, the packet received by the physical interface card1033. The forwarding table delivered by the control plane may be stored in the forwarding entry memory1034. In some embodiments, the control plane and the forwarding plane may be completely separate and not on a same device. If the network device1000is configured as a first network device, the network processor1032may generate a first packet, and send the first packet out from the physical interface card1033, so that the first packet is transmitted to a second network device. If the network device1000is configured as a second network device, the physical interface card1033receives a first packet, and sends the first packet to the network processor1032, so that the network processor1032obtains, from the first packet, a support status of an iFIT capability of a first network device. The sending unit801and the receiving unit803in the network device800may be equivalent to the physical interface card1033in the network device1000. The processing unit802in the network device800may be equivalent to the network processor1032or the central processing unit1011. In this embodiment of this application, an operation on the interface board1040is the same as an operation on the interface board1030. For brevity, details are not described again. The network device1000in this embodiment may correspond to the first network device or the second network device in the foregoing method embodiments. The main control board1010, the interface board1030, and/or the interface board1040in the network device1000may implement functions of the first network device or the second network device and/or various steps that are performed by the first network device or the second network device in the foregoing method embodiments. For brevity, details are not described herein again.
It should be noted that there may be one or more main control boards. When there are a plurality of main control boards, an active main control board and a standby main control board may be included. There may be one or more interface boards. The stronger the data processing capability of the network device, the more interface boards are provided. There may also be one or more physical interface cards on the interface board. There may be no switching board or one or more switching boards. When there are a plurality of switching boards, load sharing and redundancy backup may be jointly implemented by the plurality of switching boards. In a centralized forwarding architecture, the network device may not require the switching board, and the interface board processes service data of the entire system. In a distributed forwarding architecture, the network device may have at least one switching board, and data exchange between a plurality of interface boards is implemented through the switching board, to provide a large-capacity data exchange and processing capability. Therefore, a data access and processing capability of the network device in the distributed architecture is greater than that of the device in the centralized architecture. Optionally, a form of the network device may also be that there is only one board, that is, there is no switching board, and functions of the interface board and the main control board are integrated into the board. In this case, a central processing unit on the interface board and a central processing unit on the main control board may be combined into one central processing unit on the board, which performs the combined functions of the two. A data exchange and processing capability of the device in this form is low (for example, a network device such as a low-end switch or a router). A specific architecture that is to be used depends on a specific networking deployment scenario. This is not limited herein.
In some possible embodiments, the first network device or the second network device may be implemented as a virtualization device. For example, the virtualization device may be a virtual machine (VM) running a program that has a function for sending a packet, and the virtual machine is deployed on a hardware device (for example, a physical server). The virtual machine is a complete software-simulated computer system that has complete hardware system functions and that runs in an entirely isolated environment. The virtual machine may be configured as the first network device or the second network device. For example, the first network device or the second network device may be implemented based on a general-purpose physical server in combination with a network function virtualization (NFV) technology. The first network device or the second network device may be a virtual host, a virtual router, or a virtual switch. By reading this application, a person skilled in the art may virtualize, on the general-purpose physical server with reference to the NFV technology, the first network device or the second network device having the foregoing functions. Details are not described herein. For example, the virtualization device may be a container, and the container is an entity configured to provide an isolated virtualization environment. For example, the container may be a docker container. The container may be configured as the first network device or the second network device. For example, the first network device or the second network device may be created by using a corresponding image. For example, two container instances may be created for a proxy-container by using an image of the proxy-container (a container that provides a proxy service), and are respectively a container instance proxy-container1and a container instance proxy-container2.
The container instance proxy-container1is provided as the first network device or a first computing device, and the container instance proxy-container2is provided as the second network device or a second computing device. When a container technology is used for implementation, the first network device or the second network device may run by using a kernel of a physical machine, and a plurality of first network devices or second network devices may share an operating system of the physical machine. Different first network devices or second network devices may be isolated by using the container technology. The containerized first network device or second network device may run in the virtualization environment, for example, may run in the virtual machine, or the containerized first network device or second network device may directly run in the physical machine. For example, the virtualization device may be a Pod, where the Pod is the basic unit in Kubernetes (an open source container orchestration engine of Google, K8s for short) for deploying, managing, and orchestrating a containerized application. The Pod may include one or more containers. Each container in a same Pod is usually deployed on a same host. Therefore, each container in the same Pod may communicate through the host, and may share a storage resource and a network resource of the host. The Pod may be configured as the first network device or the second network device. For example, specifically, a container as a service (CaaS, a container-based PaaS service) may be instructed to create the Pod, and the Pod is provided as the first network device or the second network device. Certainly, the first network device or the second network device may alternatively be another virtualization device. This is not enumerated herein one by one.
In some possible embodiments, the first network device or the second network device may alternatively be implemented by a general-purpose processor. For example, a form of the general-purpose processor may be a chip. Specifically, the general-purpose processor that implements the first network device or the second network device includes a processing circuit, and an input interface and an output interface that are internally connected to and communicate with the processing circuit. The processing circuit is configured to perform the packet generation step in the foregoing method embodiments. The processing circuit is configured to perform the receiving step in the foregoing method embodiments through the input interface. The processing circuit is configured to perform the sending step in the foregoing method embodiments through the output interface. Optionally, the general-purpose processor may further include a storage medium, and the processing circuit is configured to perform the storage step in the foregoing method embodiments through the storage medium. The storage medium may store instructions executed by the processing circuit, and the processing circuit is configured to execute the instructions stored in the storage medium to perform the foregoing method embodiments. Refer toFIG.11. An embodiment of this application provides a network system1100. The system1100includes a first network device1101and a second network device1102. Optionally, the first network device1101is, for example, the network device800, the network device900, or the network device1000, and the second network device1102is the network device800, the network device900, or the network device1000. An embodiment of this application provides a computer program product.
When the computer program product runs on a first network device or a second network device, the first network device or the second network device is enabled to perform the method100or the method700in the foregoing method embodiments. The network devices in the foregoing product forms separately have any function of the first network device or the second network device in the foregoing method embodiments. Details are not described herein again. A person of ordinary skill in the art may be aware that, the method steps and units described in embodiments disclosed in this specification may be implemented by electronic hardware, computer software, or a combination thereof. To clearly describe the interchangeability between the hardware and the software, the foregoing has generally described steps and compositions of each embodiment according to functions. Whether the functions are performed by hardware or software depends on particular applications and design constraint conditions of the technical solutions. A person of ordinary skill in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of this application. It may be clearly understood by a person skilled in the art that, for the purpose of convenient and brief description, for a detailed working process of the foregoing system, apparatus, and unit, refer to a corresponding process in the foregoing method embodiment. Details are not described herein again. In the several embodiments provided in this application, the disclosed system, apparatus and method may be implemented in other manners. For example, the described apparatus embodiments are merely examples. For example, the unit division is merely logical function division and may be other division in actual implementation. 
For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electrical, mechanical, or other forms. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one location, or may be distributed on a plurality of network units. Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions of embodiments of this application. In addition, functional units in embodiments of this application may be integrated into one processing unit, each of the units may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in a form of hardware, or may be implemented in a form of a software functional unit. When the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, the integrated unit may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or all or some of the technical solutions may be implemented in a form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods in embodiments of this application. 
The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc. The foregoing descriptions are merely specific embodiments of this application, but are not intended to limit the protection scope of this application. Any modification or replacement readily figured out by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims. All or some of the foregoing embodiments may be implemented by using software, hardware, firmware, or any combination thereof. When software is used to implement embodiments, all or some of embodiments may be implemented in a form of a computer program product. The computer program product includes one or more computer program instructions. When the computer program instructions are loaded and executed on a computer, all or some of the procedures or functions according to embodiments of this application are generated. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or may be transmitted from a computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless manner. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. 
The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital video disc (DVD)), a semiconductor medium (for example, a solid-state drive (SSD)), or the like. A person of ordinary skill in the art may understand that all or some of the steps of embodiments may be implemented by hardware or a program instructing related hardware. The program may be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like. The foregoing descriptions are merely optional embodiments of this application, but are not intended to limit this application. Any modification, equivalent replacement, or improvement made without departing from the spirit and principle of this application should fall within the protection scope of this application.
11943100
DETAILED DESCRIPTION FIGS.1through22, discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged system or device. Operation principles of the disclosure will now be described more fully with reference to the accompanying drawings. While describing the disclosure, detailed description of related well-known functions or configurations may be omitted when it is deemed that they may unnecessarily obscure the essence of the disclosure. Also, terms used below are defined in consideration of functions in the disclosure, and may have different meanings according to an intention of a user or operator, customs, or the like. Thus, the terms should be defined based on the description throughout the specification. Throughout the disclosure, the expression “at least one of a, b or c” indicates only a, only b, only c, both a and b, both a and c, both b and c, all of a, b, and c, or variations thereof. Examples of a terminal may include a user equipment (UE), a mobile station (MS), a cellular phone, a smartphone, a computer, a multimedia system capable of performing a communication function, or the like. In the disclosure, a controller may also be referred to as a processor. Throughout the specification, a layer (or a layer apparatus) may also be referred to as an entity. As used in the following description, terms identifying access nodes, terms indicating network entities, terms indicating messages, terms indicating interfaces between network entities, terms indicating various types of identification information, etc. are exemplified for convenience of explanation. 
Accordingly, the disclosure is not limited to terms to be described later, and other terms representing objects having the equivalent technical meaning may be used. Hereinafter, for convenience of description, the disclosure uses terms and names defined in a 5th generation (5G) system (5GS) standard and a New Radio (NR) standard, which are the latest standards defined in the 3rd Generation Partnership Project (3GPP), from among current communication standards. However, the disclosure is not limited to the terms and names but may also be identically applied to wireless communication networks that comply with other standards. In particular, the disclosure may be applied to the 3GPP 5GS/NR (5G mobile communications standard). Time synchronization of related nodes is needed to support scenarios such as factory automation. In particular, in a situation requiring precise work, the accuracy of time synchronization needs to be high. When using Ethernet for industrial purposes, time sensitive networking (TSN) technology, which is a method of supporting time synchronization between Ethernet-connected nodes, has been studied and has been commercially used. FIG.1is a view for explaining a time synchronization principle on Ethernet of TSN, according to an embodiment of the disclosure. In a manufacturing site, various networks may be used according to the configuration requirements of equipment and systems necessary for each factory. When networks with different standards are used, there may be difficulties in flexible system management. According to an embodiment of the disclosure, an apparatus and method of synchronizing times between TSN nodes when the TSN nodes operate according to different time standards are provided. Nodes of time sensitive networking over Ethernet (hereinafter, TSN nodes) may determine a Grand Master (GM), which serves as a time reference.
When TSN Node0is determined to be a GM, TSN Node0may generate Sync Frame by inserting a current time of the GM in a Timestamp field and filling a Correction field with 0, and may transmit the generated Sync Frame to a next node. TSN Node1, which is the next node, may receive Sync Frame subjected to Link Delay1, and update the Correction field by taking into account up to Residence Time1as a time during which the received Sync Frame resides in TSN Node1, thereby generating Sync Frame and transmitting the same to TSN Node2which is a next node. TSN Node2may receive Sync Frame subjected to Link Delay2, and update the Correction field by taking into account up to Residence Time2as a time during which the received Sync Frame resides in TSN Node2, thereby generating Sync Frame and transmitting the same to a next node (not shown). Each node periodically measures a delay time for a link with a previous node and calculates and manages an average of measured delay times. In addition, each node may have a method of calculating a residence duration in its own node. FIG.2illustrates a scenario of supporting TSN time synchronization of a 5G network, according to an embodiment of the disclosure. In detail,FIG.2illustrates a factory automation scenario that supports mobility of a UE, to which the 5G network has been applied. In this case, the 5G network may support TSN. Referring toFIG.2, Actuator A may include a UE-side component. For example, Actuator A may include a UE, and a TSN Node connected to the UE. A 3GPP network may include a base station and some components of a core network. For example, the 3GPP network may include, but is not limited to, a UE, a gNB, and a user plane function (UPF). Referring toFIG.2, the 3GPP network may be connected to a factory network. The factory network may include a TSN Ethernet switch and a controller B connected via a wired network, but embodiments are not limited thereto. The factory network may include other components. 
FIG.3illustrates a method in which the 5G network supports TSN time synchronization, according to an embodiment of the disclosure. FIG.3is a schematic diagram for explaining a method in which the 5G network supports TSN in such a situation as shown inFIG.2. In the disclosure, the 5G network refers to a network including a UE, a gNB, and a UPF. In detail, the 5G network including a UE, a gNB, and a UPF is modelled as one TSN Bridge (TSN node) ofFIG.1. In other words, UPF-gNB-UE as the 5G network may operate as a single TSN node, and this TSN node may support TSN by updating Sync Frame by correcting Link Delay and Residence Time. To this end, it is assumed that the UPF, the gNB, and the UE within the 5G network are synchronized with a common 5G GM. For example, the gNB may be connected to a GPS, the UPF may be connected to the gNB via Ethernet-based TSN and synchronized with the gNB, and the UE may be synchronized with the gNB by transmitting and receiving PHY Frame to and from the gNB. The UPF may be connected to a TSN node of a wired network, and the UE may also be connected to a TSN node of a wired network. Referring toFIG.3, because the GM of TSN exists in the TSN node connected to the UPF, the UPF receives Sync Frame from the TSN node connected to the UPF. The UPF records Ingress Time of the received Sync Frame as a time point based on the 5G GM. The UPF may periodically calculate and manage Link Delay with the TSN node connected to the UPF. The UPF may deliver Sync Frame including Ingress Time and Link Delay to the UE. The UE may calculate Residence Time, which is a residence time within the 5G network, up to the 5G GM-based time of the moment when Sync Frame is transmitted to the TSN node connected to the UE. For example, the UE may calculate Residence Time and Link Delay based on the Ingress Time recorded as a 5G GM-based time and on the 5G GM-based time of the moment when Sync Frame is transmitted to the TSN Node.
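The behavior of the 5G network as a single logical TSN bridge may be sketched as follows for illustration; the dictionary-based frame, the field names, and the nanosecond units are hypothetical simplifications of the procedure described above.

```python
def upf_ingress(frame: dict, ingress_5g_ns: int, link_delay_ns: int) -> dict:
    # The UPF records the 5G-GM-based Ingress Time of the received Sync Frame
    # and delivers it, together with the measured Link Delay, toward the UE.
    stamped = dict(frame)
    stamped["ingress_5g"] = ingress_5g_ns
    stamped["upf_link_delay"] = link_delay_ns
    return stamped

def ue_egress(frame: dict, egress_5g_ns: int) -> dict:
    # The UE computes the Residence Time within the 5G network as the
    # difference between the 5G-GM-based egress and ingress times, then
    # updates the Correction field with Residence Time and Link Delay.
    residence = egress_5g_ns - frame["ingress_5g"]
    return {
        "timestamp": frame["timestamp"],
        "correction": frame["correction"] + frame["upf_link_delay"] + residence,
    }
```

Because ingress and egress are both measured against the shared 5G GM, the bridge's residence time is a simple subtraction even though the frame crossed the UPF, the gNB, and the UE.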
The UE may generate Sync Frame by updating a Correction field by using the Residence Time and the Link Delay, and may transmit the generated Sync Frame to the TSN node connected to the UE. FIG.4is a diagram for explaining a Management function of TSN according to an embodiment of the disclosure. There are two types of TSN nodes: a bridge and an end-station. A bridge from among the TSN nodes may send its own Port configuration and its own Scheduling capability to a centralized network configuration (CNC) server, and an end-station from among the TSN nodes may send, to the CNC server, information of time sensitive communication (TSC) Stream1that is transmitted/received. The CNC server may provide schedule information to each TSN node for each stream, and the TSN nodes may reflect the schedule information to ensure that the stream is delivered within a certain delay. For example, when Stream1is transmitted by a Talker Node (End-Station) and a transmission period of Stream1is 10 milliseconds (ms), a Listener Node (End-Station) needs to receive Stream1within 10 ms after a time point when Stream1is transmitted. Referring toFIG.4, each Bridge Node reports a predetermined delay time and a link delay to the CNC server. For example, each Bridge Node may report, to the CNC server, that the delay time at each of Bridges1,2,3, and4is 1 ms or less and the link delay is 1 ms or less. In this case, when an arrival expected time period at Bridge1is 1 ms, an arrival expected time period at Bridge2is 3 ms, an arrival expected time period at Bridge3is 5 ms, and an arrival expected time period at Bridge4is 7 ms, Bridges1through4may report their pieces of delay time information to the CNC server. In this case, a total delay time at Bridges1through4is 8 ms, and thus the CNC server may transmit, to each TSN node (Bridge Node), scheduling information indicating that Stream1needs to be transmitted with a predetermined delay time of 1 ms or less.
In this case, arrival at a Listener Node at 9 ms is anticipated, and thus a requirement that Stream1needs to be transmitted within 10 ms is satisfied. FIG.5illustrates a structure in which the 5G network interoperates with TSN Management, according to an embodiment of the disclosure. Connection between a UPF and external TSN may be performed by a logical function block called a Network-side TSN Translator (NW-TT), and connection between a UE and external TSN may be performed by a logical function block called a Device-Side TSN Translator (DS-TT). The NW-TT and the DS-TT may transmit information to a TSN AF as Ports within the single logical TSN Bridge constituted by the 5G network, and the TSN AF may perform Management interoperation with external TSN. When the external TSN is managed via a CNC server, the TSN AF may also interoperate with the CNC server. FIG.6is a schematic diagram illustrating a method of utilizing TSC Assistance Information (TSCAI), according to an embodiment of the disclosure. As described above with reference toFIG.4, the CNC server may provide schedule information of each TSN node for each stream. The 5G network may receive the respective pieces of schedule information of the TSN nodes via a TSN AF. The 5G network may ascertain a period of the stream and an expected arrival time in the 5G network from the pieces of schedule information of the TSN nodes. However, because the period and the expected arrival time are written based on a TSN GM, a session management function (SMF) converts the period and the expected arrival time, based on the 5G GM. Downlink traffic may be corrected by as much as the Core Network Packet Delay Budget (CN-PDB), which is a maximum value of the delay time until a packet arrives at the gNB. To this end, the SMF delivers information called TSCAI to the gNB.
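The delay-budget arithmetic in the example described with reference toFIG.4may be checked with a short sketch. The per-bridge residence delay of 1 ms, per-link delay of 1 ms, and chain of four bridges are the figures from that example; the function names are illustrative.

```python
def listener_arrival_ms(bridge_delay_ms: float, link_delay_ms: float,
                        num_bridges: int) -> float:
    # The stream crosses one link from the Talker into the first bridge,
    # then each bridge adds its residence delay plus one more link toward
    # the next hop, so there are (num_bridges + 1) links in total.
    return (num_bridges + 1) * link_delay_ms + num_bridges * bridge_delay_ms

def meets_period(arrival_ms: float, period_ms: float) -> bool:
    # The Listener must receive the stream within one transmission period.
    return arrival_ms <= period_ms
```

With these figures the Listener receives the stream 9 ms after transmission, within the 10 ms period.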
The TSCAI includes information such as UL/DL, Periodicity, and Burst Arrival Time, from among which Burst Arrival Time is a result of reflecting CN-PDB correction and 5G GM-based time translation in schedule information of a stream received from the CNC server, as described above. FIG.7illustrates an inter-UE Time Synchronization scenario when a GM of TSN exists on the side of a UE, according to an embodiment of the disclosure. In the related art, consideration was given to a scenario where a TSN GM exists on the side of an external TSN node connected to a UPF as shown inFIG.2, and Time Synchronization is supported between the TSN GM and a UE or between the TSN GM and a TSN node connected to the UE. In this case, it is assumed only that Time Synchronization between UEs, between a UE and external TSN nodes connected to the UE, or between external TSN nodes connected to UEs is in each case performed indirectly via synchronization with the TSN GM connected to the UPF. However, a case where the TSN GM is connected to a UE or a TSN node connected to the UE is also possible. For example, a case where a controller mounted on a moving vehicle controls a robot or electronic device having a position movement function is also possible. Because real-time monitoring and control of a user are important, a laptop computer or a tablet carriable by the user may serve as a GM of TSN, and thus operations of neighboring moving devices may be precisely controlled based on the laptop computer or tablet. In this case, Time Synchronization may be supported between not only UEs belonging to the same base station but also UEs belonging to different base stations. FIG.8illustrates a scenario in which a 5G network interoperates with TSN Management for achieving TSC support between UEs, according to an embodiment of the disclosure.
Interoperation between a 5G network and TSN Management, in which the TSN AF collects information of a NW-TT/UPF and a DS-TT/UE and exchanges the collected information with the CNC server as shown inFIG.5, has been described above with reference toFIG.2. In this case, an Establishment/Modification process with respect to a single PDU Session including a NW-TT/UPF and a DS-TT/UE is used. However, in such a situation as shown inFIG.7where a TSN GM is located on the side of a UE and thus Time Synchronization between UEs is necessary, a plurality of PDU sessions configured by connection between a UPF and a UE may be necessary. For example, because pieces of information of the NW-TT/UPF and the DS-TT/UE are collected for a single PDU session by a TSN AF in a single 5G network, two PDU Sessions are needed as shown inFIG.8, in order to collect pieces of information for two UEs. Although two UEs exist inFIG.8, the number of UEs is not limited thereto, and a plurality of UEs may exist. FIG.9illustrates a scenario in which a 5G Network utilizes TSCAI to achieve TSC support between UEs, according to an embodiment of the disclosure. As shown inFIG.8, a TSN AF may collect TSN-related information by using two PDU sessions. Referring toFIG.9, collected PDU session information may interoperate with a CNC server via the TSN AF. To this end, the collected PDU session information is delivered along a path such as UE-SMF-PCF-AF or UPF-SMF-PCF-AF. Because schedule information received from the CNC server is based on the TSN GM, a writing basis of Burst Arrival Time of TSCAI reflected in gNB1on the side of Ingress UE1needs to change from the TSN GM to the 5G GM. The writing basis of Burst Arrival Time of TSCAI delivered to gNB2on the side of UE2needs to change from the TSN GM to the 5G GM, and Burst Arrival Time needs to be corrected by Uplink PDB1and Downlink CN-PDB2, which are delay times within 5GS.
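The time-base change and PDB correction of Burst Arrival Time described above may be sketched as follows. The fixed clock-offset model and all names are simplifying assumptions for illustration; the actual conversion performed by the SMF is not limited to this form.

```python
def tsn_to_5g(tsn_time_ns: int, gm_offset_ns: int) -> int:
    # Re-express a TSN-GM-based schedule time on the 5G GM time base,
    # assuming the offset between the two clocks is known.
    return tsn_time_ns + gm_offset_ns

def burst_arrival_time(schedule_tsn_ns: int, gm_offset_ns: int,
                       ul_pdb_ns: int, dl_cn_pdb_ns: int) -> int:
    # For the UE-to-UE case, the Burst Arrival Time delivered toward the
    # egress gNB reflects the uplink PDB of the ingress PDU session and the
    # downlink CN-PDB of the egress PDU session, both delays within 5GS.
    return tsn_to_5g(schedule_tsn_ns, gm_offset_ns) + ul_pdb_ns + dl_cn_pdb_ns
```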
Although it is clear that an SMF transmits two pieces of TSCAI, there are many choices as to when the information received from the CNC server is split into two. FIG. 10 is a flowchart of an inter-UE Time Synchronization scenario when a TSN GM exists on the side of a UE, according to an embodiment of the disclosure. When the TSN GM exists on the side of the UE as shown in FIG. 7, the UE may transmit Sync Frame toward a core network (CN) to support Time Synchronization. When DS-TT1 or UE1 receives Sync Frame from TSN Node0 as an external TSN node, DS-TT1 or UE1 records a reception time, based on the 5G GM. For convenience of explanation, the recording operation will now be described as being performed by UE1; however, DS-TT1 may perform the recording operation instead. When UE1 transmits Sync Frame to a UPF, UE1 may also transmit the value recorded based on the 5G GM. For example, UE1 may send the reception time information by adding a special Ingress Timestamp field to Sync Frame. In addition, UE1 may periodically measure Link Delay1 as a delay time with respect to Link1, which is the link with TSN Node0 as a neighboring external TSN node, calculate an average time based on the measured delay times, and manage the calculated average time. When UE1 transmits Sync Frame to the UPF, UE1 may also transmit Link Delay1. For example, UE1 may add the value of Link Delay1 to a Correction field of Sync Frame, and transmit a result of the addition. When Link Delay1 is added to the Correction field, Link Delay1 needs to be converted based on the TSN GM, and thus rateRatio1 may be applied. rateRatio1 is a value obtained by dividing "TSN GM Clock Frequency" by "Local Clock Frequency of UE1", and may be obtained by multiplying rateRatio0, included in the rateRatio field delivered in Sync Frame, by the neighborRateRatio managed by UE1.
rateRatio0 is a value obtained by dividing "TSN GM Clock Frequency" by "TSN Node0 Local Clock Frequency", and the neighborRateRatio managed by UE1 is a value obtained by dividing "TSN Node0 Local Clock Frequency" by "Local Clock Frequency of UE1". While transmitting Sync Frame to the UPF, UE1 also transmits rateRatio1. For example, UE1 updates the value of the rateRatio field of Sync Frame with rateRatio1 and delivers a result of the updating. UE1 may set the reception time of Sync Frame as "TSN GM + a Correction field value including Link Delay1 converted based on the TSN GM", and thus may synchronize with the TSN GM. After a NW-TT or UPF receives Sync Frame from DS-TT1 or UE1, the NW-TT or UPF calculates Residence Time as a residence time in the 5G network and thus updates the Correction field, before transmitting the received Sync Frame to TSN Node4 as an external TSN node. For convenience of explanation, the calculation and update operation will now be described as being performed by the UPF; however, the NW-TT may perform the calculation and update operation instead. The UPF calculates Residence Time by subtracting the Ingress Timestamp value received from UE1 from the egress time when Sync Frame is transmitted to the outside. Before adding the Residence Time value to the Correction field, the UPF applies rateRatio1 to the Residence Time value to convert its basis from the 5G GM to the TSN GM. The UPF removes the specially-added Ingress Timestamp field before transmitting Sync Frame to the outside. The UPF may set the time point at which it transmits Sync Frame as "TSN GM + Correction field value", and thus may synchronize with the TSN GM. At this time, the Residence Time at a single TSN Node must not exceed 10 ms, and thus a QoS requirement that the sum of UE-DS-TT ResiTime1, as the UL UE-DS-TT Residence Time for DS-TT1 or UE1, and PDB1, as the UL PDB of PDU Session1, is less than 10 ms needs to be applied. FIG. 11 is a flowchart of a process of supporting inter-UE time synchronization, according to an embodiment of the disclosure.
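The rateRatio chain and Residence Time correction described above can be sketched as a small calculation. This is a minimal illustration in Python under stated assumptions: the variable names and all numeric values are hypothetical, and times are treated as plain seconds rather than gPTP-formatted fields.

```python
# Sketch of the rateRatio chain and Residence Time correction.
# All numeric values below are illustrative, not taken from the disclosure.

# rateRatio0 = "TSN GM Clock Frequency" / "TSN Node0 Local Clock Frequency"
rate_ratio_0 = 1.000010
# neighborRateRatio (managed by UE1)
#   = "TSN Node0 Local Clock Frequency" / "Local Clock Frequency of UE1"
neighbor_rate_ratio = 0.999995
# rateRatio1 = "TSN GM Clock Frequency" / "Local Clock Frequency of UE1"
#            = rateRatio0 * neighborRateRatio
rate_ratio_1 = rate_ratio_0 * neighbor_rate_ratio

# UPF side: Residence Time = egress time - Ingress Timestamp,
# both recorded on the 5G GM basis (seconds, hypothetical values).
ingress_ts_5g = 100.000000   # recorded by UE1 when Sync Frame arrived
egress_ts_5g = 100.004000    # when the UPF transmits Sync Frame outward
residence_time_5g = egress_ts_5g - ingress_ts_5g

# Convert the residence time from the 5G GM basis to the TSN GM basis
# by applying rateRatio1 before adding it to the Correction field.
residence_time_tsn = residence_time_5g * rate_ratio_1

# Previous Correction value (already including the converted Link Delay1).
correction_field = 0.000250
correction_field += residence_time_tsn
```

The same arithmetic applies whether the egress node is the NW-TT/UPF (FIG. 10) or UE2/DS-TT2 (FIG. 11); only the entity holding the Ingress Timestamp changes.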
Referring to FIG. 11, when the TSN GM exists on the side of a UE as shown in FIG. 7, the UE may transmit Sync Frame to another UE, and thus Time Synchronization may be supported. For convenience of explanation, the Time Synchronization support described below will be described as being performed by UE1 or UE2; however, DS-TT1 of UE1 or DS-TT2 of UE2 may each perform it instead. When UE1 receives Sync Frame from TSN Node0 as an external TSN node, UE1 records a reception time, based on the 5G GM. When UE1 transmits Sync Frame to a UPF, UE1 also transmits the value recorded based on the 5G GM. For example, UE1 may send the reception time information by adding a special Ingress Timestamp field to Sync Frame. In addition, UE1 may periodically measure Link Delay1 as a delay time with respect to Link1, which is the link with TSN Node0 as a neighboring external TSN node, calculate an average time based on the measured delay times, and manage the calculated average time. When UE1 transmits Sync Frame to the UPF, UE1 also transmits Link Delay1. For example, UE1 may add the value of Link Delay1 to a Correction field of Sync Frame, and transmit a result of the addition. When Link Delay1 is added to the Correction field, Link Delay1 needs to be converted based on the TSN GM, and thus rateRatio1 may be applied. rateRatio1 is a value obtained by dividing "TSN GM Clock Frequency" by "Local Clock Frequency of UE1". While transmitting Sync Frame to the UPF, UE1 also transmits rateRatio1. For example, UE1 updates the value of the rateRatio field of Sync Frame with rateRatio1 and delivers a result of the updating. UE1 may set the reception time of Sync Frame as "TSN GM + a Correction field value including Link Delay1 converted based on the TSN GM", and thus may synchronize with the TSN GM. The UPF processes Sync Frame via UPF-inside Local Switching between PDU Session1 and PDU Session2 and transmits the processed Sync Frame to UE2. At this time, there is no change in the contents of Sync Frame.
After UE2 receives Sync Frame from the UPF, UE2 calculates Residence Time as a residence time in the 5G network and thus updates the Correction field, before transmitting the received Sync Frame to TSN Node3, which is an external TSN node. UE2 calculates Residence Time by subtracting the Ingress Timestamp value received from UE1 from the time point when Sync Frame is transmitted to the outside. Before adding the Residence Time value to the Correction field, UE2 applies rateRatio1 to the Residence Time value to convert its basis from the 5G GM to the TSN GM. UE2 removes the specially-added Ingress Timestamp field before transmitting Sync Frame to the outside. UE2 may set the time point when Sync Frame is transmitted as "TSN GM + Correction field value", and thus may synchronize with the TSN GM. At this time, the Residence Time at a single TSN Node must not exceed 10 ms, and thus a QoS requirement needs to be applied under which the sum of UE-DS-TT ResiTime1 (the UL UE-DS-TT Residence Time) and PDB1 (the UL PDB), and the sum of UE-DS-TT ResiTime2 (the DL UE-DS-TT Residence Time) and PDB2 (the DL PDB), are together less than 10 ms. Because PDB1 is actually applied to a QoS for PDU Session1 and PDB2 is actually applied to a QoS for PDU Session2, a QoS that simultaneously takes the two PDU Sessions into account is applied to deliver Sync Frame between UEs. For example, the QoS requirement that the sum of PDB1 as the UL PDB and PDB2 as the DL PDB is less than 10 ms may be converted into two QoS requirements, that PDB1 needs to be less than 5 ms and PDB2 needs to be less than 5 ms, and the two QoS requirements may be applied to PDU Session1 and PDU Session2, respectively. FIG. 12 is a flowchart of a process of supporting time synchronization by transmitting Sync Frame in a multicast manner, according to an embodiment of the disclosure. In detail, referring to FIG. 12, when the TSN GM exists on the side of a UE, the UE may transmit Sync Frame to a CN or another UE in a multicast manner to thereby support Time Synchronization.
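The conversion of the UE-to-UE requirement "PDB1 + PDB2 < 10 ms" into per-session requirements "PDB1 < 5 ms and PDB2 < 5 ms" described above can be sketched as follows. This is a minimal illustration, assuming the remaining budget after the UE-DS-TT residence times is divided equally between the two PDU Sessions; the function name and the equal split are illustrative choices, not mandated by the disclosure.

```python
# Sketch: splitting the 10 ms single-TSN-Node residence budget between
# the UL PDB of PDU Session1 and the DL PDB of PDU Session2.
# Values are in milliseconds and are illustrative.

RESIDENCE_BUDGET_MS = 10.0

def split_budget(resi_time1_ms, resi_time2_ms, budget_ms=RESIDENCE_BUDGET_MS):
    """Return (PDB1 limit, PDB2 limit) so that the total UE-to-UE
    residence time stays under the budget.

    The budget remaining after subtracting both UE-DS-TT residence
    times is divided equally, mirroring the example in the text where
    "PDB1 + PDB2 < 10 ms" becomes "PDB1 < 5 ms" and "PDB2 < 5 ms"."""
    remaining = budget_ms - resi_time1_ms - resi_time2_ms
    if remaining <= 0:
        raise ValueError("UE-DS-TT residence times already exceed the budget")
    return remaining / 2, remaining / 2

# With zero UE-DS-TT residence times this reproduces the 5 ms / 5 ms split.
pdb1_limit, pdb2_limit = split_budget(resi_time1_ms=0.0, resi_time2_ms=0.0)
```

The split requirements are sufficient for the combined requirement, since any PDB1 < 5 ms and PDB2 < 5 ms also satisfy PDB1 + PDB2 < 10 ms.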
At this time, the procedures described above with reference to FIGS. 10 and 11 may be applied simultaneously. For convenience of explanation, the Time Synchronization support described below will be described as being performed by UE1 or UE2; however, DS-TT1 of UE1 or DS-TT2 of UE2 may each perform it instead. When UE1 receives Sync Frame from TSN Node0 as an external TSN node, UE1 records a reception time, based on the 5G GM. When UE1 transmits Sync Frame to a UPF, UE1 also transmits the value recorded based on the 5G GM. For example, UE1 may send the reception time information by adding a special Ingress Timestamp field to Sync Frame. In addition, UE1 may periodically measure Link Delay1 as a delay time with respect to Link1, which is the link with TSN Node0 as a neighboring external TSN node, calculate an average time based on the measured delay times, and manage the calculated average time. When UE1 transmits Sync Frame to the UPF, UE1 may also transmit Link Delay1. For example, UE1 may add the value of Link Delay1 to a Correction field of Sync Frame, and transmit a result of the addition. When Link Delay1 is added to the Correction field, Link Delay1 needs to be converted based on the TSN GM, and thus rateRatio1 may be applied. rateRatio1 is a value obtained by dividing "TSN GM Clock Frequency" by "Local Clock Frequency of UE1". While transmitting Sync Frame to the UPF, UE1 may also transmit rateRatio1. For example, UE1 updates the value of the rateRatio field of Sync Frame with rateRatio1 and delivers a result of the updating. UE1 may set the reception time of Sync Frame as "TSN GM + a Correction field value including Link Delay1 converted based on the TSN GM", and thus may synchronize with the TSN GM. The UPF applies multicast so that one Sync Frame passes through the UPF and then the NW-TT and directly heads toward the external TSN node, and another Sync Frame heads toward DS-TT2 or UE2.
After the UPF receives the Sync Frame heading to the external TSN node, the UPF calculates Residence Time as a residence time in the 5G network and thus updates the Correction field, before transmitting the received Sync Frame to TSN Node4, which is an external TSN node. The UPF calculates Residence Time by subtracting the Ingress Timestamp value received from UE1 from the time point when Sync Frame is transmitted to the outside. Before adding the Residence Time value to the Correction field, the UPF applies rateRatio1 to the Residence Time value to convert its basis from the 5G GM to the TSN GM. The UPF may remove the specially-added Ingress Timestamp field before transmitting Sync Frame to the outside. The UPF may set the time point when Sync Frame is transmitted as "TSN GM + Correction field value", and thus may synchronize with the TSN GM. At this time, the Residence Time at a single TSN Node must not exceed 10 ms, and thus a QoS requirement for the Uplink Flow, that the sum of UE-DS-TT ResiTime1 as the UL UE-DS-TT Residence Time and PDB1 as the UL PDB is less than 10 ms, may be applied. After UE2 receives the other Sync Frame from the UPF, UE2 calculates Residence Time as a residence time in the 5G network and thus updates the Correction field, before transmitting the received Sync Frame to TSN Node3, which is an external TSN node. UE2 calculates Residence Time by subtracting the Ingress Timestamp value received from UE1 from the time point when Sync Frame is transmitted to the outside. Before adding the Residence Time value to the Correction field, UE2 applies rateRatio1 to the Residence Time value to convert its basis from the 5G GM to the TSN GM. UE2 may remove the specially-added Ingress Timestamp field before transmitting Sync Frame to the outside. UE2 may set the time point when Sync Frame is transmitted as "TSN GM + Correction field value", and thus may synchronize with the TSN GM.
At this time, the Residence Time at a single TSN Node must not exceed 10 ms, and thus a QoS requirement may be applied under which the sum of UE-DS-TT ResiTime1 (the UL UE-DS-TT Residence Time) and PDB1 (the UL PDB), and the sum of UE-DS-TT ResiTime2 (the DL UE-DS-TT Residence Time) and PDB2 (the DL PDB), are together less than 10 ms. Because PDB1 is actually applied to a QoS for PDU Session1 and PDB2 is actually applied to a QoS for PDU Session2, a QoS that simultaneously takes the two PDU Sessions into account is applied to deliver Sync Frame between UEs. For example, the QoS requirement that the sum of PDB1 as the UL PDB and PDB2 as the DL PDB is less than 10 ms may be converted into two QoS requirements, that PDB1 needs to be less than 5 ms and PDB2 needs to be less than 5 ms, and the two QoS requirements may be applied to PDU Session1 and PDU Session2, respectively. The newly calculated requirements are stricter than the previous requirement that the sum of PDB1 and PDB2 be less than 10 ms; satisfying both of them therefore also satisfies the original requirement. FIG. 13 is a flowchart of a process in which, when a GM of TSN exists on the side of a UE, an SMF performs a Best Master Clock Algorithm (BMCA) by managing information, according to an embodiment of the disclosure. In detail, referring to FIG. 13, when the GM of TSN exists on the side of the UE, the SMF may comprehensively manage information, and thus the BMCA may be performed. As described above with reference to FIG. 9, interoperation between a 5G network and TSN Management may be achieved via a TSN AF by considering the 5G network as a single TSN Bridge. In the 5G network, information about PDU Session1 related with UE1 and information about PDU Session2 related with UE2 may be separately managed. For example, to form a Tree via which Sync Frame is delivered, when a BMCA is performed for each TSN Clock Domain, a process of synthesizing the information of PDU Session1 and the information of PDU Session2 may be needed.
The BMCA is an algorithm for determining which TSN Node is to serve as the Grand Master, and a Port having received an Announcement Message from a node close to the Grand Master expresses its own state as a Slave (S) state. Because only one Port may have an S state within one bridge, when several Ports are candidates for an S state, information within the bridge is synthesized, and thus only one Port from among the candidates is determined to have an S state. A 5G Logical Bridge also participates in the BMCA by serving as one bridge, and a UE or DS-TT and a UPF or NW-TT each participate in the BMCA by serving as a single Port of the 5G Logical Bridge. A TSN GM may be set for each Working Clock Domain. When a Port is in an S state, this may mean that it has a lower priority to be designated as the Grand Master than when it is in a Master (M) state. For convenience of explanation, DS-TT1 or UE1, DS-TT2 or UE2, and the NW-TT or UPF will now be referred to as DS-TT1/UE1, DS-TT2/UE2, and NW-TT/UPF, respectively. When Announcement Frame is received from TSN Node0 as an external TSN node, DS-TT1/UE1 sets its own Port to a provisional S state and transmits Announcement Frame to the UPF. The UPF broadcasts Announcement Frame to the other Ports within the Logical TSN Bridge of the 5G network. In other words, one Announcement Frame may pass through the NW-TT and head toward an external TSN node, and the UPF may make another Announcement Frame undergo Local Switching within the UPF and head toward DS-TT2/UE2. In response to the Announcement Frame, the NW-TT provisionally sets its own Port in an M state, and transmits the Announcement Frame to TSN Node4 as an external TSN node. Likewise, in response to the Announcement Frame received from DS-TT1/UE1 via the UPF, DS-TT2/UE2 provisionally sets its own Port in an M state, and transmits the Announcement Frame to TSN Node3 as an external TSN node.
After passing through this process of delivering the Announcement Frames, DS-TT1/UE1, DS-TT2/UE2, and the NW-TT/UPF update the SMF with information of the Ports via a PDU Session Modification procedure. A connection enabling data communication between a UE and a data network by passing through a UPF is referred to as a PDU Session, and a process of changing information of the PDU Session is a PDU Session Modification procedure. In PDU Session Modification, DS-TT1/UE1 and the NW-TT/UPF deliver information about PDU Session1, and DS-TT2/UE2 and the NW-TT/UPF deliver information about PDU Session2, but the SMF may manage the information about PDU Session1 and the information about PDU Session2 together by knowing that PDU Session1 and PDU Session2 belong to the same TSN Management. The SMF may classify the information to be comprehensively managed by utilizing the fact that PDU Session1 and PDU Session2 have the same TSN Logical Bridge ID and the same TSN Working Clock Domain ID. After the SMF determines the final Port Configuration information that is to be actually reflected, based on the synthesized Port information, the SMF transmits the final Port Configuration information to DS-TT1/UE1, DS-TT2/UE2, and the NW-TT/UPF by using a PDU Session Modification procedure and an N4 Update procedure with respect to PDU Session1 and PDU Session2. The determined final Port Configuration information is also reflected in a Policy Control Function (PCF) and a TSN AF via a Notification procedure. FIG. 14 is a flowchart of a process in which, when a GM of TSN exists on the side of a UE, a UPF performs a BMCA by managing information, according to an embodiment of the disclosure. In detail, referring to FIG. 14, as described above with reference to FIG. 9, interoperation between a 5G network and TSN Management may be achieved via a TSN AF by considering the 5G network as a single TSN Bridge. In the 5G network, information about PDU Session1 related with UE1 and information about PDU Session2 related with UE2 may be separately managed.
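The synthesis step described above, in which per-PDU-Session Port reports are grouped by TSN Logical Bridge ID and TSN Working Clock Domain ID so that only one Port within the 5G Logical Bridge keeps the Slave state, can be sketched as follows. The record layout and the numeric `priority` tie-break are illustrative assumptions; an actual BMCA comparison uses the full IEEE 802.1AS priority vector rather than a single number.

```python
# Sketch: synthesizing Port reports from several PDU Sessions so that
# only one Port per (Bridge ID, Working Clock Domain ID) keeps the
# Slave (S) state; all other Ports fall back to Master (M).
from collections import defaultdict

def synthesize_ports(port_reports):
    """port_reports: list of dicts with keys 'bridge_id',
    'clock_domain_id', 'port', 'state' ('S' or 'M'), and 'priority'
    (lower wins; a stand-in for the BMCA priority vector).
    Returns a dict mapping (bridge_id, clock_domain_id, port) -> state."""
    groups = defaultdict(list)
    for r in port_reports:
        groups[(r["bridge_id"], r["clock_domain_id"])].append(r)

    final = {}
    for key, reports in groups.items():
        # Among all provisional Slave candidates, keep only the best one.
        candidates = [r for r in reports if r["state"] == "S"]
        winner = min(candidates, key=lambda r: r["priority"]) if candidates else None
        for r in reports:
            state = "S" if r is winner else "M"
            final[key + (r["port"],)] = state
    return final

# Two provisional Slave candidates reported over two PDU Sessions.
reports = [
    {"bridge_id": 1, "clock_domain_id": 0, "port": "DS-TT1", "state": "S", "priority": 10},
    {"bridge_id": 1, "clock_domain_id": 0, "port": "DS-TT2", "state": "S", "priority": 20},
    {"bridge_id": 1, "clock_domain_id": 0, "port": "NW-TT", "state": "M", "priority": 30},
]
final = synthesize_ports(reports)
```

The same grouping logic applies regardless of whether the synthesizing entity is the SMF (FIG. 13), the UPF (FIG. 14), the TSN AF (FIG. 15), or the PCF (FIG. 16).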
For example, to form a Tree via which Sync Frame is delivered, when a BMCA is performed for each TSN Clock Domain, a process of synthesizing the information of PDU Session1 and the information of PDU Session2 may be needed. When Announcement Frame is received from TSN Node0 as an external TSN node, DS-TT1/UE1 sets its own Port to a provisional S state and transmits Announcement Frame to the UPF. The UPF may broadcast Announcement Frame to the other Ports within the Logical TSN Bridge of the 5G network. In this example, one Announcement Frame may pass through a NW-TT and head toward an external TSN node, and the UPF may make another Announcement Frame undergo Local Switching within the UPF and head toward DS-TT2/UE2. In response to the Announcement Frame, the NW-TT provisionally sets its own Port in an M state, and transmits the Announcement Frame to TSN Node4 as an external TSN node. Likewise, in response to the Announcement Frame received from DS-TT1/UE1 via the UPF, DS-TT2/UE2 provisionally sets its own Port in an M state, and transmits the Announcement Frame to TSN Node3 as an external TSN node. After passing through such a process of delivering the Announcement Frames, DS-TT1/UE1 and DS-TT2/UE2 may update an SMF with information of the Ports via a PDU Session Modification procedure, and the SMF delivers the information to the UPF via an N4 Update procedure. In PDU Session Modification, DS-TT1/UE1 and the NW-TT/UPF deliver information about PDU Session1, and DS-TT2/UE2 and the NW-TT/UPF deliver information about PDU Session2, but the UPF may manage the information about PDU Session1 and the information about PDU Session2 together by knowing that PDU Session1 and PDU Session2 belong to the same TSN Management. The UPF may classify the information to be comprehensively managed by utilizing the fact that PDU Session1 and PDU Session2 have the same TSN Logical Bridge ID and the same TSN Working Clock Domain ID.
After the UPF determines the final Port Configuration information that is to be actually reflected, based on the synthesized Port information, the UPF transmits the final Port Configuration information to DS-TT1/UE1 and DS-TT2/UE2 by using an N4 Update procedure and a PDU Session Modification procedure with respect to PDU Session1 and PDU Session2. In addition, the UPF may update a configuration of the NW-TT or UPF according to the final Port Configuration information. The determined final Port Configuration information is also reflected in a PCF and a TSN AF via a Notification procedure. FIG. 15 is a flowchart of a process in which, when a GM of TSN exists on the side of a UE, a TSN AF performs a BMCA by managing information, according to an embodiment of the disclosure. As described above with reference to FIG. 9, interoperation between a 5G network and TSN Management may be achieved via the TSN AF by considering the 5G network as a single TSN Bridge. In the 5G network, information about PDU Session1 related with UE1 and information about PDU Session2 related with UE2 may be separately managed. For example, to form a Tree via which Sync Frame is delivered, when a BMCA is performed for each TSN Clock Domain, a process of synthesizing the information of PDU Session1 and the information of PDU Session2 may be needed. When Announcement Frame is received from TSN Node0 as an external TSN node, DS-TT1/UE1 sets its own Port to a provisional S state and transmits Announcement Frame to a UPF. The UPF broadcasts Announcement Frame to the other Ports within the Logical TSN Bridge of the 5G network. In other words, one Announcement Frame may pass through an NW-TT and head toward an external TSN node, and the UPF may make another Announcement Frame undergo Local Switching within the UPF and head toward DS-TT2/UE2. In response to the Announcement Frame, the NW-TT provisionally sets its own Port in an M state, and transmits the Announcement Frame to TSN Node4 as an external TSN node.
Likewise, in response to the Announcement Frame received from DS-TT1/UE1 via the UPF, DS-TT2/UE2 provisionally sets its own Port in an M state, and transmits the Announcement Frame to TSN Node3 as an external TSN node. After passing through this process of delivering the Announcement Frames, DS-TT1/UE1, DS-TT2/UE2, and the NW-TT/UPF update the TSN AF with information of the Ports via a PDU Session Modification procedure. In PDU Session Modification, DS-TT1/UE1 and the NW-TT/UPF deliver information about PDU Session1, and DS-TT2/UE2 and the NW-TT/UPF deliver information about PDU Session2, but the TSN AF may manage the information about PDU Session1 and the information about PDU Session2 together by knowing that PDU Session1 and PDU Session2 belong to the same TSN Management. The TSN AF may classify the information to be comprehensively managed by utilizing the fact that PDU Session1 and PDU Session2 have the same TSN Logical Bridge ID and the same TSN Working Clock Domain ID. After the TSN AF determines the final Port Configuration information that is to be actually reflected, based on the synthesized Port information, the TSN AF transmits the final Port Configuration information to DS-TT1/UE1, DS-TT2/UE2, and the NW-TT/UPF by using a PDU Session Modification procedure and an N4 Update procedure with respect to PDU Session1 and PDU Session2. The determined final Port Configuration information is also reflected in a PCF via a Notification procedure. FIG. 16 is a flowchart of a process in which, when a GM of TSN exists on the side of a UE, a PCF performs a BMCA by managing information, according to an embodiment of the disclosure. In detail, referring to FIG. 16, as described above with reference to FIG. 9, interoperation between a 5G network and TSN Management may be achieved via a TSN AF by considering the 5G network as a single TSN Bridge. In the 5G network, information about PDU Session1 related with UE1 and information about PDU Session2 related with UE2 may be separately managed.
For example, to form a Tree via which Sync Frame is delivered, when a BMCA is performed for each TSN Clock Domain, a process of synthesizing the information of PDU Session1 and the information of PDU Session2 may be needed. When Announcement Frame is received from TSN Node0 as an external TSN node, DS-TT1/UE1 sets its own Port to a provisional S state and transmits Announcement Frame to a UPF. The UPF broadcasts Announcement Frame to the other Ports within the Logical TSN Bridge of the 5G network. In this example, one Announcement Frame may pass through an NW-TT and head toward an external TSN node, and the UPF may make another Announcement Frame undergo Local Switching within the UPF and head toward DS-TT2/UE2. In response to the Announcement Frame, the NW-TT provisionally sets its own Port in an M state, and transmits the Announcement Frame to TSN Node4 as an external TSN node. Likewise, in response to the Announcement Frame received from DS-TT1/UE1 via the UPF, DS-TT2/UE2 provisionally sets its own Port in an M state, and transmits the Announcement Frame to TSN Node3 as an external TSN node. After passing through this process of delivering the Announcement Frames, DS-TT1/UE1, DS-TT2/UE2, and the NW-TT/UPF update the PCF with information of the Ports via a PDU Session Modification procedure. In PDU Session Modification, DS-TT1/UE1 and the NW-TT/UPF deliver information about PDU Session1, and DS-TT2/UE2 and the NW-TT/UPF deliver information about PDU Session2, but the PCF may manage the information about PDU Session1 and the information about PDU Session2 together by knowing that PDU Session1 and PDU Session2 belong to the same TSN Management. The PCF classifies the information to be comprehensively managed by utilizing the fact that PDU Session1 and PDU Session2 have the same TSN Logical Bridge ID and the same TSN Working Clock Domain ID.
After the PCF determines the final Port Configuration information that is to be actually reflected, based on the synthesized Port information, the PCF transmits the final Port Configuration information to DS-TT1/UE1, DS-TT2/UE2, and the NW-TT/UPF by using a PDU Session Modification procedure and an N4 Update procedure with respect to PDU Session1 and PDU Session2. The determined final Port Configuration information is also reflected in a TSN AF via a Notification procedure. FIG. 17 is a flowchart of a process in which an SMF applies TSCAI to a 3GPP network by managing information in order to support TSC from a UE to another UE, according to an embodiment of the disclosure. A CNC server may obtain Configuration information for each Port of a 5G network via a TSN AF. Because the Configuration information for each Port includes information of neighboring Ports for each Port, the CNC server may ascertain the Ports through which a Stream is to pass between a Talker and a Listener as End-Stations. Because the Configuration information for each Port includes a Scheduling capability of each TSN node or TSN Bridge, the CNC server may determine Schedule information of the Stream for each Port between the Talker and the Listener, and announce the determined Schedule information. The CNC server delivers the Schedule information of the Stream for each Port to the 5G network via the TSN AF. The Stream Schedule information is delivered from the TSN AF to the SMF via a PCF. When the Stream Schedule information is delivered from the TSN AF via the PCF, the Stream Schedule information is delivered to the entities related with a PDU Session for one of DS-TT1/UE1 and DS-TT2/UE2. For example, the Stream Schedule information may be delivered to a PCF and an SMF that correspond to selected PDU Session1 corresponding to DS-TT1/UE1.
The SMF converts Burst Arrival Time1 (BAT1) of the TSCAI corresponding to DS-TT1/UE1 into 5G GM reference information, based on the TSN GM, and delivers the 5G GM reference information to a gNB via a PDU Session Modification procedure. Simultaneously, the SMF calculates BAT2 of the TSCAI corresponding to DS-TT2/UE2. BAT2 is calculated by adding the Uplink PDB of PDU Session1, the Local Switching Delay in a UPF, and the Downlink CN-PDB of PDU Session2 to BAT1. The SMF delivers BAT2 to the gNB via a process of updating only a RAN Parameter during a PDU Session Modification procedure with respect to PDU Session2. In addition, the SMF may inform the PCF and the TSN AF that the corresponding Scheduling Information has been delivered to the gNB, through a Notification for PDU Session2. FIG. 18 is a flowchart of a process in which a UPF applies TSCAI to a 3GPP network by managing information in order to support TSC from a UE to another UE, according to an embodiment of the disclosure. A CNC server ascertains Configuration information for each Port of a 5G network via a TSN AF. Because the Configuration information for each Port includes information of neighboring Ports for each Port, the CNC server may ascertain the Ports through which a Stream is to pass between a Talker and a Listener. Because the Configuration information for each Port includes a Scheduling capability of each TSN node or TSN Bridge, the CNC server may determine Schedule information of the Stream for each Port between the Talker and the Listener, and announce the determined Schedule information. The CNC server delivers the Stream Schedule information to the 5G network via the TSN AF. The Stream Schedule information is delivered from the TSN AF to an SMF via a PCF. When the Stream Schedule information is delivered from the TSN AF via the PCF, the Stream Schedule information is delivered to the entities related with a PDU Session for one of DS-TT1/UE1 and DS-TT2/UE2.
For example, a PDU Session corresponding to DS-TT1/UE1 may be selected, and the Stream Schedule information may be delivered to a PCF and an SMF that correspond to the selected PDU Session. The SMF in turn updates the UPF with the Stream Schedule information via N4. The UPF maps DS-TT1/UE1 and DS-TT2/UE2 in Port-in information and Port-out information and delivers information of the mapping to the SMF via an N4 Report. At this time, the UPF may individually deliver an N4 Report corresponding to DS-TT1/UE1 and an N4 Report corresponding to DS-TT2/UE2 to the SMF. The SMF converts BAT1 of the TSCAI corresponding to DS-TT1/UE1 into 5G GM reference information, based on the TSN GM, and delivers the 5G GM reference information to a gNB via a PDU Session Modification procedure. Simultaneously, the SMF calculates BAT2 of the TSCAI corresponding to DS-TT2/UE2. BAT2 is calculated by adding the Uplink PDB of PDU Session1, the Local Switching Delay in a UPF, and the Downlink CN-PDB of PDU Session2 to BAT1. The SMF delivers BAT2 to the gNB via a process of updating only a RAN Parameter during a PDU Session Modification procedure with respect to PDU Session2. In addition, the SMF may inform the PCF and the TSN AF that the corresponding Scheduling Information has been delivered to the gNB, through a Notification for PDU Session2. FIG. 19A is a flowchart of a process in which a TSN AF applies TSCAI to a 3GPP network by managing information in order to support TSC from a UE to another UE, according to an embodiment of the disclosure. A CNC server ascertains Configuration information for each Port of a 5G network via the TSN AF. Because the Configuration information for each Port includes information of neighboring Ports for each Port, the CNC server may ascertain the Ports through which a Stream is to pass between a Talker and a Listener.
Because the Configuration information for each Port includes a Scheduling capability of each TSN node or TSN Bridge, the CNC server may determine Schedule information of the Stream for each Port between the Talker and the Listener, and announce the determined Schedule information. The CNC server delivers the Stream Schedule information to the 5G network via the TSN AF. The Stream Schedule information is delivered from the TSN AF to an SMF via a PCF. The TSN AF finds DS-TT1/UE1 and DS-TT2/UE2 corresponding to Port-in and Port-out, and delivers the Stream Schedule information to the PCF and SMF that correspond to each of PDU Session1 and PDU Session2 respectively corresponding to DS-TT1/UE1 and DS-TT2/UE2. The PCF may be different or the same for each PDU Session. The SMF may be different or the same for each PDU Session. In response to the information for PDU Session1, the SMF converts BAT1 of the TSCAI corresponding to DS-TT1/UE1 into 5G GM reference information, based on the TSN GM, and delivers the 5G GM reference information to a gNB via a PDU Session Modification procedure. In response to the information for PDU Session2, the SMF calculates BAT2 of the TSCAI corresponding to DS-TT2/UE2. BAT2 is calculated by adding the Uplink PDB of PDU Session1, the Local Switching Delay in a UPF, and the Downlink CN-PDB of PDU Session2 to BAT1. The SMF delivers BAT2 to the gNB via a process of updating only a RAN Parameter during a PDU Session Modification procedure corresponding to PDU Session2. FIG. 19B is a flowchart of a process in which a TSN AF manages schedule information for a new PDU Session, according to an embodiment of the disclosure. Referring to FIG. 19B, the TSN AF may generate Schedule information for PDU Session2 by adding the DS-TT-UE Residence Time and a UL PDB to the Schedule information for PDU Session1, and may deliver the generated Schedule information to an SMF. The Schedule information may further include a UPF switching delay.
The SMF may change the received Schedule information, based on a 5GS Clock, and may add a CN PDB to a result of the changing. Referring toFIG.19B, the operation translate (schedule2) is represented. At this time, the TSN AF may change DS-TT-UE Residence Time and a UL PDB based on a 5GS GM Clock to those based on a TSN GM Clock. FIG.20Ais a flowchart of a process in which a PCF applies TSCAI to a 3GPP network by managing information in order to support TSC from a UE to another UE, according to an embodiment of the disclosure. A CNC server ascertains Configuration information for each Port of a 5G network via a TSN AF. Because the Configuration information for each Port includes information of neighboring ports for each port, the CNC server may ascertain a Port through which Stream is to pass, between a Talker and a Listener. Because the Configuration information for each Port includes a Scheduling capability of each TSN node or TSN Bridge, the CNC server may determine Schedule information of Stream for each Port between Talker and Listener, and inform the determined Schedule information. The CNC server delivers the Stream Schedule information to the 5G network via the TSN AF. The Stream Schedule information is delivered from the TSN AF to an SMF via the PCF. When the Stream Schedule information is delivered from the TSN AF via the PCF, the Stream Schedule information is delivered to an entity related to a PDU Session of one of DS-TT1/UE1and DS-TT2/UE2. For example, when a PDU Session corresponding to DS-TT1/UE1is determined, the Stream Schedule information is delivered to a PCF corresponding to the determined PDU Session. The PCF finds DS-TT1/UE1and DS-TT2/UE2corresponding to Port-in and Port-out, and delivers the Stream Schedule information to an SMF that corresponds to each of PDU Session1and PDU Session2respectively corresponding to DS-TT1/UE1and DS-TT2/UE2.
At this time, when the PCF is unable to directly give information to the SMF corresponding to PDU Session2, namely, when an SMF for PDU Session1and an SMF for PDU Session2are different and PCFs respectively connected to the SMFs are different, a PCF bound with an SMF is searched for and information is delivered by the found PCF to the SMF. In response to information for PDU Session1, the SMF converts BAT1of TSCAI corresponding to DS-TT1/UE1into 5G GM reference information, based on the TSN GM, and delivers the 5G GM reference information to a gNB via a PDU Session Modification procedure. In response to information for PDU Session2, the SMF calculates BAT2of TSCAI corresponding to DS-TT2/UE2. BAT2is calculated by adding Uplink PDB of PDU Session1, Local Switching Delay in a UPF, and Downlink CN-PDB of PDU Session2to BAT1. The SMF delivers BAT2to the gNB via a process of updating only a RAN Parameter during a PDU Session Modification procedure with respect to PDU Session2. FIG.20Bis a flowchart of a process in which a PCF manages schedule information for a new PCF session, according to an embodiment of the disclosure. Referring toFIG.20B, the PCF may generate Schedule information for PDU Session2by adding DS-TT-UE Residence Time and a UL PDB to Schedule information for PDU Session1, and may deliver the generated Schedule information to an SMF. The SMF may change the received Schedule information, based on a 5GS Clock, and may add a CN PDB to a result of the changing. Referring toFIG.20B, the operation translate (schedule2) is represented. At this time, the PCF may change DS-TT-UE Residence Time and a UL PDB based on a 5GS GM Clock to those based on a TSN GM Clock. The above-described processes ofFIGS.10through20Bare also applicable to a case where, when a TSN GM is located on the side of a UE, Time Synchronization is achieved from a UE to a plurality of UEs.
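The clock-domain handling described for FIGS. 19B and 20B (translating a schedule time from the TSN GM reference to the 5GS GM reference, then adding a CN PDB) can be sketched as follows, under the simplifying and purely illustrative assumption that the two grandmaster clocks differ only by a known offset:

```python
# Hedged sketch only: the single-offset clock model and the function
# names are assumptions, not the disclosed 5GS/TSN mechanism.

def translate_tsn_to_5gs(tsn_time_ms: float, offset_ms: float) -> float:
    """Map a time referenced to the TSN GM clock onto the 5GS GM clock,
    assuming the two clocks differ only by offset_ms."""
    return tsn_time_ms + offset_ms

def schedule2_on_5gs_clock(schedule2_tsn_ms: float,
                           offset_ms: float,
                           cn_pdb_ms: float) -> float:
    """translate(schedule2), then add the CN PDB, as described above."""
    return translate_tsn_to_5gs(schedule2_tsn_ms, offset_ms) + cn_pdb_ms
```

A real deployment would obtain the clock relationship from the time-synchronization machinery rather than from a fixed offset; the sketch only shows the order of operations.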
For example, to perform time synchronization with respect to a plurality of UEs, the above-described processes ofFIGS.10through20Bmay be applied by increasing the number of PDU Sessions by the number of UEs. FIGS.17through20Bare also applicable to a case where the TSN GM is not located on the side of a UE but on the side of a UPF. Regardless of whether a GM is located on the side of a UE or a UPF, inter-UE information delivery based on Stream information is necessary for TSC communication. In other words, a process of delivering TSCAI between UE1corresponding to Port-in and UE2corresponding to Port-out is necessary. InFIGS.17through20B, UPF Local Switching delay may be calculated as 0 in some cases. BAT1of TSCAI is calculated by adding UE-DS-TT Residence Time of UE1/DS-TT1to a result of changing the schedule information based on the TSN GM Clock, namely, translate (Arrival Time at UE1)=translate (schedule), to schedule information based on the 5GS GM Clock. Referring toFIGS.17through20B, Residence Time within the UPF may be separately considered and calculated. In other words, a value obtained by excluding UL UPF Residence Time from a UL PDB for PDU Session1is used as a UL PDB, a value obtained by excluding DL UPF Residence Time from a DL CN-PDB for PDU Session2is used as a DL CN-PDB, and UE-to-UE UPF Residence Time is used instead of UPF Local Switching delay, and thus these values may be calculated in the same manner as described above. At this time, when the SMF does not know UPF Residence Time inFIGS.19B and20B, the TSN AF may additionally subtract DL UPF Residence Time for PDU Session2from Schedule2in advance when calculating Schedule2, and the SMF calculates BAT2of TSCAI by using an existing DL CN-PDB from which DL UPF Residence Time has not been subtracted. UPF Residence Time refers to a time period from a time point when Packet enters the UPF to a time point when Packet exits from the UPF.
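The residence-time refinement in the preceding paragraph can be restated in code: the UPF's own residence time is carved out of the per-session delay budgets and accounted for once as UE-to-UE UPF residence time. This is a hedged sketch; all parameter names and values are invented for illustration.

```python
# Sketch of the refinement above. The effective UL PDB excludes UL UPF
# residence, the effective DL CN-PDB excludes DL UPF residence, and
# UE-to-UE UPF Residence Time replaces UPF Local Switching delay.
# All names and values are illustrative assumptions.

def compute_bat2_with_residence(bat1_ms: float,
                                ul_pdb_ms: float, ul_upf_res_ms: float,
                                dl_cn_pdb_ms: float, dl_upf_res_ms: float,
                                ue_to_ue_upf_res_ms: float) -> float:
    ul_pdb_eff = ul_pdb_ms - ul_upf_res_ms        # UL PDB excluding UL UPF residence
    dl_cn_pdb_eff = dl_cn_pdb_ms - dl_upf_res_ms  # DL CN-PDB excluding DL UPF residence
    return bat1_ms + ul_pdb_eff + ue_to_ue_upf_res_ms + dl_cn_pdb_eff
```

With both residence terms set to zero and UE-to-UE residence equal to the local switching delay, this reduces to the plain BAT2 sum described earlier.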
The processes ofFIGS.9through20Butilize a PDU Session Modification procedure: information of the UPF is delivered to the TSN AF along a path of UPF-SMF-PCF-TSN AF, and information of the TSN AF is delivered to the UPF along a path of TSN AF-PCF-SMF-UPF. However, when the UPF is connected directly to a Network Function (NF) and a Control Plane, the UPF and the TSN AF may directly communicate with each other, and thus the processes ofFIGS.9through20Bmay be applied. FIG.21is a block diagram of a UE according to an embodiment of the disclosure. Referring toFIG.21, the UE may include a transceiver2110, a memory2120, and a processor2130. The processor2130, the transceiver2110, and the memory2120of the UE may operate according to the above-described communication method of the UE. However, components of the UE are not limited thereto. For example, the UE may include more or fewer components than those shown inFIG.21. Furthermore, the processor2130, the transceiver2110, and the memory2120may be implemented as a single chip. The transceiver2110, which collectively refers to a receiver of the UE and a transmitter of the UE, may transmit or receive signals with a base station or network entities. The signal transmitted or received to or from the base station may include control information and data. To this end, the transceiver2110may include a radio frequency (RF) transmitter that up-converts and amplifies the frequency of a signal to be transmitted, and an RF receiver that low-noise-amplifies a received signal and down-converts the frequency of the received signal. However, this is only an embodiment of the transceiver2110, and the components of the transceiver2110are not limited thereto. The transceiver2110may receive a signal and output the signal to the processor2130through a wireless channel, and may transmit the signal output by the processor2130through a wireless channel. The memory2120may store data and programs necessary for operations of the UE.
Furthermore, the memory2120may store control information or data included in a signal obtained by the UE. The memory2120may include storage media, such as read-only memory (ROM), random access memory (RAM), a hard disk, compact disc (CD)-ROM, and a digital versatile disc (DVD), or a combination thereof. The processor2130may control a series of processes so that the UE may operate according to the embodiments of the disclosure. The processor2130may include one or more processors. For example, the processor2130may include a communication processor (CP) performing control for communication, and an application processor (AP) controlling an upper layer, such as an application program. FIG.22is a block diagram of a network entity according to an embodiment of the disclosure. Referring toFIG.22, the network entity may include a transceiver2210, a memory2220, and a processor2230. The processor2230, the transceiver2210, and the memory2220of the network entity may operate according to the above-described communication method of the network entity. However, components of the network entity are not limited thereto. For example, the network entity may include more or fewer components than those shown inFIG.22. Furthermore, the processor2230, the transceiver2210, and the memory2220may be implemented as a single chip. The network entity may include NFs such as an Access and Mobility management Function (AMF), a Session Management Function (SMF), a Policy and Charging Function (PCF), a Network Exposure Function (NEF), a Unified Data Management (UDM), and a User Plane Function (UPF) described above. Also, the network entity may include a base station. The transceiver2210, which collectively refers to a receiver of the network entity and a transmitter of the network entity, may transmit or receive signals with the UE or other network entities. The transmitted or received signals may include control information and data.
To this end, the transceiver2210may include an RF transmitter that up-converts and amplifies the frequency of a signal to be transmitted, and an RF receiver that low-noise-amplifies a received signal and down-converts the frequency of the received signal. However, this is only an embodiment of the transceiver2210, and the components of the transceiver2210are not limited thereto. The transceiver2210may include a wired/wireless transceiver, and may include various components for transmitting and receiving signals. The transceiver2210may receive a signal and output the signal to the processor2230through a communication channel (e.g., a wireless channel), and may transmit the signal output by the processor2230through a communication channel. The memory2220may store data and programs necessary for operations of the network entity. Furthermore, the memory2220may store control information or data included in a signal obtained by the network entity. The memory2220may include storage media, such as read-only memory (ROM), random access memory (RAM), a hard disk, compact disc (CD)-ROM, and a digital versatile disc (DVD), or a combination thereof. The processor2230may control a series of processes so that the network entity may operate according to the embodiments of the disclosure. The processor2230may include one or more processors. The methods according to embodiments of the disclosure as described in the specification or in the following claims may be implemented as hardware, software, or a combination of hardware and software. When implemented as software, a computer-readable storage medium storing one or more programs (e.g., software modules) may be provided. The one or more programs stored in the computer-readable storage medium are configured for execution by one or more processors in an electronic device. 
The one or more programs include instructions directing the electronic device to execute the methods according to embodiments of the disclosure as described in the specification or in the following claims. The programs (e.g., software modules or software) may be stored in non-volatile memory (including RAM or flash memory), ROM, electrically erasable programmable read only memory (EEPROM), a magnetic disc storage device, a CD-ROM, DVDs, another optical storage device, or a magnetic cassette. Alternatively, the programs may be stored in memory including a combination of some or all of the above-mentioned storage media. A plurality of such memories may be included. In addition, the programs may be stored in an attachable storage device accessible through any or a combination of communication networks such as the Internet, an intranet, a local area network (LAN), a wireless LAN (WLAN), and a storage area network (SAN). Such a storage device may access the electronic device performing the embodiments of the disclosure via an external port. Furthermore, a separate storage device on the communication network may access the electronic device performing the embodiments of the disclosure. In the above-described embodiments of the disclosure, the elements included in the disclosure are expressed in a singular or plural form according to the proposed specific embodiment of the disclosure. However, the singular or plural expression is appropriately selected for ease of description according to the presented situation, and the disclosure is not limited to a single element or plural elements. Those elements described in a plural form may be configured as a single element, and those elements described in a singular form may be configured as plural elements. Although the present disclosure has been described with various embodiments, various changes and modifications may be suggested to one skilled in the art.
It is intended that the present disclosure encompass such changes and modifications as fall within the scope of the appended claims.
11943101
DETAILED DESCRIPTION In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed. Some embodiments of the invention provide a method implementing a software-defined private mobile network (SD-PMN) for an entity (e.g., a corporation, an educational institution, etc.). To implement the SD-PMN, the method first deploys, at a physical location of the entity, a first set of control plane components for the SD-PMN, including a security gateway, a user-plane function (UPF), an AMF (access and mobility management function), and an SMF (session management function). At an SD-WAN (software-defined wide area network) PoP (point of presence) belonging to a provider of the SD-PMN, the method deploys a second set of control plane components for the SD-PMN, the second set of control plane components comprising a subscriber database that stores data associated with users of the SD-PMN. The method uses an SD-WAN edge router located at the physical location of the entity and an SD-WAN gateway located at the SD-WAN PoP to establish a connection from the physical location of the entity to the SD-WAN PoP. In some embodiments, the security gateway is configured to establish an IPsec (Internet protocol security) tunnel with each access point in a set of access points deployed in the physical location. The access points, in some embodiments, provide a connection between user devices operating in the physical location and the SD-PMN. In some embodiments, the access points receive data message traffic from the user devices as radio waves and convert the radio waves to bits and bytes which are encapsulated and sent to the security gateway via the IPsec tunnels.
The security gateway forwards the encapsulated traffic to the UPF, in some embodiments, which decapsulates the traffic and forwards the decapsulated traffic as IP (Internet protocol) traffic to the SD-WAN edge routers. The connection established by the SD-WAN edge router with the SD-WAN gateway at the SD-WAN PoP is a DMPO (dynamic multipath optimization) tunnel, according to some embodiments. In some embodiments, the SD-WAN edge router is one of multiple SD-WAN edge routers at multiple physical locations (e.g., branch sites) of the entity, and the SD-WAN gateway is one of multiple SD-WAN gateways at multiple SD-WAN PoPs that each include an instance of the second set of control plane components. Each SD-WAN edge router at each physical location, in some embodiments, is assigned at least a primary SD-WAN gateway associated with a first SD-WAN PoP and a secondary SD-WAN gateway associated with a second SD-WAN PoP for accessing both the internet and components of the SD-WAN PoPs. In some embodiments, each instance of the control plane shares a same common IP address such that the common IP address can be reached at any of the multiple SD-WAN PoPs. Also, in some embodiments, the SD-PMN is a multi-tenant SD-PMN, and the common IP address associated with the control plane components at the SD-WAN PoPs is the same for all tenants receiving the SD-PMN service. By having a common IP address for the control plane components at each SD-WAN PoP, some embodiments are able to provide SD-WAN resiliency for the SD-PMN, as well as seamless failover between PoPs. The SD-PMN is centrally managed, in some embodiments, by a private mobile network orchestrator (PMNO). 
In some such embodiments, the PMNO receives (e.g., from a network administrator through a user interface (UI) provided by the PMNO) for each physical location in a set of physical locations spanned by the SD-PMN, a tracking area code (TAC) defined for the physical location, as well as a data network name (DNN) defined for each data network in a set of data networks within the SD-PMN. Each data network in the set of data networks spans the set of physical locations, according to some embodiments. After receiving the TACs and DNNs, the PMNO of some embodiments stores the TACs and the DNNs in a core of the SD-PMN for use in managing data message traffic and user devices in the SD-PMN. Some embodiments of the invention also provide a method for deploying a private mobile network for an entity in a particular geographic area. For each physical location in a set of physical locations within the particular geographic area (e.g., homes in a municipality), the method deploys an SD-WAN (software-defined wide area network) customer premise appliance, such as an SD-WAN edge router enabled with LTE, 4G, or 5G. The method establishes, via a RAN (radio access network), a set of connections between each SD-WAN customer premise appliance and at least one physical access point in a set of physical access points deployed in the particular geographic area. Each physical access point has a connection to a central aggregation point that includes a UPF (user plane function) and an SD-WAN edge router that connect the physical access nodes to an SD-WAN PoP (point of presence). The method then uses the established set of connections to provide a private mobile networking service for the particular geographic area. FIG.1conceptually illustrates a first example of an architecture diagram of a multi-tenant PMN of some embodiments. In this architecture diagram100, components of the control plane are split between locations on customer premises101and102and the SD-WAN PoP105.
For instance, in this particular architecture100, each customer premises101and102includes a security gateway124, a UPF126, and an AMF/SMF128, while the SD-WAN PoP105includes the multi-tenant 5G unified data management (UDM) for storing user data. It should be noted that the components illustrated in the architecture inFIG.1are components associated with a 5G network, and other embodiments of the invention can include components associated with other network types, such as MMEs (mobility management entities) for 4G and LTE solutions. The user devices130and135, in some embodiments, can include mobile telephones, tablets, computers (e.g., desktop computers, laptop computers, etc.), and any other devices belonging to users of the PMN that have the ability to connect to the internet. The access nodes122and152in the local RANs120and150receive control plane and data plane data message traffic from the user devices130-135. The access nodes122and152are physical access points (i.e., base stations, radio units, etc.) that are part of the RAN, in some embodiments, and provide user devices with access to the PMN. For instance, for LTE (long term evolution) networks, the access nodes122and152are eNodeBs (eNBs), while for 5G networks, the access nodes122and152are gNBs (next generation NodeBs). In some embodiments, the data message traffic is transmitted from the user devices130and135to the access nodes122and152via radio signals. Upon receiving the data message traffic from the user devices130-135, the access nodes122and152process the data message traffic, convert the radio signals into collections of bits and bytes, encapsulate the data message traffic (i.e., the bits and bytes) and forward the encapsulated traffic to the security gateways124and154via tunnels between the access nodes122and152and the security gateways124and154.
The tunnels through which the access nodes122and152forward data message traffic to the security gateways124and154are IPsec tunnels established by the security gateways124and154, according to some embodiments. In some embodiments, the data message traffic is encapsulated as GPRS (general packet radio service) tunneling protocol (GTP) before being forwarded through the IPsec tunnels between the access nodes and security gateways. The security gateways124and154decapsulate the received encapsulated data message traffic, and depending on the traffic type (i.e., control traffic or dataplane traffic), forward the data message traffic as IP traffic to either their respective UPFs126and156, or their respective AMFs/SMFs128and158. While illustrated as a single element, it should be noted that the AMF and SMF, as will be described below, are separate functions that perform different functionalities of the control plane and in other embodiments of the invention may be illustrated separately. The UPFs126and156handle data plane traffic, according to some embodiments, while the AMFs/SMFs128and158are responsible for control plane traffic. Additional details regarding the UPFs, AMFs, and SMFs will be provided below by reference toFIGS.4-5. In order to send traffic to destinations external to the customer premises101and102, and, in some embodiments, to certain destinations within the customer premises101and102, the UPFs126and156and the AMFs/SMFs128and158forward the data message traffic to the SD-WAN edge routers140and145. The SD-WAN edge routers140and145each connect to the SD-WAN gateway110to provide connections between the customer premises101and102and the SD-WAN PoP105, as shown. 
The SD-WAN edge routers140and145of some embodiments establish DMPO tunnels to the SD-WAN gateway110, as well as to other SD-WAN gateways (not shown), and, in some embodiments, with each other and other SD-WAN components (e.g., other SD-WAN edge routers at other physical locations of the entity, and SD-WAN hubs). In some embodiments, once an SD-WAN edge router140-145detects a WAN (wide area network) link (e.g., a WAN link between the SD-WAN edge router and an SD-WAN gateway), the SD-WAN edge router140-145establishes a DMPO tunnel with the SD-WAN gateway110and runs bandwidth tests using short bursts of bi-directional traffic with the SD-WAN gateway110(or a different SD-WAN gateway (not shown) that is closer in proximity to the SD-WAN edge router than the SD-WAN gateway110). Because the SD-WAN gateway110is deployed at the SD-WAN PoP105, the SD-WAN gateway110can identify real public IP addresses of WAN links in cases where a NAT (network address translation) or PAT (port address translation) device sits in front of the SD-WAN edge router, or where the WAN link is a private link, according to some embodiments. Once the DMPO tunnels are established, in some embodiments, DMPO performs uni-directional performance metric measurements for metrics such as loss, latency, and jitter, for every data message on every DMPO tunnel between two DMPO endpoints (i.e., the SD-WAN edge routers140-145and the SD-WAN gateway110). In some embodiments, the DMPO tunnel header used to encapsulate each packet sent through the DMPO tunnel includes performance metrics such as sequence number and timestamp to enable the DMPO endpoints to identify lost packets and out-of-order packets, as well as to compute jitter and latency bi-directionally.
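The per-packet measurement idea described above (a sequence number to detect gaps and reordering, a timestamp to estimate latency) can be sketched generically. This is an illustrative toy, not VMware's DMPO implementation; the class and field names are assumptions.

```python
# Toy receiver-side tracker: a sequence gap is counted as presumed loss,
# a smaller-than-expected sequence number as an out-of-order arrival,
# and the header timestamp yields a one-way latency sample (which
# assumes synchronized endpoint clocks). Illustrative only.

from dataclasses import dataclass

@dataclass
class TunnelHeader:
    seq: int           # per-tunnel sequence number
    send_ts_ms: float  # sender timestamp (ms)

class MetricTracker:
    def __init__(self):
        self.expected_seq = 0
        self.lost = 0
        self.out_of_order = 0
        self.latencies_ms = []

    def on_packet(self, hdr: TunnelHeader, recv_ts_ms: float):
        if hdr.seq > self.expected_seq:
            self.lost += hdr.seq - self.expected_seq  # gap => presumed loss
            self.expected_seq = hdr.seq + 1
        elif hdr.seq < self.expected_seq:
            self.out_of_order += 1                    # late arrival
        else:
            self.expected_seq += 1
        self.latencies_ms.append(recv_ts_ms - hdr.send_ts_ms)
```

Jitter could then be derived from the variation between successive latency samples; a real implementation would also age out stale state and handle sequence-number wraparound.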
These performance metrics are communicated between the DMPO endpoints on the order of every 100 ms, according to some embodiments, and when there is no active data message traffic being sent through the DMPO tunnels, the DMPO endpoints instead send active probes every 100 ms, or every 500 ms after a certain period (e.g., 5 minutes) of no high-priority data message traffic. Additionally, for any private WAN links for which a class of service (CoS) agreement is defined, DMPO is configured in some embodiments to take the CoS agreement into account for all traffic steering decisions (e.g., monitoring traffic, data plane application traffic, etc.). DMPO can also add Forward Error Correction (FEC) for certain classes of traffic, according to some embodiments. Once the SD-WAN gateway110receives data message traffic from the SD-WAN edge routers140and145, the SD-WAN gateway110determines whether the data message traffic is application traffic or control plane traffic, and either forwards the data message traffic to the internet or to the UDM112. In some embodiments, before forwarding internet traffic to the internet, the SD-WAN gateway110forwards the traffic to the SASE services114for processing. The SASE services114, in some embodiments, are provided by a third-party vendor and can include services such as firewall as a service, secure web gateway, zero-trust network access, and other threat detection services. FIG.2conceptually illustrates a process of some embodiments performed to implement an SD-PMN such as the SD-PMN described above forFIG.1. The process200starts when the process deploys (at210) a security gateway, UPF, AMF, and SMF to each physical location in a set of physical locations across which the SD-PMN is being deployed.
Each physical location of the entity also includes an SD-WAN edge router deployed to the physical location in order to provide a connection between the physical location and SD-WAN PoPs of the provider of the SD-PMN, as well as between the physical location and other physical locations and hub datacenters of the entity. As mentioned above, the SD-WAN edge routers and SD-WAN gateways utilize SD-WAN services such as DMPO, in some embodiments, to optimize the data message traffic sent between SD-WAN devices implemented in the SD-PMN. The process200deploys (at220) a subscriber database (i.e., UDM) that stores data associated with users of the PMN that belong to the particular entity for which the PMN is being deployed to each of a set of SD-WAN PoPs belonging to a provider of the PMN. For instance, in the architecture diagram100described above, the UDM112is located in the PoP105while the security gateways124and154, UPFs126and156, and AMFs/SMFs128and158are located on the customer premises101and102. The user data stored by the UDM, in some embodiments, includes customer profile information, customer authentication information, and, in some embodiments, a set of encryption keys for the information. As the UDM (i.e., instances of the UDM) is deployed to each PoP, the SD-WAN edge routers can connect to any SD-WAN gateway for any of the SD-WAN PoPs. For each SD-WAN edge router deployed at each physical location in the set of physical locations across which the SD-PMN is being deployed, the process200provides a list of SD-WAN gateways to which the SD-WAN edge router can connect to access elements of the SD-WAN PoP. For example,FIG.3conceptually illustrates an SD-PMN300that includes multiple branch sites310,312, and314and multiple SD-WAN PoPs320,322, and324. Each branch site310-314includes a respective SD-WAN edge router330,332, and334and a set of machines350,352, and354(e.g., user devices and other network devices deployed to the branch sites), as shown.
Each of the SD-WAN PoPs320-324includes a respective SD-WAN gateway340,342, and344, and a set of resources360, which are the same for each SD-WAN PoP. Each of the branch sites310-314includes a respective gateway list370,372, and374provided to the SD-WAN edge routers330-334for use in connecting their respective branch sites310-314to resources360located in the PoPs320-324, with each gateway list370-374including a primary gateway and a secondary gateway. Accordingly, each of the SD-WAN edge routers330-334has established a connection (e.g., DMPO tunnel)380to their primary SD-WAN gateways and a connection385to their secondary SD-WAN gateways. As will be described in the embodiments further below, the secondary gateways are for use by the SD-WAN edge routers330-334for failover. Returning to the process200, after the gateway lists have been provided to the SD-WAN edge routers, the process200ends. As mentioned above regarding the architecture diagram100ofFIG.1, the UPFs126and156handle data plane traffic, while the AMFs/SMFs128and158are responsible for control plane traffic.FIG.4conceptually illustrates an architecture diagram showing a data plane flow through an SD-PMN of some embodiments. The diagram400includes a customer premises401and an SD-WAN PoP405. The customer premises401includes user devices430, a local RAN420that includes at least one access node422, a security gateway424, a UPF426, an AMF/SMF428, an SD-WAN edge440, and on-premise destinations450. The SD-WAN PoP405includes an SD-WAN gateway410, a multi-tenant 5G UDM412, and SASE services414. For user devices430that are already authenticated with the SD-PMN, the data message traffic sent from these devices is transmitted via radio waves from the user devices430to the access nodes422that are part of the local RAN420. 
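The primary/secondary gateway-list behavior described for FIG. 3 amounts to ordered failover: try the primary gateway, fall back to the secondary. A minimal sketch, with invented names and a caller-supplied reachability check:

```python
# Illustrative failover selection over an ordered gateway list
# (primary first, then secondary). Names are hypothetical.

def pick_gateway(gateway_list, is_reachable):
    """Return the first reachable gateway in the list, or None."""
    for gw in gateway_list:
        if is_reachable(gw):
            return gw
    return None

# Usage: because every PoP hosts the same control plane components
# behind a common IP address, failing over to the secondary gateway
# reaches an equivalent set of resources.
gateways = ["gw-primary", "gw-secondary"]
chosen = pick_gateway(gateways, lambda gw: gw == "gw-secondary")
```

In practice the reachability check would be driven by the DMPO tunnel health probes rather than a synchronous test.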
The access node422processes the radio waves carrying the received data message traffic, converts the radio waves into bits and bytes, and encapsulates the data to be transmitted to the security gateway424via an IPsec tunnel established by the security gateway424between the access node422and security gateway424. When the data message traffic (i.e., the encapsulated bits and bytes) arrives at the security gateway424, the security gateway424decapsulates the traffic and forwards the decapsulated traffic to the UPF426. In some embodiments, the decapsulated traffic is GTP traffic. When the UPF426receives the GTP traffic from the security gateway424, the UPF426then removes the GTP header from the data message traffic and sends the data message traffic out as IP traffic. In some embodiments, data messages having destinations within the customer premises401are sent directly from the UPF426to their intended destinations. In other embodiments, the UPF426sends the data message traffic to the SD-WAN edge router440, which then forwards the data message traffic to, e.g., an on-premise destination450. Similarly, for data message traffic associated with destinations external to the customer premise401, such as the SD-WAN PoP405, the UPF426forwards the data message traffic to the SD-WAN edge router440, which forwards the traffic to a next-hop forwarding element, such as the SD-WAN gateway410, through a DMPO tunnel established between these endpoints, which provides access to the SD-WAN PoP405, and also forwards traffic having destinations external to the SD-PMN, such as the internet460. The SD-WAN gateway410, in some embodiments, determines whether the received data message traffic requires processing by, e.g., the SASE services414, or any other packet processing pipelines deployed to the SD-WAN PoP405(e.g., middlebox service engines).
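The layered decapsulation described above (security gateway strips the IPsec layer, UPF strips the GTP header, inner IP packet is forwarded) can be shown with a toy pipeline. The string prefixes below are purely symbolic stand-ins for real IPsec/GTP encodings:

```python
# Toy sketch of the data-plane decapsulation pipeline. The "IPSEC|"
# and "GTP|" markers are symbolic, not real header formats.

def encapsulate(payload: bytes) -> bytes:
    """Access-node side: wrap the inner IP packet in GTP, then IPsec."""
    return b"IPSEC|" + b"GTP|" + payload

def security_gateway_decap(packet: bytes) -> bytes:
    """Security gateway: strip the IPsec layer, yielding GTP traffic."""
    assert packet.startswith(b"IPSEC|")
    return packet[len(b"IPSEC|"):]

def upf_decap(gtp_packet: bytes) -> bytes:
    """UPF: remove the GTP header and emit plain IP traffic."""
    assert gtp_packet.startswith(b"GTP|")
    return gtp_packet[len(b"GTP|"):]

inner = upf_decap(security_gateway_decap(encapsulate(b"ip-packet")))
```

The resulting IP traffic is then routed either to an on-premises destination or to the SD-WAN edge router for external destinations, as the text describes.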
For data message traffic that does require processing by the SASE services414, or other processing performed within the SD-WAN PoP405, the SD-WAN gateway410forwards the data message traffic for processing, and then subsequently forwards the processed data message traffic to its destination, such as via the internet460. FIG.5conceptually illustrates an architecture diagram showing a control plane flow through an SD-PMN of some embodiments. Like the diagram400, the diagram500includes a customer premises501and an SD-WAN PoP505. The customer premises501includes user devices530, a local RAN520that includes at least one access node522, a security gateway524, a UPF526, an AMF528a, an SMF528b, and an SD-WAN edge540. The SD-WAN PoP505includes an SD-WAN gateway510, a multi-tenant 5G UDM512, and SASE services514. When a user device530attempts to connect to the SD-PMN, the user device530tries to associate with an access node522that is part of the local RAN520to request to join the network, according to some embodiments. The access node522then communicates with the security gateway524with the intention of the communications reaching the AMF528a. The AMF528ais responsible for authenticating the user device530. In some embodiments, the AMF528aperforms the user authentication by performing a look-up in a database that stores user information to determine who is trying to join the network, as well as what type of service should be provided to the user device530upon authentication. Accordingly, the AMF528asends a control message (e.g., an authentication request) destined to an IP address associated with the UDM512to the SD-WAN edge router540, which sends the control messages via a DMPO tunnel to the SD-WAN gateway510at the SD-WAN PoP505that hosts the UDM512. The IP address associated with the UDM512is a common IP address at every single SD-WAN PoP connected by the SD-PMN, according to some embodiments.
The use of a common IP address for each SD-WAN PoP will be described in further detail by reference to embodiments below. Upon receiving the authentication request, the UDM512performs user authentication by identifying in its database, based on an identifier associated with the user, which user is trying to join, whether the user is authorized to join the network, and the kind of service that should be provided to the user if the user is authenticated. For user devices that are not authorized, the UDM512indicates to the AMF528athat the device is not authorized, and the AMF528asubsequently provides that response back to the unauthorized user device. Otherwise, when the UDM512determines that the user is allowed to join the network, the UDM512sends a reply to the AMF528athat indicates that the user device530is authorized to use the network and provides any necessary information about the user and/or user device. In response to a user device530being authorized, the SMF528bthen communicates with the UPF526to direct the UPF526to set up a bearer for the newly authorized user device530. The UPF526then provides the user device with an IP address, enables a particular QoS (quality of service) for the user device's communications, and indicates which subnets (e.g., VLANs) to put the user device's traffic on. In some embodiments, when applicable, the SMF528bdirects the UPF526to also create certain network slices for the newly authorized user device. Once the UPF526has completed these steps, the user device530can communicate using the SD-PMN. In some embodiments, a different control plane architecture is utilized for implementing an SD-PMN.FIG.6conceptually illustrates a second example of an architecture diagram of a multi-tenant PMN of some embodiments. The architecture diagram600includes customer premises601and602for first and second customers respectively, as well as an SD-WAN PoP605. 
Each customer premises601and602includes respective user devices630and635, a local RAN620and650that includes at least one access node622and652, a UPF624and654, and an SD-WAN edge router640and645. The SD-WAN PoP605includes an SD-WAN gateway610, SASE services618, a security gateway612, and a multi-tenant control plane616that includes an AMF660, SMF665, and UDM614. That is, unlike the embodiments described above, the security gateway, AMF, and SMF are deployed to the PoPs rather than to each customer premise. The architecture diagram600will be described in more detail below by reference toFIG.7. FIG.7conceptually illustrates a process of some embodiments for establishing an SD-PMN such as the SD-PMN illustrated in the diagram600. The process700starts when the process deploys (at710) a UPF to each physical location in a set of physical locations belonging to a particular entity for which the SD-PMN is being established. The UPF624and654operates as an interconnecting point between the local RAN620and650and components of the data network (e.g., the SD-WAN edge router640and645), performs routing and forwarding for data messages sent on the control plane and data plane, and performs various other functions such as setting up bearers for newly authorized user devices. For instance, because the AMF660and SMF665are deployed to the SD-WAN PoP605instead of to each customer premise601and602, the UPF624and654receives and forwards control data messages, such as requests from new user devices looking to access the SD-PMN. When a new user device attempts to access the SD-PMN, the request is received by the access node622and652that is part of the local RAN620and650, which converts the request from radio waves to bits and bytes, encapsulates the converted request, and forwards the encapsulated request via a GTP tunnel to the UPF624and654. 
The UPF624and654sends the request as IP traffic to the SD-WAN edge router640and645, which forwards the request via a DMPO tunnel between the SD-WAN edge router640and645and the SD-WAN gateway610. The SD-WAN gateway610forwards the request to the security gateway612, which provides the request to the multi-tenant 5G control plane616. Returning to the process700, the process deploys (at720) a security gateway, AMF, SMF, and subscriber database that stores data associated with users of the SD-PMN that belong to the particular entity (and any other entity that uses the multi-tenant SD-PMN) to each of a set of SD-WAN PoPs belonging to a provider of the SD-PMN. That is, rather than just deploying the UDM614to the SD-WAN PoPs605, the AMF660and SMF665are also deployed to the SD-WAN PoPs605. As such, to continue to process an authentication request for a new user, the security gateway612provides the request to the AMF660that is deployed to the multi-tenant 5G control plane616. Upon receiving an authentication request, the AMF660performs a lookup in the UDM614to determine whether the requesting device is allowed to access the SD-PMN. For requesting devices that are not authorized, the AMF660sends a response back to the UPF624and654via the security gateway612, SD-WAN gateway610, and SD-WAN edge640and645, which then notifies the device (i.e., via the local RAN620and650) that it is not authorized to access the SD-PMN. For devices that are authorized, the SMF665then communicates with the UPF624and654to direct the UPF to set up a bearer for the newly authorized and authenticated user device by providing the user device with an IP address, enabling a particular QoS for the user device's communications, and indicating which subnet(s) to put the user device's traffic on. 
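The authentication and bearer-setup flow described above can be sketched as follows. This is a minimal illustration, not the actual implementation: the subscriber records, device identifiers, IP range, QoS labels, and class names are all hypothetical stand-ins.

```python
from dataclasses import dataclass

# Hypothetical subscriber records standing in for the UDM's database;
# the identifiers, QoS labels, and VLAN names are illustrative.
SUBSCRIBERS = {
    "device-1": {"authorized": True, "qos": "high", "vlans": ["VLAN1", "VLAN2"]},
    "device-2": {"authorized": False},
}

@dataclass
class Bearer:
    ip_address: str
    qos: str
    vlans: list

class Upf:
    """Stand-in UPF that sets up bearers for newly authorized devices."""

    def __init__(self):
        self._next_host = 10
        self.bearers = {}

    def setup_bearer(self, device_id, qos, vlans):
        # Provide the device an IP address, enable the indicated QoS, and
        # pin the device's traffic to the indicated subnets (VLANs).
        bearer = Bearer(f"10.0.0.{self._next_host}", qos, vlans)
        self._next_host += 1
        self.bearers[device_id] = bearer
        return bearer

def authenticate_and_attach(device_id, upf):
    """AMF-side flow: look the device up, then have the SMF direct the
    UPF to set up a bearer when the device is authorized."""
    record = SUBSCRIBERS.get(device_id)
    if record is None or not record["authorized"]:
        return None  # the device is notified that it is not authorized
    return upf.setup_bearer(device_id, record["qos"], record["vlans"])

upf = Upf()
bearer = authenticate_and_attach("device-1", upf)  # authorized: bearer set up
denied = authenticate_and_attach("device-2", upf)  # not authorized: None
```

The sketch mirrors the split of responsibilities in the text: the lookup corresponds to the AMF/UDM authorization check, and the bearer setup corresponds to the UPF actions directed by the SMF.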
For each SD-WAN edge device deployed at each physical location in the set of physical locations, the process700provides (at730) a list of SD-WAN gateways to which the SD-WAN edge router can connect in order to access the SD-WAN PoPs. While only one SD-WAN gateway610at one SD-WAN PoP605is illustrated in the diagram600, other embodiments of the invention include multiple SD-WAN PoPs each having a respective SD-WAN gateway. Each SD-WAN edge router at each branch site (i.e., customer premise) then receives a list indicating at least a primary and secondary SD-WAN gateway for the SD-WAN edge router to connect to in order to access services and components of the SD-WAN PoPs, such as the control plane deployed to the PoPs. Following730, the process700ends. The distributed control plane architecture described above forFIG.6is applicable for both small cell and disaggregated RANs, according to some embodiments.FIG.8conceptually illustrates an architecture diagram in which a multi-tenant SD-PMN is implemented across customer premises of some embodiments that have a disaggregated RAN. As shown, the diagram800includes two customer premises801and802and an SD-WAN PoP805. The SD-WAN PoP805includes an SD-WAN gateway810, security gateway812, multi-tenant 5G control plane816, and SASE services818. Similar to the diagram600, the multi-tenant 5G control plane816includes a UDM814, an AMF870, and an SMF875. Each customer premises801and802includes a respective local RAN820having at least one access point, a distributed unit (DU)822and852, a central unit (CU)824and854, a configuration server826and856, X86 microprocessors828and858, a UPF860and865, and an SD-WAN edge router840and845. On each customer premise801and802, the local RANs820and850act as the radio units (RUs) for the disaggregated RAN and enable geographical coverage using radio functions. 
The DUs822and852realize baseband processing functions across the different physical locations spanned by the SD-PMN as virtualized network functions that run on hardware, and allow for possible hardware acceleration. The CUs824and854centralize data message processing functions, and, like the DUs, realize these functions as virtualized network functions that run on hardware. The configuration servers826and856receive configuration data for the disaggregated RAN from a manager (e.g., management server) for the SD-PMN. When a new user device attempts to access the SD-PMN, the user device communicates via radio waves with the local RAN820and850, which converts the radio waves to bits and bytes, encapsulates the bits and bytes, and transmits the encapsulated bits and bytes through a tunnel to the UPF860and865. The UPF860and865then transmits the authentication request to the SD-WAN edge router840and845as IP traffic, and the SD-WAN edge router840and845uses the DMPO tunnel to the SD-WAN gateway810to forward the request to the SD-WAN PoP805. The AMF870performs a look-up with the UDM814to determine whether the requesting user device is authorized to use the SD-PMN. For devices that are not authorized, the AMF870sends a response back toward the UPF860to indicate the requesting device is not authorized, and the UPF860notifies the requesting device that the request is denied. For devices that are authorized, the SMF875then communicates with the UPF860to have the UPF860set up a bearer for the new user device to enable the new user device to use the SD-PMN. The UPF860and865then provides the user device with an IP address, enables a particular QoS (quality of service) for the user device's communications, and indicates which subnets (e.g., VLANs) to put the user device's traffic on. 
It should be noted that in the embodiments described above and below, for the core network, the data plane components are located on customer premises, the control plane components are split between customer premises and the SD-WAN PoPs, and, as will be described below, the management plane components are centrally located in the cloud. Additionally, for the RAN, the data plane components and control plane components are located on customer premises, and the management plane components are centrally located in the cloud (or multiple clouds such as in the case of multiple RAN vendors as will be described further below). As will be described in some embodiments below, the core network management plane components and the RAN management plane components of some embodiments are located in separate clouds that connect to a unified and centralized management system for the SD-PMN. In some embodiments, the IP address assigned to the control plane components (i.e., the control plane components for the core network) deployed to the SD-WAN PoPs is a common IP address that is used for those components across all of the PoPs, regardless of location or customer. In other words, all of the control plane components for the multi-tenant SD-PMN are reachable at the same IP address in each SD-WAN PoP. As a result, the SD-PMN is able to provide resiliency and seamless failover between SD-WAN PoPs.FIG.9conceptually illustrates a failover process performed by an SD-WAN edge router of some embodiments when connecting to an SD-WAN PoP. The process900will be described in detail below with references toFIG.10, which conceptually illustrates a set of diagrams showing failover from a first PoP to a second PoP. The process900starts when the SD-WAN edge router receives (at910) a list of SD-WAN gateways associated with SD-WAN PoPs of the SD-PMN to which the SD-WAN edge router can connect to access an SD-WAN PoP. 
In some embodiments, when the SD-PMN is established, each SD-WAN edge router at each branch location of an entity for which the SD-PMN is implemented receives a gateway list that specifies at least a primary SD-WAN gateway and secondary SD-WAN gateway to which the SD-WAN edge router can connect to access an SD-WAN PoP. In the diagrams1001and1002, for instance, the SD-WAN edge router1005at the branch site1015is illustrated as having a gateway list1010specifying gateway 11020as a primary gateway and gateway 21025as a secondary gateway. In some embodiments, the gateway lists are determined based on proximity of the gateways to the SD-WAN edge router (i.e., the primary gateway being the closest gateway to the edge router). The process900establishes (at920) SD-WAN tunnels with primary and secondary SD-WAN gateways identified in the received list. In the diagram1001, the SD-WAN edge router1005has a first tunnel1030to its primary SD-WAN gateway1020and a second tunnel1035to its secondary SD-WAN gateway1025(drawn with a dashed line to indicate it is the tunnel to the secondary gateway). The process900uses (at930) the SD-WAN tunnel established with the primary SD-WAN gateway to forward data messages addressed to a particular IP address associated with the control plane components (i.e., core) located in the SD-WAN PoP. As illustrated in the diagrams1001and1002, both the first PoP1040and the second PoP1045include two active instances of the core1050. Each instance of the core1050is identical to each other instance of the core1050, is stateless, and is reachable at the same common IP address, as described in the embodiments above. As such, in some embodiments, if one active core in a PoP becomes unavailable, the SD-WAN gateway for that PoP can automatically switch to the other active core for forwarding data messages associated with the common IP address assigned to the core, thereby providing resiliency and seamless failover within each PoP. 
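The gateway-list selection behavior described so far can be sketched as follows; this is a simplified model under stated assumptions, and the class name and gateway labels are hypothetical.

```python
class SdWanEdgeRouter:
    """Sketch of gateway selection: the edge router keeps tunnels to a
    primary and a secondary SD-WAN gateway from its gateway list, and
    forwards traffic for the core's common IP address over the tunnel
    to the primary gateway while that tunnel is up."""

    def __init__(self, gateway_list):
        self.primary, self.secondary = gateway_list[0], gateway_list[1]
        # Tunnel liveness as reported by continuous DMPO monitoring.
        self.tunnel_up = {self.primary: True, self.secondary: True}

    def active_gateway(self):
        # Fail over to the secondary gateway when the primary tunnel is
        # down (or in brownout), as detected by DMPO monitoring.
        if self.tunnel_up[self.primary]:
            return self.primary
        return self.secondary

edge = SdWanEdgeRouter(["gateway-1", "gateway-2"])
first = edge.active_gateway()        # primary gateway while its tunnel is up
edge.tunnel_up["gateway-1"] = False  # monitoring detects a tunnel outage
second = edge.active_gateway()       # traffic shifts to the secondary gateway
```

Because every core instance is stateless and reachable at the same common IP address, the edge router in this sketch does not need to change the destination address when it switches gateways, which is what makes the failover seamless.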
The process900determines (at940) that the SD-WAN tunnel between the SD-WAN edge router at the branch location and the primary SD-WAN gateway is down. In the diagram1002, for instance, the tunnel1030from the SD-WAN edge router1005to the SD-WAN gateway1020is indicated as being down. The SD-WAN tunnel between the SD-WAN edge router and the SD-WAN gateway is a DMPO tunnel, according to some embodiments. Accordingly, in some embodiments, DMPO detects the outage, or brownout (e.g., when a particular link cannot provide SLA for a particular application), while performing its continuous monitoring, and can alert the SD-WAN edge router of the tunnel failure to cause the SD-WAN edge router to use the secondary SD-WAN gateway for reaching a particular IP address in an SD-WAN PoP. Based on the determination that the SD-WAN tunnel to the primary SD-WAN gateway is down, the process900uses (at950) the SD-WAN tunnel established with the secondary SD-WAN gateway to forward the data message to the destination IP address located in the second SD-WAN PoP. As illustrated in the diagram1002, the tunnel1035is now shown as a solid line to indicate this tunnel is now the active tunnel being used by the SD-WAN edge router1005to connect its branch site1015to the core1050. In some embodiments, the SD-WAN edge router1005continues to use the tunnel1035to the secondary SD-WAN gateway1025at the second PoP1045to reach the core1050until the tunnel1030is back up and running. In other embodiments, the SD-WAN edge router1005uses the tunnel1035until the tunnel1035experiences an outage or brownout. Following950, the process900ends. In some embodiments, the SD-PMN is managed and configured by a centralized management plane.FIG.11conceptually illustrates a diagram of an SD-PMN having a centralized management plane, in some embodiments. 
As shown, the diagram1100includes a central orchestrator1110located in a cloud1105, one or more RAN management OAM (operations, administration, and maintenance) servers1114located in a cloud1112, a 5G core OAM server1118located in a cloud1116, an on-premise enterprise location1120, an SD-WAN PoP1122, and clouds1124. The on-premise enterprise location1120includes endpoints1130with SIM-enabled modems, a small cell RAN1132, edge applications1134, an SD-WAN edge router1136, and a 4G/5G packet core1138. The SD-WAN PoP1122includes SASE services1140, an SD-WAN gateway1142, and a 5G core1144that includes the UDM for the SD-PMN. The clouds1124include a public cloud1150that hosts multiple applications1152, and a private cloud datacenter1154that includes an SD-WAN edge router1156for connecting to the SD-WAN PoP1122and other elements external to the private datacenter1154. Because the components of the RAN and core are distributed, in some embodiments, with the RAN on each customer's premises and the core split between each customer's premises and the SD-WAN PoPs of the SD-PMN provider, information associated with each portion of the RAN and core needs to be centralized into one management layer in order for, e.g., certain parameters to be assigned. Accordingly, the PMNO1110, in some embodiments, centrally stores tenant-specific 5G core and access point (i.e., RAN) information and pushes this information to respective OAM servers1114and1118as configuration updates are made. In some embodiments, the OAM servers1114and1118are deployed across multiple administrative domains in one or more public clouds. Also, in some embodiments, the PMNO1110is configured to support multiple RAN vendors, including multiple RAN vendors for a single customer premise. In some such embodiments, the PMNO1110invokes vendor-specific APIs against the corresponding RAN OAM server based on both the type of access point and the site at which that access point is deployed. 
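The vendor-specific API dispatch can be sketched as follows. The customer, site, vendor, and endpoint names below are all hypothetical stand-ins, not values from the described system.

```python
# Hypothetical association table built during onboarding, when RAN
# vendor templates are selected: (customer, site) -> RAN vendor.
SITE_VENDORS = {
    ("customer-a", "site-1"): "vendor-1",
    ("customer-a", "site-2"): "vendor-2",
}

# Illustrative per-vendor RAN OAM server endpoints.
OAM_ENDPOINTS = {
    "vendor-1": "https://ran-oam-vendor-1.example.net",
    "vendor-2": "https://ran-oam-vendor-2.example.net",
}

def ran_oam_target(customer, site):
    """Resolve which vendor-specific RAN OAM server a configuration
    change for a given customer site is pushed to, based on both the
    access point vendor and the site where it is deployed."""
    vendor = SITE_VENDORS[(customer, site)]
    return vendor, OAM_ENDPOINTS[vendor]

# A RAN change at site-2 is routed to the second vendor's OAM server.
vendor, endpoint = ran_oam_target("customer-a", "site-2")
```

The same customer can thus appear against multiple vendors, which is how a single customer premise with access points from more than one RAN vendor would be handled in this sketch.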
In some embodiments, all sessions between the on-premise components, such as the RAN1132and 4G/5G packet core1138, and their respective OAM servers in the cloud(s), such as the RAN OAM server(s)1114located in the cloud1112and the 5G core OAM server1118located in the cloud1116, are initiated by the on-premise components rather than by the cloud components. That is, the cloud components will not establish new connections, according to some embodiments. Additionally, the connection1160between the RAN1132and RAN OAM server(s)1114, and the connection1165between the 4G/5G packet core1138and 5G core OAM1118are encrypted connections, according to some embodiments. FIG.12conceptually illustrates a diagram of the OAM architecture of some embodiments for a multi-tenant SD-PMN. The diagram1200includes a PMNO1205, RAN OAM servers1210aand1210bfor a first vendor, RAN OAM servers1212aand1212bfor a second vendor, core OAM servers1214aand1214b, a load balancer1220for connecting to an access point1230of the first RAN vendor on a first customer's premise, a load balancer1222for connecting to an access point1234of the second RAN vendor on a second customer's premise, and a load balancer1224for connecting to the distributed 4G/5G packet core components including the 4G/5G packet core components1232on the first customer's premise, the 4G/5G packet core components1236on the second customer's premise, and the 4G/5G UDM1238located in the SASE PoP of the SD-PMN provider (e.g., an SD-WAN PoP). Each OAM server pair has full redundancy as illustrated and is configured as an active/active pair, according to some embodiments. It should be noted that in some embodiments, rather than terminating at the load balancers1220and1222, the connections from the access points1230and1234instead terminate at security gateways associated with the respective RANs. 
The security gateways in some such embodiments provide unique identifiers for each access point1230and1234to their respective RAN OAM servers1210a-1210band1212a-1212bso that each customer can be uniquely identified through the security gateway. As mentioned above, the PMNO1205is configured to support multiple RAN vendors based on preferences of the managed service provider (MSP). When a customer is onboarding, in some embodiments, an MSP can select one or more RAN vendor templates based on one or more deployment criteria. Based on the selected RAN vendor template(s), the PMNO1205associates the RAN OAM server's IP address with the customer and site, according to some embodiments. Once the associations have been made, any subsequent changes to the 5G core and/or RAN deployment from a user portal would trigger the PMNO1205, in some embodiments, to invoke vendor-specific APIs against the OAM server endpoints1210a-1214b. The parameters assigned by the centralized management system include TACs for different customer premise locations, and DNNs for the different data networks operating in the SD-PMN.FIG.13conceptually illustrates an architecture diagram of a centrally managed SD-PMN of some embodiments after TACs and DNNs have been assigned. As shown, the diagram1300includes multiple sites1330,1332, and1334each having a respective UPF1340,1342, and1344deployed for the site and at least one respective access point1350,1352, and1354. In addition to the multiple sites, multiple multi-tenant 5G control planes1320,1322, and1324are deployed to PoPs1360,1362, and1364throughout the SD-PMN and connect to a single core1310that is centrally managed by a PMN orchestrator1305. The single core1310is a master source for the control plane instances1320-1324and syncs with each control plane instance1320-1324as indicated. 
The diagram1300will be further described below by reference toFIG.14, which conceptually illustrates a process of some embodiments for centrally managing an SD-PMN. The process1400is performed in some embodiments by a centralized management server for the SD-PMN, such as the PMN orchestrator1305in the diagram1300. The process1400starts when for each physical location spanned by the SD-PMN, the process receives (at1410) a TAC defined for the physical location. In some embodiments, the PMN orchestrator1305is a server that provides a user interface (UI) through which a user (e.g., network administrator) can provide input such as TACs defined for physical locations spanned by the SD-PMN. The UI, in some embodiments, includes multiple selectable UI items for providing the input and configuring other aspects of the SD-PMN through, e.g., drop down menus, radio buttons, selection boxes, text fields, etc. For instance, in some embodiments, the UI includes a particular text field or set of text fields for defining TACs for each of the physical locations. The UI, in some embodiments, requires the TACs to follow a particular format, such as by using hexadecimal values that are two octets in length. In the diagram1300, each of the sites1330-1334includes a unique TAC. For example, the first site1330is assigned the TAC 315010:10008, the second site1332is assigned the TAC 315010:10009, and the last site1334is assigned the TAC 315010:10nnn. Each TAC is an identifier of the physical location area within the SD-PMN and is unique across all of the physical locations. The TACs, in some embodiments, are also associated with the access points1350-1354deployed in the respective physical locations such that the TACs can be used to identify a physical location and/or one or more access points in a physical location. For each data network running within the SD-PMN, the process1400receives (at1420) a DNN defined for the data network. 
In some embodiments, multiple VLANs (virtual local area networks) are implemented within the SD-PMN and are each assigned a respective DNN that spans all of the physical locations spanned by the SD-PMN. The UI provided by the management server of some embodiments includes multiple UI items for defining the DNNs in addition to the multiple UI items for defining the TACs as mentioned above. In some embodiments, a portion of the DNN is determined by the type of data network being named (e.g., "VLAN"), and a text field is provided to enable the user to further define the DNN by, e.g., adding a number or series of numbers. Because the data networks span all of the physical locations, each physical location of some embodiments may be associated with a particular data network and DNN or set of data networks and DNNs. For example, in the diagram1300, the first site1330is associated with DNNs VLAN1 and VLAN2, the second site1332is associated with DNNs VLAN1 and VLAN3, and the last site1334is associated with DNNs VLAN1 and VLANn, as illustrated. Because each data network (and associated DNN) spans all of the physical locations, VLAN1 is included in the DNNs associated with each site. While each site1330-1334is shown as having two associated DNNs, other sites in other embodiments can have more associated DNNs or fewer associated DNNs than illustrated. The process1400stores (at1430) the received TACs and DNNs in a core of the SD-PMN. The PMN orchestrator1305of some embodiments stores the received TACs and DNNs in the core1310, which syncs with the control plane instances1320-1324deployed to the PoPs1360,1362, and1364. Because each control plane instance1320-1324is the same as each other control plane instance1320-1324, each control plane instance1320-1324is aware of the TAC defined for and assigned to each physical site, as well as the DNNs defined for and assigned to each data network in the SD-PMN. 
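The central storage and syncing of TACs and DNNs can be sketched as follows. The class names and the sync mechanism are illustrative assumptions; only the master-source and uniqueness behavior comes from the text.

```python
class ControlPlaneInstance:
    """A control plane instance deployed to a PoP; it holds a copy of
    the TACs and DNNs synced from the central core."""

    def __init__(self):
        self.tacs = {}
        self.dnns = set()

class PmnCore:
    """Sketch of the core as the master source of TACs and DNNs,
    syncing its state to every control plane instance on each change."""

    def __init__(self, replicas):
        self.tacs = {}      # site -> TAC, unique across all locations
        self.dnns = set()   # each DNN spans every physical location
        self.replicas = replicas

    def set_tac(self, site, tac):
        if tac in self.tacs.values():
            raise ValueError("TACs must be unique across physical locations")
        self.tacs[site] = tac
        self._sync()

    def add_dnn(self, dnn):
        self.dnns.add(dnn)
        self._sync()

    def _sync(self):
        # After any change, every instance holds the same state as the
        # core, so each is aware of every site's TAC and every DNN.
        for replica in self.replicas:
            replica.tacs = dict(self.tacs)
            replica.dnns = set(self.dnns)

core = PmnCore([ControlPlaneInstance(), ControlPlaneInstance()])
core.set_tac("site-1", "315010:10008")  # TAC example from the diagram
core.add_dnn("VLAN1")
```

After the two calls, both replica instances carry the same TAC and DNN state as the core, mirroring the identical control plane instances described above.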
The process1400assigns (at1440) TACs and DNNs to user devices as the user devices join the network. In some embodiments, the UPF assigns the new user device's traffic to a particular DNN or set of DNNs. When a user device moves to a new primary location, in some embodiments, an updated TAC or set of TACs is subsequently assigned to the user device based on the new location of the user device. For instance, a user device of some embodiments can join the SD-PMN while being primarily located at the first site1330in the diagram1300, and later change its primary location to the last site1334. As such, the TAC associated with the user device would be updated from 315010:10008 to 315010:10nnn in some such embodiments. Following1440, the process1400ends. In some embodiments, an SD-PMN may be implemented as a fixed wireless network for a particular geographic location. For example, a rural area might require internet access, while managed service providers (e.g., internet service providers, mobile network service providers, etc.) decline to deploy service for that rural area due to factors such as cost to the service provider.FIG.15conceptually illustrates a process of some embodiments for implementing an SD-PMN as a fixed wireless network for a particular geographic location. The process1500will be described below with references toFIG.16, which conceptually illustrates the architecture of a fixed wireless network of some embodiments. The process1500starts when for each physical location in a set of physical locations within the particular geographic area, the process deploys (at1510) an SD-WAN customer premise appliance. In the architecture diagram1600, SD-WAN edge routers1610are deployed to homes1615within a particular geographic area for which the SD-PMN is being implemented. The SD-WAN edge routers1610are enabled with LTE (long term evolution), 4G, or 5G SIM, according to some embodiments. 
Each home premise within the geographic area for which the SD-PMN is being implemented, in some embodiments, will have an SD-WAN edge router to stretch the SD-PMN to that home premise. The process1500establishes (at1520), via a RAN, a set of connections between each SD-WAN customer premise appliance and at least one physical access point deployed to the particular geographic area. Each of the SD-WAN edge routers1610at the home premises1615has a connection to the network of towers1620in the diagram1600, as shown. The connections between the SD-WAN edge routers1610and the towers (i.e., access points)1620are LTE, 4G, or 5G connections via CBRS (citizens broadband radio service), in some embodiments. In some embodiments, the SD-WAN edge routers deployed to home premises (or other location types in the geographic area) are provided by the service provider of the SD-PMN and act as general residential broadband customer premise equipment (CPEs), while the access points (i.e. base stations, antennas, towers, etc.) deployed to the geographic area are provided by a third-party network equipment vendor and paid for by the entity for which the SD-PMN is being implemented. Examples of such entities can include corporations, educational campuses, and municipalities (e.g., towns, cities, etc.), according to some embodiments. The process1500connects (at1530) each physical access point deployed to the particular geographic area to a central aggregation point to enable internet access for the particular geographic area. The network of towers1620, for example, connects to the UPF1630that is part of the edge compute stack1625. The access points in the network of towers1620, in some embodiments, aggregate layer 2 (L2) traffic that terminates at the distributed UPF1630. From the UPF1630, the traffic goes through an aggregator SD-WAN edge router1635toward the SD-WAN gateway1650at the SASE PoP1640, which also includes a 5G core1655. 
The SASE PoP1640provides optimized internet connectivity, according to some embodiments. The process1500uses (at1540) the established set of connections to provide SD-PMN service to the particular geographic area. That is, once the connections have been established, user devices at the home premises1615can access the internet1660through the series of connections that start from the SD-WAN edge routers1610. The SD-PMN is controlled and managed as described in the embodiments above. Following1540, the process1500ends. The fixed wireless solution described above differs from the architectures described by other embodiments of the invention in that the fixed wireless network1600uses an SD-WAN edge router as customer premise equipment to connect to the network, thus creating two layers of SD-WAN. The first layer provides the connection (e.g., a VCMP (VeloCloud multipath) tunnel) between the SD-WAN edge routers1610at the home premises1615(or other premises types for other entities) and the SD-WAN gateway1650at the SD-WAN/SASE PoP1640of the SD-PMN provider, and the second layer provides the connection between the SD-WAN edge router1635that is part of the edge compute stack1625and the SD-WAN gateway1650at the SD-WAN/SASE PoP1640of the SD-PMN provider. Despite the two layers of SD-WAN, the SD-WAN edge router1635that is part of the edge compute stack1625does not utilize double tunneling and instead uses the first tunnel established by the SD-WAN edge routers1610on the underlay, while still protecting against failures on multiple WAN links and providing optimized fixed wireless access for WAN users. The tunnel established from the SD-WAN edge routers1610to the SD-WAN gateway1650is optimized, in some embodiments, using DMPO, like in the other embodiments described above. 
In some embodiments, the SD-WAN edge router1635only sends traffic for which DMPO or any other optimization is desired to the SD-WAN gateway1650, and sends any other traffic to its destination without going through the SD-WAN gateway1650, while in other embodiments, all traffic is sent to its destination through the SD-WAN gateway1650. In some embodiments, sending all traffic through the SD-WAN gateway1650is desirable based on the 5G core1655for the SD-PMN also being located in the PoP1640. Some embodiments of the invention implement joint orchestration across an SD-PMN, SD-WAN, and edge compute stacks to enable customers to describe desired edge applications to be deployed alongside connectivity and QoS requirements, and to use the provided descriptions to orchestrate the edge application, connectivity, and QoS requirements across the SD-PMN, SD-WAN, and edge compute stack to yield the desired end-to-end connectivity and QoS for the desired edge application and any devices accessing the desired edge application.FIG.17conceptually illustrates a diagram of a joint orchestration platform of some embodiments that orchestrates applications end-to-end across an SD-PMN, SD-WAN, and edge compute stack. As shown, the joint orchestration platform diagram1700includes a PMN orchestrator (PMNO)1705, an edge compute stack (ECS) management system1710, a RAN/Core management system1720, an SD-WAN management plane1730, and a SASE management plane1740. Examples of an ECS management system, in some embodiments, include VMware Telco Cloud Automation (TCA) and VMware Tanzu Kubernetes Grid (TKG). An example of both the SD-WAN management plane and the SASE management plane, in some embodiments, is the VeloCloud Orchestrator (VCO). As shown, the PMNO1705has northbound intent-based APIs1750(application programming interfaces) to collect edge application connectivity requirements. 
The northbound APIs1750can include edge application connectivity requirements such as edge application workload compute, storage, and networking requirements; device groups that need connectivity to the application and at what QoS level; and any QoS requirements needed between the edge application and the cloud. The northbound intent-based APIs1750are defined by a user (e.g., network administrator) that manages the joint orchestration platform1700, according to some embodiments. After receiving these requirements, the PMNO1705takes these requirements and uses southbound APIs in some embodiments to deploy the workload on the ECS, make a subscriber group for devices that need connectivity to the edge application and configure appropriate data networking for that subscriber group (e.g., VLAN, QoS, etc.), and program business policies in the orchestrator (e.g., management server) for the SD-WAN. As illustrated, the intent-based API1752is sent to the ECS management system1710, the intent-based API1754is sent to the RAN/Core management system1720, the intent-based API1756is sent to the SD-WAN management plane1730, and the intent-based API1758is sent to the SASE management plane1740. As a result of the joint orchestration platform1700, end-to-end SLAs (service-level agreements) can be met for the entire system. FIG.18illustrates a joint orchestration platform diagram that includes a set of example intent-based APIs for orchestrating a video analytics application, in some embodiments, across an SD-PMN, SD-WAN, and edge compute stack. The joint orchestration platform1800includes a PMNO1805, ECS management system1810, RAN/Core management system1820, SD-WAN management plane1830, and SASE management plane1840. In this example, the PMNO1805receives (i.e., from a network administrator) an intent-based API1850that is defined for a video analytics application “X”. 
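As one illustrative way to make the northbound requirements concrete, the payload collected by such an intent-based API could be modeled as a small schema covering workload compute/storage/networking needs, device groups with their QoS levels, and the edge-to-cloud QoS. The class and field names below are hypothetical, since the description does not define a wire format:

```python
from dataclasses import dataclass, field

# Hypothetical schema for a northbound intent payload; field names are
# illustrative stand-ins for the requirement categories described above.

@dataclass
class WorkloadRequirements:
    compute: str          # e.g. "high-priority"
    storage_gb: int
    networking: str       # e.g. "low-latency"

@dataclass
class DeviceGroupPolicy:
    group_name: str       # devices that need connectivity to the application
    qos_level: str        # QoS level to apply for this group

@dataclass
class EdgeApplicationIntent:
    app_name: str
    workload: WorkloadRequirements
    device_groups: list = field(default_factory=list)
    cloud_qos: str = "best-effort"   # QoS between the edge app and the cloud

intent = EdgeApplicationIntent(
    app_name="video-analytics-x",
    workload=WorkloadRequirements("high-priority", 100, "low-latency"),
    device_groups=[DeviceGroupPolicy("cameras", "high")],
    cloud_qos="high",
)
print(intent.app_name, intent.device_groups[0].qos_level)
```

Such a structure mirrors the three requirement categories the northbound APIs collect; the PMNO would translate each part into the southbound calls described next.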
The intent-based API1850includes a URL for the application's container image (i.e., an unchangeable, stand-alone, static file that includes executable code and well-defined assumptions about the application's run-time environment), specifications indicating the application requires high priority compute resources and high QoS connectivity between cameras and the application, specifications indicating the application requires high priority internet communication, and an indication that internet traffic associated with the application requires URL filtering. After the PMNO1805receives the intent-based API1850, the PMNO1805generates intent-based APIs for each of the management systems and planes1810-1840to implement and orchestrate the video analytics application “X”. Each requirement defined in the intent-based API1850corresponds to a different orchestration platform within the joint orchestration platform1800. For example, definitions relating to the application itself (e.g., the container image URL) and its compute resources are directed to the ECS management system, definitions regarding QoS requirements are directed to the RAN/Core management system, definitions regarding traffic priority are directed to the SD-WAN management plane, and definitions regarding URL filtering are directed to the SASE management plane, according to some embodiments. As such, based on the application container image URL and high priority compute resources defined in the intent-based API1850, the PMNO1805generates the intent-based API1852to direct the ECS management system1810to deploy the application image with required resources for the application. Based on the high QoS connectivity requirement between the cameras and the video analytics application defined in the intent-based API1850, the PMNO1805generates the intent-based API1854to direct the RAN/Core management system1820to define cameras as a subscriber group and apply the required QoS policy for the RAN and core. 
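The per-requirement routing described in this example — application and compute definitions to the ECS, device-group QoS to the RAN/Core, traffic priority to the SD-WAN plane, and URL filtering to the SASE plane — can be sketched as a dispatch table. The requirement keys and subsystem names below are illustrative, not an actual PMNO interface:

```python
# Hypothetical sketch of the PMNO grouping northbound requirements by the
# subsystem that must act on them, as described above.

ROUTING = {
    "container_image_url": "ECS management system",
    "compute_priority":    "ECS management system",
    "device_group_qos":    "RAN/Core management system",
    "traffic_priority":    "SD-WAN management plane",
    "url_filtering":       "SASE management plane",
}

def generate_southbound_intents(northbound_intent):
    """Group the northbound requirements by their target subsystem."""
    southbound = {}
    for requirement, value in northbound_intent.items():
        target = ROUTING[requirement]
        southbound.setdefault(target, {})[requirement] = value
    return southbound

intent = {
    "container_image_url": "https://example.invalid/video-analytics-x.img",
    "compute_priority": "high",
    "device_group_qos": {"cameras": "high"},
    "traffic_priority": "high",
    "url_filtering": True,
}
for subsystem, reqs in generate_southbound_intents(intent).items():
    print(subsystem, "<-", sorted(reqs))
```

Each resulting group corresponds to one southbound intent-based API (1852-1858 in the figure), so a single northbound description fans out to all four management systems and planes.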
Based on the requirement for high priority Internet communication defined for the application in the intent-based API1850, the PMNO1805generates the intent-based API1856to direct the SD-WAN management plane1830to identify application traffic and add a business policy for prioritizing that traffic. Lastly, based on the URL filtering for Internet traffic defined in the intent-based API1850, the PMNO1805generates the intent-based API1858to direct the SASE management plane1840to create a URL filtering security policy for the application and service chain cloud web security (CWS). After each management system and plane1810-1840receives its respective intent-based API1852-1858, the video analytics application “X” is implemented across the SD-PMN. Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer-readable storage medium (also referred to as computer-readable medium). When these instructions are executed by one or more processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer-readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer-readable media do not include carrier waves and electronic signals passing wirelessly or over wired connections. In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. 
Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs. FIG.19conceptually illustrates a computer system1900with which some embodiments of the invention are implemented. The computer system1900can be used to implement any of the above-described hosts, controllers, gateway, and edge forwarding elements. As such, it can be used to execute any of the above described processes. This computer system1900includes various types of non-transitory machine-readable media and interfaces for various other types of machine-readable media. Computer system1900includes a bus1905, processing unit(s)1910, a system memory1925, a read-only memory1930, a permanent storage device1935, input devices1940, and output devices1945. The bus1905collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computer system1900. For instance, the bus1905communicatively connects the processing unit(s)1910with the read-only memory1930, the system memory1925, and the permanent storage device1935. From these various memory units, the processing unit(s)1910retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s)1910may be a single processor or a multi-core processor in different embodiments. The read-only-memory (ROM)1930stores static data and instructions that are needed by the processing unit(s)1910and other modules of the computer system1900. The permanent storage device1935, on the other hand, is a read-and-write memory device. This device1935is a non-volatile memory unit that stores instructions and data even when the computer system1900is off. 
Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device1935. Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device1935, the system memory1925is a read-and-write memory device. However, unlike storage device1935, the system memory1925is a volatile read-and-write memory, such as random access memory. The system memory1925stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory1925, the permanent storage device1935, and/or the read-only memory1930. From these various memory units, the processing unit(s)1910retrieve instructions to execute and data to process in order to execute the processes of some embodiments. The bus1905also connects to the input and output devices1940and1945. The input devices1940enable the user to communicate information and select commands to the computer system1900. The input devices1940include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices1945display images generated by the computer system1900. The output devices1945include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as touchscreens that function as both input and output devices1940and1945. Finally, as shown inFIG.19, bus1905also couples computer system1900to a network1965through a network adapter (not shown). In this manner, the computer1900can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an Intranet), or a network of networks (such as the Internet). Any or all components of computer system1900may be used in conjunction with the invention. 
Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter. While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself. As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” mean displaying on an electronic device. 
As used in this specification, the terms “computer-readable medium,” “computer-readable media,” and “machine-readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral or transitory signals. While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.
11943102
DETAILED DESCRIPTION Some embodiments of the present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all embodiments of the disclosure are shown. Indeed, the disclosure may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. The terms “or” and “optionally” are used herein in both the alternative and conjunctive sense, unless otherwise indicated. The terms “illustrative” and “exemplary” are used to indicate examples, with no indication of quality level. Like numbers refer to like elements throughout. The components illustrated in the figures represent components that may or may not be present in various embodiments of the disclosure described herein such that embodiments may comprise fewer or more components than those shown in the figures while not departing from the scope of the disclosure. The terms “computing device,” “client device,” and “electronic device” are used interchangeably hereinafter to refer to any or all of programmable logic controllers (PLCs), programmable automation controllers (PACs), industrial computers, desktop computers, personal data assistants (PDAs), laptop computers, tablet computers, smartbooks, palm-top computers, personal computers, barcode readers, scanners, indicia readers, imagers, Radio-frequency identification (RFID) readers or interrogators, vehicle-mounted computers, wearable barcode scanners, wearable indicia readers, a point of sale (POS) terminal, headset devices, and similar electronic devices equipped with at least a processor configured to perform the various operations described herein. In some example embodiments, the computing device can refer to an electronic device with more processing and data storage capabilities, as compared to the electronic device. 
In this regard, in some example embodiments, the electronic device can correspond to an electronic device that can operate on low power and with lesser computational and data storage resources. The various embodiments are described herein using the terms “computing platform” and “master device” interchangeably for the purpose of brevity. The term “computing platform” can be used herein to refer to any computing device or a distributed network of computing devices capable of functioning as a server, such as a master exchange server, web server, mail server, document server, or any other type of server. A computing platform may be a dedicated computing device or a computing device including a server module (e.g., running an application which may cause the computing device to operate as a server). A server module (e.g., server application) may be a full function server module, or a light or secondary server module (e.g., light or secondary server application) that is configured to provide synchronization services among the dynamic databases on computing devices. A light server or secondary server may be a slimmed-down version of server type functionality that can be implemented on a computing device, such as a smartphone, thereby enabling it to function as an Internet server (e.g., an enterprise e-mail server) only to the extent necessary to provide the functionality described herein. In some example embodiments, the computing platform may correspond to any of, an industrial computer, a cloud computing-based platform, an external computer, a standalone computing device, and/or the like. In some example embodiments, the master device or the computing platform, can also refer to any of the electronic devices, as described herein. 
The term “access point” refers to a gateway device in a network of electronic devices that can be capable of communicating directly with one or more electronic devices and can also be capable of communicating (either directly or alternatively indirectly via a communication network such as the Internet) with a network establishment service (e.g. Internet service provider). The network establishment service can refer to a server system that can manage the deployment of one or more electronic devices throughout a physical environment. Network establishment service may be distributed systems where multiple operations are performed by utilizing multiple computing resources deployed over a network and/or a cloud-based platform or cloud-based services, such as any of a software-based service (SaaS), infrastructure-based service (IaaS) or platform-based service (PaaS) and/or like. According to some example embodiments described herein, any electronic device can operate as an access point having capabilities described herein. Typically for configuring electronic devices, configuration data can be provided to an electronic device by an administrator (e.g. a server or a remote device). For example, in some cases, the electronic device can be configured by bringing the electronic device to a service center or a support center and performing configuration manually. To this end, manually configuring the electronic device can include physically connecting a master device (e.g. a laptop computer or PDA) to the electronic device and uploading configuration data from the master device to the electronic device. However, this can be challenging and undesirable, as it requires a readily accessible connection interface (such as a serial port, USB port, or a parallel port). Further, it also requires a latest or desired version of configuration data to be available at the master device at a given point of time, and typically a reasonable degree of technical expertise to implement. 
Alternatively, in some instances, configuration data can be provided to the electronic device by the master device (e.g., a remote device or a server) over a communication network. In such cases, the electronic device can be self-configured, as the electronic device receives configuration data from the remote device. However, providing configuration data over the communication network for configuring electronic devices presents practical difficulties, for instance, failure or fault at a remote device, unavailability of the remote device, or issues with network bandwidth, and/or the like. Various example embodiments described herein relate to a method for configuring a plurality of electronic devices, using a computing device that is from amongst the plurality of electronic devices. Said differently, according to example embodiments described herein, a first electronic device in a batch of multiple electronic devices can be configured first and can cause configuring of remaining of the electronic devices in a networked environment. In this regard, a first computing device (e.g., but not limited to, a first industrial device) of the plurality of electronic devices can be configured to initiate a communication network (e.g., but not limited to, a wireless access point or a wireless hotspot). In this regard, using the communication network remaining of the plurality of electronic devices (e.g. other industrial devices) may communicate with the first computing device. In some examples, the first computing device can initialize the communication network based on at least a configuration parameter (e.g. a network name, a secure service set identifier (SSID), and/or a network security type (e.g. WPA2 PSK)). The configuration parameter can be encrypted and known to remaining electronic devices. To this end, in some example embodiments, the communication network can be initialized based on an initiation of a first instance of an application (e.g. 
a mobile application) at the first computing device. Also, the pre-defined configuration parameter can be associated with the first instance of the application on the first computing device. Further, in some example embodiments, the first computing device can identify an initialization of a second instance of the application at a second computing device. Said differently, the first computing device can identify that a second instance of the same application (that may be executed before on the first computing device) is initialized at the second computing device. In this regard, in response to identification of the second instance of the application at the second computing device, the first computing device may send configuration settings of the second computing device over a secured communication network to the second computing device. According to some example embodiments described herein, the secured communication network is established between the first computing device and the second computing device using a secured key exchange process, details of which would be described later in the description. By way of implementation of various example embodiments described herein, a plurality of electronic devices can be configured and commissioned in batches, at any instance of time. The electronic devices can be configured using various instances of a same application that can be executed on respective of the plurality of electronic devices102-10N. To this end, according to various example embodiments described herein, a first instance of the application can be initiated on the first computing device (for instance, in response to an initial boot or manually based on user's inputs) to configure the first computing device. 
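A minimal sketch of this announce-and-respond flow — a second instance announces itself, the first instance identifies it and pushes configuration settings — with an in-process queue standing in for the wireless network and hypothetical message names:

```python
import queue

# Hypothetical model of the flow described above: the configured first
# instance (master) identifies a newly started second instance and sends it
# the configuration settings. The queue-based transport and message names
# are illustrative stand-ins for the wireless network in the description.

class ApplicationInstance:
    def __init__(self, device_id, settings=None):
        self.device_id = device_id
        self.settings = settings          # None until configured
        self.inbox = queue.Queue()

    def announce(self, master):
        # A second or subsequent instance announces itself on initialization.
        master.inbox.put(("INSTANCE_STARTED", self.device_id, self))

    def serve_once(self):
        # The master identifies the new instance and pushes its settings.
        msg, device_id, peer = self.inbox.get_nowait()
        if msg == "INSTANCE_STARTED":
            peer.inbox.put(("CONFIG", dict(self.settings)))

    def apply_config(self):
        msg, settings = self.inbox.get_nowait()
        if msg == "CONFIG":
            self.settings = settings

master = ApplicationInstance("device-1", settings={"ssid": "prov-net", "vlan": 12})
peer = ApplicationInstance("device-2")
peer.announce(master)
master.serve_once()
peer.apply_config()
print(peer.settings == master.settings)
```

In the embodiments described here, the actual transfer would additionally run over the secured communication network established by the key exchange process discussed later.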
As the first computing device is configured, remaining electronic devices can be automatically or self-configured based on (a) initialization of various instances of the same application starting on the remaining devices and (b) the first instance of the application running on the first computing device finding information of the initialization of second and subsequent instances of the same application at the other devices. In this regard, in some examples, various instances of the same application can be self-initialized, in response to an initial boot of the remaining of the plurality of electronic devices102-10N respectively. FIG.1illustrates an exemplary system100comprising a plurality of electronic devices (102,104. . .10N) that may be in a networked environment, in accordance with some example embodiments described herein. The plurality of electronic devices102-10N can comprise a computing device102that is to be configured, e.g., for a first use or based on a periodic change in configuration settings, in accordance with some example embodiments described herein. In accordance with various example embodiments, upon configuration, the computing device102can operate as a master device to share configuration settings to configure remaining of the plurality of electronic devices102-10N. According to some example embodiments, the networked environment referred herein, may correspond to a peer-to-peer (P2P) network of electronic devices102-10N that may be communicatively coupled over a communication network103. To this end, the plurality of electronic devices102-10N in the networked environment may correspond to such devices that may be commissioned and/or configured to perform a particular functionality in a working environment, e.g. industrial environment. For instance, in some examples, the plurality of electronic devices102-10N may correspond to industrial devices e.g. 
handheld devices, indicia scanners, RFID readers, PDTs and/or the like, that may be used by workers working in the industrial environment, e.g. but not limited to, a warehouse, a manufacturing plant, or a distribution center. As illustrated, one or more of the plurality of electronic devices102-10N can be communicatively coupled to remaining of the plurality of electronic devices102-10N over a communication network103. The communication network103, in some example embodiments can correspond to a medium through which content and messages can flow between various electronic devices in the system100or the networked environment (e.g., the computing device102and/or the plurality of electronic devices102-10N). In some example embodiments, the communication network103may include, but is not limited to, a Wireless Fidelity (Wi-Fi) network, a Piconet, a Personal Area Network (PAN), Zigbee, and a Scatternet. In some examples, the communication network103may correspond to a short range wireless network through which the plurality of electronic devices102-10N may communicate with each other using one or more communication protocols such as, but not limited to, Wi-Fi, Bluetooth, Bluetooth low energy (BLE), Zigbee, and Z-Wave. In some examples, the communication network103can correspond to a network in which the plurality of electronic devices102-10N may communicate with each other using other various wired and wireless communication protocols such as Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), and 2G, 3G, or 4G communication protocols. In some examples, the communication network103can correspond to any communication network such as, but not limited to, LORA, cellular (NB IoT, LTE-M, Leaky Feeder Coax, etc.). 
According to various example embodiments described herein, any electronic device (e.g., but not limited to, the computing device102) of the plurality of electronic devices102-10N can initialize the communication network103through which one or more of the plurality of electronic devices102-10N may communicate with the computing device102. In some example embodiments, the communication network103can be initialized by the first computing device102-1. For instance, in some examples, upon booting the first computing device or after a system reset or after a periodic system update, an instance of a mobile application or a system process or a service, on the first computing device may facilitate initialization or setting up the communication network103. Alternatively, and/or additionally, in some example embodiments, the computing device102may initialize the communication network103based on triggering of an event, such as, but not limited to, a user input, or scanning of some configuration indicia, etc. In this regard, in some examples, the user can provide inputs indicative of the computing device102to operate as a master device, thereby initializing the communication network103. To this end, according to some examples, an electronic device of the plurality of electronic devices102-10N that initializes the communication network103can be configured as an access point (e.g. a Wi-fi access point or a Wi-fi hotspot) to which remaining of the plurality of electronic devices may establish connection. In accordance with some example embodiments, any of one or more of the plurality of electronic devices102-10N may cause configuration of remaining of the plurality of electronic devices102-10N that are communicatively coupled with each other via the communication network103. In this regard, in some example embodiments, the computing device102may operate as a master device and can share configuration settings to configure remaining of the plurality of electronic devices102-10N. 
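One way to picture the master-election and network-initialization step is a small function that reacts to the triggering events named above (initial boot, system reset, periodic update, user input, or scanning of configuration indicia) and builds the access point from the pre-defined configuration parameter. The trigger names and descriptor fields are assumptions for illustration:

```python
# Illustrative sketch of a device deciding to act as the access point and
# initializing the network from a pre-defined configuration parameter, as in
# the flow above. Trigger names and fields are hypothetical.

PREDEFINED_PARAMETER = {
    "network_name": "prov-net",
    "ssid": "PROV-NET-01",
    "security_type": "WPA2-PSK",
}

TRIGGERS = {"initial_boot", "system_reset", "periodic_update",
            "user_input", "configuration_indicia_scan"}

def initialize_network(device_id, trigger, parameter=PREDEFINED_PARAMETER):
    """Return an access-point descriptor if the trigger warrants initialization."""
    if trigger not in TRIGGERS:
        return None
    return {
        "role": "access_point",
        "master": device_id,
        "ssid": parameter["ssid"],
        "security_type": parameter["security_type"],
    }

ap = initialize_network("device-102", "initial_boot")
print(ap["role"], ap["ssid"])
```

Because the remaining devices already know the (encrypted) configuration parameter, they can locate and join this access point without manual provisioning.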
To this end, according to some example embodiments described herein, the configuration settings may be shared using a secured communication network105(other than the communication network103) that can be configured to ensure confidentiality and maintain the integrity of the configuration settings shared over the secured communication network105. In some example embodiments, the secured communication network105can correspond to the communication network103itself, however, in such a case data such as, configuration settings can be communicated using a secured communication protocol (i.e. based on a secret key exchange process) between a sender (e.g. the first computing device102) and receiver (e.g. remaining of the plurality of electronic devices102-10N), details of which are further described. Further details of initialization of the communication network103and an establishment of the secured communication network105are described in reference toFIGS.2-10. Illustratively, the system100can also comprise a computational platform106. In some example embodiments, one or more of the plurality of electronic devices102-10N may be communicatively coupled to the computing platform106, via the communication network103. In some examples, the computational platform106can correspond to a remote server or an electronic device used by an administrator in an industrial environment. In some example embodiments, the computational platform106can correspond to a data analytics platform that can be configured to receive data from one or more of the plurality of electronic devices102-10N, perform analysis on the data, and provide actionable insights to the plurality of electronic devices102-10N. In some example embodiments, the computational platform106may generate configuration settings or an update to the configuration settings for the plurality of electronic devices102-10N. 
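The secret key exchange process is not pinned to a specific algorithm in the description above; a classic choice would be Diffie-Hellman, sketched here with a deliberately small demonstration prime (far too small for real security) so both devices derive the same symmetric key before configuration settings are transferred:

```python
import hashlib
import secrets

# Demonstration-only finite-field Diffie-Hellman: P is the Mersenne prime
# 2**127 - 1, chosen for brevity, NOT for security. A real deployment would
# use a standardized group (e.g. an RFC 3526 MODP group) or elliptic curves.
P = 2**127 - 1
G = 3

def keypair():
    """Generate a private exponent and the corresponding public value."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(priv, peer_pub):
    """Derive a 32-byte symmetric key from the shared DH secret."""
    secret = pow(peer_pub, priv, P)
    return hashlib.sha256(secret.to_bytes(16, "big")).digest()

# The master (sender) and a new device (receiver) exchange public values...
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()
# ...and each side derives the same key to protect the configuration transfer.
print(shared_key(a_priv, b_pub) == shared_key(b_priv, a_pub))
```

The derived key could then feed an authenticated symmetric cipher for sending the configuration settings, giving the confidentiality and integrity the secured communication network105is meant to provide.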
According to some example embodiments, the computing device102can comprise at least one of a processing unit110, a sensor unit108, a memory112, an input/output circuit114, and a communication circuit116. In some examples, one or more of the sensor unit108, the memory112, the input/output circuit114, the communication circuit116may be communicatively coupled to the processing unit110. Further, the computational platform106may also comprise a processing unit118and/or one or more of similar components like the memory112, the sensor unit108, the communication circuit116, the input/output circuit114, and/or the like as described in reference to the computing device102. According to some example embodiments, the processing unit110referred herein, can correspond to any computing processing unit or device comprising, but not limited to comprising, single-core processors; single-processors with software multithread execution capability; multi-core processors; multi-core processors with software multithread execution capability; multi-core processors with hardware multithread technology; parallel platforms; and parallel platforms with distributed shared memory. Additionally, the processing unit110can refer to an integrated circuit, an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic controller (PLC), a complex programmable logic device (CPLD), a discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. In some examples, the processing unit110can also exploit Nano-scale architectures such as, but not limited to, molecular and quantum-dot based transistors, switches and gates, in order to optimize space usage or enhance performance of user equipment. 
In accordance with some example embodiments, the sensor unit108may comprise a plurality of sensors, for example, imaging devices, like, a color camera and/or a depth camera, photo eyes, fullness sensors, volume sensors, speed sensors, RFID interrogator, scan engine, barcode scanner, indicia reader, and/or the like. Further, the communication circuit116referred herein, may be configured to receive and/or transmit data from/to a network and/or any other device, circuitry, or module in communication over the communication network103. To this end, in some example embodiments, the communications circuit116referred herein, may be any means such as a device or circuitry embodied in either hardware or a combination of hardware and software associated with the respective component of the system100. In some examples, the communications circuit116may comprise a network interface for enabling communications with a wired or wireless communication network. For example, the communications circuitry may comprise one or more network interface cards, antennae, buses, switches, routers, modems, and supporting hardware and/or software, or any other device suitable for enabling communications via the communication network103. Additionally, or alternatively, the communications circuit116may comprise the circuitry for interacting with the antenna(s) to cause transmission of signals via the antenna(s) or to handle receipt of signals received via the antenna(s). These signals may be transmitted by any of the components of the system100and/or the processing unit110over the communication network103, using a number of wireless personal area network (PAN) technologies, such as, but not limited to, Bluetooth® v1.0 through v3.0, Bluetooth Low Energy (BLE), infrared wireless (e.g., IrDA), ultra-wideband (UWB), induction wireless transmission, and/or the like or via a wired communication technology, such as a data field bus, cables etc. 
In some example embodiments, the I/O circuit114may, in turn, be in communication with the processing unit110to provide output to a user and, in some embodiments, to receive an indication of user input. The I/O circuit114may comprise a user interface and may comprise a display that may comprise a web user interface, a mobile application, a client device, and/or the like. In some embodiments, the I/O circuit114may also comprise a keypad, a joystick, a touch screen, touch areas, soft keys, a microphone, a speaker, or other input/output mechanisms. In some examples, the processing unit110and/or a user interface circuitry comprising a processor associated with the I/O circuit114may be configured to control one or more functions of one or more user interface elements associated with the I/O circuit114through computer program instructions (e.g., software and/or firmware) stored on the memory112accessible to the processing unit110. Further details of the components of the computing device102and/or the plurality of electronic devices (102-10N) are also described in reference toFIGS.9-10. FIGS.2through5illustrate example flowcharts of the operations performed by an apparatus, such as the plurality of electronic devices (102-10N) ofFIG.1, in accordance with example embodiments of the present invention. It will be understood that each block of the flowcharts, and combinations of blocks in the flowcharts, may be implemented by various means, such as hardware, firmware, one or more processors, circuitry and/or other devices associated with execution of software including one or more computer program instructions. For example, one or more of the procedures described above may be embodied by computer program instructions. In this regard, the computer program instructions which embody the procedures described above may be stored by a memory of an apparatus employing an embodiment of the present invention and executed by a processor in the apparatus.
As will be appreciated, any such computer program instructions may be loaded onto a computer or other programmable apparatus (e.g., hardware) to produce a machine, such that the resulting computer or other programmable apparatus provides for the implementation of the functions specified in the flowcharts' block(s). These computer program instructions may also be stored in a non-transitory computer-readable storage memory that may direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable storage memory produce an article of manufacture, the execution of which implements the function specified in the flowcharts' block(s). The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operations to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide operations for implementing the functions specified in the flowcharts' block(s). As such, the operations ofFIGS.2through5, when executed, convert the computer or processing circuitry into a particular machine configured to perform an example embodiment of the present invention. Accordingly, the operations ofFIGS.2through5can define an algorithm for configuring a computer or processor, to perform an example embodiment. In some cases, a general-purpose computer may be provided with an instance of the processor which performs the algorithm ofFIGS.2through5to transform the general-purpose computer into a particular machine configured to perform an example embodiment. Accordingly, blocks of the flowchart support combinations of means for performing the specified functions and combinations of operations for performing the specified functions. 
It will also be understood that one or more blocks of the flowcharts, and combinations of blocks in the flowchart, can be implemented by special purpose hardware-based computer systems which perform the specified functions, or combinations of special purpose hardware and computer instructions. FIG.2illustrates an example flowchart representing a method200of configuring a computing device104, in accordance with some example embodiments described herein. Illustratively, the method200starts at step202. At step204, the computing device102(referred hereinafter, as the first computing device102) of the plurality of electronic devices102-10N may comprise means such as, the processing unit110to cause initialization of the communication network103based on a pre-defined configuration parameter. In some examples, the pre-defined configuration parameter herein corresponds to such parameters (like, but not limited to, SSID, network name, network security type, etc.) based on which an electronic device of the plurality of electronic devices (102-10N) can be configured for using the communication network103. In some examples, the pre-defined configuration parameter can correspond to configuration parameters provided or selected by a user of the master device, i.e., the first computing device102. According to some example embodiments, the pre-defined configuration parameter can be associated with a first instance of an application on the first computing device102. In this regard, in some examples, the application may correspond to a mobile application or a system process that can be programmed for configuring network settings of the first computing device102. In some examples, the application may be automatically executed, in response to an initial boot or a firmware update of the first computing device102and/or the remaining of the plurality of electronic devices102-10N.
Further, in some examples, the application can provide a user interface for setting up the communication network103for the first computing device102. Further details of the user interface of the application are described in reference toFIG.11. In some example embodiments, the pre-defined configuration parameter may correspond to a network configuration parameter used for configuring network settings of a communication network through which the electronic device can communicate. For instance, in some examples, the pre-defined configuration parameter can comprise at least one of a network name, a service set identifier (SSID), and a network security type. In this regard, the communication network103can be set up based on the pre-defined configuration parameters. In some examples, the communication network103can be initialized based on an occurrence of an event. For instance, in some examples, the processing unit110may cause to perform initialization of the communication network103based on receiving the pre-defined configuration parameters, as an input from a user. In this regard, the processing unit110may receive, via the input/output circuit114, the inputs corresponding to the pre-defined configuration parameters from the user. In some example scenarios, the communication network103can be initialized in a networked environment, where multiple electronic devices can be commissioned by an administrator such that the electronic devices can be used to perform a series of defined steps of an industrial operation. In this regard, the pre-defined parameters for setting up the communication network103may be pre-shared or known amongst trusted electronic devices within the networked environment, so that only trusted devices can utilize the communication network103for data communication.
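As an illustration of the pre-defined configuration parameters discussed above (network name, SSID, security type, passcode), the sketch below models them as a small structure together with an initialization routine analogous to step204. The names `NetworkConfig` and `initialize_network` and all field values are hypothetical; the disclosure does not prescribe a concrete data model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NetworkConfig:
    network_name: str   # human-readable name of the network
    ssid: str           # service set identifier broadcast by the access point
    security_type: str  # e.g. "WPA2-PSK"
    passcode: str       # pre-shared secret for joining the network

def initialize_network(config: NetworkConfig) -> dict:
    """Return a minimal access-point state derived from the pre-defined
    configuration parameters (a sketch of step204 of method200)."""
    if not config.ssid or not config.passcode:
        raise ValueError("SSID and passcode are required to start the access point")
    return {"ssid": config.ssid, "security": config.security_type, "active": True}

cfg = NetworkConfig("warehouse-net", "WH-AP-01", "WPA2-PSK", "s3cret")
state = initialize_network(cfg)
```

In a commissioned environment, the same `NetworkConfig` values would be pre-shared among trusted devices so that each can join the resulting network.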
Moving to step206, the first computing device102may comprise means such as, the processing unit110to identify an initialization of a second instance of the application at a second computing device104of the plurality of electronic devices (102-10N). In this regard, in some examples, the first computing device102may identify the initialization of the second instance of the application at the second computing device104based on receiving a connection request from the second computing device104. To this end, according to some examples, the initialization of the communication network103at step204, may cause the first computing device102to operate as a wireless access point. In this regard, remaining electronic devices of the plurality of electronic devices102-10N can attempt to establish a connection with the wireless access point. Accordingly, at step206, the first computing device102may receive such connection requests for establishing a connection with the first computing device102and to communicate via the communication network103, from one or more of the plurality of electronic devices102-10N. Moving to step208, the first computing device102may comprise means such as, the processing unit110to cause sending of configuration settings for the second computing device104, via the secured communication network105. In this regard, in some example embodiments, the configuration settings may comprise at least the pre-defined configuration parameter based on which the first computing device102initializes the communication network103. Said differently, the first computing device102can share, with the second computing device104, the same configuration settings based on which it initialized the communication network103, so that the second computing device104can also use the same communication network103for data communication.
Said differently, in accordance with some example embodiments, the second computing device104may receive the configuration settings to configure its communication circuit (e.g. similar to the communication circuit116) so that the second computing device104can connect over the communication network103and utilize it for data communication. In some examples, the communication network103can correspond to a wireless network. To this end, initialization of the wireless network may involve configuring a Wi-Fi hotspot at the first computing device102. In this aspect, the second computing device104may send a connection request to the first computing device102to connect with the Wi-Fi hotspot. Further, in response to connecting with the Wi-Fi hotspot, the first computing device102can share configuration settings (e.g. network settings that can include SSID, network name, network security type, a passcode etc.) with the second computing device104via the secured communication network105. The configuration settings can be used at the second computing device104for configuring the second computing device104. The method200stops at step210. FIG.3illustrates an example flowchart representing a method300of configuring the first computing device102to initiate the communication network103used for communication with the plurality of electronic devices (102-10N) that can be trusted by the first computing device102, in accordance with some example embodiments described herein. The method starts at step302. At step304, the first computing device102can comprise means such as, a user interface of the input/output circuit114to provide a pre-defined configuration parameter to the processing unit110. The pre-defined configuration parameter may comprise parameters for associating with an access point (e.g. a wireless access point) provided by the first computing device102.
For example, the pre-defined parameter can be network configuration settings comprising at least one of a network name, a user name, a password, a network security type, an SSID, and/or the like, commonly used for configuring the communication network103. In some examples, the pre-defined parameter may be encrypted, pre-defined, and known to one or more trusted devices in a networked environment so that trusted devices can utilize it for network configuration and non-trusted devices cannot use it. According to some example embodiments described herein, a scope of the present disclosure may not be limited to the pre-defined parameter to be network parameters, rather, the pre-defined parameter may correspond to any configuration parameter based on which the first computing device102can be configured to operate a desired functionality. For instance, in some examples, the pre-defined parameter provided by the user interface may correspond to a system parameter associated with operations of the first computing device102such as, a selection of a system language, a network connection preference to be used as a default setting, sound preferences for the first computing device102, an indicia scanning setting, a barcode configuration setting, and/or the like. Further details of providing the pre-defined configuration parameter with the user interface of the first computing device102are described in reference toFIGS.11-13. At step306, the first computing device102may comprise means such as, the processing unit110to configure the first computing device102based on the pre-defined configuration parameter provided by the user interface at step304. At step308, the first computing device102may comprise means such as, the processing unit110to initialize the communication network103. Accordingly, in response to initialization of the communication network103, the first computing device102may communicate with one or more of the plurality of electronic devices102-10N.
Initialization of the communication network103, according to some example embodiments, can correspond to activation of a network access point (e.g. a Wi-Fi hotspot) by the first computing device102. In this regard, in some examples, the access point may be configured to provide a network access (e.g. internet access) provisioned by an Internet service provider (ISP) or a wireless local area network to one or more of the plurality of electronic devices102-10N in a networked environment. Moving to step310, the processing unit110of the first computing device102may cause to share, via the communications circuit116, configuration settings to the second computing device104over the secured communication network105. In this regard, for sharing the configuration settings, according to some example embodiments, the first computing device102may establish a secured connection with the second computing device104so that the configuration settings are not compromised during transit or lose confidentiality. To this end, the configuration settings may be shared over the secured communication network105established between the first computing device102and the second computing device104based on a secured key exchange process, details of which are described in reference toFIG.4. The method stops at step312. In accordance with some example embodiments, it may be desired that the communication network103initialized by the first computing device102(e.g., as described at step308ofFIG.3or step204ofFIG.2) may be used by a trusted or commissioned device of the plurality of electronic devices102-10N, and not by a malicious or non-recognized electronic device. Further, it may also be desired to maintain confidentiality and integrity of the configuration settings while the settings are shared at step310between two electronic devices (e.g. the first computing device102and the second computing device104).
Said differently, the configuration settings may be shared by the first computing device102, in response to a determination that the second computing device104is a trusted device and not a malicious node of a networked environment. In this regard, in accordance with some example embodiments, the first computing device102may authenticate the second computing device104to be a trusted device, details of which are explained in reference toFIG.4. FIG.4illustrates an example flowchart representing a method400of authenticating the second computing device104by the first computing device102to use the communication network initialized by the first computing device102, in accordance with some example embodiments described herein. The method400starts at step402. At step404, the first computing device102may comprise means such as, the processing unit110to encrypt a pre-defined configuration parameter by a first key shared amongst the plurality of electronic devices102-10N. For example, the first computing device102may encrypt one or more of the network name, the SSID, the passcode, and/or the like, using the first key. In some examples, a public key and a corresponding private key pair may be shared and known amongst trusted devices. For instance, the public key and private key information may be shared amongst trusted electronic devices at a time of manufacturing of the electronic devices, by an original equipment manufacturer (OEM) or during a firmware configuration of the electronic devices. In some examples, the first key may correspond to a private key of the first computing device102that can be used by the first computing device to encrypt the pre-defined configuration parameter. According to some example embodiments, the pre-defined configuration parameter may be encrypted by the first computing device102after configuring the initialization of the communication network103.
Said differently, the first computing device102may initiate the access point for communication and subsequently can encrypt the configuration parameters that are to be used for connecting with the access point. At step406, the processing unit110, may cause to scan via the communications circuit116, an incoming connection request to establish a connection with the first computing device102over the communication network103. In this regard, the processing unit110can wait for incoming connection requests for connecting with the access point. Moving to step408, the processing unit110may receive via the communications circuit116, the connection request from the second computing device104. In some examples, the incoming connection request may correspond to a TCP/IP handshake request by the second computing device104for establishing a connection with the first computing device102. In this regard, the incoming connection request may include fields such as, a source address, a destination address, a physical address (MAC address) of a source, payload information, and other such information. At step410, the first computing device102may comprise means such as, the processing unit110to authenticate the second computing device104based on identification of the pre-defined configuration parameter in the connection request. In this regard, the processing unit110may cause to parse the connection request received at step408, to determine at least the pre-defined configuration parameter to be present in the connection request. For instance, in some examples, the connection request may include at least one of, the SSID, the network name, the passcode, and/or the like, used by the first computing device102to initialize the communication network103.
As the pre-defined configuration parameters are encrypted by the first computing device102, identification of the pre-defined configuration parameters in the connection request, at step410, can be indicative of a successful decryption of the pre-defined configuration parameter by the second computing device104. To this end, the second computing device104may cause to decrypt the encrypted configuration parameter using a second key (e.g. a public key shared amongst trusted devices) and send the connection request including the decrypted configuration parameter to the first computing device102. Accordingly, the first computing device102may authenticate the second computing device104to be a trusted device. In some example embodiments, in response to authentication, the first computing device102may cause to share configuration settings with the second computing device104. The method stops at step412. FIG.5illustrates an example flowchart representing a method500of secured key exchange between the first computing device102and the second computing device104for sharing configuration settings to the second computing device104, in accordance with some example embodiments described herein. In accordance with some example embodiments, based on the secured key exchange, the secured communication network105may be established and utilized by the plurality of electronic devices102-10N, thereby enabling data communication and secured sharing of data amongst the plurality of electronic devices102-10N. At step502, the first computing device102may comprise means such as, the processing unit110, to receive via the communications circuit116, an encrypted temporary key from the second computing device104. In this regard, the encrypted temporary key corresponds to a temporary key generated by the second computing device104and subsequently encrypted by the second computing device104.
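The authentication flow of method400 described above (encrypt the parameter at step404, receive the request at step408, authenticate at step410) can be sketched as follows. The toy XOR-with-hash cipher merely stands in for the public/private key scheme and is not secure; the key value, the field names in the request, and the helper name are all hypothetical:

```python
import hashlib

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy reversible cipher standing in for the key-pair scheme of method400;
    # for illustration only, NOT cryptographically secure.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

SHARED_KEY = b"oem-provisioned-key"  # hypothetical key installed by the OEM
SSID = b"WH-AP-01"                   # hypothetical pre-defined parameter

# First device: encrypt the pre-defined parameter (step404).
encrypted_ssid = _keystream_xor(SHARED_KEY, SSID)

# Second device: decrypt with its shared key and embed the plaintext
# parameter in the connection request it sends (step408).
connection_request = {
    "source": "device-104",
    "ssid": _keystream_xor(SHARED_KEY, encrypted_ssid),
}

# First device: authenticate by identifying the parameter in the request (step410).
is_trusted = connection_request["ssid"] == SSID
```

A device lacking the shared key would place garbage in the `ssid` field and fail the comparison, which models rejection of a non-trusted node.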
In some examples, the encrypted temporary key may comprise any of numeric, alphabetic, alphanumeric, or special characters, and/or a combination thereof. At step504, the processing unit110, may cause to decrypt the encrypted temporary key received from the second computing device104. In this regard, in some examples, the encrypted temporary key may correspond to a temporary key encrypted by the second computing device104using a public key shared amongst the plurality of electronic devices. To this end, in some example embodiments, at step504, the processing unit110may decrypt the encrypted temporary key using a private key of the first computing device102. The method at step508may comprise using, by the processing unit110, the temporary key decrypted at step504to encrypt a session key. The processing unit110can encrypt the session key to generate an encrypted session key. In this regard, the session key may be generated by the processing unit110and may be indicative of a session of data communication to be initiated at the first computing device102. At step510, the first computing device102may comprise means such as, the processing unit110to cause sending, via the communications circuit116, the encrypted session key to the second computing device104. In this regard, in some example embodiments, the encrypted session key may be received by the second computing device104and decrypted using the temporary key known to the second computing device104. By decrypting the encrypted session key, the second computing device104may know of the session key and use it for data communication with the first computing device102. For instance, in some example embodiments, the second computing device104may send a configuration request to the first computing device102. In some examples, the configuration request may be indicative of a request for sharing configuration settings of the second computing device104.
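Steps 502 through 510 of method500 can be sketched end to end as below. The XOR-with-hash helper again stands in for both the asymmetric and symmetric encryption steps; a real implementation would use an established cryptographic library, and the key names here are hypothetical:

```python
import hashlib
import secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Illustrative reversible stand-in for the encryption steps of method500;
    # NOT secure, for sketching the message flow only.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

MASTER_KEY = b"master-private-key"  # hypothetical; models the first device's key pair

# Second device (client): generate a temporary key and encrypt it; the
# encrypted temporary key is what the first device receives at step502.
temp_key = secrets.token_bytes(16)
encrypted_temp_key = xor_cipher(MASTER_KEY, temp_key)

# First device (master): decrypt the temporary key (step504) ...
recovered_temp_key = xor_cipher(MASTER_KEY, encrypted_temp_key)
# ... then generate a session key and encrypt it with the temporary key (step508).
session_key = secrets.token_bytes(16)
encrypted_session_key = xor_cipher(recovered_temp_key, session_key)

# Second device: decrypt the session key sent at step510 and use it for
# subsequent data communication with the first device.
client_session_key = xor_cipher(temp_key, encrypted_session_key)
```

After this exchange, both devices hold the same `session_key` without it ever crossing the network in the clear, which is the property the secured communication network105 relies on.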
Further, in some examples, at step512, the processing unit110may validate a configuration request received from the second computing device104. In some examples, the configuration request may comprise, a header, a destination address (e.g. IP address or physical address of the first computing device102), a source address (physical address or IP address of the second computing device104), a payload that can comprise a set of configuration parameters associated to requested configuration settings, a flag value identifier that identifies the request to be a request for sharing configuration settings, the session key, and/or the like. In this regard, the processing unit110of the first computing device102may parse the configuration request received from the second computing device104and identify the session key in the configuration request. Accordingly, the processing unit110may validate the second computing device104to be a trusted electronic device, in an instance when the session key is present in the configuration request received from the second computing device104. In some examples, the processing unit110may match the session key identified from the configuration request with a local version of the session key generated at step508by the first computing device102. Alternatively, the processing unit110may flag the second computing device104to be a non-trusted electronic device in an instance when the session key is not present, or an invalid session key is present in the configuration request received at step512. Moving to step514, in response to a validation of the second computing device104as a trusted electronic device, the processing unit110may share the configuration settings with the second computing device104. Further, in some examples, the configuration settings received from the first computing device102may be used for configuring the second computing device104for a desired use. The method stops at step516.
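The validation at steps 512 and 514 can be sketched as a single check on the incoming request: return the configuration settings when the carried session key matches the local copy, and flag the sender otherwise. The field names and returned settings are hypothetical, as the disclosure lists the request fields only informally:

```python
def validate_configuration_request(request: dict, local_session_key: bytes):
    """Sketch of steps 512-514: share configuration settings only when the
    request carries a session key matching the locally generated one."""
    if request.get("session_key") != local_session_key:
        return None  # non-trusted: session key missing or invalid
    # Hypothetical configuration settings shared with a trusted device.
    return {"ssid": "WH-AP-01", "security_type": "WPA2-PSK", "language": "en"}

local_key = b"\x01" * 16  # stands for the session key generated at step508

trusted = validate_configuration_request(
    {"flag": "CONFIG_REQUEST", "source": "device-104", "session_key": local_key},
    local_key,
)
rejected = validate_configuration_request({"source": "device-10N"}, local_key)
```

In this sketch `trusted` receives the settings while `rejected` gets nothing, mirroring the trusted/non-trusted branch of the flowchart.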
FIG.6illustrates an example message flow diagram600representing a communication between a master device602and a client device604, in accordance with some example embodiments described herein. Said differently, the message flow diagram600illustrates a sequence of messages and data communication between the master device602and the client device604. According to some examples, a finite set of process threads can be executed at each of the master device602and the client device604to perform some operation (for example, but not limited to one or more steps of methods200,300,400, and500as described inFIGS.2-5). In this regard, the message flow diagram600also represents a finite sequence of events for each process thread that can be executed by a processing unit of the respective device. Illustratively, a process thread that can be executed at the master device602is represented by vertical line606and a process thread that can be executed at the client device604is represented by the vertical line608. Further, horizontal lines in the message flow diagram600represent a message communicated between the master device602and the client device604or data handled at the respective device. According to some example embodiments, the master device602and the client device604can be from amongst the plurality of electronic devices102-10N, as described inFIG.1. For instance, in an example embodiment, the master device602can correspond to the first computing device102and the client device604can correspond to the second computing device104. Accordingly, the master device602can be configured to perform operations as described in reference to the first computing device102inFIGS.1-5and the client device604can be configured to perform operations as described in reference to the second computing device104inFIGS.1-5. In one example scenario, the plurality of electronic devices102-10N can be operated in an industrial environment like a warehouse or inventory.
In this regard, the plurality of electronic devices102-10N may be used by workers to increase workers' productivity in performing various operations like, but not limited to, package positioning, refilling shelves, stocking inventory, shipment processing, etc. In such cases, workflows comprising steps of a task can be executed in the form of visual instructions and/or can be provided to the workers in the form of voice-based instructions on the plurality of electronic devices102-10N. In such an example scenario, the master device602can be an electronic device of the plurality of electronic devices102-10N that can operate as a central server administered by a store manager and the client device604can be any electronic device from the plurality of electronic devices102-10N that can be used by the worker. In this regard, the worker may receive instructions for performing various operations within the inventory on the client device604in the form of audio and/or visual instructions, to which the worker can respond using the input/output circuit of the client device604. In some example embodiments, the master device602can correspond to an electronic device that may comprise substantially more resources (e.g. memory, battery life, network connectivity, etc.) as compared to resources of the client device604. Further, according to some example embodiments, the master device602may correspond to an electronic device from amongst the plurality of electronic devices102-10N, that can be first configured based on pre-defined configuration settings and upon configuration, can further share configuration settings to remaining of the plurality of electronic devices102-10N. Illustratively, at step610, the master device602can generate configuration settings. The configuration settings, in accordance with some example embodiments, can correspond to settings for: a network set up configuration (e.g.
Wi-Fi configuration, wired network configuration), network preference for data transmission settings (e.g. Bluetooth, Wi-Fi, NFC, etc.), system sound (e.g., ringtone volume, device volume, call volume, alarm volume, etc.), a system language (e.g., English, Chinese, Hindi, etc.), and other similar device settings. According to some example embodiments, the configuration settings may be generated at the master device602, in response to receiving inputs, via the input/output circuit114, defining configuration parameters. Alternatively, in some example embodiments, the configuration settings may be auto-generated at a time of initial set up or boot of the master device602. In some example embodiments, the configuration settings may be generated in response to scanning and decoding of configuration indicia, e.g. a QR code or a barcode provided by an OEM. Upon generating the configuration settings, the master device602can be configured to operate in accordance with configuration parameters defined in the configuration settings. For example, in an instance when the configuration settings correspond to network set up settings, upon configuration, the master device602can initialize a communication network or an access point defined in the configuration settings and can use the communication network for communicating data. According to some examples, the configuration settings can correspond to a wireless network configuration setting based on which the master device602can initialize a wireless communication network (e.g. a Wi-Fi hotspot or a near field access point). Accordingly, at step612, the master device602can initiate the Wi-Fi hotspot to which one or more of the plurality of electronic devices102-10N can connect for data communication with the master device602. At step614, in some example embodiments, the client device604can generate a connection request to connect with the Wi-Fi hotspot initialized at the master device602.
In this regard, the connection request can correspond to a message sent from the client device604based on a TCP/IP handshaking protocol. In some examples, the connection request can correspond to a data packet comprising, a header, a source address, a destination address, a request identifier, and/or the like. In some example embodiments, the master device602can receive the connection request from the client device604in a similar fashion as described in reference toFIGS.2-5. In some example embodiments, the connection request may also comprise an encrypted session key as described in reference toFIG.5. In response to receiving the connection request from the client device604, at step616, the master device602can send configuration settings to the client device604. The configuration settings may comprise one or more configuration parameters such as, but not limited to, a network name, SSID, a passcode, a network security type, based on which the client device604can connect with the Wi-Fi hotspot initiated by the master device602. In some examples, the configuration settings sent from the master device602to the client device604can also comprise, for example, system sound settings, language setting, power management settings, and/or the like associated with the client device604. Further, at step618, the client device604can apply the configuration settings and can be configured based on the configuration settings. Further, the client device604can then disconnect from the Wi-Fi hotspot. FIG.7illustrates an example scenario700representing a communication between the first computing device102and the remaining of the plurality of electronic devices102-10N, in accordance with some example embodiments described herein. Illustratively, a first view700depicts, in an instance702, a wireless access point initialized by the first computing device102. In some examples, the wireless access point initiated at the first computing device102can correspond to a Wi-Fi hotspot.
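The connection-request data packet described in the message flow above (a header, source address, destination address, and request identifier) can be sketched as a simple constructor; the field names, header value, and address formats are hypothetical, since the disclosure lists the fields only informally:

```python
def make_connection_request(source: str, destination: str, request_id: int) -> dict:
    """Sketch of the step614 connection request from the client device to the
    master device's Wi-Fi hotspot."""
    return {
        "header": "CONN_REQ",        # hypothetical message-type header
        "source": source,            # physical or IP address of the client device
        "destination": destination,  # address of the master device's hotspot
        "request_id": request_id,    # identifies this handshake attempt
    }

req = make_connection_request("aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:00", 1)
```

Per FIG.5, a real request could additionally carry an encrypted session key field alongside these basics.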
According to some example embodiments, in response to initializing the wireless access point, the communications circuit116of the first computing device102can utilize the communication network103for data communications. To this end, the first computing device102can initialize the wireless access point, by performing steps as described in reference toFIGS.2-6. For example, in the instance702, configuration parameters such as, but not limited to, SSID, network security type, a network name, and a password can be defined using the input/output circuit114to initialize the wireless access point. In some examples, the wireless access point can be initialized based on encrypted configuration parameters. For instance, in some examples, one or more configuration parameters (like, the network name, the SSID, the password, etc.) can be encrypted for initialization, so that only trusted devices can use the Wi-Fi hotspot for data communication. To this end, keys used for encrypting and decrypting the configuration parameters can be known to trusted devices. In some examples, encrypted configuration parameters can be installed or built into all trusted electronic devices by the OEM. In some example embodiments, the connection request from remaining of the plurality of electronic devices102-10N, or from a new electronic device provided by the OEM, can automatically be sent in response to an initial boot of the respective device. To this end, an application that causes generation of the connection request, can automatically be executed (e.g. as a back-end service) upon the initial boot of the remaining of the plurality of electronic devices102-10N that may be operating as a client device. The second view720illustrates the first computing device102waiting for the remaining of the plurality of electronic devices102-10N to connect with the wireless access point (e.g. the Wi-Fi hotspot).
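The waiting-and-accepting behavior depicted in the second view720can be sketched with ordinary TCP sockets. The loopback addresses, the thread-based harness, and the request payload below are illustrative assumptions, not the described implementation.

```python
import socket
import threading

# Sketch: the master device listens and accepts one incoming
# connection request from a client device.
def serve_one_request(server: socket.socket) -> bytes:
    conn, _addr = server.accept()
    with conn:
        return conn.recv(1024)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # ephemeral port, for the example only
server.listen()
host, port = server.getsockname()

result = {}
t = threading.Thread(
    target=lambda: result.setdefault("req", serve_one_request(server)))
t.start()

# A client device (e.g. on initial boot) sends its connection request.
with socket.create_connection((host, port)) as client:
    client.sendall(b"CONNECTION_REQUEST from client device")
t.join()
server.close()
print(result["req"])
```

In the described embodiments this request would arrive over the Wi-Fi hotspot rather than the loopback interface used here for a self-contained sketch.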
In this regard, in some examples, the first computing device102can scan for incoming connection requests from one or more of the plurality of electronic devices102-10N to connect to the Wi-Fi hotspot, in a similar fashion as described at step406ofFIG.4. To this end, in an instance704, one or more electronic devices, for example, the second computing device104and a computing device10N can send connection requests to connect with the Wi-Fi hotspot initialized by the first computing device102. In some examples, where the second computing device104and the computing device10N are trusted devices, the configuration parameters associated with the Wi-Fi hotspot can be decrypted by the second computing device104and the computing device10N to connect with the Wi-Fi hotspot. A third view750schematically illustrates a secret key exchange process amongst the first computing device102and the plurality of electronic devices102-10N that can be performed for sharing configuration settings. In accordance with some example embodiments, upon connecting to the Wi-Fi hotspot (as illustrated in the second view720), configuration settings for remaining of the plurality of electronic devices102-10N can be shared by the first computing device102over a secured communication channel (e.g. the secured communication network105). To this end, the secret key exchange process can be performed amongst the plurality of electronic devices102-10N, illustrated in the third view750(i.e. to initiate data communication pertaining to the configuration settings). Said differently, while the plurality of electronic devices102-10N can initially connect to the Wi-Fi hotspot and communicate over the communication network103provided by the Wi-Fi hotspot, data communication for sharing configuration settings may be performed by using secret keys based on the secret key exchange process as described inFIG.5and illustrated in the third view750.
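The secret key exchange process is not specified here beyond the reference toFIG.5; a toy Diffie-Hellman-style exchange illustrates how the master and a client device can each derive the same session secret without transmitting it. The group parameters below are deliberately undersized and are assumptions for illustration, far too small for real security.

```python
import secrets

P = 0xFFFFFFFB  # small prime modulus (illustration only)
G = 5           # generator (illustration only)

def make_keypair():
    private = secrets.randbelow(P - 2) + 1
    public = pow(G, private, P)
    return private, public

master_priv, master_pub = make_keypair()
client_priv, client_pub = make_keypair()

# Public values are exchanged; each side combines its private key
# with the peer's public value to derive the shared session secret.
master_secret = pow(client_pub, master_priv, P)
client_secret = pow(master_pub, client_priv, P)
assert master_secret == client_secret
```

The derived secret could then serve as (or be used to derive) the session key under which the configuration settings are encrypted.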
Accordingly, the configuration settings can be shared over the secured communication network105established amongst the plurality of electronic devices102-10N based on the secret key exchange process. This can be performed to ensure that the configuration settings are shared with trusted devices and a malicious device may not intrude into the data communication. As illustrated, in an instance706, the first computing device102and remaining of the plurality of electronic devices104-10N can perform the secret key exchange process (as described inFIG.5) and, in another instance708, the remaining devices establish a socket connection with the first computing device102. Further details and sharing of the configuration settings are illustrated and described inFIG.8. FIG.8illustrates another example scenario representing a first view800that illustrates sharing of the configuration settings from the first computing device102(e.g. a master device) to remaining of the plurality of electronic devices104-10N.FIG.8also illustrates a second view820depicting a configuration set up by the remaining of the plurality of electronic devices104-10N, in accordance with some example embodiments described herein. Illustratively, the first view800depicts sending of encrypted data, at step802, from the first computing device102to the remaining of the plurality of electronic devices104-10N. In this regard, as illustrated, the remaining of the plurality of electronic devices104-10N can be connected (or establish a socket connection) with the first computing device102, over the secured communication network105. Establishing a socket connection can be performed based on a secret key exchange process, using steps as described in reference toFIG.5. In some example embodiments, the encrypted data may correspond to configuration settings or configuration data to configure the remaining of the plurality of electronic devices104-10N shared by the first computing device102.
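The encrypt-share-decrypt flow for the configuration settings can be sketched as follows. The XOR keystream below is a sketch only, used so the example stays self-contained; a real implementation would use an authenticated cipher such as AES-GCM, and the session key value is an assumed placeholder for the output of the key exchange step.

```python
import hashlib
import json

def keystream(session_key: bytes, length: int) -> bytes:
    # Deterministic keystream derived from the session key (sketch only).
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(session_key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(session_key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(session_key, len(data))))

decrypt = encrypt  # XOR with the same keystream is its own inverse

session_key = b"shared-session-key"  # placeholder for the exchanged key
settings = json.dumps({"SSID": "FactoryNet", "password": "s3cret"}).encode()

# Master device encrypts and shares; a client device decrypts.
ciphertext = encrypt(session_key, settings)
recovered = json.loads(decrypt(session_key, ciphertext))
print(recovered["SSID"])
```

Only devices holding the session key from the exchange in the third view750can recover the settings, which is the property the secured communication network105relies on.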
To this end, in some examples, the configuration settings can be encrypted using a session key (e.g. the encrypted session key, as described in reference toFIG.5). Further, as illustrated, at step804, the remaining of the plurality of electronic devices104-10N can decrypt the encrypted data using the session key to access the configuration settings. In some examples, the configuration settings may include, but are not limited to, the SSID, a security password, etc. shared by the first computing device102to configure the remaining of the plurality of electronic devices104-10N to use the access point or the hotspot. Accordingly, as illustrated in the second view820, the remaining of the plurality of electronic devices104-10N can be configured, at step806, by using the configuration settings (e.g. the SSID and the security password) decrypted at step804. In some examples, based on the configuration, the remaining of the plurality of electronic devices104-10N can remain connected to the Wi-Fi access point or the hotspot initialized at the first computing device102. By way of implementation of the embodiments described herein, in an industrial environment, multiple electronic devices (e.g. the plurality of electronic devices102-10N) can be configured and commissioned simultaneously (e.g. in batches) for a desired use. Said differently, upon configuring one device (e.g. the first computing device102), remaining electronic devices (e.g. the second computing device104, and so on) can be configured based on configuration settings generated by the first configured device (i.e. the first computing device102). In this regard, configuring the plurality of electronic devices102-10N described herein, can include, for example, but not limited to, setting up a system language, controlling network priorities (e.g.
Bluetooth or NFC), adjusting system volume, controlling device location availability, configuring an electronic device to connect with a specific network, installing an application, configuring the electronic device to download some files from a defined local network, rebooting the electronic device, and/or the like. FIG.9illustrates a schematic view900of an example electronic device of the plurality of electronic devices, in accordance with some example embodiments described herein. The electronic device, in some examples, can correspond to a mobile handset. In some example embodiments, the electronic device illustrated inFIG.9, can be any device of the plurality of electronic devices102-10N as described in reference toFIGS.1-8. For example, in some embodiments, the mobile handset illustrated inFIG.9can correspond to the first computing device102or the second computing device104, as described in reference toFIGS.1-8. According to some example embodiments,FIG.9illustrates a schematic block diagram of an example end-user device such as a user equipment that can be the first computing device102(e.g. a mobile handset) capable of connecting to the communication network (103and/or105) in accordance with some embodiments described herein. Although FIG.9illustrates a mobile handset, it will be understood that the electronic device can be any device as described inFIG.1, and that the mobile handset is merely illustrated to provide context for the various embodiments described herein. To this end, the following discussion is intended to provide a brief, general description of an example of a suitable environment900in which the various embodiments can be implemented.
While the description includes a general context of computer-executable instructions embodied on a machine-readable storage medium, those skilled in the art will recognize that the various embodiments also can be implemented in combination with other program modules and/or as a combination of hardware and software. Generally, applications (e.g., program modules) can include routines, programs, components, data structures, etc., described herein in accordance with example embodiments, that can perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the methods described herein can be practiced with other system configurations, including single-processor or multiprocessor systems, minicomputers, mainframe computers, as well as personal computers, hand-held computing devices, microprocessor-based or programmable consumer electronics, and the like, each of which can be operatively coupled to one or more associated devices. According to some example embodiments, the first computing device102can typically include a variety of machine-readable media. Machine-readable media can be any available media that can be accessed by the computer and includes both volatile and non-volatile media, removable and non-removable media. By way of example and not limitation, computer-readable media can comprise computer storage media and communication media. Computer storage media can include volatile and/or non-volatile media, removable and/or non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules or other data. 
Computer storage media can include, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD ROM, digital video disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer. According to some example embodiments described herein, communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. In this regard, the term “modulated data signal” can correspond to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above may also be included within the scope of computer-readable media. According to some example embodiments, the mobile handset can comprise a processor902for controlling and processing all onboard operations and functions. A memory904interfaces to the processor902for storage of data and one or more applications906(e.g., a video player software, user feedback component software, etc.). Other applications can include voice recognition of predetermined voice commands that facilitate initiation of the user feedback signals. The applications906can be stored in the memory904and/or in a firmware908, and executed by the processor902from either or both of the memory904and the firmware908. The firmware908can also store startup code for execution in initializing the mobile handset.
A communications component910interfaces to the processor902to facilitate wired/wireless communication with external systems, e.g., cellular networks, VoIP networks, and so on. Here, the communications component910can also include a suitable cellular transceiver911(e.g., a GSM transceiver) and/or an unlicensed transceiver913(e.g., Wi-Fi, WiMax) for corresponding signal communications. The mobile handset can be a device such as a cellular telephone, a PDA with mobile communications capabilities, or a messaging-centric device. The communications component910also facilitates communications reception from terrestrial radio networks (e.g., broadcast), digital satellite radio networks, and Internet-based radio services networks. The mobile handset can also comprise a display912for displaying text, images, video, telephony functions (e.g., a Caller ID function), setup functions, and for user input. For example, the display912can also be referred to as a “screen” that can accommodate the presentation of multimedia content (e.g., music metadata, messages, wallpaper, graphics, etc.). The display912can also display videos and can facilitate the generation, editing and sharing of video quotes. A serial I/O interface914is provided in communication with the processor902to facilitate wired and/or wireless serial communications (e.g., USB, and/or IEEE 1394) through a hardwire connection, and other serial input devices (e.g., a keyboard, keypad, and mouse). This supports updating and troubleshooting the mobile handset, for example. Audio capabilities are provided with an audio I/O component916, which can include a speaker for the output of audio signals related to, for example, indication that the user pressed the proper key or key combination to initiate the user feedback signal. The audio I/O component916also facilitates the input of audio signals through a microphone to record data and/or telephony voice data, and for inputting voice signals for telephone conversations.
The mobile handset can also comprise a slot interface918for accommodating a SIC (Subscriber Identity Component) in the form factor of a card Subscriber Identity Module (SIM) or universal SIM920and interfacing the SIM card920with the processor902. However, it is to be appreciated that the SIM card920can be manufactured into the mobile handset and updated by downloading data and software. The mobile handset can also process IP data traffic through the communication component910to accommodate IP traffic from an IP network such as, for example, the Internet, a corporate intranet, a home network, a personal area network, etc., through an ISP or broadband cable provider. Thus, VoIP traffic can be utilized by the mobile handset and IP-based multimedia content can be received in either an encoded or decoded format. A video processing component922(e.g., a camera) can be provided for decoding encoded multimedia content. The video processing component922can aid in facilitating the generation, editing and sharing of video quotes. The mobile handset also includes a power source924in the form of batteries and/or an AC power subsystem, which power source924can interface to an external power system or charging equipment (not shown) by a power I/O component926. According to some example embodiments, the mobile handset can also comprise a video component930for processing received video content and for recording and transmitting video content. For example, the video component930can facilitate the generation, editing and sharing of video quotes. In some example embodiments, a location tracking component932facilitates geographically locating the mobile handset. As described hereinabove, this can occur when the user initiates the feedback signal automatically or manually. According to some example embodiments, a user input component934facilitates the user initiating the quality feedback signal.
In this regard, in some examples, the user input component934can also facilitate the generation, editing and sharing of video quotes. According to various example embodiments described herein, the user input component934can include conventional input device technologies such as a keypad, keyboard, mouse, stylus pen, and/or touch screen, for example. Referring again to the applications906, a hysteresis component936can facilitate the analysis and processing of hysteresis data, which is utilized to determine when to associate with the access point. A software trigger component938can be provided that facilitates triggering of the hysteresis component936when the Wi-Fi transceiver913detects the beacon of the access point. A SIP client940enables the mobile handset to support SIP protocols and register the subscriber with the SIP registrar server. In some example embodiments, the applications906can also include a client942that provides at least the capability of discovery, play and store of multimedia content, for example, music. In some example embodiments, the mobile handset, as indicated above in relation to the communications component910, includes an indoor network radio transceiver913(e.g., Wi-Fi transceiver). This function can support the indoor radio link, such as IEEE 802.11, for the dual-mode GSM handset. In some example embodiments, the mobile handset can accommodate at least satellite radio services through a handset that can combine wireless voice and digital radio chipsets into a single handheld device. FIG.10illustrates a schematic view1000of an example computing device1001of the plurality of electronic devices102-10N, in accordance with some example embodiments described herein. The computing device1001, in some examples, can correspond to a computer. In some example embodiments, the computing device1001illustrated inFIG.10, can be any electronic device of the plurality of electronic devices102-10N as described in reference toFIGS.1-9.
For example, in some embodiments, the computing device1001illustrated inFIG.10can correspond to the first computing device102or the second computing device104, as described in reference toFIGS.1-9. Referring now toFIG.10, there is illustrated a block diagram of a computing device1001operable to execute the functions and operations performed in the described example embodiments. In some example embodiments, the computing device1001can provide networking and communication capabilities between a wired or wireless communication network and a server and/or communication device. In order to provide additional context for various aspects thereof,FIG.10and the following discussion are intended to provide a brief, general description of a suitable computing environment in which the various aspects of the embodiments can be implemented to facilitate the establishment of a transaction between an entity and a third party. While the description above is in the general context of computer-executable instructions that can run on one or more computers, those skilled in the art will recognize that the various embodiments also can be implemented in combination with other program modules and/or as a combination of hardware and software. According to said example embodiments, program modules include routines, programs, components, data structures, etc., that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the inventive methods can be practiced with other computer system configurations, including single-processor or multiprocessor computer systems, minicomputers, mainframe computers, as well as personal computers, hand-held computing devices, microprocessor-based or programmable consumer electronics, and the like, each of which can be operatively coupled to one or more associated devices. 
The illustrated aspects of the various embodiments can also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules can be located in both local and remote memory storage devices. In accordance with some example embodiments, computing devices typically include a variety of media, which can include computer-readable storage media or communications media, which two terms are used herein differently from one another as follows. According to some example embodiments, a computer-readable storage media can be any available storage media that can be accessed by the computer and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable storage media can be implemented in connection with any method or technology for storage of information such as computer-readable instructions, program modules, structured data, or unstructured data. Computer-readable storage media can include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible and/or non-transitory media which can be used to store desired information. Computer-readable storage media can be accessed by one or more local or remote computing devices, e.g., via access requests, queries or other data retrieval protocols, for a variety of operations with respect to the information stored by the medium. 
In some examples, communications media can embody computer-readable instructions, data structures, program modules or other structured or unstructured data in a data signal such as a modulated data signal, e.g., a carrier wave or other transport mechanism, and includes any information delivery or transport media. The term “modulated data signal” or signals refers to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in one or more signals. By way of example, and not limitation, communication media include wired media, such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. With reference toFIG.10, an example environment for implementing various aspects described herein with regard to the end-user device can comprise the computing device1001, comprising a processing unit1004, a system memory1006and a system bus1008. The system bus1008can be configured to couple system components including, but not limited to, the system memory1006to the processing unit1004. In some example embodiments, the processing unit1004can be any of various commercially available processors. To this end, in some examples, dual microprocessors and other multi-processor architectures can also be employed as the processing unit1004. According to some example embodiments, the system bus1008can be any of several types of bus structure that can further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and a local bus using any of a variety of commercially available bus architectures. In some examples, the system memory1006can comprise read-only memory (ROM)1027and random-access memory (RAM)1012.
According to some example embodiments, a basic input/output system (BIOS) is stored in a non-volatile memory1027such as ROM, EPROM, or EEPROM, which BIOS contains the basic routines that help to transfer information between elements within the computing device1001, such as during start-up. The RAM1012can also comprise a high-speed RAM such as static RAM for caching data. According to some example embodiments, the computing device1001can further comprise an internal hard disk drive (HDD)1014(e.g., EIDE, SATA), which internal hard disk drive1014can also be configured for external use in a suitable chassis (not shown), a magnetic floppy disk drive (FDD)1016(e.g., to read from or write to a removable diskette1018) and an optical disk drive1020(e.g., to read a CD-ROM disk1022or to read from or write to other high capacity optical media such as a DVD). In some examples, the hard disk drive1014, magnetic disk drive1016and optical disk drive1020can be connected to the system bus1008by a hard disk drive interface1024, a magnetic disk drive interface1026and an optical drive interface1028, respectively. According to some example embodiments, the interface1024for external drive implementations can comprise at least one or both of Universal Serial Bus (USB) and IEEE 1394 interface technologies. Other external drive connection technologies are within contemplation of the subject embodiments. According to some example embodiments described herein, the drives and their associated computer-readable media provide nonvolatile storage of data, data structures, computer-executable instructions, and so forth. For the computing device1001, the drives and media accommodate the storage of any data in a suitable digital format.
Although the description of computer-readable media above refers to a HDD, a removable magnetic diskette, and a removable optical media such as a CD or DVD, it may be appreciated by those skilled in the art that other types of media which are readable by a computing device1001, such as zip drives, magnetic cassettes, flash memory cards, cartridges, and the like, can also be used in the example operating environment, and further, that any such media can contain computer-executable instructions for performing the methods of the disclosed embodiments. In some example embodiments, a number of program modules can be stored in the drives and RAM1012, including an operating system1030, one or more application programs1032, other program modules1034and program data1036. To this end, in some examples, all or portions of the operating system, applications, modules, and/or data can also be cached in the RAM1012. It is to be appreciated that the various embodiments can be implemented with various commercially available operating systems or combinations of operating systems. According to some example embodiments, a user can enter commands and information into the computing device1001through one or more wired/wireless input devices, e.g., a keyboard1038and a pointing device, such as a mouse1040. Other input devices (not shown) may include a microphone, an IR remote control, a joystick, a game pad, a stylus pen, touch screen, or the like. In some examples, these and other input devices are often connected to the processing unit1004through an input device interface1042that is coupled to the system bus1008, but can be connected by other interfaces, such as a parallel port, an IEEE 1394 serial port, a game port, a USB port, an IR interface, etc. According to some example embodiments, a monitor1044or other type of display device can also be connected to the system bus1008through an interface, such as a video adapter1046. 
In addition to the monitor1044, the computing device1001can also comprise other peripheral output devices (not shown), such as speakers, printers, etc. According to some example embodiments, the computing device1001can operate in a networked environment using logical connections by wired and/or wireless communications to one or more remote computers, such as a remote computer(s)1048. In some examples, the remote computer(s)1048can be a workstation, a server computer, a router, a personal computer, portable computer, microprocessor-based entertainment device, a peer device or other common network node, and typically includes many or all of the elements described relative to the computer, although, for purposes of brevity, only a memory/storage device1050is illustrated. According to some example embodiments, the logical connections depicted include wired/wireless connectivity to a local area network (LAN)1052and/or larger networks, e.g., a wide area network (WAN)1054. Such LAN and WAN networking environments are commonplace in offices and companies, and facilitate enterprise-wide computer networks, such as intranets, all of which may connect to a global communications network, e.g., the Internet. In some examples, when used in a LAN networking environment, the computing device1001can be connected to the local network1052through a wired and/or wireless communication network interface or adapter1056. The adapter1056may facilitate wired or wireless communication to the LAN1052, which may also include a wireless access point disposed thereon for communicating with the wireless adapter1056. In alternate examples, when used in a WAN networking environment, the computing device1001can include a modem1058, or can be connected to a communications server on the WAN1054or has other means for establishing communications over the WAN1054, such as by way of the Internet. 
The modem1058, which can be internal or external and a wired or wireless device, is connected to the system bus1008through the input device interface1042. In a networked environment, program modules depicted relative to the computer, or portions thereof, can be stored in the remote memory/storage device1050. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers can be used. According to some example embodiments, the computing device1001can be operable to communicate with any wireless devices or entities operatively disposed in wireless communication, e.g., a printer, scanner, desktop and/or portable computer, portable data assistant, communications satellite, any piece of equipment or location associated with a wirelessly detectable tag (e.g., a kiosk, news stand, restroom), and telephone. This can further comprise at least Wi-Fi and Bluetooth™ wireless technologies. Thus, the communication can be a predefined structure as with a conventional network or simply an ad hoc communication between at least two devices. In accordance with some example embodiments, Wi-Fi, or Wireless Fidelity, allows connection to the Internet from a couch at home, a bed in a hotel room, or a conference room at work, without wires. To this end, Wi-Fi referred herein, is a wireless technology similar to that used in a cell phone that enables such devices, e.g., computers, to send and receive data indoors and out; anywhere within the range of a base station. Wi-Fi networks use radio technologies called IEEE802.11 (a, b, g, n, etc.) to provide secure, reliable, fast wireless connectivity. Further, in accordance with some example embodiments described herein, a Wi-Fi network can be used to connect computers or the plurality of electronic devices102-10N to each other, to the Internet, and to wired networks (which use IEEE802.3 or Ethernet). 
Wi-Fi networks operate in the unlicensed 2.4 and 5 GHz radio bands, at an 11 Mbps (802.11b) or 54 Mbps (802.11a) data rate, for example, or with products that contain both bands (dual band), so the networks can provide real-world performance similar to the basic “10BaseT” wired Ethernet networks used in many offices. FIG.11illustrates exemplary views (1100,1120, and1150) of user interfaces of various instances of an application used for configuring the first computing device102, in accordance with some example embodiments described herein. Illustratively, a first view1100depicts a user interface of a first instance of an application for initializing the communication network103, as described inFIGS.2-8. In some example embodiments, the input/output circuit114of the first computing device102can be used by a user to provide network configuration parameters (such as, but not limited to, Wi-Fi security, Wi-Fi SSID, and Wi-Fi password) for setting up an access point (e.g. a wireless access point or a Wi-Fi hotspot) at the first computing device102. In some example embodiments, the network configuration parameters may be built in on the plurality of electronic devices102-10N, and the application can automatically initiate network configuration at an electronic device (102-10N) as the device is turned ON, for configuring the device based on the network configuration parameters. Illustratively, the second view1120depicts another user interface of a second instance of the application for configuring network priority settings of the plurality of electronic devices102-10N. In this regard, in some examples, a connection preference (e.g. Bluetooth or NFC) of an electronic device can be configured either manually or automatically by the application. A third view1150illustrates another user interface of a third instance of the application for configuration of sound preferences (e.g. media volume, call volume, alarm volume, ring volume) of the plurality of electronic devices102-10N.
In this regard, in some examples, the sound preferences of the electronic device can be configured either manually or automatically by the application. According to various example embodiments described herein, a first electronic device (e.g. the first computing device102) can be initially configured and can further share configuration settings (illustrated in views800,820, and850) with one or more remaining electronic devices that can be communicatively coupled to the first computing device102. In this regard, the remaining one or more electronic devices (102-10N) can be communicatively coupled to the first computing device102and can receive the configuration settings in a manner as described before in reference toFIGS.2-7. For instance, in some example embodiments, the configuration settings can be shared by an application on the first computing device102, in response to identifying other instances of the same application initiated at the remaining ones of the plurality of electronic devices102-10N and knowing information associated with the other instances of the application at the remaining electronic devices. In some example embodiments, an application of the plurality of electronic devices102-10N can provide similar user interfaces as illustrated inFIG.11to input configuration files (e.g. XML files, APKs, etc.) for configuring the plurality of electronic devices102-10N. For example, in some embodiments, configuration files can be input, via the input/output circuit114, on a first instance of an application at the first computing device102to configure the first computing device102. Further, in response to identification of a second instance of the same application to be initialized at the second computing device104, the configuration files can be automatically shared by the first computing device102with the second computing device104, in a manner as described in reference toFIGS.2-8. 
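The configuration-sharing flow described above, in which the first computing device pushes its configuration files to a peer once a matching instance of the same application is identified there, can be sketched roughly as follows. The class and function names are hypothetical illustrations, not identifiers from the disclosure:

```python
# Illustrative sketch of automatic configuration sharing: the first device
# pushes its config files to peers running the same application instance.
# All names (Device, share_configuration, "setup-app") are hypothetical.

class Device:
    def __init__(self, name):
        self.name = name
        self.config_files = {}    # filename -> contents
        self.app_instance = None  # identifier of the running application, if any

    def start_app(self, app_id):
        self.app_instance = app_id

def share_configuration(first, others):
    """Push the first device's config files to peers running the same app."""
    shared_to = []
    for dev in others:
        if dev.app_instance is not None and dev.app_instance == first.app_instance:
            dev.config_files.update(first.config_files)
            shared_to.append(dev.name)
    return shared_to

first = Device("device-1")
first.start_app("setup-app")
first.config_files["settings.xml"] = "<volume>5</volume>"
second = Device("device-2")
second.start_app("setup-app")
third = Device("device-3")  # application not initialized; receives nothing

print(share_configuration(first, [second, third]))  # ['device-2']
```

In this sketch the check on `app_instance` stands in for "identifying other instances of the same application"; a real system would discover peers over the network rather than receive them as a list.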
In some examples, the configuration files can be stored on a remote server, for example, the computational platform106, as described inFIG.1. To this end, the application on the first computing device102can be configured to extract the configuration files from the computational platform106and share them with the remaining ones of the plurality of electronic devices102-10N. As used in this application, the terms “system,” “component,” “interface,” and the like are generally intended to refer to a computer-related entity or an entity related to an operational machine with one or more specific functionalities. The entities disclosed herein can be either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. These components also can execute from various computer readable storage media having various data structures stored thereon. The components may communicate via local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems via the signal). 
As another example, a component can be an apparatus with specific functionality provided by mechanical parts operated by electric or electronic circuitry that is operated by software or firmware application(s) executed by a processor, wherein the processor can be internal or external to the apparatus and executes at least a part of the software or firmware application. As yet another example, a component can be an apparatus that provides specific functionality through electronic components without mechanical parts; the electronic components can comprise a processor therein to execute software or firmware that confers at least in part the functionality of the electronic components. An interface can comprise input/output (I/O) components as well as associated processor, application, and/or API components. Furthermore, the disclosed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term “article of manufacture” as used herein is intended to encompass a computer program accessible from any computer-readable device, computer-readable carrier, or computer-readable media. For example, computer-readable media can include, but are not limited to, a magnetic storage device, e.g., hard disk; floppy disk; magnetic strip(s); an optical disk (e.g., compact disk (CD), a digital video disc (DVD), a Blu-ray Disc™ (BD)); a smart card; a flash memory device (e.g., card, stick, key drive); and/or a virtual device that emulates a storage device and/or any of the above computer-readable media. 
As employed in the subject specification, the term “processor” can refer to substantially any computing processing unit or device comprising, but not limited to comprising, single-core processors; single-processors with software multithread execution capability; multi-core processors; multi-core processors with software multithread execution capability; multi-core processors with hardware multithread technology; parallel platforms; and parallel platforms with distributed shared memory. Additionally, a processor can refer to an integrated circuit, an application specific integrated circuit (ASIC), a digital signal processor (DSP), a field programmable gate array (FPGA), a programmable logic controller (PLC), a complex programmable logic device (CPLD), a discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. Processors can exploit nano-scale architectures such as, but not limited to, molecular and quantum-dot based transistors, switches and gates, in order to optimize space usage or enhance performance of user equipment. A processor also can be implemented as a combination of computing processing units. In the subject specification, terms such as “store,” “data store,” “data storage,” “database,” “repository,” “queue,” and substantially any other information storage component relevant to operation and functionality of a component, refer to “memory components,” or entities embodied in a “memory” or components comprising the memory. It will be appreciated that the memory components described herein can be either volatile memory or nonvolatile memory or can comprise both volatile and nonvolatile memory. In addition, memory components or memory elements can be removable or stationary. Moreover, memory can be internal or external to a device or component. 
Memory can comprise various types of media that are readable by a computer, such as hard-disc drives, zip drives, magnetic cassettes, flash memory cards or other types of memory cards, cartridges, or the like. By way of illustration, and not limitation, nonvolatile memory can comprise read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory. Volatile memory can comprise random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). Additionally, the disclosed memory components of systems or methods herein are intended to comprise, without being limited to comprising, these and any other suitable types of memory. In particular and in regard to the various functions performed by the above described components, devices, circuits, systems and the like, the terms (including a reference to a “means”) used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., a functional equivalent), even though not structurally equivalent to the disclosed structure, which performs the function in the herein illustrated example aspects of the embodiments. In this regard, it will also be recognized that the embodiments comprise a system as well as a computer-readable medium having computer-executable instructions for performing the acts and/or events of the various methods. Computing devices typically comprise a variety of media, which can comprise computer-readable storage media and/or communications media, which two terms are used herein differently from one another as follows. 
Computer-readable storage media can be any available storage media that can be accessed by the computer and comprises both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable storage media can be implemented in connection with any method or technology for storage of information such as computer-readable instructions, program modules, structured data, or unstructured data. Computer-readable storage media can comprise, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible and/or non-transitory media which can be used to store desired information. Computer-readable storage media can be accessed by one or more local or remote computing devices, e.g., via access requests, queries or other data retrieval protocols, for a variety of operations with respect to the information stored by the medium. On the other hand, communications media typically embody computer-readable instructions, data structures, program modules or other structured or unstructured data in a data signal such as a modulated data signal, e.g., a carrier wave or other transport mechanism, and comprises any information delivery or transport media. The term “modulated data signal” or signals refers to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in one or more signals. 
By way of example, and not limitation, communications media comprise wired media, such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Further, terms like “user equipment,” “user device,” “mobile device,” “mobile,” “station,” “access terminal,” “terminal,” “handset,” and similar terminology generally refer to a wireless device utilized by a subscriber or user of a wireless communication network or service to receive or convey data, control, voice, video, sound, gaming, or substantially any data-stream or signaling-stream. The foregoing terms are utilized interchangeably in the subject specification and related drawings. Likewise, the terms “access point,” “node B,” “base station,” “evolved Node B,” “cell,” “cell site,” and the like, can be utilized interchangeably in the subject application, and refer to a wireless network component or appliance that serves and receives data, control, voice, video, sound, gaming, or substantially any data-stream or signaling-stream from a set of subscriber stations. Data and signaling streams can be packetized or frame-based flows. It is noted that in the subject specification and drawings, context or explicit distinction provides differentiation with respect to access points or base stations that serve and receive data from a mobile device in an outdoor environment, and access points or base stations that operate in a confined, primarily indoor environment overlaid in an outdoor coverage area. Furthermore, the terms “user,” “subscriber,” “customer,” “consumer,” and the like are employed interchangeably throughout the subject specification, unless context warrants particular distinction(s) among the terms. 
It should be appreciated that such terms can refer to human entities, associated devices, or automated components supported through artificial intelligence (e.g., a capacity to make inference based on complex mathematical formalisms) which can provide simulated vision, sound recognition and so forth. In addition, the terms “wireless network” and “network” are used interchangeably in the subject application; when the context in which a term is utilized warrants distinction for clarity purposes, such distinction is made explicit. Moreover, the word “exemplary” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the word exemplary is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then “X employs A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. In addition, while a particular feature may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. 
Furthermore, to the extent that the terms “includes” and “including” and variants thereof are used in either the detailed description or the claims, these terms are intended to be inclusive in a manner similar to the term “comprising.” The above descriptions of various embodiments of the subject disclosure, the corresponding figures, and what is described in the Abstract are presented herein for illustrative purposes, and are not intended to be exhaustive or to limit the disclosed embodiments to the precise forms disclosed. It is to be understood that one of ordinary skill in the art may recognize that other embodiments having modifications, permutations, combinations, and additions can be implemented for performing the same, similar, alternative, or substitute functions of the disclosed subject matter, and are therefore considered within the scope of this disclosure. Therefore, the disclosed subject matter should not be limited to any single embodiment described herein, but rather should be construed in breadth and scope in accordance with the claims below. It may be noted that, as used in this specification and the appended claims, the singular forms “a,” “an” and “the” comprise plural referents unless the content clearly dictates otherwise. References within the specification to “one embodiment,” “an embodiment,” “embodiments”, or “one or more embodiments” are intended to indicate that a particular feature, structure, or characteristic described in connection with the embodiment is comprised in at least one embodiment of the present disclosure. The appearance of such phrases in various places within the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Further, various features are described which may be exhibited by some embodiments and not by others. 
It should be noted that, when employed in the present disclosure, the terms “comprises,” “comprising,” and other derivatives from the root term “comprise” are intended to be open-ended terms that specify the presence of any stated features, elements, integers, steps, or components, and are not intended to preclude the presence or addition of one or more other features, elements, integers, steps, components, or groups thereof. Detailed embodiments are disclosed herein; however, it is to be understood that the disclosed embodiments are merely exemplary and may be embodied in various forms. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims. While it is apparent that the illustrative embodiments disclosed herein fulfill the objectives stated above, it will be appreciated that numerous modifications and other embodiments may be devised by one of ordinary skill in the art. Accordingly, it will be understood that the appended claims are intended to cover all such modifications and embodiments, which come within the spirit and scope of the present disclosure.
DESCRIPTION OF EMBODIMENTS

Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. In the present specification and drawings, components having substantially the same functional configuration are denoted by the same reference signs and repeated description thereof will be omitted. Description will be given in the following order.

1. Outline
2. Configuration example of system
3. Specific examples of operation in system

1. Outline

The present disclosure relates to a system in which a plurality of devices and servers are connected. More specifically, the present disclosure relates to, for example, a system that is used when a broadcasting station produces a TV program. At present, for example, when a TV program relaying sports in a stadium is produced, a relay vehicle is dispatched to a site so that a plurality of cameras disposed in the stadium can be connected to the relay vehicle. This connection is generally made by a high-speed serial interface standard called serial digital interface (SDI), and a digital video signal and a digital audio signal are sent from the cameras to the relay vehicle. However, since the relay vehicle and each camera are connected by the SDI at a relay site as described above, distance restrictions arise. For example, it is difficult to construct the same system in a situation in which the relay vehicle and each camera are several tens of kilometers away from each other. Further, settings of the relay vehicle differ depending on a difference between environments for shooting TV programs (a difference between facilities such as stadiums or concert halls, a difference between camera installation places, a difference in brightness between shooting places, a difference between indoors and outdoors, or the like), and the reality is that a person performs setting of the relay vehicle according to the environment. 
Further, for example, in a broadcasting station, there are a plurality of studios for shooting various TV programs such as news programs or variety programs. Not all of the plurality of studios are in operation at all times, and studios in which shooting is not performed are not in operation. In this case, devices relevant to a studio not in operation are not effectively utilized. In view of the above, the present disclosure achieves reduction of production costs through minimization in numbers of devices or personnel dispatched to a site and assumes realization of remote integration for eliminating distance restrictions. Further, the present disclosure assumes realization of shared integration (shared production) for achieving an increase in an operating rate of equipment through free combination of a plurality of studios and control rooms and dynamic assignment of resources according to a TV program scale. When remote integration and shared integration are combined, it is possible to perform effective utilization of devices or people, and to greatly reduce production costs of TV programs or the like. In the present disclosure, a mechanism of a facility management software (FMS) system is introduced in order to realize the remote integration and the shared integration described above. In a system using the FMS, a photographing device such as a camera, a device in a studio, or the like and the FMS server are connected by IP, and distance restrictions are solved. Further, since setting of various devices is performed on the basis of setting information of the FMS server, the settings of the devices can be rapidly switched, and effective utilization of a studio or the like not in operation can be realized. Hereinafter, a system using the FMS according to the present disclosure will be described in detail. 2. 
Configuration Example of System

FIG.1is a schematic diagram illustrating a configuration of a system1000according to an embodiment of the present disclosure. As illustrated inFIG.1, this system1000includes an FMS server (network management apparatus)100, an IP router system200, a camera system210, a switcher system220, a server system230, a monitor system240, and a 3rd party system250. The IP router system200, the camera system210, the switcher system220, the server system230, the monitor system240, and the 3rd party system250are connected to the FMS server100by IP. Further, a graphical user interface (GUI)280is connected to the FMS server100. The GUI280may be a client terminal such as a laptop PC. The camera system210is, for example, a system having a device provided by the same supplier as that providing the FMS server100. On the other hand, the 3rd party system250may be a system having a device provided by a supplier different from that providing the FMS server100. The switcher system220is a system having a device having functions such as video switching and CG synthesis. Further, the server system230is a system having a device that performs, for example, slow or replay reproduction, and the monitor system240is a system having a device that allows a video engineer or the like to confirm a video. The FMS server100is configured by installing the software (program) of an FMS application102, and includes a core engine110for performing its functions. The FMS server100includes a device recognition unit112, a setting information registration unit114, a setting information holding unit115, a setting execution unit116, and a schedule management unit118as functional blocks thereof. Further, the FMS server100includes a first device driver130that is an interface for connection with the IP router system200, the camera system210, the switcher system220, the server system230, and the monitor system240. 
Further, the FMS server100includes a second device driver140that is an interface for connection to the 3rd party system250. A plurality of device drivers may be provided for each manufacturer or depending on an OS. Each of the functional blocks including the device recognition unit112, the setting information registration unit114, the setting information holding unit115, the setting execution unit116, and the schedule management unit118is configured of a central processing unit such as a CPU included in the FMS server100and a program (software) for causing the central processing unit to function, or is configured of a circuit (hardware). The device recognition unit112exchanges information with various devices connected to the FMS server100to recognize a connection of each of the various devices and to recognize a setting state of the various devices. InFIG.1, the IP router system200, the camera system210, the switcher system220, the server system230, the monitor system240, and the 3rd party system250are connected to the FMS server100. Therefore, the connection of these devices and the setting state of the devices are recognized by the device recognition unit112. A user can set the devices connected to the FMS server100to a desired setting by operating the GUI280, for example. The setting information registration unit114registers setting information input by the user operating the GUI280in the FMS server100. Further, the setting information registration unit114can also register setting information set using another method in the FMS server100. Further, the setting information holding unit115holds the setting information registered by the setting information registration unit114. 
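The division of labor among the device recognition unit112, the setting information registration unit114and holding unit115, and the setting execution unit116described above can be sketched roughly as follows. This is an illustrative model under assumed names, not the actual FMS implementation:

```python
# Minimal sketch of the FMS server's functional blocks: device recognition
# records each connected device's reported state, setting registration holds
# user-supplied setting information, and setting execution pushes it out.
# Class and key names are hypothetical.

class FMSServer:
    def __init__(self):
        self.devices = {}   # device name -> recognized setting state
        self.settings = {}  # device name -> registered setting information

    def recognize(self, name, state):
        """Device recognition: record a connection and its current state."""
        self.devices[name] = dict(state)

    def register_settings(self, name, info):
        """Setting registration/holding: keep setting info for a device."""
        self.settings[name] = dict(info)

    def execute(self, name):
        """Setting execution: send registered settings to a known device."""
        if name in self.devices and name in self.settings:
            self.devices[name].update(self.settings[name])
            return True
        return False

fms = FMSServer()
fms.recognize("camera-1", {"power": "off", "mode": "HD"})
fms.register_settings("camera-1", {"mode": "4K"})
print(fms.execute("camera-1"))          # True
print(fms.devices["camera-1"]["mode"])  # 4K
```

In a real deployment the `execute` step would go through the first or second device driver depending on the supplier of the target device; here that indirection is omitted for brevity.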
The setting execution unit116sends the setting information registered by the setting information registration unit114to each of the devices such as the IP router system200, the camera system210, the switcher system220, the server system230, the monitor system240, and the 3rd party system250to perform setting of each device. The setting of the respective devices such as the camera system210, the switcher system220, the server system230, and the monitor system240is performed via the first device driver130. Further, the setting of the device of the 3rd party system250is performed via the second device driver140. The schedule management unit118manages setting of each device on the basis of time information. For example, when the camera of the camera system210is set to be turned on at a predetermined time, the schedule management unit118manages the predetermined time so that the setting execution unit116turns the camera on at the predetermined time. According to the FMS server100configured as described above, the setting information of the various devices connected to the FMS server100is recognized on the FMS server100side, and the setting information of the user is reflected in each device. Since the FMS server100is connected to each of the devices such as the IP router system200, the camera system210, the switcher system220, the server system230, the monitor system240, and the 3rd party system250by an IP network, it is possible to freely change settings of remote devices. In this case, it is possible to freely set a path to each device such as the camera of the camera system210by changing the setting of the IP router system200. Further, when each device such as the IP router system200, the camera system210, the switcher system220, the server system230, the monitor system240, and the 3rd party system250is connected to a plurality of studios, the settings of each device are optimized so that each device can be shared by a plurality of studios. 
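The schedule management unit118's behavior, applying a registered setting (such as turning a camera on) once its predetermined time arrives, might be modeled as a simple time-ordered queue. The names and structure below are assumptions for illustration only:

```python
# Sketch of schedule management: settings are queued with a target time and
# applied once that time is reached. Names are hypothetical.
import heapq

class ScheduleManager:
    def __init__(self):
        self._queue = []
        self._counter = 0  # tiebreaker so heap entries never compare settings

    def schedule(self, at_time, device, setting):
        heapq.heappush(self._queue, (at_time, self._counter, device, setting))
        self._counter += 1

    def run_due(self, now, apply_fn):
        """Apply every queued setting whose scheduled time has arrived."""
        applied = []
        while self._queue and self._queue[0][0] <= now:
            _, _, device, setting = heapq.heappop(self._queue)
            apply_fn(device, setting)
            applied.append(device)
        return applied

state = {}
mgr = ScheduleManager()
mgr.schedule(10, "camera-1", {"power": "on"})
mgr.schedule(20, "camera-2", {"power": "on"})
done = mgr.run_due(now=15, apply_fn=lambda d, s: state.setdefault(d, {}).update(s))
print(done)  # ['camera-1']: only the first entry is due at t=15
```

The `apply_fn` callback stands in for the setting execution unit116; in the described system the schedule manager would hand the due settings to that unit rather than apply them itself.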
Thus, it is possible to realize the remote integration and the shared integration described above.

3. Specific Examples of Operation in System

Next, specific examples of an operation that is realized by the system1000of the present disclosure will be described.FIGS.2and3are schematic diagrams illustrating specific examples of the operation that is realized by the system1000of the present disclosure. In the examples illustrated inFIGS.2and3, a control base300, a local studio400mainly configured of 4K compatible devices, and a remote studio500mainly configured of HD compatible devices are configured as the bases for using each device. The local studio400is provided at a position relatively close to the control base300, and the remote studio500is provided at a remote place about several tens of kilometers away from the control base300, for example. Further, each device may be provided in a relay vehicle for shooting a live video or the like. The control base300includes a monitor310, a panel320, a multi-viewer330, a system controller340, a network switch350, and a recording server360. The monitor310, the panel320, and the multi-viewer330are provided in the control room of the control base300, and the system controller340, the network switch350, and the recording server360are provided in a machine room of the control base300. The local studio400includes a plurality of cameras410,420,430,440, and450and a camera control unit460that controls the cameras410,420,430,440, and450. The remote studio500includes a surveillance camera510, a video switcher520, and an audio system (not illustrated). According to the system1000of the present disclosure, setting can be performed on various devices included in each of the control base300, the local studio400, and the remote studio500. 
This setting can be roughly divided into setting of a transmission path based on the setting of the IP router system200(corresponding to the network switch350inFIGS.2and3) described above and setting of each device based on settings of the parameters regarding the control of each device. The setting of the transmission path differs betweenFIGS.2and3. By the transmission path being set, a desired device among the various devices included in each of the control base300, the local studio400, and the remote studio500is connected to the transmission path. InFIGS.2and3, the devices connected to the transmission path are indicated by solid lines, and the devices not connected to the transmission path are indicated by broken lines. In the state illustrated inFIG.2, the transmission path is set, and the monitor310, the panel320, the multi-viewer330, the system controller340, the network switch350, and the recording server360belonging to the control base300are connected to the transmission path. Further, the cameras420,430,440, and450and the camera control unit460among the cameras410,420,430,440, and450and the camera control unit460belonging to the local studio400are connected to the transmission path. Further, in the state illustrated inFIG.2, none of the devices belonging to the remote studio500is connected to the transmission path. Further, in the state illustrated inFIG.3, the recording server360among the devices belonging to the control base300is not connected to the transmission path. Further, in the state illustrated inFIG.3, only the camera410among the cameras410,420,430,440, and450and the camera control unit460belonging to the local studio400is connected to the transmission path. Further, in the state illustrated inFIG.3, the surveillance camera510and the video switcher520among the devices belonging to the remote studio500are connected to the transmission path. 
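The transmission-path setting that connects a chosen subset of devices (the solid-line/broken-line distinction ofFIGS.2and3) can be sketched as a set operation over the device inventory. The device names below are shorthand for the reference numerals and the function is purely illustrative:

```python
# Illustrative sketch of transmission-path setting: a configured path
# partitions each base's devices into connected and disconnected sets.
# Device names are shorthand stand-ins for the reference numerals.

ALL_DEVICES = {
    "control": {"monitor", "panel", "multi-viewer", "system-controller",
                "network-switch", "recording-server"},
    "local":   {"camera-410", "camera-420", "camera-430", "camera-440",
                "camera-450", "camera-control-unit"},
    "remote":  {"surveillance-camera", "video-switcher"},
}

def set_transmission_path(connected):
    """Report, per base, which devices are on and off the configured path."""
    result = {}
    for base, devices in ALL_DEVICES.items():
        result[base] = {
            "connected": sorted(devices & connected),
            "disconnected": sorted(devices - connected),
        }
    return result

# Roughly the FIG. 3 state: only camera-410 in the local studio, the remote
# studio devices on, and the recording server off the path.
path = set_transmission_path(
    {"monitor", "panel", "multi-viewer", "system-controller", "network-switch",
     "camera-410", "surveillance-camera", "video-switcher"})
print(path["local"]["connected"])  # ['camera-410']
```

In the described system this partition would be realized by configuring the network switch350from the FMS server100rather than computed as plain sets.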
As described above, it is possible to freely set a connection between the control base300, the local studio400, and the remote studio500by performing setting of the network switch350from the FMS server100. This makes it possible to realize the remote integration described above. The FMS server100may be disposed in the machine room of the control base300or in the remote studio500. Further, according to the system1000of the present disclosure, it is possible to perform setting of parameters for controlling various devices included in each of the control base300, the local studio400, and the remote studio500from the FMS server100. Hereinafter, settings that can be performed by each device will be described. Among the devices belonging to the control base300, the monitor310can change settings regarding input selection, 4K/HD change, and recall of preset image quality settings. The panel320can change settings regarding switching between functions assigned to buttons and changing a control target. The multi-viewer330can change settings regarding changing a screen layout. The system controller340can change settings regarding changing video and audio paths, changing a crosspoint matrix, changing system synchronization settings, and partially changing settings of a control target device. The network switch350can change settings regarding changing a connection relationship between devices and changing settings for guaranteeing a transmission band. The recording server360can change settings regarding changing a recording codec and changing an operating frequency. Further, among the devices belonging to the local studio400, the plurality of cameras410,420,430,440, and450can change settings regarding changing 4K/HD, changing image quality settings (gain or color tone), and changing an operating frequency. 
Similarly, the camera control unit460can also change settings regarding changing 4K/HD, changing image quality settings (gain or color tone), and changing an operating frequency. Further, in the devices belonging to the remote studio500, the video switcher520can change settings regarding changing an operating frequency, exchanging moving image data that is used at the time of recording, and registering macro functions. Further, the audio system belonging to the remote studio500can change settings regarding volume adjustment and channel setting. By sending information on parameters regarding the setting described above from the FMS server100to each device, it is possible to freely perform the setting of each device. Therefore, it is possible to realize the shared integration described above. For example, in the state illustrated inFIG.2, the monitor310in the control base300has been set to support 4K compatible settings. On the other hand, in the state illustrated inFIG.3, the monitor310in the control base300has been set to support HD compatible settings. Such setting can be realized by parameters for these settings being sent from the FMS server100. When a TV program or the like is produced by a 4K device, the user can operate the local studio400for 4K by operating the GUI280. Further, when a TV program or the like is produced by an HD device, the user can operate the remote studio500for HD by operating the GUI280. When the local studio400for 4K is operated, videos captured by the cameras410,420,430,440, and450for 4K are displayed on the monitor310of the control base300set to 4K. When the remote studio500for HD is operated, the HD surveillance camera510and the HD video switcher520are controlled such that an HD video is sent to the control base300and displayed on the monitor310. 
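The 4K/HD setting change for the monitor310, sent as parameters from the FMS server100, might look like the following sketch. The field names and values are assumptions for illustration.

```python
# Hedged sketch: a parameter set the FMS server might send to monitor310.
# Field names ("input_selection", etc.) are illustrative assumptions.

def build_monitor_settings(resolution_mode):
    """Return a parameter set for monitor310 (4K for FIG. 2, HD for FIG. 3)."""
    if resolution_mode not in ("4K", "HD"):
        raise ValueError("monitor310 supports only 4K or HD modes")
    return {
        "device": "monitor310",
        "input_selection": "SDI-1",         # illustrative input choice
        "resolution_mode": resolution_mode,  # the 4K/HD change
        "image_quality_preset": 1,           # recall of preset image quality
    }

fig2_settings = build_monitor_settings("4K")  # operating the 4K local studio
fig3_settings = build_monitor_settings("HD")  # operating the HD remote studio
assert fig2_settings["resolution_mode"] == "4K"
assert fig3_settings["resolution_mode"] == "HD"
```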
As described above, in each of the control base300, the local studio400, and the remote studio500, it is possible to freely perform setting of each device in a range in which the settings of each device can be changed. A plurality of control bases300may be provided, but the number of control bases300can be made smaller than the number of studios such as the local studio400and the remote studio500. Therefore, it is possible to control a plurality of studios with a number of control bases300smaller than the number of studios. FIG.4is a schematic diagram illustrating setting of parameters of each device according to a TV program (program). A setting value is determined for attributes (combination of one or a plurality of elements such as “studio”, “content type (4K/HD, HDR/SDR, sports/variety, and the like)”, and “relay time period”) of the program in advance. The parameters may be set for each specific program (TV program). InFIG.4, for device X, values a and b of two parameters (setting parameter1and setting parameter2) are determined in the case of the program A. Further, for device Y, values c, d, and e of three parameters (setting parameter3, setting parameter4, and setting parameter5) are determined in the case of program A. As illustrated inFIG.4, modes may be determined instead of the parameter values. For example, when an example of device X is the panel320of the control base300, the panel320can change the settings regarding switching between the functions assigned to the buttons and changing the control target as described above. A parameter regarding switching between the functions assigned to the buttons corresponds to setting parameter1inFIG.4, and a parameter regarding changing the control target corresponds to setting parameter2inFIG.4. 
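The FIG. 4 table, which fixes parameter values per program for each device, can be represented as a nested lookup. The values a to e are kept symbolic, as in FIG. 4; the dictionary shape is an assumption.

```python
# Sketch of the FIG. 4 table: setting values determined in advance for each
# device according to the program. Values "a" to "e" are symbolic as in FIG. 4.

PROGRAM_SETTINGS = {
    "program_A": {
        "device_X": {"setting_parameter1": "a", "setting_parameter2": "b"},
        "device_Y": {"setting_parameter3": "c", "setting_parameter4": "d",
                     "setting_parameter5": "e"},
    },
}

def settings_for(program, device):
    """Look up the predetermined parameter values for a device and a program."""
    return PROGRAM_SETTINGS[program][device]

assert settings_for("program_A", "device_X")["setting_parameter1"] == "a"
assert len(settings_for("program_A", "device_Y")) == 3
```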
Therefore, "switching between the functions assigned to the buttons" is determined according to a value a of setting parameter1, and "changing the control target" is determined according to a value b of setting parameter2. When the setting parameters of each device are determined according to the attributes of the program in advance as described above, the settings of the devices can be optimally performed according to the program. Information on the setting parameters illustrated inFIG.4is held in the setting information holding unit115of the FMS server100. Therefore, when the user designates program A from the GUI280, the setting parameters of each device are read from the setting information holding unit115, and setting of each device is performed by the setting execution unit116. The user can designate a time and a TV program (program) in advance and make a reservation for devices that are used for production of the TV program. At the time designated by the user, the FMS server100connects to the device reserved in advance and changes the settings of each of the devices to the settings of the TV program designated by the user. In this case, the information on the setting parameters illustrated inFIG.4is held in the setting information holding unit115of the FMS server100. Therefore, the setting parameters of each device are read from the setting information holding unit115, and setting of each device is performed by the setting execution unit116. When automatic scheduling is performed, the user only designates the time and the TV program. The setting execution unit116specifies necessary devices and assignable devices from the TV program designated by the user and automatically reserves the devices. The setting execution unit116changes the settings of the specified device to those according to the program at the time designated by the user. The FMS server100may receive a data set of setting values from each device so that the data set is written back.
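The reservation flow described above (designate a time and a program in advance, then apply the stored parameters when the time arrives) can be sketched as follows. The class names mirror the holding unit 115 and execution unit 116, but the methods and data shapes are assumptions.

```python
# Illustrative sketch of reservation-based setting execution.
# Class and method names are assumptions modeled on units 115 and 116.
import datetime

class SettingInformationHoldingUnit:
    """Stands in for holding unit 115: stores parameters per program."""
    def __init__(self, table):
        self._table = table

    def read(self, program):
        return self._table[program]

class SettingExecutionUnit:
    """Stands in for execution unit 116: applies settings at the set time."""
    def __init__(self, holding_unit):
        self._holding = holding_unit
        self.applied = {}

    def execute(self, program, now, reserved_at):
        if now < reserved_at:
            return False                   # before the designated time
        for device, params in self._holding.read(program).items():
            self.applied[device] = params  # would be sent to the device
        return True

holding = SettingInformationHoldingUnit(
    {"program_A": {"device_X": {"setting_parameter1": "a"}}})
executor = SettingExecutionUnit(holding)
start = datetime.datetime(2024, 1, 1, 20, 0)
# Too early: nothing is applied yet.
assert not executor.execute("program_A",
                            start - datetime.timedelta(minutes=5), start)
# At the designated time the stored parameters are read and applied.
assert executor.execute("program_A", start, start)
assert executor.applied["device_X"]["setting_parameter1"] == "a"
```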
This corresponds to a case in which, when the device itself has a mechanism for backing up a plurality of setting values in their entirety (corresponding to the setting information holding unit115inFIG.1), data is acquired using binary data of the device itself or a text structure such as XML or JSON and written back so that settings are recalled. Further, the device itself may hold the settings as preset settings, and the FMS server100may call the preset settings. This corresponds to a case in which the settings are stored as preset settings inside the device, although the settings cannot be taken out as data. For example, the FMS server100can designate a preset number in order to recall the settings. Although the preferred embodiments of the present disclosure have been described in detail with reference to the accompanying drawings, the technical scope of the present disclosure is not limited to such examples. It is obvious that anyone with ordinary knowledge in the technical field of the present disclosure may conceive various modification examples or change examples within the scope of the technical ideas set forth in the claims and, of course, it is understood that these belong to the technical scope of the present disclosure. Further, effects described in the present specification are merely descriptive or illustrative and are not limiting. That is, the technology according to the present disclosure may exhibit other effects apparent to those skilled in the art from the description in the present specification, in addition to or in place of the above effects.
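The JSON-based write-back mechanism described above can be sketched as a simple round trip: the FMS server receives a device's full setting data set, stores it as text, and later writes it back so the settings are recalled. The settings schema shown is an illustrative assumption, not taken from the disclosure.

```python
# Sketch of the text-structure (JSON) backup and write-back of device settings.
# The settings schema below is an assumption for illustration.
import json

def back_up(device_settings):
    """Serialize a device's settings exactly as the device reported them."""
    return json.dumps(device_settings, sort_keys=True)

def write_back(serialized):
    """Restore the settings from the stored text structure."""
    return json.loads(serialized)

reported = {"device": "video_switcher520",
            "operating_frequency": "59.94Hz",
            "macro_functions": ["fade", "wipe"]}
stored = back_up(reported)
assert write_back(stored) == reported  # the round trip recalls the settings
```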
The following configurations also belong to a technical scope of the present disclosure.
(1) A network system including a plurality of devices and a network management apparatus connected to the plurality of devices by an IP transmission line, wherein the network management apparatus includes a setting execution unit configured to execute setting of a path to which at least some of the plurality of devices are connected, and setting of parameters regarding control of the at least some connected devices.
(2) The network system according to (1), wherein the network management apparatus includes a setting information registration unit configured to register setting information on settings of the plurality of devices, and the setting execution unit executes the setting on the basis of the setting information.
(3) The network system according to (2), wherein the setting information registration unit registers the setting information on the basis of information input by a user.
(4) The network system according to (2), wherein the network management apparatus includes a setting information holding unit configured to hold the setting information.
(5) The network system according to any one of (1) to (4), wherein the network management apparatus includes a schedule management unit configured to manage a schedule for execution of the setting in the setting execution unit.
(6) The network system according to any one of (1) to (5), wherein the network management apparatus includes a device recognition unit configured to recognize that the plurality of devices are connected, and the setting execution unit executes the setting for the plurality of devices of which the connection is recognized.
(7) The network system according to any one of (1) to (6), wherein the plurality of devices include a network switch configured to change the path, and the setting execution unit changes the setting of the network switch according to the setting of the parameters.
(8) The network system according to any one of (1) to (7), wherein the plurality of devices include video devices, and the setting execution unit performs changing a 4K and HD setting, changing an image quality setting, or changing an operating frequency through setting of the parameters.
(9) The network system according to any one of (1) to (8), wherein the plurality of devices are included in a plurality of studios installed at different positions.
(10) The network system according to any one of (1) to (8), wherein the plurality of devices are included in a relay vehicle for capturing a video.
(11) A network management apparatus connected to a plurality of devices by an IP transmission line, the network management apparatus including a setting execution unit configured to execute setting of a path to which at least some of the plurality of devices are connected and setting of parameters regarding control of the at least some connected devices.
(12) A network management method in a network management apparatus connected to a plurality of devices by an IP transmission line, the method including executing setting of a path to which at least some of the plurality of devices are connected and setting of parameters regarding control of the at least some connected devices.

REFERENCE SIGNS LIST

100 FMS server
112 Device recognition unit
114 Setting information registration unit
115 Setting information holding unit
116 Setting execution unit
118 Schedule management unit
11943104
While embodiments are described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that embodiments are not limited to the embodiments or drawings described. It should be understood that the drawings and detailed description thereto are not intended to limit embodiments to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words "include," "including," and "includes" mean including, but not limited to. When used in the claims, the term "or" is used as an inclusive or and not as an exclusive or. For example, the phrase "at least one of x, y, or z" means any one of x, y, and z, as well as any combination thereof.

DETAILED DESCRIPTION

Various embodiments of methods and apparatus for intelligent configuration discovery techniques implemented at a network-accessible discovery service, visualization techniques for providing automatically updated views of configuration information, and a migration marketplace service to assist customers of the discovery service as well as migration facilitators in making decisions regarding migration of applications are described.
At a high level, the configuration discovery service may enable (among other features) the automated detection of configuration items (such as physical or virtualized compute servers, storage devices, databases, software stack components and the like that make up an application) and distributed application patterns based on raw data collected by a variety of data sources, the assignment of unique identifiers to configuration items, the tracking of interactions (e.g., transactions, network traffic flows etc.) and dependencies among the items, changes in application configuration over time, as well as performance monitoring at desired levels of granularity for complex application execution environments. The configuration discovery service may also be referred to as an application discovery service or as a resource discovery service in some environments, as the configuration items are the components that form an application. The configuration discovery service may implement a variety of programmatic interfaces (e.g., web service application program interfaces, command line interfaces, etc.) which can be used by service clients to obtain responses to configuration-related queries, and can also be used as building blocks by other services including the migration marketplace service and the visualization service to provide higher level functionality. In some embodiments, a visualization service affiliated with or part of the configuration discovery service may serve as one of the primary modes of interaction for clients of the configuration discovery service—e.g., a customer may be able to view customized representations of their application execution environments adapted to the particular client-side display environment being used, issue configuration-related queries via the visualization interface, and/or initiate partial or full application migrations from one set of resources to another. 
In various embodiments, the migration marketplace may act as an intermediary service that enables clients to identify suitable migration facilitators, and migration facilitators to identify candidate clients—e.g., for moving applications from client premises to cloud-based computing environments, or from one cloud-based environment to another. In at least some embodiments, some or all of the services may be implemented at a provider network. Networks set up by an entity such as a company or a public sector organization to provide one or more network-accessible services (such as various types of cloud-based computing or storage services) accessible via the Internet and/or other networks to a distributed set of clients may be termed provider networks herein. A provider network may sometimes be referred to as a "public cloud" environment. The resources of a provider network may in some cases be distributed across multiple data centers, which in turn may be distributed among numerous cities, states and countries. It is noted that while the configuration discovery service, the visualization service and/or the migration marketplace service may be implemented within a particular provider network, some or all of these services may be authorized and granted the appropriate permissions to access information from other provider networks (e.g., from a provider network run by a different business organization).
For example, a configuration discovery service running at a provider network PN1 run by operator O1 may be able to gather configuration data collected from a provider network PN2 run by operator O2 (as well as from other facilities such as client-owned data centers and PN1's own data centers), a visualization service running at PN1 may enable clients to view distributed application architectures which comprise components running at PN2, and/or a migration marketplace service running at PN1 may be able to provide information about migration facilitators to clients for migrating the components running at PN2 to PN1. In some embodiments, the visualization service and/or the migration marketplace service may be implemented as subcomponents of the configuration discovery service. A configuration discovery service may also be referred to herein simply as a discovery service. A wide variety of data sources may be employed at the configuration discovery service to build up a repository of configuration records. For example, in some embodiments, the data sources may include existing configuration management databases at client data centers (from which configuration data may sometimes be imported in bulk via programmatic interfaces), agents or configuration data collectors installed on behalf of the configuration discovery service at various resources, third-party or industry-standard configuration management tools, and so on. Each data source may provide configuration information, e.g., including some number of attribute-value pairs for some set of configuration items, to the configuration discovery service at one or more points in time. Some data sources may provide raw configuration data at regular intervals, while others may be event-driven in at least some embodiments. 
The configuration item attributes for which values are obtained at the service (e.g., via agents installed on behalf of the service) in various embodiments may include, among others, user information (such as user names and home directories), group information (such as group names and group membership), lists of installed software packages/programs, and lists of kernel modules. Information about a number of different types of configuration-related events, such as process creations/terminations (with associated process identifiers), Domain Name Service (DNS) queries and responses, packet sends and receives at various layers of networking stacks and so on may also be collected in at least some embodiments. Values of various attributes of the physical and/or virtual network interfaces (including for example the type of network interconnect such as Ethernet which is being used, the maximum bandwidth supported, associated media access control or MAC addresses etc.) may be collected from the devices of the targeted execution environments. The particular network ports, such as TCP (Transmission Control Protocol) or UDP (User Datagram Protocol) ports being used at various resources may be identified, and TCP version 4 or version 6 connection attributes (such as the identifiers of the processes at either end of the connections, the connection establishment times, the durations for which the connections remain open, etc.) may be collected. Operating system-related attributes, including for example the specific versions of the operating system in use at various hosts and virtual machines, may be collected in some embodiments. System performance and process performance metrics may be collected at various intervals in different embodiments.
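As an illustration of the attribute-value pairs described above, a minimal agent-style collector might gather a few host attributes using standard operating-system queries. The key names are assumptions; a real agent would collect far more, on a schedule or per event.

```python
# Minimal sketch of an agent-style collector producing attribute-value pairs
# like those listed above. Key names are illustrative assumptions.
import os
import platform

def collect_host_attributes():
    """Gather a small attribute-value set for the local host."""
    return {
        "os.name": platform.system(),      # operating system in use
        "os.version": platform.release(),  # specific OS version
        "process.pid": os.getpid(),        # an associated process identifier
        "cpu.count": os.cpu_count(),       # coarse hardware attribute
    }

attrs = collect_host_attributes()
assert "os.name" in attrs and attrs["process.pid"] > 0
```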
In some embodiments, a plurality of agents of the discovery service may be installed at a given host or device to collect respective subsets of the configuration attribute values for one or more configuration items; in other embodiments, a single agent or tool may be able to extract attribute values from several different sources. The configuration discovery service may act as a combiner and curator of potentially out-of-date, conflicting and/or ambiguous raw configuration information collected from various data sources at respective levels of granularity and according to respective schedules. From disparate data sources, in some embodiments the configuration discovery service may be responsible for generating and storing coalesced and curated configuration records; such coalesced records may serve as the authoritative sources of configuration data for the visualization and migration marketplace services (or other services relying on the discovery service). In at least some embodiments, the configuration discovery service may generate and assign unique service-side identifiers to respective configuration items based at least in part on an ontology defined by the service. For example, a given hardware server may be identified by one data source based on one of the server's IP addresses (which may be changed over time), by another data source based on a server name or MAC (media access control) address, by a third data source based on the role (e.g., “web server” or “database server”) being played by the server in a distributed application, and so on. The data sources may each include their own respective identifiers/names for the server in the raw configuration data provided to the configuration discovery service. Such identifiers may be referred to herein as data-source-side identifiers. 
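As a rough sketch of the identifier scheme just introduced, a service-side identifier could be derived deterministically from a chosen subset of attribute values, so that differing data-source-side identifiers (an IP address from one source, a MAC address or role from another) resolve to a single item. The attribute subset, the "ci-" prefix, and the hash-based naming below are assumptions for illustration.

```python
# Hedged sketch of service-side identifier assignment from raw attributes.
# The chosen key attributes and the "ci-" prefix are assumptions.
import hashlib

def service_side_id(raw_attributes):
    """Build a stable service-side identifier from selected raw attributes."""
    key_attrs = ("mac_address", "hostname")  # assumed naming-scheme inputs
    key = "|".join(str(raw_attributes.get(a, "")) for a in key_attrs)
    return "ci-" + hashlib.sha256(key.encode()).hexdigest()[:12]

# Two data sources name the same server differently (data-source-side
# identifiers), yet the derived service-side identifier is the same.
from_source1 = {"ip": "10.0.0.5", "mac_address": "aa:bb", "hostname": "web1"}
from_source2 = {"ip": "10.0.9.9", "mac_address": "aa:bb", "hostname": "web1",
                "role": "web server"}
assert service_side_id(from_source1) == service_side_id(from_source2)
```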
The configuration discovery service may examine the raw configuration data received from one or more of the different data sources, and generate a unique service-side identifier for the server based on the defined ontology and a naming scheme (which may take a subset of the raw data's attribute values into account). The unique service-side identifier may differ from at least some of the identifiers/names used by the data sources. When a new set of raw configuration data is received or analyzed at the service, the service may be able to determine the uniquely-identified configuration item to which at least a portion of the raw data applies, despite the absence of the unique identifier in the raw data in at least some embodiments. In some embodiments, the service may be responsible for maintaining mappings between the data source-provided identifiers and the unique service-side identifiers, and for resolving ambiguities associated with such mappings (e.g., an ambiguity which may arise if the data source changes its identifier for a given configuration item). Ambiguities may be resolved using a variety of mechanisms in different embodiments—e.g., based on correlation analysis with respect to raw configuration data received from other data sources over some time periods, based on client feedback, and so on. In one example scenario, for example, two distinct unique service-side identifiers may (erroneously) be assigned to the same configuration item initially, e.g., on the basis of respective raw configuration data sets DS1 and DS2 received from two different data sources, and as a result two different coalesced configuration records R1 and R2 with respective distinct service-side identifiers may be stored in the service repository. Later, e.g., after processing one or more additional raw data sets and/or after interactions via a programmatic interface with a client, the error may be detected and corrected. 
That is, the service may determine that attributes for which values are stored in R2 are actually attributes of the underlying configuration item corresponding to R1. Such a determination may be made, for example, based on an analysis of resource consumption information. If the two configuration items associated with R1 and R2 are initially erroneously assumed to be different hardware servers, but collected metrics regarding CPU utilization levels or network packet outflow for the two items are found to be very similar or identical over a period of time, the records R1 and R2 may be identified as referring to the same server. Some of the information that was stored in R2 may be used to update R1 in such a scenario, and R2 may be deleted (or conversely, information in R1 may be used to modify R2 and R1 then may be deleted). In at least one embodiment, an error-correcting API may be implemented by the discovery service, enabling clients (and/or other authorized entities such as professional service analysts, consultants or partners of the provider network operator) to inform the service regarding such mistakes. The corrections provided via such an API may be used to improve service operations more broadly in various embodiments—e.g., corrections made by one authorized entity with respect to a given set of configuration data of a given service customer may be generalized and used to detect and correct potential errors made with respect to other sets of configuration data of the same customer or other customers. In at least some embodiments, the configuration discovery service may associate respective trust scores with different data sources, and such trust scores may be used when deciding which among a set of potentially conflicting or out-of-date configuration data elements is to be accepted. 
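The R1/R2 merge decision described above, in which near-identical resource-consumption metrics reveal that two coalesced records refer to one server, can be sketched as follows. The similarity tolerance and record shapes are illustrative assumptions.

```python
# Sketch of duplicate detection via correlated metrics and record merging.
# The tolerance value and record fields are assumptions for illustration.

def likely_same_item(metrics_r1, metrics_r2, tolerance=1.0):
    """True when the two utilization series track each other closely."""
    if len(metrics_r1) != len(metrics_r2):
        return False
    return all(abs(a - b) <= tolerance
               for a, b in zip(metrics_r1, metrics_r2))

def merge_records(r1, r2):
    """Fold R2's attributes into R1 (R2 would then be deleted)."""
    merged = dict(r1)
    for attr, value in r2.items():
        merged.setdefault(attr, value)  # keep R1's value on conflict
    return merged

cpu_r1 = [10.0, 42.5, 37.0, 55.0]   # CPU utilization samples for R1
cpu_r2 = [10.2, 42.0, 37.4, 55.1]   # nearly identical samples for R2
assert likely_same_item(cpu_r1, cpu_r2)

r1 = {"id": "ci-1", "hostname": "web1"}
r2 = {"id": "ci-2", "os": "Linux"}
assert merge_records(r1, r2) == {"id": "ci-1", "hostname": "web1",
                                 "os": "Linux"}
```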
The trust scores may themselves change over time—for example, if a dump of a client's configuration management database representing a client data center is obtained at the service, the initial trust score for the client's database may be set to a high value, but the score may be reduced as time passes and configuration changes occur at the client data center. Trust scores may be used when generating the coalesced configuration records from the raw configuration data in at least some embodiments—e.g., attribute values obtained from a high-trust data source may be included with a greater probability in a coalesced record than attribute values obtained from a low-trust data source. In a scenario in which an attribute value V1 from a data source DS1 with a current trust score TS1 contradicts or conflicts with an attribute value V2 from a different data source DS2 with a higher current trust score TS2, the attribute value (V2 in this case) from the source with the higher trust score may be included in the coalesced configuration record, and the attribute value from the source with the lower trust score may be excluded. In at least some embodiments, machine learning techniques may be employed to generate and update trust scores over time. Additional details regarding various aspects of the operations of the configuration discovery service, including automated pattern-based grouping and tagging of application components, algorithms for detecting the sources of network packets received via obfuscating intermediaries, associating relevance scores to configuration items, the data models and pre-loading techniques used to increase responsiveness to queries, and the like, are provided below. The visualization service and the marketplace migration service are discussed after the details of the discovery service are discussed. 
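The trust-score-based conflict resolution just described (value V1 from source DS1 versus value V2 from a higher-trust source DS2) can be sketched as a simple selection rule. The score values and report shape are illustrative assumptions.

```python
# Sketch of trust-score-based coalescing: when data sources disagree on an
# attribute, keep the value reported by the source with the higher current
# trust score. Scores and attribute names are illustrative.

def coalesce(attribute_reports, trust_scores):
    """attribute_reports: {attr: [(source, value), ...]} -> {attr: value}."""
    coalesced = {}
    for attr, reports in attribute_reports.items():
        # Pick the report whose source currently has the highest trust score.
        source, value = max(reports, key=lambda sv: trust_scores[sv[0]])
        coalesced[attr] = value
    return coalesced

reports = {"os_version": [("DS1", "kernel 5.4"), ("DS2", "kernel 6.1")]}
scores = {"DS1": 0.4, "DS2": 0.9}  # DS2 is currently trusted more
assert coalesce(reports, scores)["os_version"] == "kernel 6.1"
```

Lowering a source's score over time (as in the stale configuration-database example above) would simply change which report wins this selection.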
Example System Environment

FIG.1illustrates an example system environment in which an intelligent configuration discovery service for multi-data-center applications may be implemented, according to at least some embodiments. As shown, system100may comprise a plurality of provider networks, such as provider networks102A and102B, as well as a customer-premises network172in the depicted embodiment. Within each of the provider networks102, one or more network-accessible services may be implemented by respective provider network operators. For example, provider network102A comprises the configuration discovery service104, a virtualized computing service132, and one or more migration-related services130which may be utilized by potential customers to migrate their applications from execution environments outside provider network102A to provider network102A. Additional details about migration-related services are provided below. Provider network102B may include its own virtualized computing service192, at which a different approach towards virtualizing compute servers may be utilized than is used in virtual computing service132—e.g., different types of hypervisors or virtualization management software stacks may be used, different sets of programmatic interfaces may be supported for acquiring and using virtual machines, and so on. A number of distributed applications may be run on behalf of various customers using the resources of provider networks102A and102B and/or customer-premises network172in the depicted embodiment. The set of resources being used for a given application or a related set of applications may be referred to herein as an application execution environment (AEE)144. A given AEE may comprise a wide variety of resources—e.g., virtual and/or physical compute servers, storage devices, networking devices, multi-layer software stacks and the like.
At least some of the resources may comprise configuration items (CIs)136about which respective sets of configuration information (e.g., a collection of attribute values) is collected and stored within the configuration discovery service104. Generally speaking, from the perspective of the configuration discovery service and its clients, a configuration item136may comprise any physical, virtual or logical entity whose configuration settings and/or state information may be useful for managing one or more applications, and can be obtained via programmatic interfaces by the configuration discovery service. Example configuration items may comprise, among others, a non-virtualized hardware server, a virtual machine, a software process or collection of related processes, a storage device such as a rotating magnetic disk or a solid-state drive (SSD), a network device such as a router, and so on. In some embodiments, the configuration discovery service may obtain respective data sets of configuration data about a given configuration item136from one or more configuration data sources (CDSrcs)134iteratively—e.g., at regular intervals or in response to the occurrence of specified events. In the latter scenario, the configuration data stored at the service104may comprise a plurality of timestamped records for the configuration item. A number of different types of configuration data collectors or sources may be employed in various embodiments, such as for example software and/or hardware agents that are installed on behalf of the configuration discovery service104, industry-standard configuration management tools, custom configuration management tools, customer configuration management databases, and the like. Some AEEs, such as AEE144A or AEE144C, may comprise resources within the boundaries of a given network. AEE144A comprises configuration items136Q,136R and136S of customer-premises network172, while AEE144C comprises configuration items136I and136J of provider network102B. 
Other AEEs may comprise configuration items distributed among multiple networks and/or data centers. For example, AEE144B comprises configuration items136A-136D of provider network102A, as well as configuration item136H of provider network102B. It is noted that over the course of time, at least in some embodiments the mappings between an AEE144and the networks at which configuration items of the AEE are located may change—e.g., one or more configuration items may be migrated to a different provider network, from a customer-premises network to a provider network or from a provider network to a customer-premises network. In the depicted embodiment, each network may comprise a plurality of configuration data sources134, which may communicate with the configuration discovery service104. For example, provider network102A comprises configuration data sources134A-134C, collectively responsible for obtaining and transmitting configuration data sets for configuration items136A-136F to the service104. Similarly, provider network102B comprises data sources134E-134G responsible for reporting on configuration items136H-136L, while customer-premises network172comprises data sources134H and134I responsible for transmitting configuration data sets pertaining to configuration items136N and136P-136S to service104. In some cases, a given configuration data source134may be responsible for collecting configuration data pertaining to a plurality of configuration items136, while in other cases a configuration data source134may report on a single configuration item136. At least for some configuration items136, configuration data sets may be collected by a plurality of configuration data sources134—e.g., at respective levels of granularity and/or at respective layers of the software/hardware stack. 
In some embodiments, a given configuration data source134may be a subcomponent of a configuration item136—e.g., as a process or thread of execution running at a server which represents the configuration item. For example, data source134G is shown as part of configuration item136L. Some configuration data sources may comprise subcomponents of existing configuration management tools—e.g., in the depicted embodiment, a customer's configuration management database167comprises a data source134G reporting to the service104. The configuration discovery service104may implement one or more sets of programmatic interfaces150in the depicted embodiment, any of which may comprise for example application programming interfaces (APIs), web-based consoles, command-line tools and/or graphical user interfaces. The client-facing programmatic interfaces150A may, for example, be used by customers to identify and/or grant configuration data gathering permissions associated with their application execution environments144, to view configuration information collected by service104(e.g., using a visualization service as discussed below in further detail), to obtain notifications regarding events or conditions which may require client feedback, and so on. A set of data collection and/or service-side programmatic interfaces150B may be used for interactions between configuration data sources134and the service104, as well as for building additional features by migration-related services130and/or other services using the collected configuration data of service104in the depicted embodiment. 
The configuration discovery service104may comprise several subcomponents in the depicted embodiment, such as a configuration records repository108, components110responsible for coalescing/disambiguating raw configuration data, and/or one or more scoring components112responsible for assigning/modifying respective trust scores to data sources134and/or assigning/modifying relevance scores to configuration records as discussed below. In at least some embodiments, the service may comprise a number of data stores with different performance capabilities and/or data models as discussed below—e.g., configuration records may be pre-loaded into low-latency caches from a central repository108to increase responsiveness to expected types of queries. The configuration data sources134may provide raw configuration data sets in a variety of formats and at different intervals to the configuration discovery service104in the depicted embodiment. In some cases, the raw data received at service104with respect to one or more configuration items136may be stale or out-of-date or inaccurate. Furthermore, the manner in which the configuration items are identified in the raw data sets provided by the different data sources134may in some cases be inconsistent—e.g., if a given hardware server configuration item has a plurality of IP addresses, the server may be referred to using different IP addresses by different configuration data sources, or by name or location (such as “server 5 of rack R1 in room 3 of data center DC1”) by other data sources. The configuration discovery service104may be responsible for consolidating, disambiguating and curating the raw configuration data sets using a variety of techniques in the depicted embodiment. In one such technique, when a set of raw configuration data is received, the service104may attempt to discern whether the data refers to a known configuration item136(an item for which configuration data has been received and recorded previously at the service). 
If the newly-received data does not appear to correspond to a known configuration item, a naming scheme or algorithm may be used to generate a unique service-side identifier for the configuration item to which the raw data corresponds, based at least in part on an ontology defined at the service104and/or on one or more attribute values of the configuration item which are indicated in the raw data. The unique service-side identifier may, at least in some implementations, differ from the identifier used by the data source in the raw data set. In effect, in such implementations, the service104may be responsible for maintaining mappings between data source-reported identifiers and the unique service-side identifiers. When subsequent raw data sets are received at the service, in some embodiments the coalescing/disambiguating components110may utilize such mappings and/or use correlations of the raw configuration data with previously-seen data to identify the configuration item to which the raw data sets apply. In some embodiments, the service-side identifier assigned to a given configuration item136may be unique within the entire collection of configuration records stored at the service104, while in other embodiments, the identifier may be unique within a particular configuration domain or namespace (e.g., a domain or namespace associated with a given customer). The manner in which available configuration data about a configuration item is analyzed and used to generate a unique service-side identifier may differ in different embodiments. In one embodiment, the raw configuration data, which may be provided in XML (Extensible Markup Language), JSON (JavaScript Object Notation), plain text or a binary format such as CBOR (Concise Binary Object Representation) by different data sources, may first be parsed and normalized into a common format. 
A search for attribute values provided for keywords (such as “IPAddr” for Internet Protocol Address or “MACAddr” for media access control addresses) that are associated with uniqueness within some namespace may be performed in the raw or normalized data, and the results of the search may be combined/concatenated with object type names (e.g., “database server” or “virtualization host”) to generate the unique service-side identifier (e.g., “DBServer.<DBVendorName>.<IP address>”). In one embodiment, a machine learning technique may be used to improve the process of generating unique service-side names for configuration items. For example, a machine learning model for generating the identifiers may be trained using a large anonymized configuration data set collected from various components of a virtualized computing service of a provider network (e.g., the same provider network at which the configuration discovery service runs). Some of the naming decisions made by early versions of the model may be erroneous—e.g., the same underlying configuration item may be given two different unique identifiers, or two configuration items may be given the same identifier. Over time, as the model training progresses with larger input data sets, the error rate may be reduced. In at least some embodiments, respective trust scores may be assigned (e.g., by scoring components112) to respective configuration data sources134, and used to decide, in effect, which of two potentially conflicting sources is likely to be more accurate at a given point in time. For example, some of the data sources may comprise agents of discovery service104, which may have been designed, developed and tested by personnel of the operator of provider network102B before being installed, while the origins and/or testing levels associated with other data sources may be less well-known. In the latter scenario, a higher trust score may sometimes be assigned to the more familiar or better-understood data sources. 
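The keyword-search-and-concatenation step can be sketched as follows; the keyword list, preference order, and naming format are simplifying assumptions, not the service's actual scheme.

```python
# Attribute keywords whose values are associated with uniqueness within
# some namespace, in order of preference (illustrative, not exhaustive).
UNIQUENESS_KEYWORDS = ("MACAddr", "IPAddr", "SerialNumber")

def service_side_identifier(object_type, normalized_attrs):
    """Concatenate the object type name with any uniqueness-associated
    attribute values found in the normalized raw configuration data."""
    parts = [object_type]
    for key in UNIQUENESS_KEYWORDS:
        if key in normalized_attrs:
            parts.append(str(normalized_attrs[key]))
    if len(parts) == 1:
        raise ValueError("no uniqueness-associated attribute in raw data")
    return ".".join(parts)

sid = service_side_identifier("DBServer", {"IPAddr": "10.0.0.5", "vendor": "V1"})
# sid == "DBServer.10.0.0.5"
```

If several uniqueness-associated attributes are present (e.g., both a MAC address and an IP address), all of them are folded into the identifier, which reduces the chance of collisions between distinct items that happen to share one attribute value.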
In some embodiments, a trust score of a given data source may vary based on the attribute whose value is being considered, or the level of the software/hardware stack at which the attribute value was generated. For example, data sources DS1 and DS2 may each be providing respective metrics C1 and C2 regarding CPU usage of a given program or process. If DS1 collects its version of CPU utilization measurements C1 at a hypervisor layer, while DS2 collects its version C2 using a tool provided by the operating system, different trust scores may be assigned to the CPU usage attribute values from the two sources. In at least some embodiments in which multiple data sources may provide respective values for the same attribute, each data source (or {data source, attribute} pair) may be assigned a respective weight indicative of a current trust level, and the weights may be used to determine the final value of the attributes to be used and saved by the discovery service. In one embodiment, if and when two different raw data sets corresponding to the same configuration item136are received from respective data sources134, and at least one attribute value of one raw data set conflicts with or contradicts an attribute value indicated in the other, a coalesced configuration record which excludes the conflicting attribute value of the data source with the lower trust score may be generated and stored in repository108. The trust scores for different data sources134may be time-weighted in some embodiments—e.g., if raw configuration data was collected at time T1 by one data source CDSrc1 and apparently-conflicting raw data was collected at time T2 (where T2 is later than T1) by another data source CDSrc2, the more recently-collected raw data may be considered more trustworthy. 
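One way to combine trust scores with collection timestamps is sketched below. The exponential age decay and its time constant are assumptions made for illustration; the description above does not prescribe a particular time-weighting formula.

```python
import math

def resolve_attribute(candidates, decay_time_s=3600.0):
    """Pick the winning value for a single attribute from conflicting reports.

    candidates: list of (value, trust_score, collected_at) tuples.
    The effective weight decays with age (decay_time_s is the e-folding
    time), so a fresh report from a less trusted source can outrank a
    stale report from a highly trusted one."""
    newest = max(collected_at for _, _, collected_at in candidates)
    def weight(candidate):
        _, trust, collected_at = candidate
        age = newest - collected_at
        return trust * math.exp(-age / decay_time_s)  # time-weighted trust
    return max(candidates, key=weight)[0]

# CDSrc1 (trust 0.9) reported two hours before CDSrc2 (trust 0.6):
winner = resolve_attribute([
    ("cpu_util=75%", 0.9, 0.0),     # stale, high-trust source
    ("cpu_util=40%", 0.6, 7200.0),  # fresh, lower-trust source
])
# winner == "cpu_util=40%"
```

With equal timestamps the plain trust score decides, matching the simpler (non-time-weighted) scheme described first.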
The coalesced data records generated by coalescing/disambiguating components110may be used to provide responses to configuration queries received via programmatic interfaces150A and/or150B (e.g., either from customers or from other services of provider network102A) in various embodiments. In addition to curating or consolidating raw configuration data received from the data sources134, in at least some embodiments components of the discovery service104may perform a number of other functions, such as automatically identifying groups of configuration items which together correspond to a distributed application pattern, assigning roles within such groups to respective configuration items, implementing traffic source detection algorithms for network traffic whose sources may have been obfuscated by intermediary devices, proactively preparing configuration data to support high-performance querying, and so on. Additional details about these and other functions are provided below. As mentioned earlier, in at least some embodiments the configuration discovery service may define and utilize an ontology of configuration items.FIG.2illustrates example components of a discovery service ontology which may be used to organize configuration information, according to at least some embodiments. The ontology202may comprise a plurality of object types, and a list of one or more attributes corresponding to each object type in the depicted embodiment. Respective values for at least some of the attributes of a given attribute list for a given configuration item may be included in the raw configuration data sets transmitted to the configuration discovery service by various configuration data sources. The ontology and the raw attribute values may be used to generate unique service-side identifiers for configuration items in various embodiments. 
For example, a unique service-side identifier for a configuration item may be constructed by concatenating several attribute values (some of which may be obtained from different data sources) with a service-generated text identifier prefix in some embodiments. Object type204A, for example, corresponds to a physical host or server. The corresponding attribute list205A may include the CPU type, the count of CPUs or cores, the currently-assigned host name, the hypervisor (if any is installed), various elements of operating system information (OSdata), one or more IP addresses, and the like. The value of a given attribute of an attribute list such as205A may itself comprise several distinct elements of data—e.g., the “CPU type” attribute may include information about the instruction set architecture supported by the CPU, the CPU vendor, the CPU's clock frequency, model name and so on. Object type204B represents a process (i.e., a unit of execution at a server). Attribute list205B for the process may include, among others, the name of the process, the command line used to invoke the process, the path (e.g., directory path or folder path) at a host's operating system corresponding to the location of the executable used for the process and/or to the home directory of the process, the number of threads of the process, and so on. Object type204C represents a network connection (assumed to be established using the Transmission Control Protocol/Internet Protocol or TCP/IP suite in this example). Attribute list205C comprises the source and destination IP addresses (srcIP and destIP respectively) (e.g., with the source being identified as the endpoint which issued the connect( ) call to establish the connection), the source and destination process identifiers (srcProcess and destProcess respectively) and/or the destination port (destPort). Object type204D corresponds to a virtual machine generated using a virtualization framework obtained from a particular technology vendor V1. 
The attribute list205D for the virtual machine includes a vendor-defined virtual machine identifier (VMID), an identifier of the data center at which the virtual machine is running or has run, and the host at which the virtual machine is currently running, is scheduled to run, or has run. A number of other object types may be defined in the ontology202in various embodiments. For example, respective object types may be defined for storage devices, entities such as database instances, networking devices such as load balancers/routers, and the like in some embodiments. In one embodiment, respective object types may be defined for geographical or other groupings of resources—e.g., a data center may have its own object type, or a server rack may have its own object type. In some embodiments, the ontology may define hierarchical or containment relationships among various objects—for example, a number of processes may be running at a given host and may therefore be contained within the host, a master process of an application may spawn various other processes which may be designated as child processes of the master, and so on. Relationships among various entities of the ontology may be defined in an object-oriented manner in at least some implementations. Coalesced and Curated Configuration Records FIG.3illustrates an example of coalescing raw configuration information from a plurality of data sources with respective trust scores to produce a curated attribute-value list, according to at least some embodiments. In the depicted embodiment, a plurality of raw configuration data sets320, including data sets320A,320B, and320K pertaining to a given configuration item are transmitted to a discovery service by respective data sources310(e.g., data sources310A,310B and310K). Each raw configuration data set320comprises a respective attribute value list325. 
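The object types and attribute lists described for FIG. 2 can be represented as a small data structure. The attribute names below follow the figure; the containment rules and the structure itself are a sketch, not the service's actual representation.

```python
# A fragment of an ontology like ontology 202: object types mapped to
# their attribute lists.
ONTOLOGY = {
    "host":       ["CPUtype", "CPUcount", "hostname", "hypervisor", "OSdata", "IPaddrs"],
    "process":    ["name", "cmdline", "path", "threadCount"],
    "connection": ["srcIP", "destIP", "srcProcess", "destProcess", "destPort"],
    "vm-V1":      ["VMID", "dataCenter", "host"],
}

# Hierarchical/containment relationships among object types.
CONTAINS = {
    "host": ["process", "vm-V1"],   # processes and VMs run at (are contained in) a host
    "process": ["process"],         # a master process may spawn child processes
}

def known_attributes(object_type):
    """Return the attribute list defined for an object type."""
    return ONTOLOGY[object_type]

def may_contain(outer, inner):
    """True if the ontology permits 'inner' items to be contained in 'outer' items."""
    return inner in CONTAINS.get(outer, [])
```

Validating incoming attribute names against these lists is one building block of the coalescing step described next.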
For a given host, for example, the attributes and their corresponding values may include “Name:Host100”, “IP Address:a.b.c.d”, “operating system: <OSVersion>” and so on. Not all the attribute values may necessarily correspond to a single configuration item in at least some embodiments—e.g., one or more of the configuration data sources may be reporting on a plurality of configuration items. Different data sets320may represent different levels of granularity—for example, one data set may include application-level information such as the number of database transactions issued or received, while another may include lower-level details such as the number of network packets transmitted or received. Some of the raw configuration data sent by two different data sources may correspond to different times—e.g., data set320A may have been collected at a different time than data set320K. In some cases, two or more of the attribute values pertaining to a given configuration item may conflict with one another—for example, it may be the case that one data set indicates that a particular process with a process identifier PID1 at one host H1 was responsible for communicating with a different host, while another data set may indicate that a process with another process identifier PID2 was responsible for such communications. In some embodiments, at least some of the configuration data sources may generate respective identifiers for the configuration items for which they provide data to the discovery service, and include these identifiers in data sets320. Such identifiers may be referred to as data-source-side identifiers to distinguish them from the identifiers generated by the discovery service. 
Two data sources may sometimes refer to the same underlying configuration item using distinct data-source-side identifiers—e.g., one data source may refer to a host by name (e.g., “hostK.<domainname>”), another may refer to the same host by IP address, and another by function (e.g., “database server DBS1”). The coalescing/disambiguating components360of the configuration discovery service may examine and process all the raw configuration data sets320and update (or create) a respective coalesced configuration record350corresponding to one or more configuration items whose raw data is included in the data sets320in the depicted embodiment. An algorithm that may be used to coalesce two raw configuration data sets from two different sources may comprise at least some of the following steps in one embodiment. First, a decision as to whether each of the data sets comprises attribute values pertaining to the same type of configuration item (such as host, process, virtual machine etc., which are defined as ObjectTypes in the ontology202ofFIG.2) may be made. In order to do so, in some embodiments the attribute names may be compared to the attribute lists (e.g., AttrLists205ofFIG.2) defined for various configuration items in the discovery service's ontology. The attribute lists may indicate synonyms in some cases—e.g., the same attribute may be identified via name AttrName1 by one data source and AttrName2 by another data source. If it is determined that both data sets contain at least some attribute values pertaining to the same configuration item type, those <attribute:value> pairs may be examined for correlations, matches or duplications. 
For example, if both data sets indicate (a) that the CPU utilization at a host was approximately 75% during a particular time interval, and (b) that 2500 UDP packets were sent during the time interval from that host, this might be interpreted as an indication that the data sets are referring to the same host, even if different data-source side identifiers were used for the same host. If such a match is detected (with some minimum confidence level), a decision to create a single coalesced record for the host may be taken; otherwise, the two data sets may be deemed to refer to two different hosts and separate coalesced records may be generated. Within the single coalesced record, some subset of the <attribute:value> pairs taken from one or both the data sets may be incorporated. For example, redundant/duplicated attribute values may be discarded, some attribute values may not be included in the coalesced record because the information they contain is deducible from other attribute values that are included, or because a more accurate data source for the same data is known. Depending on the kind of data included in the data sets, in some cases one or more elements or attribute values of an existing coalesced configuration record may be updated (or new attributes added to an existing coalesced configuration record) instead of generating a new coalesced configuration record. The coalesced configuration record350may often provide a more complete characterization of the configuration item than would have been possible from any single raw configuration data set320. A coalesced configuration record350may include a unique service-side identifier352for the configuration item, which may differ from respective data-source-side identifiers indicated in the raw data sets320in the depicted embodiment, and may be generated based at least in part on the configuration discovery service's ontology and/or on elements of the raw configuration data sets. 
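The matching-and-merging steps above can be sketched as follows. The synonym map, the minimum-match threshold, and the last-writer-wins merge policy are simplifying assumptions chosen for illustration.

```python
def coalesce(set_a, set_b, synonyms=None, min_matches=2):
    """Decide whether two raw data sets describe the same configuration item
    and, if so, merge them into one coalesced attribute dict.

    synonyms maps a data-source attribute name to its canonical ontology
    name (e.g. {"ip": "IPAddr"})."""
    synonyms = synonyms or {}
    def canonical(d):
        return {synonyms.get(k, k): v for k, v in d.items()}
    a, b = canonical(set_a), canonical(set_b)
    # Count shared <attribute:value> pairs whose values agree.
    shared = set(a) & set(b)
    matches = sum(1 for k in shared if a[k] == b[k])
    if matches < min_matches:
        return None  # insufficient correlation: treat as different items
    merged = dict(a)
    merged.update(b)  # later data set wins on disagreeing shared keys (simplistic)
    return merged

rec = coalesce(
    {"ip": "10.0.0.5", "cpu_util": 75, "udp_packets": 2500},
    {"IPAddr": "10.0.0.5", "udp_packets": 2500, "hostname": "hostK"},
    synonyms={"ip": "IPAddr"},
)
```

Here the agreeing IP address and UDP packet count clear the threshold, so the two data sets are merged into one record that is more complete than either input; with no such correlation the function returns None and separate records would be kept.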
In at least some embodiments, a coalesced configuration record350may comprise a curated attribute value list354which may not necessarily include the union of all the attribute value lists325pertaining to the configuration item. Instead, for example, the coalescing/disambiguation components may discard some attribute values from one or more data sources because the values are stale (e.g., because the values have been superseded by newer values for the same underlying attributes obtained from other sources, or simply because the difference between the time that the values were collected and the time that the values are being processed exceeds a threshold). In some embodiments, respective trust scores315(e.g., scores315A-315K) of the different data sources may also or instead be used to determine whether a given attribute value is to be included in the coalesced configuration record. The trust scores may be especially useful when two different data sources provide raw data corresponding to the same attribute: in such a scenario, the attribute value provided by the source with the higher trust score may take precedence. In some embodiments in which each raw data set320has an associated timestamp indicating when the data was collected, a formula which takes both the timestamps and the trust scores into account (in effect, resulting in time-weighted trust scores) may be used to select which attributes should be included in the curated attribute value list354. In some embodiments, if and when the configuration item to which a given item or items within a raw data set320pertain is unclear, the coalescing/disambiguating components360of the configuration discovery service may utilize a pattern-matching approach to identify the configuration item. 
For example, consider a simple scenario in which raw data set320B and320K both report on approximate outbound network traffic over a given time interval for some configuration item, and that data set320B includes a host name of the configuration item but data set320K does not. In this trivial example scenario, the coalescing/disambiguating components360may attempt to find attribute values contained in data set320K which match attribute values in other data sets for similar time periods. If the outbound network traffic rates match between data sets320K and320B to some threshold level of precision or accuracy, the two data sets (in the absence of any contradicting evidence) may be assumed to refer to the same underlying configuration item. As mentioned earlier, a variety of programmatic interfaces may be used at the configuration discovery service in various embodiments.FIG.4illustrates example application programming interfaces which may be implemented at a configuration discovery service, according to at least some embodiments. Four examples of configuration data ingestion interfaces (used for providing raw configuration data sets to the service) are shown, and one example of a configuration data consumption interface (used for obtaining responses to queries directed at the service) is shown. In at least one embodiment, the configuration discovery service460may provide a bulk import/export application programming interface (API)415A which may, for example, be used to transfer large amounts of information from a client's configuration management database410to the service. In at least some embodiments, the service may provide (e.g., via download) a number of different software agents412which may be installed at various physical or virtual devices from which configuration data is to be obtained. Such agents may use agent APIs415B to communicate with the service. 
In various embodiments, at least some of the agents412may collect data with respect to particular events (e.g., once every X seconds, a CPU utilization collection event may be scheduled at a server), and as a result the amount of data transmitted at one time via the agent's API415B may be relatively small compared to the amount of data transferred over an export/import API415A. In some embodiments, the configuration discovery service may accept raw configuration data from a variety of configuration tools414, including for example tools that utilize Simple Network Management Protocol (SNMP), Windows Management Instrumentation (WMI), or WBEM (Web-Based Enterprise Management). Tool-specific APIs415C may be implemented for interactions between such tools and the configuration discovery service. Generic reporting APIs415D may also be implemented for custom data sources416(i.e., data sources which are not agents per se, are not associated with third-party configuration tools and are not affiliated with client configuration management databases) which may be developed and deployed in some embodiments. A number of different query APIs416may be implemented for entities which consume the coalesced configuration information of the discovery service. Such entities may include other services of a provider network, such as a visualization service and/or one or more migration-related services including a migration marketplace service or a migration planning service, as well as customers of the provider network at which the configuration discovery service is implemented. Some query APIs416may utilize variants of well-known query languages such as Structured Query Language (SQL). In one embodiment, a time series-oriented query language such as the language supported by OpenTSDB may be used for temporal configuration-related queries. 
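A temporal configuration query of the kind mentioned above might, in a minimal sketch, look like this; the record layout (timestamp paired with an attribute dict) is an assumption carried over from the timestamped-records discussion, not a defined API.

```python
def query_range(records, attribute, start, end):
    """Return (timestamp, value) pairs for one attribute within [start, end],
    mimicking a time series-oriented configuration query."""
    return [(t, attrs[attribute])
            for t, attrs in sorted(records, key=lambda r: r[0])
            if start <= t <= end and attribute in attrs]

records = [
    (100.0, {"cpu_util": 40}),
    (200.0, {"cpu_util": 75}),
    (300.0, {"mem_used": 512}),
]
points = query_range(records, "cpu_util", 150.0, 350.0)
# points == [(200.0, 75)]
```

Records lacking the requested attribute are simply skipped, so a single timeline can interleave data sets reported at different granularities by different data sources.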
Pattern-Based Grouping and Role Assignments FIG.5illustrates examples of application architecture patterns that may be used at a discovery service to automatically detect the roles played by configuration items, according to at least some embodiments. In various embodiments, the service may support queries to search on application, software, and/or hardware configuration patterns which are used to group configuration items. In the depicted embodiment, a configuration item group descriptor database590of the configuration discovery service may comprise a number of group descriptors510, such as510A or510B. Each group descriptor510may include a respective pattern name577(e.g.,577A or577B), such as “Three-tier web application” or “Phased split-and-combine application”, as well as a representation of the relationships between various entities which collectively implement an application or a related set of applications. Each entity may play a specific logical role within the application pattern, and the communication behavior expected to be exhibited by the entities assigned different roles may be indicated in the group descriptor510. For example, group descriptor510A defines four roles: a load balancer (LB) role511, a web server role512, an application server (Appserver) role513, and a database server (DBserver) role514. One or more load balancers such as511A-511C may interact via network packets with one or more web servers such as512A-512N in an instance of a group of configuration items corresponding to descriptor510A. Each of the web servers512may also interact with one or more application servers513(e.g.,513A-513K), and each application server in turn may also interact with one or more back-end database servers such as514A-514J. 
In group descriptor510B, the roles may include a TaskSplitter551responsible for subdividing a task into subtasks, Phase 1 workers552responsible for executing the subtasks, Phase 1 ResultCombiners553responsible for collecting the results of phase 1 tasks and partitioning the results for phase 2 analysis, Phase 2 workers554responsible for analyzing the partitioned results, and a FinalResultCombiner555which gathers the results of the phase 2 analysis. The specific numbers of the configuration items corresponding to at least some roles may differ from one group instance to another. For example, although a single TaskSplitter, Phase 1 ResultCombiner and FinalResultCombiner entity may be instantiated within a configuration item group corresponding to descriptor510B, the numbers of configuration items configured as Phase 1 workers or Phase 2 workers may vary from one implementation example of the descriptor to another. In some embodiments, clients of the discovery service may submit representations of descriptors510to the service via programmatic interfaces, and the service may identify corresponding examples of configuration items that exhibit the patterns indicated in the descriptors. A given descriptor510may comprise indications of various aspects of a distributed application, such as an expected interconnection topology of configuration items associated with the application, an expected item name list (e.g., process names or paths) associated with the application, and/or an expected pattern of communication (e.g., the exchanges of packets representing a particular type of request-response behavior or an initialization/termination handshake procedure) between a pair of configuration items associated with the application. The service may try to match the observed behavior of various configuration items to the descriptor elements to determine the roles being played by the configuration items. 
For example, in the embodiment depicted inFIG.5, using the collected configuration data from various data sources, the service may have determined that the configuration item with unique service-side identifier582A, represented by the coalesced configuration record580A, is playing the role (e.g., a Web server role) indicated by ItemGroupRoleID588A within a particular instance of one of the group templates identified by ItemGroupID586A (e.g., instance 1 of a four tier web application). Other configuration items, such as the item represented by coalesced configuration record580B, may not necessarily be playing roles associated with any given pattern or group descriptor; the fields ItemGroupRoleID and ItemGroupID for such configuration items may be set to null in the depicted embodiment. The labels used for the ItemGroupRoleID and ItemGroupID may be used as “tags” to refer to multiple configuration items playing the same roles or exhibiting the same behavior patterns in some embodiments. Such tags may be used to identify the operands for various operations requested by clients of the discovery service—e.g., the logical equivalent of the query “list all configuration items in data center DC1 with tag ‘Webserver’” or the command “initiate automated migration of configuration items in data center DC1 with tag ‘DBserver’ to data center DC2” may be issued by a client. Clients may designate tags for various configuration items programmatically in some embodiments, and such tags may be used by the discovery service to then identify larger patterns or group descriptors. In one embodiment, the patterns and/or tags indicated by one customer of the discovery service may be used (e.g., with the permission of the customer who provided the patterns/tags) by the discovery service for grouping and assigning roles among the configuration items of other customers. 
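The tag-based operand selection described above, e.g., "list all configuration items in data center DC1 with tag 'Webserver'", can be sketched as follows; the record field names are hypothetical stand-ins for the coalesced-record fields.

```python
def items_with_tag(catalog, data_center, tag):
    """Select configuration items in one data center that carry a given
    role/group tag: the operand-selection step behind commands such as
    'initiate automated migration of items tagged DBserver in DC1'."""
    return [item["id"] for item in catalog
            if item.get("dataCenter") == data_center
            and tag in item.get("tags", ())]

catalog = [
    {"id": "ci-580A", "dataCenter": "DC1", "tags": ["Webserver"]},
    {"id": "ci-580B", "dataCenter": "DC1", "tags": []},       # no role assigned
    {"id": "ci-580C", "dataCenter": "DC2", "tags": ["Webserver"]},
]
# items_with_tag(catalog, "DC1", "Webserver") == ["ci-580A"]
```

Items whose role fields are null (an empty tag list here) are simply never selected, matching the treatment of record 580B above.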
Automated Detection of Obfuscated Network Traffic Sources In many applications, networking intermediaries such as network address translation (NAT) devices, port translation devices and the like may sometimes modify network packets in such a way that the true source of a given packet may not be immediately detectable to configuration data sources associated with the destinations of those packets.FIG.6illustrates the use of source identity detection algorithms for network packets at a configuration discovery service, according to at least some embodiments. Such source identity detection algorithms may also be referred to as source endpoint detection algorithms herein. As shown, a packet set622from a traffic source endpoint610(which may be representable in the discovery service as a configuration item) may be sent to a traffic destination endpoint628via one or more address-obfuscating intermediaries612in the depicted embodiment. The address-obfuscating intermediaries may also be used for traffic in the reverse direction, e.g., from endpoint628to endpoint610. Both the destination and source endpoints may have one or more configuration data sources associated with them, responsible for transmitting configuration data to the discovery service. However, because of the obfuscation operations (e.g., packet header changes, encapsulation within enveloping packets, or the like) performed by the intermediary or intermediaries612, the identity of the sender of a received packet set623(which corresponds to sent packet set622) may be unclear to the data source(s) affiliated with destination endpoint628. From the perspective of the customer on whose behalf the discovery service is being used at the endpoints610and628, or from the perspective of another service (e.g., a migration-related service) which obtains configuration information pertaining to one or both endpoints, discovering the identity of the sender may be important. 
Any of a number of techniques may be employed by the discovery service to identify the sender endpoint of received packet set623. In at least one embodiment, in which respective data sources to which commands can be issued from the service are running at the two endpoints, a special packet sequence655may be issued from endpoint628to endpoint610via the obfuscating intermediary or intermediaries612as part of the endpoint detection algorithm. For example, N “extra” packets (not part of the normal application traffic) spaced exactly T milliseconds apart may be issued in packet sequence655by the data source associated with endpoint628, and the data sources at various other endpoints including endpoint610may in effect monitor incoming traffic for such exactly-spaced packets. Assuming that there are no networking bottlenecks or problems along the path between the endpoints, it may be possible for the data source at the endpoint610to match the inter-arrival times of the received extra packets with the inter-transmission times of the data source at endpoint628, thereby establishing the identity of the sender of packet set623with a reasonably high probability. In some embodiments, while the IP addresses and/or ports of various packets may be obfuscated at intermediaries612, sequence numbers of the packets may not be modified, and the sequence numbers of packets may be analyzed for matches at the receiver's and sender's data sources to identify the source of the packets in the endpoint detection algorithm. In one embodiment, a sequence of connection establishment requests followed quickly by connection teardown requests may be issued from the endpoint628to endpoint610, and such unusual patterns of administrative requests may be used to identify packet sources in the endpoint detection algorithm. 
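The exactly-spaced-packet variant of the endpoint detection algorithm can be sketched as follows. The constants and function name are illustrative assumptions (the patent does not fix N or T), and the tolerance parameter acknowledges that some network jitter must be accommodated even along an unbottlenecked path:

```python
# Hedged sketch: the receiving data source scans incoming packet arrival
# timestamps for a run of PROBE_COUNT packets spaced ~PROBE_SPACING_MS
# apart, which would match the probe sequence 655 sent by the peer.
# All names and values here are assumptions for illustration.

PROBE_COUNT = 5         # N "extra" packets in the probe sequence
PROBE_SPACING_MS = 100  # T milliseconds between transmissions
TOLERANCE_MS = 5        # allowed jitter before a gap no longer matches

def match_probe_sequence(arrival_times_ms):
    """Return True if some run of PROBE_COUNT consecutive packets arrived
    spaced approximately PROBE_SPACING_MS apart, suggesting they are the
    deliberately spaced probe packets rather than application traffic."""
    if len(arrival_times_ms) < PROBE_COUNT:
        return False
    for start in range(len(arrival_times_ms) - PROBE_COUNT + 1):
        window = arrival_times_ms[start:start + PROBE_COUNT]
        gaps = [b - a for a, b in zip(window, window[1:])]
        if all(abs(g - PROBE_SPACING_MS) <= TOLERANCE_MS for g in gaps):
            return True
    return False
```

A match establishes the sender's identity only with high probability, not certainty, which is why the text also describes sequence-number matching and connection setup/teardown patterns as alternative signals.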
In one embodiment, a data source such as a service agent running at the endpoint628may issue a request for a unique service-side identifier to a data source (such as another service agent) running at endpoint610, and the unique service-side identifier may be used to identify the sender. Regardless of the specific endpoint detection algorithm employed to detect the sender, a coalesced configuration record indicating the identity of the sender may be updated in various embodiments after the sender has been identified.
Relevance Scores
FIG.7illustrates example factors that may be used at a configuration discovery service to assign relevance scores to configuration items, according to at least some embodiments. The relevance scores may be used to determine at least the initial responses to queries directed at the configuration items—e.g., the responses to general queries such as “list the processes running at host H1” may comprise the processes to which relevance scores above a threshold have been assigned, thereby reducing the total size of the responses and avoiding noisy or low-information responses. Although the specific example factors shown inFIG.7apply to processes, a similar approach may be taken with respect to other types of configuration items as well in various embodiments. A given physical or virtualized compute server may comprise hundreds of processes, including many low-level or background processes which typically do not consume many resources and are used (e.g., at the operating system level or kernel level) to perform background tasks or respond to unusual situations. Many such processes may be present at all the instances of a given operating system version—e.g., in an execution environment comprising fifty Linux servers, a similar set of daemon processes may be running at all fifty servers. 
At least some consumers of the configuration service's data may be more interested in the processes which are application-specific and therefore do not necessarily show up in the process lists of every server by default. The configuration service may therefore take the repetition frequency712of a given process at different hosts or servers into account in its relevance score assignment algorithm710, with those processes that run at every host (or at almost every host) being assigned a lower relevance. Resource usage levels of the process, such as recent CPU usage714, network port usage716(e.g., whether the process is transmitting or receiving traffic via one or more network ports) and/or I/O device usage718may each be positively correlated with relevance scores in the depicted embodiment—e.g., those processes which tend to consume very low levels of resources and/or are disconnected from the network may be considered low-relevance. In at least some embodiments, the configuration discovery service may keep track of the number of queries (e.g., either from a given client, a collection of clients, or all clients/consumers) which specifically target a given kind of process (e.g., queries which are directed to processes named “httpd”). The query history metrics724may also be used to determine the relevance of a process—e.g., if there has been a query directed specifically to a process by name or role within the previous X days, a high relevance score may be assigned to the process. In the depicted embodiment, the relevance scores generated by algorithm710may be used to classify or rank the processes into at least two categories—“more interesting” processes730with scores higher than a selected threshold, and “less interesting” processes732with scores less than or equal to the threshold. 
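Combining these factors, the scoring and threshold classification can be sketched as below. The weights, the additive scoring formula, and the threshold value are all assumptions for illustration; the patent does not specify how algorithm710combines the factors (and notes that machine learning techniques may be used instead):

```python
# Illustrative sketch of relevance score assignment using the factors of
# FIG. 7: repetition frequency (712), recent CPU usage (714), network
# port usage (716), I/O device usage (718), and query history (724).

def relevance_score(process, total_hosts, recent_query_hits):
    """process: dict with 'host_count', 'cpu_pct', 'open_ports', 'io_ops'."""
    score = 0.0
    # Processes present on (almost) every host rank lower (factor 712).
    score += 1.0 - (process["host_count"] / total_hosts)
    # Resource usage is positively correlated with relevance (714-718).
    score += min(process["cpu_pct"] / 100.0, 1.0)
    score += 0.5 if process["open_ports"] else 0.0
    score += min(process["io_ops"] / 1000.0, 0.5)
    # A recent query naming this process boosts its relevance (724).
    score += 1.0 if recent_query_hits > 0 else 0.0
    return score

def classify(processes, total_hosts, query_hits, threshold=1.0):
    """Split into 'more interesting' (730) and 'less interesting' (732)."""
    more, less = [], []
    for p in processes:
        s = relevance_score(p, total_hosts, query_hits.get(p["name"], 0))
        (more if s > threshold else less).append(p["name"])
    return more, less
```

Under this sketch, a daemon running on all fifty hosts with negligible resource usage scores near zero, while an application server process queried by name recently scores well above the threshold.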
Unless a given process-related query specifies that all processes are to be included in a response, or specifically requests information about a particular process which happens to be classified as a “less interesting” process, the list of more interesting processes may be used to prepare the query response in the depicted embodiment. Similar heuristic approaches may be used to cleanse or shorten responses to queries regarding other types of configuration items in at least some embodiments. In various embodiments the scoring components112illustrated inFIG.1may utilize machine learning techniques to generate relevance scores, trust scores and the like.
Improving Configuration Query Performance
Many customers of the discovery service may have large application execution environments, potentially comprising thousands of configuration items distributed among a number of data centers. Recent changes to the configurations of the application components, and/or recent trends in performance or behaviors of the application components, may often be of greater interest to such customers than older configuration data. Over time, the total volume of configuration data collected and stored at the discovery service for a given application execution environment may become extremely large, potentially slowing query responsiveness, especially for temporal queries for which some traditional data models may not be optimized.FIG.8illustrates an overview of techniques which may be employed at a configuration discovery service to improve responsiveness to temporal queries, according to at least some embodiments. Raw configuration data sets871obtained by the data sources802may be used at the coalescing components804of the discovery service to generate or update timestamped coalesced configuration records872, each associated with one or more unique service-side identifiers of configuration items as discussed earlier. 
The coalesced configuration records872may be stored at an automatically scaled partitioned data store820which forms part of a set of persistent discovery service repositories810in the depicted embodiment. Each partition may contain a selected maximum amount of configuration record data, such as M gigabytes, in one implementation, with each client of the discovery service initially being assigned one partition. As and when the client's configuration data approaches the maximum partition size of the client's existing partition(s), new partitions with an associated set of resources (e.g., allocated storage space and/or compute capacity) may be created automatically for the client, and in some implementations some subset of the client's data may be moved to the new partitions for load balancing. In some embodiments, a database service and/or storage service implemented by the provider network may be used for persistent repositories810. The persistent repositories may also include an optional store822for previously-generated query results in some embodiments. The persistent repositories810may have an average latency L1 for record retrieval. In the embodiment depicted inFIG.8, at least some configuration records may be proactively pre-loaded into a low-latency repository850optimized for certain expected types of queries, as indicated by arrow875. The records may each comprise creation and/or most-recent-modification timestamps in various embodiments. The records may be loaded in reverse chronological order, e.g., with a higher priority being assigned to pre-loading more recently-updated (or more recently-created) records in at least some embodiments. The average latency L2 to access a record from the repository850may be smaller than the latency L1 of record accesses to repositories810. 
In at least some implementations, the repository850may comprise portions of volatile memory at various computing devices of the discovery service that are designated for handling client queries, including temporal or time-series queries directed primarily at recent configuration data. A configuration data cache852at repository850may implement a data model specifically targeted towards temporal queries (e.g., the logical equivalent of “list the configuration changes that have occurred in the last hour at servers S1 and S2”) in some embodiments, such as a data model used in OpenTSDB. In at least some embodiments the maximum space available for the cache852may be less than the space available at the persistent records repositories, and as a result older cache entries892may be discarded as required to make room for newer entries. In at least some embodiments, the results of some queries may optionally be stored at the query results store822, and may be re-used as required as indicated by arrow877. In some embodiments, additional layers of storage beyond those shown inFIG.8may be used—for example, a back-end layer of cold storage may be used for configuration data that has reached a threshold age (such as one month or six months). Such a cold storage layer may be lower cost (and in some cases use a more space efficient format for the data) than the primary persistent data repositories; however, the latencies to retrieve records from the cold storage may be greater. In at least some embodiments a set of snapshot-based programmatic interfaces (or other temporally-oriented programmatic interfaces) may be implemented by the configuration discovery service to enable configuration records corresponding to specified timestamps or time periods to be retrieved from any of the different layers of storage being used. 
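The two-tier retrieval path described above, with a bounded low-latency cache preloaded in reverse chronological order and backed by the persistent repositories, can be sketched as follows. The class and method names are illustrative assumptions, not the service's actual interfaces:

```python
# Minimal sketch of the FIG. 8 storage layering: a small cache
# (repository 850) is preloaded with the most recently updated records
# (arrow 875); misses fall back to the persistent store (repositories
# 810), and older cache entries (892) are discarded to make room.

from collections import OrderedDict

class ConfigRecordStore:
    def __init__(self, persistent_records, cache_capacity):
        # persistent_records: {item_id: (update_timestamp, record)}
        self.persistent = dict(persistent_records)
        self.capacity = cache_capacity
        self.cache = OrderedDict()
        # Pre-load in reverse chronological order of update timestamps.
        by_recency = sorted(self.persistent.items(),
                            key=lambda kv: kv[1][0], reverse=True)
        for item_id, (_ts, rec) in by_recency[:cache_capacity]:
            self.cache[item_id] = rec

    def get(self, item_id):
        """Serve from the low-latency cache when possible (latency L2);
        otherwise fetch from the persistent store (latency L1), caching
        the result and evicting the least recently used entry if full."""
        if item_id in self.cache:
            self.cache.move_to_end(item_id)
            return self.cache[item_id]
        rec = self.persistent[item_id][1]
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)
        self.cache[item_id] = rec
        return rec
```

A production cache would more likely use a time-series-oriented data model (as the OpenTSDB reference suggests) rather than a per-item map, but the preload-by-recency and bounded-eviction behavior is the same.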
In one embodiment respective snapshots of a client's configuration data for different points in time may be created and stored automatically or on demand as distinct objects by the discovery service, which may enable rapid responses to at least some time-based configuration queries. Snapshots corresponding to various points in time may be loaded on demand (or in anticipation of demand) into the cache852in various embodiments from the other layers. In one embodiment the snapshot-based APIs may enable clients to determine whether two snapshot configurations are similar enough for a comparison to be useful, and if so, to provide the results of such a comparison (similar in concept to the results of a “diff” command at the configuration snapshot level).
Methods for Supporting Configuration Discovery Service
FIG.9is a flow diagram illustrating aspects of operations that may be performed at a configuration discovery service, according to at least some embodiments. As shown in element901, a determination may be made that the automated discovery of configuration information from one or more application execution environments of a client is to be initiated at a discovery service. The execution environments may include resources at one or more provider networks (e.g., at a virtual computing service and/or a storage service at the same provider network at which the discovery service itself is implemented, as well as at other provider networks) and/or at customer-owned or customer-managed premises. The determination that the automated discovery is to be initiated may be made, for example, in response to a request received from a customer via a programmatic interface of the discovery service. For example, the service could expose a Startdatacollection API, which can cause agents of the service to initiate automated discovery. For example, the agents can be configured to poll an API for status changes. 
When the discovery service changes the status in a database to start collecting data, the agents can receive this status update and begin collecting data. An initial set of configuration data sources may be identified (element904), e.g., by discovery service agents, and network connectivity may be established between the discovery service and the data sources (e.g., via the agents). A variety of data sources may be used, e.g., including existing configuration management databases of the client, third-party configuration management and/or performance management tools, and/or custom data sources generated specifically for the client. The service may implement programmatic interfaces to receive raw configuration data sets from the different categories of data sources, including bulk export/import interfaces, interfaces for event-driven configuration updates, and the like. The agents can be configured to send data to the programmatic interfaces. For example, the agents can be programmed with information that identifies the endpoint(s) for the programmatic interfaces. The service may start collecting raw configuration data sets from the data sources (element907). Each data set may comprise some set of attribute values and some identification information (e.g., identifiers obtained by the data sources) for the associated configuration items. Unique service-side identifiers may be created for various configuration items in the depicted embodiment (element910), e.g., based on a naming scheme combining elements of data source-side identifiers, attribute values, and/or an ontology defined at the discovery service. The service-side identifiers may differ from at least some of the data source-provided identifiers, and in some embodiments may be used to uniquely identify configuration items such as servers during their lifecycle, even if the configuration items are physically moved, re-deployed for different purposes, and so on. 
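The creation of unique service-side identifiers (element910) can be sketched as below. The hashing scheme, the choice of "stable" attributes, and the function name are assumptions for illustration; the patent specifies only that the naming scheme may combine data-source-side identifiers, attribute values, and an ontology defined at the service:

```python
# Hedged sketch of deriving a stable service-side identifier so that the
# same underlying configuration item maps to the same ID regardless of
# which data source reported it. The attribute subset and hash are
# illustrative assumptions.

import hashlib

def make_service_side_id(ontology_category, source_id, attributes):
    """Combine an ontology-defined category, a data-source-side
    identifier, and a stable subset of attribute values into an ID."""
    # Restrict to attributes unlikely to change across the item's
    # lifecycle (e.g., hardware serials), so superficial configuration
    # changes do not alter the identifier.
    stable = sorted((k, v) for k, v in attributes.items()
                    if k in ("serial_number", "mac_address"))
    digest = hashlib.sha256(
        repr((ontology_category, source_id, stable)).encode()
    ).hexdigest()[:16]
    return f"{ontology_category}-{digest}"
```

Because only the stable attribute subset feeds the digest, adding memory or disk space would leave the identifier unchanged, consistent with the threshold behavior described next.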
In some embodiments, depending on the extent of the configuration changes applied to a particular configuration item, the discovery service may modify the unique service-side identifier over time. The threshold conditions for changing the service-side identifier may differ from one category of configuration item to another. In one example scenario, if memory or disk space is added to a host, the host's service-side unique identifier may not change, but if the CPU or motherboard is swapped out, the unique identifier may be changed. Raw configuration data sets from a variety of data sources that may comprise configuration details about the same underlying entity at different granularities, different times, or using different tools may be coalesced at the discovery service in the depicted embodiment (element913). The raw data sets may not identify the configuration item using a common identifier in some embodiments (for example, different data-source-side identifiers may be provided for the same configuration item in respective raw configuration data sets), and the service may utilize correlations or matches between various attribute values received from the different sources to detect that the configuration data in two different data sets actually refers to the same configuration item. Coalesced configuration records may be generated and stored at one or more persistent repositories of the discovery service. In at least some embodiments, respective trust scores may be associated with different data sources, and such trust scores may be used to resolve conflicts among reported configuration data, and/or to optionally discard some attribute values received from less trustworthy sources (element916). The coalesced curated configuration records may thus exclude some subset of attribute values indicated in the raw configuration data sets. 
Some attribute values may be excluded due to staleness (e.g., because the time that has elapsed since the values were collected exceeds a threshold selected at the service) in addition to or instead of being excluded due to trust scores. The trust scores themselves may be adjusted over time, e.g., using machine learning techniques and/or client feedback. In various embodiments, the discovery service may maintain descriptors for grouping configuration items according to application patterns. If a configuration item's behavior and/or communication pattern matches the behavior or pattern indicated in such a descriptor, the service may automatically tag the configuration item's configuration record with a corresponding role identifier (element919). For example, the configuration data collected (e.g., the pattern of network packet flows) at the service may be sufficient for the service to recognize that a particular server is a web server of a multi-tier web application pattern, that another server is an application server of that pattern, and so on, without being informed by the client of the roles being played by the servers. In some embodiments, at least a portion of the coalesced and curated configuration records may be pre-loaded from the original persistent repository into which they were stored into a low-latency repository at which a data model suited to expected types of queries is implemented (element922). The low-latency repository may comprise a cache (e.g., implemented in volatile memory) from which rapid responses to temporal queries may be provided in some embodiments. The data may be pre-loaded in reverse chronological order (using update timestamps of the coalesced records) in some embodiments, so that queries directed towards more recent changes or metrics are prioritized. The contents of the coalesced records may be provided in response to queries received via the discovery service's programmatic interfaces (element925). 
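The coalescing and curation steps (elements913-916) can be sketched as a single pass over the raw data sets for one configuration item: stale values are dropped, and conflicting attribute values are resolved in favor of the source with the higher trust score. The data structures and thresholds are assumptions for illustration:

```python
# Illustrative sketch of coalescing raw configuration data sets for a
# single configuration item, using per-source trust scores to resolve
# conflicts and a staleness limit to exclude old values.

def coalesce(raw_data_sets, trust_scores, now, staleness_limit):
    """raw_data_sets: list of (source_name, collected_at, {attr: value}).
    Returns one curated attribute map for the configuration item."""
    winners = {}  # attr -> (trust, value)
    for source, collected_at, attrs in raw_data_sets:
        if now - collected_at > staleness_limit:
            continue  # exclude stale values entirely
        trust = trust_scores.get(source, 0.0)
        for attr, value in attrs.items():
            # Keep the value reported by the most trusted source.
            if attr not in winners or trust > winners[attr][0]:
                winners[attr] = (trust, value)
    return {attr: value for attr, (_, value) in winners.items()}
```

In a fuller implementation the trust scores themselves would be adjusted over time (e.g., via machine learning or client feedback), as the text notes.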
Visualization Service for Discovered Configuration Information
FIG.10illustrates an example system environment in which a visualization service for configuration records gathered at a discovery service may be implemented, according to at least some embodiments. As shown, system1000comprises provider networks1002A and1002B, as well as a customer-premises network1072. A number of network-accessible services, including a configuration discovery service1004similar to that described above in the context ofFIG.1throughFIG.9, may be implemented at provider network1002A. In the depicted embodiment, a visualization service1006may be implemented as a component of the configuration discovery service1004, e.g., to provide customized graphical representations of configuration data stored at the discovery service1004. In other embodiments, the visualization service1006may be implemented as a standalone service which obtains configuration records from the discovery service1004. System1000comprises numerous configuration items1036, including configuration items1036A-1036C at provider network1002A, configuration items1036F-1036H at provider network1002B, and configuration items1036L-1036N at customer-premises network1072in the depicted embodiment. Raw configuration data sets (e.g., sets of attribute values) associated with the configuration items may be obtained at various configuration data sources (CDSrcs)1034, such as data sources1034A and1034B of provider network1002A, data source1034K at provider network1002B, and data source1034M at customer-premises network1072. The raw configuration data sets may be transmitted to the configuration discovery service1004, where coalesced configuration records may be generated from the raw data as described earlier and stored in one or more repositories. 
The visualization service1006may provide dynamically updated context-sensitive graphical representations of configurations of complex application environments to clients in the depicted embodiment. When a given client logs on to a visualization console or otherwise sends an indication that a graphical representation of the client's application execution environment is desired, the visualization service may issue one or more queries using the discovery service's programmatic interfaces to identify a set of configuration items associated with the client for which configuration data is to be displayed. The collection of configuration items1036for which information is to be visualized on behalf of a client at a given client-side display environment may be referred to as a visualization target environment (VTE) herein. A given VTE may comprise configuration items distributed among a plurality of data centers of different networks. For example, client C1's VTE1044A may comprise configuration items1036B and1036C at one or more data centers of provider network1002A, as well as configuration items1036F and1036G at one or more data centers of provider network1002B. Client C2's VTE1044B may comprise configuration items1036H at provider network1002B and configuration item1036L at customer-premises network1072in the depicted example. In at least some embodiments, the visualization service1006may be capable of detecting various properties or constraints of the display environments at which the graphical representation of the VTE is to be shown (e.g., the kinds of screens available for the display, the computing power of the client-side devices responsible for rendering the display, etc.), and adjusting the content to be displayed accordingly. 
A given client may be able to utilize several different display environments—e.g., client C1's display environment1082A comprises a desktop with multiple monitors, while another of client C1's display environments1082B may comprise a tablet computing device. Client C2's display environment1082C comprises a 13-inch laptop with a 1024×768 pixel screen. In some cases, multiple display environments may be used on behalf of a single client concurrently, and the visualization service may transmit information at different levels of granularity to the different display environments. Based at least in part on the set of configuration items which have been identified as part of a given client's VTE1044, and at least in part on the constraints or characteristics of the display environment, the visualization service may select a level of granularity at which the VTE is to be displayed. During a client's interactive session with the visualization service, a number of different views of various subsets (or all) of the client's configuration information available may be provided, with the particular combination or sequence of views provided depending on the objectives of the client for the session. For example, during one session a client may wish to troubleshoot a performance problem, during another session the client may wish to view the changes to an application's configuration over some time period, during a third session a client may wish to identify sources of network packets, and so on. Such sessions or workflows may each comprise a respective sequence of graphical displays or views. Several different modes of displaying configuration data, which may be referred to as “view categories”, may be supported by the visualization service in the depicted embodiment, such as for example a hierarchical or tree-oriented view, a graph or network-oriented view, or a table view. 
The particular view category to be used for a given display during a given session may be selected automatically by the visualization service in at least some embodiments. The selection may be based at least in part on the specific types of configuration data (e.g., performance measurements, network connectivity information, temporal changes in configurations, hierarchical/containment relationships among configuration items, rankings of particular types of configuration items based on client-specified criteria, etc.) that are deemed most relevant to the current state of the session or workflow, and/or based at least in part on the service's predictions or expectations regarding the objectives of the client. The client may be provided control elements (e.g., buttons, drop-down menus and the like) to override the view category selection if desired—e.g., the client may issue a request to change the view from a table view to a hierarchical view or vice versa. Data and/or instructions that can be used to generate dynamic customized configuration visualizations1022(e.g., visualizations1022A-1022C), each representing at least a portion of a VTE at the selected granularity level, together with the view category to be used, may be transmitted to the devices of the client's display environment. The data corresponding to the VTE components may then be rendered for viewing at the client's devices. The visualization service may initiate the display of several different representations of the same VTE in parallel in at least some embodiments. In at least some embodiments, in addition to initiating the generation of a graphical representation of all or part of the VTE1044, the visualization service may also provide high-priority or high-importance content to be included in a dashboard for the client. 
The visualization service may, for example, determine the boundaries of a time window, and cause information about at least some configuration changes which occurred during the time window to be displayed in a “recent changes” section of the dashboard. The dashboard may also be used for receiving client feedback with respect to ambiguities identified by the visualization service and/or the discovery service in various embodiments as described below in further detail. The layout and presentation of the dashboard may also be modified based on the constraints and capabilities of the client's display environment in at least some embodiments. As and when new configuration information is gathered at the discovery service1004, the visualization service may automatically update the graphical representations provided to the clients. A number of interactive controls may be made available to the client in the visual interfaces, such as a slider for replaying configuration information as a function of time, as discussed in further detail below.
Context-Based View Transitions
The visualization service may be able to anticipate steps of client workflows for viewing configuration data, and adjust the displayed content automatically to provide the most useful views in various embodiments.FIG.11illustrates example context-based transitions between views that may be implemented automatically by a visualization service, according to at least some embodiments. Three example view categories are shown: a table view1120A, a tree or hierarchical view1120B, and a graph or network view1120C. In some cases, it may be possible to display about the same set of configuration items, such as configuration items1102A-1102F, using several or all of the view categories. 
The visualization service may select the particular view category to be used based on various factors in the depicted embodiment—e.g., based on the particular stage of the interaction workflow expected to be entered next by the client, based on the type of configuration data to be displayed, the granularity selected for the display, and so on. Examples of configuration data types which may be used to select the view category in various embodiments may include, among others, performance measurements, transaction flows, temporal changes to configurations, network connectivity indicators such as numbers of active connections, containment/hierarchical relationship information, location-based groupings of configuration items, membership in application patterns such as those shown inFIG.5, and the like. Such automated workflow context-based transitions1105(e.g., transitions1105A-1105C) may be overridden by the client in at least some embodiments—e.g., links or other control elements of the graphical interface being used may be provided for the client to request a change of the view category being used for the displayed data. The visualization service may maintain a knowledge base of frequently-used client workflows in at least some embodiments, with each workflow representing a respective sequence of displays typically provided to the client to achieve the client's objectives during a session with the visualization service. For example, one such workflow may begin with the client logging in to a visualization console, and being provided a table view of configuration items of a selected type, such as all the hosts being used for the client's applications. In the table view, values of various attributes (e.g., hostname, IP address, current up time, average CPU utilization during a recent time interval, average network bandwidth consumed during a recent time interval, etc.) may be provided for the different hosts. 
The particular configuration item type to be presented initially in a tabular form after the client logs in may be indicated, for example, in preference settings of the client in one implementation. If the client has not specified preferences for the initial post-login display, in one embodiment the visualization service may attempt to find the largest or most inclusive hierarchical configuration item type (e.g., data centers among which the client's resources are distributed, network subnets used by the client, instances of application patterns similar to those shown inFIG.5, or hosts) relevant to the client's visualization target environment, and list configuration items of that type. The view category for the next display of the workflow may be selected based on the interactions of the client with the first display (which may be interpreted as an indication of the objectives of the client for the workflow, e.g., based on matches with previously-used workflows). If the first display comprises a table of host information, and the client requests that the hosts be sorted based on network bandwidth usage metrics, for example, the service may assume that the client wishes to view network connectivity information in the next display. Thus, a network view of a selected set of hosts may be displayed next, indicating for example the network paths between the hosts, the connections open between the hosts, and so on. If the first display comprised a listing of hierarchical containers such as availability containers or data centers, and the client clicks on one of the containers, a hierarchical or tree view category may be selected for the next display, under the assumption that the client wishes to view the contained configuration items. 
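The automated view-category selection described above can be sketched as a small heuristic combining the configuration data type, the number of items, and the display environment's constraints. The rules, thresholds, and names below are illustrative assumptions, not the service's actual decision logic:

```python
# Hedged sketch: pick a table, tree, or graph view category based on the
# data type, item count, and the detected display width. All thresholds
# are assumptions for illustration.

def select_view_category(data_type, item_count, display_px_width):
    # Small screens or very large item counts favor a summarized
    # hierarchical view (e.g., grouping 1000 hosts by data center).
    if item_count > 500 or display_px_width < 800:
        return "tree"
    # Connectivity-oriented data suggests a graph/network view.
    if data_type in ("network_connectivity", "transaction_flows"):
        return "graph"
    # Containment and location groupings suggest a hierarchical view.
    if data_type in ("containment", "location_groupings"):
        return "tree"
    # Otherwise default to a table view of attribute values.
    return "table"
```

As the text notes, any such automatic selection would remain overridable by the client through explicit view-change controls.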
Several factors other than the type of configuration data to be displayed and matches with previously-used workflows may be taken into consideration for automatic view category selection in various embodiments: e.g., the total number of the client's configuration items for which information is available at a given stage of the workflow, the expected size (e.g., in pixels) of the client's display environment, and so on may affect the category view selection. For example, if a client's applications use one thousand hosts, and a smart phone is detected as the client's display environment, a hierarchical or summarized view of the hosts (e.g., based on data center location, or based on subnet membership) may be provided instead of a table view of all one thousand hosts. In one embodiment, clients may request that records of their interactions with the visualization service (which may include client overrides of automated view category selection decisions in some cases) be saved, and such customized records may be used for selecting view categories in subsequent sessions.
Example Visualization Interface Components
As mentioned earlier, the visualization service may provide both a graphical view of at least a portion of one or more application execution environments, and a dashboard.FIG.12illustrates example elements of a graphical user interface of a visualization service, according to at least some embodiments. As shown, a visualization interface1202may show portions or all of a visualization target environment1205and a dashboard1270in the depicted embodiment. In some embodiments, the graphical user interface may be rendered as part of a browser, while in other embodiments a standalone tool may be employed. The VTE portion1205may show a number of location-based groupings of configuration items. In the depicted example, information about configuration items discovered at a provider network1220and a customer-owned data center1230is included in the VTE portion. 
In at least some embodiments, a provider network may be organized into a plurality of geographical regions, and each region may include one or more availability containers, which may also be termed “availability zones”. An availability container in turn may comprise portions or all of one or more distinct locations or data centers, engineered in such a way (e.g., with independent infrastructure components such as power-related equipment, cooling equipment, or physical security components) that the resources in a given availability container are insulated from failures in other availability containers. A failure in one availability container may not be expected to result in a failure in any other availability container; thus, the availability profile of a given resource is intended to be independent of the availability profile of resources in a different availability container. Various types of services and/or applications may therefore be protected from failures at a single location by launching multiple application instances in respective availability containers. For at least some provider network customers, providing a visual representation of the distribution of the resources allocated to the customer among different availability containers may be useful. In the scenario depicted inFIG.12, virtual machines allocated to the customer are distributed among at least two availability containers. Virtual machines (VMs)1222A and1222B run within availability container1210A of the provider network1220, while VMs1222K and1222L run within a second availability container1210B. The location-based groups displayed with respect to a provider network by the visualization service may sometimes differ from the groups displayed with respect to customer-premise networks, e.g., based on differing permissions regarding location details. 
For example, while the virtual machines1222are shown grouped by availability container, the resources shown within customer-owned data center1230are grouped by room and server rack. Room1214of the data center1230comprises two racks1232A and1232B of the customer's VTE. Rack1232A comprises two servers1234A and1234B, while rack1232B comprises server1234C. Network connections established between the configuration items may also be displayed—e.g., VM1222A is shown connected to server1234A and VM1222K, VM1222B is connected to VM1222L and server1234K and so on. The granularity of the location-based groupings may be selected automatically by the visualization service, e.g., based on the total number of configuration items, the display environment properties, and so on. In at least one embodiment, the location-based boundary categories to be included automatically (without receiving explicit requests to do so) in a particular graphical representation may be selected from a set which includes, for example, availability container boundaries of a provider network, data center boundaries, rack boundaries, room boundaries, network interconnect topology boundaries, physical machine boundaries, processing container boundaries or virtual machine boundaries. Multiple processing containers corresponding to respective isolated application sets may, for example, be supported at a single hardware host, and the visualization service may provide graphical representations of the containers within a server for certain views. The dashboard1270may include two types of information in the depicted embodiment: recent configuration changes (shown for a selected time window in section1271), and a disambiguation section1272. Configuration changes that have occurred in a recent time window may be ranked (e.g., based on relevance scores of the kinds discussed above and/or based on other factors) and displayed in order of rank in section1271in the depicted embodiment.
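The location-based grouping described above amounts to bucketing discovered items by a tuple of location boundaries (e.g., room and rack for customer premises, availability container for the provider network). A minimal sketch, with hypothetical record fields:

```python
# Sketch of grouping discovered configuration items by location boundaries.
# The "location" tuple is ordered coarsest-first (e.g., room, then rack);
# the field names are illustrative assumptions.
from collections import defaultdict

def group_by_location(items):
    groups = defaultdict(list)
    for item in items:
        groups[item["location"]].append(item["name"])
    return dict(groups)
```

The granularity of the tuple (data center only, or down to rack and server) could be chosen by the same view-selection logic discussed earlier.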
This may enable clients to quickly understand how their application execution environment is changing. In some embodiments, the configuration discovery service could utilize the visualization interface to obtain client feedback to help confirm or resolve questions regarding the identity of configuration items. If a determination is made that client feedback-based disambiguation of the identity of a particular configuration item is to be initiated, a disambiguation request may be shown in section1272of the interface. For example, the request could indicate a proposed identity and/or one or more attribute values of a configuration item with a symbol (e.g., a question mark or a text balloon) requesting confirmation of the proposed identity. The client may confirm the identity by entering feedback (e.g., a checkmark), or provide an alternative identifier or name. If the ambiguous identity is resolved with the help of the client's feedback, the configuration discovery service may update (or mark as verified) its configuration records, and the disambiguation request may be removed from section1272in the depicted embodiment. In some embodiments, the visualization service may enable clients to examine details (e.g., latencies) associated with various types of transactions that were identified by the configuration discovery service.FIG.13illustrates examples of transaction-related information that may be displayed with the help of a visualization service, according to at least some embodiments. As shown, visualization interface1302may include a “show transactions” control element1350(e.g., a button) as well as a zoom control element1349which enables clients to increase or decrease the granularity at which information is displayed. The zoom control element1349may be used, for example, to arrive at a level of granularity in which individual virtual machines1322(within provider network1320) and servers1334(within customer-owned data center1330) are shown. 
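The feedback-based disambiguation flow described above — show a proposed identity, accept a confirmation or an alternative, then mark the record verified and remove the request — can be sketched as follows. All names here are hypothetical, not the service's API.

```python
# Sketch of the client-feedback disambiguation flow for section1272-style
# requests. "pending" maps item ids to proposed identities; "records" is
# the configuration record store. Names are illustrative assumptions.

def resolve_disambiguation(pending, records, item_id, confirmed, alternative=None):
    proposed = pending.pop(item_id)          # request removed from dashboard
    identity = proposed if confirmed else alternative
    records[item_id] = {"identity": identity, "verified": True}
    return records[item_id]
```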
When the client clicks on the "show transactions" button, the display may be updated to show regions1370A and1370B. Region1370A shows that requests for12500transactions (where the definition of a transaction may be service-selected or client-selected) were transmitted during some time period (100 seconds in this example) from server1334B at rack1332A of room1314of the customer-owned data center to virtual machine1322K of availability container1310B of provider network1320. The average throughput was 125 transactions per second, and the average transaction latency or response time was 500 milliseconds. Similarly, region1370B shows the count, throughput and latency for transactions submitted from server1334K to virtual machine1322B. A recent transaction list1385may be included in the display in the depicted embodiment. For some number of recent transactions associated with the currently-displayed portion of the visualization target environment, details such as an identifier of the submitter configuration item1387(e.g., a process or host from which the transaction was initiated), the responder1388, the submit timestamp1389, transaction command/request details1390, and the completion time and status (e.g., commit/abort)1391may be displayed. The client may be able to sort the recent transactions based on selected attributes using the visualization interface, request that transactions for a selected time period be displayed, and/or view additional details about the transaction sender or the transaction responder. In some embodiments, clients may submit transaction descriptors via the visualization service (e.g., indicating the packet flow sequence, the formats of the transaction requests and responses and the like), enabling the configuration discovery service to monitor the transactions as they occur.
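The count, throughput, and latency figures shown in the transaction regions can be derived from raw per-transaction records. A minimal sketch, with hypothetical field names:

```python
# Sketch of computing the per-region transaction statistics (count,
# throughput, average latency) from raw transaction records observed
# during a measurement window. Field names are illustrative assumptions.

def transaction_stats(transactions, window_seconds):
    count = len(transactions)
    throughput = count / window_seconds                       # tx per second
    avg_latency_ms = sum(t["latency_ms"] for t in transactions) / count
    return {"count": count,
            "throughput_tps": throughput,
            "avg_latency_ms": avg_latency_ms}
```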
In other embodiments, the discovery service may be able to detect frequent request/response patterns in the communications between various configuration items and may use these patterns to define transactions. FIG.14illustrates examples of network traffic-related information that may be displayed with the help of a visualization service, according to at least some embodiments. As mentioned earlier, it may sometimes be the case that network traffic pathways may include obfuscating devices such as address translators or port translators that make it harder to detect the true sources from which one or more packets were sent. In the depicted embodiment, the visualization service may provide control elements (such as control element1448) to show statistics regarding the number of network packets received at selected configuration items (e.g., virtual machine1422A). In the example scenario shown, region1470indicates that, during some selected time interval, a total of 4500 packets were received at virtual machine1422A of availability container1410A and provider network1420. Control element1450labeled “show traffic sources” may be used to submit a source detection query for received packets to the configuration discovery service via the visualization service. In response, the discovery service may employ any of several source identity detection algorithms such as those discussed in the context ofFIG.6to ascertain the probable identity of the configuration items which may have transmitted packets via obfuscating intermediaries to the virtual machine1422A. The algorithms may comprise, for example, sending multiple connection establishment and teardown requests in unusual patterns, closely tracking packet inter-arrival times and matching them with inter-transmission times, monitoring sequence numbers and the like. 
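One of the source-identity heuristics named above is matching packet inter-arrival times at the receiver against a candidate sender's inter-transmission times. A toy sketch of that idea follows; the scoring scheme and tolerance are assumptions, not the discovery service's actual algorithm.

```python
# Sketch of the inter-arrival-time matching heuristic: if the gaps between
# a candidate sender's transmissions line up with the gaps between packet
# arrivals at the receiver, the candidate is a likely source even when an
# obfuscating intermediary rewrites addresses. Tolerance is an assumption.

def interval_match_score(send_times, arrival_times, tolerance=0.005):
    """Return the fraction of inter-arrival gaps matching inter-send gaps."""
    send_gaps = [b - a for a, b in zip(send_times, send_times[1:])]
    arrival_gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    if not send_gaps or len(send_gaps) != len(arrival_gaps):
        return 0.0
    matches = sum(1 for s, r in zip(send_gaps, arrival_gaps)
                  if abs(s - r) <= tolerance)
    return matches / len(send_gaps)
```

A production system would also consider sequence numbers and deliberately-induced connection patterns, as the text notes.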
As indicated in regions1471A and1471B, the configuration discovery service has identified server1434A as the likely source of 500 of the 4500 packets, and server1434K as the likely source of the remaining 4000 packets in the depicted example. It is noted that source identification for some packets may not necessarily require the invocation of the kinds of identity detection algorithms discussed here: the sources of packets which do not pass through an obfuscating intermediary may be obtained simply from their packet headers by the configuration discovery service. In some embodiments the visualization interface may include a recent received packets list region1485, in which details such as receive timestamps1487, apparent sender IP addresses1488, receiver IP addresses1489, packet size1490and/or sequence numbers1491may be displayed. Clients may be able to sort and/or rearrange the contents of the recent received packets list region1485as desired via controls provided by the visualization interface. FIG.15illustrates an example of a use of a slider control element to obtain visualizations of configuration changes over time with the help of a visualization service, according to at least some embodiments. The slider control element1571may enable clients to view the state of their visualization target environments at various points in time. In the depicted example, visualization interface1502A shows that a client's target environment includes five configuration items (CIs1522A and1522B at availability container1510A of provider network1520, CIs1522K and1522L at availability container1510B, and CI1522P at external data center1533) as of a point in time indicated in element1574A. As the slider is moved to the right (as indicated by arrow1551), e.g., using a mouse or a fingertip depending on the display environment being used by the client, the time shown in element1574A may advance, and the configuration items shown in the interface may change. 
At the time corresponding to element1574B, for example, interface1502B shows that two new configuration items have been added to the client's target environment. Configuration item1522R has been added to availability container1510B, while configuration item1522S has been added to the external data center. In at least some implementations, newly-added configuration items may be highlighted (e.g., temporarily shown in a different color) as indicated by arrows1555. Interactive controls other than sliders (e.g., radio-style buttons or fast-forward/rewind controls) may be provided for time-based configuration displays in some embodiments. The interface may also provide additional controls associated with temporal queries, e.g., to enable clients to capture machine-readable snapshots of the state of their visualization target environments at various points in time, to show only the differences in the configurations at specified points in time, to plot the changes on a timeline, and so on. Slider1571and other time-oriented controls of the visualization interfaces may rely on snapshot-based and/or other temporally-oriented APIs of the discovery service (discussed above in the context ofFIG.8) in various embodiments. In some embodiments, the visualization service may provide mechanisms to permit clients to migrate application components from one data center or provider network to another.FIG.16illustrates an example of a use of a visualization service to initiate a phased migration of an application execution environment, according to at least some embodiments. In the depicted scenario, tags indicating the roles played by various configuration items within application patterns may be indicated in the visualization interface1602A. For example, based on the patterns of interactions among the client's configuration items, the configuration discovery service may have identified a multi-tier web application. 
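The slider-driven comparison described above reduces to diffing two point-in-time snapshots of the target environment to find added (and removed) configuration items. A minimal sketch, assuming snapshots are simply collections of configuration item identifiers:

```python
# Sketch of diffing two point-in-time snapshots of a visualization target
# environment, as a slider moves between timestamps. The snapshot format
# (a set of configuration item ids) is an illustrative assumption.

def diff_snapshots(earlier, later):
    earlier_ids, later_ids = set(earlier), set(later)
    return {"added": sorted(later_ids - earlier_ids),
            "removed": sorted(earlier_ids - later_ids)}
```

The "added" entries are the ones a renderer might temporarily highlight in a different color.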
Configuration item1622Q running at external data center1633may have been identified as a web server of the multi-tier web application, while configuration item1622R may have been identified as a database server of the multi-tier web application. Tags1674A and1674B may have been generated for the configuration items1622Q and1622R respectively. Plans to migrate the components of the multi-tier web application in stages to the provider network1620may have been generated in the depicted embodiment, e.g., at a migration planning and implementation service. Each stage of the migration may involve the transition of configuration items playing a particular role (e.g., "webserver" or "database server") to the provider network. Migration plan details for each role may be viewed using controls1633A (for database servers) and1633B (for webservers). A control element1675may be provided to enable the client to initiate phased migration for configuration items associated with particular tags. If and when the client requests the migration of configuration items tagged with the label "DB" for database server in the depicted example, the visualization service may transmit corresponding requests programmatically to the discovery service and/or the migration implementation service in the depicted embodiment. After the database server corresponding to configuration item1622R has been transitioned as part of the phased migration to availability container1610B of the provider network1620, the client's view may be updated to show the migrated configuration item (labeled1674C) as shown in interface1602B. In at least some embodiments, the visualization service may also support interfaces enabling before-and-after performance comparisons with respect to migrations. For example, a baseline view of the performance of the application (e.g., throughput, transaction latencies/response times, etc.)
prior to a migration may be shown in region1646, while the corresponding post-migration performance statistics may be shown in region1647. If the post-migration performance statistics are unsatisfactory, the client may initiate the reverse migration (e.g., moving the database server back to the external data center in the depicted example) if desired. In one embodiment, the visualization interface may be used by clients to directly specify tags to be associated with various roles played by configuration items within applications. For example, an "add tag" control may be provided for clients to define new tags or associate existing tags with selected configuration items via the interface. Over time, a library of tags may be accumulated, and clients may use the visualization interface to inspect available tags, issue tagging requests with respect to existing tags of the library, or add new tags to the library. In effect, the client may "teach" the discovery service regarding application patterns using such controls provided by the visualization service. After the client has associated a webserver tag, for example, with one or more configuration items using the visualization service, the discovery service may monitor the behavior (e.g., the patterns of network interactions with other configuration items) of the tagged items. The discovery service may be able to generate heuristics, based on the observed behavior, which can be used to automatically tag other configuration items with the same labels/tags as the examples provided by the clients, without being explicitly requested by the client to do so.
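The heuristic auto-tagging idea above — propagate a client-supplied tag to items whose observed interaction patterns resemble the tagged example — can be illustrated with a simple set-similarity classifier. The Jaccard measure and threshold here are assumptions for the sketch, not the discovery service's actual heuristics.

```python
# Sketch of similarity-based auto-tagging: candidates whose observed
# network-peer patterns resemble a client-tagged example inherit its tag.
# Jaccard similarity and the 0.5 threshold are illustrative assumptions.

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def auto_tag(example_peers, example_tag, candidates, threshold=0.5):
    """candidates: dict of item_id -> set of observed peer roles/ports."""
    return {item_id: example_tag
            for item_id, peers in candidates.items()
            if jaccard(example_peers, peers) >= threshold}
```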
For example, after the client has provided some examples of web servers or database servers within an environment implementing a multi-tier web application architecture, the discovery service may be able to identify other web servers and/or database servers within other environments implementing similar application architectures on its own, and display auto-generated tags accordingly.
Methods for Supporting a Configuration Data Visualization Service
FIG.17is a flow diagram illustrating aspects of operations that may be performed by a visualization service to provide graphical representations of configuration records, according to at least some embodiments. As shown in element1701, a determination may be made, e.g., when a client logs in to a visualization tool or console or when a client issues a visualization request via a programmatic interface, that a graphical representation of configuration information associated with one or more application execution environments of the client is to be provided. The application execution environments may comprise resources distributed among one or more data centers. Some of the data centers may be part of respective provider networks, while others may be at customer-owned premises. Using one or more queries directed to a discovery service's repository of coalesced configuration records, the visualization service may identify the particular visualization target environment or environments for which information is to be displayed (element1704). In some embodiments, the visualization service may be external to the configuration discovery service, while in other embodiments the visualization service may form part of the configuration discovery service.
In some cases a given customer account of the visualization service (or the discovery service) may have several different application execution environments associated with the account, and the particular set of configuration items that can be viewed may differ from one client-side display environment to another. From a workstation situated within a customer's offices, for example, a larger subset of the client's application execution environments may be accessed via visualizations than from a tablet computer. In at least some embodiments, an identity and access management system may be used to determine the kinds of configuration items for which a display can be provided. The visualization service may identify various characteristics of the client-side display environment (e.g., using APIs supported by the client-side device operating systems) (element1707). The characteristics may include the number and sizes of the screens available, the computing capacity of the client-side device at which the graphical representation is to be provided, the network bandwidth available for communications between the service and the client device, and so on. Based on the amount of configuration data available for the visualization target environment, and/or on the characteristics of the display environment, a number of decisions may be made at the visualization service regarding the initial graphical representation to be generated for the client. These decisions may include selecting the granularity at which configuration information is to be displayed (e.g., aggregated at the data center level, availability container level, room level, server level, etc.) and the view category to be used (e.g., a tabular view, a graph/network view or a tree/hierarchical view) (element1710). In at least some embodiments a time window with respect to which configuration changes are to be highlighted, e.g., using a dashboard portion of the visualization interface, may be determined (element1713). 
A dynamically updated display of the visualization target environment using the selected granularity and view category may be initiated at the client-side display environment (element1716). As and when new configuration data becomes available from the discovery service, or in response to requests issued programmatically by the client, the display may be updated (element1719). In one embodiment, the characteristics of the client-side display environment may sometimes be deemed insufficient to display the information requested by the client. For example, the screen space available may be too small to show the level of detail requested by the client, or the network bandwidth available to the client device may be too small to transfer the amounts of data requested in a reasonable amount of time. In some such scenarios, based for example on an estimate of resource usage associated with fulfilling the visualization request via the current client-side display environment, the visualization service may transmit a recommendation to utilize an offline tool (or a different client-side display environment than the one being used currently) for displaying information requested in the visualization request.
Migration Marketplace Service
As mentioned in the context ofFIG.1, a provider network may implement one or more migration-related services in some embodiments, with the high-level goals of enabling customers to transfer applications or application components from one set of physical or virtual platforms to another as desired, e.g., to help reduce costs, increase the availability or resilience of the applications, simplify management, and the like. The information collected by the discovery service may be useful in making migration-related decisions as discussed earlier.
However, due in part to the many dependencies among the components of complex application stacks, the process of transitioning an application from one environment to another may sometimes benefit from expertise that may not be available within the application-owner organizations. In some embodiments, the provider network at which the configuration discovery service runs may act as an intermediary between migration facilitators or experts and application owners by implementing a migration marketplace service. A business entity that is able to help plan and/or implement the migration of applications or application components from one set of execution platforms to another, such as from a customer-owned data center to a provider network or from one provider network to another, may be termed a migration facilitator or a migration practitioner herein. If the migration facilitator differs from the entity which operates the target set of execution platforms (to which the application is to be migrated) and differs from the entity whose application is being migrated, the migration facilitator may be referred to as a third-party migration facilitator. At least some migration facilitators may not be characterized as third parties—e.g., a professional services or consulting organization which is part of (or affiliated with) one of the provider networks to/from which migrations are to be performed may also utilize the migration marketplace service in the role of a migration facilitator in some embodiments. At a high level, the migration marketplace service may enable application owners to learn about potential partners that can help with complex application migration tasks, and may enable migration facilitators to find customers. 
Migration facilitators may include tool providers (e.g., independent software vendors or ISVs that develop migration tools which can be used by clients to implement application migrations), technical experts who are available to actually plan and implement migrations instead of providing tools as such, professional services organizations, partners of the provider network operator, and so on in various embodiments. FIG.18illustrates an example system environment in which a migration marketplace service which utilizes data collected at a configuration discovery service may be implemented, according to at least some embodiments. As shown, system1800comprises a provider network1802at which a number of network-accessible services are implemented. The services include a virtualized computing service1810, a packaged-program execution service1812, one or more storage or database services1814, a configuration discovery service1804, a migration marketplace service1806, and a migration planning and implementation service1808. Each service may implement a set of programmatic interfaces in the depicted embodiment, which may be used for interactions between the services and their clients, and may also be used for inter-service interactions in some cases. The VCS programmatic interfaces1844associated with virtualized computing service1810may be used to acquire, use and release virtual machines. The PES interfaces1845of the packaged-program execution service1812may be used to submit requests for the execution of programs without explicitly allocating servers to the requesters, and for receiving results of the program executions. The SDS programmatic interfaces1846may be used to store and access data sets associated with various applications. The CDS programmatic interfaces1841of the discovery service1804may be employed to initiate automated discovery of configuration information as discussed earlier, and to view the results of the automated discovery. 
The MPIS interfaces1843of the migration planning and implementation service1808may be used to generate detailed migration plans and execute the plans. The migration marketplace service1806may utilize some or all of the other services shown inFIG.18in the depicted embodiment. The MMS programmatic interfaces1842of the migration marketplace service may be utilized by at least two types of entities—potential migration clients1834(e.g., owners of applications that may be migrated) and migration facilitators1836. According to one embodiment, a client1834may transmit a request to the migration marketplace service to enable one or more migration facilitators to access at least some subset of the configuration information associated with the client's applications. The configuration information to which access permission is to be granted may, for example, be stored in the coalesced configuration records of a repository maintained at the discovery service1804. As discussed earlier a given application execution environment of the client1834may include configuration items or resources distributed among a plurality of platforms, e.g., including some resources outside provider network1802and/or some resources located within provider network1802. In some embodiments, the client1834may request that at least some of the configuration details be obfuscated or anonymized before being provided to the migration facilitators—that is, the facilitators may be permitted to access some aspects of the configuration information without being provided the full details. In some cases, the clients may allow only a specified set of migration facilitators to examine the configuration information, while in other embodiments any migration facilitator which has been registered or approved by the migration marketplace service may be granted access permission. 
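The obfuscation/anonymization step mentioned above — revealing enough configuration detail for facilitators to assess a migration without exposing sensitive identifiers — can be sketched as a field-level redaction pass. The field list and hashing scheme below are assumptions for illustration only.

```python
# Sketch of anonymizing configuration records before they are exposed to
# migration facilitators: sensitive fields become opaque but stable labels,
# while aggregate-friendly fields pass through. Field names and the
# truncated-hash scheme are illustrative assumptions.
import hashlib

SENSITIVE_FIELDS = ("hostname", "ip_address")

def anonymize_record(record):
    out = dict(record)
    for field in SENSITIVE_FIELDS:
        if field in out:
            digest = hashlib.sha256(str(out[field]).encode()).hexdigest()[:8]
            out[field] = f"anon-{digest}"   # stable, non-reversible label
    return out
```

Using a deterministic label (rather than a random one) lets facilitators still see that two records refer to the same host, without learning which host it is.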
In response to a client request to share access to the client's configuration record collection, the migration marketplace service1806may initiate the modification of one or more security settings for the configuration records (e.g., either within metadata maintained by the migration marketplace service itself, or within metadata of the discovery service1804). The migration marketplace service1806may receive respective membership requests from migration facilitators via the MMS programmatic interfaces1842in the depicted embodiment. In at least some embodiments, the migration marketplace service may initiate a set of validation procedures (e.g., to verify the identity and business background of the facilitator) before registering a migration facilitator as an authorized member of the marketplace. A registered migration facilitator may transmit a migration candidate matching request1837via interfaces1842in the depicted embodiment. Such a request may include, for example, a description of the facilitator's expertise or capabilities (e.g., the kinds of application stacks whose migrations the facilitator has helped with in the past) and/or characteristics of the kinds of migration clients the facilitator would prefer (e.g., a minimum or maximum size of the to-be-migrated application execution environment, the geographical location of the to-be-migrated application execution environment or the migration destination environment, etc.). A response to the migration candidate matching request1837, identifying one or more potential customers for the facilitator from among clients of the discovery service1804may be generated at the service1806using configuration information to which the migration facilitator has been granted access. 
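Matching a facilitator's stated capabilities and preferences against candidate client environments, as described above, is essentially a filter over accessible configuration summaries. A minimal sketch, with hypothetical descriptor fields:

```python
# Sketch of migration-candidate matching: filter client application
# execution environments by a facilitator's size, stack, and region
# preferences. Descriptor field names are illustrative assumptions.

def match_candidates(facilitator, environments):
    matches = []
    for env in environments:
        size_ok = (facilitator["min_size"] <= env["num_items"]
                   <= facilitator["max_size"])
        stack_ok = env["stack"] in facilitator["stacks"]
        region_ok = env["region"] in facilitator["regions"]
        if size_ok and stack_ok and region_ok:
            matches.append(env["env_id"])
    return matches
```

In practice the environments passed in would already be limited to those whose owners granted the facilitator access, and possibly anonymized first.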
In some embodiments, as discussed below in further detail, the migration marketplace service1806may execute service-generated algorithms to find matching customers, while in other embodiments the facilitator may supply its own executable code modules to find potential customers and the facilitator-supplied code may be run at the packaged-program execution service1812or at some other platforms. The migration facilitator1836may examine the information regarding potential migration candidates provided by service1806, and submit a migration proposal via the interfaces1842to the service1806. The proposal may describe various aspects of the assistance that the facilitator is willing to provide, including for example a preliminary cost estimate for migrating a specified subset or all of an application execution environment, a preliminary schedule or implementation plan, and the like. If the migration facilitator is an independent software vendor offering a migration tool rather than a technical expert interested in actually implementing the migration, the name of the tool (which can be used by the client to migrate their application) may be included in the proposal in some embodiments. The service1806may then transmit a representation of the proposal1837to the potential migration client1834via the programmatic interfaces1842. The service1806may receive numerous proposals for a given application environment of the same potential migration client1834in some embodiments, each of which may be programmatically provided to the client. If the client1834finds one of the proposals acceptable and wishes to proceed with detailed migration planning and/or implementation, in some embodiments an approval message may be transmitted to the migration marketplace service via interfaces1842.
In response to such an approval message, in one embodiment the migration marketplace service may initiate operations to enable the client1834to execute a workflow of a migration planning/implementation service1808, e.g., by seamlessly and programmatically transferring further migration interactions to MPIS interfaces1843. Thus, in at least some embodiments, the migration marketplace service1806may not necessarily be responsible for detailed planning and actual implementations of migrations; instead, the primary role of the migration marketplace service1806may comprise acting as a trusted conduit of information between potential migration clients and migration facilitators before the detailed work of planning and executing a migration is begun. In other embodiments, the migration marketplace service may be responsible for planning and/or coordinating the implementation of at least some aspects of migration. It is noted that the visualization service discussed herein in the context ofFIG.10-FIG.17may also be used with the migration marketplace service1806and/or migration planning/implementation services1808in some embodiments. The visualization service may provide a unified seamless set of graphical interfaces which can be used to view application execution environment configuration data, identify potential facilitators for application migrations, and plan and implement such migrations as desired in such embodiments.
Programmatic Interactions with Migration Marketplace Service
FIG.19illustrates example programmatic interactions between a client and a migration marketplace service, according to at least some embodiments. As shown, a client1920may submit a discovery data access grant request1925to one or more computing devices of the migration marketplace service1922via programmatic interfaces.
The discovery data access grant request1925may comprise, for example, a set of application environment identifiers1928corresponding to the client's configuration record collections, one or more security constraints1931and/or a facilitator list1934. The facilitator list1934may indicate one or more specific facilitators to which configuration information of the client1920is to be disclosed if needed, or may indicate that any registered/authorized facilitator may be provided the configuration information. Security constraints1931may indicate whether any limits are to be placed on the revealed configuration data (e.g., the client may prefer that details of some types of configuration items not be revealed at the individual configuration item level although aggregated information may be revealed), which aspects of the configuration information if any are to be anonymized or obfuscated, and so on. Based at least in part on the contents of the request1925, the migration marketplace service1922may modify security settings associated with the client's configuration records, e.g., at a marketplace metadata repository1955and/or at the configuration discovery service1957. A confirmation message1975indicating that the requested access permission changes have been applied may be transmitted to the client1920via a programmatic interface. FIG.20illustrates a first set of example programmatic interactions between a migration facilitator and a migration marketplace service, according to at least some embodiments. In the depicted embodiment, a migration facilitator2020may submit a migration candidate identification query2025to the migration marketplace service2022via a programmatic interface. 
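The discovery data access grant flow of FIG. 19 could be sketched as follows. This is a minimal illustration in Python; the class and field names (and the "ANY_REGISTERED" sentinel) are invented for the sketch and are not the service's actual schema.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AccessGrantRequest:
    """Illustrative stand-in for the discovery data access grant request 1925."""
    app_environment_ids: List[str]         # identifiers of the client's config record collections
    security_constraints: dict             # e.g. which fields must be anonymized or aggregated
    facilitator_list: Optional[List[str]]  # None means any registered facilitator may see the data

def apply_access_grant(request: AccessGrantRequest, metadata_repo: dict) -> dict:
    """Record per-environment security settings and return a confirmation,
    mirroring the confirmation message 1975 of FIG. 19."""
    for env_id in request.app_environment_ids:
        metadata_repo[env_id] = {
            "allowed_facilitators": request.facilitator_list or "ANY_REGISTERED",
            "constraints": request.security_constraints,
        }
    return {"status": "access-granted", "environments": request.app_environment_ids}

repo = {}
confirmation = apply_access_grant(
    AccessGrantRequest(["env-123"], {"anonymize": ["hostnames"]}, None), repo)
```

After this call, the repository holds the modified security settings that later filtering steps consult before revealing any configuration data to a facilitator.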
The query2025may include respective descriptors of the facilitator's capabilities2028(e.g., the types of applications for whose migration the facilitator is interested in providing assistance) and candidate preferences2031(e.g., the minimum and/or maximum application configuration sizes whose migration the facilitator is willing to assist with, preferred geographical regions or locations from the perspective of the facilitator, and so on). In response to the candidate identification query2025, a filter specification2037to retrieve matching configuration data from the configuration discovery service database2040may be prepared at the migration marketplace service2022. In some embodiments, a configuration data anonymizer2034may participate in the preparation of the filter specification2037so that only the subset of configuration data to which access has been granted by potential migration clients via access grant requests of the kind discussed in the context ofFIG.19is retrieved. In various embodiments, obfuscation criteria (e.g., provided by the potential migration clients, or generated by the migration marketplace service based on heuristics) may be used to prepare the filters to avoid breaches of client configuration security or confidentiality. In one embodiment, instead of or in addition to generating the filter specification, the configuration data anonymizer2034may process the configuration data retrieved from the discovery service database to ensure that any security constraints indicated by the potential migration clients are not violated. A curated candidate environment list2046, indicating application execution environments and/or clients which match the capabilities and preferences of the migration facilitator2020may be transmitted via the service's programmatic interfaces to the facilitator. 
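The preparation of a filter specification that respects access grants, as described for FIG. 20, might look roughly like this sketch. The grant structure and field names are illustrative assumptions, not the actual filter format.

```python
def build_filter_spec(query: dict, access_grants: dict) -> dict:
    """Limit discovery-database retrieval to environments whose owners have
    granted access to this facilitator (or to any registered facilitator),
    and collect the fields the anonymizer must obfuscate."""
    facilitator = query["facilitator_id"]
    visible = {
        env_id: grant for env_id, grant in access_grants.items()
        if grant["allowed_facilitators"] == "ANY_REGISTERED"
        or facilitator in grant["allowed_facilitators"]
    }
    return {
        "environment_ids": sorted(visible),
        "regions": query.get("preferred_regions", []),
        "anonymize_fields": sorted(
            {f for g in visible.values()
             for f in g.get("constraints", {}).get("anonymize", [])}),
    }

grants = {
    "env-1": {"allowed_facilitators": "ANY_REGISTERED",
              "constraints": {"anonymize": ["hostnames"]}},
    "env-2": {"allowed_facilitators": ["fac-9"], "constraints": {}},
}
spec = build_filter_spec({"facilitator_id": "fac-7"}, grants)
```

Here facilitator "fac-7" can only see "env-1": "env-2" was granted exclusively to "fac-9", so the filter excludes it before any data leaves the discovery database.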
The facilitator in turn may submit a list2049of migration proposals to the service2022, corresponding to some or all of the candidate environments and clients indicated in list2046. The migration marketplace service may then transmit representations of the migration proposals2051(e.g.,2051A and2051B) to the appropriate clients2080(e.g.,2080A or2080B) in the depicted embodiment via the service's programmatic interfaces. At least some of the proposals2051may include a respective indication of a preliminary cost estimate2054(e.g.,2054A or2054B) for the proposed migration effort in the depicted embodiment. In one embodiment, a proposal2051may also or instead include a migration schedule estimate, or a feedback record (e.g., a review or rating/ranking score) indicative of an earlier migration implemented by the facilitator associated with the proposal. FIG.21illustrates a second set of example programmatic interactions between a migration facilitator and a migration marketplace service, according to at least some embodiments. The primary difference betweenFIG.21andFIG.20is that in the scenario illustrated inFIG.21, the migration facilitator may transmit an executable program code module which can be executed to identify migration candidates which match the facilitator's requirements, instead of relying on the migration marketplace service to use service-generated matching algorithms. Such an approach may, for example, enable the migration facilitators to reduce the amount of detailed information regarding the facilitators' capabilities or constraints that have to be provided to the marketplace service, and may also reduce the burden placed on the marketplace service to develop accurate and efficient matching algorithms. In the embodiment shown inFIG.21, a migration facilitator2120submits a migration candidate identification query2125comprising an indication of an executable candidate-matching algorithm code module2128to the migration marketplace service2122. 
In some implementations the query2125may include some indication of an execution platform or service at which the candidate-matching code module2128is to be run. The service2122transmits the code2128to a selected execution service2175(e.g., the packaged-program execution service1812ofFIG.18). The use of the packaged-program execution service may have the benefit that no resources may have to be pre-allocated for the migration facilitator; instead, the packaged-program execution service may simply find an available execution platform from among a pool of platforms, run the module on that platform and provide the results of the execution. The migration facilitator may only be responsible for the computation resources actually used for the execution of the module. In some cases, other execution platforms such as virtual machines of the provider network's virtualized computing service may be used. As a result of the execution of the matching algorithm code, a filter specification2178may be transmitted to the configuration discovery service, and a set of matching candidate configuration environments2181may be generated accordingly. The matching algorithm code may use the candidate configuration environments to produce a migration proposal list2183transmitted to the migration marketplace service. The individual proposals2184(e.g.,2184A or2184B) of the list may then be transmitted to the appropriate clients2180(e.g.,2180A or2180B) in the depicted embodiment. Marketplace Metadata FIG.22illustrates examples of entries that may be stored at a metadata repository of a migration marketplace service, according to at least some embodiments. As shown, the metadata repository2205may comprise at least two categories of information: migration facilitator records2231and discovery service client records2251. 
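A facilitator-supplied matching module of the kind shown in FIG. 21 could take roughly this shape: an entry-point function that the execution service invokes, which builds a filter, queries the discovery service, and emits proposals. The function signature, the cost formula, and the stand-in discovery query are all invented for illustration.

```python
# Hypothetical entry point a facilitator might package for the
# packaged-program execution service; the service invokes it and relays
# the resulting proposal list back to the marketplace.
def match_candidates(event: dict, discovery_query) -> list:
    """Facilitator-supplied matching logic: build a filter specification,
    query the discovery service, and emit preliminary proposals."""
    filter_spec = {"min_servers": event.get("min_servers", 10)}
    candidates = discovery_query(filter_spec)
    return [{"client_id": c["client_id"],
             "environment_id": c["env_id"],
             "preliminary_cost_estimate_usd": 500 * c["server_count"]}
            for c in candidates]

def fake_discovery(spec):
    """Stand-in for the discovery service database during local testing."""
    inventory = [{"client_id": "c-1", "env_id": "e-1", "server_count": 12},
                 {"client_id": "c-2", "env_id": "e-2", "server_count": 4}]
    return [c for c in inventory if c["server_count"] >= spec["min_servers"]]

proposals = match_candidates({"min_servers": 10}, fake_discovery)
```

Because the module runs at the packaged-program execution service, the facilitator's matching criteria stay inside the code rather than being disclosed to the marketplace, matching the motivation described above.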
The migration facilitator records2231may include, for example, descriptors of the facilitators' capabilities2233or expertise (e.g., the types of application stacks with whose migrations the facilitators have experience). The records2231may also include the facilitators' preferences or requirements2235regarding the characteristics which make migration candidates suitable for the facilitators (e.g., the geographical distribution of the pre-migration or post-migration configuration items, the range of acceptable or preferred sizes of the to-be-migrated application environments, or preferred geographical regions or locations of operation of the facilitators) in various embodiments. In at least one embodiment the records2231may also include feedback2237or testimonials indicating the appraisals of the facilitators' earlier assistance. The feedback may include ratings or rankings (e.g., a number of stars between 1 and 5, with 1 indicating a poor rating and 5 indicating an excellent rating) as well as text reviews in some embodiments. In the depicted embodiment, the records2231may also include migration proposal histories2239(indicating one or more proposals generated by the facilitators in the past) and proposal conversion histories2241indicating the subset of the proposals which were accepted by clients. The discovery service client records2251may include the configuration data access grants2253corresponding to various clients. In addition, in at least one embodiment, entries2255indicating the migration histories of the clients may also be maintained. It is noted that in some embodiments, at least some of the kinds of elements shown inFIG.22may not necessarily be stored at the migration marketplace service. Web-Based Marketplace Service Interface FIG.23illustrates an example web-based interface which may be implemented by a migration marketplace service, according to at least some embodiments. 
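The metadata repository entries of FIG. 22 could be modeled as simple record types, for instance as below. The dataclass fields follow the elements named in the figure (capabilities 2233, preferences 2235, feedback 2237, proposal histories 2239, conversion histories 2241); the `conversion_rate` helper is an invented illustration of how proposal and conversion histories might be used together.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FacilitatorRecord:
    """Sketch of a migration facilitator record 2231 (fields per FIG. 22)."""
    facilitator_id: str
    capabilities: List[str]   # application stacks the facilitator supports
    preferences: dict         # e.g. size ranges, preferred regions
    feedback: List[dict] = field(default_factory=list)        # 1-5 ratings, reviews
    proposal_history: List[str] = field(default_factory=list)
    conversion_history: List[str] = field(default_factory=list)  # accepted subset

@dataclass
class ClientRecord:
    """Sketch of a discovery service client record 2251."""
    client_id: str
    access_grants: dict = field(default_factory=dict)
    migration_history: List[dict] = field(default_factory=list)

def conversion_rate(rec: FacilitatorRecord) -> float:
    """Fraction of a facilitator's past proposals that clients accepted."""
    if not rec.proposal_history:
        return 0.0
    return len(rec.conversion_history) / len(rec.proposal_history)

rec = FacilitatorRecord("f-1", ["LAMP"], {},
                        proposal_history=["p1", "p2"], conversion_history=["p1"])
rate = conversion_rate(rec)
```

A derived metric like this could, for example, feed the ranking feedback shown to clients, though the patent does not prescribe any particular computation.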
As shown, the interface comprises a web page2302which includes a message area2310, some number of advertisement regions2315(e.g.,2315A-2315D) corresponding to respective registered migration facilitators, and interaction controls (e.g., buttons or web links)2325,2327and2329. The message area2310may inform a client of the migration marketplace service that additional information regarding various migration facilitators can be obtained by links indicated in the advertisements. Each of the advertisements may indicate the application stacks that a particular facilitator is willing to assist with—e.g., facilitator F1's supported application stacks2322A in advertisement region2315A, facilitator F2's supported application stacks2322B in advertisement region2315B, and so on. Ranking/rating feedback2324for the facilitators, as well as example pricing information2325may also be included in the advertisements in some embodiments. Case studies indicating the benefits of migrating applications (e.g., to the provider network from customer-owned data centers) may be made available to clients via button control2325of the web page2302. Additional information (e.g., white papers or online tutorials) regarding automated discovery of configuration items and/or migration planning may be accessed via button control2327. In the depicted embodiment, clients may sign up for recommendations regarding migration facilitators using control2329. In various embodiments, clients may use the migration marketplace service's interfaces to submit requests for assistance with migration. For example, in some embodiments, clients of the provider network who have not yet started using the discovery service, but may potentially be interested in migrating their applications at some time in the future, may transmit requests via the marketplace service's programmatic interfaces to initiate the automated discovery of configuration items. 
In response to such a request, the migration marketplace service may invoke the programmatic interfaces of the configuration discovery service on behalf of the client so that the retrieval of configuration information from data sources associated with the client's applications can begin. The collected information may later be used to match the client with appropriate migration facilitators. In at least one embodiment, a client on whose behalf configuration data has already been collected at a discovery service may submit a migration assistance request to the migration marketplace service. The migration marketplace service may execute its own matching algorithms based on the client's configuration data and the metadata stored regarding facilitators (e.g., in repository2205ofFIG.22), and provide facilitator recommendations to the client. Methods for Supporting a Migration Marketplace FIG.24is a flow diagram illustrating aspects of operations that may be performed at a migration marketplace service, according to at least some embodiments. As shown in element2401, a request to enable migration facilitators to access configuration records of a client's application environments may be received at a migration marketplace service of a provider network. The configuration records may have been collected from a variety of data sources affiliated with a configuration discovery service as described earlier. Security metadata for the configuration record collection of the client may be modified accordingly (element2404). As mentioned earlier, in some cases the security settings may be modified at the discovery service, while in other embodiments security metadata may be maintained by the migration marketplace service itself. A migration candidate identification request or query may be received at the marketplace service from a migration facilitator (element2407). 
A filter specification or query limiting the configuration data retrieved from a discovery service database based on the client's permission grants and/or data obfuscation requirements may be transmitted to the discovery service (element2410). In some embodiments, the migration marketplace service may transmit the filter specification or query. In other embodiments, executable code for a migration candidate matching algorithm may be supplied by the migration facilitator, and may be run at a different service of the provider network (such as a packaged-program execution service which does not require pre-allocation of servers for the code), resulting in the submission of the filter specification or query. The configuration information retrieved in response to the filtering may be used to generate one or more migration proposals (element2413), e.g., indicating that the migration facilitator is interested in assisting with the migration of a client's execution environment. In at least some embodiments the proposal may include a preliminary cost estimate. The marketplace service may transmit a representation of the proposal via its programmatic interfaces to the client to which the proposal applies (element2416). Optionally, in response to a client's acceptance of a proposal via a programmatic interface, the migration marketplace service may initiate a workflow of a separate migration planning or implementation service (element2419). It is noted that in various embodiments, at least some operations other than those illustrated in the flow diagrams ofFIG.9,FIG.17, andFIG.24may be used to implement the configuration discovery-related and migration marketplace-related techniques described above. Some of the operations shown may not be implemented in some embodiments or may be implemented in a different order, or in parallel rather than sequentially. 
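The sequence of operations in FIG. 24 (elements 2401 through 2419) could be condensed into one illustrative workflow function. Everything below, including the flat per-environment cost estimate and the callback for client acceptance, is an invented sketch of the control flow, not the service's implementation.

```python
def marketplace_workflow(access_request, candidate_query, client_accepts):
    """End-to-end sketch of FIG. 24; comments refer to flow-diagram elements."""
    # 2401/2404: receive the access grant request, modify security metadata
    security_md = {env: {"visible": True} for env in access_request["environments"]}
    # 2407/2410: on a facilitator query, build a filter limited by the grants
    visible_envs = [env for env, md in security_md.items() if md["visible"]]
    # 2413: use the retrieved configuration data to generate proposals
    proposals = [{"environment": env,
                  "cost_estimate": candidate_query["rate_per_env"]}
                 for env in visible_envs]
    # 2416/2419: transmit proposals; if one is accepted, hand off to the
    # migration planning/implementation service
    accepted = [p for p in proposals if client_accepts(p)]
    return {"handoff_to_planning_service": bool(accepted), "accepted": accepted}

result = marketplace_workflow({"environments": ["e-1", "e-2"]},
                              {"rate_per_env": 1000},
                              lambda p: p["environment"] == "e-1")
```

The boolean hand-off flag mirrors the optional final step (element 2419), where the marketplace initiates a workflow of the separate planning or implementation service only after a client accepts a proposal.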
Use Cases The techniques described above, of automated collection, coalescing and visualization of configuration data from a variety of sources at multiple networks, and of implementing migration-related online marketplaces, may be useful in a variety of embodiments. For complex application stacks distributed among customer-owned and provider-owned resources, the described configuration discovery service may be able to combine and curate application configuration data from diverse sources with varying levels of granularity, trust and accuracy. The service may expose the collected data, organized according to a standardized ontology-based naming schema, via easy-to-use programmatic interfaces including APIs that can be used to build higher-level services, such as services that help customers plan and implement the migration of their applications to provider network environments. Visualization components of the configuration discovery service may make it easier for customers to obtain overviews of their entire application stacks, as well as to drill down into any desired level of detail, which can help with resource capacity planning, performance debugging and troubleshooting. A migration marketplace service may act as an intermediary that can introduce application owners, who may potentially be interested in migrating their applications to provider networks, but may not have the necessary technical expertise to be certain about how to plan and implement the migrations, to migration facilitators or experts that may be able to help the application owners with their migration needs. The migration service may ensure that the information provided regarding a given application environment meets the security criteria of the application owners, and may support a smooth transition to a migration planning and implementation service for customers who have selected a migration facilitator. 
Illustrative Computer System In at least some embodiments, a server that implements a portion or all of one or more of the technologies described herein, including the techniques to implement components of the configuration discovery service, an associated visualization service and/or a migration marketplace service may include a general-purpose computer system that includes or is configured to access one or more computer-accessible media.FIG.25illustrates such a general-purpose computing device9000. In the illustrated embodiment, computing device9000includes one or more processors9010coupled to a system memory9020(which may comprise both non-volatile and volatile memory modules) via an input/output (I/O) interface9030. Computing device9000further includes a network interface9040coupled to I/O interface9030. In various embodiments, computing device9000may be a uniprocessor system including one processor9010, or a multiprocessor system including several processors9010(e.g., two, four, eight, or another suitable number). Processors9010may be any suitable processors capable of executing instructions. For example, in various embodiments, processors9010may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors9010may commonly, but not necessarily, implement the same ISA. In some implementations, graphics processing units (GPUs) may be used instead of, or in addition to, conventional processors. System memory9020may be configured to store instructions and data accessible by processor(s)9010. In at least some embodiments, the system memory9020may comprise both volatile and non-volatile portions; in other embodiments, only volatile memory may be used. 
In various embodiments, the volatile portion of system memory9020may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM or any other type of memory. For the non-volatile portion of system memory (which may comprise one or more NVDIMMs, for example), in some embodiments flash-based memory devices, including NAND-flash devices, may be used. In at least some embodiments, the non-volatile portion of the system memory may include a power source, such as a supercapacitor or other power storage device (e.g., a battery). In various embodiments, memristor based resistive random access memory (ReRAM), three-dimensional NAND technologies, Ferroelectric RAM, magnetoresistive RAM (MRAM), or any of various types of phase change memory (PCM) may be used at least for the non-volatile portion of system memory. In the illustrated embodiment, program instructions and data implementing one or more desired functions, such as those methods, techniques, and data described above, are shown stored within system memory9020as code9025and data9026. In one embodiment, I/O interface9030may be configured to coordinate I/O traffic between processor9010, system memory9020, and any peripheral devices in the device, including network interface9040or other peripheral interfaces such as various types of persistent and/or volatile storage devices. In some embodiments, I/O interface9030may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory9020) into a format suitable for use by another component (e.g., processor9010). In some embodiments, I/O interface9030may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. 
In some embodiments, the function of I/O interface9030may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments some or all of the functionality of I/O interface9030, such as an interface to system memory9020, may be incorporated directly into processor9010. Network interface9040may be configured to allow data to be exchanged between computing device9000and other devices9060attached to a network or networks9050, such as other computer systems or devices as illustrated inFIG.1throughFIG.24, for example. In various embodiments, network interface9040may support communication via any suitable wired or wireless general data networks, such as types of Ethernet network, for example. Additionally, network interface9040may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol. In some embodiments, system memory9020may be one embodiment of a computer-accessible medium configured to store program instructions and data as described above forFIG.1throughFIG.24for implementing embodiments of the corresponding methods and apparatus. However, in other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media. Generally speaking, a computer-accessible medium may include non-transitory storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD coupled to computing device9000via I/O interface9030. A non-transitory computer-accessible storage medium may also include any volatile or non-volatile media such as RAM (e.g. SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., that may be included in some embodiments of computing device9000as system memory9020or another type of memory. 
Further, a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface9040. Portions or all of multiple computing devices such as that illustrated inFIG.25may be used to implement the described functionality in various embodiments; for example, software components running on a variety of different devices and servers may collaborate to provide the functionality. In some embodiments, portions of the described functionality may be implemented using storage devices, network devices, or special-purpose computer systems, in addition to or instead of being implemented using general-purpose computer systems. The term “computing device”, as used herein, refers to at least all these types of devices, and is not limited to these types of devices. Conclusion Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Generally speaking, a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM, volatile or non-volatile media such as RAM (e.g. SDRAM, DDR, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link. The various methods as illustrated in the Figures and described herein represent exemplary embodiments of methods. The methods may be implemented in software, hardware, or a combination thereof. The order of the methods may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. 
Various modifications and changes may be made as would be obvious to a person skilled in the art having the benefit of this disclosure. It is intended to embrace all such modifications and changes and, accordingly, the above description is to be regarded in an illustrative rather than a restrictive sense.
Throughout the drawings, the same or similar reference numerals represent the same or similar elements. DETAILED DESCRIPTION Principles of the present disclosure will now be described with reference to some example embodiments. It is to be understood that these embodiments are described only for the purpose of illustration and help those skilled in the art to understand and implement the present disclosure, without suggesting any limitation as to the scope of the disclosure. The disclosure described herein can be implemented in various manners other than the ones described below. In the following description and claims, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. As used herein, the term “communication network” refers to a network that follows any suitable communication standards or protocols such as long term evolution (LTE), LTE-Advanced (LTE-A) and 5G New Radio (NR), and employs any suitable communication technologies, including, for example, Multiple-Input Multiple-Output (MIMO), OFDM, time division multiplexing (TDM), frequency division multiplexing (FDM), code division multiplexing (CDM), Bluetooth, ZigBee, machine type communication (MTC), enhanced Mobile Broadband (eMBB), massive Machine Type of Communication (mMTC) and ultra Reliable & Low Latency Communication (uRLLC) technologies. For the purpose of discussion, in some embodiments, the LTE network, the LTE-A network, the 5G NR network or any combination thereof is taken as an example of the communication network. As used herein, the term “network device” refers to any suitable device at a network side of a communication network. 
The network device may include any suitable device in an access network of the communication network, for example, including a base station (BS), a relay, an access point (AP), a node B (NodeB or NB), an evolved NodeB (eNodeB or eNB), a gigabit NodeB (gNB), a Remote Radio Unit (RRU), a radio header (RH), a remote radio head (RRH), a low power node such as a femto, a pico, and the like. For the purpose of discussion, in some embodiments, the eNB is taken as an example of the network device. The network device may also include any suitable device in a core network, for example, including multi-standard radio (MSR) radio equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), Multi-cell/multicast Coordination Entities (MCEs), Mobile Switching Centers (MSCs) and MMEs, Operation and Management (O&M) nodes, Operation Support System (OSS) nodes, Self-Organization Network (SON) nodes, positioning nodes, such as Enhanced Serving Mobile Location Centers (E-SMLCs), and/or Mobile Data Terminals (MDTs). As used herein, the term “terminal device” refers to a device capable of, configured for, arranged for, and/or operable for communications with a network device or a further terminal device in a communication network. The communications may involve transmitting and/or receiving wireless signals using electromagnetic signals, radio waves, infrared signals, and/or other types of signals suitable for conveying information over air. In some embodiments, the terminal device may be configured to transmit and/or receive information without direct human interaction. For example, the terminal device may transmit information to the network device on predetermined schedules, when triggered by an internal or external event, or in response to requests from the network side. 
Examples of the terminal device include, but are not limited to, user equipment (UE) such as smart phones, wireless-enabled tablet computers, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), and/or wireless customer-premises equipment (CPE). For the purpose of discussion, in the following, some embodiments will be described with reference to UEs as examples of the terminal devices, and the terms “terminal device” and “user equipment” (UE) may be used interchangeably in the context of the present disclosure. As used herein, the term “cell” refers to an area covered by radio signals transmitted by a network device. The terminal device within the cell may be served by the network device and access the communication network via the network device. As used herein, the term “circuitry” may refer to one or more or all of the following:(a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry); and(b) combinations of hardware circuits and software, such as (as applicable): (i) a combination of analog and/or digital hardware circuit(s) with software/firmware and (ii) any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions); and(c) hardware circuit(s) and/or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g., firmware) for operation, but the software may not be present when it is not needed for operation. This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware. 
The term circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device or a similar integrated circuit in a server, a cellular network device, or other computing or network device. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. The term “includes” and its variants are to be read as open terms that mean “includes, but is not limited to”. The term “based on” is to be read as “based at least in part on”. The term “one embodiment” and “an embodiment” are to be read as “at least one embodiment”. The term “another embodiment” is to be read as “at least one other embodiment”. Other definitions, explicit and implicit, may be included below. As mentioned hereinbefore, the latency is critical to user experiences regarding those applications, like AR/VR interactive applications; however, it is hard to achieve the latency requirement. The inventor notices that there are many factors affecting the latency, and amongst others, the network topology is one of the key factors. For illustrative purposes,FIG.1shows three different network topologies and their associated latencies. Specifically,FIG.1illustrates terminal devices110, base station/network edge device120, application point130, the core network140, Internet150, Application (CSP)160, Application (“Over The Top (OTT)”)170and three network topologies for providing service to terminal devices. In network topology 1, the latency sensitive application is located in the base station/network edge device120and the latency is about 1 ms to 2 ms. In network topology 2, the application is located in the telecommunication cloud and the latency increases up to 20 to 50 ms. In network topology 3, the application is located in the public cloud and the latency reaches 50 ms to 100 ms. 
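The three latency ranges quoted for FIG. 1 can be captured in a small table, and a simple check then shows which topologies satisfy a given application's latency budget. The topology names and the selection function are illustrative additions, not part of the disclosure.

```python
# Latency ranges (ms) quoted for the three network topologies of FIG. 1.
TOPOLOGY_LATENCY_MS = {
    "network-edge": (1, 2),     # application at the base station / network edge
    "telco-cloud": (20, 50),    # application in the telecommunication cloud
    "public-cloud": (50, 100),  # application in the public cloud
}

def feasible_topologies(latency_budget_ms):
    """Topologies whose worst-case latency fits the application's budget."""
    return [name for name, (_, worst) in TOPOLOGY_LATENCY_MS.items()
            if worst <= latency_budget_ms]
```

For an AR/VR application needing, say, under 10 ms round-trip, only the network-edge topology qualifies, which is exactly the observation the next paragraph draws.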
Thus, it can be seen that imperceptible latencies for an AR/VR interactive experience can only be achieved when the application is hosted at the edge of the network. As a common system for all IP-based rich communication services (RCS), the IMS (IP Multimedia Subsystem) contributes to multimedia content delivery across packet networks and provides a unified service management and session control mechanism for applications. The IMS maintains a set of application servers to offer a set of services to the customer and provides the wide range of RCS experiences offered by the service providers. Examples of communication services include, but are not limited to, high-definition (HD) voice (Voice over Long Term Evolution (VoLTE)), Wi-Fi calling, enriched messaging, enriched calling with pre-call info, video calling, HD video conferencing and web communication (e.g., WebRTC, etc.). The IMS has the capability to provide real-time large data transmission services like VR and AR. Moreover, the Session Border Controller (SBC)/IMS could embed voice, video calls and RCS messages to/from any phone within a VR/AR environment, connecting the virtual reality to the real world, rather than the typical isolated communication experiences available through the existing VR/AR platforms. In addition, a concept was introduced that uses mobile-edge computing to provide information technology (IT) and cloud-computing capabilities within the Radio Access Network (RAN) in close proximity to mobile subscribers. Mobile-edge computing allows content, services and applications to be accelerated, increasing responsiveness from the edge. Thus, the inventor believes that the RAN edge cloud could be used to offer a service environment with ultra-low latency and high bandwidth as well as direct access to real-time radio network information.
Considering the IMS's capability of providing a unified service experience, it seems an advisable solution to deploy the IMS at the edge cloud of the RAN to achieve ultra-low latency for interactive application experiences. However, if the IMS is deployed at the edge cloud, the data quantity will be remarkably large and the architecture will be rather complicated, which would lead to huge unnecessary bandwidth and computing resource usage. Moreover, the service management will also be quite complicated. All of these would deviate from the original aims of edge computing, like reducing latency, ensuring highly efficient network operation and service delivery, etc. Thus, there is a need for a solution for supporting latency-sensitive applications in a simple way. On one hand, it is expected that the architecture shall be simple and the database shall be small; on the other hand, it will also be desirable if the application server (AS) deployment could support real-time application services without occupying too many edge resources. Embodiments of the present disclosure provide a new solution for dynamic management of application servers. The basic idea is to exploit a management function module at the network side and predetermined management policies to dynamically manage a calling function of the IMS and application servers on an edge cloud for latency-sensitive applications or other applications requiring such management. In this way, an intelligent decision can be made as to whether appropriate IMS network elements and AS should be deployed to the edge cloud according to application requirements, thereby utilizing links/resources efficiently. In various embodiments of the present disclosure, the management function module transmits a calling function deployment command to a network edge computing device.
The network edge computing device deploys, in response to the calling function deployment command, a calling function instance on the network edge computing device, the calling function instance being configured to connect with application servers for respective applications and to handle communications between terminal devices and the application servers. In response to a network condition meeting one of the predetermined management policies, the management function module may transmit, to at least one of the network edge computing device or the calling function instance, a management command for application servers on the network edge computing device. Upon receipt of the management command, the network edge computing device or the calling function instance performs a management operation related to the application servers on the edge device, such as adding, deleting or modifying an AS (for example, modifying one or more parameters of an AS) and/or creating, deleting, or modifying a calling function instance (for example, modifying one or more parameters of the RCS-CF). In this way, the application servers can be managed dynamically for certain applications and only the calling function of the IMS is required to be installed on the network edge computing device. Thus, an ultra-low latency requirement can be met with acceptable edge resources. In embodiments of the present disclosure, the management function module will be described as a part of the IMS, which may be referred to as the RCS-MF, and the RCS-MF will be taken as an example to describe the management function module. The network edge computing device could be implemented as or within an edge cloud, and the edge cloud will be taken as an example to describe the network edge computing device.
However, it shall be appreciated that the present disclosure is not limited thereto; the management function module could also be implemented as a similar module other than the RCS-MF, and the network edge computing device could be implemented as an edge computing device other than the edge cloud. It shall also be appreciated that the present disclosure is proposed mainly for latency-sensitive applications such as AR/VR applications, drone-based package delivery, drone-based surveillance, discrete automation with motion control, remote driving and the like, and AR/VR applications might be taken as an example of the latency-sensitive applications. The present disclosure is not limited thereto, and it could also be applicable to other applications whose performance can be improved by dynamic management. For example, if the traffic is too high for a particular type of application, the solution proposed herein could be used to dynamically manage its application service resources as well. FIG. 2A schematically illustrates an example network architecture for the application server management according to an embodiment of the present disclosure. In the illustrated architecture, terminal devices 210 are connected to a radio access network (RAN) 220 (for example, a 5G RAN) having a network edge computing device such as an edge cloud 222. The edge cloud 222 could create, delete, or modify a calling function instance such as an RCS-CF 224 thereon and add, delete or modify application servers like AS #1 to AS #5 in response to a management command from a management function module such as an RCS-MF 234. AS #1 to AS #5 could be application servers for VR/AR applications, industry automation, tele-operation, remote control, remote driving, etc. In the illustrated architecture, the RCS-MF 234 is located on another communication system such as a core network 230.
On the core network 230, the RCS-MF 234 is deployed on the Policy Control Function (PCF) module 232, and an IMS core 238 is deployed on a telecommunication cloud 236 and could be a third-party application server. The RCS-MF 234 could command the edge cloud 222 and the RCS-CF 224 on the edge cloud 222 to dynamically manage the application servers AS #1 to AS #5 for respective applications. It shall be appreciated that although the RCS-MF is described as being deployed on the PCF in the core network 230, the present disclosure is not limited thereto. In some embodiments of the present disclosure, the RCS-MF could also be located on entities other than the PCF, or even outside the core network 230. FIG. 2B schematically illustrates another example network architecture for the application server management according to an embodiment of the present disclosure. The illustrated architecture is similar to the architecture illustrated in FIG. 2A, but the difference lies in that the RCS-MF 234 is located on the edge cloud of the RAN 220 (5G RAN) instead of the core network 230. In the architecture illustrated in FIG. 2B, the RCS-MF 234 could only manage the RCS-CF and AS on the edge cloud 222, while in the architecture illustrated in FIG. 2A, the RCS-MF 234 could further manage RCS-CFs on edge clouds other than the edge cloud 222. It shall be appreciated that the example architecture illustrated in FIG. 2A or 2B is only given for illustrative purposes and the present disclosure is not limited thereto. In practical applications, a different architecture could also be exploited. For example, it is possible to use a mixed architecture for a plurality of edge clouds, wherein some RCS-MFs use the architecture as illustrated in FIG. 2A and other RCS-MFs use the architecture as illustrated in FIG. 2B. For another example, for the same edge cloud, some types of application servers could be managed by an RCS-MF on the edge cloud, and other types of application servers can be managed by an RCS-MF on the core network.
In addition, for one edge cloud, it is also possible to deploy the RCS-MF on both the edge cloud and the core network to provide a redundant backup, such that when one of the two RCS-MFs fails, the other one could take over the management. It shall be appreciated that although the RCS-MF could be located on either the edge cloud or the core network, the RCS-MF could have a similar structure in both cases. Hereinafter, reference will be made to FIG. 3 to describe an example implementation of an RCS-MF module according to some embodiments of the present disclosure. FIG. 3 schematically illustrates an example implementation of an RCS-MF module according to some embodiments of the present disclosure. As illustrated in FIG. 3, the RCS-MF module 300 may include four modules: a policy module 310, a communication module 320, a database module 330 and an execution logic module 340. The policy module 310 is configured to make management policies. To manage the application servers, the RCS-MF sets up criteria and rules of ultra-low latency for use cases and AS resources. For example, the policies could be based on provisioning or on input from other network elements, such as Unified Data Management (UDM), User Plane Function (UPF), RCS-CF, etc. The management decisions made by the RCS-MF could be based on the policies. The policy module 310 could further perform network condition monitoring and latency estimation for applications or various types of applications, and determine whether the latency meets one of the predetermined management policies, for example determine whether an RCS Calling Function (RCS-CF) exceeds the criteria and thresholds with a policy-guided function (from the RCS-MF), so that the RCS-MF could determine how to manage the application servers for the respective applications. It shall be noticed that the network condition monitoring, the latency estimating and the determining could also be performed by any other suitable module in the RCS-MF.
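The four-module composition of the RCS-MF described above can be sketched in code. This is an illustrative sketch only, not an implementation prescribed by the disclosure; all class and method names (RcsMf, deploy_rcs_cf, and so on) are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class PolicyModule:
    """Makes and stores management policies (criteria and thresholds)."""
    policies: dict = field(default_factory=dict)

    def add_policy(self, app_id, **criteria):
        self.policies[app_id] = criteria

@dataclass
class DatabaseModule:
    """Stores information on edge clouds, RCS-CF and AS instances."""
    edge_clouds: dict = field(default_factory=dict)
    rcs_cf_instances: dict = field(default_factory=dict)
    as_instances: dict = field(default_factory=dict)

@dataclass
class CommunicationModule:
    """Carries messages between the RCS-MF and edge clouds / RCS-CFs."""
    sent: list = field(default_factory=list)

    def send(self, target, message):
        self.sent.append((target, message))

class ExecutionLogicModule:
    """Life-cycle management for RCS-CF and AS instances."""
    def __init__(self, policy, database, comm):
        self.policy, self.database, self.comm = policy, database, comm

    def deploy_rcs_cf(self, edge_cloud_id):
        # Look up the edge cloud's address and send a deployment command.
        address = self.database.edge_clouds[edge_cloud_id]
        self.comm.send(address, {"command": "create_rcs_cf"})

class RcsMf:
    """The management function module composed of the four sub-modules."""
    def __init__(self):
        self.policy = PolicyModule()
        self.database = DatabaseModule()
        self.comm = CommunicationModule()
        self.logic = ExecutionLogicModule(self.policy, self.database, self.comm)
```

In this sketch the execution logic module coordinates the other three, mirroring the text: it reads the database for addresses and sends commands through the communication module.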
The database module 330 stores information on the edge cloud, the RCS-CF and the AS. In a network architecture in which the RCS-MF is installed on an edge cloud, the database module 330 stores only information on this edge cloud and the corresponding RCS-CF and AS instances thereon. If the RCS-MF is installed on the PCF in the core network, the RCS-MF may have access to multiple edge clouds under control of this PCF, and the database module 330 may store the information on these edge clouds as well as their RCS-CF and AS instances. The execution logic module 340 interacts with the policy module 310, the database module 330 and the communication module 320 and is responsible for life cycle management of the RCS-CF and AS. In some embodiments of the present disclosure, the execution logic module 340 could command the edge cloud to add, delete or scale an RCS-CF, or add, delete or scale an AS. In a simple implementation, the RCS-CF could be deployed during initialization of the RCS-MF and exist for the whole life cycle of the RCS-MF. Alternatively, the RCS-CF could also be added, deleted or scaled when a predetermined deployment condition is detected. For example, the predetermined deployment condition can be input in advance and the PCF could detect the network condition. The execution logic module 340 could determine whether any predetermined deployment condition is met based on the detected network condition, and command the edge cloud to deploy an RCS-CF instance if the predetermined deployment condition is met. The AS's life cycle could be dynamically controlled by the execution logic module 340. The AS instances could be added, deleted or scaled based on the predetermined management policies and the network condition. The execution logic module 340 could also be responsible for updating the information stored in the database module. FIG. 4 schematically illustrates an example implementation of a network edge cloud device according to some embodiments of the present disclosure.
As illustrated in FIG. 4, the network edge cloud device could be an edge cloud 400 including a communication module 410 and a command execution module 420. The communication module 410 is configured to enable the communication between the edge cloud 400 and the RCS-MF 300. For different network architectures, the communication module 410 may have somewhat different implementations. In some embodiments of the present disclosure, the RCS-MF is installed on the edge cloud and thus the communication module is configured to enable messages to be transferred within the application layer. In other embodiments of the present disclosure, the RCS-MF is installed on the PCF in the core network, and in such an architecture, the communication module is an independent module and provides APIs of the RCS-MF to support messages from/to the RCS-MF. The command execution module 420 is configured to perform, upon receipt of a management command from the RCS-MF, management operations on instances, for example allocating or de-allocating resources of RCS-CF or AS instances, or modifying the RCS-CF or AS. The command execution module 420 could initiate these instances upon receiving a deployment command from the RCS-MF. The RCS-CF instance 430 could be created by the command execution module 420 during the initialization of the RCS-MF or upon detection of a predetermined deployment condition. The RCS-CF includes IMS network elements that are capable of performing call functionalities and connecting to an AS. In some embodiments of the present disclosure, the RCS-CF instance 430 includes a Session Border Controller (SBC) 432 and a Serving Call Session Control Function (S-CSCF) 434. The SBC 432 is located between the 5G RAN and the IMS network and functions as an access point for the terminal device to access the IMS. The SBC could solve many problems, making the IMS a powerful element, including problems concerning, for example, multiple access networks (e.g.
IPv4 and IPv6, Session Initiation Protocol (SIP) normalization, Virtual Private Networks (VPNs)), security issues (Denial of Service (DoS) attacks, topology hiding), legislative issues (emergency calls, legal intercept, interworking), media-related problems (Quality of Service (QoS), transcoding, media security), and so on. Moreover, the SBC also plays the role of the Proxy-Call Session Control Function (P-CSCF). The S-CSCF 434 is responsible for the signaling control during a session. Both AS instances on the edge cloud and AS instances on other network devices are connected to the S-CSCF 434 so that a session could be established between a terminal device and a suitable AS instance. The S-CSCF handles SIP registrations and decides to which application server(s) a SIP message will be forwarded so as to provide services to the terminal devices. Thus, the SBC gives service providers an economical and reliable means to secure and control the media and signaling streams that cross the edges of an IMS network. The SBC may be located at the edge of access networks to secure any type of IP access and deliver IMS services. The SBC may also be arranged at the edge of peer networks to support roaming, IMS interconnection or corporate SIP trunking scenarios. In addition, the RCS-CF 430 may be configured with the predetermined policies. In such a case, the RCS-CF 430 may perform network condition monitoring and latency estimation for applications or various types of applications, determine whether the latency meets one of the predetermined management policies, and send a notification to the RCS-MF based on the policies. Thus, deploying the SBC and the S-CSCF in the RCS-CF could fulfil the goal that the RCS-CF is able to handle calls and connect to an AS.
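The RCS-CF's division of labor between the SBC (access point) and the S-CSCF (session control and AS selection) can be illustrated with a minimal sketch. All names here are illustrative assumptions; real SBC and S-CSCF behavior involves full SIP processing that this sketch does not attempt.

```python
class SCscf:
    """Minimal S-CSCF stand-in: routes a request to the AS serving its app."""
    def __init__(self):
        self.connected_as = {}  # app_id -> AS handler callable

    def connect_as(self, app_id, server):
        # Register an AS (on the edge cloud or elsewhere) for an application.
        self.connected_as[app_id] = server

    def route(self, sip_request):
        # Decide to which application server the SIP message is forwarded.
        server = self.connected_as[sip_request["app_id"]]
        return server(sip_request)

class RcsCf:
    """Calling function instance: SBC as access point, S-CSCF for control."""
    def __init__(self):
        self.s_cscf = SCscf()

    def sbc_ingress(self, sip_request):
        # The SBC is the terminal device's access point into the IMS; it
        # hands the request to the S-CSCF for session control.
        return self.s_cscf.route(sip_request)
```

A session setup then flows terminal device → SBC → S-CSCF → selected AS, matching the description above.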
The SBC and S-CSCF in the RCS-CF 430 could have substantially the same structure as their existing counterparts; however, in some embodiments of the present disclosure, they could be further provisioned with some predetermined management rules, which will be described hereinafter. FIG. 5 illustrates an example flow chart of a method of managing application servers on a network edge computing device according to some embodiments of the present disclosure. The method 500 could be implemented in the RCS-MF module 234, 300, which may be located on either the edge cloud 222 in a radio access network like the 5G RAN 220 or the PCF 232 in the core network 230. For the purpose of discussion, the method 500 will be described with reference to FIG. 3. At block 510, the management function module, like the RCS-MF, transmits a calling function deployment command to a network edge computing device such as the edge cloud, to deploy a calling function instance on the network edge computing device, the calling function instance being configured to connect with application servers for respective applications and to handle communications between terminal devices and the application servers. In some embodiments of the present disclosure, during initialization of the RCS-MF 300, the execution logic module 340 decides to create an RCS-CF. The execution logic module 340 accesses the database module 330 to get the address information of the edge cloud and sends a calling function deployment command to the edge cloud through the communication module 320 to create an RCS-CF instance. The RCS-CF includes IMS network elements that are capable of performing call functionalities and connecting to an AS, and deploying an SBC and an S-CSCF in the RCS-CF fulfils the goal that the RCS-CF is able to handle calls and connect to an AS. In some embodiments of the present disclosure, the RCS-CF could be provisioned with some predetermined management rules to perform network condition monitoring operations based on those rules.
In such a case, the RCS-MF may first transmit a calling function creation command to the network edge computing device to create the calling function instance on the network edge computing device, and then transmit a calling function configuration command containing the predetermined management policies to the created calling function instance to configure it with the predetermined management policies. The RCS-MF 300, particularly the execution logic module 340, is responsible for sending a configuration command through the communication module 320 to the RCS-CF. The configuration command may contain the management policies from the RCS-MF. The configuration command can be initiated either upon the initialization of the RCS-MF or upon a policy update. The execution logic module 340 could decide when to initiate the request. The management policies are made by the policy module 310, and they could define aspects of the decisions that should be made by the RCS-CF, with the aim of dynamically utilizing the AS resources to achieve maximum utilization of the edge resources. Both the RCS-MF and the RCS-CF are aware of the mapping relationship. For illustrative purposes, Table 1 shows an example of management policies towards a specific RCS-CF and decisions of the RCS-CF in certain scenarios according to some embodiments of the present disclosure. In Table 1, parameter "AppId" denotes an identity that represents an application type, which usually refers to an application that requires ultra-low latency; parameter "TrThresH" denotes a high traffic rate threshold, above which the RCS-CF needs to handle the condition within the edge cloud; parameter "TrThresL" denotes a low traffic rate threshold, below which the traffic should be routed to the core network; parameter "LaThres" denotes a latency threshold for a specific application; and parameter "TrTimer" denotes a timer value.
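The threshold parameters above can be sketched as a simple policy check that an RCS-CF might run against its observed traffic rate and latency. This is a hedged sketch: the function name and the returned decision strings are assumptions, and only the TrThresH/TrThresL/LaThres/TrTimer semantics follow the text.

```python
import time

def evaluate_policy(policy, traffic_rate, latency_ms,
                    below_low_since=None, now=None):
    """Return the notification the RCS-CF should send to the RCS-MF, or None.

    policy: dict with any of TrThresH, TrThresL, TrTimer (seconds), LaThres (ms).
    below_low_since: timestamp when traffic first dropped below TrThresL.
    """
    now = now if now is not None else time.time()
    # High-traffic rule: request AS allocation on the edge cloud.
    if "TrThresH" in policy and traffic_rate >= policy["TrThresH"]:
        return "allocate_as_on_edge"
    # Latency rule: request allocation once user experience is impacted.
    if "LaThres" in policy and latency_ms > policy["LaThres"]:
        return "allocate_as_on_edge"
    # Low-traffic rule: request de-allocation after TrTimer has elapsed
    # without the traffic reaching TrThresL again.
    if "TrThresL" in policy and traffic_rate < policy["TrThresL"]:
        if below_low_since is not None and \
                now - below_low_since >= policy["TrTimer"]:
            return "deallocate_as_on_edge"
    return None
```

The three branches correspond to the three rows of Table 1 below: high traffic and excessive latency both trigger an allocation request, while sustained low traffic triggers a de-allocation notification.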
TABLE 1
An example of management policies towards a specific RCS-CF and decisions of the RCS-CF in certain scenarios

RCS-MF Policy: AppId, TrThresH
RCS-CF Decision: The initial state is that the RCS-CF handles this type of request through the core network. Once the traffic rate of requests reaches TrThresH, the RCS-CF sends a request to the RCS-MF for allocating an application server on the edge cloud.

RCS-MF Policy: AppId, TrThresL, TrTimer
RCS-CF Decision: The initial state is that the RCS-CF handles this type of request within the AS allocated on the edge cloud. Once the traffic rate drops below TrThresL and, after TrTimer, the traffic does not reach TrThresL again, the RCS-CF sends a notification to the RCS-MF for de-allocating this application server.

RCS-MF Policy: AppId, LaThres
RCS-CF Decision: The initial state is that the RCS-CF handles this type of request through the core network. Once the RCS-CF detects that the latency of this application exceeds LaThres, which means the latency has already impacted user experience, the RCS-CF sends a request to the RCS-MF for allocating an application server on the edge cloud.

Table 1 respectively specifies the conditions under which the RCS-CF shall request the RCS-MF to add a new AS or delete an existing AS, wherein the first and third rows are policies for sending a notification of adding a new AS and the second row is a policy for sending a notification of deleting an AS. It shall be appreciated that the policies illustrated in Table 1 are only given for illustrative purposes and the present disclosure is not limited thereto. In some embodiments, it is also possible to contain different policies, more policies, or fewer policies. For example, the table may further contain one or more policies for RCS-CF instance adding, deleting or modifying, AS modifying, etc. For illustrative purposes, FIG. 6 illustrates an example signaling flow chart of RCS-CF deployment according to some embodiments of the present disclosure. As illustrated in FIG. 6, first at step 610, the RCS-MF sends a request/command to the edge cloud to create an RCS-CF.
In response to the received request/command, the edge cloud allocates resources for the RCS-CF and initializes an RCS-CF instance in step 620. After finishing the creation of the RCS-CF, in step 630, the edge cloud sends back a reply message to inform the RCS-MF that the RCS-CF is successfully allocated. This reply message includes information on the newly created RCS-CF. The RCS-MF obtains the RCS-CF's information from the reply message and updates the database module accordingly. Then, in step 640, the RCS-MF may further send a configuration request/command to the RCS-CF, and the request/command contains the predetermined management policies used for RCS-CF provisioning. After the RCS-CF's provisioning with the predetermined management policies, the RCS-CF sends, in step 650, a configuration reply to notify the RCS-MF that the RCS-CF was provisioned as required. In those embodiments in which the calling function instance is configured with the predetermined management policies, the RCS-MF may further receive a network condition notification from the calling function instance on the edge cloud. The network condition notification may include information indicating that the network condition meets one of the predetermined management policies. The RCS-MF may transmit the management command in response to receiving the network condition notification. Thus, in the present disclosure, the RCS-CF could monitor the network condition based on these policies, but the RCS-MF still makes the decision whether to perform resource management on the edge cloud. In addition, the PCF could also obtain the network condition, and thus the network condition monitoring could also be performed by the PCF. Referring back to FIG. 5, at block 520, the management function module transmits, to at least one of the network edge computing device and the calling function instance, a management command for application servers on the network edge computing device.
This is done in response to a network condition meeting one of the predetermined management policies. In some embodiments of the present disclosure, the management command may include: an application server allocation command to add a new application server; an application server de-allocation command to delete an application server; an application server modification command to modify an application server; a calling function creation command to create a calling function instance; a calling function deletion command to delete a calling function instance; a calling function modification command to modify a calling function instance; an application server connection command to connect an application server to a calling function instance; or a calling function configuration command containing management policies to configure a calling function instance with updated management policies. As mentioned before, the policy module 310 of the RCS-MF 300 makes the management policies according to its local provisioning and input from other NEs like the UDM, UPF, RCS-CF, etc. In some embodiments of the present disclosure, the PCF could monitor the network condition, and the RCS-MF's execution logic module 340 could determine, based on the management policies from the policy module 310, whether there is an ultra-low latency service requesting this AS or whether the traffic from the RCS-CF to this AS is too high to meet the latency requirement. If so, the execution logic module 340 sends a command through the communication module 320 to the edge cloud to allocate AS resources and create a new AS. For example, the RAN (for example, a 5G RAN) where the edge cloud is located may cover the area of a mall which opens from 9:00 AM to 10:00 PM and serves a very large crowd during that time. The mall provides an AR shopping experience and thus a large number of AR service requests are expected during the period from 9:00 AM to 10:00 PM.
In this case, a management command may be sent to the edge cloud to allocate an AR AS under control of the RCS-MF to meet the service requirements. In some embodiments of the present disclosure, the RCS-CF could be provisioned with some predetermined management rules, for example those given in Table 1, and the RCS-CF may monitor the network condition based on the provided policies and transmit a network condition notification to the management function module when the network condition meets one of the predetermined management policies. The execution logic module 340 could decide whether to transmit a management command to the edge cloud to dynamically manage the RCS-CF or AS. When the policies or network condition change, the RCS-MF may also send another command to add, modify or de-allocate AS resources on the edge cloud. For example, if there is a sport event which uploads real-time VR streams, the RCS-MF could command the edge cloud to allocate the VR AS before the event starts and de-allocate the VR AS after the event ends. Thus, when the sport event ends, the related VR AS will be terminated. For illustrative purposes, FIG. 7 illustrates an example signaling flow chart of AS allocation or modification according to some embodiments of the present disclosure. As illustrated in FIG. 7, first in step 710, the RCS-MF sends an AS allocation or modification request/command to inform the edge cloud that an AS needs to be created or updated. Upon receipt of the request/command, the edge cloud creates or updates the AS. Thereafter, the edge cloud sends a reply to this AS allocation or modification request/command. This reply message includes the newly created or updated AS information. The RCS-MF will update the database module with the information on the AS. FIG. 8 further illustrates an example signaling flow chart of AS de-allocation according to some embodiments of the present disclosure.
As illustrated in FIG. 8, first in step 810, the RCS-MF sends an AS de-allocation request/command to inform the edge cloud that an AS needs to be deleted. Upon receipt of the request/command, the edge cloud de-allocates the AS. Thereafter, the edge cloud sends a reply to this AS de-allocation request/command. This reply can be just an ACK and does not need to include any AS information. The RCS-MF will delete the information of this AS from the database module after receiving this reply. FIG. 9 illustrates an example signaling flow chart of operations for RCS-CF configuration and AS connection according to some embodiments of the present disclosure. As illustrated in FIG. 9, first in step 910, the RCS-MF sends a request/command to provision the RCS-CF, and then in step 920, the RCS-CF sends back a reply to the request to inform the RCS-MF that the policies are accepted. In step 930, the RCS-MF sends an AS allocation request to the edge cloud. The edge cloud creates and initializes AS resources as requested in step 940 and sends a reply to the AS allocation request in step 950. The reply contains information on the created AS, and the RCS-MF updates the database module with the information on the AS. Next, in step 960, the RCS-MF sends an AS connection request to the RCS-CF to ask the RCS-CF to set up a connection to the AS. This AS connection request includes the information on the AS to be connected. The RCS-CF sends a connection request to the target AS to set up a connection to the AS in step 970. Thereafter, in step 980, the AS sends back a reply to the edge cloud after the connection is set up, and in turn the RCS-CF sends a reply to the RCS-MF in step 990 to inform the RCS-MF that the connection is successfully set up. The RCS-MF updates the database module with the information on the connection.
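The FIG. 7 and FIG. 8 exchanges can be sketched as an edge cloud that allocates or de-allocates an AS and an RCS-MF database that stays in sync with the replies. All class and field names here are illustrative assumptions; the disclosure specifies only the request/reply behavior.

```python
import itertools

class EdgeCloudAs:
    """Edge-cloud side: allocates and de-allocates AS instances on command."""
    def __init__(self):
        self._ids = itertools.count(1)
        self.servers = {}

    def allocate_as(self, app_id):
        # Create the AS and reply with the newly created AS information.
        as_id = "as-%d" % next(self._ids)
        self.servers[as_id] = {"app_id": app_id}
        return {"as_id": as_id, "app_id": app_id}

    def deallocate_as(self, as_id):
        # Delete the AS; the reply can be just an ACK without AS information.
        del self.servers[as_id]
        return {"ack": True}

class RcsMfDatabase:
    """RCS-MF side: updates its database module from the replies."""
    def __init__(self):
        self.as_info = {}

    def on_allocate_reply(self, reply):
        self.as_info[reply["as_id"]] = reply      # store new AS info

    def on_deallocate_reply(self, as_id, reply):
        if reply.get("ack"):
            self.as_info.pop(as_id, None)         # delete the AS record
```

The sketch keeps the two sides' state consistent the same way the text describes: allocation replies carry AS information that the RCS-MF records, while de-allocation replies are bare acknowledgements that trigger deletion of the stored record.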
In some embodiments of the present disclosure, in addition to the AS management commands illustrated hereinabove, the RCS-MF may further dynamically manage the RCS-CF, for example, transmit a calling function creation command to add a new RCS-CF, transmit a calling function deletion command to delete an RCS-CF, or transmit a calling function modification command to modify the RCS-CF when the RCS-CF deployment condition is met, is not met, or the network condition changes. In some other embodiments of the present disclosure, an RCS-CF could be deployed during initialization of the RCS-MF and exist for the whole life cycle of the RCS-MF. FIG. 10 illustrates an example flow chart of a method of performing management operations related to application servers on a network edge computing device according to some embodiments of the present disclosure. The method 1000 may be implemented at a network edge computing device like the edge cloud, or at other devices. As illustrated in FIG. 10, first at block 1010, the edge cloud may deploy, in response to receipt of a calling function deployment command from a management function module such as the RCS-MF, a calling function instance on the network edge computing device. The calling function instance, such as an RCS-CF, may be configured to connect with application servers for respective applications and handle communications between terminal devices and the application servers. In order to dynamically manage resources for instances of the AS and RCS-CF on the edge cloud, the edge cloud shall contain functionalities for handling calls and connecting to AS instances. Thus, during the initialization of the RCS-MF or upon detection of a predetermined RCS-CF deployment condition, a calling function deployment command can be transmitted to the edge cloud to create one or more RCS-CF instances on the edge cloud. In some embodiments of the present disclosure, the RCS-CF includes an SBC and an S-CSCF.
The SBC is located between the RAN (like a 5G RAN) and the IMS network and functions as an access point for the terminal device to access the IMS. The S-CSCF is responsible for the signaling control during a session. Both AS instances on the edge cloud and AS instances on other network devices are connected to the S-CSCF, so that a session could be established between a terminal device and a suitable AS instance. Thus, the SBC and the S-CSCF in the RCS-CF could fulfil the goal of handling calls and connecting to AS instances. Upon receipt of the calling function deployment command, the edge cloud allocates resources for an RCS-CF to create an RCS-CF instance including an SBC and an S-CSCF. In some embodiments of the present disclosure, the RCS-CF could be provisioned with some predetermined management policies, for example those given in Table 1. In these embodiments, the RCS-CF could receive a calling function configuration command containing the predetermined management policies and, in response to this, the calling function instance will be configured with the predetermined management policies. With the provisioned policies, the RCS-CF may monitor the network condition and transmit a network condition notification to the management function module when the network condition meets one of the predetermined management policies. Particularly, the RCS-CF could monitor the network condition, estimate latency for an application or a type of applications, and determine whether the network condition meets one of the predetermined management policies based on the estimated latency and the predetermined management policies. Thus, the network condition notification indicates that the network condition meets one of the predetermined management policies.
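A minimal sketch of the monitoring behaviour just described, assuming a high-latency policy per application type. The threshold values, units, and all names are assumptions for illustration, not the disclosed implementation.

```python
# Illustrative sketch of the RCS-CF monitoring loop: estimate latency per
# application type and flag policy matches. Thresholds and names are assumed.

HIGH_LATENCY_POLICIES_MS = {"AR": 20, "VR": 15}  # assumed thresholds, in ms

def estimate_latency_ms(samples_ms):
    """Estimate latency for one application type from recent samples."""
    return sum(samples_ms) / len(samples_ms)

def check_policies(samples_by_app, policies=HIGH_LATENCY_POLICIES_MS):
    """Return (app, latency) pairs whose estimated latency exceeds the
    high-latency threshold -- the conditions for which the RCS-CF would
    send a network condition notification to the RCS-MF."""
    notifications = []
    for app, samples in sorted(samples_by_app.items()):
        latency = estimate_latency_ms(samples)
        if app in policies and latency > policies[app]:
            notifications.append((app, latency))
    return notifications

notes = check_policies({"AR": [30, 26], "VR": [10, 12]})
```

Here only the AR traffic (average 28 ms against an assumed 20 ms threshold) would trigger a notification, while VR stays within its limit.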
In block 1020, the edge cloud or the RCS-CF could perform a management operation related to the application servers on the network edge computing device, in response to receipt of a management command for application servers on the network edge computing device. The management command may comprise one or more of: an application server allocation command, an application server de-allocation command, an application server modification command, a calling function deployment command, a calling function deletion command, a calling function modification command, an application server connection command, and a calling function configuration command. In some embodiments of the present disclosure, the edge cloud could add a new application server upon receipt of an application server allocation command from the management function module. In some embodiments of the present disclosure, the edge cloud could delete an application server upon receipt of an application server de-allocation command from the management function module. In some embodiments of the present disclosure, the edge cloud could modify an application server upon receipt of an application server modification command from the management function module. In some embodiments of the present disclosure, the edge cloud could add a new calling function instance in response to receipt of a calling function deployment command from the management function module. In some embodiments of the present disclosure, the edge cloud could delete a calling function instance in response to receipt of a calling function deletion command from the management function module. In some embodiments of the present disclosure, the edge cloud could modify a calling function instance in response to receipt of a calling function modification command from the management function module.
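The command handling of block 1020 amounts to dispatching on the command type. A minimal sketch follows, with assumed command names and a toy state dictionary standing in for the edge cloud's resources.

```python
# Minimal dispatch sketch for the management commands listed above.
# Command names and the state layout are illustrative assumptions.

def handle_command(state, command, payload=None):
    """Apply one management command to a toy edge-cloud state dict."""
    if command == "AS_ALLOCATE":
        state["app_servers"].append(payload)        # add a new AS
    elif command == "AS_DEALLOCATE":
        state["app_servers"].remove(payload)        # delete an AS
    elif command == "CF_DEPLOY":
        state["calling_functions"].append(payload)  # add a new RCS-CF instance
    elif command == "CF_DELETE":
        state["calling_functions"].remove(payload)  # delete an RCS-CF instance
    elif command == "AS_CONNECT":
        state["connections"].append(payload)        # connect an AS to an RCS-CF
    else:
        raise ValueError(f"unknown management command: {command}")
    return state

state = {"app_servers": [], "calling_functions": [], "connections": []}
handle_command(state, "CF_DEPLOY", "RCS-CF-1")
handle_command(state, "AS_ALLOCATE", "AS-1")
handle_command(state, "AS_CONNECT", ("AS-1", "RCS-CF-1"))
```

The deploy/allocate/connect ordering mirrors the lifecycle in the text: a calling function instance exists before application servers are attached to it.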
In some embodiments of the present disclosure, the RCS-CF could perform a policy configuration upon receipt of a calling function configuration command containing the updated management policies from the management function module. In some embodiments of the present disclosure, the RCS-CF could connect an application server to the calling function instance upon receipt of an application server connection command from the management function module. In some embodiments of the present disclosure, the edge cloud may add a new application server in response to receipt of an application server allocation command from the management function module; the RCS-CF may connect the application server to a calling function instance in response to receipt of an application server connection command from the management function module; and the edge cloud may further delete the application server in response to receipt of an application server de-allocation command from the management function module. In some embodiments of the present disclosure, the edge cloud may add a new calling function instance in response to receipt of a calling function deployment command from the management function module, and the edge cloud may delete the calling function instance in response to receipt of a calling function deletion command from the management function module. It shall be noted that some detailed operations at the edge cloud are already described together with the operations at the RCS-MF in conjunction with FIGS. 5 to 9, and thus for some details at the edge cloud, reference could be made to those descriptions given with reference to FIGS. 5 to 9.

For illustrative purposes, FIG. 11 illustrates an example signaling flow chart of call handling according to some embodiments of the present disclosure. First, in step 1110, a UE sends a request to start a dialog. In step 1120, the RCS-CF makes a decision based on forwarding policies to determine the AS to which the dialog request shall be forwarded.
In step 1130, the RCS-CF forwards the dialog request from the UE to a corresponding AS providing related services to set up a dialog. The corresponding AS may be an AS created on the edge cloud, or an AS on other network devices. The RCS-CF does not care about the location of the AS; it just forwards the request to a corresponding AS connected thereto. In step 1140, the AS sends a reply to the dialog request, and the RCS-CF forwards the reply to the UE in step 1150. In step 1160, a call session is set up, and thus the AS could provide services to the UE in the dialog session. From FIG. 11, it can be seen that the call flow of call handling is substantially similar to the existing call flow, except that the RCS-CF may make decisions based on provisioned policies, and thus the solution proposed herein has little effect on the existing call flow.

Embodiments of the present disclosure provide a new scheme of dynamically managing resources for instances, particularly for some applications with a strict latency requirement. In embodiments of the present disclosure, the AS for these instances could be created on the edge cloud and managed dynamically, and it could meet the strict latency requirements and utilize these resources efficiently, since the edge cloud could offer a service environment with ultra-low latency and high bandwidth as well as direct access to real-time radio network information. In turn, it could bring a better and enriched user experience for 5G applications, especially interactive applications like AR/VR applications. In some embodiments of the present disclosure, there is further provided an apparatus capable of performing the method 500, and the apparatus could be, or function as, for example, a management function module like the RCS-MF as proposed herein.
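The forwarding decision of steps 1110 to 1130 can be sketched as below. Since the RCS-CF only requires that the chosen AS be connected to it, the "prefer an edge-cloud AS when one exists" preference shown here, and all names, are assumptions for illustration rather than a disclosed policy.

```python
# Sketch of the RCS-CF forwarding decision: pick a connected AS for the
# requested service. The edge-preference policy is an assumed example.

def choose_as(connected_as, service):
    """Return the AS a dialog request for `service` is forwarded to.
    `connected_as` entries are (as_id, service, on_edge) tuples. Location
    only affects preference; any connected AS can serve the request."""
    candidates = [a for a in connected_as if a[1] == service]
    if not candidates:
        return None  # no AS connected for this service
    edge = [a for a in candidates if a[2]]
    return (edge or candidates)[0][0]

servers = [("AS-core", "AR", False), ("AS-edge", "AR", True)]
target = choose_as(servers, "AR")
```

With both a core-network and an edge-cloud AR server connected, this policy would forward the dialog request to the edge instance.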
The apparatus may comprise means for transmitting, to a network edge computing device, a calling function deployment command to deploy a calling function instance on the network edge computing device, the calling function instance being configured to connect with application servers for respective applications and handle communications between terminal devices and the application servers; and means for transmitting, to at least one of the network edge computing device and the calling function instance, a management command for application servers on the network edge computing device in response to a network condition meeting one of predetermined management policies. In some embodiments of the present disclosure, the calling function instance may be configured with the predetermined management policies, and the apparatus further comprises means for receiving a network condition notification from the calling function instance on the edge cloud, the network condition notification indicating the network condition meeting the one of the predetermined management policies. In these embodiments of the present disclosure, the means for transmitting the management command may be configured to transmit the management command in response to the receiving of the network condition notification. In some embodiments of the present disclosure, the means for transmitting the calling function deployment command may be further configured to transmit the calling function deployment command during initialization of the management function module. In some embodiments of the present disclosure, the means for transmitting the calling function deployment command may be further configured to transmit the calling function deployment command when a predetermined deployment condition is detected. 
In some embodiments of the present disclosure, the means for transmitting the calling function deployment command may be further configured to: transmit a call function creation command to the network edge computing device to create the calling function instance on the network edge computing device; and transmit a call function configuration command containing the predetermined management policies to the calling function instance to configure the call function instance with the predetermined management policies. In some embodiments of the present disclosure, the means for transmitting the management command may be configured to perform one or more of: transmitting an application server allocation command to the network edge computing device to add a new application server; transmitting an application server de-allocation command to the network edge computing device to delete an application server; transmitting an application server modification command to the network edge computing device to modify an application server; transmitting a calling function creation command to the network edge computing device to add a new calling function instance; transmitting a calling function deletion command to the network edge computing device to delete a calling function instance; transmitting a calling function modification command to the network edge computing device to modify a calling function instance; transmitting an application server connection command to the calling function instance to connect an application server to a calling function instance; and transmitting a calling function configuration command containing updated management policies to a calling function instance to configure the calling function instance with the updated management policies.
In some embodiments of the present disclosure, the means for transmitting the management command may be further configured to: transmit an application server allocation command to the network edge computing device to add a new application server in response to the network condition meeting an application server adding policy; transmit an application server connection command to the calling function instance to connect the added application server to the calling function instance in response to successful adding of the application server; and transmit an application server de-allocation command to the network edge computing device to delete the application server in response to the network condition meeting an application server deletion policy. In some embodiments of the present disclosure, the means for transmitting the management command may be further configured to: transmit a calling function creation command to the network edge computing device to add a new calling function instance in response to the network condition meeting a calling function adding policy; and transmit a calling function deletion command to the network edge computing device to delete a calling function instance in response to the network condition meeting a calling function deletion policy. In some embodiments of the present disclosure, the apparatus may further comprise: means for monitoring the network condition; means for estimating latency for an application or a type of applications; and means for determining whether the network condition meets one of the predetermined management policies based on the estimated latencies and the predetermined management policies. In these embodiments, the means for monitoring the network condition, the means for estimating latency and the means for determining are contained in the management function module.
In some embodiments of the present disclosure, the means for monitoring the network condition, the means for estimating latency and the means for determining can be contained in a policy module in the management function module. In some embodiments of the present disclosure, the predetermined management policies may comprise one or more of: a high traffic rate threshold for a particular type of applications; a low traffic rate threshold and a time limit for a particular type of applications; and a high latency threshold for a particular type of applications. In some embodiments of the present disclosure, the apparatus may be implemented on the network edge computing device. In other embodiments of the present disclosure, the apparatus may be implemented on another communication network communicable with a communication network having the network edge computing device. In some embodiments of the present disclosure, the calling function instance may be a rich communication service (RCS) calling function instance, and the network edge computing device may be an edge cloud on a radio access network. In some embodiments of the present disclosure, there is further provided another apparatus for performing the method 1000, and the apparatus could be, or function as, for example, a network edge computing device like the edge cloud.
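The three policy types listed above can drive scaling decisions roughly as follows. The threshold values, defaults, and action names are assumptions for illustration only.

```python
# Sketch of how the three policy types (high traffic rate; low traffic rate
# plus time limit; high latency) could map to management actions.
# All threshold values and action names are illustrative assumptions.

def scaling_decision(traffic_rate, low_duration_s, latency_ms,
                     high_rate=100.0, low_rate=10.0, time_limit_s=300,
                     high_latency_ms=20.0):
    """Map a monitored condition to a management action:
    - traffic above high_rate, or latency above high_latency_ms -> add an AS
    - traffic below low_rate for longer than time_limit_s -> delete an AS
    - otherwise -> no action
    """
    if traffic_rate > high_rate or latency_ms > high_latency_ms:
        return "AS_ALLOCATE"
    if traffic_rate < low_rate and low_duration_s > time_limit_s:
        return "AS_DEALLOCATE"
    return None

decision = scaling_decision(traffic_rate=150.0, low_duration_s=0, latency_ms=5.0)
```

The time limit on the low-traffic policy prevents an AS from being torn down during a brief lull, which matches the pairing of a low traffic rate threshold with a time limit in the text.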
The apparatus may include means for deploying, in response to receipt of a calling function deployment command from a management function module, a calling function instance on the network edge computing device, the calling function instance being configured to connect with application servers for respective applications and handle communications between terminal devices and the application servers; and means for performing, by the network edge computing device or the calling function instance, a management operation related to application servers on the network edge computing device in response to receipt of a management command for application servers on the network edge computing device from the management function module. In some embodiments of the present disclosure, the calling function instance may be configured with predetermined management policies, and the apparatus may further comprise means for monitoring a network condition based on the predetermined management policies; and means for transmitting a network condition notification to the management function module, the network condition notification indicating the network condition meeting one of the predetermined management policies. In some embodiments of the present disclosure, the means for monitoring a network condition may be further configured to: monitor the network condition; estimate latency for an application or a type of applications; and determine whether the network condition meets one of the predetermined management policies based on the estimated latency and the predetermined management policies.
In some embodiments of the present disclosure, the means for deploying the calling function instance on the network edge computing device may comprise: means for creating, in response to receipt of a call function creation command from the management function module, the calling function instance on the network edge computing device; and means for configuring, in response to receipt of a call function configuration command containing the predetermined management policies, the created call function instance with the predetermined management policies. In some embodiments of the present disclosure, the means for performing the management operation may be configured to perform any of: adding, by the network edge computing device, a new application server in response to receipt of an application server allocation command from the management function module; deleting, by the network edge computing device, an application server in response to receipt of an application server de-allocation command from the management function module; modifying, by the network edge computing device, an application server in response to receipt of an application server modification command from the management function module; adding, by the network edge computing device, a new calling function instance in response to receipt of a calling function deployment command from the management function module; deleting, by the network edge computing device, a calling function instance in response to receipt of a calling function deletion command from the management function module; modifying, by the network edge computing device, a calling function instance in response to receipt of a calling function modification command from the management function module; connecting, by the calling function instance, an application server to a calling function instance in response to receipt of an application server connection command from the management function module; configuring, by the calling function instance,
the calling function instance with updated management policies in response to receipt of a calling function configuration command containing the updated management policies from the management function module. In some embodiments of the present disclosure, the means for performing the management operation may be further configured to: add a new application server in response to receipt of an application server allocation command from the management function module; connect the application server to a calling function instance in response to receipt of an application server connection command from the management function module; and delete the application server in response to receipt of an application server de-allocation command from the management function module. In some embodiments of the present disclosure, the means for performing the management operation may be further configured to: add a new calling function instance in response to receipt of a calling function deployment command from the management function module; and delete the calling function instance in response to receipt of a calling function deletion command from the management function module. In some embodiments of the present disclosure, the predetermined management policies may comprise one or more of: a high traffic rate threshold for a particular type of applications; a low traffic rate threshold and a time limit for a particular type of applications; and a high latency threshold for a particular type of applications. In some embodiments of the present disclosure, the management function module may be implemented on one of the network edge computing device or another communication network communicable with a communication network having the network edge computing device. In some embodiments of the present disclosure, the calling function instance is a rich communication service (RCS) calling function instance, and the network edge computing device is an edge cloud on a radio access network. 
FIG. 12 is a simplified block diagram of a device 1200 that is suitable for implementing embodiments of the present disclosure. The device 1200 can be implemented at or as at least a part of a network device in the 5G RAN 220 or the core network 230 as shown in FIGS. 2A and 2B. As shown, the device 1200 includes a processor 1210, a memory 1220 coupled to the processor 1210, a communication module 1240 coupled to the processor 1210, and a communication interface (not shown) coupled to the communication module 1240. The memory 1220 stores at least a program 1230. The communication module 1240 is for bidirectional communications. The communication interface may represent any interface that is necessary for communication with other network elements, such as an X2 interface for bidirectional communications between eNBs, an S1 interface for communication between a Mobility Management Entity (MME)/Serving Gateway (S-GW) and the eNB, a Un interface for communication between the eNB and a relay node (RN), or a Uu interface for communication between the eNB and a UE. The program 1230 is assumed to include program instructions that, when executed by the associated processor 1210, enable the device 1200 to operate in accordance with the embodiments of the present disclosure, as discussed herein with reference to FIGS. 5-9. The embodiments herein may be implemented by computer software executable by the processor 1210 of the device 1200, or by hardware, or by a combination of software and hardware. The processor 1210 may be configured to implement various embodiments of the present disclosure. The memory 1220 may be of any type suitable to the local technical network and may be implemented using any suitable data storage technology, such as a non-transitory computer readable storage medium, semiconductor based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory, as non-limiting examples.
While only one memory 1220 is shown in the device 1200, there may be several physically distinct memory modules in the device 1200. The processor 1210 may be of any type suitable to the local technical network, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and processors based on multicore processor architecture, as non-limiting examples. The device 1200 may have multiple processors, such as an application specific integrated circuit chip that is slaved in time to a clock which synchronizes the main processor.

FIG. 13 illustrates a simplified block diagram of a device 1300 that is suitable for implementing embodiments of the present disclosure. The device 1300 can be a network edge computing device like the edge cloud 221, and implemented at or as at least a part of a network device in the 5G RAN 220 as shown in FIGS. 2A and 2B. As shown, the device 1300 includes a processor 1310, a memory 1320 coupled to the processor 1310, a communication module 1340 coupled to the processor 1310, and a communication interface (not shown) coupled to the communication module 1340. The memory 1320 stores at least a program 1330. The communication module 1340 is for bidirectional communications. The communication interface may represent any interface that is necessary for communication with other network elements, such as an X2 interface for bidirectional communications between eNBs, an S1 interface for communication between a Mobility Management Entity (MME)/Serving Gateway (S-GW) and the eNB, a Un interface for communication between the eNB and a relay node (RN), or a Uu interface for communication between the eNB and a UE. The program 1330 is assumed to include program instructions that, when executed by the associated processor 1310, enable the device 1300 to operate in accordance with the embodiments of the present disclosure, as discussed herein with reference to FIG. 10.
The embodiments herein may be implemented by computer software executable by the processor 1310 of the device 1300, or by hardware, or by a combination of software and hardware. The processor 1310 may be configured to implement various embodiments of the present disclosure. The memory 1320 may be of any type suitable to the local technical network and may be implemented using any suitable data storage technology, such as a non-transitory computer readable storage medium, semiconductor based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory, as non-limiting examples. While only one memory 1320 is shown in the device 1300, there may be several physically distinct memory modules in the device 1300. The processor 1310 may be of any type suitable to the local technical network, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and processors based on multicore processor architecture, as non-limiting examples. The device 1300 may have multiple processors, such as an application specific integrated circuit chip that is slaved in time to a clock which synchronizes the main processor.

FIG. 14 illustrates an example cloud-based network architecture which provides a network edge computing device for implementing embodiments of the present disclosure. As illustrated in FIG. 14, the example cloud-based network architecture includes four layers: an access Central Office (CO) layer, an edge Data Center (DC) layer, a local DC layer and a regional DC layer. As illustrated, the access CO, the edge DC, the local DC and the regional DC could achieve progressively higher latencies, ranging from 2 to 5 ms at the access CO, through less than 10 ms at the edge DC, up to less than 50 ms at the regional DC. Thus, embodiments of the present disclosure could be implemented on both the edge DC and the access CO to support applications like video monitoring, AI analysis, AR, live VR broadcast, etc.
In addition, by means of such an edge cloud, it is possible to provide a unified API for application development and bring flexibility to implement embodiments of the present disclosure, such as the RCS-MF, the RCS-CF, the AS, etc. The architecture illustrated in FIG. 14 was proposed by China Unicom Corporation, and other vendors like AT&T have also started preparing and implementing edge infrastructure. The example architecture is only given for illustrative purposes and the present disclosure is not limited thereto. In fact, embodiments of the present disclosure could be implemented in an edge cloud in a different network architecture.

Generally, various embodiments of the present disclosure may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device. While various aspects of embodiments of the present disclosure are illustrated and described as block diagrams, flowcharts, or using some other pictorial representations, it is to be understood that the block, apparatus, system, technique or method described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof. The present disclosure also provides at least one computer program product tangibly stored on a non-transitory computer readable storage medium. The computer program product includes computer-executable instructions, such as those included in program modules, being executed in a device on a target real or virtual processor, to carry out the method 600 as described above with reference to FIGS. 6-9.
Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, or the like that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Machine-executable instructions for program modules may be executed within a local or distributed device. In a distributed device, program modules may be located in both local and remote storage media. Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on a machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server. In the context of the present disclosure, the computer program codes or related data may be carried by any suitable carrier to enable the device, apparatus or processor to perform various processes and operations as described above. Examples of the carrier include a signal and computer readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
More specific examples of the computer readable storage medium would include an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are contained in the above discussions, these should not be construed as limitations on the scope of the present disclosure, but rather as descriptions of features that may be specific to particular embodiments. Certain features that are described in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination. Although the present disclosure has been described in language specific to structural features and/or methodological acts, it is to be understood that the present disclosure defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
DETAILED DESCRIPTION Aspects of the invention are disclosed in the following description and related drawings directed to specific aspects of the invention. Alternate aspects may be devised without departing from the scope of the invention. Additionally, well-known elements of the invention will not be described in detail or will be omitted so as not to obscure the relevant details of the invention. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects. Likewise, the term “aspects of the invention” does not require that all aspects of the invention include the discussed feature, advantage or mode of operation. The terminology used herein is for the purpose of describing particular aspects only and is not intended to be limiting of aspects of the invention. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Further, many aspects are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It will be recognized that various actions described herein can be performed by specific circuits (e.g., application specific integrated circuits (ASICs)), by program instructions being executed by one or more processors, or by a combination of both. 
Additionally, these sequences of actions described herein can be considered to be embodied entirely within any form of computer-readable storage medium having stored therein a corresponding set of computer instructions that upon execution would cause an associated processor to perform the functionality described herein. Thus, the various aspects of the invention may be embodied in a number of different forms, all of which have been contemplated to be within the scope of the claimed subject matter. In addition, for each of the aspects described herein, the corresponding form of any such aspects may be described herein as, for example, “logic configured to” perform the described action. Exemplary aspects of this disclosure are directed to traffic management in mesh networks. Specifically, disclosed aspects involve systems and methods for dynamically configuring the network traffic, in a manner which increases message reliability and reduces message redundancy. Aspects of this disclosure include a supervisor provisioned, for example, at one or more nodes of an exemplary SigMesh network. The supervisor may be implemented with an appropriate combination of software, hardware, and firmware, and configured to collect information from the SigMesh and build a model of the SigMesh network's traffic topology. Based on this model, the supervisor may configure the above-described parameters (e.g., transmit count, transmit interval, relay retransmit count, relay retransmit interval, etc.) for nodes of the SigMesh network, in a manner which dynamically enhances the efficiency and reduces congestion of traffic in the SigMesh network. With reference now toFIGS.1A-B, a simplified schematic diagram of an exemplary mesh network100is illustrated inFIG.1A. Mesh network100may be configured as a Bluetooth SigMesh network in exemplary aspects, and may include several nodes, of which nodes A-T have been exemplarily shown.
Although some solid lines and some dashed lines have been illustrated to indicate possible communication paths between some of the nodes A-T, it will be understood that any other communication path supported by relevant standards/specifications may be supported by mesh network100. To facilitate further understanding of mesh network100, an example deployment of some of the nodes A-T will be discussed, without imposing any restrictions on the scope of aspects of this disclosure. Accordingly, in one example, mesh network100may be a home network (e.g., Bluetooth SigMesh), wherein node Q may include a switch (or “switch Q”) and node R may include a light (or “light R”). It may be possible for devices (e.g., mobile phones, or other example user devices discussed with reference toFIG.1B) at several of the nodes A-E to control light R from switch Q. As can be noticed, switch Q may be surrounded by several such nodes A-E, which means that each of these nodes A-E may be able to transmit messages to switch Q for controlling light R, for example. If a network parameter such as the transmit count is set to a relatively small number while switch Q is surrounded by a large number of such nodes A-E, then the likelihood of message collision from the nodes A-E may be high. This means that messages from nodes A-E may collide and fail to be injected into mesh network100. However, increasing the transmit count to a large value with a view to increasing the possibility of message injection into mesh network100would also be detrimental, because the noise (from a large number of redundant messages) would increase in mesh network100, leading to poor user experience at the various nodes. Furthermore, it will be recognized that the nodes A-E, for example, may not be stationary; they may be mobile and relocate frequently. This means that the neighbors of a node may be in flux, which has an unpredictable effect on the traffic and message injection likelihood.
There may also be one or more relay nodes through which messages may be transmitted through mesh network100. For instance, switch G may be used to control a device at node R through various intermediate nodes, such as node O, which relays the messages from switch G to node R. Node O may be an edge node to connect switch G in this manner, and the setting of the relay count for mesh network100may affect the likelihood of the relay message being injected into mesh network100. FIG.1Bshows an example configuration of devices which may be located at corresponding nodes A-T of mesh network100, which may be wireless communication devices, and generally designated by the reference numeral101. Device101may support communication of Bluetooth or BLE signals, and as such, include transmitter102and receiver112, whose functionalities may be merged without loss of generality. Exhaustive details of transmitter102and receiver112have been omitted for the purposes of this discussion, as skilled persons will recognize detailed configurations of these devices. As shown, transmitter102includes encoder104configured to encode information to be transmitted into a protocol-specific packet format. Modulator106is configured to modulate the transmitted bits to corresponding symbols, which are used to modulate a carrier at the carrier frequency of communication paths in mesh network100, and antenna108is configured to transmit wireless signals comprising the modulated carrier on the communication paths. On the receiving end, receiver112may comprise antenna118configured to receive the wireless signals from the communication paths of mesh network100. Acquisition block120may include functionality for detecting whether the wireless signals received are intended for device101, e.g., based on acquisition thresholds adapted to signal strength of the wireless signals received on the communication paths.
Symbols of the wanted signals are demodulated in demodulator116, and decoded in decoder114in order to retrieve the received information. Also illustrated are processor130and memory132in device101, which may be configured to perform the computations described below and store information, respectively, in exemplary aspects. With reference now toFIGS.2A-B, aspects of the exemplary supervisor in SigMesh networks will be discussed.FIG.2Aillustrates a schematic view of system200in which supervisor204is deployed to manage and improve traffic of mesh network100. Supervisor204, as previously mentioned, may include any combination of hardware, software, or firmware. In one aspect, supervisor204may be deployed in one or more existing nodes A-T of mesh network100, although system200may also be implemented as a standalone module. As shown, system200may receive initial network parameters via path201. Control logic202may be configured to initially transmit message packets on path203based on the initial network parameters, to mesh network100or a node thereof. The message packets may be retransmitted, relayed, or injected into mesh network100via path205. Unlike conventional systems which lack a feedback path, in the illustrated exemplary aspect, system200also includes feedback path207from path205, feeding into supervisor204. Supervisor204may adjust the network parameters according to exemplary techniques (which will be discussed in further detail below) and provide the adjusted network parameters via path209. The initial network parameters on path201may be adjusted or modified in block210based on the adjusted network parameters suggested on path209from supervisor204, and control logic202may then modify its transmission of message packets to path203using the adjusted network parameters. The above process may be reiterated to accommodate dynamic changes in the network traffic in real time.
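The parameter set and the feedback loop of system200described above can be sketched as follows. This is a minimal illustration only: the back-off heuristic, the thresholds, and all names and numeric values are assumptions not specified by the patent.

```python
from dataclasses import dataclass


@dataclass
class NetworkParameters:
    """The four SigMesh parameters the supervisor monitors (names illustrative)."""
    transmit_count: int               # transmissions of the same packet from a node
    transmit_interval_ms: int         # interval between successive transmissions
    relay_retransmit_count: int       # retransmissions of the same packet by a relay
    relay_retransmit_interval_ms: int # interval between consecutive relayed messages


class Supervisor:
    """Sketch of supervisor204: observes traffic fed back from path205 via
    path207 and suggests adjusted parameters on path209 (heuristic assumed)."""

    def __init__(self, initial: NetworkParameters):
        self.params = initial

    def observe(self, duplicate_ratio: float) -> NetworkParameters:
        # High redundancy observed -> back off; very low redundancy -> be more
        # aggressive to improve the likelihood of message injection.
        p = self.params
        if duplicate_ratio > 0.5:
            p.transmit_count = max(1, p.transmit_count - 1)
            p.transmit_interval_ms = min(1000, p.transmit_interval_ms * 2)
        elif duplicate_ratio < 0.1:
            p.transmit_count = min(8, p.transmit_count + 1)
        return p


sup = Supervisor(NetworkParameters(3, 50, 2, 100))
adjusted = sup.observe(duplicate_ratio=0.7)  # congested: fewer, slower transmissions
```

Control logic202 would then use the returned parameters for subsequent transmissions, and the loop repeats as traffic conditions change.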
The following network parameters, which were previously mentioned, are considered in one example, while supervisor204may be configured to monitor and adjust any other network parameter along similar lines: “transmit count” refers to the number of message transmissions of the same packet originating from a node; “network transmit interval” refers to the time interval between successive messages transmitted from the node; “relay retransmit count” refers to the number of message retransmissions of the same packet by a relay node; and “relay retransmit interval” refers to the time interval between two consecutive messages to be relayed by the relay node. In one aspect, the above parameters are initially supplied on path201. Supervisor204collects information regarding the topology of mesh network100and the message identity of messages flowing into path205from the feedback received via feedback path207, and creates suggestions for updating the above parameters in a dynamic manner to improve real-time traffic conditions. FIG.2Bshows a network architecture250in which supervisor204may be implemented to perform the above-described functions. Layers252-264represent layers of network architecture250, according to the SigMesh specification for Bluetooth networks, which will be further described below, as applicable to aspects of this disclosure. Specifically, layer252is a model layer, which defines models that are used to standardize the operation of typical user scenarios and are defined in the Bluetooth Mesh Model (e.g., SigMesh) specification or other higher layer specifications. Examples of higher layer model specifications include models for lighting and sensors. Layer254is a foundation model layer, which defines the states, messages, and models required to configure and manage a mesh network. Layer256is an access layer, which defines how higher layer applications can use the upper transport layer.
The access layer defines the format of application data, and defines and controls the application data encryption and decryption performed in the upper transport layer. The access layer also checks whether the incoming application data has been received in the context of the right network and application keys before forwarding it to the higher layer. Layer258is an upper transport layer, which encrypts, decrypts, and authenticates application data and is designed to provide confidentiality of access messages. It also defines how transport control messages are used to manage the upper transport layer between nodes, including when used by the Friend feature. Layer260is a lower transport layer, which defines how upper transport layer messages are segmented and reassembled into multiple Lower Transport protocol data units (PDUs) to deliver large upper transport layer messages to other nodes. The lower transport layer also defines a single control message to manage segmentation and reassembly. Layer262is a network layer, which defines how transport messages are addressed towards one or more elements. The network layer defines the network message format that allows Transport PDUs to be transported by the bearer layer. The network layer decides whether to relay/forward messages, accept them for further processing, or reject them. The network layer also defines how a network message is encrypted and authenticated. Layer264is a bearer layer, which defines how network messages are transported between nodes. There are two bearers defined, the advertising bearer and the generic attribute (GATT) bearer. Additional bearers may be defined in the future. In one example, supervisor204may be deployed to straddle the network layer262, lower transport layer260, and upper transport layer258. Thus, supervisor204may observe the messages which are sent out on path205through feedback path207and determine whether one or more of the above network parameters are to be adjusted. 
For instance, supervisor204may monitor the transmit count, transmit interval, relay retransmit count, relay retransmit interval, etc., based on the messages that are transmitted on path205, and determine whether adjusting one or more of the above network parameters would reduce traffic congestion and/or ensure injection (or alternatively, improve likelihood of injection) into mesh network100. Accordingly, supervisor204may provide feedback via path209with suggestions on modifying one or more of the above network parameters. Control logic202may modify the initial parameters from path201with the suggestions from path209and utilize the modified network parameters for future transmission. In this manner, supervisor204may advantageously perceive the local network density, and dynamically adjust transmit/relay counts and transmit/relay retransmit intervals according to the traffic status. The above deployment of supervisor204may be modified for specific network needs and so is very flexible and robust to network transience, packet loss, and potential disconnections between nodes. Supervisor204may also be advantageously used for flood suppression based on the ability to scale the transmission/relay rates with network density. For instance, network flooding may be selectively suppressed in a dense and noisy environment to reduce network congestion and increase the success rate of data transmission. The exemplary supervisor204configuration is also recognized as being energy-efficient based on the dynamic modification of the transmit/relay retransmit intervals and flood suppression mechanisms, which in turn reduce the active time of one or more nodes of the SigMesh network. Thus, exemplary techniques are suitable for use in battery powered devices, e.g., in the IoT systems. 
The above-described techniques may be implemented with relatively small software code size and memory space, as will be evidenced by exemplary algorithms which may be used for controlling message transmission, explained with reference toFIG.3below. With reference toFIG.3, an exemplary method300for controlling packet transmission onto a SigMesh network is shown. Method300may be implemented by supervisor204in conjunction with control logic202ofFIG.2Ain one example. The following process will be discussed with combined reference to system200ofFIG.2A. In block302, a message packet P (alternatively referred to as a hash) may be sent out on path205, and observed by supervisor204via feedback path207. An observation queue (not shown separately) may be implemented by supervisor204to store message packets which are being observed for the purposes of monitoring the traffic on mesh network100. As such, if in block302, it is determined that packet P is not currently being monitored, method300will proceed to block304(while noting that if packet P is already being observed, then the remaining blocks in method300will be relevant to this condition). In block304, packet P is added to the observation queue and values for the initial transmit interval T0and transmit count N0are obtained (e.g., as received from path201). A maximum observation window or packet observation time (PT) is set, and an observation timer Tnis initialized to the initial transmit interval T0. In block306, the packet P may be added into a sending queue, and in block308, the observation timer is started. In this regard, each new packet P is added into the sending queue to be sent out when scheduled (e.g., immediately, or after a wait). When the timer started in block308expires, the following blocks will determine if the packet P should be retransmitted (block318) or the process is complete and packet P is removed from the observation queue (block322).
In block310, it is determined whether the same packet P is received again, and if so, in block312, the observation count of the packet P (or packet count) is increased. Otherwise, or after block312, method300proceeds to block314, wherein the timer is checked to see whether it has expired after the time Tn; if the timer has not expired, then the process stays in block310to see if the same packet P is observed. In this way, the number of transmissions (or retransmissions), and the interval after which a packet is retransmitted, are monitored for the duration of the timer. In block316, once the timer expires after the time interval Tn, the packet count is checked to see if it is greater than a pre-specified threshold k, and if not, the packet P is added to the sending queue in block318to be injected into mesh network100. If the packet count is greater than the threshold k, then the time interval Tnis increased (e.g., doubled) in block320and the process from block308onwards is repeated until the time interval exceeds the maximum observation time PT. Once the time interval exceeds the maximum observation time PT, the packet P is removed from the observation queue. In this way, for each packet, if it has been observed to be retransmitted more than a threshold number of times within a particular period of observation, the sending of the packet is delayed by leaving the packet P in the sending queue. Thus, flooding is avoided by preventing too many retransmissions. Moreover, even if the packet P is not sent immediately, it is still provided an opportunity to be transmitted later, reducing the likelihood that it is lost due to a collision, for example. With reference now toFIG.4, system400showing aspects of a SigMesh network will be discussed. More specifically, system400illustrates exemplary aspects of calculating initial values of one or more network parameters, such as the transmit and relay counts, by supervisor204.
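The observation loop of method300 above can be sketched as follows, with the block-308 timer collapsed into a callback that reports how many duplicate copies were observed in each window (a stand-in for the real asynchronous timer; all names and numeric values are illustrative assumptions):

```python
import collections

# Illustrative initial values; in method300 these arrive via path201.
T0_MS = 100    # initial transmit interval T0
PT_MS = 1600   # maximum observation window (packet observation time PT)
K = 2          # retransmission-count threshold k

observation = {}                     # observation queue: packet hash -> copies seen
sending_queue = collections.deque()  # packets scheduled for injection into the mesh


def handle_new_packet(packet_hash, duplicates_per_window):
    """Blocks 302-322 of method300, with the observation timer collapsed to a
    callback reporting how many duplicate copies were seen in each window."""
    observation[packet_hash] = 0          # block 304: start observing
    sending_queue.append(packet_hash)     # block 306: schedule the first send
    tn = T0_MS
    while tn <= PT_MS:                    # block 308: run the observation timer
        observation[packet_hash] += duplicates_per_window(tn)  # blocks 310-312
        if observation[packet_hash] <= K:            # block 316: quiet enough?
            sending_queue.append(packet_hash)        # block 318: retransmit
            break
        tn *= 2                                      # block 320: back off
    del observation[packet_hash]          # block 322: stop observing


# A quiet network: no duplicates seen, so the packet is retransmitted once.
handle_new_packet("pkt-A", lambda window_ms: 0)
# A noisy network: many duplicates per window, so retransmission is suppressed.
handle_new_packet("pkt-B", lambda window_ms: 5)
```

The doubling of `tn` in the back-off branch mirrors block320, and leaving a heavily duplicated packet unsent mirrors the flood suppression described above.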
In this regard, the initial values may be based on a neighbor node count, as will be explained below. Supervisor204is shown as node C402in this case, which may have several neighbors, among which neighbor node A404and neighbor node B406have been identified to aid in the following description. In order to determine the number of neighboring nodes for building a model of the network's topology, node C402may employ a Heartbeat model. In the Heartbeat model, the time to live (TTL) of packets, e.g., in SigMesh networks (e.g., using Bluetooth or BLE bearers), is a field in the mesh network's protocol data units (PDUs) which is used to control whether or not a received message should be relayed, with a view to limiting the total number of hops over which a message is ultimately relayed within the network. A value of zero for the TTL indicates that a message has not been relayed and should not be relayed. This means that a node can send a message to other nodes which are in direct radio range (i.e., the neighbors) and indicate that the receiving node(s) should not relay the message. If a message is sent with a TTL of 2 or more, then each time the message is relayed, the TTL value is decremented. A value of 1 for the TTL means that the message may have been relayed at least once, but that it should not be relayed again. Thus, to determine the neighbor node count, a TTL value of zero (0) is used in an aspect, to avoid message relays. Node C402sends out a subscribe Heartbeat message with TTL=0 at402a. In the Heartbeat model, wherein the network nodes can be configured to send a message known as a Heartbeat message periodically, neighbor node A404sends a Heartbeat message at404aand neighbor node B406sends a Heartbeat message at406a, which are received at402bby node C402.
The purpose of the Heartbeat message is to indicate to other nodes that the node sending the Heartbeat message is still active, and to allow its distance from the recipient to be determined, in terms of the number of hops needed to deliver the Heartbeat message. At402c, node C402calculates the neighbor node count Nc. By determining the neighbor count Nc, the neighbor count may be input into model412, for example, to use a logistic curve and choose a transmit count Tc, which would represent the best initial transmit count for node C402. Model412may receive the quality of service (QoS)410of the network in an optional aspect. With a as an upper limit of Tc, b as a lower limit of Tc, and c as the largest number of nodes in a predetermined radius, the following equation holds for model412, where x is the neighbor node count Nc: Tc = a/(1 + e^(c/2 − x)) + b. A simulation graph may be employed for the transmit count within a radius of 10 nodes, for example, such that as Nc increases, Tc will increase until an upper limit is reached. Accordingly, by determining the neighbor count Nc, node C402or supervisor204may determine and update the transmit count to be used in the exemplary aspects discussed herein. It will be appreciated that exemplary aspects include various methods for performing the processes, functions and/or algorithms disclosed herein. For example,FIG.5illustrates a method500of managing a mesh network (e.g., mesh network100). Block502comprises monitoring, at a supervisor node (e.g., supervisor204, from feedback path207) of the mesh network, one or more network parameters of a packet transmitted on the mesh network (e.g., monitoring packet count, transmission intervals in method300). Block504comprises determining whether one or more of the network parameters are to be adjusted for improving transmission efficiency or reducing traffic congestion on the mesh network (e.g., determining in block316of method300whether the number of retransmissions of the same packet is greater than a threshold).
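The TTL=0 neighbor counting and the logistic model412 described above can be sketched as follows; the parameter values chosen for a, b, and c are illustrative assumptions, and x is the neighbor node count Nc:

```python
import math


def transmit_count(nc, a=4.0, b=1.0, c=10):
    # Logistic mapping from neighbor count Nc to initial transmit count Tc:
    #   Tc = a / (1 + e^(c/2 - Nc)) + b
    # with b the lower limit, a bounding the range above b, and c the largest
    # number of nodes in the predetermined radius (values here are assumed).
    return a / (1.0 + math.exp(c / 2.0 - nc)) + b


# Heartbeat-based neighbor counting: with TTL=0, only Heartbeats from nodes in
# direct radio range are received, so counting distinct senders yields Nc.
heartbeats_received = ["node-A", "node-B", "node-A"]  # node-A heard twice
nc = len(set(heartbeats_received))                    # Nc = 2

tc_sparse = transmit_count(nc)   # few neighbors -> Tc stays near the lower limit
tc_dense = transmit_count(10)    # many neighbors -> Tc approaches its upper limit
```

As the simulation discussion above suggests, the curve saturates: Tc grows with Nc but levels off once the neighborhood is dense.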
Block506comprises providing feedback to one or more nodes with the adjusted network parameters (e.g., providing feedback via path209to control logic202of a node to adjust its message transmission). Those of skill in the art will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Further, those of skill in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. The methods, sequences and/or algorithms described in connection with the aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. 
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. Accordingly, an aspect of the invention can include computer-readable media embodying a method for managing traffic in a mesh network. Accordingly, the invention is not limited to illustrated examples and any means for performing the functionality described herein are included in aspects of the invention. While the foregoing disclosure shows illustrative aspects of the invention, it should be noted that various changes and modifications could be made herein without departing from the scope of the invention as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the aspects of the invention described herein need not be performed in any particular order. Furthermore, although elements of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
11943107
DETAILED DESCRIPTION FIG.1is a diagram100of an example of a system for multi-tenant orchestration. The diagram100includes a computer-readable medium (CRM)102, a branch-facing node (B-node)104coupled to the CRM102, a branch network106coupled to the B-node104through the CRM102, service point attachment nodes (S-nodes)108coupled to the CRM102, a virtual network facing node (V-Node)110coupled to the CRM102, and a virtual private cloud (VPC)112coupled to the V-Node110through the CRM102. In the diagram100, a cloud services exchange platform (CXP)114includes the B-node104, the S-nodes108, the V-node110, and a service engine116-1to a service engine116-n(collectively, the services116) coupled to the S-nodes108. The CRM102is intended to represent a computer system or network of computer systems. A “computer system,” as used herein, may include or be implemented as a specific purpose computer system for carrying out the functionalities described in this paper. In general, a computer system will include a processor, memory, non-volatile storage, and an interface. A typical computer system will usually include at least a processor, memory, and a device (e.g., a bus) coupling the memory to the processor. The processor can be, for example, a general-purpose central processing unit (CPU), such as a microprocessor, or a special-purpose processor, such as a microcontroller. Memory of a computer system includes, by way of example but not limitation, random access memory (RAM), such as dynamic RAM (DRAM) and static RAM (SRAM). The memory can be local, remote, or distributed. Non-volatile storage is often a magnetic floppy or hard disk, a magnetic-optical disk, an optical disk, a read-only memory (ROM), such as a CD-ROM, EPROM, or EEPROM, a magnetic or optical card, or another form of storage for large amounts of data. During execution of software, some of this data is often written, by a direct memory access process, into memory by way of a bus coupled to non-volatile storage.
Non-volatile storage can be local, remote, or distributed, but is optional because systems can be created with all applicable data available in memory. Software in a computer system is typically stored in non-volatile storage. Indeed, for large programs, it may not even be possible to store the entire program in memory. For software to run, if necessary, it is moved to a computer-readable location appropriate for processing, and for illustrative purposes in this paper, that location is referred to as memory. Even when software is moved to memory for execution, a processor will typically make use of hardware registers to store values associated with the software, and a local cache that, ideally, serves to speed up execution. As used herein, a software program is assumed to be stored at an applicable known or convenient location (from non-volatile storage to hardware registers) when the software program is referred to as “implemented in a computer-readable storage medium.” A processor is considered “configured to execute a program” when at least one value associated with the program is stored in a register readable by the processor. In one example of operation, a computer system can be controlled by operating system software, which is a software program that includes a file management system, such as a disk operating system. One example of operating system software with associated file management system software is the family of operating systems known as Windows from Microsoft Corporation of Redmond, Wash., and their associated file management systems. Another example of operating system software with its associated file management system software is the Linux operating system and its associated file management system. 
The file management system is typically stored in the non-volatile storage and causes the processor to execute the various acts required by the operating system to input and output data and to store data in the memory, including storing files on the non-volatile storage. The bus of a computer system can couple a processor to an interface. Interfaces facilitate the coupling of devices and computer systems. Interfaces can be for input and/or output (I/O) devices, modems, or networks. I/O devices can include, by way of example but not limitation, a keyboard, a mouse or other pointing device, disk drives, printers, a scanner, and other I/O devices, including a display device. Display devices can include, by way of example but not limitation, a cathode ray tube (CRT), liquid crystal display (LCD), or some other applicable known or convenient display device. Modems can include, by way of example but not limitation, an analog modem, an ISDN modem, a cable modem, and other modems. Network interfaces can include, by way of example but not limitation, a token ring interface, a satellite transmission interface (e.g. “direct PC”), or other network interface for coupling a first computer system to a second computer system. An interface can be considered part of a device or computer system. Computer systems can be compatible with or implemented as part of or through a cloud-based computing system. As used in this paper, a cloud-based computing system is a system that provides virtualized computing resources, software and/or information to client devices. The computing resources, software and/or information can be virtualized by maintaining centralized services and resources that the edge devices can access over a communication interface, such as a network. “Cloud” may be a marketing term and for the purposes of this paper can include any of the networks described herein. The cloud-based computing system can involve a subscription for services or use a utility pricing model.
Users can access the protocols of the cloud-based computing system through a web browser or other container application located on their client device. A computer system can be implemented as an engine, as part of an engine, or through multiple engines. As used in this paper, an engine includes at least two components: 1) a dedicated or shared processor or a portion thereof; 2) hardware, firmware, and/or software modules executed by the processor. A portion of one or more processors can include some portion of hardware less than all of the hardware comprising any given one or more processors, such as a subset of registers, the portion of the processor dedicated to one or more threads of a multi-threaded processor, a time slice during which the processor is wholly or partially dedicated to carrying out part of the engine's functionality, or the like. As such, a first engine and a second engine can have one or more dedicated processors, or a first engine and a second engine can share one or more processors with one another or other engines. Depending upon implementation-specific or other considerations, an engine can be centralized, or its functionality distributed. An engine can include hardware, firmware, or software embodied in a computer-readable medium for execution by the processor. The processor transforms data into new data using implemented data structures and methods, such as is described with reference to the figures in this paper. The engines described in this paper, or the engines through which the systems and devices described in this paper can be implemented, can be cloud-based engines. As used in this paper, a cloud-based engine is an engine that can run applications and/or functionalities using a cloud-based computing system. All or portions of the applications and/or functionalities can be distributed across multiple computing devices and need not be restricted to only one computing device. 
In some embodiments, the cloud-based engines can execute functionalities and/or modules that end users access through a web browser or container application without having the functionalities and/or modules installed locally on the end-users' computing devices. As used in this paper, datastores are intended to include repositories having any applicable organization of data, including tables, comma-separated values (CSV) files, traditional databases (e.g., SQL), or other applicable known or convenient organizational formats. Datastores can be implemented, for example, as software embodied in a physical computer-readable medium on a general- or specific-purpose machine, in firmware, in hardware, in a combination thereof, or in an applicable known or convenient device or system. Datastore-associated components, such as database interfaces, can be considered “part of” a datastore, part of some other system component, or a combination thereof, though the physical location and other characteristics of datastore-associated components are not critical for an understanding of the techniques described in this paper. Datastores can include data structures. As used in this paper, a data structure is associated with a way of storing and organizing data in a computer so that it can be used efficiently within a given context. Data structures are generally based on the ability of a computer to fetch and store data at any place in its memory, specified by an address, a bit string that can be itself stored in memory and manipulated by the program. Thus, some data structures are based on computing the addresses of data items with arithmetic operations, while other data structures are based on storing addresses of data items within the structure itself. Many data structures use both principles, sometimes combined in non-trivial ways. The implementation of a data structure usually entails writing a set of procedures that create and manipulate instances of that structure.
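The two principles above, computing addresses with arithmetic versus storing addresses within the structure, can be contrasted with a toy sketch (unrelated to any specific system in this paper):

```python
# Address arithmetic: a contiguous array locates item i by computing its
# position from a base address; Python list indexing models this.
array = [10, 20, 30, 40]
third_by_arithmetic = array[2]  # base + 2 * element_size, one computed step


# Stored addresses: a linked structure reaches item i by following references
# ("addresses") stored within the structure itself.
class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node


def nth(head, i):
    # One of the procedures that manipulate the structure: walk i links.
    node = head
    for _ in range(i):
        node = node.next
    return node.value


linked = Node(10, Node(20, Node(30, Node(40))))
third_by_pointers = nth(linked, 2)  # follows stored references, same answer
```

The array resolves the lookup in a single address computation, while the linked list must visit each intermediate node, illustrating why the choice of data structure depends on the access pattern of the given context.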
The datastores, described in this paper, can be cloud-based datastores. A cloud based datastore is a datastore that is compatible with cloud-based computing systems and engines. Assuming a CRM includes a network, the network can be an applicable communications network, such as the Internet or an infrastructure network. The term “Internet” as used in this paper refers to a network of networks that use certain protocols, such as the TCP/IP protocol, and possibly other protocols, such as the hypertext transfer protocol (HTTP) for hypertext markup language (HTML) documents that make up the World Wide Web (“the web”). More generally, a network can include, for example, a wide area network (WAN), metropolitan area network (MAN), campus area network (CAN), or local area network (LAN), but the network could at least theoretically be of an applicable size or characterized in some other fashion (e.g., personal area network (PAN) or home area network (HAN), to name a couple of alternatives). Networks can include enterprise private networks and virtual private networks (collectively, private networks). As the name suggests, private networks are under the control of a single entity. Private networks can include a head office and optional regional offices (collectively, offices). Many offices enable remote users to connect to the private network offices via some other network, such as the Internet. The B-Node104is intended to represent an engine that couples the branch network106to the CXP114. In a specific implementation, the B-node is responsible for branch-to-cloud traffic. For example, the branch network106is intended to represent a campus, site, data center, or other branch network under the control of a customer. In a specific implementation, the B-node104creates an overlay to connect a network branch to the cloud. Data traffic originating from the branch network106within a given region may be controlled, managed, observed, and evaluated by the CXP114. 
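The overlay idea above — a B-node wrapping branch traffic so it can be carried to the CXP — might be sketched roughly as follows. The function names and dictionary-based packets are hypothetical stand-ins, not the disclosed implementation:

```python
# Hypothetical sketch: a B-node-like shim encapsulates a branch packet in
# an overlay header addressed to the CXP; the CXP strips it on arrival.

def encapsulate(packet: dict, branch_id: str, cxp_endpoint: str) -> dict:
    """Wrap a branch packet in an overlay header bound for the CXP."""
    return {"outer_dst": cxp_endpoint, "branch": branch_id, "inner": packet}

def decapsulate(frame: dict) -> dict:
    """At the CXP, strip the overlay header to recover the original packet."""
    return frame["inner"]
```

Under this sketch, traffic originating from the branch network can be observed and evaluated at the CXP boundary because every frame identifies its originating branch.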
In a specific implementation, the customer, or a human or artificial agent thereof, managing the branch network106, or a portion thereof, can access a single portal to select one or more of the services116in connection with a software as a service (SaaS), infrastructure as a service (IaaS), or platform as a service (PaaS) offering. In a specific implementation, the B-node104(potentially including other B-nodes, not shown) connects the CXP114to multiple different branch networks. The S-nodes108are intended to represent multi-tenant node engines adapted to orchestrate the instantiation, hosting, and/or provisioning of the services116(selected via a portal accessible in association with the CXP114) to one or more endpoints on behalf of a customer. S-nodes108may host services and apply policies that might otherwise only be available through other cloud platforms, in other regions or otherwise only available with certain connectivity. For instance, if a customer using Cloud Platform A desired certain security features provided by Firewall X service that was only available through Cloud Platform B, the S-nodes108may, via an orchestration component, host the Firewall X service for the customer so that the customer may obtain the service as though they were using Cloud Platform B. Even if a customer uses different cloud platforms or has different connectivity throughout different segments of its network, the dashboard of the CXP114's portal may provide the foregoing features (e.g., monitoring traffic, managing connectivity, etc.) within the same dashboard interface. In a specific implementation, to effectuate these features, all data traffic is routed through the S-nodes108. The S-nodes108may send/receive traffic to and from networks implementing any type of connectivity (e.g., MPLS, SD-WAN, IPsec, etc.) 
and host services from any one or more providers so that the connecting networks may receive the benefit of those services without the hassle of reconfiguring their network to adapt to the service provider's requirements. The S-nodes108can instantiate such services automatically upon request, so that an individual user associated with or connected through the branch network106does not have to instantiate the services themselves. The S-nodes108may collect telemetry data (e.g., to share with a multi-tenant orchestrator component), may tie the data flow to an application once packet details have been determined, may conduct analytics (e.g., statistical analysis) on data flow on a tailored basis (e.g., one in every ten packets received may be subjected to a deep packet inspection routine), and may tag or add instructions to packets for execution at a workload. The V-Node110is intended to represent an engine that couples the CXP114to the VPC112. The VPC112is intended to represent a SaaS, IaaS, PaaS, or V-net. In a specific implementation, the V-node is responsible for cloud-to-cloud traffic. For example, the V-node110(potentially including other V-nodes, not shown) connects the CXP114to different clouds. The CXP114is intended to represent a system that establishes connectivity, instantiates services for corresponding geolocations, aggregates data, implements policies, monitors traffic, and/or provides analytics across disparate cloud service providers and different connectivity architectures. In a specific implementation, CXP114operates in a manner that—to the customer—is connectivity agnostic and cloud provider agnostic. The CXP114may correspond to aggregated services offered for a given region or set of regions, where the regions may comprise one or more zones corresponding to subsections of such regions. 
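The tailored analytics described above — subjecting, say, one in every ten packets to a deep packet inspection routine while tagging every packet with instructions for a downstream workload — can be sketched as follows. This is a simplified illustration under assumed names, not the disclosed S-node logic:

```python
# Hypothetical sketch of sampled analytics at an S-node-like inspector:
# every Nth packet gets a deeper routine; all packets are tagged.

class SampledInspector:
    def __init__(self, sample_every=10):
        self.sample_every = sample_every
        self.seen = 0
        self.deep_inspected = 0

    def process(self, packet: dict) -> dict:
        self.seen += 1
        if self.seen % self.sample_every == 0:      # e.g., one in every ten
            self.deep_inspected += 1
            packet["deep_inspected"] = True         # stand-in for a DPI routine
        packet.setdefault("tags", []).append("telemetry")  # instruction tag
        return packet
```

Sampling keeps the heavy routine off the fast path for most traffic while still yielding a statistical picture of the flow.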
The CXP114may service the branch network106within a particular region, and multiple CXPs may be stitched together as part of a larger cloud servicing network (e.g., mesh network, hub-and-spoke network, or a network having some other topology) to span multiple regions. In a specific implementation, the CXP114provides a portal through which a network administrator or other user associated with a customer may (i) view and select SaaS/IaaS/other services from a range of providers (or provided by the customer itself) within a common dashboard, (ii) manage connectivity (e.g., MPLS, SD-WAN, IPsec, etc.), (iii) monitor traffic, (iv) control traffic in accordance with one or more policies (e.g., security policies), etc. FIG.2is a diagram200of an example of a cloud exchange point (CXP). The diagram200includes a B-node202, an S-node204coupled to the B-node202, a service engine206-1to a service engine206-n(collectively the services206) coupled to the S-nodes204, a distributed service stitching (DSS) engine208coupled to the S-nodes204, a monitoring engine210coupled to the S-nodes204, a provisioning engine212coupled to the S-nodes204, an analytics engine214coupled to the S-nodes204, a data ingestion engine216coupled to the S-nodes204, a policy engine218coupled to the S-nodes204, a multi-tenant orchestration (MTO) engine220coupled to the S-nodes204, and a V-node222coupled to the S-nodes204. The distributed service stitching engine208, the monitoring engine210, the provisioning engine212, the analytics engine214, the data ingestion engine216, the policy engine218, the multi-tenant orchestration engine220, and the S-nodes204can be collectively referred to as a cloud services node (CSN)224. The B-node202is intended to represent a B-node like the B-node104described with reference toFIG.1. The S-nodes204are intended to represent S-nodes like the S-nodes108described with reference toFIG.1. 
The services206are intended to represent services like the services116described with reference toFIG.1. The DSS engine208is intended to represent an engine that stitches together (i.e., provides coherent communication, coordination, and connection among) S-nodes associated with a plurality of CXPs associated with a respective plurality of different regions. In a specific implementation, the DSS engine208is configured to enable services from other regions (other CXPs) to be properly hosted in a region with which the S-nodes204are associated in order to satisfy one or more restrictions or regulations of a service/application. The DSS engine208operates to establish a mesh network, a hub and spoke network, another applicable network distribution paradigm, or a combination of these. The monitoring engine210is intended to represent an engine that inspects data packets passed to the S-nodes204and identifies attributes about individual packets or groups of packets (e.g., identify header information that may be used to identify a source, destination, or application/service relevant to such packet). The provisioning engine212is intended to represent an engine that facilitates provisioning one or more of the services206responsive to a request therefor. In a specific implementation, the S-nodes204are configured to host the requested service itself, enabling the customer to access the service through its connection to the S-nodes204, without having to establish connectivity with the service provider or having to be siloed to a service provider's platform. 
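The two network distribution paradigms named above — mesh and hub-and-spoke — differ only in which pairs of regional nodes the DSS engine links. A minimal sketch (function names are assumptions):

```python
# Illustrative sketch: stitching regional nodes into either a full mesh
# or a hub-and-spoke topology.

from itertools import combinations

def mesh_links(regions):
    """Full mesh: every region pairs with every other region."""
    return set(combinations(sorted(regions), 2))

def hub_and_spoke_links(regions, hub):
    """Hub and spoke: every non-hub region pairs only with the hub."""
    return {(hub, r) for r in regions if r != hub}
```

For n regions, a mesh requires n*(n-1)/2 links while hub-and-spoke requires only n-1, at the cost of routing all inter-spoke traffic through the hub.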
The analytics engine214is intended to represent an engine that obtains data from data ingestion engine216(which is configured to receive data from network elements and/or end-points, and collect telemetry) and provides data analytics corresponding to (i) traffic coming into the S-nodes204, (ii) corresponding services being used in connection with the S-nodes204throughout a connected network, (iii) connectivity issues within a network, and the like. The policy engine218is intended to represent an engine to apply a policy at the S-nodes204. In a specific implementation, the policy is identifiable from a user request for the policy to be applied to a given flow of traffic. In a specific implementation, the policy can be applied without requiring the customer to instantiate a service that applies the policy. The MTO engine220is intended to represent an engine that, upon selection, automatically instantiates one or more of the services206(which may be available across a series of CSNs) to multiple tenants without requiring manual instantiation by such tenants. In a specific implementation, the MTO engine220is SaaS-based. In some instances, orchestration features provided by the MTO engine220may be provided as a wrapper around a third-party service (e.g., where the MTO engine220is integrated directly within a third-party service, in a transparent or apparent manner). In such instances, only the features of a particular service that are supported by the CSN224may be shown. In other instances, the orchestration provided by the MTO engine220may be offered as a distinct SaaS in addition to other third-party services. The V-node222is intended to represent a V-node like the V-node110described with reference toFIG.1. The CSN224is intended to represent a collection of engines associated with the S-nodes204. In an alternative, the CSN224is incorporated into one or more S-nodes. In yet another alternative, the services206are incorporated into the CSN224(or one or more S-nodes). 
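The policy behavior described above — a policy identifiable from a user request, applied to a given flow of traffic at the node without instantiating a separate service — can be sketched as follows. All names here are illustrative assumptions, not the patent's API:

```python
# Hedged sketch: a policy keyed to a traffic flow is applied in-line at
# the node; no dedicated service instance is needed to enforce it.

class PolicyEngine:
    def __init__(self):
        self._policies = {}                         # flow match -> action

    def request_policy(self, flow_match: tuple, action: str):
        """Record a user-requested policy for a given (src, dst, app) flow."""
        self._policies[flow_match] = action

    def apply(self, packet: dict) -> str:
        """Return the action for this packet's flow; pass through by default."""
        flow = (packet.get("src"), packet.get("dst"), packet.get("app"))
        return self._policies.get(flow, "forward")
```

The design choice sketched here is that the policy table lives with the node's data path, so enforcement adds only a lookup per packet.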
FIG.3is a diagram300of an example of a plurality of regional services exchange points (RSXP) stitched together to form a network. The diagram300includes an RSXP302-1to an RSXP302-n(collectively, the RSXPs302). The RSXP302-1includes a DSS engine304-1, the RSXP302-2includes a DSS engine304-2, and the RSXP302-nincludes a DSS engine304-n. The DSS engines304-1to304-ncan collectively be referred to as DSS engines304. The RSXP302-1includes a service306-1-1to a service306-n-1(collectively, the services306-1), the RSXP302-2includes a service306-1-2to a service306-n-2(collectively, the services306-2), and the RSXP302-nincludes a service306-1-nto a service306-n-n(collectively, the services306-n). The services306-1to306-ncan collectively be referred to as services306. It may be noted that the services306are depicted as coupled to the DSS engines304for conceptual purposes, but it should be understood that the services could be coupled to S-nodes as depicted inFIG.2. The RSXPs302are intended to represent engines, at least including the DSS engines304, associated with different geographic, geopolitical, national, or other regions. The DSS engines304act as a single engine with respect to each of the services306regardless of the region in which the services306are found. FIG.4is a diagram400of an example of a horizontally scalable multi-tenant service bounding engine. The diagram400includes a network administrator device402, a portal device404, and a horizontally scalable multi-tenant service bounding engine406. The horizontally scalable multi-tenant service bounding engine406includes a monitoring engine410, a provisioning engine412, an analytics engine414, a data ingestion engine416, a policy engine418, and a multi-tenant orchestration engine420, which are similar in function to the monitoring engine210, the provisioning engine212, the analytics engine214, the data ingestion engine216, the policy engine218, and the multi-tenant orchestration engine220ofFIG.2. 
The network administrator device402is intended to represent an end user device utilized by a network administrator of a customer, or a human or artificial agent thereof. In a specific implementation, the network administrator device402is a smartphone, tablet device, laptop, or other personal computing device of a network administrator, or a human or artificial agent thereof. The portal device404is intended to represent a network device through which a network administrator, or agent thereof, can utilize the network administrator device402to access the horizontally scalable multi-tenant service bounding engine406. In a specific implementation, the network administrator, or agent thereof, accesses features and functionality of the horizontally scalable multi-tenant service bounding engine406via the portal device404, through which they may make selections. The horizontally scalable multi-tenant service bounding engine406may be implemented within a CXP, such as the CXP114ofFIG.1, an S-node, such as one or more of the S-nodes108ofFIG.1or the S-nodes204ofFIG.2, or a CSN, such as the CSN224ofFIG.2. Advantageously, and as the name suggests, the horizontally scalable multi-tenant service bounding engine406facilitates horizontal scaling for multi-tenant service. FIG.5is a flowchart500of an example of network administrator access to an RSXP. The flowchart500and other flowcharts described in this paper include modules that can be reordered or rearranged for parallel execution, if applicable. The flowchart500starts at module502with accessing a dashboard made available via a portal of a CXP, such as the CXP114ofFIG.1. The flowchart500continues to module504with logging into the portal using customer specific credentials. The flowchart500continues to module506with selecting a use case for a deployment (e.g., on-premises provisioning of IaaS, on-premises provisioning of SaaS, remote user, enterprise apps/services, IaaS to IaaS, hosted DMZ, or the like). 
The flowchart500continues to module508with selecting a first use case, which is selecting a region within which to host a service, then the flowchart500returns to module506and continues as described. The flowchart500continues to module510with selecting a second use case, which is selecting custom requirements (e.g., customer already has a deal/license with a third party for a given service, customer has a proprietary service of their own, customer desires cloud native services or services offered by a public cloud provider, pay as you go details, etc.), then the flowchart500returns to module506and continues as described. The flowchart500continues to module512with selecting a third use case, which is selecting standby requirements (e.g., ready standby, cold standby, etc.), then the flowchart500returns to module506and continues as described. The flowchart500continues to module514with selecting a fourth use case, which is selecting services. The flowchart500continues to module516with selecting access connectivity (e.g., MPLS, IPsec, via Data Center, etc.). With regard to connectivity, for SaaS, it is desirable for the CSN to know a customer's origin because it may impact service levels, pricing, etc. The flowchart500continues to module518with selecting an IaaS. Similar to SaaS, for IaaS it is desirable for the CSN to know where the workloads will be located because it may impact service levels, pricing, etc. The flowchart500continues to module520with enabling a CSX. In connection with the foregoing, a CXP, such as the CXP114ofFIG.1, may provide suggestions to a customer based on analytics of data from the customer's prior use, or the use of prior customers that may be similarly situated to the given customer. In some embodiments, a CXP may generate a profile for a given customer based on such details, and provide suggestions (e.g., suggested services) to the customer on that basis. 
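The selection flow above — use-case choices that loop back to the use-case menu before the administrator moves on to connectivity, IaaS, and enablement — can be modeled as a simple fold over the sequence of choices. The function and step names are hypothetical, not the disclosed portal logic:

```python
# Minimal sketch: accumulate a deployment configuration from an ordered
# sequence of (step, value) selections, mirroring the flowchart's loop
# back to the use-case menu.

def run_deployment_wizard(selections):
    """Fold a sequence of (step, value) choices into a deployment config."""
    config = {"use_cases": []}
    for step, value in selections:
        if step == "use_case":
            config["use_cases"].append(value)       # loops back to the menu
        elif step in ("connectivity", "iaas"):
            config[step] = value
        elif step == "enable":
            config["enabled"] = True
    return config
```

Modeling the wizard as a fold makes each selection independent, which matches the flowchart's note that modules can be reordered or run in parallel where applicable.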
In some embodiments, a CXP may implement a pricing model for a given customer that is provided on a per-workload basis. In some instances, a CXP may facilitate a selection of where to provision a given workload based on most cost-effective options. In some embodiments, a CXP may be deployed as a blockchain network, where audit logs may be transparent such that each change can be traced back to the admin who changed it. With other blockchain features that may be provided by the technology of the present disclosure, any topology changes, rule changes, or network changes may be traced. Similarly, rules may require consensus among a certain number of administrators before such a change takes effect. A footprint or signature of the network topology may be leveraged to generate a hash in such deployments. FIGS.6A-6Cdepict various architectures and illustrate a range of deployment options.FIG.6Adepicts a diagram600A that covers on-premises to and from IaaS, on-premises to and from SaaS, and remote users.FIG.6Bdepicts a diagram600B that covers IaaS to IaaS.FIG.6Cdepicts a diagram600C that covers end users outside of an enterprise. The diagram600A includes an SD-WAN component602, a multiprotocol label switching (MPLS) component604, an IPsec component606, a CSN608coupled to the SD-WAN component602, the MPLS component604, and the IPsec component606, services610coupled to the CSN608, a SaaS platform612coupled to the CSN608, an IaaS platform614coupled to the CSN608, a workload in the IaaS engine616coupled to the CSN608, a CSN′618coupled to the CSN608, and an IaaS′620coupled to the CSN′618. The diagram600A illustrates a flow from the SD-WAN component602, the MPLS component604, and the IPsec component606to the CSN608and from the CSN608to the SaaS platform612, the IaaS platform614, and the workload in the IaaS engine616in the form of arrows. 
The CSN608interacts with the services610(represented with double arrows) and the flow also passes from the CSN608to the CSN′618and then to the IaaS′620. In a specific implementation, the CSN′618has no associated services. In an alternative, the CSN608does not have the services610, is coupled to another CSN″, or both. In an alternative, IPsec component606is replaced with a proprietary secure network protocol suite, an alternative security suite, or a combination of these. The diagram600B includes a workload in the IaaS engine622, a CSN624coupled to the workload in the IaaS engine622, services626coupled to the CSN624, a workload in the IaaS engine628coupled to the CSN624, a CSN′630coupled to the CSN624, and an IaaS′632coupled to the CSN′630. The diagram600B illustrates a flow from the workload in the IaaS engine622to the CSN624and from the CSN624to the workload in the IaaS engine628in the form of arrows. The CSN624interacts with the services626(represented with double arrows) and the flow also passes from the CSN624to the CSN′630and then to the IaaS′632. Optionally, the CSN′630can send to a SaaS (not shown) to avoid a regional block, which is represented by a dashed arrow from the CSN′630. In a specific implementation, the workload in the IaaS engine622can enforce policy on packets it cannot see. The diagram600C includes Internet634, a CSN636coupled to the Internet634, services638coupled to the CSN636, a workload in the IaaS engine640coupled to the CSN636, a CSN′642coupled to the CSN636, and a workload in the IaaS engine644coupled to the CSN′642. The diagram600C illustrates a flow from the Internet634to the CSN636and from the CSN636to the workload in the IaaS engine640in the form of arrows. The CSN636interacts with the services638(represented with double arrows) and the flow also passes from the CSN636to the CSN′642and then to the workload in the IaaS engine644. 
In a specific implementation, the CSN′642and workload in the IaaS engine644are optional and in an alternative the CSN′642and workload in the IaaS engine644are absent. FIG.7is a diagram700of an example of an aggregated cloud services provisioning system. The diagram700includes an aggregated service exchange operator (ASEO) engine702, a first cloud service provider704coupled to the ASEO engine702, a first VPC706coupled to the first cloud service provider704, a second cloud service provider708coupled to the ASEO engine702, and a second VPC710coupled to the second cloud service provider708. The ASEO engine702aggregates services provided via the first cloud service provider704and the second cloud service provider708and makes the first VPC706and the second VPC710available through a common platform to a common customer base. In a specific implementation, the ASEO engine702is coupled to more than two cloud service providers (not shown). FIG.8is a diagram800of an example of a computer system. The diagram800includes a bus802or other communication mechanism for communicating information, one or more hardware processors (“the processor”)804coupled with the bus802for processing information, and a main memory (“the memory”)806coupled to the bus802for storing information and instructions to be executed by the processor804. The memory806also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by the processor804. Such instructions, when stored in storage media accessible to the processor804, render the computer system into a special-purpose machine configured to perform the operations specified in the instructions. Such instructions may be read into the memory806from another storage medium, such as the storage device810. 
Execution of the sequences of instructions contained in the memory806causes the processor804to perform process steps, such as those described in this paper, to act as a specially purposed machine. The diagram800further includes, coupled to the bus802, a read only memory (ROM)808or other static storage device—such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), to name three—for storing static information and instructions for the processor804, a display812—such as a liquid crystal display (LCD) (or touch screen)—for displaying information to a computer user, an input device814—such as a keyboard including alphanumeric and other keys—for communicating information and command selections to the processor804, and a cursor control device816—such as a mouse, a trackball, cursor direction keys, or other type of input device—for communicating direction information and command selections to the processor804and for controlling cursor movement on the display812. Instead or in addition, the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen (e.g., of the display812or some other device with a screen). A computing system may include a user interface module to implement a GUI that may be stored in a mass storage device as executable software codes that are executed by the computing device(s). This and other modules may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. In general, the word “component” can refer to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java, C or C++. 
An engine as used in this paper, however, always includes hardware or firmware; the term “engine,” as used in this paper, is not intended to represent a “software engine.” A software component may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software components may be callable from other components or from themselves, and/or may be invoked in response to detected events or interrupts. Software components configured for execution on computing devices may be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression or decryption prior to execution). Such software code may be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware components may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors. The term “non-transitory media,” and similar terms, as used herein refers to any media that store data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as the storage device810. Volatile media includes dynamic memory, such as the memory806. 
Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same. Non-transitory media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between non-transitory media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise the bus802. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. The diagram800also includes one or more network interfaces (“the communication interface”)818coupled to the bus802. The communication interface818provides a two-way data communication coupling to one or more network links that are connected to one or more local networks. For example, the communication interface818may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface818may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or WAN component to communicate with a WAN). Wireless links may also be implemented. In such an implementation, the communication interface818sends and receives electrical, electromagnetic or optical signals that carry data streams representing various types of information. A network link typically provides data communication through one or more networks to other data devices. 
For example, a network link may provide a connection through local network to a host computer or to data equipment operated by an Internet Service Provider (ISP). The ISP in turn provides data communication services through the worldwide packet data communication network now commonly referred to as the “Internet.” Local network and Internet both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link and through the communication interface818, which carry the digital data to and from the computer system, are example forms of transmission media. In operation, the computer system can send messages and receive data, including program code, through the network(s), network link, and communication interface818. In the Internet example, a server might transmit a requested code for an application program through the Internet, the ISP, the local network and the communication interface818. The received code may be executed by processor804as it is received, and/or stored in storage device810, or other non-volatile storage for later execution or playback. Each of the processes, methods, and algorithms described in this paper may be embodied in, and fully or partially automated by, code components executed by one or more computer systems or computer processors comprising computer hardware. The one or more computer systems or computer processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a SaaS. The processes and algorithms may be implemented partially or wholly in application-specific circuitry. The various features and processes described above may be used independently of one another or may be combined in various ways. Different combinations and sub-combinations are intended to fall within the scope of this disclosure, and certain method or process blocks may be omitted in some implementations. 
The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate, or may be performed in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments. The performance of certain of the operations or processes may be distributed among computer systems or computer processors, not only residing within a single machine, but deployed across a number of machines. As used herein, a circuit might be implemented utilizing any form of hardware, software, or a combination thereof. For example, one or more processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines or other mechanisms might be implemented to make up a circuit. In implementation, the various circuits described herein might be implemented as discrete circuits or the functions and features described can be shared in part or in total among one or more circuits. Even though various features or elements of functionality may be individually described or claimed as separate circuits, these features and functionality can be shared among one or more common circuits, and such description shall not require or imply that separate circuits are required to implement such features or functionality. Where a circuit is implemented in whole or in part using software, such software can be implemented to operate with a computing or processing system capable of carrying out the functionality described with respect thereto. As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, the description of resources, operations, or structures in the singular shall not be read to exclude the plural. 
Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. Adjectives such as “conventional,” “traditional,” “normal,” “standard,” “known,” and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. The presence of broadening words and phrases such as “one or more,” “at least,” “but not limited to” or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent.

FIG. 9 is a diagram 900 of an example of a CXP node configuration system. The diagram 900 includes an external orchestration engine 902, a routing component 904 coupled to the external orchestration engine 902, an IPsec component 906 coupled to the external orchestration engine 902, an operating system (OS) component 908 coupled to the external orchestration engine 902, and a forwarding component 910 coupled to the external orchestration engine 902. The routing component 904, the IPsec component 906, the OS component 908, and the forwarding component 910 can be collectively referred to as a configuration data structure 922.
The diagram 900 further includes a node configuration datastore 912 coupled to the configuration data structure 922, which represents a communication medium from the external orchestration engine 902 over which the configuration data structure is provided for storage in the node configuration datastore 912, a configured node 914 coupled to the node configuration datastore 912, a resource monitor 916 coupled to the configured node 914, an on-demand configuration engine 918 coupled to the resource monitor 916 and the node configuration datastore 912, a stateless node 920 coupled to the on-demand configuration engine 918, a tunnel state datastore 922 coupled to the external orchestration engine 902, and a tenant state datastore 924 coupled to the external orchestration engine 902.

The external orchestration engine 902 is intended to represent an engine that knows tunnel state (represented by the tunnel state datastore 922), which tenant is on which node (represented by the tenant state datastore 924), and how to configure nodes. The term “external” in this context is intended to mean node-external or router-external, as in the external orchestration engine 902 is implemented outside of a router. In a specific implementation, node configuration is performed outside of nodes of a CXP, such as nodes of the CXP 114 of FIG. 1. Advantageously, a node of a CXP can be ripped and replaced due to node configuration being stored outside of the node to be replaced. It may be noted that, with this implementation, it is not necessary for redundant nodes to synch with each other, which is beneficial because redundant nodes have a cost (e.g., synch modules); node-to-node synch communication is at least ameliorated and at best eliminated using the techniques described in this paper.

The routing component 904 is intended to represent a software component implemented on a configured node, such as the configured node 914. Routing forms virtual routing and forwarding (VRF) context for a tenant.
The IPsec component 906 is intended to represent a software component implemented on a configured node, such as the configured node 914. IPsec is a secure network protocol suite that authenticates and encrypts the packets of data to provide secure encrypted communication between two computers over an Internet Protocol network. IPsec includes protocols for establishing mutual authentication between agents at the beginning of a session and negotiation of cryptographic keys to use during the session. In a specific implementation, the IPsec component 906 is compliant with strongSwan, a multiplatform IPsec implementation.

The OS component 908 is intended to represent a software component implemented on a configured node, such as the configured node 914. In a specific implementation, the OS component 908 is compliant with Linux.

The forwarding component 910 is intended to represent a software component implemented on a configured node, such as the configured node 914. Forwarding includes flow management enabling flow-based routing. In a specific implementation, the forwarding component 910 is compliant with vector packet processing (VPP), a software algorithm that is used to quickly process network packets.

The node configuration datastore 912 is intended to represent a datastore of configuration parameters for a node. In a specific implementation, the node configuration datastore is an etcd datastore. etcd is a strongly consistent, distributed key-value store that provides a reliable way to store data that needs to be accessed by a distributed system or cluster of machines. In a specific implementation, the provisioning of nodes is accomplished using an entity relationship diagram (ERD) tool.

The configured node 914 is intended to represent a B-node, such as the B-node 104 of FIG. 1, an S-node, such as one of the S-nodes 108 of FIG. 1, or a V-node, such as the V-node 110 of FIG. 1.
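The key-value layout such a node configuration datastore might use can be sketched as follows. This is an illustrative sketch only: a plain Python dict stands in for the etcd cluster, and the key paths, component names, and parameter values are hypothetical, not the patent's actual schema.

```python
# Illustrative flat key-value layout for per-node configuration parameters
# (routing, IPsec, OS, forwarding), as one might store them etcd-style.
# A plain dict stands in for the distributed key-value store.

def write_node_config(store, node_id, config):
    """Flatten a nested per-node config into keys under /nodes/<id>/."""
    for component, params in config.items():   # e.g. routing, ipsec, os, forwarding
        for key, value in params.items():
            store[f"/nodes/{node_id}/{component}/{key}"] = str(value)

def read_node_config(store, node_id):
    """Rebuild the nested config from the flat key space."""
    prefix = f"/nodes/{node_id}/"
    config = {}
    for full_key, value in store.items():
        if full_key.startswith(prefix):
            component, key = full_key[len(prefix):].split("/", 1)
            config.setdefault(component, {})[key] = value
    return config

store = {}  # stands in for the etcd cluster
write_node_config(store, "s-node-1", {
    "routing":    {"vrf": "tenant-a", "bgp_asn": 65001},
    "ipsec":      {"ike_version": 2, "peer": "198.51.100.7"},
    "os":         {"kernel": "linux-5.15"},
    "forwarding": {"engine": "vpp", "workers": 4},
})
```

Because the configuration lives entirely outside the node under a per-node key prefix, a replacement node can be configured by reading back the same prefix, which is the property the text relies on for rip-and-replace.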
In a specific implementation, within the configured node 914 are configuration parameters such as represented in the diagram 900 as the configuration data structure 922 (i.e., the routing component 904, the IPsec component 906, the OS component 908, and the forwarding component 910). Although the configured node 914 is coupled to the node configuration datastore 912 and, at least by implication, received configuration parameter values from the node configuration datastore 912, it should be understood that, instead or in addition, the configured node 914 could be pre-configured (i.e., at least partially configured prior to being coupled to the node configuration datastore 912).

The resource monitor 916 is intended to represent an engine that sends a trigger to the on-demand configuration engine 918 responsive to a stimulus from the configured node 914. Instead or in addition, the stimulus could come from some other source, such as the external orchestration engine 902, which is represented in the diagram 900 as a dotted arrow from the external orchestration engine 902 to the resource monitor 916. The stimulus is indicative of a need to spin up additional nodes to handle network resource consumption.

The on-demand configuration engine 918 is intended to represent an engine that provides node configuration parameter values to the stateless node 920 in response to a trigger from the resource monitor. In a specific implementation, the trigger is an indication that additional nodes are needed to handle network resource consumption. If network resource consumption decreases, the stimulus from the configured node 914 to the resource monitor 916 could also trigger the on-demand configuration engine 918 to tear down nodes (not shown).

The stateless node 920 is intended to represent a node that is not initially employed to handle network resource demands (e.g., traffic).
Upon obtaining configuration parameter values from the node configuration datastore 912 via the on-demand configuration engine 918, where the configured node 914 is a first configured node, the stateless node 920 becomes a second configured node. In an alternative, the stateless node 920 could initially be handling network resource demands, but its configuration is changed by the on-demand configuration engine 918 upon receipt of a trigger at the on-demand configuration engine 918 from the resource monitor 916.

FIG. 10 is a diagram 1000 of an example of a system of VPCs with a master Kubernetes cluster. The diagram 1000 includes a master Kubernetes cluster 1002 and a VPC 1004-1 to a VPC 1004-n (collectively the VPCs 1004). The VPC 1004-1 includes a data plane 1006 and Kubernetes pods 1008. The data plane 1006 includes a Vnet-facing datastore 1010, a service point datastore 1012, and a branch-facing datastore 1014, which correspond to data associated with a V-node, S-node, and B-node respectively. The Kubernetes pods 1008 include an orchestrator pod 1016-1 to an orchestrator pod 1016-n (collectively the orchestrator pods 1016). The orchestrator pods 1016, potentially including engines from the master Kubernetes cluster 1002, are components of an orchestration engine, such as the external orchestration engine 902 of FIG. 9. The VPCs 1004-2 to 1004-n have a data plane and Kubernetes pods, as well (not shown).

FIG. 11 is a diagram 1100 of an example of a system with regional CXPs. The diagram 1100 includes a United States-West (US-W) CXP 1102, an Amazon Web Services (AWS) network 1104, and a United States-East (US-E) CXP 1106 coupled to the US-W CXP 1102 via the AWS network 1104. Specifically, the US-W CXP 1102 and the US-E CXP 1106 are coupled together via at least VXLAN 1108 and VXLAN 1110. The names US-W and US-E are chosen merely for illustrative purposes. Instead of or in addition to the AWS network 1104, some other cloud network could be used to couple the US-W CXP 1102 and the US-E CXP 1106.
The diagram 1100 further includes a VPC 1112-1 to a VPC 1112-n (collectively, the VPCs 1112) coupled to the US-W CXP 1102 and a VPC 1114-1 to a VPC 1114-n (collectively, the VPCs 1114) coupled to the US-E CXP 1106. The diagram 1100 further includes a virtual gateway (VGW) device 1116 at the US-E CXP 1106 and customer premises equipment (CPE) 1118 coupled to the VGW device 1116 via border gateway protocol (BGP) 1120.

FIG. 12 is a flowchart 1200 of an example of a method for configuring a stateless node. The flowchart 1200 starts at module 1202 with updating a node configuration datastore with node configuration parameter values. The node configuration datastore may or may not include non-volatile storage. In a specific implementation, the node configuration parameter values include one or more of routing, IPsec, operating system, and forwarding parameter values. The module 1202 can be carried out by an orchestration engine, such as a multi-tenant orchestration engine (e.g., the multi-tenant orchestration engine 220 of FIG. 2 or the multi-tenant orchestration engine 420 of FIG. 4), an external orchestration engine (e.g., the external orchestration engine 902 of FIG. 9), or an external multi-tenant orchestration engine (combining the functionality of the multi-tenant orchestration engine 220 and the external orchestration engine 902).

The flowchart 1200 continues to module 1204 with providing a stimulus to a resource monitor indicative of a network resource consumption state. The stimulus can include an indication from a configured node, such as an S-node, B-node, or V-node, that network resource consumption is exceeding a network resource consumption threshold for the configured node. The network consumption threshold can be associated with network resource consumption that exceeds a value or that drops below a value. Alternatively, the indication can represent a level of network resource consumption at the configured node.
In a specific implementation, the stimulus includes multiple values from a plurality of configured nodes that, considered in the aggregate, provide a snapshot or estimate of network resource consumption at the plurality of configured nodes. The stimulus can be pushed from a configured node to a resource monitor or pulled from a configured node by a resource monitor. The module 1204 can be carried out by a resource monitor, such as the resource monitor 916 of FIG. 9 (and the configured node 914 of FIG. 9).

The flowchart 1200 ends at module 1206 with, responsive to the stimulus, triggering configuration of a stateless node with the node configuration parameter values. The module 1206 can be carried out by an on-demand configuration engine, such as the on-demand configuration engine 918 of FIG. 9 (and the stateless node 920 of FIG. 9).
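The three modules of the flowchart can be sketched as a minimal control loop. All class names, thresholds, and configuration values below are illustrative assumptions, not identifiers from the patent.

```python
# Minimal sketch of the flowchart's flow: an orchestrator updates the
# config datastore (module 1202), a resource monitor compares consumption
# against a threshold (module 1204), and an on-demand engine configures a
# stateless node when triggered (module 1206). Names are invented.

class OnDemandConfigurationEngine:
    def __init__(self, config_datastore, stateless_nodes):
        self.datastore = config_datastore
        self.stateless_nodes = stateless_nodes   # pool of unconfigured nodes
        self.configured = []

    def trigger(self):
        # Module 1206: configure a stateless node with stored parameter values.
        if self.stateless_nodes:
            node = self.stateless_nodes.pop()
            node.update(self.datastore)          # node receives config values
            self.configured.append(node)

class ResourceMonitor:
    def __init__(self, threshold, on_demand_engine):
        self.threshold = threshold
        self.engine = on_demand_engine

    def stimulus(self, consumption):
        # Module 1204: stimulus indicative of network resource consumption.
        if consumption > self.threshold:
            self.engine.trigger()

# Module 1202: orchestrator updates the node configuration datastore.
datastore = {"routing": "vrf tenant-a", "ipsec": "ikev2", "forwarding": "vpp"}
engine = OnDemandConfigurationEngine(datastore, stateless_nodes=[{}])
monitor = ResourceMonitor(threshold=0.8, on_demand_engine=engine)

monitor.stimulus(consumption=0.95)   # exceeds threshold: a node is spun up
```

A tear-down path for decreasing consumption, mentioned in the text, would be a symmetric branch in `stimulus` and is omitted here for brevity.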
11943108
DETAILED DESCRIPTION Those skilled in the art will appreciate that the steps, services and functions explained herein may be implemented using individual hardware circuitry, using software functioning in conjunction with a programmed microprocessor or general purpose computer, using one or more Application Specific Integrated Circuits (ASICs) and/or using one or more Digital Signal Processors (DSPs). It will also be appreciated that when the present disclosure is described in terms of a method, it may also be embodied in one or more processors and one or more memories coupled to the one or more processors, wherein the one or more memories store one or more programs that perform the steps, services and functions disclosed herein when executed by the one or more processors.

It should be noted that although terminology from a specific telecom network system such as e.g. 3GPP 5G may be used herein to explain the example embodiments, this should not be seen as limiting the scope of the example embodiments to only the aforementioned system. Other telecom systems, including LTE, WCDMA, WiMax, UMB, GSM, and evolutions in the underlying technologies or standards of the current framework may also benefit from the example embodiments disclosed herein. In the following description of exemplary embodiments, the same reference numerals denote the same or similar components.

The following descriptions will be made in reference to FIG. 1 and FIG. 2, where FIG. 1 shows a flow chart representation of a method 100 for SLA assurance in a telecom environment and FIG. 2 shows a corresponding schematic block diagram. The method 100 comprises obtaining 101 annotated alarm data 1 that comprises an indication of a threshold crossing of at least one state variable of the telecom network defined by the SLA. In other words, obtaining information indicative of e.g. a failure to provide an agreed-upon service or of an exceeding of resource commitments in the telecom network.
The obtained 101 annotated alarm data further comprises an indication of a desired value of the at least one state variable of the telecom network. For example, the obtained 101 annotated alarm data 1 may comprise an indication of a desired value for network latency, availability, throughput, etc. Thus, the annotated alarm data 1 comprises both the alarm (an indication that something is wrong) and a desired value of one or more state variables. Moreover, the obtained 101 annotated alarm data 1 may be for a specific network slice (network slices and network slice instances are introduced in 3GPP 5th Generation mobile communication technologies). Thus, the raised alarm 1 may relate to some issue in the Radio Access Network (RAN) or in the core network.

Further, the method 100 comprises determining 102 a desired state 2 of the telecom network based on the desired value of the at least one state variable. For example, if the obtained 101 alarm data 1 indicates that a certain segment of the telecom network has excessive latency, then the upper limit (threshold) of the latency may be included in the indicated desired value of the latency. The desired value is then any value, or a range of values, below this upper limit (threshold). This desired value of the latency, and possibly further desired values of other state variables, together form the desired state 2. The determination 102 of the desired state 2 may be based on a predefined static model, a predefined static function or a trained self-learning model 8a-8N such as a trained neural network.

The desired state 2 is now used as input to a “policy engine”. Thus, the method 100 further comprises selecting 103 a set of policy actions (a set being defined as one or more) from a policy action bank 3. In some embodiments, the selection 103 of policy actions is performed by means of a trained self-learning model 8a-8N, such as e.g. a trained machine learning algorithm, a trained neural network, etc.
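The shape of annotated alarm data and its mapping to a desired state can be sketched as follows. The field names, units, and the static mapping function are hypothetical illustrations; the text notes that a trained model could replace the static mapping.

```python
# Sketch of "annotated alarm data": the alarm (a threshold crossing) plus
# an annotation carrying the desired value of the state variable, from
# which a desired state is derived. Names and numbers are invented.

from dataclasses import dataclass

@dataclass
class AnnotatedAlarm:
    state_variable: str   # e.g. "latency_ms"
    observed: float       # value that crossed the SLA-defined threshold
    threshold: float      # the SLA-defined threshold
    desired: float        # the annotation: desired value after remediation

def desired_state(alarms):
    """Predefined static mapping from annotated alarms to a desired state
    (a trained self-learning model could stand in for this function)."""
    return {a.state_variable: a.desired for a in alarms}

alarm = AnnotatedAlarm("latency_ms", observed=48.0, threshold=30.0, desired=25.0)
state = desired_state([alarm])
```

The desired state here is deliberately a per-variable target dictionary, so that a later selection step can test whether predicted effects reach every target.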
The policy action bank 3 comprises a plurality of policy actions, where each policy action is associated with one or more estimated action effects 4a, 4b. Analogously to the desired state determination, the action effects 4a, 4b may be explicitly modelled or learned by means of e.g. a machine-learning algorithm 8a-8N by observing the telecom network over time. The present inventors realized that alarms 1 themselves are indicators of something having gone wrong in the telecom network or network slice. Hence, it was realized that it is possible to derive a more fine-grained description (desired state) defined by the desired value of multiple state variables. Thus, the desired state may be understood as a breakdown of a higher-level broken SLA (indicated by the alarm) into one or more Key Performance Indicators (KPIs) reflected by the desired state.

Moreover, the estimated action effects (i.e. the effect of taking an action) may be difficult to completely compute (i.e. to generate an explicit mathematical representation of). In such cases it is proposed to provide a self-learning model (e.g. a machine learning algorithm) that is configured to “learn” the effect of actions over time, wherefrom one can abstract this information into the estimated action effects. Moreover, the selection 103 of the set of policy actions is based on the estimated action effects 4a, 4b of the plurality of policy actions and on the determined 102 desired state of the telecom network such that the desired state 2 is reached (or at least estimated to be reached with a degree of certainty) upon execution of the selected set of policy actions. In other words, with the estimated action effect(s) 4a, 4b and the desired state 2 in place, the job of the proposed policy engine 5 is to find the right policy action or group of policy actions whose effect(s) result(s) in the required desired state 2.
Thus, the step of selecting 103 the set of policy actions may comprise obtaining 105 the action effects 4a, 4b of each policy action and then solving 108 an optimization problem in order to determine the optimal series of actions to be executed in order to reach the desired state. The method 100 further comprises sending 104 the selected 103 set of policy actions to an action orchestrating module 6 for execution of the selected 103 set of policy actions. Accordingly, the selected policy actions are subsequently executed one by one by the action orchestrating module 6. This method 100 for the policy engine 5 may provide a benefit of the executed policy actions being tailored to the desired state of the system.

Further, each estimated action effect 4a, 4b may further be associated with one or more cost parameters. Thus, the step of selecting 103 the set of policy actions may further comprise obtaining 106 the cost parameter(s). Accordingly, the aforementioned numerical optimization problem may further be based on these obtained 106 cost parameters. For example, if the selected 103 set of policy actions includes an action of rebooting a virtual machine (VM) in a network slice, the cost of this action may be in terms of the time it takes to do it. Thus, this can be used to guide the optimization. Other cost parameters may for example be a quantified risk parameter, monetary or financial costs associated with an action, and so forth.

Moreover, in some embodiments, each of the plurality of policy actions is further associated with at least one pre-condition 7a, 7b of the telecom network. Each pre-condition 7a, 7b defines a predefined parameter value or a predefined parameter range of a state variable of the telecom network. Accordingly, the method 100 may further comprise obtaining a current state of the telecom network, where the current state is defined by current values of a plurality of state variables of the telecom network (e.g. active services, latency values, availability, etc.).
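The optimization step described above can be sketched as a small combinatorial search: pick the cheapest subset of policy actions whose summed estimated effects are predicted to reach the desired state. The action names, effect magnitudes, and costs below are invented for illustration; a real policy bank would be larger and a real solver would not enumerate all subsets.

```python
# Sketch of selection as optimization: brute-force over subsets of policy
# actions, keeping the cheapest subset whose combined estimated effect on
# latency reaches the desired value. All values here are hypothetical.

from itertools import combinations

actions = {   # action -> (estimated latency effect in ms, cost)
    "reboot_vm":         (-5.0, 3.0),
    "add_load_balancer": (-12.0, 8.0),
    "scale_out_service": (-20.0, 10.0),
}

def select_actions(current_latency, desired_latency):
    best, best_cost = None, float("inf")
    names = list(actions)
    for r in range(1, len(names) + 1):
        for subset in combinations(names, r):
            effect = sum(actions[a][0] for a in subset)
            cost = sum(actions[a][1] for a in subset)
            # feasible if the predicted latency reaches the desired value
            if current_latency + effect <= desired_latency and cost < best_cost:
                best, best_cost = subset, cost
    return best, best_cost

chosen, cost = select_actions(current_latency=48.0, desired_latency=30.0)
```

With these invented numbers, only `scale_out_service` (or a more expensive combination) closes the 18 ms gap, so it is selected alone at cost 10.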
The selection 103 of the set of policy actions may further be based on the obtained current state of the telecom network and the obtained 106 pre-conditions 7a, 7b. In more detail, the current state of the telecom network and the obtained pre-conditions 7a, 7b may set constraints for the numerical optimization problem that must be fulfilled by the solution. Stated differently, each policy action can be associated with one or more pre-conditions 7a, 7b. These pre-conditions 7a, 7b can then be matched with a current state of the telecom network. The policy engine 5 may accordingly be configured to match pre-conditions 7a, 7b of the policy actions with the current state of the telecom network so as to ensure that only policy actions whose pre-conditions 7a, 7b are met are executed. Thus, it is now possible to generate or determine pre-conditions 7a, 7b for the policy actions that gate/stop a policy action from being executed when it could potentially adversely affect other traffic or interfere with other actions in place in the telecom network. This makes the policy action execution more robust and sensitive to the dynamics of the telecom network.

In some embodiments, the selection of policy actions comprises computing a cost function based on the desired state of the telecom network, the estimated action effects 4a, 4b, the current state of the telecom network, and the pre-conditions 7a, 7b of the plurality of policy actions. Accordingly, the policy actions whose associated predefined pre-condition(s) 7a, 7b is/are not fulfilled based on the current state of the telecom network are associated with a higher cost than the policy actions whose associated predefined pre-condition(s) 7a, 7b is/are fulfilled based on the current state of the telecom network. The action effect 4a, 4b of each policy action may be estimated by means of a trained self-learning model 8a-8N.
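The precondition-aware cost function just described can be sketched as follows: an action whose pre-conditions do not hold in the current state receives a large penalty, so the optimizer effectively excludes it. The penalty value, state variables, and ranges are invented for illustration.

```python
# Sketch of a cost function in which a policy action whose pre-conditions
# are not fulfilled by the current network state incurs a large penalty.
# All variable names and numbers are hypothetical.

PENALTY = 1000.0   # large relative to any base cost, effectively a gate

def action_cost(base_cost, preconditions, current_state):
    """preconditions: {state_variable: (lo, hi)} ranges that must hold
    in current_state for the action to be executed at its base cost."""
    for var, (lo, hi) in preconditions.items():
        value = current_state.get(var)
        if value is None or not (lo <= value <= hi):
            return base_cost + PENALTY    # pre-condition violated: penalized
    return base_cost                      # pre-conditions met: base cost only

# "Reboot VM" allowed only if the VM serves at most one slice (invented rule):
precond = {"slices_on_vm": (0, 1)}
cost_ok  = action_cost(3.0, precond, {"slices_on_vm": 1, "cpu_load": 0.4})
cost_bad = action_cost(3.0, precond, {"slices_on_vm": 3})
```

A soft penalty, as opposed to a hard filter, keeps the optimization problem feasible even when no fully safe action exists, while still strongly preferring safe ones.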
This trained self-learning model 8a-8N may be the same trained self-learning model 8a-8N used for the determination of the desired state, or an independent (second) trained self-learning model 8a-8N. More specifically, the method 100 may further comprise monitoring a state of the telecom network before and after the selected 103 set of policy actions is executed. The state of the telecom network is defined by values of a plurality of the state variables of the telecom network. The raised alarm(s) is/are, as mentioned, an indication of a threshold crossing of one or more of the state variables. Further, the method 100 may comprise observing changes of any values of the plurality of state variables while monitoring the state of the telecom network after the selected 103 set of policy actions has been executed. These observed changes can later be defined as estimated action effects 4a, 4b of one or more of the policy actions of the now executed set of policy actions. Accordingly, the trained self-learning model 8a-8N (used to determine the estimated action effects) can be updated based on the observed changes. In other words, if a neural network 8a-8N is used to determine the estimated action effects (i.e. to estimate the action effects), the weights of the neural network are updated.

Yet further, in some embodiments, when the selected 103 set of policy actions comprises a plurality of policy actions, the method 100 may comprise obtaining at least one intermediate state of the telecom network. The at least one intermediate state is defined by predicted or expected values of the plurality of state variables of the telecom network during an ordered execution of the selected set of policy actions. Accordingly, the step of selecting 103 the set of policy actions comprises selecting an order of execution of the selected 103 set of policy actions based on the obtained at least one intermediate state.
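The ordering step just described can be sketched as a search over permutations in which each candidate order is simulated and accepted only if every action's pre-condition holds in the predicted intermediate state just before it runs. The actions, effects, and pre-conditions below are invented for illustration.

```python
# Sketch of intermediate-state ordering: simulate each execution order,
# applying each action's predicted effects to build intermediate states,
# and accept an order only if all pre-conditions hold along the way.
# State variables, deltas and pre-conditions are hypothetical.

from itertools import permutations

def order_actions(actions, start_state):
    """actions: {name: (effects, precondition)} where effects is a
    {var: delta} dict and precondition is a predicate over a state."""
    for order in permutations(actions):
        state, feasible = dict(start_state), True
        for name in order:
            effects, precond = actions[name]
            if not precond(state):            # check against intermediate state
                feasible = False
                break
            for var, delta in effects.items():  # predict next intermediate state
                state[var] = state.get(var, 0) + delta
        if feasible:
            return order
    return None

acts = {
    # scaling out frees capacity...
    "scale_out": ({"free_capacity": +2}, lambda s: True),
    # ...which the migration requires before it can run:
    "migrate_service": ({"free_capacity": -1}, lambda s: s["free_capacity"] >= 1),
}
order = order_actions(acts, start_state={"free_capacity": 0})
```

Starting with no free capacity, only the order that scales out first is feasible, which mirrors the text's point that an earlier action can satisfy a later action's pre-condition.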
Thus, one can account for the effect that an executed policy action has on the network and how that correlates with the pre-conditions of the subsequent policy actions to be executed. Therefore, the obtained intermediate states may be used to optimally control an order of execution of the selected 103 plurality of policy actions. Thereby, the whole policy action execution procedure is optimized based on “intermediate” states and conditions instead of only initial conditions. For example, even if the pre-conditions of a policy action are or are not fulfilled by the current state of the network, that aspect may change upon the execution of one or more other policy actions, rendering the whole SLA assurance method more dynamic and adaptable.

FIG. 3 shows a schematic illustration of a control device 30 or control module 31 of a network function comprising control circuitry (e.g. one or more processors) configured to execute the instructions 32 stored in the memory 31 so as to perform a method according to any one of the embodiments disclosed herein. In other words, executable instructions 32 for performing these functions are, optionally, included in a non-transitory computer-readable storage medium 31 or other computer program product configured for execution by one or more processors 33.

FIG. 4 is a schematic block diagram representation of a system 40 for operating a telecom network having a Service Level Agreement (SLA) 41 according to an exemplary embodiment of the present disclosure. The system 40 has a policy engine module 44 comprising control circuitry configured to obtain annotated alarm data comprising an indication of a threshold crossing of one or more state variables of the telecom network or network slice 48 (referred to as the network in the following). The thresholds are defined by the SLA 41 of the telecom network 48. The annotated alarm data further comprises an indication of a desired value of the one or more state variables of the telecom network 48.
The various modules 42, 43, 44, 45 of the system 40 may for example be realized in accordance with the schematic illustration of the control device/unit of FIG. 3. The system 40 has an SLA monitoring module 42 having control circuitry configured to obtain a current state of the telecom network 48. The current state is defined by current values (numerical values, strings, or any other suitable format as known in the art) of a plurality of state variables of the telecom network 48. Further, the control circuitry of the SLA monitoring module 42 is configured to obtain a plurality of state variable thresholds (which may also be referred to as quality indicator thresholds) as defined by the SLA 41. Each state variable of the plurality of state variables is associated with one or more state variable thresholds. Moreover, the SLA monitoring module's control circuitry is configured to form or generate alarm data upon a crossing of a state variable threshold by at least one state variable, based on a comparison between the obtained plurality of state variables and the obtained plurality of state variable thresholds. Then, the control circuitry is configured to form the annotated alarm data by associating the formed/generated alarm data with the desired value of the at least one state variable (causing the raised alarm) of the telecom network 48. This annotated alarm data is subsequently sent/provided to the policy engine module 44.

However, optionally, the steps of forming the annotated alarm data and sending the annotated alarm data may be performed by a different module, namely an annotation module 43. Accordingly, the annotation module 43 may comprise circuitry configured to obtain alarm data comprising the indication of the threshold crossing (from the SLA monitoring module 42), and to form the annotated alarm data by associating the obtained alarm data with the desired value of the at least one state variable of the telecom network.
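The monitoring-and-annotation step can be sketched as a threshold comparison that emits annotated alarms. The variable names, thresholds, and desired values are illustrative; the sketch treats thresholds as upper limits, though the text allows lower limits as well.

```python
# Sketch of the SLA monitoring step: compare current state variables with
# SLA-defined thresholds and, on a crossing, emit alarm data annotated
# with the desired value for that variable. All numbers are invented.

def monitor(current_state, sla_thresholds, desired_values):
    """sla_thresholds holds upper limits here; a crossing produces an
    alarm annotated with the desired value (the annotation step)."""
    annotated_alarms = []
    for var, threshold in sla_thresholds.items():
        value = current_state[var]
        if value > threshold:                      # threshold crossing
            annotated_alarms.append({
                "state_variable": var,
                "observed": value,
                "threshold": threshold,
                "desired": desired_values[var],    # annotation with metadata
            })
    return annotated_alarms

alarms = monitor(
    current_state={"latency_ms": 48.0, "availability": 0.999},
    sla_thresholds={"latency_ms": 30.0},
    desired_values={"latency_ms": 25.0},
)
```

Splitting the plain alarm from its annotation, as the optional annotation module 43 does, would simply move the `desired_values` lookup into a second function.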
In general, an alarm indicating a condition is normally raised by a monitoring system. An “annotated alarm” may in the present context be understood as a raised alarm with metadata indicating a desired effect. For example, if the alarm in the general case is about a latency increase for a certain network slice, then the annotated alarm would additionally contain a minimum desired lowering of the latency.

Further, once the policy engine module 44 has obtained the annotated alarm data, the control circuitry of the policy engine module 44 is configured to determine a desired state of the telecom network based on the desired value(s) of the at least one state variable. The definition and other details related to the desired state and that terminology have already been discussed in the foregoing with reference to FIGS. 1 and 2, and will therefore, for the sake of brevity and conciseness, not be further elaborated upon as they are considered to be readily understood by the skilled reader. In some embodiments, the annotation module may be a part of or integrated with the policy engine module 44 (not shown). Moreover, the annotation module 43 may be configured to determine the desired state of the telecom network 48 based on the desired value(s) of the at least one state variable.

Furthermore, the control circuitry of the policy engine module 44 is configured to select a set (i.e. one or more) of policy actions from a policy action bank 46. The policy action bank 46 comprises a plurality of policy actions, where each policy action is associated with at least one estimated action effect. Accordingly, the selection of the set of policy actions is based on the estimated action effects of the plurality of policy actions and on the determined desired state of the telecom network such that the desired state is reachable upon execution of the selected set of policy actions.
The selected set of policy actions is subsequently sent to an action orchestrating module 47 for execution of the selected set of policy actions. For example, in the case of the telecom network being a network slice, the set of policy actions may be one or more of: rebooting a specific Virtual Machine (VM), adding a load balancer at a specific location, scaling out a specific service, moving a specific service to a new VM, increasing buffer size, etc.

The estimated action effects of the plurality of policy actions may for example be determined by means of an effect estimation module 45. More specifically, the estimated action effects may be determined by means of an explicit mathematical function/model or by means of a trained self-learning model. In more detail, the effect estimation module 45 comprises control circuitry configured to monitor a state of the telecom network 48 before and after the selected set of policy actions is executed, the state being defined by values (numerical values, strings, etc.) of a plurality of state variables of the telecom network 48. Moreover, the control circuitry of the effect estimation module 45 is configured to observe changes of any values of the plurality of state variables while monitoring the state of the telecom network when the selected set of policy actions has been executed, and to update the trained self-learning model based on the observed changes. However, as mentioned, the estimated action effects may also be explicitly modelled and subject to regular updates over time based on aggregated data.

In more detail, each policy action is performed because it provides a certain positive effect. However, some policy actions may have negative effects as well.
For example, if the policy action is to reboot a VM, it may potentially have the positive effect that the buffers are cleared up and latency through the VM reduces. It may also have the effect that for a brief period there is a service discontinuity. Such and other effects are determined by the effect estimation module 45. However, even if it is stated that each policy action has one or more effects, in a complex system such as a telecom network, these effects are not easy to compute/predict. For example, the action of rebooting the VM has a latency decrease effect, but the actual effect may be quite difficult to compute as it is possibly a complex function of the current state of the network slice and the VM, etc. Therefore, it is advantageous to employ some Artificial Intelligence (AI) functionality such as machine learning to observe the effects over repeated instances of the action in order to train the AI function, and to be able to more accurately estimate the effects of various actions in various network states.

The effect estimation module 45 can be said to learn initial effects by merely observing effects of the actions, taking into account inventory items (e.g. current state and network topology). Inputs to the effect estimation module 45 comprise the action, the topology, and the state of the network before the action, together with the corresponding effect on the network observed after the action was executed. In one example realization, the effect estimation module 45 may be employed to observe the network with a human operator selecting the set of actions to be executed. This may then be used to initialize e.g. a Reinforcement Learning (RL) model for computing the effects of various actions. Thus, over time the effect estimation module 45 learns by observing relationships between the input and the output to provide a better effect estimate for each policy action (e.g. by updating the Reinforcement Learning or Machine Learning model).
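The observe-and-update loop described above can be sketched with the simplest possible learner: a running average of observed before/after deltas per (action, state variable). This stands in for the RL or ML model in the text; a real estimator would also condition on topology and current state, and all names and numbers here are invented.

```python
# Sketch of effect estimation by observation: after each execution, the
# delta between before and after states is folded into an incremental
# mean per (action, state variable). A stand-in for the RL/ML model.

class EffectEstimator:
    def __init__(self):
        self.estimates = {}   # (action, var) -> mean observed delta
        self.counts = {}      # (action, var) -> number of observations

    def observe(self, action, before, after):
        """Fold one before/after observation into the running estimates."""
        for var in before:
            delta = after[var] - before[var]
            key = (action, var)
            n = self.counts.get(key, 0)
            mean = self.estimates.get(key, 0.0)
            # incremental mean update: new_mean = mean + (delta - mean)/(n+1)
            self.estimates[key] = mean + (delta - mean) / (n + 1)
            self.counts[key] = n + 1

    def estimate(self, action, var):
        """Estimated effect of an action on a state variable (0 if unseen)."""
        return self.estimates.get((action, var), 0.0)

est = EffectEstimator()
est.observe("reboot_vm", {"latency_ms": 40.0}, {"latency_ms": 34.0})  # delta -6
est.observe("reboot_vm", {"latency_ms": 50.0}, {"latency_ms": 46.0})  # delta -4
```

After two observations the estimated latency effect of the hypothetical `reboot_vm` action is the mean of the two deltas; the same interface could back the selection optimization discussed earlier.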
Moreover, in some embodiments, each of the plurality of policy actions held by the policy action bank 46 is further associated with at least one pre-condition of the telecom network 48. More specifically, each pre-condition defines a predefined parameter value or a predefined parameter range of a state variable of the telecom network 48. Accordingly, the control circuitry of the policy engine module 44 is further configured to obtain a current state of the telecom network 48, the current state being defined by current values of a plurality of state variables of the telecom network 48. Thereby, the selection of the set of policy actions is further based on the obtained current state of the telecom network 48 and the obtained pre-conditions. Stated differently, each of the above mentioned policy actions may be annotated with pre-conditions that are required to hold before the policy action is performed. These may cover safety considerations or necessary pre-configurations that constrain under what states (of the telecom network 48) the action may be allowed to be performed. If for example the policy action is "rebooting a specific VM", the pre-condition may be that it may only be performed/executed provided that it is not serving another slice. In another example embodiment, a pre-condition may be that all other slices sharing this VM are not latency critical. Still further, the control circuitry of the policy engine module 44 may be further configured to compute a cost function based on the desired state of the telecom network 48, the estimated action effects of the plurality of policy actions, the current state of the telecom network, and the pre-conditions of the plurality of policy actions.
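Pre-conditions of the kind described above can be expressed as allowed ranges over state variables and checked against the current network state before an action is considered. The sketch below is illustrative only; the action names, variable names, and ranges are assumptions, not taken from the disclosure.

```python
# Illustrative sketch: pre-conditions as allowed ranges over state variables,
# checked against the current network state (names are assumptions).
PRECONDITIONS = {
    # action -> list of (state_variable, min_allowed, max_allowed)
    "reboot_vm": [("slices_served_by_vm", 0, 1)],  # only the slice at hand
    "scale_out": [("free_hosts", 1, float("inf"))],
}

def preconditions_hold(action, state):
    """True iff every annotated pre-condition of `action` holds in `state`."""
    for var, lo, hi in PRECONDITIONS.get(action, []):
        if not (lo <= state.get(var, 0) <= hi):
            return False
    return True

state = {"slices_served_by_vm": 3, "free_hosts": 2}
print(preconditions_hold("reboot_vm", state))  # False: VM serves other slices
print(preconditions_hold("scale_out", state))  # True
```

A cost function as described above could then penalize (rather than forbid) actions whose pre-conditions do not hold in the current state.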
The policy actions whose associated predefined pre-condition(s) is/are not fulfilled based on the current state of the telecom network are associated with a higher cost than the policy actions whose associated predefined pre-condition(s) is/are fulfilled based on the current state of the telecom network. In a 5G system, the high-level service level agreements (SLAs) are converted into low-level quality indicators (state variable thresholds). In such systems, the low-level quality indicators could be throughput, latency, availability, etc. A network management entity (e.g. an SLA monitoring module) could typically monitor the state of the telecom network, and when a particular quality indicator (state variable) goes "bad", an alarm may be raised by the network management entity, which could trigger an action derived from the current network state and policy rules as described in the foregoing. In one exemplary embodiment, the state variable could be the bitrate for a video rendering service, for example where an end-user is watching a video with a certain quality assurance guaranteed to the client in the SLA. This can be monitored using a state variable such as the throughput in the downlink transmission for the video service-type to the client. When this is not met, an alarm is raised by the SLA monitoring module in the network management system. Based on the state of the network, a policy rule directing a sequence of actions is initiated. If, for example, the state of the network is such that sufficient additional radio resources (e.g. more sub-carrier/time allocations in the lower level) are available, then the policy rule will trigger an action resulting in providing more resources to the video rendering network slice to meet the SLA. In another exemplary embodiment, one can consider availability (for example, 99% availability) of a service such as billing information to clients in a telecom network as a state variable.
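The conversion of high-level SLAs into low-level quality-indicator thresholds, monitored to raise alarms, can be sketched as follows. The threshold values and variable names below are illustrative assumptions, not part of the disclosure.

```python
# Minimal sketch of an SLA monitoring module: high-level SLAs converted to
# low-level state-variable thresholds, with an alarm raised on violation.
SLA_THRESHOLDS = {
    # state variable -> minimum acceptable value (illustrative numbers)
    "video_downlink_throughput_mbps": 5.0,
    "billing_service_availability": 0.99,
}

def check_sla(state):
    """Return a list of alarms, one per violated low-level quality indicator."""
    alarms = []
    for var, minimum in SLA_THRESHOLDS.items():
        value = state.get(var)
        if value is not None and value < minimum:
            alarms.append({"indicator": var, "value": value, "required": minimum})
    return alarms

state = {"video_downlink_throughput_mbps": 3.2,
         "billing_service_availability": 0.999}
for alarm in check_sla(state):
    print("ALARM:", alarm["indicator"], alarm["value"], "<", alarm["required"])
```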
Thus, when the SLA monitoring module generates an alarm, e.g. because the state of the telecom network is such that the billing server cannot be reached, the policy engine module may trigger a reboot of that particular server. In both of the above described embodiments, the effect of the action is captured directly to address the cause of the SLA violation. In summary, with the above proposed method and system one can determine a set of actions for execution that is far better tailored (than previously known) to meet the requirements to achieve the desired SLA with reduced negative side effects. This may form the basis for a more robust closed-loop operation to maintain SLAs. In more detail, the trigger to the whole sequence of process steps is an annotated alarm generated by the SLA monitoring module 42. Then, the desired state, estimated effects, policy action pre-conditions, and policy action cost parameters are used by or provided as input to the policy engine module 44. The policy engine module "makes" a decision (selects a set of policy actions for execution) using any standard optimization technique, such as e.g. AI planning, to arrive at one or more actions in an ordered sequence that gives the desired state. Thus, the selection of the set of policy actions is goal driven and actively uses the estimated effects to select a cost-optimal sequence of actions. For example, a cost parameter may be the expected time of the action. In that case, the optimal action sequence is one that takes the minimum amount of time and still reaches the desired state. The set of policy actions (action sequence) is then sent to an orchestrator 47 that executes the actions on the network/network slice 48. The results of the execution are monitored against the SLAs 41 and, if needed, a new alarm is generated at a future time, closing the loop. Furthermore, as mentioned in the foregoing, the conventional approach is to use policies that are statically defined.
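The goal-driven selection step above, where estimated effects and a time cost drive the search for an action sequence reaching the desired state, can be sketched as a small uniform-cost search. This is one possible realization of "any standard optimization technique such as AI planning"; the actions, effects, and costs are illustrative assumptions.

```python
# Hedged sketch of goal-driven action selection: uniform-cost search over
# action sequences using estimated effects and expected-time costs.
import heapq
from itertools import count

ACTIONS = {
    # action -> (estimated effect on state variables, expected time cost in s)
    "reboot_vm":    ({"latency_ms": -15.0}, 30.0),
    "add_balancer": ({"latency_ms": -5.0}, 10.0),
}

def meets(state, desired):
    # Desired state expressed as upper bounds on state variables.
    return all(state[k] <= v for k, v in desired.items())

def plan(start, desired, max_depth=4):
    """Cheapest action sequence whose cumulative effects reach `desired`."""
    tiebreak = count()
    frontier = [(0.0, next(tiebreak), [], start)]
    while frontier:
        cost, _, seq, state = heapq.heappop(frontier)
        if meets(state, desired):
            return seq, cost
        if len(seq) == max_depth:
            continue
        for action, (effect, t) in ACTIONS.items():
            nxt = {k: state[k] + effect.get(k, 0.0) for k in state}
            heapq.heappush(frontier, (cost + t, next(tiebreak),
                                      seq + [action], nxt))
    return None, float("inf")

seq, cost = plan({"latency_ms": 42.0}, {"latency_ms": 25.0})
print(seq, cost)  # a cheapest sequence; total expected time 40.0 s
```

Because the search pops states in order of accumulated cost, the first sequence reaching the desired state is cost optimal, matching the minimum-time example in the text.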
An example from the ONAP (Open Network Automation Platform) policy framework is shown in the table below.

Policy Name     Deployment strategy
Slice-Type-1    At least 4 hosts, scale up on 70% load, scale down on 40% load
Slice-Type-2    At least 3 hosts, scale up on 70% load, scale down on 30% load

Such a rule-based approach relies heavily on a network expert's knowledge of the deployed network. Moreover, the effects of such policies are not expressed nor even contemplated. Thus, such rule-based knowledge is hard to extend to other networks. In particular, for 5G slices, it may be difficult to employ such static policies in order to dynamically ensure that slice SLAs are met. Instead, in the proposed approach, the policy actions are planned in a more elaborate manner based on their effects on the network. The present disclosure has been presented above with reference to specific embodiments. However, other embodiments than the above described are possible and within the scope of the disclosure. Different method steps than those described above, performing the method by hardware or software, may be provided within the scope of the disclosure. Thus, according to an exemplary embodiment, there is provided a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a vehicle control system, the one or more programs comprising instructions for performing the method according to any one of the above-discussed embodiments. Alternatively, according to another exemplary embodiment, a cloud computing system can be configured to perform any of the methods presented herein. The cloud computing system may comprise distributed cloud computing resources that jointly perform the methods presented herein under control of one or more computer program products.
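To make the contrast concrete, the static ONAP-style policy table above can be encoded directly as fixed thresholds. This sketch illustrates the conventional rule-based approach the text criticizes (no effect estimation, thresholds hard-coded by an expert); field names are assumptions.

```python
# The static ONAP-style policy table encoded as fixed thresholds
# (illustrative sketch of the conventional rule-based approach).
STATIC_POLICIES = {
    "Slice-Type-1": {"min_hosts": 4, "scale_up_load": 0.70, "scale_down_load": 0.40},
    "Slice-Type-2": {"min_hosts": 3, "scale_up_load": 0.70, "scale_down_load": 0.30},
}

def static_decision(slice_type, hosts, load):
    """Apply the static rule for a slice type; effects are never considered."""
    p = STATIC_POLICIES[slice_type]
    if hosts < p["min_hosts"] or load > p["scale_up_load"]:
        return "scale_up"
    if load < p["scale_down_load"] and hosts > p["min_hosts"]:
        return "scale_down"
    return "no_op"

print(static_decision("Slice-Type-1", hosts=4, load=0.85))  # scale_up
print(static_decision("Slice-Type-2", hosts=5, load=0.20))  # scale_down
```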
Generally speaking, a computer-accessible medium may include any tangible or non-transitory storage media or memory media such as electronic, magnetic, or optical media—e.g., a disk or CD/DVD-ROM coupled to a computer system via a bus. The terms "tangible" and "non-transitory," as used herein, are intended to describe a computer-readable storage medium (or "memory") excluding propagating electromagnetic signals, but are not intended to otherwise limit the type of physical computer-readable storage device that is encompassed by the phrase computer-readable medium or memory. For instance, the terms "non-transitory computer-readable medium" or "tangible memory" are intended to encompass types of storage devices that do not necessarily store information permanently, including, for example, random access memory (RAM). Program instructions and data stored on a tangible computer-accessible storage medium in non-transitory form may further be transmitted by transmission media or signals such as electrical, electromagnetic, or digital signals, which may be conveyed via a communication medium such as a network and/or a wireless link. It should be noted that the word "comprising" does not exclude the presence of other elements or steps than those listed, and the words "a" or "an" preceding an element do not exclude the presence of a plurality of such elements. It should further be noted that any reference signs do not limit the scope of the claims, that the disclosure may be at least in part implemented by means of both hardware and software, and that several "means" or "units" may be represented by the same item of hardware. Although the figures may show a specific order of method steps, the order of the steps may differ from what is depicted. In addition, two or more steps may be performed concurrently or with partial concurrence. For example, one or more of the modules of the system 40 may be combined and realized as a single module or entity.
In other words, the functions of one or more of the modules may be integrated into other modules. Such variations will depend on the software and hardware systems chosen and on designer choice. All such variations are within the scope of the disclosure. Likewise, software implementations could be accomplished with standard programming techniques with rule-based logic and other logic to accomplish the various connection steps, processing steps, comparison steps and decision steps. The above mentioned and described embodiments are only given as examples and should not be construed as limiting the present disclosure. Other solutions, uses, objectives, and functions within the scope of the disclosure as claimed in the below described patent embodiments should be apparent to the person skilled in the art.
Patent No. 11,943,109
DETAILED DESCRIPTION Referencing FIG. 1, an example system schematically depicts aspects of embodiments of the present disclosure. The example system includes an application 102 (e.g., a vehicle) having a first network 104 and a second network 106 thereon. A network, as utilized herein, should be understood broadly, and may include one or more aspects such as: the hardware implementation (e.g., wires and wiring configurations, applicable standards such as connectors, insulation, shielding, wire requirements such as gauging, twisting, coaxial arrangements, etc.); implementations of any layer (e.g., from the ISO 7-layer model, such as: application layer, presentation layer, session layer, transport layer, network layer, data link layer, and/or physical layer; although a given network may have fewer layers, and/or layers organized in a distinct manner); and/or may be wired or wireless in whole or part. Without limitation to any aspect of the present disclosure, example and non-limiting networks include a Controller Area Network (CAN), a Media Oriented Systems Transport (MOST) network, a Local Interconnect Network (LIN), a FlexRay network, a Time-Triggered Protocol (TTP) network, a Low-Voltage Differential Signaling (LVDS) network, and/or an Ethernet implemented network. In certain embodiments, one or more networks may be an electrical signal zone (e.g., a device providing data and/or receiving commands as an electrical signal, such as a voltage value, a frequency value, an indicated resistance value, or the like), such as a sensor or actuator electrically coupled to an interpreting device that is capable of receiving information from, and/or passing information or commands to, one or more electrical devices on the electrical signal zone. An example system includes the first network 104 being of a different type than the second network 106.
As utilized herein, two networks having different types should be understood broadly, and includes networks having different protocols, at least one layer distinct from each other (e.g., having a distinct application layer, presentation layer, etc.), two networks that are not operationally compatible (e.g., a device coupled to one of the networks will not function on the second network without changes to connections, communications, or other aspects), and/or two networks that are not message compatible (e.g., messages configured for a first one of the networks could not be directly placed on the second one of the networks, due to a distinction such as addressing, frame construction, message logic compatibility, etc.). An example system includes the first network104being an Ethernet implemented network, and the second network106of a different type, such as a CAN network and/or a LIN network. The example system further includes a converged network device (CND)108interposed between the first network104and the second network106, and structured to facilitate communications between the first network104and the second network106. The CND108interposed between the networks104,106includes embodiments wherein the CND108passes communications between the networks104,106, for example receiving a communication from the first network104, translating the communication for the second network106(e.g., encapsulating all or a portion of the communication into a message for the second network106; converting aspects of the communication such as device addresses, bit depths for data, and/or unit values for data; and/or adding or removing aspects of the communication such as priority information, message delivery requests or requirements, industry standard information such as message identifiers, etc.). 
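The translation step described above, where a CND receives a communication from one network type and converts it for another (encapsulating the payload, converting device addresses, and carrying over aspects such as priority), can be sketched as follows. The address mapping, field names, and frame layout are illustrative assumptions, not from the disclosure.

```python
# Hypothetical sketch of a CND translating a CAN-style frame into an
# Ethernet/IP-style message: address conversion plus payload encapsulation.
CAN_ID_TO_IP = {0x101: ("10.0.0.12", 5000)}  # CAN arbitration id -> (host, port)

def translate_can_to_ethernet(can_frame):
    """Encapsulate a CAN frame dict into an Ethernet-network message dict."""
    host, port = CAN_ID_TO_IP[can_frame["can_id"]]
    return {
        "dst": (host, port),
        # Carry over (or add) priority information for the receiving network.
        "priority": can_frame.get("priority", "normal"),
        # Encapsulate the raw payload; a real CND might also convert units
        # or bit depths here, as the text describes.
        "payload": bytes(can_frame["data"]),
    }

msg = translate_can_to_ethernet({"can_id": 0x101, "data": [0x12, 0x34]})
print(msg["dst"], msg["payload"].hex())  # ('10.0.0.12', 5000) 1234
```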
In certain embodiments, the CND108does not physically pass communications, or just passes a portion of the communications, but may regulate, manage, provide permissions, suppress messages, or otherwise control other devices (e.g., switches, routers, gateways, repeaters, or the like) that perform operations to pass communications between the networks. Accordingly, the CND108interposed between the networks104,106may, in certain embodiments, be physically positioned between the networks104,106, where communications passing between the networks104,106are physically received by a component of the CND108. In certain embodiments, the CND108interposed between the networks104,106may have visibility to communications on the networks104,106, and control devices to regulate the passing of messages between the networks. In certain embodiments, the CND108interposed between the networks104,106may have visibility of end points on the networks104,106, and control devices to regulate the passing of messages between the end points of each network104,106. One of skill in the art, having the benefit of the present disclosure, can readily arrange a CND108according to one of these interposition schemes, and/or according to a combination of more than one of these interposition schemes, having information ordinarily available when contemplating a particular system. 
Certain considerations when designing an interposition scheme for a CND 108 for a given system include, without limitation: the number and type of networks on the vehicle; the capabilities of the individual networks (e.g., throughput, bandwidth, address availability, broadcast/unicast/multi-cast availability and desirability of each network and/or end points on a network, requirements and/or availability of acknowledgement for each network and/or end points, and/or requirements and/or availability of encryption for each network and/or end points); the availability, position, and/or control over network implementing controllers (e.g., presence and ownership of switching devices; access to instructions, such as firmware or buffers, for available devices; and/or the connectivity of available devices to the one or more networks, such as whether the devices are arranged to implement desired message passing between networks, desired redundancy, and/or desired failure mode response); capability of network implementing controllers (e.g., buffer sizing and availability, message rate capacity, processing capacity); hardware cost considerations for adding CND-specific components to the system; hardware cost considerations for providing capability for CND operations in other components of the system; integration cost considerations and system capability to implement additional CND-specific components and/or to add capability for CND operations in other components of the system; the number, type, and/or message throughput of end points that utilize cross-network communications; the expected change of any one or more of these aspects over the life of the vehicle (e.g., due to service events, upgrades, and/or campaign events such as product recall events related to the vehicle); and/or the expected change of any one or more of these aspects over a life cycle of a related group of vehicles (e.g., a related fleet of vehicles; model year of vehicles; and/or a group of model
years relevant to the system, such as vehicles expected to have a similar network infrastructure, with variance to the distribution of devices, changes to the network, or the like). In the example of FIG. 1, a first external device 110 is depicted as communicatively coupled to the application 102. The first external device 110 is directly coupled to the application 102, which may include a direct wired connection (e.g., to a service port, OBD port, or other available connection) and/or a wireless connection (e.g., a WiFi connection such as an IEEE 802.11 compatible connection, and/or a Bluetooth connection). The first external device 110 may connect to a specific network (e.g., the first network 104 or the second network 106), and/or may connect to another device (e.g., the CND 108 and/or a device regulated by the CND 108) that manages communications with the external device 110 directly. Whether the external device 110 is coupled to a network 104, 106 or another device such as the CND 108, in certain embodiments the CND 108 is capable of managing communications such that the external device 110 receives only authorized communications, and further of managing communications such that the external device 110 may request communications from an end point on any network 104, 106 and nevertheless receive the requested information. In certain embodiments, the first external device 110 may be a service tool, original equipment manufacturer's (OEM's) tool, a manufacturer's tool, a body builder's tool, and/or an application (e.g., an application communicating through a computing device such as a laptop, desktop, mobile device, and/or mobile phone; e.g., an application operated by an owner, servicer personnel, fleet manager, or the like). In the example of FIG. 1, a second external device 114 is depicted in communication with the application 102 and/or the first external device 110 through a cloud connection 112.
The cloud connection112may be a connection of any type, including a mobile connection (e.g., a modem on the application102connecting using cellular data or another data service), an internet connection, a wide area network (WAN), and/or combinations of these. The cloud connection112may access the application102through a transceiver, which may form a part of the CND108and/or be regulated, at least in part, by the CND108. In certain embodiments, an application102may have more than one transceiver, where one or more, or all, of the transceivers are regulated, at least in part, by the CND108. In certain embodiments, the CND108may regulate certain vehicle communications (e.g., from certain networks, end points, devices, types of data, flows, and/or applications on the vehicle), but not other communications. An end point, as used herein, should be understood broadly. An end point is an organizing concept for access to a network104,106of the vehicle, and may include a specific device (e.g., an engine controller, a transmission controller, a door controller, an infotainment system, etc.), a group of devices having a single network access (e.g., multiple devices communicating together through a single network access point, where the network104,106and/or the CND108may have visibility to the individual devices, or may only have visibility to the communications from the end point as a group). For example, a door controller (not shown) may be an end point for one of the networks104,106, with communications for underlying devices (e.g., door position sensor, door lock actuator and position, window actuator and position, etc.) 
passing to the network104,106through the door controller end point, where the CND108may have visibility to the underlying devices (e.g., a message indicating door position, that includes identifiers that the door position sensor is sending the message), or may have visibility only to the door controller end point (e.g., the message indicating the door position is known to be provided by the door controller, but the CND108does not know which underlying device may have sent the message). One of skill in the art, having the benefit of the present disclosure and information ordinarily available about a contemplated system, can readily determine which devices in the system are end points for each network104,106. Certain considerations for determining end point arrangements include, without limitation: the availability of hardware ports on the network(s); the distribution of vehicle controllers; the messages that are to be passed between vehicle controllers; the regulating options (e.g., message rates, priorities, data collection, message configuration, identity information of components, addressing management between networks and with external devices, etc.) as set forth in the present disclosure that are to be available for a given end point; the desired granularity of data control (e.g., permissions for specific devices to provide or request information; permissions for applications either on-vehicle or off-vehicle to provide or request information; security authorization and type, such as per-user, per-entity, per-device, per-application, per-flow, etc.); and/or redundancy options that are to be available for the given system (e.g., redundancy of network communications capability, redundancy of control operations and related devices, and/or redundancy of CND operations where CND components are distributed in more than one location of the vehicle). An application, as utilized herein, should be understood broadly. 
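The two visibility modes in the door-controller example above (visibility to the underlying device versus visibility only to the end point as a group) can be sketched simply: a message from the end point may or may not carry an identifier for the underlying device that produced it. All names are illustrative assumptions.

```python
# Illustrative sketch of end-point visibility granularity: the CND sees the
# underlying device only when the end point includes its identifier.
def attribute_message(msg):
    """Return the finest-grained source the CND can see for a message."""
    return msg.get("device", msg["end_point"])

with_device = {"end_point": "door_controller",
               "device": "door_position_sensor",
               "signal": "door_position", "value": 0.8}
endpoint_only = {"end_point": "door_controller",
                 "signal": "door_position", "value": 0.8}

print(attribute_message(with_device))    # door_position_sensor
print(attribute_message(endpoint_only))  # door_controller
```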
An example application includes a group of related vehicle functions or operations, for example speed control (e.g., of the vehicle, or a sub-component of the vehicle such as an engine or a driveline), anti-lock brake system (ABS) operations, an advanced driver-assistance system (ADAS), performance control (e.g., achieving a torque request, speed request, or other performance request from an operator), or other function of the vehicle. An example application includes a group of related functions apart from the vehicle, such as an application to support geolocation and/or navigation, to request and/or process service information about the vehicle, and/or a third-party application interacting with the operator (e.g., to find a nearest hotel, selected event, etc.). Applications may be implemented by the vehicle manufacturer, a supplier, an original equipment manufacturer, a body builder, a third party, the operator, service personnel, or the like. Applications, as used herein, provide an organizing concept that may be utilized to relate certain data, certain end points, and/or related functions of the vehicle. In certain embodiments, the CND108can utilize an application to identify a data source, a data destination, permissions available for the application, priority information related to the application, or the like, to implement certain data regulating operations herein. A flow, as utilized herein, should be understood broadly. An example flow includes a related group of data (e.g., speed data, temperature data, audio-visual data, navigation data, etc.), a related group of functions (e.g., among vehicle functions, extra-vehicle functions such as service operations and/or data collection, aggregations between related vehicles, and/or combinations of these that are related for a particular system), a related group of devices (e.g., door actuators), and/or a related group of applications. 
Flows, as used herein, provide an organizing concept that may be utilized to relate certain data, certain end points, certain applications, and/or related functions of the vehicle or apart from the vehicle. In certain embodiments, the CND 108 can utilize a flow to identify a data source, a data destination, permissions available for the flow, priority information related to the flow, or the like, to implement certain data regulating operations herein. In certain embodiments, the utilization of the flow allows the CND 108 to perform separate operations that may involve the same end points to support the desired network management. For example, a vehicle speed management application may have a high priority, and a speedometer end point may be associated with the vehicle speed management application. In the example, if the vehicle speed is being communicated to support the vehicle speed management application, then the CND 108 applies a high priority to the vehicle speed message. However, if the vehicle speed is being communicated to support a trip planning flow (e.g., where a trip planning flow is present and does not have a high priority), the CND 108 may apply a lower priority to the vehicle speed message. In a further example, a failure of a vehicle controller, portion of a network, or other off-nominal condition may result in the migration of the vehicle speed management application to another controller in the system, whereby the vehicle speed message is being communicated (e.g., where the backup controller is on another network) to support the vehicle speed management application, and the CND 108 may apply a higher priority to the vehicle speed message.
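The flow-differentiated prioritization just described, where the same vehicle-speed signal receives a different priority depending on which flow or application the communication supports, can be sketched as a lookup keyed by the supporting flow. The priority values and flow names are illustrative assumptions.

```python
# Sketch of flow-differentiated prioritization: the same signal gets a
# different priority depending on the flow the communication supports.
FLOW_PRIORITY = {
    "vehicle_speed_management": 7,  # high: supports a high-priority application
    "trip_planning": 2,             # low: convenience flow
}

def message_priority(signal, flow):
    """Priority the CND applies to a message, keyed by the supporting flow."""
    # The signal itself does not fix the priority; the flow does.
    return FLOW_PRIORITY.get(flow, 1)

print(message_priority("vehicle_speed", "vehicle_speed_management"))  # 7
print(message_priority("vehicle_speed", "trip_planning"))             # 2
```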
The utilization of flows and applications to organize the components of the system allows for the same or similar information to be regulated by the CND108in a differential manner to support various functions, allowing for improvements in the performance and security of network regulation operations (e.g., reducing unnecessary cross-network traffic, providing information only as needed, and/or regulating communications with external devices), and supports additional functionality relative to previously known systems, such as redundancy support, distributed control, and granular cross-network messaging. A service group, as utilized herein, should be understood broadly. An example service group includes a related group of applications for the vehicle. The related group of applications may be entirely positioned on the vehicle (e.g., one or more vehicle systems, functions, or other applications of the vehicle), and/or may include aspects that are positioned on external devices (e.g., with supporting processing, data collection or storage, externally sourced data used by the service group, etc.) which may be a web application, web tool, cloud application, service application, or the like. In certain embodiments, any group of local communicating devices may be logically related as a service group. The utilization of service groups to organize the components and/or applications of the system allows for the same or similar information to be regulated by the CND108in a differential manner to support various functions, allowing for improvements in the performance and security of network regulation operations (e.g., reducing unnecessary cross-network traffic, providing information only as needed, and/or regulating communications with external devices), and supports additional functionality relative to previously known systems, such as redundancy support, distributed control, and granular cross-network messaging. 
Regulated components, as utilized herein, and without limitation to any other aspect of the present disclosure, include any components of a system that are regulated with respect to communications, including data collection, subscriptions, data requests, access to external devices and/or addresses, access to network zones, access to end points, utilization of communication resources (e.g., network zone bandwidth, external communication portals, total data limits or quantities, etc.). Regulated components include, without limitation, one or more of: end points, flows, applications, controllers, service groups, interface circuits, network zones, external communication portals, external devices, source addresses, destination addresses, vehicle functions, entities associated with any of these, users associated with any of these, and/or user roles associated with any of these. Example operations to regulate communications between end points of network zones, and/or regulating communications with external communication portal(s) and/or external devices include, without limitation, operations such as those described following. Operations to regulate may be performed for end points, for associated groups of end points, and/or for network zones. Associated groups of end points may be associated according to flows, applications, service groups, controllers, vehicle functions, source addresses for communications, and/or destination addresses for communications. In certain embodiments, applications, service groups, and/or flows may be provided with an identifier as an implementation to associate related components such as end points. Operations to regulate may be performed by, without limitation, the CND, a network gateway, a network interface circuit, and/or a gateway interface circuit. 
Regulating operations are described in the context of certain example regulating devices throughout the present disclosure, but embodiments may be configured to have other devices perform the regulating. Example communication and/or regulating operations include:providing a communication between a first end point and a second end point (in either direction), including configuring the communication (e.g., protocols, message information, metadata, parameter units, etc.) for the receiving network zone and/or end point device;encapsulating a message from the first network zone and providing the encapsulated message to the second network zone;determining if a requesting device (and/or associated flow) on one of the network zones has permission to request a communication from a device on the other one of the network zones, and providing the communication in response to the permission determination;adjusting at least one of a data rate, requested resolution, and/or requested response time of a communication between devices of the network zones based on a permission determination for a requesting device, a communication performance of a requesting and/or a providing device, and/or a network performance parameter (e.g., current available bandwidth, absolute or current network capability, network utilization, etc.) 
of one or both network zones, and/or a priority value associated with a requesting device (and/or associated flow) for a communication;performing an up-sampling and/or down-sampling operation on the communicated data between the network zones;mirroring communications from a first end point to a port of the second network zone, including encapsulating, configuring, processing, and/or up-sampling or down-sampling the mirrored communications;providing a communication from a first end point to a device coupled to the second network zone, such as a diagnostic device, OBD device, service tool, manufacturing tool, OEM tool, and/or network monitoring device, and/or where providing the communication includes encapsulating, configuring, processing, and/or up-sampling or down-sampling the provided communications, and/or where the provided communications may be unicast, multi-cast, and/or provided as a subscription service;providing a communication from a second end point device to a device coupled to either the first network zone or the second network zone, such as a diagnostic device, OBD device, service tool, manufacturing tool, OEM tool, and/or network monitoring device, and/or where providing the communication includes encapsulating, configuring, processing, and/or up-sampling or down-sampling the provided communications, and/or where the provided communications may be unicast, multi-cast, and/or provided as a subscription service;providing a communication from a device coupled to the second network zone1908, such as a diagnostic device, OBD device, service tool, manufacturing tool, OEM tool, and/or network monitoring device, to a first end point, and/or where providing the communication includes encapsulating, configuring, processing, and/or up-sampling or down-sampling the provided communications, and/or where the provided communications may be unicast, multi-cast, and/or provided as a subscription service;further providing the communication as a command value, for 
example where the first end point executes operations relating to the mission of the mobile application in response to the command value (e.g., setting a set point, target value, or threshold in response to the command value);
providing a communication from a device coupled to the second network zone, such as a diagnostic device, OBD device, service tool, manufacturing tool, OEM tool, and/or network monitoring device, to a first end point, and/or where providing the communication includes encapsulating, configuring, processing, and/or up-sampling or down-sampling the provided communications, and/or where the provided communications may be unicast, multi-cast, and/or provided as a subscription service;
further providing the communication as a test execution value, for example where the first end point executes operations relating to an active test execution operation of the mobile application in response to the test execution value (e.g., performing certain operations for a service test, active diagnostic operation, or the like);
providing a communication from a first end point to a number of second end point devices, where the provided communications are configured to meet a super-set of the requirements of the second end point devices (e.g., data rates, resolution, units, etc.), and where the provided communications may be unicast, multi-cast, and/or provided as a subscription service;
parsing a communication value from a first device (e.g., a first end point, second end point, and/or device coupled to a network zone, such as a diagnostic device, OBD device, service tool, manufacturing tool, OEM tool, and/or network monitoring device), determining a target device (e.g., communication recipient and/or communication provider responsive to the communication value) in response to the parsed communication value, and configuring communications of the target communication recipient and/or communication provider in response to the parsed communication value.
For example, the communication value may include a generic and/or normalized component identifier (e.g., turbine temperature, front passenger door actuator, etc.), and the CND determines the respective end point(s) corresponding to the component identifier according to the current configuration of the mobile application, and may further determine communication routing, encapsulation, processing, and the like to translate between the first device and the target device(s). For example, such operations allow for the configuration and placement of devices on network zones to be changed, while not requiring that devices, service personnel, or other requestors keep track of the specific configuration and placement of devices;
additionally or alternatively, such operations include the CND storing configuration information in response to a configuration change (e.g., replacement or moving of a device from one network zone to another, changes to the communication parameters or capabilities of the device, etc.), and/or performing run-time determinations to confirm the location, identity, configuration, communication parameters and/or capabilities of devices, which may be utilized during run-time operations and/or stored for later utilization and/or as a default configuration subject to further updates;
performing any one or more of these operations on a group or sub-group of devices, for example where devices are consolidated in relation to a single end point, but may be treated as separate devices by other end points or devices in communication with a network zone (e.g., a diagnostic device, OBD device, service tool, manufacturing tool, OEM tool, and/or network monitoring device).
For example, such operations allow for multiple configurations, updates, and/or upgrades of the mobile application where a first configuration has two (or more) devices with separate end points, and a second configuration has the two (or more) devices utilizing a single end point (and/or the two devices consolidated into a single device). Example and non-limiting embodiments include consolidation of multiple sensors communicating to a network zone through a single interface (e.g., a smart sensor having network communication capability, a multi-plexed signal, etc.), and/or replacing an interface of multiple components behind a single network interface (e.g., a single communicating device, such as an edge gateway or a configurable edge gateway, that interfaces to a single network zone as a single end point and manages communications for related devices). In a further example, such operations allow for devices to communicate across network zones without regard to changes in the configuration, to support upgrades and updates that relate to device relationships with end points, and to support backwards compatibility (e.g., a later configuration, a later control distribution among devices, and the like, where operations of the CND allow an earlier system having a distinct configuration to support the updated configuration and/or control distribution among devices);
additionally or alternatively, such operations include the CND storing configuration information in response to a configuration change (e.g., intervention of a single end point between more than one device and a network zone, consolidation of devices, etc.), and/or performing run-time determinations to confirm the location, identity, configuration, communication parameters and/or capabilities of devices, and/or consolidation status of devices, which may be utilized during run-time operations and/or stored for later utilization and/or as a default configuration subject to further updates;
performing any one or
more of these operations on a group or sub-group of devices, for example where devices are distributed between more than one end point but may be treated as a single device by other end points or devices in communication with a network zone (e.g., a diagnostic device, OBD device, service tool, manufacturing tool, OEM tool, and/or network monitoring device). For example, such operations allow for multiple configurations, updates, and/or upgrades of the mobile application where a first configuration includes a device with a single end point, and a second configuration has the device (or portions thereof) utilizing more than one end point (and/or a previously consolidated device made up of two or more separate devices in the second configuration). Example and non-limiting embodiments include separation of a group of sensors communicating to a network zone through a single end point (e.g., a smart sensor having network communication capability, a multi-plexed signal, etc.) into one or more sensors each having a separate end point (and/or sub-groups of the multiple sensors each having a separate end point).
In a further example, such operations allow for devices to communicate across network zones without regard to changes in the configuration, to support upgrades and updates that relate to device relationships with end points, and to support backwards compatibility (e.g., a later configuration, control distribution among devices, and the like, where operations of the CND allow an earlier system having a distinct configuration to support the later configuration);
additionally or alternatively, such operations include the CND storing configuration information in response to a configuration change (e.g., division of devices behind a single end point on a single network zone into more than one end point and/or across more than one network zone), and/or performing run-time determinations to confirm the location, identity, configuration, communication parameters and/or capabilities of devices, and/or consolidation status of devices, which may be utilized during run-time operations and/or stored for later utilization and/or as a default configuration subject to further updates;
implementation of a service oriented architecture, wherein the CND determines available services (e.g., data parameters available for communications, command values available for execution, and/or configurations of these such as rate information, units, resolution, precision, accuracy, availability descriptions, dependent data and/or operating conditions, etc.), publishes the available services, and/or determines subscribing clients (e.g., devices, flows, and/or end points) for the available services;
additionally or alternatively, such operations include the CND determining permissions and/or authorization for publishing available services, for seeing available services (and/or portions of the available services), and/or subscribing to available services;
additionally or alternatively, such operations include the CND determining subscribing entities as an end point, a device, a flow, and/or an external
device such as a diagnostic device, OBD device, service tool, manufacturing tool, OEM tool, and/or network monitoring device;
additionally or alternatively, such operations include the CND determining a priority of service oriented communications, which may be dependent upon the publishing device, end point, or related flow, and/or dependent upon the subscribing device, end point, or related flow;
additionally or alternatively, such operations include the CND adjusting the service oriented architecture operations in response to operating conditions (e.g., mobile application operating conditions, network status of one or more affected network zones, communication status of one or more external devices, etc.);
additionally or alternatively, such operations include the CND accessing stored information setting forth available services, publication parameters (permissions, priority, related operating conditions, etc.), and/or subscribing entity information;
additionally or alternatively, such operations include the CND updating stored information in response to one or more of: a received update, such as a policy description, a service configuration description, etc.; run-time updates from end-points, devices, and/or flows, for example, and without limitation, executed during start-up or shut-down operations of the mobile application;
additionally or alternatively, such operations include the CND implementing a service oriented architecture based on run-time operations, with or without storing the information and/or updating the stored information; and/or
additionally or alternatively, allowing updates to the stored information, run-time updates to the stored information, and/or run-time operations implementing the service oriented architecture, in response to a priority and/or a permission associated with the device, end point, and/or flow requesting the update and/or run-time implementation;
additionally or alternatively, operations of an example CND include adjusting
operations of any one or more of the foregoing in response to operating conditions of the mobile application (e.g., adjusting communication operations during certain operations, such as: high power operation; high transient operation; shut-down operation; start-up operation; a selected operating mode such as vocational operation, power take-off (PTO) operation, charging operation, cruise control operation, autonomous vehicle operation, etc.). Adjustments to communication may be qualitative (e.g., allowing or disallowing certain communication types, certain communication priority thresholds, etc., during certain operating conditions; and/or capturing certain data values during certain operating conditions as a data capturing event), quantitative (e.g., controlling a rate of communications, a network zone utilization, external device communication rates, etc.), or a combination of these (e.g., controlling a rate of communications for certain communication types, etc.), and may include increasing or decreasing capability of communications according to the operating condition and/or the communication type (e.g., providing for decreased device communication capability during shut-down operations, but increasing external device communication capability during the shut-down operations; increasing device communication capability for certain devices or flows, but reducing device communication capability for other devices or flows during start-up operations, etc.);
additionally or alternatively, operations of an example CND include adjusting operations of any one or more of the foregoing in response to off-nominal operating conditions relating to the mobile application, where the off-nominal operating conditions include conditions such as: degradation of a network zone (e.g., loss of throughput, loss of communication with one or more end points of a network zone, injection or presence of noise onto a network zone, injection of traffic onto a network zone, a physical
failure of at least a portion of the network zone, etc.); a fault condition of one or more devices (e.g., where the CND adjusts a data source related to the faulted device, adjusts a data rate related to the faulted device, implements a back-up data source for the faulted device, re-routes data to a back-up data recipient for data provided to the faulted device, implements an event driven data collection scheme where the fault of the device is an event, etc.); a lost control function of a vehicle controller (e.g., where the lost control function indicates that the vehicle controller is lacking a data value to perform its mission; where the lost control function indicates that the vehicle controller has lost communication with the associated network zone; and/or where the lost control function is an indication, by the vehicle controller or another controller in the system, that the vehicle controller is not able to perform its mission or a part of its mission). Further example operations of the CND, in response to the off-nominal conditions, include one or more of:
providing a data value to a vehicle controller from an alternate source (e.g., from a different end point, network zone, etc., and which may include encapsulating, configuring, processing, and/or up-sampling or down-sampling the alternate source communications, which may result in communications that are identical to the original data value that was lost, or alternative communications that may be sufficient as a backup data value for the vehicle controller);
providing a data value to a second vehicle controller to replace all or a portion of the lost control function of the vehicle controller, for example where a second vehicle controller is configured to act as a backup for the vehicle controller, where the second vehicle controller may be fully capable to perform the lost control function and/or may be capable to perform alternate operations (e.g., with more limited capability) in place of the lost
control function; the data value provided to the second vehicle controller may be a same data value as provided to the vehicle controller, an alternate source communication (e.g., having a distinct data rate, resolution, units, precision, etc.), or another data value altogether (e.g., where the second vehicle controller utilizes a distinct data set to perform the fully capable or alternate operations). Additionally or alternatively, the CND is capable to provide data from any network zone to the vehicle controller and/or to the second vehicle controller, which may themselves be on any network zone;
suppressing communication of one or more data values in response to the off-nominal condition, for example where a fault condition, device or end point loss, or the like indicates that the one or more data values are not being utilized; where the one or more data values are low priority in view of the off-nominal condition; and/or where the one or more data values are indicated as invalid in view of the off-nominal condition (e.g., sensor values from a sensor having a fault or failed condition);
shifting of communications from a first network zone (e.g., a degraded network zone) to a second network zone, such as when end points and/or devices are reachable through more than one network zone (e.g., where the zones are logically separated but physically coupled, where more than one physical route is available between relevant end points (e.g., referenceFIG.15), and/or where a second vehicle controller and/or a second end point coupled to the second network zone is capable to perform the operations (or a portion thereof, and/or an alternate thereof) of a first vehicle controller and/or first end point coupled to the first network zone);
repeating communications from a first network zone (e.g., a degraded network zone) on a second network zone;
shifting an end point from a first network zone (e.g., a degraded network zone) to a second network zone, for example where the shifted
end point is physically coupled, or couplable, to both the first network zone and the second network zone (e.g., where the separation between the network zones is a logical separation, and/or where the end point is reachable through more than one network zone, such as depicted inFIG.15), where operations of the CND include adjusting an addressing, protocol, encapsulation operations, and/or any other operations to effect the shift of the end point, which may further include updating the location of the shifted end point with other devices/end points in the system, or translating communications with other devices/end points in the system without notification of the shift;
combinations of these, such as shifting an end point from a first network zone to a second network zone, and shifting related communications to the second network zone and/or repeating related communications on the second network zone;
regulating communications between end points of a first network zone (and/or one or more additional network zones) and an external device (e.g., a diagnostic device, OBD device, service tool, manufacturing tool, OEM tool, network monitoring device, operator device, cloud computing device, and/or a third party application), where the regulating between end points of the first network zone and the external device(s) includes any one or more of the foregoing operations, and/or may further include: limiting communications according to off-nominal conditions of a component (e.g., an end point, device, flow, network zone, etc.)
of the system; limiting communications according to an operating condition of the mobile application; limiting communications according to a permission and/or priority of the end point(s), associated flows, and/or the external device; limiting communications according to an aggregated data value (e.g., corresponding to an associated data service provider for the communication; corresponding to a group of end points; corresponding to an associated flow; and/or corresponding to an entity related to any one or more of these), which may be aggregated according to time (e.g., daily, weekly, monthly, etc.), operating condition (e.g., trip, event, etc.), and/or where the data value includes one or more of a total data sent/received value, a data rate value, and/or combinations of these; and/or limiting communications according to an external data access type (e.g., cellular, WiFi, Bluetooth, hardware/port plug-in, etc.); and/or
combinations of any one or more of the foregoing.
ReferencingFIG.2, an example system includes a vehicle202having a first network104, a second network106, and a CND108interposed between the networks104,106. The example system depicts the vehicle202communicatively coupled to an external device110, similar to the depiction ofFIG.1, and/or communicatively coupled to a second external device114. The example ofFIG.2depicts another external device204communicatively coupled to the vehicle202, through the cloud connection112in the example. The third external device204is depicted schematically as a laptop, for example as operated by a fleet service manager, owner, and/or vehicle representative (e.g., a warranty administrator). The example ofFIG.2is an illustrative depiction to show additional context options and a specific application as a vehicle, but is otherwise similar to the system ofFIG.1.
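As one concrete illustration of the rate-adjustment and down-sampling operations enumerated above, the following is a minimal sketch (all names and the rate-limiting strategy are illustrative assumptions, not the disclosed implementation) of a regulating device reducing a parameter stream for a subscriber whose permission allows only a lower data rate:

```python
# Sketch: a regulating device down-samples a parameter stream so that a
# subscriber receives no more than its permitted data rate. The function
# names and the keep-every-nth-sample strategy are assumptions for
# illustration only.

def down_sample(samples, source_rate_hz, permitted_rate_hz):
    """Keep every n-th sample so the output rate does not exceed the permitted rate."""
    if permitted_rate_hz >= source_rate_hz:
        return list(samples)  # subscriber may receive the full-rate stream
    step = max(1, round(source_rate_hz / permitted_rate_hz))
    return list(samples)[::step]

# A 100 Hz stream delivered to a subscriber limited to 10 Hz keeps every
# 10th sample.
stream = list(range(100))
reduced = down_sample(stream, source_rate_hz=100, permitted_rate_hz=10)
```

An up-sampling operation would instead interpolate or repeat samples for a recipient expecting a higher rate than the source provides.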
ReferencingFIG.3, an example embodiment including a vehicle202is schematically depicted, illustrating certain further details that may be present in certain embodiments. The example system includes the vehicle202having a first network104and a second network, and a CND108interposed between the first network104and the second network. In the example ofFIG.3, the second network is an Ethernet network with devices (e.g., an interactive dashboard302, a door actuator310, and a transmission controller320) coupled to an Ethernet switch312. In the example ofFIG.3, a third network318is shown, with a fuel tank sensor306coupled to the CND108. In the example, the third network318may be of the same type as one of the other networks, for example segregated from the other networks to reduce installation cost, to manage risk, or for other considerations, and/or the third network may be of a different type to support devices—for example a sensor operating on a LIN network. The third network318may communicate with the CEG314, the Ethernet switch312, or another device (not shown) of the CND108. The example ofFIG.3includes a first device308on the first network104(e.g., a controller for a prime mover, in the example ofFIG.3), and a number of devices (e.g., an interactive dashboard302, a fuel tank sensor306, and a door actuator310, in the example ofFIG.3) on the second network. The system includes one of the devices302,310,320on the second network communicating to the first device308via the CND108. For example, the door actuator310may lock the door when the vehicle202moves, pulling the vehicle movement information (e.g., engine speed, gear position, vehicle speed, and/or a state parameter such as a “VEHICLE MOVING” Boolean value, bit mask, or the like) from the first device308. The arrangement ofFIG.3is a non-limiting example.
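The door-lock example above can be sketched as follows. The data structures and function names are hypothetical; the disclosure does not prescribe an implementation, only that the regulating device retrieves first-network information on behalf of a second-network device:

```python
# Hypothetical sketch of the FIG. 3 door-lock example: a door actuator on
# the second (Ethernet) network requests vehicle-movement information that
# originates with the first device on the first (CAN) network, and the
# regulating device retrieves and forwards it. All names are illustrative.

FIRST_NETWORK = {"VEHICLE_MOVING": True}  # stand-in for first-network messages

def cnd_handle_request(parameter):
    """Retrieve a first-network parameter and deliver it on the second network."""
    if parameter not in FIRST_NETWORK:
        return None  # parameter not available on the first network
    return {"parameter": parameter, "value": FIRST_NETWORK[parameter]}

def door_actuator_logic():
    """Lock the door when the retrieved movement indicator is set."""
    msg = cnd_handle_request("VEHICLE_MOVING")
    return "LOCK" if msg and msg["value"] else "IDLE"
```

In practice the retrieval side would involve reading a CAN message and the delivery side an Ethernet frame, with the translation handled by the regulating device as described below.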
Additionally or alternatively, a given device (e.g., the prime mover308) may appear as a single end point or as multiple end points, for example the controller of the prime mover308may provide numerous parameters to the first network104, which may each be provided with an identifier and operate as separate end points (e.g., engine temperature from an engine temperature sensor), and/or may include parameters provided by the prime mover308controller as such (e.g., engine temperature from the engine controller). To illustrate an example ofFIG.3, the first network104may be a CAN bus network, where the desired data (e.g., a vehicle movement indicator) is provided according to considerations for the CAN network, and as a CAN message. The door actuator310is provided on the second network, for example an Ethernet network where the door actuator310is on a port of the second network. The port for the door actuator310may be a physical port (e.g., a port of an Ethernet switch312dedicated for the door actuator310) or a virtual port (e.g., an address location for the second network, which may be on a shared physical port with one or more other devices). In the example ofFIG.3, the door actuator310cannot receive the CAN message indicating vehicle movement, and the CND108interprets a request from the door actuator310for the vehicle movement indication, retrieves the message from the first network104, and sends the message to the door actuator310over the second network. The operations performed to send the message may vary with the application. 
For example, the CND108may publish to devices on the second network that certain parameters are available from the first network104(and/or third network318), and provide selected parameters to devices directly (e.g., providing the vehicle movement indicator to requesting devices), or publish data values representing parameters that are available to subscribing devices for those parameters (e.g., utilizing a broker—not shown—to make subscribed parameters available). In certain embodiments, the CND108may limit publication of parameters available to devices, end points, applications, and/or flows that are authorized to see that those parameters are available. Stated differently, different devices on the second network may see a different list of parameters available, depending upon the authorization of those devices and/or applications or flows associated with those devices. In certain embodiments, the CND108may limit provision of the parameters to devices, end points, applications, and/or flows that are authorized to receive those parameters—for example by denying a subscription request for a parameter and/or suppressing the sending of a parameter to an unauthorized device despite the subscription. Accordingly, in certain embodiments, a device may be able to see that a parameter is available (e.g., in a published list of available parameters), but be unable to receive data values of the parameter. In certain embodiments, a device may be limited to seeing available parameters that the device is authorized to receive.
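The authorization-filtered publication and delivery behavior described above can be sketched as follows. The parameter names and permission table are illustrative assumptions:

```python
# Sketch: different devices see different published parameter lists, and
# data values are delivered only to authorized subscribers. Parameter and
# device names are assumptions for illustration.

AVAILABLE = {"vehicle_speed", "engine_temp", "vehicle_position"}
PERMISSIONS = {
    "door_actuator": {"vehicle_speed"},
    "diagnostic_tool": {"vehicle_speed", "engine_temp"},
}

def published_list(device):
    """Each device sees only the available parameters it is authorized to see."""
    return sorted(AVAILABLE & PERMISSIONS.get(device, set()))

def deliver(device, parameter, value):
    """Suppress delivery to an unauthorized device despite a subscription."""
    if parameter not in PERMISSIONS.get(device, set()):
        return None  # subscription denied or delivery suppressed
    return value
```

A variant, also contemplated above, would publish the full list while still suppressing delivery, so that a device can see a parameter exists but cannot receive its values.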
In certain embodiments, a device may have only limited availability to receive a parameter, for example the CND108may limit the rate of a data value to support reduced network utilization, data security considerations (e.g., limiting the accuracy, resolution, and/or data rate of sensitive parameters such as vehicle position), and/or to support proprietary considerations (e.g., limiting the accuracy, resolution, and/or data rate of parameters that may relate to a proprietary control operation, for example to limit the ability for an application to reverse engineer or otherwise determine how the control operation functions). In certain embodiments, the CND108determines which parameters to publish, to provide, and the conditions to provide them, based upon stored data defining permissions and/or capabilities of devices, end points, applications, flows, and the like. In certain embodiments, the CND108further accesses stored data defining processing or adjustment operations to the data, for example encapsulation operations (e.g., to pass CAN messages to an Ethernet network), unit conversions, time stamp definitions, and the like. In certain embodiments, the CND108determines the authorization for applications and/or flows that are on vehicle, off vehicle (e.g., operating on an external device such as110,114,204), or combined on and off vehicle. In certain embodiments, the CND108may support prioritization of data flow, including the rate at which devices provide information or receive information, based upon a prioritization of the related device, end point, application, flow, or other parameter. In certain embodiments, the CND108may support differential prioritization based upon the vehicle status or operating condition, for example using a first priority scheme during startup operations, a second priority scheme during run-time operations, a third priority scheme when the vehicle is moving, etc.
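The condition-dependent priority schemes described above can be sketched as a lookup keyed by operating condition. The scheme contents and flow names are hypothetical:

```python
# Sketch: the regulating device selects a different priority ordering for
# flows depending on the vehicle status or operating condition. Scheme
# contents are assumptions for illustration (lower number = higher priority).

PRIORITY_SCHEMES = {
    "startup": {"engine_temp": 1, "door_lock": 3},
    "moving":  {"door_lock": 1, "engine_temp": 2},
}

def priority(condition, flow, default=10):
    """Return the flow's priority under the active scheme, or a default."""
    scheme = PRIORITY_SCHEMES.get(condition, {})
    return scheme.get(flow, default)
```

For example, the door-lock flow is high priority while the vehicle is moving but low priority during startup, matching the differential prioritization described in the text.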
In certain embodiments, the CND108may be responsive to any defined vehicle condition, such as charging, regenerating, aftertreatment operations, control regimes (e.g., cruise versus operator control), emergency conditions, fault conditions, a service condition, or the like. The example CND108ofFIG.3includes a first device314that communicates with the first network104. An example first device314includes a configurable edge gateway (CEG) that reads communications from the first network104, and provides them to the second network106. In certain embodiments, the first device314translates the communications for the second network, for example encapsulating the communication, a portion of the frame of the communication, and/or a payload of the communication, into a message for the second network. In certain embodiments, the first device314is capable to request communications from devices on the first network104, for example requesting a parameter that is available but is not currently being communicated onto the first network104. In certain embodiments, the first device314is not a part of the CND108, but is controlled by the CND108, for example by responding to commands from the CND108, accessing stored data that is written, in whole or part, by the CND108, or through other operations as provided throughout the present disclosure. The example CND108ofFIG.3includes a second device312that communicates with the second network. An example second device312includes an Ethernet switch, which may be configurable, that reads communications from the second network. In certain embodiments, the second device312receives messages from the first network104through the first device314, for example receiving messages in a format that is communicable on the second network. An example first device314includes a CEG that communicates to the Ethernet switch through a port on the Ethernet switch that is provided for messages from the first device314.
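The encapsulation step described above, in which the edge gateway wraps a first-network communication (or its payload) into a message for the second network, can be sketched as follows. The header layout is an assumption for demonstration, not a disclosed format:

```python
# Sketch: wrap a CAN-style frame (identifier + payload) in a simple header
# so it can travel as the payload of an Ethernet message, and unwrap it on
# the far side. The 5-byte header layout (32-bit id + 1-byte length) is an
# illustrative assumption only.

import struct

def encapsulate_can_frame(can_id, data):
    """Pack the CAN identifier and payload length, then append the payload."""
    return struct.pack(">IB", can_id, len(data)) + data

def decapsulate(message):
    """Recover the CAN identifier and payload from an encapsulated message."""
    can_id, length = struct.unpack(">IB", message[:5])
    return can_id, message[5:5 + length]

wrapped = encapsulate_can_frame(0x18FEF100, b"\x55\xAA")
```

The same pattern extends to encapsulating a portion of the frame, or the entire frame, as the text contemplates.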
Accordingly,FIG.3provides an illustration of a second device312on a second network that communicates with the first device308via the CND108. An example system includes an external device110,114,204that communicates with the CND108. In the example ofFIG.3, the external device110,114,204may communicate through a transceiver304, and/or via direct access to a network of the vehicle202(e.g., using a service port, OBD port, WiFi, Bluetooth, etc.). The external device is structured to adjust a configuration of the CND108—for example by changing the stored data that provides for published available data, associated permissions, defined applications, defined flows, defined end points, defined devices, and the like. In certain embodiments, the external device has an associated permission value, and the CND108permits changes according to the associated permission value, for example blocking adjustments to changes associated with certain networks, devices, end points, applications, flows, or the like. An example system includes the first network as a bus network, which may further be a CAN bus network. An example system includes the second network as an Ethernet network, which may have any selected topology such as a data bus architecture. In certain embodiments, the Ethernet network may have a data bus architecture as a hardware topology, but operate in a distinct manner logically (e.g., as a switched network). ReferencingFIG.4, an example system includes a CND108having a first network gateway device404and a second network gateway device402. In the example ofFIG.4, the first network gateway device404is a CEG that accesses one or more CAN based networks406, each having one or more end points408—for example devices coupled to the CAN network406that provide communications to, and/or receive communications from, the respective CAN network406.
The example ofFIG.4depicts two CAN networks406, which may be arranged for convenience of integration (e.g., to divide components of the vehicle logically by function, by position in the vehicle, and/or any other arrangement such as a related group of components communicating on a common CAN network406). In the example, the first network gateway device404communicates with both CAN networks406, although the CND108may include, and/or may be configured to regulate, more than one CEG, for example having one CEG accessing each CAN network406, and/or each CEG accessing a subset of the CAN networks406on the vehicle. The example ofFIG.4depicts bus networks406, and the networks406are described as CAN networks for purposes of illustration, but the networks406may be of any type as described throughout the present disclosure. The end points408may be any type of end point capable to communicate with the network406, such as a controller, smart sensor or actuator, or other device capable to provide communications to the network406, and/or receive communications from the network406. The example ofFIG.4describes the CND108as including the network gateway devices402,404, but the CND108may be separate from one or more of the network gateway devices402,404, and may configure operations of the network gateway devices402,404, for example by adjusting stored data thereon, adjusting stored data accessible to the devices402,404, providing commands thereto, and/or performing any other operations as set forth throughout the present disclosure. In the example ofFIG.4, the second network gateway device402is an Ethernet switch that accesses an Ethernet based network410, depicted schematically as a number of end points412communicating with a number of ports414of the Ethernet switch402. The ports414are depicted schematically, and may be logical ports, hardware ports, or combinations of these. 
The physical topology of the Ethernet network410may be a bus arrangement, a hub arrangement, a star arrangement, or any other type of network topology, which may be distinct from the logical topology of the Ethernet network410. The second network gateway device402is depicted as having a network interface416, which may include the physical port connection(s). In certain embodiments, the second network gateway device402is a configurable Ethernet switch, which may include a processor, computer readable storage (e.g., to store instructions, configuration information, buffering for data communication and/or collection operations, and the like). These aspects are not shown for clarity of the depiction and the present description, but they may be present on the second network gateway device402, within a same housing as the second network gateway device402, on a separate board (e.g., mounted on a separate printed circuit board) from the network interface416and/or from the remainder of the second network gateway device402, positioned on another device in the system and in communication with the second network gateway device402(e.g., on the first network gateway device404, on a vehicle controller, and/or on another controller in the system), and/or distributed across a combination of these locations. In the example ofFIG.4, the first network gateway device404includes one or more network interface(s)418(and/or network interface circuit) that communicatively couple the first network gateway device404to the network(s)406, and a translation circuit420that configures messages from the Ethernet network410for communication to the network(s)406, and/or that configures messages from the network(s)406for communication to the Ethernet network410.
Additionally or alternatively, the translation circuit420configures messages for passage from one of the network(s)406to another one of the network(s)406—for example where the networks406are of different types, utilize different protocols, would otherwise have conflicting source or destination information, and/or otherwise have distinct characteristics that are managed by the first network gateway device404to ensure message compatibility, successful mission operation of the vehicle, and/or to implement any other configuration operations as set forth in the present disclosure. The translation circuit420is depicted schematically as a single device, but may be implemented as one or more devices, for example with a number of translation circuit420components each implementing a type of configuration, interacting with a type of network406, to distribute processing and/or memory operations of the translation circuit420, or for any other reason according to the particular system. In the example ofFIG.4, the first network gateway device404provides messages to the Ethernet switch in response to a corresponding message on the CAN based network406. In the example ofFIG.4, the first network gateway device404provides the message to a port414of the Ethernet switch. In the example ofFIG.4, any messages provided from the networks406appear on the Ethernet network410as a message on the port between the translation circuit420and the network interface416, and are received from the Ethernet network410through the port between the translation circuit420and the network interface416. The translation circuit420allows for configuration operations between messages, such that end points on each network406,410can communicate therebetween, as regulated by the CND108. The example ofFIG.4further includes an on-board diagnostic (OBD) interface422, which in the example communicates with a dedicated OBD port424.
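One illustrative reading of the translation circuit's role is a per-direction dispatch that keeps incompatible protocols isolated while still passing messages between them. The handler behaviors and network-type labels below are assumptions for illustration, not the disclosure's implementation:

```python
def translate(message, src, dst):
    """Route a message between networks of differing types; each
    (source, destination) direction gets its own configuration step,
    analogous to the translation circuit's per-direction handling."""
    handlers = {
        # CAN payload wrapped for an Ethernet-style network.
        ("can", "eth"): lambda m: {"payload": m, "origin": "can"},
        # Unwrap back to the native CAN payload.
        ("eth", "can"): lambda m: m["payload"],
        # CAN-to-CAN passage; conflicting IDs could be remapped here.
        ("can", "can"): lambda m: m,
    }
    return handlers[(src, dst)](message)
```

A round trip through both directions returns the original payload unchanged, which is the compatibility property the paragraph above describes.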
The example ofFIG.4is non-limiting for purposes of illustration, and the OBD interface422may be associated with any network, or more than one network (e.g., to support multiple OBD tools that may connect to the vehicle). An example embodiment includes the OBD interface422associated with the second network gateway device402, for example where the OBD system is largely CAN based, allowing for reduced traffic between the translation circuit420and the network interface416, as many of the OBD parameters are native to one or more of the CAN networks406. The OBD interface422may alternatively be present on the Ethernet network410, or present on more than one network406,410of the system. Regardless of the location of the OBD interface422and the network406,410origination of OBD related data, OBD requests and information can be made available to the OBD port424(which may be a physical connection, a wireless connection, or another external connection including a mobile data connection) via operations of the CND108to authorize and provide cross-network communication from end points of any of the networks406,410. Additionally, the example ofFIG.4utilizes an OBD interface422as a non-limiting example, but any type of special, dedicated, and/or proprietary interface may be provided in a similar manner, with an interface and port that can make any data from any end point on a network406,410available, subject to configurable regulation by the CND108. An example system includes the CND108interposed between an electrical sensor and one of the networks406,410, and structured to provide a sensed value on the network in response to an electrical response of the electrical sensor. 
For example, one of the networks406may be an electrical connection to the second network gateway device402, with a corresponding end point408as the electrical sensor, and whereby the translation circuit420converts the electrical signal from the sensor to a communication for the respective network (e.g., network410, or another network406). In the example, the translation circuit420may perform processing operations on the electrical signal, such as analog/digital (A/D) processing, determination of indicated bits, determination of an indicated value, de-bouncing of the signal, filtering of the signal, diagnostic bit detection (e.g., determination of a fault, and conversion to a corresponding fault value; and/or conversion of predetermined voltage values to a corresponding fault value), saturation management (e.g., limiting outputs to predetermined values), slew limitations (e.g., applying rate-of-change limits to the indicated value), and the like. Electrical signals from the sensor, where present, may be voltage values, frequency values, indicated resistance values, or any other type of sensor electrical value as known in the art. In another example, a system includes the CND108interposed between an electrical actuator and one of the networks406,410, and structured to provide a command value from the network as a configured electrical response to the electrical actuator. For example, one of the networks406may be an electrical connection to the second network gateway device402, with a corresponding end point408as the electrical actuator, and whereby the translation circuit420converts the communication from the respective network (e.g., network410, or another network406) to an electrical signal for the actuator. 
In the example, the translation circuit420may perform processing operations on the electrical signal, such as digital-to-analog processing, determination from indicated bits to corresponding values, diagnostic bit provision, saturation management, slew limitations, and the like. Electrical signals to the actuator, where present, may be voltage values, frequency values, modulated values, or any other type of actuator electrical value as known in the art. In certain embodiments, an electrical actuator may additionally have sensing values (e.g., position feedback, acknowledgement, etc.), and/or other feedback values (e.g., certain electrical values indicating the actuator has a fault condition, is non-responsive, is stuck, is saturated, etc.) which may be provided on the same or a distinct electrical connection, and which may logically be part of the same network406or a distinct network (e.g., actuation on one network406, and feedback on a second network406). It can be seen that the embodiment ofFIG.4provides for communication between end points on distinct networks, without the end points requiring knowledge about how communications to other end points are to be performed, or where other end points are positioned. Without limitation to any other aspect of the present disclosure, the embodiment ofFIG.4provides the capability for operation of vehicle networks with devices distributed across distinct networks, including networks of a different type. Additionally, the embodiment ofFIG.4provides for operation of the vehicle as devices move between networks, without limitation to whether the device has changed communication capability. For example, a first device on a CAN network that is moved to the Ethernet network can continue to function, with appropriate configuration of the CND108, as messages that were utilized by the device from the CAN network can be moved to the Ethernet network and made available to the device in the new position. 
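The saturation management and slew limitations described for the translation circuit can be sketched minimally as follows. The limit values and parameter names are illustrative assumptions, not values from the disclosure:

```python
def condition_signal(raw, prev, lo=0.0, hi=5.0, max_step=0.5):
    """Condition a raw sensor electrical value before it is published
    to a network: clamp to a valid range (saturation management), then
    bound the change from the previous value (slew limitation)."""
    # Saturation management: limit the output to predetermined values.
    value = min(max(raw, lo), hi)
    # Slew limitation: apply a rate-of-change limit per update.
    if value > prev + max_step:
        value = prev + max_step
    elif value < prev - max_step:
        value = prev - max_step
    return value
```

For example, a spurious 7.0 V spike following a 4.8 V reading would be clamped to the 5.0 V ceiling, and a jump from 1.0 to 3.0 would be slewed to 1.5 on the first update, illustrating how the circuit smooths electrically noisy inputs.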
In certain embodiments, the migrated device can continue to utilize previous algorithms (e.g., the same local control)—for example computer readable instructions specifically built for the specifics of the former CAN messages, including bit depth, resolution information, message rates, floating/fixed point data nature, and the like, with the CND108configured to encapsulate the entire original CAN message into an Ethernet message (e.g., a frame, a packet, and/or in a specified manner), such that the migrated device can receive the former CAN message as originally presented and utilized by that same local control. Accordingly, the embodiment ofFIG.4, and the principles set forth in relation toFIG.4, allow for changes in the end point device mix between networks, whether across a number of vehicles (e.g., changes that occur over a course of design revisions, model years, or the like) or within a same vehicle (e.g., changes that occur during service, upgrades or changes to end points, upgrades, upfits, recall replacements, etc.), with only an update to the CND108configuration to support the changes. In certain embodiments, the embodiment ofFIG.4and the principles set forth in relation toFIG.4allow for changes in the end point device mix between networks without requiring an update to the CND108configuration, for example where a range of end points are contemplated to be available in more than one possible network location and/or configuration, and where the CND108is configured to determine the end point arrangement present on the vehicle and to utilize a selected configuration (e.g., from among two or more available configurations) accordingly. 
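The encapsulation of an entire original CAN message into an Ethernet message, so a migrated device receives the message exactly as before, can be sketched as below. The framing layout is an assumption for illustration (the EtherType 0x88B5 is the IEEE local-experimental value, used here only as a placeholder), not the disclosure's specified format:

```python
import struct

def encapsulate_can_frame(can_id, payload, eth_type=0x88B5):
    """Wrap a raw CAN frame (29-bit-capable ID plus up to 8 data bytes)
    in a minimal Ethernet-style payload, preserving the original
    message byte-for-byte."""
    assert len(payload) <= 8  # classic CAN data length limit
    header = struct.pack(">IB", can_id, len(payload))  # ID + DLC
    return struct.pack(">H", eth_type) + header + payload

def decapsulate_can_frame(frame):
    """Recover the original CAN ID and payload unchanged, so the
    migrated device's local control sees the message as before."""
    can_id, dlc = struct.unpack(">IB", frame[2:7])
    return can_id, frame[7:7 + dlc]
```

The round trip is lossless, which is the property that lets previous algorithms built for the former CAN messages continue to run unmodified.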
Accordingly, the embodiment ofFIG.4, and the principles set forth in relation toFIG.4, further allow for changes to the end point device mix between networks, at least within a predetermined range of end point devices and configurations, to support vehicle operations without any changes to the vehicle, and even with only intermittent or no communication with external devices for configuration of the CND108. ReferencingFIG.5, an example system includes a CND108regulating communication between networks on a vehicle, where the networks may be separated physically, logically (e.g., as virtual local area networks (VLANs), or other logical separation schemes), and/or two or more of the networks may be different types. The embodiment ofFIG.5is generally consistent with the embodiment ofFIG.4, with some differences depicted to highlight certain aspects of the present disclosure. The example ofFIG.5includes additional interfaces504,506, which may be separate networks or network zones relative to the networks406. The example ofFIG.5depicts a vehicle control device interface (VCDI)508, which may be an interface to a vehicle controller (e.g., engine controller, transmission controller, anti-lock brake system (ABS) controller, advanced driver-assistance system (ADAS) controller, door controller, battery controller, head unit, interactive dashboard, etc.) of any type, including a controller providing communications at the end point504, and/or an electrical interface such as to a sensor, actuator, or combined sensor and actuator. The example ofFIG.5depicts an additional interface506to an end point502, which may be a communicative device of any type as understood in the art or set forth herein. In the embodiment ofFIG.5, network interface circuits418,508are depicted between the end points408,502and the translation circuit420, to allow for the translation circuit420to interface with numerous network types that may be present on the vehicle. 
The interface circuits418,508may be positioned with the translation circuit420, or located elsewhere and communicatively coupled to the associated network(s) and to the translation circuit420. The example ofFIG.5additionally depicts networks512,514that are communicatively coupled to the first network gateway device404through end points412on the same network as the network interface416. In certain embodiments, the CND108does not have or need specific knowledge about the networks512,514or associated end points516,518, as communications to the networks512,514are provided through the end points412. However, the CND108is structured to provide communications from networks in communication with the second network gateway device402, such as networks406, and/or networks interfaced at end points504,506. Communications from the second network gateway device402may provide the requested information (e.g., ambient temperature, door position, vehicle speed), for example as an encapsulated payload that provides the information, or as a native message (e.g., a CAN message indicating ambient temperature, door position, vehicle speed; and/or a LIN message having associated sensor information). Accordingly, end points516,518can send and receive tunneled messages with networks406(or other networks) in a shared format, or otherwise receive information from any network on the vehicle, subject to regulation by the CND108. ReferencingFIG.6, an example system includes a CND108regulating communication between networks on a vehicle, where the networks may be separated physically, logically (e.g., as virtual local area networks (VLANs), or other logical separation schemes), and/or two or more of the networks may be different types. The embodiment ofFIG.6is generally consistent with the embodiment ofFIG.4, with some differences depicted to highlight certain aspects of the present disclosure.
Without limitation to any of the flexibility of arrangements depicted inFIG.4, the example ofFIG.6depicts the translation circuit420positioned in the first network gateway device404. Without limitation to any other aspect of the present disclosure, co-location as depicted inFIG.6, and as utilized herein, can indicate physical co-location (e.g., the translation circuit420positioned within a shared housing with the first network gateway device404, and/or on a same board with the first network gateway device404) and/or logical co-location (e.g., the grouping of operational responsibility of implementing hardware, such as connections, connectivity, operational instructions, stored data, data storage, and/or processing resources, etc.). The determination of a co-location scheme depends upon the purpose of the co-location (e.g., sharing hardware resources, reducing external interfaces, simplifying and/or diversifying risk profiles of the co-located components and/or of other components in the system related to the co-located components); the nature of the co-located components (e.g., hardware implementations, processing and/or memory resources related to the co-located components); the division of ownership of the co-located components (e.g., manufacturer, supplier, service party, vehicle owner, vehicle operator); operational responsibility of components and/or the vehicle (e.g., warranty, operational liability, service, insurance, uptime responsibility, etc.); and/or integration responsibility of components (e.g., installation, design, meeting a footprint requirement, tradeoffs between components, and/or ability to influence these). 
Accordingly, in certain embodiments, co-locating components may include one or more of: positioning components within a shared housing or group of housings; positioning components in a selected geometric proximity; positioning components in a selected logical arrangement (e.g., associating in a same flow or group of flows, associating in a same application or group of applications, providing operational constraints such as parameter naming, memory assignment, execution order, or the like); positioning components in a selected risk profile arrangement (e.g., positioning in a same impact zone, a same temperature environment, a same NVH environment, a same EMI environment, subject to a same failure mode (e.g., electrical, logical, fault, physical impact, and/or dependency on a physical component such as a pump, cooling system, etc.)); on a same board; and/or within a shared memory location (e.g., computer readable instructions positioned in a shared memory location, and/or executed by a same processor resource). In the example, NVH is the “noise, vibration, and harshness” environment, and EMI is the “electro-magnetic interference” environment. One of skill in the art, having the benefit of the present disclosure and information ordinarily available when contemplating a particular system, can readily determine implementations of components that are co-located as set forth in the present disclosure. It can be seen that components arranged in one or more of the described co-location schemes may be co-located for certain embodiments, or not co-located for other embodiments, and/or may be co-located for the purposes of certain operating conditions, but not co-located for the purposes of other operating conditions. 
Certain considerations to determine whether components are to be co-located, and the selected co-location scheme for those components, include (without limitation): the purpose of the co-location; operational costs of resources (e.g., communications, processing resources, operational limitations to the vehicle mission, operational impact to the vehicle mission such as cooling requirements, power consumption, and the like); capital costs of resources (e.g., computing power, network infrastructure, memory resources, individual component quality or capability requirements, shielding requirements, data throughput whether intra-vehicle or extra-vehicle, etc.); integration costs for components (e.g., footprint availability and cost, interface management, design flexibility and lock-down trajectory, and/or ability to trade-off and/or optimize with other aspects of the system); and/or the ability to distribute costs to other interested parties related to the system (e.g., suppliers, manufacturers, customers, and/or service parties; and which may include the ability to distribute increased costs related to increased capabilities, and/or to trade costs between interested parties). In the example ofFIG.6, the translation circuit420may provide communications by, without limitation, populating and/or reading from a shared memory with the network interface416, and/or by communicating with a port414(not shown). ReferencingFIG.7, an example system includes a CND108regulating communication between networks on a vehicle, where the networks may be separated physically, logically (e.g., as virtual local area networks (VLANs), or other logical separation schemes), and/or two or more of the networks may be different types. The embodiment ofFIG.7is generally consistent with the embodiment ofFIG.4, with some differences depicted to highlight certain aspects of the present disclosure. 
Without limitation to any of the flexibility of arrangements depicted inFIG.4, the example ofFIG.7depicts the translation circuit420having a first portion702co-located with the second network gateway device402and a second portion704co-located with the first network gateway device404. The portions702,704of the translation circuit420may be separated for any reason, including at least separating translation operations by network (e.g., which network406is being serviced), by predetermined end points, by flows, by translation operation (e.g., processing of frame information, processing of payload information, managing capability differences by down-sampling, up-sampling, buffering, providing communication commands, encapsulation of a message into another message format, etc.), and/or by direction of communication (e.g., direction between selected networks, between the gateway devices, between end points, between flows, or combinations of these). ReferencingFIG.8, an example system includes a CND108regulating communication between networks on a vehicle, where the networks may be separated physically, logically (e.g., as virtual local area networks (VLANs), or other logical separation schemes), and/or two or more of the networks may be different types. The embodiment ofFIG.8is generally consistent with the embodiment ofFIG.4, with some differences depicted to highlight certain aspects of the present disclosure. In the example ofFIG.8, the first network gateway device and the second network gateway device are co-located, and are not separately depicted, being shown instead as part of the CND108. In certain embodiments, the CND108ofFIG.8may alternatively be a combined gateway device that is regulated by the CND108, rather than forming a part of the CND108. In certain embodiments, one or more portions of the combined gateway device(s) may form a part of the CND108, with other portions of the combined gateway device(s) regulated by the CND108.
A policy, as utilized herein and without limitation to any other aspect of the present disclosure, includes a description of data to be collected, such as data parameters, collection rates, resolution information, priority values (e.g., ordering data collection values for selection in response to off-nominal conditions where not all data collection parameters can be serviced, etc.). In certain embodiments, a policy further includes event information, which may be stipulated as parameter or quantitative based events (e.g., a given data value exceeds a threshold, etc.), and/or categorical events (e.g., a particular fault code, operational condition or state, or vehicle location/jurisdiction occurs). In certain embodiments, a policy further includes an event response, such as data values to be captured in response to the occurrence of the event, and/or other changes in the data collection scheme such as increased or reduced data collection rates, changes in collected resolution, or the like. In certain embodiments, an event response further includes a time frame associated with the event occurrence, for example a time period after the event occurrence to utilize the adjusted data collection scheme, and/or a time period preceding the event occurrence (e.g., utilizing a rolling buffer or other data collection operation, providing temporary information that can subsequently be captured if the event occurs). In certain embodiments, changes to the data collection scheme for an event can include multiple changes—for example changes over a period of time, further changes based upon the progression of the event (e.g., if the event severity gets worse), and/or criteria to determine that an event is cleared. 
In certain embodiments, changes to a data collection scheme may be implemented based on event related clearance of the same or another event, for example implementing a data collection change until a next shutdown event of the vehicle, until a service technician clears the event, until a selected number of shutdown events occurs, or the like. A policy may additionally or alternatively include parameters for performing any regulating operations for any regulated components as set forth throughout the present disclosure. The utilization of a policy herein may reference a partial policy, for example the implied policy that would be implemented in response to a single data collection scheme from a single user, wherein the full policy is prepared, verified, and communicated to the vehicle after one or more partial policies are aggregated. The utilization of a policy herein may reference an unverified policy, for example after a policy responsive to a number of users is aggregated, but verification operations of the policy are not yet completed (e.g., before it is determined if the data collection implied by the policy can be performed). The utilization of a policy herein may reference a previously applied policy (e.g., a policy present on a vehicle before an updated version of the policy is communicated to the vehicle and/or implemented on the vehicle). The utilization of a policy herein may reference an updated policy, for example a verified policy that is pending for communication to the vehicle and/or confirmed by the vehicle (e.g., from the CND108). ReferencingFIG.9, an example system includes a CND108regulating communication between networks on a vehicle, where the networks may be separated physically, logically (e.g., as virtual local area networks (VLANs), or other logical separation schemes), and/or two or more of the networks may be different types.
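The policy elements described above (data parameters with rates, resolution, and priority; quantitative event triggers; and event responses with pre- and post-event windows and clearance criteria) can be captured in a simple data structure. The field names and values below are a hypothetical schema for illustration; the disclosure does not fix a concrete format:

```python
# Illustrative policy sketch; all field names are assumptions.
policy = {
    "parameters": [
        {"name": "vehicle_speed", "rate_hz": 10, "resolution": 0.1, "priority": 1},
        {"name": "ambient_temp", "rate_hz": 1, "resolution": 0.5, "priority": 3},
    ],
    "events": [
        {
            # Quantitative trigger: a data value exceeds a threshold.
            "trigger": {"parameter": "coolant_temp", "above": 110.0},
            "response": {
                "capture": ["coolant_temp", "engine_load"],
                "rate_hz": 100,      # increased collection rate
                "pre_event_s": 5,    # rolling-buffer window before the event
                "post_event_s": 30,  # adjusted scheme continues after the event
                "clear_on": "next_shutdown",
            },
        }
    ],
}

def event_triggered(policy, name, value):
    """Return True when a quantitative trigger in the policy fires."""
    for event in policy["events"]:
        trig = event["trigger"]
        if trig.get("parameter") == name and value > trig.get("above", float("inf")):
            return True
    return False
```

A coolant temperature reading of 115.0 would fire the trigger and switch collection to the event response scheme, while nominal readings leave the base scheme in place.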
The embodiment ofFIG.9is generally consistent with the embodiment ofFIG.4, with some differences depicted to highlight certain aspects of the present disclosure. In the example ofFIG.9, the first network gateway device404and the second network gateway device402are not co-located, and the CND108is depicted in communication with the first network gateway device404. The CND108may be in communication with any one or more of the network gateway device(s), and/or may be positioned at least partially on one or more of the network gateway device(s). Additionally or alternatively, the CND108may regulate communication between the networks by accessing and/or adjusting a memory location (e.g., a policy, configuration instructions, a configuration table, or the like) available to one or more of the network gateway device(s), where a relevant portion of the instructions (if any) may be passed to other network gateway device(s) if the CND108does not communicate directly with those devices. In certain embodiments (not shown), the CND108may communicate to one or more of the network gateway devices utilizing one or more of the networks, for example at a port414of the first network gateway device404. In certain embodiments, the CND108may be positioned, at least partially, on one or more of the network gateway devices, co-located with one or more of the network gateway devices, and/or included (at least partially) in a component of one or more of the network gateway devices (e.g., a translation circuit and/or a network interface circuit). ReferencingFIG.10, an example first network gateway device404is depicted. In the example ofFIG.10, the first network gateway device404is a configurable Ethernet Switch, including an Ethernet network interface416(or Ethernet network interface circuit) having a number of ports414for communication with an Ethernet network. The ports414may be physical ports, logical ports, or a combination thereof. 
ReferencingFIG.11, an example second network gateway device402is depicted. In the example ofFIG.11, the second network gateway device402is a configurable edge gateway (CEG), providing translation between a secondary network406and a primary network interface (e.g., an Ethernet network such as network410). The utilization of secondary and primary to reference networks merely indicates a logical arrangement of networks, where interfaces to other networks than the primary are referenced as edge interfaces (e.g., interfaced with an edge gateway). In certain embodiments, the primary network may have a higher capability (e.g., bandwidth, throughput, and/or resource dedication) or a greater number of devices or end points thereon, and/or may serve as a migration target network (e.g., over the life of a vehicle, a group of vehicles, a period of model years, etc.) for end points over time, and/or as a main entry network for external communications (e.g., over-the-air updates, configuration updates, data collection, etc.), although a particular embodiment may have some, all, or none of these considerations present for a network considered as a primary network. The example ofFIG.11depicts an optional OBD interface422, which may be present elsewhere in the system, or not present in the system. ReferencingFIG.12, a vehicle having a number of networks thereon, where communications between the networks are regulated by a CND108, is schematically depicted. The arrangement ofFIG.12is provided to illustrate certain aspects of the present disclosure, and is a non-limiting arrangement. The example ofFIG.12includes end points1202,1204(e.g., one or more vehicle controllers) coupled to a first network406, and a number of end points1206,1208,1210,1212coupled to a second network (e.g., an Ethernet network, with a switch co-located with the CND108and/or at least partially separate from the CND108).
In the example ofFIG.12, the controllers1202,1204,1206,1208,1210,1212are able to pass communications, as regulated by the CND108, between disparate networks of the vehicle. In certain embodiments, a given controller can be switched between networks, and communications with other controllers within the vehicle, and/or communications external to the vehicle, can be maintained, and further can be maintained whether the related controllers (or external controllers, applications, or devices) have knowledge of the switch or not. ReferencingFIG.13, a vehicle having a number of networks thereon, where communications between the networks are regulated by a CND108, is schematically depicted. For purposes of illustration, the example ofFIG.13includes the same networks and set of controllers as the example ofFIG.12. In the example ofFIG.13, the controllers1204,1208,1210, and1212have been co-located1302, and the controller1204has additionally been moved from the first network406to the second network. The co-location1302of the controllers1204,1208,1210,1212can be any implementation, including consolidation of the controllers into a lesser number of housings (e.g., 1-3 total housings instead of 4), onto a lesser number of boards (e.g., 1-3 boards, instead of 4), and/or utilizing at least partially shared computing resources (e.g., shared processing, shared memory, shared caches, and/or combinations of these). In certain embodiments, the utilization of the CND108allows for the arrangement ofFIG.13, including the consolidation of vehicle controllers, by providing for communication regulation, and maintained connectivity, with only a configuration update to the CND108, and/or with consolidation changes of vehicle controllers that fit within available predetermined configurations of the CND108(and thereby can be implemented without an update to the CND108). 
Additionally, the consolidation of controllers may provide a number of benefits, such as reduction in network costs, reduction in network traffic, selected distribution of risk (e.g., arrangement of controller positions and/or network routing in a lower risk, or diversified risk, position; and/or reduction of risk to another system component utilizing the footprint gains and/or cost savings of the controller consolidation). In certain embodiments, the consolidation of controllers may enable deeper sharing of information between controllers (e.g., due to increased available network capacity, bypassing of network limitations with shared controllers, and/or utilization of shared memory resources), which may allow for more capable operations of the controllers, and/or operations previously unavailable because the shared information between controllers was not as readily available. In certain embodiments, the CND108further enables the consolidation of controllers, by de-coupling the controller locations from end point locations (not shown) that are required to be distributed (e.g., sensors and actuators that need to be placed in certain locations to perform their function no longer need to be located near the respective controller due to operations of the CND108, and/or CEG402). In certain embodiments, the consolidation of controllers allows for reduced costs and/or increased capability, for example by reducing hardware costs for shared computing resources, enabling higher capability (e.g., processing power and/or memory) computing resources, or combinations of these. The operations of the CND108thus allow for consolidation operations of vehicle controllers that were not previously available. In certain embodiments, the example ofFIG.13may be a consolidation of controllers relative toFIG.12, and/or an illustration of an unrelated embodiment. 
Referencing FIG. 14, a vehicle having a number of networks thereon, where communications between the networks are regulated by a CND 108, is schematically depicted. For purposes of illustration, the example of FIG. 14 includes the same networks and a similar set of controllers as the example of FIG. 12. In the example of FIG. 14, the co-located 1302 controllers include a set of controllers 1402, 1404, 1406, and the CND 108 is depicted as a controller on the co-located 1302 controllers. The CND 108 may be positioned, at least in part, on one or more of the co-located controllers 1402, 1404, 1406, and/or may be separate as depicted. In certain embodiments, the example of FIG. 14 may be a further consolidation of controllers relative to FIG. 13, and/or an illustration of co-located 1302 controllers unrelated to the examples of FIGS. 12 and 13. Referencing FIG. 15, a vehicle having a number of networks thereon, where communications between the networks are regulated by CNDs 1502, 1504, is schematically depicted. For purposes of illustration, the example of FIG. 15 utilizes two consolidated controllers 1302, 1506, each including a group of co-located vehicle controllers as set forth throughout the present disclosure. The example of FIG. 15 includes a first CND 1502 (or CND portion) interposed between a first network 406 and a second network (end points 412 directly coupled to the CND 1502 and the consolidated controller 1506 directly coupled to the CND 1502), and a second CND 1504 (or CND portion) interposed between the first network 406 and a second network (end points 412 directly coupled to the CND 1504 and the consolidated controller 1302 directly coupled to the CND 1504). In certain embodiments, the second network associated with the first CND 1502 may be a separate network relative to the second network associated with the second CND 1504, but may be a same type of network (e.g., an Ethernet network) and/or may utilize the same or electrically coupled hardware relative to each other.
The example of FIG. 15 illustrates the CND 1504 as having primary network regulation for the first network 406, but regulation of the first network 406 may be distributed, shared, regulated according to end points, applications, and/or flows, or the like. In certain embodiments, regulation of the second network(s) may be performed by only one of the CNDs 1502, 1504, and/or distributed, shared, regulated according to end points, applications, and/or flows. A number of representative aspects of FIG. 15 are described following, any one or more of which may be present in certain embodiments. An example aspect of FIG. 15 includes shared regulation of networks by the CNDs 1502, 1504, with either of the CNDs 1502, 1504 fully or partially capable to support regulation of all networks, for example if an end point, network, the other CND (or portion), and/or controller experiences a failure, a fault, or diminished operational capability. An example aspect of FIG. 15 includes primary regulation of networks by one of the CNDs 1502, 1504, with the other CND capable to fully or partially support regulation of the networks, for example if an end point, network, primary CND, and/or controller experiences a failure, fault, or diminished operational capability. An example aspect of FIG. 15 includes one or more of the consolidated controllers 1302, 1506 capable to at least partially assume control operations for the other of the consolidated controllers 1506, 1302 if one of the consolidated controllers loses capability, connectivity with an end point, or the like. In certain embodiments, the CNDs 1502, 1504 are capable to pass parameters that were previously only available to the original controller 1302, 1506 in response to the assumption of the control operations by the replacement controller 1506, 1302. In certain embodiments, the redundant network routing availability is usable by the CNDs 1502, 1504 to provide at least partial connectivity between end points that lose connection when a part of the network goes down.
The CNDs 1502, 1504 may provide equivalent parameters (e.g., another end point that is capable to provide equivalent data), substitute parameters (e.g., another end point that is capable to provide a substitute or backup parameter that is usable, at least partially, as a substitute for the lost parameter), the same parameters (e.g., where the data from the original end point, or the same data value from another end point, can be routed through the remaining network infrastructure), and/or may provide managing parameters such as controller hand-off communications, heartbeat or status communications, or the like. In certain embodiments, one or both of the CNDs 1502, 1504 or CND portions may be co-located with another system component, such as one of the consolidated controllers 1302, 1506. In certain embodiments, network routing for networks on the vehicle is provided to yield distinct risk profiles for networks on the vehicle, reducing the risk of a single failure rendering the vehicle inoperable for the mission, and/or inoperable for at least a limp home operation, controlled shutdown, data capture, or the like. In certain embodiments, controller, CND, and/or consolidated controller locations may be selected to provide distinct risk profiles for related devices, reducing the risk of a single failure rendering the vehicle inoperable for the mission, and/or inoperable for at least a limp home operation, controlled shutdown, data capture, or the like. In certain embodiments, network routing for networks on the vehicle is provided to yield a lower operating cost, installation cost, integration cost, overall risk profile, distribution of weight and/or footprint of components on the vehicle, or the like.
Resolution of competing priority interests may be performed in any manner, such as always favoring the highest priority requestor, providing a weighted response based on the priority (e.g., servicing a high priority request more often than a lower priority request), and/or utilizing a credit based scheme that allows lower priority requests to be serviced after a period of time and/or number of requests, while favoring higher priority requests. Resolution of competing priority interests may include meeting service performance requirements (e.g., QoS values) for higher priority requests, and servicing lower priority requests to the extent possible while meeting performance requirements for higher priority requests. As utilized herein, the mission of a device (e.g., a controller, end point, vehicle, mobile application, etc.) should be understood broadly, and includes at least the related functions, structures, capability, and operations of the device to support operation of the mobile application to perform the intended function or primary function of the mobile application. Without limitation to any other aspect of the present disclosure, an intended function or primary function of the mobile application includes one or more of: motive operation of the mobile application, in accordance with the designed motive capabilities (e.g., with specified torque, speed, responsiveness, etc.); and/or non-motive operation (e.g., industrial operations, vocational operations, pumping operations, provision of shaft power, movement range, and control thereof) of the mobile application, with the designed non-motive capabilities. 
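The weighted and credit-based resolution schemes described above can be sketched as follows. This is a minimal illustration only; the class name `CreditScheduler` and the additive scoring rule are assumptions for the sketch, not taken from the disclosure:

```python
class CreditScheduler:
    """Toy credit-based arbiter: favors high-priority requestors, but
    waiting lower-priority requestors accrue credits so they are
    eventually serviced after a period of time or number of requests."""

    def __init__(self, priorities):
        # priorities: requestor name -> static priority (higher = more urgent)
        self.priorities = priorities
        self.credits = {name: 0 for name in priorities}

    def select(self, pending):
        # Effective score = static priority + accumulated waiting credits.
        chosen = max(pending, key=lambda r: self.priorities[r] + self.credits[r])
        for r in pending:
            if r != chosen:
                self.credits[r] += 1  # each unserviced requestor accrues credit
        self.credits[chosen] = 0      # the serviced requestor resets to zero
        return chosen
```

With priorities {"brake": 10, "infotainment": 1}, the brake requestor is serviced repeatedly, but the infotainment requestor's credits grow each round until its effective score exceeds the brake priority and it is serviced once, matching the "serviced after a period of time and/or number of requests" behavior described above.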
In certain embodiments, the intended function or primary function of the mobile application includes off-nominal operational response that may be less capable than the designed motive or non-motive capabilities, such as operation in a limp home mode, communication of fault or failure conditions, and/or prevention of further degradation of the vehicle and/or mobile application. In certain embodiments, the intended function or primary function of the mobile application includes sending and/or receiving external data, performing update operations, facilitating service operations, facilitating update and/or upgrade operations, or the like. Accordingly, the mission of a device may vary between mobile applications, according to the current operating condition of the mobile application, and/or according to the current status of the mobile application and/or components, devices, and/or controllers thereof. One of skill in the art, having the benefit of the present disclosure and information ordinarily available when contemplating a specific mobile application, will readily understand the mission of the mobile application, the mission of devices of the mobile application, and the variability of these across operating conditions and status conditions of the mobile application. Referencing FIG. 16, an example system 1600 is provided for providing extra-vehicle communication control, consistent with embodiments of the present disclosure. Systems described throughout the present disclosure may be provided on a mobile application such as a vehicle or as described throughout the present disclosure. Example systems herein recite particular arrangements, for example of a converged network device (CND) 108, circuits, controllers, or other components. The arrangements are provided for clarity of the present description, but components may be distributed, combined, divided, and/or have distinct relationships to those depicted to form systems and to perform procedures described herein.
Circuits, controllers, processors, or other devices set forth herein are configured to functionally perform operations as described herein, and may include computing components such as processors, memory, and/or communications components. Additionally or alternatively, such devices may include logic circuits, hardware configured to perform one or more functions of the device, sensors, actuators, and/or display devices of any type. A given circuit, controller, processor, or other such device may be distributed and/or grouped, in whole or part, with other such devices. Certain operations herein are described as interpreting or receiving parameters, or obtaining parameter values using other similar language depending upon the context. Any such operations include receiving the parameter value as a network communication; receiving the parameter value from a sensor; receiving the parameter value as a feedback value (e.g., an actuator position, a reported fault code value, etc.); retrieving the parameter value from a memory location accessible to the interpreting or receiving device; receiving the parameter value as a command; receiving the parameter value as a response to a request from the receiving or interpreting device; and/or receiving pre-cursor values from which the parameter is, at least in part, determined (e.g., operating a virtual sensor using other information to determine the interpreted or received parameter value; determining a state value based upon the received information, where the state value is the received or interpreted value for the purpose of the description; and/or using received information to infer the interpreted value). 
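The several ways of "interpreting or receiving" a parameter value enumerated above can be illustrated with a small resolution helper. This is a sketch only; the function name, the dict-based sources, and the resolution order are assumptions for illustration:

```python
def interpret_parameter(name, network=None, sensor=None, memory=None, virtual=None):
    """Resolve a parameter value from the first available source: a network
    communication, a direct sensor reading, a stored memory location, or
    finally a virtual-sensor computation over pre-cursor values."""
    for source in (network, sensor, memory):
        if source is not None and name in source:
            return source[name]
    if virtual is not None and name in virtual:
        # operate a "virtual sensor": compute the value from other information
        return virtual[name]()
    return None
```

A receiving device might resolve the same parameter differently at different times (e.g., falling back to a stored memory value when the network message is absent), consistent with the description above.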
Any such operations may further include more than one of these (e.g., interpreting a parameter value in distinct ways at different times, operating conditions, during off-nominal conditions, depending upon a source of the parameter value, and/or depending upon the usage or purpose of the interpreted parameter value at a given time or during certain operating conditions), and/or combinations of these (e.g., operating a virtual sensor on received information to determine a pre-cursor value, and determining the interpreted parameter value in response to the pre-cursor value). The example system 1600 includes a vehicle 102 having a first network zone 1612 and a second network zone 1614, where the first network zone 1612 and the second network zone 1614 are different types of networks. Without limitation to any other aspect of the present disclosure, different types of networks as described herein contemplates any difference in the networks, such as: a difference in a network capability (e.g., bandwidth, message size, latency, noise sensitivity, etc.); a difference in a network protocol at any layer (e.g., hardware type; message frame requirements; addressing schemes; acknowledgement types, requirements, or capabilities; casting availability such as unicast, multi-cast, and/or broadcast); a network standard type (e.g., Controller Area Network (CAN); Media Oriented Systems Transport (MOST) network; Local Interconnect Network (LIN); FlexRay network; Time-Triggered Protocol (TTP) network; Low-Voltage Differential Signaling (LVDS) network; Audio Video Bridging (AVB) compliant network; a customized version of any one or more of the foregoing; and/or a proprietary version of any one or more of the foregoing).
An example network zone includes an electrical signal zone (e.g., a network where a corresponding network interface circuit interprets an electrical signal value as a communication, and/or provides an electrical signal value as a communication to an end point of the electrical signal zone, such as a sensor providing certain electrical values indicating a sensed parameter value, a diagnostic value, or the like, and/or an actuator responsive to certain electrical values to move to a selected position and/or apply a selected force, and/or where the actuator may additionally or alternatively provide feedback information and/or diagnostic information on the electrical signal zone). Electrical signals for an electrical signal zone may be of any type, including at least: voltage values; frequency values; current values; and/or configured pulse-width modulated (PWM) values such as duty cycles, amplitudes, selected periods, and the like. The example system 1600 further includes a policy manager circuit 1602 that interprets a policy 1606 including a network regulation description (not shown), and a configuration circuit 1604 that configures at least one network interface circuit (e.g., a first network interface circuit 1608 corresponding to the first network zone 1612 and/or a second network interface circuit 1610 corresponding to the second network zone 1614) in response to the policy 1606. For example, the policy 1606 may be provided by an external device 1618, and/or may be previously stored (e.g., at a time of manufacture, assembly, and/or during a previous update from the external device 1618), where the policy 1606 includes the network regulation description having selected indications of devices on the vehicle 102 for capability to utilize the network zones 1612, 1614, to communicate between zones, and/or to communicate with external devices 1618.
An example system 1600 includes the first network interface circuit 1608 provided as a part of a CEG, where the first network zone 1612 is a CAN bus network, and the second network interface circuit 1610 provided as a part of a CES, where the second network zone 1614 is provided as an Ethernet network. In the example, the first network interface circuit 1608 provides selected communications from the first network zone 1612 to the second network interface circuit 1610 at a selected port of the Ethernet network, and/or receives selected communications from the second network zone 1614 at the selected port of the Ethernet network, thereby providing for inter-network communications between the first network zone 1612 and the second network zone 1614. In the example, communications from the first network zone 1612 to an external device 1618 may be provided through the second network zone 1614 (e.g., where the external device 1618 is coupled to the second network zone 1614 and/or connected wirelessly to the vehicle 102), or directly to the external device 1618 (e.g., where the external device 1618 is coupled directly to the first network zone 1612 or CAN bus). An example system 1600 includes the first network zone 1612 as a virtual local area network (VLAN), logically separated from the second network zone 1614, but positioned on at least partially shared hardware with the second network zone 1614. In the example, the first network interface circuit 1608 and second network interface circuit 1610 may be operated as elements of a network switch or router, controlling communication between end points of the first network zone 1612 and second network zone 1614 in response to the policy 1606. Devices on the vehicle 102 that are regulated by the policy include, without limitation, one or more of: an end point of a network zone; a flow associated with a communicating device (e.g., an end point or an application); an application associated with a communicating device (e.g., an end point).
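The selective passing of communications between zones in response to the policy might be sketched as a default-deny forwarding table. The table layout, zone names, and message IDs below are hypothetical, not the disclosure's format:

```python
# Hypothetical policy table: (source_zone, dest_zone, message_id) -> permitted?
POLICY = {
    ("can", "ethernet", 0x100): True,   # e.g., a wheel-speed broadcast allowed across zones
    ("can", "ethernet", 0x200): False,  # e.g., an internal-only frame kept on the CAN zone
}

def regulate(source_zone, dest_zone, message_id):
    """Return True only if the policy explicitly permits passing this
    message between the named zones; unknown traffic is denied by default."""
    return POLICY.get((source_zone, dest_zone, message_id), False)
```

A default-deny lookup like this is one plausible reading of "selected communications" being provided between the network interface circuits in response to the policy 1606.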
For example, an end point of the first network zone 1612 (e.g., a backup camera on the vehicle 102) may request or perform communications on a network of the vehicle, but may be associated with more than one application or flow (e.g., associated with a first flow relating to vehicle reverse movement operations at a first operating condition, and associated with a second flow relating to vehicle security operations at a second operating condition), and accordingly the communications of the backup camera on the vehicle 102 may have different regulation parameters depending upon the flow associated with the operations at the moment. In certain embodiments, an end point is associated with more than one application or flow, and the end point is regulated according to a highest priority one of the associated applications or flows (e.g., to reduce communication requirements, such as determining the application or flow that is requesting the immediate communication to be regulated, and/or to reduce processing time to determine which application or flow is requesting the immediate communication). In certain embodiments, an end point is associated with more than one application or flow, and the end point is regulated according to priority of the application or flow requesting the immediate communication. Devices on the vehicle 102 that are regulated by the policy may be referenced herein, without limitation, as a local communicating device. Local communicating devices include, without limitation: an end point of a network zone; an application; a flow; a sensor device; a service group; a vehicle function (e.g., power management, cabin comfort, traction control, etc.); and/or a vehicle controller (e.g., an engine controller, a transmission controller, an anti-lock brake system (ABS) controller, an advanced driver assistance system (ADAS) controller, etc.).
It can be seen that a given component, such as an end point of a network zone, may be a first local communicating device during one operating condition, and a second local communicating device during another operating condition—for example depending upon the vehicle operating condition (e.g., shutdown, motive operation, parked operation, etc.), and/or may be a first local communicating device for a first purpose (e.g., a brake controller performing active traction control operations) and a second local communicating device for a second purpose (e.g., the brake controller providing data to be stored for diagnostic operations). Additionally, it can be seen that the distribution of communicating devices between applications, flows, controllers, vehicle functions, and the like, depends upon the organizing strategy of the particular system, design choices made by a manufacturer or other entity having design and/or configuration control of the system, and the like. For example, traction control may be provided by a unified vehicle controller for a given system (e.g., which may treat the traction control as a vehicle controller for network regulation purposes); provided by distributed controllers for another system (e.g., which may treat the traction control as a vehicle function for network regulation purposes); and/or may be treated as a logically grouped set of operations for another system (e.g., which may have any hardware organization including the previously described organizations, and which may treat the traction control as an application or flow for network regulation purposes). One of skill in the art, having the benefit of the present disclosure and information ordinarily available when contemplating a particular system, can readily determine the organizational scheme and network regulation for local communicating devices of the system.
The organizational scheme for local communicating devices includes the inclusion and/or association of end points of the network zones, and/or certain communications (including source or destination communications for the end point(s)) with one or more of: particular end points, vehicle controllers, vehicle functions, applications, and/or flows of the system. Certain considerations to determine the organizational scheme include, without limitation: the number, types, capabilities, and inter-connection bandwidth of network zones of the system; the available size and/or granularity for policy(ies) of the system; the processing power available for implementation of the policy(ies) of the system; the number and distribution of vehicle controllers and other controllers throughout the system; the expected change of the system over time (e.g., availability to reconfigure, remanufacture, and/or re-spec the vehicle; expected changes in coming model years associated with the vehicle; and/or the level of consumer and/or third-party customization of the vehicle that is available or expected); the number and distribution of sensors and/or actuators throughout the system, and the connectivity of the sensors and/or actuators to a network zone (e.g., consolidation at controllers, and/or consolidation using smart sensors/actuators capable to directly interface with a network zone); the presence, number, and distribution of multi-purpose communicating elements on the system (e.g., sensors, actuators, controllers, and/or data values that service multiple vehicle functions, flows, and/or applications); the presence, number, and distribution of multi-purpose data elements on the system (e.g., sensors, actuators, controllers, and/or data values that provide redundant capability to support a given vehicle function, flow, and/or application); and/or the expected utilization of a network aspect (e.g., communications on a network zone, external communication data rate and/or
aggregate data communicated, inter-network communications, etc.) relative to a related capacity (e.g., a bandwidth of a network zone, external communication bandwidth, external communication data limit, inter-network communications, etc.). An example policy manager circuit 1602 receives a policy communication 1616 from an external device 1618, and interprets the policy 1606 by performing an operation such as storing the policy 1606 (e.g., in a memory location accessible to the policy manager circuit 1602, and/or distributed throughout a number of memory locations) and/or updating a stored policy 1606. In certain embodiments, the policy manager circuit 1602 configures the policy 1606 for utilization by network regulating aspects of the system 1600, for example by updating a number of configuration files utilized by interface circuits 1608, 1610, adjusting high-level descriptions of the policy communication 1616 (e.g., limit external communication data to 32 GB per month) to executable commands for network regulating aspects of the system 1600, adjusting reference values of the policy communication 1616 (e.g., associating a local address value of an end point referenced in the policy communication 1616, such as when an end point has moved without notification to the external device 1618, and/or where specific addressing information of local devices is abstracted from the external device 1618, etc.), associating system-specific nomenclature to elements of the policy communication 1616 (e.g., local parameter value names or IDs, flow names or IDs, application names or IDs, etc.), or the like. An example system 1600 includes the external device 1618 communicatively coupled to the policy manager circuit 1602 through at least one of the first network zone 1612 or the second network zone 1614—for example using a CAN bus port, OBD port, Ethernet port, proprietary port, or other direct coupling to a network zone.
An example system 1600 includes the external device 1618 communicatively coupled to the policy manager circuit 1602 through a wireless connection, such as a WiFi connection, cellular connection, and/or Bluetooth connection. An example system 1600 includes the policy manager circuit 1602 verifying the policy 1606, as communicated by the policy communication 1616, before performing the storing and/or updating of the policy 1606. For example, the policy manager circuit 1602 may require an authentication of the external device 1618, and/or a determination of the permissions associated with the external device 1618, before performing a change to the policy 1606. In certain embodiments, the policy manager circuit 1602 may determine permissions associated with the external device 1618, an entity utilizing the external device 1618, an application or flow utilizing the external device 1618, or the like, before performing a change to the policy 1606. In certain embodiments, the policy manager circuit 1602 may reject the policy communication 1616 if the policy 1606 implied by the policy communication 1616 exceeds an authority associated with the external device 1618, and/or if the policy 1606 cannot be implemented (e.g., executing the policy 1606 would exceed the capability of the system 1600, such as a bandwidth of a network zone, an external communications limit, a memory storage limit, or the like). In certain embodiments, the policy manager circuit 1602 may partially implement the policy communication 1616 if the policy 1606 implied by the policy communication exceeds an authority associated with the external device 1618, and/or if the policy 1606 cannot be fully implemented. For example, the policy manager circuit 1602 may implement the authorized portions of the policy communication 1616, and/or implement portions of the policy communication 1616 that the system 1600 has capability to implement.
In certain embodiments, the policy manager circuit 1602 implements portions of the policy communication 1616, for example where a system capability would be exceeded by a full implementation, according to: a priority of associated end points, flows, applications, vehicle functions, etc. of the policy communication 1616 (e.g., implementing higher priority aspects until a limit is reached); and/or maximizing an implementation value of the policy communication 1616 (e.g., associating a value for each aspect according to an associated priority, importance, benefit description, etc. of the given aspects; for example where meeting a group of slightly lower priority aspects of the policy would exceed the value of meeting only a single higher priority aspect of the policy). An example policy manager circuit 1602 provides a policy notification 1620 to the external device 1618 in response to verifying the policy 1606. An example policy notification 1620 includes a confirmation that the policy 1606 is updated and/or stored according to the policy communication 1616. An example policy notification 1620 includes a notification that the policy 1606 has not been implemented (e.g., where the external device 1618 does not have authorization to implement the policy communication 1616). An example policy notification 1620 includes a reason for the rejection of the policy communication 1616 (e.g., a lack of authorization, lack of capability, etc.). An example policy notification 1620 includes one or more aspects of a partial implementation of the policy communication 1616, for example a description of which aspects of the policy communication 1616 have been implemented or rejected, and/or a reason for the partial implementation.
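The value-maximizing partial implementation described above, where a group of lower priority aspects may outweigh a single higher priority aspect under a capacity limit, resembles a budgeted selection problem. A greedy value-density sketch follows; the tuple layout and numbers are hypothetical illustrations, and a production system might use an exact optimization instead:

```python
def partially_implement(aspects, capacity):
    """Greedy sketch: select policy aspects by value density (value/cost)
    until a shared capacity (e.g., network bandwidth or an external data
    budget) is exhausted. `aspects` is a list of (name, value, cost) tuples."""
    chosen, used = [], 0
    for name, value, cost in sorted(aspects, key=lambda a: a[1] / a[2], reverse=True):
        if used + cost <= capacity:  # skip aspects that would exceed capacity
            chosen.append(name)
            used += cost
    return chosen
```

With a capacity of 8, two medium aspects worth 6 each (cost 4) are preferred over one high aspect worth 10 (cost 8), illustrating a group of lower priority aspects exceeding the value of a single higher priority aspect.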
In certain embodiments, the policy manager circuit 1602 may provide the policy notification 1620 to a separate external device (not shown), either instead of the policy notification 1620 to the first external device 1618, and/or in addition to the policy notification 1620 to the first external device 1618. In certain embodiments, the policy notifications 1620 to separate external devices may have the same information, or separate information. For example, the policy manager circuit 1602 may provide a simple policy notification 1620 to the requesting external device 1618 (e.g., a rejection of the policy communication 1616), and a more detailed policy notification 1620 to a separate external device (e.g., indicating authorizations that prevent the implementation of the policy communication 1616, capacities that prevent the implementation of the policy communication 1616, and/or details related to a partial implementation of the policy communication 1616). In certain embodiments, the policy manager circuit 1602 may provide a more detailed policy notification 1620 to the requesting external device 1618, and a simpler policy notification 1620 to the separate external device(s). In certain embodiments, the policy notification 1620 may include providing a prompt to a user interface of an external device (not shown), for example allowing an authorized external device, user, entity, or the like, to provide a permission to allow a policy 1606 update in response to the policy communication 1616. In a further example, the prompt to the user interface of the external device may include a prompt to one or more of a vehicle owner, a vehicle operator, a vehicle manufacturer, or an administrator related to the vehicle (e.g., a network administrator, fleet owner, fleet service operator, compliance personnel associated with the vehicle, etc.).
Without limitation to any other aspect of the present disclosure, example aspects of a policy 1606 include: a data collection parameter (e.g., data available to at least one network zone of the vehicle, such as data from any sensor, actuator, controller, and/or end point at least selectively couplable to a network zone and/or in communication with an end point of a network zone); a data collection permission value (e.g., a sampling or communication rate; a permission to provide the data value to a network zone; a permission to request the data value from a network zone; a resolution value associated with the data; a time lag permission associated with the data; a storage permission associated with the data such as an amount of data storage authorized, data expiration criteria, and aged data treatment parameters such as compression and/or summarization operations to be performed on aging data and/or to be performed if permitted storage becomes limited due to inability to communicate the stored data externally or competing storage priorities intervene with the planned available storage); a service publication permission value (e.g., an authorization to publish the availability of a service, which may include scheduled authorization to publish to some local communicating devices, external applications, and the like, but not to others; and/or an authorization to publish details of the available service such as data parameters provided, actuators available, etc.); a service subscription permission value (e.g., published services that are visible to the associated local communicating device; service details that are available to the associated local communicating device; and/or permissions to subscribe to services for the associated local communicating device); and/or an external communication permission value (e.g., data rates, associated parameters, external addresses allowed, APNs allowed, aggregate data communication permissions, etc.).
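A policy record combining the aspect types enumerated above might take a shape like the following. Every field name and value here is a hypothetical illustration of one possible encoding, not the disclosure's actual format:

```python
# Illustrative policy record for a single local communicating device.
example_policy = {
    "device": "backup_camera",
    "data_collection": {"parameter": "frame_stream", "rate_hz": 30},
    "data_permission": {"publish_to_zone": True, "storage_mb": 512,
                        "aged_data": "summarize"},
    "service_publication": {"visible_to": ["adas_controller"], "details": False},
    "service_subscription": {"allowed_services": ["gps_time"]},
    "external_communication": {"max_rate_kbps": 256, "monthly_gb": 32},
}

def external_rate_allowed(policy, requested_kbps):
    """Check a requested external data rate against the device's
    external communication permission value."""
    return requested_kbps <= policy["external_communication"]["max_rate_kbps"]
```

A policy manager could evaluate requests against such records per local communicating device, flow, or application.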
The policy 1606 includes any one or more of the foregoing associated with local communicating devices (e.g., end points, controllers, vehicle functions, flows, applications, etc.) and/or external devices (e.g., specific devices or device categories, entities, and/or applications). In certain embodiments, a given flow, application, or vehicle function may include aspects associated with a local communicating device, and other aspects associated with an external device (e.g., a route predictor application that utilizes local communicating devices combined with an external application such as a cloud based application or a web based application). Referencing FIG. 17, an example system 1700 is provided for providing extra-vehicle communication control, consistent with embodiments of the present disclosure. The example system includes a vehicle 102 having a first network zone 1612 and a second network zone 1614, where the second network zone 1614 is of a different type than the first network zone 1612. The example system 1700 includes a CND 108 interposed between the first network zone 1612 and the second network zone 1614. The CND 108 interposed between the network zones 1612, 1614 includes physical interposition (e.g., communications between the network zones 1612, 1614 pass through the CND 108, and/or through a device controlled by the CND 108 such as a CEG, CES, or other network interface circuit) and/or a logical interposition (e.g., where communications between the network zones 1612, 1614 pass through a device controlled by the CND 108, and/or where the CND 108 regulates communications between the network zones 1612, 1614, such as data values passed, configuration of the data values, data rates, up-sampling and/or down-sampling of data, encapsulation operations, frame inclusion and/or processing of passed communications, etc.).
The example system1700further includes a policy manager circuit1602that interprets a policy1606including an active diagnostic description1705, and a diagnostic execution circuit1702that provides a diagnostic command value1712to an end point of a network zone1612,1614in response to the active diagnostic description1705. The example system1700includes end points of the first network zone1612(end points1708) and end points of the second network zone1614(end points1710). In the example system1700, an end point1708,1710includes a device responsive to the diagnostic command value1712. Example and non-limiting diagnostic command values1712include: a command to collect one or more data values; a command to operate an actuator; and/or a command to operate a vehicle function (e.g., provide an engine speed, power level, or higher level function such as executing a regeneration mode, scheduled test operation, etc.). The example system1700allows for the execution of an active diagnostic test, requested by an external device, to be successfully performed regardless of the distribution of end points1708,1710throughout networks of the vehicle, including where an end point has moved between networks, and/or where a given diagnostic command value1712is utilized to perform active diagnostic tests across a range of vehicles having varying network configurations and distribution of end points1708,1710. ReferencingFIG.18, an example end point1708includes a device control circuit1802that interprets the diagnostic command value1712, and provides an actuator command value1804in response to the diagnostic command value1712. The example end point1708includes, or is associated with, an actuator1806responsive to the actuator command value1804. 
For example, a diagnostic command value1712may include a command such as “lock the driver door”, “close an exhaust gas recirculation valve”, “raise a motor temperature to 80° C.”, etc., allowing for an abstraction between the diagnostic command value1712and actuator1806responses to achieve the diagnostic command value1712. Additionally or alternatively, the diagnostic command value1712may be associated with a complex operation or series of operations, such as a full test sequence or the like, and accordingly numerous end points1708,1710and/or actuators1806throughout the system1700may be implicated by a single diagnostic command value1712. An example system1700further includes the diagnostic execution circuit1702determining whether a vehicle operating condition1720is consistent with the diagnostic command value1712before providing the diagnostic command value1712to the end point(s)1708,1710. For example, the diagnostic command value1712may include a diagnostic test that adjusts torque delivery of a prime mover of the vehicle, and associated vehicle operating conditions1720may include parameters such as: ensuring the vehicle is out-of-gear; ensuring the vehicle is not in a motive power mode; and/or ensuring the vehicle is in a selected test mode. In certain embodiments, the vehicle operating conditions1720for a given diagnostic command value1712may be set forth in the active diagnostic description1705, allowing for active control of vehicle operating conditions1720for test performance (e.g., target temperatures; diagnosing specific conditions such as vehicle launch, altitude operation, or the like) and/or extra-test considerations (e.g., operator or service personnel safety, fuel economy or emissions, impact to network communication rates, processing demand, and/or memory storage, etc.). 
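The abstraction between a diagnostic command value and the actuator responses that achieve it can be pictured as a lookup from an abstract command to one or more device-level actuator commands. The following is a minimal hypothetical sketch; the command names and mappings are invented for illustration:

```python
# Hypothetical device control sketch: an abstract diagnostic command value
# (cf. diagnostic command value 1712) resolves into one or more actuator
# command values (cf. actuator command value 1804). Mappings are illustrative.
COMMAND_MAP = {
    "lock_driver_door": [("driver_door_lock", "engage")],
    "close_egr_valve": [("egr_valve", 0.0)],
    "raise_motor_temp_80C": [("coolant_valve", "restrict"), ("motor_load", 0.4)],
}

def device_control(diagnostic_command: str):
    """Interpret a diagnostic command value and return actuator command values."""
    try:
        return COMMAND_MAP[diagnostic_command]
    except KeyError:
        raise ValueError(f"unknown diagnostic command: {diagnostic_command}")
```

Note that a single abstract command may fan out to several actuators, matching the observation above that one diagnostic command value1712can implicate numerous end points.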
In certain embodiments, the vehicle operating conditions1720for the given diagnostic command value1712may be enforced by another flow, application, vehicle function, or the like associated with the vehicle (e.g., torque commands cannot be adjusted separately from operator commands unless specified vehicle conditions1720are present, etc.). An example system1700includes the policy1606including a diagnostic execution condition1706, where the diagnostic execution circuit1702further determines whether the vehicle operating condition(s)1720are consistent with the diagnostic command value1712in response to the diagnostic execution condition(s)1706. An example system1700includes the diagnostic execution circuit1702further performing a diagnostic data collection operation in response to the active diagnostic description1705, and storing a diagnostic data set1714in response to the diagnostic data collection operation. For example, the active diagnostic description1705may include a number of data parameters to be collected, vehicle state conditions to be monitored, and/or parameter threshold values to be determined (e.g., a temperature above a threshold value). The stored diagnostic data set1714may include the collected data, vehicle state conditions determined based on the collected data, parameter threshold confirmation values determined based on the collected data, or combinations of these. The collected data may be from end points1708,1710responsive to the diagnostic command values1712(e.g., confirmation that actuators have responded to commands, diagnostic data or fault codes associated with responsive actuators, etc.), or from end points1708,1710apart from those responsive to commands (e.g., observation of a temperature, pressure, speed value, state confirmation, etc. that is not associated directly with the actuating end points1708,1710). 
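The gating of a diagnostic command on vehicle operating conditions, per a diagnostic execution condition from the policy, might be sketched as follows; all condition names are illustrative:

```python
# Hypothetical gate: the diagnostic command is only issued when the vehicle
# operating conditions (cf. 1720) satisfy the diagnostic execution
# conditions (cf. 1706) from the policy. Names are illustrative.
def conditions_consistent(operating_conditions: dict, execution_conditions: dict) -> bool:
    """Return True when every required condition is met by the vehicle state."""
    return all(operating_conditions.get(name) == required
               for name, required in execution_conditions.items())

def maybe_issue(command, operating_conditions, execution_conditions, send):
    """Send the command to the end point only if conditions are consistent."""
    if conditions_consistent(operating_conditions, execution_conditions):
        send(command)
        return True
    return False  # withhold the command; conditions not consistent
```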
An example diagnostic execution circuit1702performs a processing operation on data collected in the diagnostic data collection operation, and stores the diagnostic data set1714in response to the processing operation. For example, the stored diagnostic data set1714may include state information, virtual sensor information, negative information (e.g., only storing data associated with operations where a threshold is not met), up-sampled and/or down-sampled values for the data collected, and/or any other processing operations set forth throughout the present disclosure. Example and non-limiting processing operations for the data collected, or portions thereof, include: compressing the data collected; summarizing the data collected; operating a virtual sensor utilizing the data collected; determining a vehicle operating condition parameter in response to the data collected; determining the diagnostic data set in response to a determined vehicle operating parameter; performing an up-sampling operation on the data collected; and/or performing a down-sampling operation on the data collected. An example diagnostic execution circuit1702further communicates the diagnostic data set1714to an external device (e.g.,1618) in response to the diagnostic data collection operation. The external device receiving the diagnostic data set1714may be the same or a different external device than an external device supplying the active diagnostic description1705. An example diagnostic execution circuit1702further processes the collected data before communicating to the external device, which may include the initial processing to determine the stored diagnostic data set1714, and/or a further processing operation on the stored diagnostic data set1714before communicating to the external device. For example, the diagnostic execution circuit1702may store the diagnostic data set1714, and send a portion of the diagnostic data set1714(e.g., selected parameters, active diagnostic outcomes, etc.) 
to the external device. The example diagnostic execution circuit1702then performs selected operations such as: further processing the diagnostic data set1714before communicating it to the external device (e.g., to reduce external data communications, in response to selected data for transmission by the external device, etc.); communicating the diagnostic data set1714to the external device (e.g., responsive to availability of an external communication such as a WiFi connection, connected external device, or the like; and/or responsive to a request from the external device for all of the diagnostic data set1714); communicating selected additional portions of the diagnostic data set1714(e.g., data requested by the external device); keeping the diagnostic data set1714and/or a further processed form of the diagnostic data set1714stored for a selected time period; and/or deleting the diagnostic data set1714after the diagnostic execution operation (e.g., according to an outcome of the active diagnostic test, and/or according to a request of the external device). It can be seen that operations of system1700allow for execution of active diagnostic operations by an external device (e.g., a service tool, service application, cloud-based application, fleet service computing device, and/or third party application) that engages end points on a vehicle across a mixed network, allowing for diagnostic operations that do not require knowledge of the location and/or organization of end points on the vehicle, that can support multiple configurations of a vehicle, and/or can support changing configurations of the vehicle. Additionally or alternatively, operations of system1700allow for scheduled transmission of data, including reduction of data transmitted while achieving robust active diagnostic capability, and scheduled consumption of processing, memory, and inter-network communication resources on the vehicle while achieving the robust active diagnostic capability. 
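Two of the processing operations enumerated above, down-sampling and summarizing the collected data, can be sketched minimally as follows (assuming numeric samples; the functions are illustrative stand-ins for the circuit operations):

```python
# Hypothetical processing operations on collected diagnostic data
# (cf. diagnostic data set 1714): down-sampling and summarization.
def down_sample(samples, factor):
    """Keep every factor-th sample (a simple down-sampling operation)."""
    return samples[::factor]

def summarize(samples):
    """Summarize collected data as (min, max, mean)."""
    return (min(samples), max(samples), sum(samples) / len(samples))
```

Such reductions are one way the system trades stored or transmitted data volume against diagnostic detail, as described above.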
An example system1700includes a diagnostic verification circuit1704that determines a diagnostic confirmation value1716based on a response of the actuator to the diagnostic command value1712(e.g., confirming whether the actuator performed the commanded function, and/or across a group of actuators whether the vehicle has performed the active diagnostic according to the active diagnostic description1705). The example diagnostic verification circuit1704stores the diagnostic confirmation value1716(e.g., as a part of the diagnostic data set1714) and/or communicates the diagnostic confirmation value1716to an external device. In certain embodiments, the diagnostic verification circuit1704adjusts storage and/or communication of the diagnostic data set1714in response to the diagnostic confirmation value1716—for example ensuring that the diagnostic data set1714is related to a performance of the active diagnostic. In certain embodiments, the diagnostic execution circuit1702may store all or a portion of the diagnostic data set1714as a rolling buffer of data, saving a selected portion of the diagnostic data set1714in response to the diagnostic verification circuit1704providing the diagnostic confirmation value1716(e.g., where a diagnostic has a timed value or actuator position as a part of the diagnostic execution, allowing the diagnostic to be determined complete when the timer or other accumulating condition is completed). An example active diagnostic description1705includes a target device description1718(e.g., a fueling actuator, engine controller, door actuator, mirror position adjustment actuator, etc.) that does not identify which network zone1612,1614that an end point corresponding to the target device description1718is positioned on. 
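The rolling-buffer arrangement described above, where a window of collected data is retained and a selected portion is saved once the diagnostic confirmation value arrives, might look like the following hypothetical sketch:

```python
# Hypothetical rolling-buffer sketch (cf. the diagnostic data set 1714 kept
# as a rolling buffer, saved when the diagnostic confirmation value 1716 is
# provided). Class and method names are illustrative.
from collections import deque

class RollingDiagnosticBuffer:
    def __init__(self, capacity):
        self._buffer = deque(maxlen=capacity)  # oldest samples fall off
        self.saved = None

    def record(self, sample):
        """Continuously record collected data into the rolling window."""
        self._buffer.append(sample)

    def confirm(self):
        """On a diagnostic confirmation, freeze and save the buffered window."""
        self.saved = list(self._buffer)
        return self.saved
```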
The example system includes a configuration circuit1604that determines a network address value1722for the end point in response to the target device description1718(e.g., a port number of an ethernet network, a message ID for a CAN network, etc.), and the diagnostic execution circuit1702provides the diagnostic command value1712to the end point further in response to the network address value1722. For example, the target device description1718may include a standardized description for the end point (e.g., engine speed, ambient temperature, passenger seat occupancy sensor, etc.), and the configuration circuit1604may access a configuration table relating the standardized description to the local network address for the intended component. Additionally or alternatively, the target device description1718may have a description that matches a baseline product (e.g., a 2020 LX version of a given vehicle), a description that matches an original version of the vehicle (e.g., as the vehicle was configured after manufacture), and/or a description that matches an earlier version of the vehicle (e.g., as the vehicle was configured as of a certain date). In certain embodiments, the configuration table or other information utilized by the configuration circuit1604to determine the network address value1722may be one or more configuration file(s) maintained by a network interface circuit, a configuration file maintained by a policy manager circuit, a configuration file maintained by the CND, and/or a configuration file maintained as a part of the policy1606. An example active diagnostic description1705includes a target device description1718(e.g., a fueling actuator, engine controller, door actuator, mirror position adjustment actuator, etc.) 
that identifies the end point is on one network zone (e.g., the first network zone1612), and the configuration circuit1604determines the end point is on another network zone (e.g., the second network zone1614) in response to the target device description1718. For example, the configuration circuit1604may determine that the target device description1718is pointing to the wrong device, or a non-existent device, and/or may further determine that the external device is utilizing a previous, different, and/or standardized configuration file to provide the target device description1718, where the configuration circuit1604utilizes a local configuration file to determine the proper network address value and/or network zone for the end point intended by the target device description1718. In certain embodiments, the configuration circuit1604determines the proper network address value and/or network zone for the end point utilizing other information from the target device description1718, such as parameter names, intended functions, or the like. Similarly, the configuration circuit1604can correct the target device description1718indicating an incorrect address other than the wrong network zone, such as an address on a first network zone, where the correct address is another address on the first network zone. The operations of the configuration circuit1604allow for simplification of active diagnostic definition (e.g., external devices do not require system-specific information about end point locations and network distribution); adaptation of diagnostic execution as end points and/or local communicating devices of the vehicle are moved and/or upgraded; and/or allow for a layer of abstraction between external devices and the configuration of the vehicle. 
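The configuration lookup described above, resolving a standardized target device description to a local network address while falling back to earlier or baseline configuration tables when the supplied description is stale, can be sketched as follows; the table contents, zone names, and addresses are illustrative:

```python
# Hypothetical configuration-table lookup (cf. configuration circuit 1604):
# a standardized target device description is resolved to a local network
# address, consulting the current vehicle configuration first and falling
# back to older/baseline tables. Table contents are illustrative.
CONFIG_TABLES = [
    # consulted in order: current configuration, then original/baseline
    {"engine_speed": ("zone2_ethernet", "port 7")},
    {"engine_speed": ("zone1_can", "msg 0x0C1"), "door_lock": ("zone1_can", "msg 0x2A0")},
]

def resolve_address(target_device_description: str):
    """Return (network_zone, local_address) for the described end point."""
    for table in CONFIG_TABLES:
        if target_device_description in table:
            return table[target_device_description]
    raise LookupError(f"no address for {target_device_description!r}")
```

In this sketch, an external device asking for "engine_speed" receives the current address even if its own (older) configuration data would have pointed at the CAN zone, mirroring the correction behavior described above.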
The simplification and/or abstraction of the active diagnostic definition from the vehicle network configuration allow for reduced cost of active diagnostic development and roll-out, and increased user base for active diagnostic development (e.g., with enhanced protection of confidential information such as vehicle configuration information and/or data compartmentalization) which can enhance overall diagnostic capability, enhance vehicle operator experience, and increase competition and implied competition for active diagnostic development and implementation. ReferencingFIG.19, an example system1900includes a vehicle102having a first legacy network zone1902and a second high capability network zone1904. For example, the first legacy network zone1902may be a first network type, such as a CAN bus, and the second high capability network zone1904may be a second network type, such as an ethernet network. In certain embodiments, the second high capability network zone1904may be of the same type as the first legacy network zone1902, but may be a higher capability version such as a high speed CAN bus, a higher speed ethernet network, or the like. In certain embodiments, a system1900such as that depicted inFIG.19may be present where a vehicle is migrating to an upgraded network type, such as during a transition over a number of model years of the vehicles, as new components are added to a vehicle that utilize a higher capability network, and the like. 
The example system1900includes CND108interposed between the first legacy network zone1902and the second high capability network zone1904, where the CND108includes a policy manager circuit1602that interprets a policy1606including an external communication value1906, and an external communication control circuit1908that regulates communications between an external device1618and end points of the first legacy network zone1902and/or end points of the second high capability network zone1904in response to the external communication value1906. For example, external communications between end points of the first legacy network zone1902may be limited to reduce traffic on the first legacy network zone1902that is created by communications to and from the external device1618, and/or due to a sensitivity of end points on the first legacy network zone1902(e.g., where vehicle controls and/or proprietary information are maintained on the first legacy network zone1902, and/or where security protocols associated with the first legacy network zone1902are more limited than those available with the second high capability network zone1904). In another example, external communications between end points of the second high capability network zone1904may be limited to reduce external transmissions (e.g., through a transceiver of the vehicle, utilizing a particular data provider, etc.) 
from the vehicle (e.g., where higher capability devices on the second high capability network zone1904may have the capability to generate high data rates), due to the potentially large number of devices on the second high capability network zone1904, including devices that may be recently added to the vehicle (and accordingly do not have a long history of known usage, security vetting, and/or vehicle operations impact data) and/or devices that may be added by entities that are not as closely controlled as providers of devices on the first legacy network zone1902(e.g., devices that may be provided by third parties, that relate to recently developed vehicle capabilities, and/or that are not related to core vehicle functions, such as entertainment providers). The provided reasons for limiting external traffic between end points on various networks and external devices are non-limiting and provided for illustration, but the external communication control circuit1908may regulate communications between end points of any network zone and any external device for any reason. An example system1900includes the external communication value1906including an active diagnostic description—for example diagnostic operations and/or data collection to be performed as a diagnostic operation, and which may involve commands to, data collected from, and/or communications with any end point on any network zone of the vehicle. An example system1900includes the external communication value1906including an active test description—for example a test operation (e.g., a test of any end point, actuator, sensor, flow, application, vehicle function, and/or vehicle controller on the vehicle), and which may involve commands to, data collected from, and/or communications with any end point on any network zone of the vehicle. 
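The per-zone limits on external communications described above might be sketched as a simple admission check against policy rules; the zone names, operations, and rate limits below are illustrative, not taken from the disclosure:

```python
# Hypothetical external-communication gate (cf. external communication
# control circuit 1908 and external communication value 1906): a request
# from an external device is admitted or refused per network zone according
# to policy rules. Rule contents are illustrative.
POLICY_RULES = {
    "legacy_zone": {"allowed_ops": {"read"}, "max_rate_bps": 10_000},
    "high_capability_zone": {"allowed_ops": {"read", "command"}, "max_rate_bps": 1_000_000},
}

def regulate(zone: str, operation: str, rate_bps: int) -> bool:
    """Return True when the requested external communication is permitted."""
    rules = POLICY_RULES.get(zone)
    if rules is None:
        return False  # unknown zones are refused by default
    return operation in rules["allowed_ops"] and rate_bps <= rules["max_rate_bps"]
```

In this sketch the legacy zone only admits low-rate reads, while the high capability zone also admits commands at higher rates, echoing the rationale above for limiting traffic on sensitive or constrained zones.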
An example system1900includes the external communication value1906including a data request value (e.g., collection of a data parameter from any end point, and/or including processing of the data parameter) and/or a vehicle command value (e.g., command of any actuator, display, controller, etc. with any end point). Example and non-limiting external device(s)1618include a service tool, a manufacturer tool, a dealer tool, and/or a cloud based tool. An example external communication value1906includes a target device description including an identification of a target end point (e.g., a network zone, local address, sensor name, actuator name, data parameter name, etc.), where the external communication control circuit1908determines that the end point has a different configuration (e.g., a different network zone, local address, sensor name, actuator name, data parameter name, etc.) than the identification provided in the target device description. In certain embodiments, the external communication control circuit1908may include or utilize a configuration circuit1604(e.g., referenceFIGS.16,17and the related descriptions) to determine the proper identification for the target end point. An example external communication value1906does not include an identification of a target end point, and the external communication control circuit1908provides a proper identification for the target end point based on the external communication value1906(again referencingFIGS.16,17, and the related descriptions, including operations of the configuration circuit1604). It can be seen that the operations of system1900allow for external devices1618to operate across a number of vehicle configurations, without specific knowledge of end point locations, parameter names, local addresses, or the like, to implement active diagnostics, testing, and data collection. 
The vehicle configurations may represent changes of a vehicle after servicing, replacement of components (e.g., end points), upgrading of components and/or executable instructions stored on a computer readable medium, changes over the course of model years, and/or changes to a vehicle due to campaigns, upgrades, and/or remanufacturing. ReferencingFIG.20, an example apparatus2000is depicted for providing an external network view for one or more networks of a vehicle having a mixed network. The example apparatus2000may be utilized in conjunction with any vehicle described throughout the present disclosure, and aspects of the apparatus2000may be positioned on the vehicle, on an external device at least selectively in communication with the vehicle, on a cloud server, and/or on a web application. The example apparatus2000includes a vehicle communication circuit2002that interprets vehicle communications data2016, which may be data collected from the vehicle and/or data to be provided to the vehicle. The example apparatus2000further includes a visualization circuit2004that generates visualization data2018in response to the vehicle communications data2016. Example visualization data2018includes a first network identifier (e.g., identifying a network zone, end point, or other network identifier for corresponding data) and a second network identifier. Example visualization data2018can include network identifiers corresponding to each of at least two distinct network zones of the vehicle, and/or each of at least two distinct end points of the vehicle. An example network identifier includes an ethernet based protocol and/or a CAN based protocol. Another example network identifier includes one or more of a cellular based protocol, a WiFi based protocol, and/or a Bluetooth based protocol. 
The example apparatus2000further includes a display interface circuit2006that transmits the visualization data2018, providing stored visualization data2022and/or providing the visualization data2018to an electronic display2012. The transmission of the visualization data2018may include any one or more of the following operations: transmitting the visualization data2018from the vehicle to a tool; transmitting the visualization data2018from the vehicle to a cloud server; transmitting the visualization data2018from the vehicle to a display device (e.g., an electronic display2012such as a vehicle display, a service tool, an external computing device such as an operator device, a service device, a manufacturer device, a fleet owner or service device, a vehicle communications administrator device, and/or a third-party device, etc.); transmitting the visualization data2018from a cloud server to a tool; transmitting the visualization data2018from a cloud server to a display device; and/or transmitting the visualization data2018from a first cloud server to a second cloud server (e.g., allowing separate storage criteria for the stored visualization data2022between the cloud servers, including anonymization of data, aggregation of data, compartmentalization of aspects of the data, etc.). In certain embodiments, transmission of the visualization data2018may include transmitting the visualization data2018to an on-vehicle storage (e.g., a dedicated memory space available for the stored visualization data2022for later access, requested access, and/or later transmission to an off-vehicle location), and/or to a closely coupled storage (e.g., a USB device coupled to the vehicle, to a mobile device such as an operator's mobile phone, and/or to a computing device in close-range wireless communication such as a WiFi or Bluetooth connection). 
Additionally or alternatively, the transmission of the visualization data2018may include any one or more of the following operations: storing the visualization data2018on a shared storage of the vehicle; storing the visualization data2018on a shared storage of the vehicle, and selectively transmitting the stored visualization data2022to an external device; transmitting the visualization data2018to a secured cloud storage; and/or transmitting the visualization data2018to a secured cloud storage, and providing selected access to the stored visualization data2022to a monitoring tool, an external application, a service tool, and/or a user device. An example apparatus2000includes an electronic display2012that interprets and displays the visualization data2018. An example electronic display2012accesses the stored visualization data2022and displays at least a portion thereof, and/or a processed visualization element determined from the visualization data2018and/or stored visualization data2022. Example visualization data2018includes topology data corresponding to a network topology of the first network and/or second network (e.g., depicting the network(s) and/or selected end points associated with each of the networks). The topology data may include a visual representation, a table listing, or other visualization of the topology data. An example visualization circuit2004is further structured to include a portion of the meta-data of the vehicle communications data2016in the visualization data2018. Example and non-limiting meta-data of the vehicle communications data2016includes data such as a source address, destination address, time stamp, vehicle operating condition or state condition, fault code information, status parameters for end points, flows, applications, and/or vehicle functions, or the like. 
In certain further embodiments, meta-data of the vehicle communications data2016includes information relating to the trajectory of the vehicle communications data2016through the vehicle network, for example frame data related to an originating communication (e.g., frame data from a communication on a first network2008, where communication is encapsulated and passed to the vehicle communication circuit2002from the second network2010), processing information for a payload and/or frame of the vehicle communications data2016(e.g., processing operations performed on the payload and/or the frame of the communication, for example allowing reverse calculation of the processing, an up-sampling and/or down-sampling description, or the like). In certain embodiments, the meta-data may have predetermined values, for example a first data value associated with a first processing operation (e.g., filtering, a resolution change, etc.), a second data value associated with a second processing operation, whereby the meta-data communicates the processing operation (or other operations) according to the value of selected portions (e.g., specified bits) of the vehicle communications data2016. An example apparatus2000includes a monitoring input circuit2014that interprets a data filtering value2020(e.g., a description of filtering operations, such as: a selection of certain end points and/or local communicating devices; a selection of certain network zones; communications meeting specified criteria; a down-sampling description for selected communications; communications relating to off-nominal conditions such as end points, flows, vehicle functions, and/or applications having an associated fault value, and/or communications relating to end points having lost packets, high or low expected communication rates, etc.). 
Example and non-limiting data filtering values2020include a network address association, a vehicle control device association, a vehicle system association, a network protocol type, an end point identifier, a data type, an application association, and/or a flow association. Example and non-limiting data filtering values2020include a reference to a system, such as an engine system, a steering system, a braking system, a fuel system, a prime mover system, an anti-lock braking system, a traction control system, and/or a drivetrain control system. Still further example and non-limiting data filtering values2020include a reference to a system such as a security system, a lighting system, a safety system, an environmental control system, an ADAS, and/or an infotainment system. The example apparatus2000includes the visualization circuit2004filtering, based at least in part on the data filtering value2020, portions of the vehicle communications data2016to generate the visualization data2018. In certain embodiments, the data filtering value2020may be provided in a policy1606, communicated from an external device1618, and/or received through a user interface operated (e.g., by the display interface circuit2006) on an electronic display2012, external tool2014, and/or a user device such as a device of a vehicle owner or operator, service personnel, manufacturer, fleet owner, fleet service personnel, vehicle communications administrator, and/or an interaction with a cloud-based or web-based application. ReferencingFIG.22, an example user interface to retrieve and filter vehicle communications data2016is depicted. The example user interface may be implemented on an external device, web application, cloud-based application, external tool, or the like. In the example ofFIG.22, “Switch0” corresponds to a first network zone, and “Switch1” corresponds to a second network zone, allowing a user to select end points from each network zone that are to be monitored. 
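The application of a data filtering value to vehicle communications data before generating visualization data might be sketched as follows; the record fields, zone names, and filter criteria are illustrative:

```python
# Hypothetical filtering sketch (cf. data filtering value 2020 applied by
# the visualization circuit 2004): communications records are filtered by
# criteria such as network zone and end point before visualization data is
# generated. Record fields are illustrative.
def apply_filter(records, data_filter):
    """Keep records matching every criterion present in the filter."""
    def matches(record):
        return all(record.get(key) == value for key, value in data_filter.items())
    return [record for record in records if matches(record)]

records = [
    {"zone": "Switch0", "end_point": "camera_1", "bytes": 1200},
    {"zone": "Switch1", "end_point": "engine_controller", "bytes": 64},
]
```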
In the example, filter selections allow for reduction from monitored end points (e.g., selections on the left side) according to filtering criteria, such as including only selected end points, flows, applications, etc. (selections on the right side). In the example ofFIG.22, monitored parameters may be further down-sampled (selections at the bottom). Further in the example ofFIG.22, a selected mirroring timeout may be set (e.g., where monitoring is performed using port mirroring). The example user interface ofFIG.22illustrates certain aspects of the network monitoring and filtering operations described herein, and is not limiting to the present disclosure. An example apparatus2000includes the visualization data2018including a traffic monitoring visualization. For example, a traffic monitoring visualization can provide a visualization corresponding to one or more of: an end point on one of the first network or the second network (e.g., showing incoming and/or outgoing traffic from the end point); a vehicle system; an application; a flow; a vehicle controller; a vehicle function; a selected one of the first network or the second network; or a port of one of the first network or the second network. An example visualization data2018includes a port counter visualization, for example displaying messaging traffic corresponding to a port (a physical port or a logical port) of one of the network zones. An example visualization data2018includes an end point data flow monitoring visualization, for example displaying messaging traffic corresponding to an end point of one of the network zones. ReferencingFIG.23, an example visualization data2018is depicted including a traffic monitoring visualization. The example ofFIG.23depicts network traffic (e.g., messages, bits, etc.) for a first end point2302and a second end point2304. 
The example ofFIG.23is a non-limiting example, and traffic monitoring may be depicted in any manner, and may be organized according to any grouping, such as per-network, per-port, all traffic associated with an application, all traffic associated with a flow, all traffic associated with a vehicle function, all traffic associated with a service group, etc. An example apparatus2000includes the visualization data including a network activity profile, where the network activity profile is provided for one or more of: an end point on one of the first network or the second network; a vehicle system; an application; a flow; a vehicle controller; a vehicle function; a selected network zone; and/or a selected port of one of the network zones. ReferencingFIG.24, an example visualization data2018is depicted including a network activity profile. The example ofFIG.24depicts network bandwidth utilization for a selected network zone, with a number of utilization plots2402,2404,2406,2408, each associated with an end point of the selected network zone. ReferencingFIG.25, an example visualization data2018is depicted including a network activity profile for a selected network zone. The example ofFIG.25depicts a total activity for the network zone at the top, a network bandwidth utilization for particular devices (e.g., ISL0, ISL1) in the middle, and network bandwidth utilization for a vehicle controller (e.g., a Heads-up display and head unit) at the bottom, with the network bandwidth utilization for the vehicle controller further depicting utilization for a number of specific devices broken out (e.g., various cameras, in the example). The examples ofFIGS.24and25are non-limiting, and network activity profile data may be determined and displayed in any manner, and further may be grouped and/or sub-grouped in any manner, including by end point, flow, application, vehicle function, vehicle controller, etc. 
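A network activity profile of the kind depicted inFIGS.24and25can be approximated with a simple aggregation over traffic samples. The sample format (end point, time, bytes) and the grouping key are assumptions for illustration:

```python
from collections import defaultdict

# Minimal sketch of building a network activity profile; the sample format
# and device names are illustrative assumptions, not from the disclosure.
def activity_profile(samples, group_key="end_point"):
    """Aggregate traffic samples into total bytes per group (e.g., per end point)."""
    totals = defaultdict(int)
    for sample in samples:
        totals[sample[group_key]] += sample["bytes"]
    return dict(totals)

samples = [
    {"end_point": "HeadsUpDisplay", "t": 0, "bytes": 1200},
    {"end_point": "Camera1", "t": 0, "bytes": 800},
    {"end_point": "HeadsUpDisplay", "t": 1, "bytes": 300},
]
profile = activity_profile(samples)
print(profile)  # {'HeadsUpDisplay': 1500, 'Camera1': 800}
```

Changing `group_key` to a flow, application, or vehicle function field yields the alternative groupings and sub-groupings described above.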
An example vehicle communication circuit2002interprets the vehicle communications data2016by performing one or more operations such as: interpreting the vehicle communications data2016from a policy1606stored on a memory positioned on the vehicle and communicatively coupled to the vehicle communication circuit2002; receiving the vehicle communications data2016from a service tool communicatively coupled to the vehicle communication circuit2002; receiving the vehicle communications data2016from an application communicatively coupled to the vehicle communication circuit2002; or receiving the vehicle communications data2016from a monitoring tool communicatively coupled to the vehicle communication circuit2002. In certain embodiments, retrieving vehicle communications data2016including traffic monitoring, network activity, and/or messages corresponding to an end point of a network zone and/or corresponding to a port of a network zone includes mirroring traffic from a first port of a network zone to a second port of the network zone, and monitoring the second port of the network zone to determine the vehicle communications data2016. For example, a first port of the second network zone2010may correspond to an end point to be monitored, where the operation to retrieve the vehicle communications data2016includes an operation to mirror the first port of the second network zone2010to a second port of the second network zone2010(e.g., where the vehicle communications circuit2022and/or a monitoring tool such as external tool2014are communicatively coupled to the second port), and monitoring the second port of the second network zone2010to determine the vehicle communications data2016. ReferencingFIG.26, an example visualization data2018is depicted including data flows between selected network participants (e.g., end points, flows, applications, vehicle controllers, etc.). 
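The port mirroring operation described above can be sketched with a simplified switch model, where frames arriving on a monitored port are copied to a mirror port to which a monitoring tool is coupled. The class and port names are illustrative assumptions:

```python
# Minimal sketch of port mirroring for traffic monitoring, assuming a
# simplified switch model; class and method names are illustrative.
class NetworkZoneSwitch:
    def __init__(self):
        self.mirrors = {}    # source port -> mirror port
        self.delivered = {}  # port -> list of frames seen on that port

    def set_mirror(self, source_port, mirror_port):
        self.mirrors[source_port] = mirror_port

    def receive(self, port, frame):
        self.delivered.setdefault(port, []).append(frame)
        mirror = self.mirrors.get(port)
        if mirror is not None:
            # Copy traffic to the mirror port, where a monitoring tool listens.
            self.delivered.setdefault(mirror, []).append(frame)

switch = NetworkZoneSwitch()
switch.set_mirror("port1", "port2")       # mirror the monitored end point's port
switch.receive("port1", b"brake status")
switch.receive("port3", b"unrelated")
# The monitoring tool on port2 sees only the mirrored end point's traffic.
vehicle_communications_data = switch.delivered.get("port2", [])
print(vehicle_communications_data)  # [b'brake status']
```

A mirroring timeout, as in the user interface ofFIG.22, would simply remove the entry from `mirrors` after a configured interval.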
The example ofFIG.26depicts data flows between selected end points, in the example depicting data flows with the “EP1” (e.g., an end point, such as a head unit) and the other end points (e.g., EP3, EP5, EP10, in the example, such as an ADAS related component, a parking controller, etc.). The example ofFIG.26allows monitoring of the network to determine if expected data flows are occurring, if off-nominal data flow is occurring, and the like. ReferencingFIG.27, an example visualization data2018is depicted showing total network activity for a selected network zone (at the top), and data pathing from a selected end point to other end points (the data path at the bottom) in the system. In the example, user interface elements may be provided, for example allowing selection of a time (top depiction) that is utilized for the data pathing depiction at the bottom, allowing for selection of the target end point (e.g., EP1 at the left), and/or whether transmission, receipt, or both, are depicted. In certain embodiments, the visualization data2018may be presented as a user interface, for example allowing a user to select components and have the related data flows depicted. It can be seen that a visualization such as those depicted inFIGS.26and27can be utilized to confirm expected operations, to diagnose issues (e.g., degraded operation of a component, a network issue), and/or to detect off-nominal operating conditions such as those indicated by communication between components that more substantially communicate during certain off-nominal operating conditions. Additionally or alternatively, a visualization such as that depicted inFIG.26can be utilized to: improve network topology design, hardware selection, and/or protocol selection; to consolidate applications, flows, vehicle functions, etc. on vehicle controllers (e.g., to reduce network traffic requirements); and/or to identify potential redundant or unnecessary network communications. 
ReferencingFIG.21, an example local address table2100is depicted, schematically depicting configuration information consistent with various embodiments of the present disclosure. The example local address table2100may be part of the policy1606and/or a configuration file (e.g., accessible in whole or part by interface circuit(s) and/or a configuration circuit). The local address table2100may be provided as a data structure in a memory location accessible to the interface circuit(s), configuration circuit(s), and/or other implementing components described throughout the present disclosure. The local address table2100may be provided as a distributed data structure, with portions of the local address table2100provided as a data structure in memory location(s) accessible to the implementing components. The example local address table2100is depicted schematically to provide an illustration of the type of local address information that may be utilized to implement aspects of the present disclosure, but the details of the stored information and the organization of data structures implementing the local address table2100may be configured according to the implemented embodiments. The example local address table2100includes an end point identifier2102, which may be a local identifier of end points present in the system. In a further example, non-local end point identifiers (not shown) may further be included, for example to allow external devices to reference end points using an industry-standard terminology, or other selected terminology. The example local address table2100includes a network zone identifier2104, for example indicating which network zone the end point is considered to be a part of. The example address table2100further includes a local address value2106, for example indicating how the respective end point is addressed on the appropriate network zone. In certain embodiments, the local address value2106may be a TCP/IP address, a port number, or other identifier. 
In certain embodiments, for example on a logical bus architecture such as a CAN bus, the local address value2106may include a message identifier, such as a value included in a message that indicates the intended recipient (or the source) of messages to or from the end point. The example local address table2100includes an external address value2108, which may, for example, include an address utilized to identify the end point by external devices. The utilization of the external address value2108allows for external devices to abstract knowledge of the end point, including local addressing and/or associated network zones, from operations to utilize and/or collect data from the corresponding end points. It can be seen that further information may be included in a local address table2100, such as additional external address values (e.g., to allow for multiple external addresses to associate with a given end point of the system), and/or the inclusion of one or more additional non-local end point identifiers (e.g., to allow for multiple industry standards, proprietary nomenclature, informal nomenclature, etc., to successfully associate with a given end point of the system). In certain embodiments, one or more of the external addresses2108and/or non-local end point identifiers may further be associated with versions (e.g., interface versions, vehicle model descriptions, etc.), allowing for the implementing components using the local address table2100to interpret data commands and/or requests from external applications, algorithms, etc. to properly associate a desired end point to the data command and/or request, as changes occur within the vehicle (e.g., end points move between network zones and/or addresses) or external to the vehicle (e.g., external applications are updated for updated vehicle configurations that are no longer applicable to the specific vehicle of the system). 
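The local address table fields described above (end point identifier2102, network zone identifier2104, local address value2106, external address value2108) can be modeled as a simple record, with a lookup that lets external devices address an end point without knowledge of its local addressing or network zone. The field values below are hypothetical placeholders:

```python
from dataclasses import dataclass

# Schematic model of a local address table entry; the disclosure leaves the
# concrete data structure open, so these fields are one possible layout.
@dataclass(frozen=True)
class AddressEntry:
    end_point_id: str      # local end point identifier (2102)
    network_zone: str      # network zone identifier (2104)
    local_address: str     # local address value (2106): IP, port, or CAN message ID
    external_address: str  # external address value (2108)

table = [
    AddressEntry("EP1", "zone1", "10.0.0.5", "vehicle/head_unit"),
    AddressEntry("EP2", "zone2", "0x18FEF100", "vehicle/engine_speed"),
]

def resolve_external(table, external_address):
    """Map an external address to (zone, local address), abstracting local
    addressing away from the external requester."""
    for entry in table:
        if entry.external_address == external_address:
            return entry.network_zone, entry.local_address
    return None

print(resolve_external(table, "vehicle/engine_speed"))  # ('zone2', '0x18FEF100')
```

Because the lookup is keyed on the external address, an end point can move between network zones or change local addresses without breaking external references, matching the abstraction described above.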
It can be further seen that the utilization of the local address table2100allows for multiple addressing support for end points of the vehicle, for example providing both IPv4 and IPv6 addressing for end points of the vehicle. In certain embodiments, the local address table2100can be expanded, or alternatively a separate data structure maintained, allowing for association of end points with applications, flows, vehicle functions, vehicle controllers, APNs, external data routing paths, network zone trajectories, or the like. Accordingly, a given application such as “route management” can be associated with particular end points of the vehicle, and the associations can survive through a movement of the end point (e.g., from one network zone to another network zone). The utilization of a local address table2100, and/or extended or alternate data structures as described herein, allows for configuration of priorities, permissions, subscription management (both publishing of services and subscribing to services), and/or any other communication regulating activities as set forth herein. In certain embodiments, the local address table2100can be expanded, or alternatively a separate data structure maintained, allowing for addresses of external devices to be configured according to end points, applications, flows, vehicle functions, and/or vehicle controllers. For example, a given vehicle function may be allowed access to a given external resource (e.g., a routing function that accesses an external resource having maps, traffic reporting, etc.), with an associated external address associated with the vehicle function that provides access to the external resource. 
In the example, other vehicle functions may not be allowed access to the given external resource, with an associated external address associated with those vehicle functions (and/or with a lacking association for those other vehicle functions, depending upon the implementation), such that when those other vehicle functions request access to the external resource, a default address, protected space, null communication, or other selected behavior is instead implemented. Accordingly, a first application of the vehicle requesting access to an external resource, such as https://www.google.com, may receive a typical expected access to the external IP address corresponding to the Google website, where a second application of the vehicle requesting access to the same external resource may receive an access denied indication, a default external resource indication (e.g., a cloud-based resource in a protected space indicating the requested resource is not permitted), or other selected response from the system. Accordingly, the local address table2100, and/or an expanded, extended, or alternate version thereof, may be utilized as a local DNS and/or an external DNS. In certain embodiments, for example where access to an external resource is requested, where the external DNS does not have an address for the resource, and where a permission to the requestor (e.g., end point, application, flow, vehicle function, and/or vehicle controller) is not denied to access the external resource, an off-vehicle external DNS (e.g., on a cloud server, from an internet provider, etc.) may be accessed to provide the external address. In certain embodiments, the on-vehicle external DNS may be updated based on an address retrieved from the off-vehicle external DNS. ReferencingFIG.28, an example system2800is depicted including a vehicle102having a first network zone1612and a second network zone1614, where the first network zone1612and the second network zone1614are of different types. 
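The permission-aware local/external DNS behavior described above can be sketched as follows. The cached addresses, permission sets, deny sentinel, and fallback resolver are all hypothetical placeholders, not values from the disclosure:

```python
# Minimal sketch of a permission-aware on-vehicle external DNS with an
# off-vehicle fallback; all table contents here are illustrative assumptions.
local_dns = {"https://www.google.com": "142.250.80.36"}  # illustrative cached entry
permissions = {
    "nav_app": {"https://www.google.com", "https://maps.example.com"},
    "infotainment_app": set(),  # no external resources permitted
}

def off_vehicle_dns(resource):
    # Stand-in for querying a cloud server / internet provider DNS.
    return {"https://maps.example.com": "203.0.113.7"}.get(resource)

def resolve(requestor, resource):
    if resource not in permissions.get(requestor, set()):
        return "DENIED"  # or a default address / protected-space resource
    address = local_dns.get(resource)
    if address is None:
        address = off_vehicle_dns(resource)  # fall back to off-vehicle DNS
        if address is not None:
            local_dns[resource] = address    # update the on-vehicle DNS
    return address

print(resolve("nav_app", "https://maps.example.com"))         # fetched off-vehicle, then cached
print(resolve("infotainment_app", "https://www.google.com"))  # DENIED
```

Note that the permission check runs before any resolution, so a non-permitted requestor never triggers the off-vehicle lookup, consistent with the "not denied" condition described above.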
The example ofFIG.28includes a CND108interposed between the network zones1612,1614. The example CND108includes a policy manager circuit1602that interprets a policy1606including a network regulation description, a configuration circuit1604that configures a first network interface circuit1608in response to the network regulation description, where the first network interface circuit1608regulates communications between end points of the first network zone1612and end points of the second network zone1614. Additionally or alternatively, the configuration circuit1604configures a gatekeeper interface circuit2802in response to the network regulation description, where the gatekeeper interface circuit2802regulates communications between end points of at least one of the network zones1612,1614and external communication portal(s) and/or the external device1618. An example first network interface circuit1608includes a CEG, where the first network zone1612is not a primary network (e.g., the first network zone1612is a CAN network, and the second network zone1614is an ethernet network), and where the first network interface circuit1608is communicatively coupled to a port of the second network zone1614to send and receive communications that are passed between the network zones1612,1614. ReferencingFIG.29, an example network regulation description2904includes a data request permission description2906including data values2910associated with data requestors2908(e.g., end points each on one of the network zones1612,1614). An example first network interface circuit1608regulates communications between end points of the first network zone1612and the second network zone1614in response to the data request permission description2906, for example limiting associated data requestors2908to authorized data values2910, and/or preventing associated data requestors2908from accessing unauthorized data values2910. 
In certain embodiments, the first network interface circuit1608further regulates communications between end points of the first network zone1612(e.g., from a first end point to a second end point, both on the first network zone1612) in response to the data request permission description2906. An example system2800further includes the configuration circuit1604configuring the second network interface circuit1610in response to the network regulation description, where the second network interface circuit1610regulates communications of end points of the second network zone1614. Again referencingFIG.29, an example second network interface circuit1610regulates communications between end points of the second network zone1614and the first network zone1612in response to the data request permission description2906, for example limiting associated data requestors2908to authorized data values2910, and/or preventing associated data requestors2908from accessing unauthorized data values2910. In certain embodiments, the second network interface circuit1610further regulates communications between end points of the second network zone1614(e.g., from a first end point to a second end point, both on the second network zone1614) in response to the data request permission description2906. An example system2800further includes the configuration circuit1604configuring a gatekeeper interface circuit2802in response to the network regulation description2904, where the gatekeeper interface circuit2802regulates communications between end points of both the first network zone1612and the second network zone1614with an external device1618. The example external device1618may be coupled to the first network zone1612, the second network zone1614, or both. Additionally or alternatively, the external device1618may be coupled to a transceiver (not shown) of the vehicle102, which may be a cellular, WiFi, and/or Bluetooth transceiver. 
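Enforcement against a data request permission description such as that ofFIG.29can be sketched minimally: each data requestor2908is limited to its authorized data values2910. The requestor names and data values are illustrative assumptions:

```python
# Sketch of regulating data requests against authorized data values (2910)
# per data requestor (2908); all names here are illustrative assumptions.
data_request_permissions = {
    "EP_dash": {"engine_speed", "coolant_temp"},
    "EP_infotainment": {"ambient_temp"},
}

def regulate_request(requestor, data_value):
    """Pass a data request (cross-zone or intra-zone) only if the requestor
    is authorized for that data value; unknown requestors get nothing."""
    return data_value in data_request_permissions.get(requestor, set())

print(regulate_request("EP_dash", "engine_speed"))          # True
print(regulate_request("EP_infotainment", "engine_speed"))  # False
```

The same check applies whether the first network interface circuit, the second network interface circuit, or the gatekeeper interface circuit performs the regulation; only the set of communications it sees differs.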
In certain embodiments, the transceiver may be communicatively coupled to a network zone, for example as a port on one of the network zones. In certain embodiments, the first network zone1612is a non-primary network zone, the second network zone1614is a primary network zone, and the transceiver is communicatively coupled to the second network zone1614. In a further example embodiment, the second network zone1614is an ethernet network, and the transceiver is coupled to the second network zone1614by communicating with the second network interface circuit1610through a port of a CES including the second network interface circuit1610. Example and non-limiting external devices1618include one or more of: a cloud server based application, a web based application, and/or a mobile device application. Again referencingFIG.29, an example data request permission description2906includes a data access permission2914associated with each one of a number of external communicators2912. Example external communicators2912include identified external devices1618, external applications, external flows, external entities (e.g., service, manufacturer, owner, operator, etc.), external addresses, etc. Example and non-limiting data access permissions2914include permissions to communicate with particular end points, flows, applications, vehicle functions, network zones, vehicle controllers, and the like. In certain embodiments, the data access permissions2914may be distinct for transmitted and received communications—for example a given external communicator2912may not have permissions to request data from a first end point on the vehicle, but the first end point on the vehicle may have permissions to send data to the given external communicator2912. 
An example data request permission description2906includes data access permissions associated with one or more of: an external device; an external communicator; a flow associated with an end point, external device, and/or external communicator; a vehicle function associated with an end point, external device, and/or external communicator; and/or an application associated with an end point, external device, and/or external communicator. Example and non-limiting data access permissions2914include one or more of: an ability to request, transmit, and/or publish data; an ability to request, transmit, and/or publish particular data values; and/or an external communication bandwidth limitation (e.g., a data rate, aggregated data amount per unit time, and/or a share of an available bandwidth). An example system2800further includes the gatekeeper interface circuit2802regulating communications between end points of the network zones1612,1614with external devices1618(and/or external communicators2912) in response to the data request permission description2906and/or the data access permissions2914. 
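The direction-distinct permissions described above, where an external communicator may receive data from an end point without being permitted to request data from it, can be sketched as follows. Communicator and end point names are illustrative assumptions:

```python
# Sketch of direction-aware data access permissions (2914): transmitted and
# received communications are permitted independently; names are illustrative.
data_access_permissions = {
    # (external communicator, end point) -> set of allowed directions
    ("fleet_portal", "EP_engine"): {"receive"},  # vehicle may push data out only
    ("service_tool", "EP_engine"): {"receive", "request"},
}

def external_allowed(communicator, end_point, direction):
    """Check whether a given direction ('request' or 'receive') is permitted
    for the communicator/end point pair; absent pairs are denied."""
    return direction in data_access_permissions.get((communicator, end_point), set())

print(external_allowed("fleet_portal", "EP_engine", "request"))  # False
print(external_allowed("fleet_portal", "EP_engine", "receive"))  # True
```

A bandwidth limitation permission would extend the permission value from a direction set to a structure carrying a data rate or aggregate limit.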
An example gatekeeper interface circuit2802further regulates communications with external device(s)1618(and/or external communicator(s)2912) in response to one or more of: a flow associated with the regulated communication(s) (e.g., adjusting permissions based on a priority of the associated flow, a role of the associated flow and/or current operation conditions, etc.); a data type associated with the regulated communication(s) (e.g., prioritizing or de-prioritizing certain data types, limiting certain data types to certain communication conditions such as availability of high data rate communications, typing data according to criteria such as age of the data and adjusting permissions accordingly, etc.); a data service provider associated with the regulated communication(s) (e.g., configuring data rate, bandwidth, and/or aggregate data values in response to an associated data service provider for the data); a vehicle function associated with the regulated communication(s) (e.g., prioritizing certain vehicle functions); and/or a connection type of a communicative coupling with the external device(s)1618(and/or external communicator(s)2912) (e.g., allowing for greater communication rates when a high rate and/or low cost data connection is available). An example system2800includes a configuration circuit1604that receives a policy update (e.g., from the policy manager circuit1602) including a change to the network regulation description2904, and updating the configuration(s) of the first network interface circuit1608, second network interface circuit1610, and/or gatekeeper interface circuit2802in response to the change to the network regulation description2904. In a further example, the policy manager circuit1602interprets an authorization associated with the policy update, for example based on a permission of an external device1618and/or external communicator2912providing the policy update. 
The example policy manager circuit1602suppresses the policy update, in whole or part, in response to the authorization indicating the requesting unit (e.g., the external device1618and/or external communicator2912) is not authorized to make the change to the network regulation description of the policy update. In certain embodiments, policy manager circuit1602may additionally or alternatively provide one or more policy notifications1620, to the requesting unit and/or to other external devices1618or external communicators2912, in response to suppressing or partially suppressing the policy update (e.g., referenceFIG.16and the related description). Example and non-limiting requesting units include one or more of: an entity associated with the policy update; an application associated with the policy update; a flow associated with the policy update; a vehicle function associated with the policy update; an identifier of the external device communicating the policy update; and/or an identifier of an external communicator associated with the policy update. Again referencingFIG.28, an example policy manager circuit1602interprets a policy1606including a network usage permission description3004(referenceFIG.30). An example network usage permission description3004includes an external data access description3006, where the configuration circuit1604further configures the gatekeeper interface circuit2802in response to the external data access description3006, and where the gatekeeper interface circuit2802regulates communications with an external device1618in response to the external data access description3006. An example external data access description3006includes external access permission(s)3014associated with external communicator(s)3012, such as identified external devices1618, external applications, external flows, external entities (e.g., service, manufacturer, owner, operator, etc.), external addresses, etc. 
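The authorization-gated policy update flow described above, including suppression of unauthorized updates and emission of a policy notification, can be sketched minimally. The requesting unit names, policy fields, and notification format are illustrative assumptions:

```python
# Sketch of authorization-gated policy updates with notification on
# suppression; requester identities and policy fields are assumptions.
policy = {"network_regulation": {"max_external_mbps": 10}}
authorized_updaters = {"manufacturer_portal"}
notifications = []  # stand-in for policy notifications (1620)

def apply_policy_update(requesting_unit, update):
    """Apply a change to the network regulation description only if the
    requesting unit is authorized; otherwise suppress and notify."""
    if requesting_unit not in authorized_updaters:
        notifications.append((requesting_unit, "update suppressed"))
        return False
    policy["network_regulation"].update(update)
    return True

print(apply_policy_update("third_party_app", {"max_external_mbps": 100}))    # False
print(apply_policy_update("manufacturer_portal", {"max_external_mbps": 20}))  # True
print(policy["network_regulation"]["max_external_mbps"])                      # 20
```

Partial suppression, as described above, would apply the authorized subset of the update and notify about the remainder, rather than rejecting the update wholesale.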
In certain embodiments, external communicator(s)3012include one or more local communicating devices requesting an external communication, such as a flow of the vehicle, an application, a network zone of the vehicle, an end point of a network zone, or the like. For example, an example gatekeeper interface circuit2802regulates external communications based on a flow association of a communicating one of the end points of the first network zone and/or the second network zone (e.g., limiting external communications to permitted communications according to the external access permission(s)3014, and/or allowing external communications that are not excluded by the external access permission(s)3014). An example gatekeeper interface circuit2802regulates external communications based on an application association of a communicating device (e.g., an external device1618, and/or an end point), for example limiting external communications to permitted communications according to the external access permission(s)3014and/or allowing external communications that are not excluded by the external access permission(s)3014. An example gatekeeper interface circuit2802regulates external communications based on a network zone association of a communicating device (e.g., a network zone associated with an end point that requests the external communication, or source zone; and/or that is the target of an external communication, or destination zone), for example limiting external communications to permitted communications according to the external access permission(s)3014and/or allowing external communications that are not excluded by the external access permission(s)3014. In certain embodiments, the first network zone and the second network zone may be separate virtual local area networks of the vehicle, and may have separate external access permissions3014. 
An example policy1606includes an external data quantity description (not shown), where the configuration circuit1604configures the gatekeeper interface circuit2802in response to the external data quantity description. An example external data quantity description includes a data limit for an application, where the gatekeeper interface circuit further regulates external communications based on an association of a communicating device with the application. An application may be a vehicle operation related application (e.g., an application operating on the vehicle, and/or operating on an external device with communicative interactions with the vehicle) or an application not related to vehicle operation (e.g., an infotainment application, an operator application, web browsing utilizing a network zone of the vehicle, a third party application communicating with the vehicle, etc.). An example external data quantity description includes a data limit for an end point of one of the network zones, and the gatekeeper interface circuit regulates communications based on a source or a destination end point of regulated communications. An example external data quantity description includes a data limit for a flow, and the gatekeeper interface circuit regulates external communications based on an association of a communicating device with the flow. 
Example and non-limiting data limits include one or more of: an amount of communicated data corresponding to a selected time period (e.g., MB per hour, GB per month, etc.); an amount of communicated data corresponding to a selected vehicle operating condition (e.g., MB per trip; data rate during idling operation; data rate at rated operation; data rate during high transient operation; etc.); an amount of communicated data corresponding to a data provider associated with the application, end point, and/or flow; a bandwidth share of the transceiver utilized for the communications; a bandwidth volume of the transceiver utilized for the communications; a bandwidth share of a channel of the transceiver (e.g., where the transceiver includes more than one channel, where the bandwidth share is limited for channel(s) servicing external communications for the application, end point, and/or flow); and/or a bandwidth volume of a channel of the transceiver (e.g., where the transceiver includes more than one channel, where the bandwidth volume is limited for channel(s) servicing external communications for the application, end point, and/or flow). ReferencingFIG.31, an example network usage permission description3004includes a network utilization description3102corresponding to a network zone3104, and a communicating device description3106corresponding to a local communicating device, such as an end point, a flow, a vehicle function, and/or an application. In the example, the gatekeeper interface circuit2802further regulates external communications based on the network utilization description3102, and an associated communicating device (e.g., corresponding to the communicating device description3106) with the regulated communication. 
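Enforcement of an external data quantity limit, such as an amount of communicated data per selected time period, can be sketched as a per-window accounting check. The window granularity, application name, and 5 MB limit are illustrative assumptions:

```python
# Minimal sketch of enforcing an external data quantity limit (e.g., bytes
# per time window for an application); all values here are illustrative.
usage = {}  # (application, window) -> bytes already transferred
data_limits = {"infotainment_app": 5_000_000}  # ~5 MB per window, assumed

def permit_transfer(application, nbytes, window):
    """Permit a transfer only if it keeps the application within its
    configured data limit for the current time window."""
    limit = data_limits.get(application)
    if limit is None:
        return True  # no quantity limit configured for this application
    used = usage.get((application, window), 0)
    if used + nbytes > limit:
        return False  # would exceed the configured data limit
    usage[(application, window)] = used + nbytes
    return True

print(permit_transfer("infotainment_app", 4_000_000, window=0))  # True
print(permit_transfer("infotainment_app", 2_000_000, window=0))  # False (exceeds 5 MB)
print(permit_transfer("infotainment_app", 2_000_000, window=1))  # True (new window)
```

A limit keyed on a vehicle operating condition (e.g., MB per trip) or a bandwidth share follows the same pattern with a different window or accounting unit.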
An example network utilization description3102includes determining a priority3108, an associated flow3110, an associated vehicle function3112, an associated application3114, and/or an associated condition or event3116(e.g., a triggering event to implement an aspect of the policy1606, vehicle or other conditions to be present to allow implementation of the aspect of the policy1606, and/or vehicle or other conditions which, if present, adjust or suppress an aspect of the policy1606) with the communicating device to regulate the external communications. The network utilization description3102may include one or more of: a bandwidth of the network zone3104available to be utilized to support external communications; a data rate on the network zone3104available to be utilized to support external communications; a bandwidth limitation of the network zone3104(e.g., where external communications would cause a general exceedance, they may be suppressed or reduced); and/or a data rate limitation of the network zone3104(e.g., where external communications would cause a general exceedance, they may be suppressed, reduced, or delayed). In certain embodiments, priorities3108or other information related to the external communications may be compared with priorities of on-vehicle communications utilizing the network zone, and an external communication may take priority over the on-vehicle communication, which may be suppressed, reduced, or delayed until the external communication is serviced. In certain embodiments, service requirements (e.g., QoS parameters) for on-vehicle end points, flows, applications, vehicle functions, etc. (e.g., local communicating devices), may be considered in determining an external communication permission, and the external communication may be allowed while the service requirements can be met. 
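The priority comparison described above, where an external communication may preempt lower-priority on-vehicle traffic while service requirements of higher-priority local traffic are preserved, can be sketched as a bandwidth headroom check. The priority scale, capacities, and demand figures are illustrative assumptions:

```python
# Sketch of priority arbitration between an external communication and
# on-vehicle traffic sharing a network zone; all numbers are illustrative.
def admit_external(external_priority, external_mbps, zone_capacity_mbps,
                   onvehicle_demand):
    """onvehicle_demand: list of (priority, mbps) for local traffic.

    The external communication may preempt lower-priority local traffic,
    but local traffic at or above external_priority keeps its bandwidth,
    preserving its service requirements.
    """
    protected = sum(mbps for prio, mbps in onvehicle_demand
                    if prio >= external_priority)
    return external_mbps <= zone_capacity_mbps - protected

demand = [(9, 50), (2, 40)]  # on-vehicle traffic: (priority, mbps)
print(admit_external(external_priority=5, external_mbps=40,
                     zone_capacity_mbps=100, onvehicle_demand=demand))  # True
print(admit_external(external_priority=5, external_mbps=60,
                     zone_capacity_mbps=100, onvehicle_demand=demand))  # False
```

In the first call, the priority-2 local traffic may be suppressed, reduced, or delayed to service the external communication; in the second, the protected priority-9 traffic leaves insufficient headroom, so the external communication is refused or deferred.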
ReferencingFIG.32, an example vehicle102includes a first network zone3202, and a second network zone3204that is of a different type than the first network zone3202. The example vehicle includes a gatekeeper interface circuit3206that is interposed between the first network zone3202and an external device3210, and between the second network zone3204and the external device3210. The gatekeeper interface circuit3206may be physically interposed, for example where communications between the zones3202,3204and the external device3210pass through the gatekeeper interface circuit3206, or logically interposed, for example where communications between the zones3202,3204and the external device3210are regulated by the gatekeeper interface circuit3206. In the example ofFIG.32, a transceiver3208provides communicative coupling with the external device3210, and the gatekeeper interface circuit3206is interposed between the zones3202,3204and the transceiver3208. The transceiver3208ofFIG.32is depicted as a single device, although a given vehicle may have a number of transceivers (not shown). An example gatekeeper interface circuit3206regulates communications between a selected number of zones3202,3204on the vehicle102and selected transceiver(s)3208. For example and without limitation, operations of the gatekeeper interface circuit3206may limit external communications with selected zones3202,3204to ensure security of vehicle data and operations, to ensure protection of private and/or proprietary information, and to preserve functionality of the vehicle to perform the selected mission (e.g., limiting extraneous and/or malicious network traffic on the selected zones3202,3204). 
In another example and without limitation, operations of the gatekeeper interface circuit3206may limit utilization of selected transceiver(s)3208, preserving external communication bandwidth, limiting the amount and/or rate of data through the transceiver(s)3208, and/or ensuring external data communications are attributed to a proper local communicating device and/or data service provider. ReferencingFIG.33, an example CND108is depicted, consistent with the example ofFIG.32in certain embodiments of the present disclosure. The example CND108includes the gatekeeper interface circuit3206, and further includes a policy manager circuit3302that interprets a policy1606including a network regulation description, a configuration circuit3304that configures a first network interface circuit3306and/or a second network interface circuit3308in response to the policy1606, and where the network circuit(s)3306,3308regulate communications between end points of the respective network zone (intra-network communications) and/or between end points across the respective network zones (inter-network communications). The example ofFIG.33depicts two network interface circuits3306,3308, although operations of the gatekeeper interface circuit3206may be performed in relation to only one network interface circuit, a subset of available network interface circuits, or all network interface circuits. ReferencingFIG.34, an example CND108includes the second network interface circuit3308, where the gatekeeper interface circuit3206regulates communications between the second network zone3204and the external device3210. In the example ofFIG.34, external communications from the first network zone3202are provided through the first network interface circuit3306to the second network zone3204, and are thereby regulated by the gatekeeper interface circuit3206as communications on the second network zone3204. 
Additionally or alternatively, external communications from a network zone (such as the first network zone3202) may not be regulated by the gatekeeper interface circuit3206, and/or external communications from a network zone (such as the first network zone3202) may not be possible. ReferencingFIG.35, an example vehicle102includes a vehicle controller3502, where the gatekeeper interface circuit3206is positioned on the vehicle controller3502. The example gatekeeper interface circuit3206regulates external communications between selected network zones3204,3202and an external device3210. An example gatekeeper interface circuit3206may be an end point of the second network zone3204, and/or the vehicle controller3502may be an end point of the second network zone3204. ReferencingFIG.36, an example gatekeeper interface circuit3206is distributed between two vehicle controllers3502,3602, with each of the vehicle controllers3502,3602provided as an end point of the second network zone3204. In certain embodiments (not shown), the vehicle controllers3502,3602may be end points on separate network zones3204. In the examples where the gatekeeper interface circuit3206is distributed, each gatekeeper interface circuit3206portion may regulate portions of external communications, such as communications with an associated network zone, and/or may be capable to regulate all external communications of the selected network zones, for example to provide for redundant capability if communications with one of the gatekeeper interface circuit3206portions is lost or degraded. ReferencingFIG.37, an example gatekeeper interface circuit3206is distributed between a first portion on a CND108, and a second portion on a vehicle controller3702. The example vehicle controller3702is an end point on the second network zone3204. 
Similar to the example ofFIG.36, each gatekeeper interface circuit3206portion may regulate portions of external communications, such as communications with an associated network zone, and/or may be capable to regulate all external communications of the selected network zones, for example to provide for redundant capability if communications with one of the gatekeeper interface circuit3206portions is lost or degraded. ReferencingFIG.38, an example policy1606includes an external data routing description3802, where the configuration circuit1604configures the gatekeeper interface circuit in response to the external data routing description3802. An example external data routing description3802includes one or more of a local DNS3804, an external DNS3806, and/or one or more external data routing paths3808. ReferencingFIG.39, an example local DNS3804includes a number of local address values3904for end points3902of network zone(s), each corresponding to at least one non-local address value3906. An example local DNS3804may be stored as a data structure, as a part of the policy1606, and may be included with a local address table2100(referenceFIG.21) or as a separate data structure. The example local DNS3804may be utilized in a network address translation (NAT) operation. An example non-local address value3906includes an address utilized by an external device (e.g., an IPv4 or IPv6 address directed to an end point, where the IPv4 or IPv6 address may not match the local address value3904, but may be a value from a previous configuration, a normally used value by an entity associated with the external device, etc.). An example non-local address value3906includes a standardized value for the end point (e.g., an industry standard, a customary value, a value utilized by a standards body such as SAE, etc.). An example non-local address value3906includes a proprietary value for the end point (e.g., a value normally utilized by a manufacturer, after-market entity, etc.). 
An example non-local address value3906includes a previous local address value for an end point (e.g., a local address value3904utilized when the vehicle was manufactured, utilized for a previous configuration of the vehicle, utilized for a previous configuration of a related vehicle such as an earlier model year, etc.). The utilization of the local DNS3804allows for external devices to address end points3902of the vehicle using a separate non-local address value3906, without requiring knowledge of the network configuration, location, or other information about end points3902of the vehicle. The utilization of the local DNS3804additionally allows for changes to the vehicle configuration, such as the movement of end points between network zones, consolidation of end points, and/or any other changes to the end points of the vehicle and/or the network topology of the vehicle, while still allowing external devices, applications, and the like to function properly. The utilization of the local DNS3804also provides for separation of knowledge about the vehicle from external applications, allowing for a greater number of users to access vehicle information, isolating external users from the vehicle information, and reducing external application development times and/or resource requirements. The utilization of the local DNS3804also provides for ease of incremental changes to the network topology of related vehicles, such as migration of end points from a first network zone to a second network zone over a number of model years or other configuration iterations. An example policy manager circuit1602determines an address change of an end point of the first network zone and/or second network zone, and updates the local DNS3804in response to the address change. 
For example, the policy manager circuit1602may detect a move of the end point between network zones (e.g., detecting communications from the end point, receiving an identifier from the end point at a new location, and/or receiving a notification of the change from the end point, a service tool, or the like), and update the local DNS3804with a local address value3904corresponding to the new location (e.g., network zone, address value, etc.) in response to the move. In another example, the policy manager circuit1602may detect a change of a non-local address value3906for an end point, and update the local DNS3804in response to the non-local address value3906change. For example, a change to the policy1606from an external device may indicate that a non-local address value3906change has occurred (e.g., “AmbTempSens” is now “Ambient temperature sensor”), and/or a published listing of non-local address values3906may be updated (e.g., a listing provided on a memory of a cloud server, where the policy manager circuit1602periodically and/or episodically surveys the listing for changes). An example policy manager circuit1602determines an authorization of an external device providing for the change of the non-local address value3906, for example allowing only authorized devices, entities, applications, or the like to adjust the non-local address value3906. The operations of the policy manager circuit1602to update the non-local address value(s)3906allow for convenient compliance with industry standards, manufacturer preferences, and/or systematic changes to a number of vehicles without having to configure individual vehicles when changes are made to proprietary or standard references to end points.
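The local DNS behavior described above can be sketched as a small translation table. The class and method names below, and the tuple representation of a local address, are assumptions for illustration; only the underlying operations (NAT-style resolution, update on an end-point move, and update on a non-local address value change such as “AmbTempSens” becoming “Ambient temperature sensor”) come from the description.

```python
# Hypothetical sketch of the local DNS: non-local address values (standardized,
# proprietary, or legacy names) translate to current local addresses, and the
# policy manager updates the mapping when end points move or names change.
class LocalDNS:
    def __init__(self):
        self._table = {}  # non-local address value -> (network zone, local address)

    def register(self, non_local, zone, local_addr):
        self._table[non_local] = (zone, local_addr)

    def resolve(self, non_local):
        # NAT-style lookup: an external device addresses an end point by the
        # non-local value without knowing the vehicle's network topology
        return self._table.get(non_local)

    def on_endpoint_moved(self, non_local, new_zone, new_addr):
        # the policy manager detected a move between network zones
        self._table[non_local] = (new_zone, new_addr)

    def on_renamed(self, old_non_local, new_non_local):
        # a non-local address value change (e.g., a standards or listing update)
        if old_non_local in self._table:
            self._table[new_non_local] = self._table.pop(old_non_local)
```

The rename operation also illustrates the memory point made below: retiring superfluous non-local address values keeps the table from growing without bound.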
It can be seen that operations to update the non-local address values3906can also improve memory utilization, as the size of the local DNS3804(and/or local address table2100) can be reduced over time as a group of related vehicles synchronize on accepted address values, and superfluous relationships of no longer utilized non-local address values3906are eliminated. ReferencingFIG.40, an example external data routing description includes an external DNS3806including a number of external address value(s)4004for external network access locations, each corresponding to a local communicating device4002. The external DNS3806allows for the gatekeeper interface circuit2802to control access for local communicating devices4002to external network access locations. In certain embodiments, the external DNS3806is operated to allow only permitted external access (e.g., where an external address value4004is provided). In certain embodiments, the external DNS3806is operated to prevent external access (e.g., where a listed external address4004may not be accessed). In certain embodiments, both the access permissions and/or the access type may be adjusted according to the local communicating device4002. For example, certain end points, flows, applications, vehicle functions, and the like may be limited to external access where an external address value4004is available, and other end points, flows, applications, vehicle functions, and the like may be permitted external access except where a particular external address value4004is listed preventing access. In certain embodiments, the external DNS3806includes a non-local address value3906—for example an IP address corresponding to the external address value4004which may be a common name such as a website address as listed in written language.
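The two access modes described above (allow only listed addresses for some devices, block only listed addresses for others) can be sketched as a single predicate. The function and mode names are assumptions for illustration.

```python
# Hypothetical sketch: per-device external access control via the external DNS,
# where some local communicating devices are limited to listed addresses
# ("allowlist") and others are permitted except at listed addresses ("blocklist").
def may_access(device_mode: str, listed_addresses: set, target: str) -> bool:
    if device_mode == "allowlist":
        return target in listed_addresses       # only permitted external access
    if device_mode == "blocklist":
        return target not in listed_addresses   # listed addresses are prevented
    return False                                # unknown mode: deny by default
```

Denying by default on an unrecognized mode is an assumed design choice consistent with the security emphasis of the disclosure.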
The utilization of the non-local address value3906allows for rapid external access without having to use an external DNS (e.g., from a cloud server and/or internet provider), and also allows for differential response to local communicating devices4002for a given external address value4004(e.g., allowing for some local communicating devices to access a given external web address, and re-directing others to a selected location). Example and non-limiting external network access locations include one or more of: an internet address, a wide area network address, and/or an external device and/or external application identifier (e.g., “Route planning agent”, “Service assistance agent”, an IPv6 address, etc.). An example external data routing path3808includes a network zone trajectory of a regulated external communication corresponding to a local communicating device. An example network zone trajectory includes a data configuration for the communication, such as one or more of: an up-sampling description; a down-sampling description; an encapsulation description; a data processing description; a communication frame processing description; and/or a data rate description. For example, the network zone trajectory allows for external communications to be provided with selected processing of the communication, including a payload and/or a frame of the communication, and/or to be provided at a selected data rate. The selected data rate may be according to a request of the data rate from an external device, and/or according to a data rate limit associated with the external communication (e.g., to limit network utilization, transceiver utilization, data transmissions associated with a data provider, etc.). 
The network zone trajectory additionally or alternatively allows for selected encapsulation of the communication, for example when a message is passed through an intervening network zone (e.g., a CAN message from a first network zone passing through as an ethernet message on a second network zone) before being transmitted externally to the vehicle. An example network zone trajectory further includes an external communication portal4102(e.g., referenceFIG.41and the related description) for the regulated communication, where the gatekeeper interface circuit3206further regulates communication between the local communicating device (e.g., an end point of a network zone) and the external communication portal4102. Example and non-limiting external communication portals4102include a transceiver selection (e.g., where more than one transceiver is available), an access point name (APN) selection, a hardware port selection (e.g., a hardware port of a network zone, an OBD port, a proprietary communication port, a USB port, etc.), a WiFi adapter, a Bluetooth adapter, and/or a cellular communication. The example network zone trajectory allows for the gatekeeper interface circuit3206to utilize external communications having the lowest cost, lowest impact on vehicle and/or network performance, to attribute external communications to a proper service provider, to ensure QoS parameters for local communicating devices, and/or to ensure security of external communications. An example gatekeeper interface circuit3206adjusts the network zone trajectory in response to an operating condition of the vehicle (e.g., vehicle shutdown, service mode, idling, operating at a rated condition, available external communication portals4102, etc.). An example gatekeeper interface circuit3206adjusts the network zone trajectory in response to an operating condition of a network zone and/or a transceiver (e.g., current utilization, connectivity, fault status, etc.). 
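The encapsulation step mentioned above (e.g., a CAN message carried through an Ethernet zone) can be sketched as a simple wrap/unwrap pair. The 4-byte-ID plus 1-byte-length framing below is an assumption for illustration, not a framing specified by the disclosure.

```python
# Hypothetical sketch: encapsulating a CAN frame as the payload of a message on
# an intervening network zone before external transmission, and recovering it.
import struct

def encapsulate_can(can_id: int, data: bytes) -> bytes:
    # assumed framing: big-endian 4-byte CAN ID, 1-byte length, then payload
    return struct.pack(">IB", can_id, len(data)) + data

def decapsulate_can(frame: bytes):
    can_id, length = struct.unpack(">IB", frame[:5])
    return can_id, frame[5:5 + length]
```

A round trip through the pair returns the original identifier and payload unchanged.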
An example external data routing path includes an APN of a regulated communication (e.g., specifying an associated data service provider for the communication). An example gatekeeper interface circuit3206adjusts the APN in response to an operating condition of the vehicle, a network zone, and/or the transceiver (e.g., where a communication is supporting more than one application, vehicle function, and/or flow, operations to adjust the APN in response to the operating condition of the vehicle allow for the regulated communication to be attributed to a “primary consumer” of the communication). An example gatekeeper interface circuit3206aggregates a regulated communication from a number of local communicating devices (e.g., where a communication supports more than one end point, application, vehicle function, and/or flow), and distributes the aggregated regulated communications between more than one APN associated with the local communicating devices (e.g., where a communication is supporting multiple consumers, the aggregate amount of communications can be distributed across APNs, allowing for a reduction in total external communications by avoiding redundancy, while attributing all external communications). In certain embodiments, operations to adjust APNs, aggregate regulated communications, and/or distribute aggregated regulated communications between APNs are performed in response to an attribution description of the policy1606. An example policy manager circuit1602determines a change to the external data routing path, for example provided by an external device1618, and updates the external data routing description in response to the change of the external data routing path. 
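The attribution of an aggregated communication across APNs can be sketched as a weighted split. The function name and the notion of per-APN weights are assumptions; the disclosure states only that aggregated regulated communications may be distributed between more than one APN so that all external communications are attributed.

```python
# Hypothetical sketch: attributing an aggregated regulated communication's
# bytes across the APNs of its consumers, avoiding redundant transmissions
# while keeping every external communication attributed to a data provider.
def attribute_bytes(total_bytes: int, weights_by_apn: dict) -> dict:
    """weights_by_apn maps APN -> assumed share of the communication's use."""
    total_weight = sum(weights_by_apn.values())
    return {apn: total_bytes * weight / total_weight
            for apn, weight in weights_by_apn.items()}
```

For example, a 1000-byte communication consumed three parts to one by two APNs would be attributed 750 and 250 bytes respectively.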
An example policy manager circuit1602determines an authorization of an external device providing the change of the external data routing path, and suppresses all or a portion of the change to the external data routing path in response to determining the change is not authorized or is not fully authorized. An example policy manager circuit1602changes the external data routing path in response to a change of a local communicating device (e.g., changing the routing in response to an end point moving from one network zone to another network zone). Example and non-limiting changes to a local communicating device include one or more of: a movement of an end point from one of the first network zone or the second network zone to the other one of the first network zone or the second network zone; a change in a flow, where the change comprises a change in a priority, a subscription, or a permission; a change in an application, where the change comprises a change in a priority, a subscription, or a permission; and/or a change in a quantity, configuration, or type of data communicated by the local communicating device. ReferencingFIG.41, an example vehicle102includes a gatekeeper interface circuit3206that regulates communications between local communicating devices and an external device1618. The example vehicle102includes a local communicating device originating a communication and/or targeted to receive a communication from the external device1618(originating/receiving local communicating device4104), and the gatekeeper interface circuit3206providing a routed external communication4108in response to the originated or received communication, and further in response to a policy1606including an external data routing path, permissions associated with the local communicating device, and/or permissions associated with the external device1618.
In certain embodiments, the gatekeeper interface circuit3206selects an external communication portal4102for the routed external communication4108, which includes selecting a device through which the routed external communication4108will be communicated to the external device1618. Example external communication portals4102include one or more of: a first transceiver4110and/or an APN selection4122for the first transceiver4110(e.g., allowing for selection of a data provider associated with the communication4108); a second transceiver4112, an APN selection4122, and/or a channel selection4124for the second transceiver4112(e.g., allowing for selection of a data provider and/or a channel of the transceiver4112); a second network zone connection4114(e.g., a port of an ethernet network zone); a WiFi adapter4116(e.g., utilizing a WiFi connection if available); a Bluetooth adapter4118(e.g., utilizing a Bluetooth connection if available); and/or a first network zone connection4120(e.g., a port of a CAN network zone). The example ofFIG.41depicts the first transceiver4110and second transceiver4112for convenience of the description to indicate that a transceiver4110,4112may have channels or not, although a given vehicle102may have any number of transceivers4110,4112, of which some, all, or none may have channeled operations. The example ofFIG.41depicts a single connection to each network zone for convenience of the description to indicate that any network zone may have a connection, although a given network zone may have no connection, or more than one connection (e.g., an OBD port and a proprietary port, etc.). 
Without limitation to any other aspect of the present disclosure, the gatekeeper interface circuit3206may adjust routing operations based on available external communication portals4102, vehicle operating conditions, network operating conditions, permissions of any entity in the communication chain, priority of any entity in the communication chain, service requirements of any entity related to the vehicle, and/or data rate and/or quantity limitations. ReferencingFIG.42, an example policy1606includes an external data service description4202, where the configuration circuit1604configures the gatekeeper interface circuit3206in response to the external data service description4202. An example external data service description4202includes a number of local communicating devices4204, each corresponding to a QoS value(s)4206. Example and non-limiting QoS values4206include one or more of: a priority value; a packet delay value (e.g., a maximum, average, or other packet delay description); a packet loss rate value (e.g., a maximum, average, longest gap time, or other packet loss description); a data rate value; a maximum drop-out time value; an acknowledgement value (e.g., whether acknowledgement for communications relevant to the associated local communicating device, if available, are required); a data buffering priority value (e.g., which may be utilized to determine a buffer size, buffer priority, and/or data expiration parameters for buffered data); a data buffering size value (e.g., a data buffer size, buffered time, or other storage size related parameters); and/or a data life cycle description (e.g., indicating storage life for associated data, expiration time, and/or deletion priority). Without limitation to any other aspect of the present disclosure, a local communicating device includes one or more of: an end point of a network zone; an application; a flow; a vehicle function; and/or a vehicle controller. 
In certain embodiments, the gatekeeper interface circuit3206regulates external communications using a QoS value4206corresponding to the local communicating device4204associated with a regulated communication. In certain embodiments, for example where more than one local communicating device4204is associated with a regulated communication (e.g., an end point and a flow), the gatekeeper interface circuit3206utilizes the QoS value(s)4206associated with a highest priority one of the local communicating devices4204, and/or applies a super-set of applicable QoS values4206that meet the highest service values for all of the associated local communicating devices4204. An example policy manager circuit1602determines a change of the external data service description, for example through an update of the policy from an external device, and the configuration circuit1604updates the configuration of the gatekeeper interface circuit3206in response to the updated policy. An example policy manager circuit1602determines an authorization of an external device providing the change of the external data service description, and suppresses all or a portion of the change to the external data service description in response to determining the change is not authorized or is not fully authorized. Again referencingFIG.40, an example external data routing description includes an external DNS including a number of external address values4004for external network access locations, each corresponding to a local communicating device4002(e.g., an end point of a network zone). An example gatekeeper interface circuit3206further accesses an off-vehicle external DNS (not shown) in response to a request by an end point to communicate with an external address value, where the requested external address value is not found on the external DNS3806. An example gatekeeper interface circuit3206further updates the external DNS3806in response to accessing the off-vehicle external DNS. 
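The "super-set" merge of QoS values described above can be sketched as taking the most demanding value of each parameter across the associated local communicating devices. The dictionary representation and the classification of which parameters are maximized versus minimized are assumptions for illustration.

```python
# Hypothetical sketch: merging QoS values when more than one local communicating
# device is associated with a regulated communication, producing a super-set of
# applicable values that meets the highest service level for all of them.
MAXIMIZE = {"priority", "data_rate"}   # assumed: higher value = better service

def merge_qos(qos_values: list) -> dict:
    merged = {}
    for qos in qos_values:
        for key, value in qos.items():
            if key not in merged:
                merged[key] = value
            elif key in MAXIMIZE:
                merged[key] = max(merged[key], value)
            else:
                # e.g., packet_delay, packet_loss_rate: lower = better service
                merged[key] = min(merged[key], value)
    return merged
```

So an end point requiring at most 20 ms delay and a flow requiring priority 3 yield a merged requirement of priority 3 and 20 ms delay.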
Again referencingFIG.28, an example vehicle102includes a first network zone1612and a second network zone1614, where the second network zone1614is of a different type than the first network zone1612. The example vehicle102includes a policy manager circuit1602that interprets a policy1606including an external data routing description and an external data service description. The example vehicle102includes a configuration circuit1604that configures a gatekeeper interface circuit2802in response to the external data routing description and the external data service description. In the example, the gatekeeper interface circuit2802is interposed between the first network zone and at least one external communication portal4102(e.g., referenceFIG.41) selectively couplable to an external device1618, and further interposed between the second network zone and the at least one external communication portal4102. The gatekeeper interface circuit2802regulates communications between end points of the network zones1612,1614and the external communication portal(s)4102. An example external data routing description includes a number of local communicating devices, each corresponding to an external data routing path. Example external data routing paths include a network zone trajectory of a regulated communication. An example network zone trajectory includes a data configuration such as: an up-sampling description; a down-sampling description; an encapsulation description; a data processing description; a communication frame processing description; and/or a data rate description. An example network zone trajectory includes at least one external communication portal4102for the regulated communication. An example external data service description includes a number of local communicating devices, each corresponding to one or more QoS values.
In a further example, the external communication portal(s)4102include a first transceiver and a second transceiver, where the gatekeeper interface circuit further distributes the regulated communications between the first transceiver and the second transceiver in response to the external data service description. In another example, the external communication portal(s)4102include a first channel associated with a transceiver and a second channel associated with the transceiver, and where the gatekeeper interface circuit further distributes the regulated communications between the first channel and the second channel in response to the external data service description. Example external communication portal(s)4102include one or more external access points such as: a transceiver; a wireless transceiver; a Bluetooth transceiver; a hardware port on the first network zone; a hardware port on the second network zone; an on-board diagnostic (OBD) port; a proprietary network port; an external network utilizing wireless communication with the vehicle (e.g., where communications with the external device are direct to the external network, and/or tunneled through the external network); an external network utilizing cellular communication with the vehicle; an external network utilizing Bluetooth communication with the vehicle (e.g., where communications with the external device are direct to the external network, and/or tunneled through the external network); more than one channel of a transceiver; more than one transceiver; and/or a number of channels distributed across at least two transceivers. An example gatekeeper interface circuit2802further distributes the regulated communications between the at least two external access points. 
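Distribution of regulated communications across transceivers or channels can be sketched as a round-robin assignment. This is deliberately simplified; a real gatekeeper as described would also weigh cost, QoS, connectivity, and attribution, and the names below are assumptions.

```python
# Hypothetical sketch: round-robin distribution of regulated communications
# across external access points (transceivers, or channels of a transceiver).
import itertools

def distribute(messages: list, access_points: list) -> dict:
    assignment = {ap: [] for ap in access_points}
    # cycle through the access points, assigning one message at a time
    for message, ap in zip(messages, itertools.cycle(access_points)):
        assignment[ap].append(message)
    return assignment
```

With two channels of one transceiver, three messages land two on the first channel and one on the second.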
In a further example, each QoS value includes a service description such as: a priority value; a packet delay value; a packet loss rate value; a data rate value; a maximum drop-out time value; an acknowledgment value; a data buffering priority value; a data buffering size value; and/or a data life cycle description. Certain aspects of the present disclosure are set forth as procedures to perform operations related to the present disclosure. Operations may be performed, without limitation, by any controllers, circuits, devices, components, sensors, actuators, logic circuits, or other aspects as set forth in the present disclosure. Procedures are depicted schematically as illustrative examples, and operations may be omitted, combined, divided, and/or re-ordered in whole or part. In certain embodiments, one or more operations of a first procedure may be combined with one or more operations of another procedure. ReferencingFIG.43, an example procedure4300to regulate communications between networks of a different type on a vehicle is schematically depicted. The example procedure4300includes an operation4302to interpret a policy including a network regulation description, and an operation4304to regulate communications between end points of a first network and end points of a second network in response to the network regulation description. ReferencingFIG.44, an example procedure4400to regulate communications between networks of a different type on a vehicle is schematically depicted. The example procedure4400includes an operation4302to interpret a policy including a network regulation description, and an operation4402to receive a policy communication from an external device. 
The procedure4400includes an operation4404to determine whether the policy is verified—for example if the external device is authorized to update the policy, if the system is capable to perform according to the policy, if the policy violates any security criteria, if the performance of the policy would exceed a data storage limit or a communication limit, etc. In response to operation4404indicating YES, the procedure4400includes an operation4406to store and/or update the policy, and the operation4304to regulate communications between end points of a first network and end points of a second network in response to the network regulation description. In response to operation4404indicating NO, the procedure4400optionally includes an operation4408to provide a notification to the external device (and/or to other external devices), and the operation4304to regulate communications between end points of a first network and end points of a second network in response to the network regulation description (e.g., utilizing the previous policy, a default policy, or the like). ReferencingFIG.45, an example procedure4500to regulate communications between networks of a different type on a vehicle is schematically depicted. The example procedure4500includes an operation4302to interpret a policy including a network regulation description, and an operation4402to receive a policy communication from an external device. The procedure4500includes an operation4404to determine whether the policy is verified—for example if the external device is authorized to update the policy, if the system is capable to perform according to the policy, if the policy violates any security criteria, if the performance of the policy would exceed a data storage limit or a communication limit, etc. In response to operation4404indicating YES, the procedure4500includes an operation4502to update local configuration files of one or more of: a network interface circuit, a CEG, a CES, and/or gateway interface circuit. 
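The verification branch (operation4404) can be sketched as a gate that applies a proposed policy only when every check passes, and otherwise keeps the previous policy and emits a notification (operation4408). The field names, limits, and return convention below are assumptions for illustration.

```python
# Hypothetical sketch of operation 4404: verify a policy update from an
# external device before storing it; on failure, retain the previous policy.
def verify_policy(proposed: dict, device_authorized: bool,
                  storage_limit: int, comm_limit: int) -> bool:
    return (device_authorized                                    # may update?
            and proposed.get("storage_needed", 0) <= storage_limit
            and proposed.get("comm_needed", 0) <= comm_limit
            and not proposed.get("violates_security", False))

def apply_policy(current: dict, proposed: dict, device_authorized: bool,
                 storage_limit: int = 1024, comm_limit: int = 1024):
    if verify_policy(proposed, device_authorized, storage_limit, comm_limit):
        return proposed, None                  # store/update (operation 4406)
    return current, "notify_external_device"   # keep previous policy (4408)
```

Regulation then proceeds with whichever policy the gate returned, matching the flow in which operation4304runs whether or not the update was accepted.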
In response to operation4404indicating NO, the procedure4500optionally includes an operation4408to provide a notification to the external device (and/or to other external devices). The procedure4500includes an operation4504to regulate intra-network, inter-network, and/or external communications, using the network interface circuit(s), CEG(s), CES(s), and/or gateway interface circuit(s) (e.g., whether updated or not). ReferencingFIG.46, an example procedure4600to command an actuator in response to a diagnostic command value is schematically depicted. The example procedure4600includes an operation4602to interpret a policy including an active diagnostic description, an operation4604to provide a diagnostic command value to an end point in response to the active diagnostic condition, and an operation4606to command an actuator in response to the diagnostic command value. ReferencingFIG.47, an example procedure4700to command an actuator in response to a diagnostic command value is schematically depicted. The example procedure4700includes an operation4702to interpret a policy including an active diagnostic description and a diagnostic execution condition, and an operation4704to determine whether a vehicle operating condition is consistent with the diagnostic execution condition and/or a diagnostic command value (e.g., determined from the active diagnostic description). In response to the operation4704determining YES, the procedure4700includes an operation4604to provide a diagnostic command value to an end point in response to the active diagnostic condition, and an operation4606to command an actuator in response to the diagnostic command value. ReferencingFIG.48, an example procedure4800to command an actuator in response to a diagnostic command value is schematically depicted. 
The example procedure4800includes an operation4602to interpret a policy including an active diagnostic description, and an operation4802to perform a diagnostic data collection operation in response to the active diagnostic description. The example procedure4800further includes an operation4604to provide a diagnostic command value to an end point in response to the active diagnostic condition, and an operation4606to command an actuator in response to the diagnostic command value. ReferencingFIG.49, an example procedure4802to perform a diagnostic data collection operation is schematically depicted. The example procedure4802includes an operation4902to process collected data (e.g., processing a payload and/or frame information of messages of the collected data), an operation4904to store the collected, processed data, and an operation4906to communicate at least a portion of the stored data to an external device. ReferencingFIG.50, an example procedure5000to store and/or communicate a diagnostic confirmation value is schematically depicted. The example procedure5000includes an operation4602to interpret a policy including an active diagnostic description, an operation4604to provide a diagnostic command value to an end point in response to the active diagnostic condition, and an operation4606to command an actuator in response to the diagnostic command value. The example procedure5000further includes an operation5002to determine a diagnostic confirmation value, and an operation5004to store and/or communicate the diagnostic confirmation value to one or more external devices. ReferencingFIG.51, an example procedure5100to command an actuator in response to a diagnostic command value is schematically depicted. 
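The gated active-diagnostic flow of FIGS. 46-47 (operations 4602, 4704, 4604, and 4606) can be sketched as below. The function and field names are hypothetical; the sketch only shows the gating logic: the diagnostic command value reaches the actuator only when the vehicle operating condition is consistent with the diagnostic execution condition.

```python
# Schematic sketch of operations 4602/4704/4604/4606. The policy layout
# (a dict with a command value and an execution-condition predicate) is
# an assumption for illustration.
def run_active_diagnostic(policy, vehicle_condition, command_actuator):
    """Gate the diagnostic command on the execution condition, then
    provide the command value and command the actuator."""
    command = policy["diagnostic_command_value"]     # from the active diagnostic description
    # operation 4704: is the vehicle operating condition consistent?
    if not policy["execution_condition"](vehicle_condition):
        return None                                   # do not execute the diagnostic
    # operations 4604/4606: provide the command value and command the actuator
    return command_actuator(command)
```

A policy whose execution condition requires, say, an idle vehicle would then only command the actuator while the vehicle is idle.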
In addition to operations recited in relation toFIG.46preceding, the example procedure5100includes an operation5102to determine whether a target device description points to a network address value for the target end point(s) related to a commanded actuator (e.g., if the target device description does not point to a network address value, or points to an incorrect network address value, then operation5102determines NO). In response to operation5102determining YES, the procedure5100proceeds to operation4604. In response to operation5102determining NO, the procedure5100includes an operation5104to supply or adjust a network address value for the target end point(s), and then proceeds to operation4604. ReferencingFIG.52, an example procedure5200to regulate communications between an external device and an end point of a network zone for a vehicle is schematically depicted. The example procedure5200includes an operation5202to interpret a policy including an external communication value, and an operation5204to regulate communications between the end point(s) of the network zone(s) and the external device in response to the external communication value. ReferencingFIG.53, an example procedure5204to regulate communications between an external device and an end point of a network zone for a vehicle is schematically depicted. The example procedure5204includes an operation5302to determine a type of the external communication value. In response to operation5302determining the type as an active diagnostic description, the procedure5204includes an operation5304to perform an active diagnostic operation. In response to operation5302determining the type as an active test description, the procedure5204includes an operation5306to perform an active test operation. In response to operation5302determining the type as a vehicle control command, the procedure5204includes an operation5308to perform a vehicle control operation. 
In response to operation5302determining the type as an active assistance operation, the procedure5204includes an operation5310to perform an active assistance operation. Example and non-limiting operations5310include one or more of: a service personnel contacting an operator of the vehicle, the service personnel commanding a specified active diagnostic operation5304, the service personnel commanding a specified active test operation5306, and/or the service personnel commanding a specified vehicle control operation5308. The example procedure5204further includes an operation5312to determine if the external communication value indicates further operations, and in response to operation5312indicating YES, the procedure5204includes returning to operation5302. ReferencingFIG.54, an example procedure5400to regulate communications between an external device and an end point of a network zone for a vehicle is schematically depicted. The example procedure5400includes an operation5402to interpret a policy including an external communication value and a target device description. The example procedure5400further includes an operation5404to determine whether the target device description points to a network address value for the target end point(s). In response to operation5404determining YES, the example procedure5400includes an operation5408to regulate communications between the external device and end point(s) of network zone(s) in response to the external communication value. In response to operation5404determining NO, the example procedure5400includes an operation5406to supply or adjust a network address value for the target end point(s), and operation5408. ReferencingFIG.55, an example procedure5500to transmit visualization data is schematically depicted. 
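The type dispatch of procedure 5204 (operations 5302-5312) can be sketched as a handler table: each external communication value is routed to the matching operation, repeating while further operations are indicated. The handler names and the `(kind, payload)` representation of an external communication value are assumptions for illustration.

```python
# Hypothetical sketch of operations 5302-5312: type dispatch of external
# communication values, looping while further operations remain.
def perform_active_diagnostic(v): return ("diagnostic", v)   # operation 5304
def perform_active_test(v): return ("test", v)               # operation 5306
def perform_vehicle_control(v): return ("control", v)        # operation 5308
def perform_active_assistance(v): return ("assistance", v)   # operation 5310

HANDLERS = {
    "active_diagnostic": perform_active_diagnostic,
    "active_test": perform_active_test,
    "vehicle_control": perform_vehicle_control,
    "active_assistance": perform_active_assistance,
}

def regulate_external_communications(values):
    """Operation 5302 determines each value's type; operation 5312 keeps
    the loop going while the external communication value indicates
    further operations (modeled here as remaining list entries)."""
    results = []
    for kind, payload in values:
        results.append(HANDLERS[kind](payload))
    return results
```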
The example procedure5500includes an operation5502to interpret vehicle communications data, an operation5504to generate visualization data in response to the vehicle communications data, and an operation5506to transmit the visualization data. ReferencingFIG.56, an example procedure5600to transmit visualization data is schematically depicted. The example procedure5600includes an operation5502to interpret vehicle communications data, an operation5602to interpret a data filtering value, and an operation5604to filter at least a portion of the vehicle communications data based, at least in part, on the data filtering value. The example procedure5600further includes an operation5504to generate visualization data in response to the vehicle communications data, and an operation5506to transmit the visualization data. ReferencingFIG.57, an example procedure5700to regulate inter-network, intra-network, and/or extra-vehicle communications is schematically depicted. The example procedure5700includes an operation5702to interpret a policy including a network regulation description, an operation5704to configure network interface circuit(s) in response to the network regulation description, and an operation5706to regulate inter-network communications and/or intra-network communications using the configured network interface circuit(s). The example procedure5700further includes an operation5708to configure a gatekeeper interface circuit in response to the network regulation description, and an operation5710to regulate extra-vehicle communications using the configured gatekeeper interface circuit. ReferencingFIG.58, an example procedure5800to regulate inter-network, intra-network, and/or extra-vehicle communications is schematically depicted. 
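The filtering and visualization steps of procedure 5600 (operations 5602, 5604, and 5504 above) can be sketched as follows. The data filtering value is assumed here to be a set of permitted message IDs, and the message layout is a hypothetical dict; the disclosure does not fix either representation.

```python
# Minimal sketch of operations 5602/5604/5504: filter vehicle
# communications data with a data filtering value, then reduce the
# surviving messages to visualization data. Message layout is assumed.
def filter_vehicle_data(messages, permitted_ids):
    """Operation 5604: pass through only messages whose ID is permitted
    by the data filtering value."""
    return [m for m in messages if m["id"] in permitted_ids]

def generate_visualization_data(messages):
    """Operation 5504 (schematic): reduce messages to (time, value)
    points suitable for plotting."""
    return [(m["time"], m["value"]) for m in messages]
```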
In addition to operations depicted in relation to procedure5700, the example procedure5800includes an operation5802to receive a policy communication from an external device, and an operation5804to determine whether the policy is verified—for example if the external device is authorized to update the policy, if the system is capable of performing according to the policy, if the policy violates any security criteria, if the performance of the policy would exceed a data storage limit or a communication limit, etc. In response to operation5804determining YES, the example procedure includes operation5806to store and/or update the policy, and operations5704(which may further include configuring the gatekeeper interface circuit), operation5706(and/or operation5710). In response to operation5804determining NO, the example procedure5800optionally includes an operation5807to provide a notification to one or more external devices, and proceeds to operation5704. ReferencingFIG.59, an example procedure5900to regulate extra-vehicle communications is schematically depicted. The example procedure5900includes an operation5902to interpret a policy including a network usage permission description and/or an external data access description, an operation5904to configure network interface circuit(s) in response to the network usage permission description, and an operation5906to regulate intra-network and/or inter-network communications using the network interface circuit(s). The example procedure5900includes an operation5908to configure a gatekeeper interface circuit in response to the external data access description, and an operation5910to regulate extra-vehicle communications using the gatekeeper interface circuit. ReferencingFIG.60, an example procedure6000to regulate inter-network, intra-network, and/or extra-vehicle communications is schematically depicted. 
The example procedure6000includes an operation6002to determine an authorization for a local communicating device for a regulated communication, an operation6004to configure network interface circuit(s) and/or a gatekeeper interface circuit in response to the authorization, and an operation6006to regulate intra-network, inter-network, and/or extra-vehicle communications using the network interface circuit(s) and/or gatekeeper interface circuit. ReferencingFIG.61, an example procedure6100to regulate extra-vehicle communications is schematically depicted. The example procedure6100includes an operation6102to interpret a policy including an external data quantity description, an operation6104to configure a gatekeeper interface circuit in response to the external data quantity description, and an operation6106to regulate extra-vehicle communications using the gatekeeper interface circuit. ReferencingFIG.62, an example procedure6200to regulate extra-vehicle communications is schematically depicted. The example procedure6200includes an operation6202to interpret a policy including an external data routing description, an operation6204to configure a gatekeeper interface circuit in response to the external data routing description, and an operation6206to regulate extra-vehicle communications using the gatekeeper interface circuit. ReferencingFIG.63, an example procedure6300to regulate extra-vehicle communications is schematically depicted. The example procedure6300includes an operation6302to interpret a policy including an external data routing path corresponding to each of a number of local communicating devices, an operation6304to configure a gatekeeper interface circuit in response to the external data routing path, and an operation6306to regulate extra-vehicle communications using the gatekeeper interface circuit. ReferencingFIG.64, an example procedure6400to regulate extra-vehicle communications is schematically depicted. 
The example procedure6400includes an operation6402to interpret a policy including an external data service description, an operation6404to configure a gatekeeper interface circuit in response to the external data service description, and an operation6406to regulate extra-vehicle communications using the gatekeeper interface circuit. ReferencingFIG.65, an example procedure6500to service a data request including access to an external device is schematically depicted. The example procedure6500includes an operation6502to interpret a data request including access to an external device, and an operation6504to determine whether an external DNS includes the external device. In response to operation6504determining YES, the example procedure6500includes an operation6506to service the data request using the external address value from the external DNS. In response to operation6504determining NO, the example procedure6500includes an operation6508to access an off-vehicle external DNS to determine an external address value for the external device, and an operation6510to service the data request using the external address value from the off-vehicle external DNS. ReferencingFIG.66, an example procedure6600to provide an extra-vehicle communication using a selected network zone trajectory is schematically depicted. The example procedure6600includes an operation6602to provide an extra-vehicle communication using a selected network zone trajectory, and an operation6604to perform data configuration operations on the extra-vehicle communication based on the network zone trajectory. Example operations6604include one or more of: up-sampling, down-sampling, data processing, payload processing, frame processing, encapsulation operations, and/or data rate management operations. ReferencingFIG.67, an example procedure6700to provide extra-vehicle communications using a selected QoS value is schematically depicted. 
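The DNS fallback of procedure 6500 (operations 6502-6510) can be sketched as a local lookup with an off-vehicle fallback. Both DNS tables are assumed here to be simple name-to-address mappings, and the caching of the off-vehicle result is an added assumption for illustration, not a step recited in the procedure.

```python
# Hedged sketch of operations 6504-6510: resolve an external device using
# the vehicle's external DNS, falling back to an off-vehicle external DNS
# when the device is not found locally. Table layouts are assumptions.
def resolve_external(device, local_dns, off_vehicle_lookup):
    """Return the external address value used to service the data request."""
    if device in local_dns:                 # operation 6504 determining YES
        return local_dns[device]            # operation 6506: local address value
    address = off_vehicle_lookup(device)    # operation 6508: off-vehicle DNS access
    local_dns[device] = address             # assumed cache for later requests
    return address                          # operation 6510: service the request
```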
The example procedure6700includes an operation6702to provide an extra-vehicle communication using a selected QoS value, and an operation6704to perform distribution of communications between extra-communication portal(s) and/or APNs based on the QoS value. ReferencingFIG.68, a number of illustrative examples of message translation and/or message encapsulation embodiments are schematically depicted. The examples ofFIG.68illustrate certain aspects of the present disclosure, but are non-limiting to the disclosure. In certain embodiments, operations depicted inFIG.68may be performed in whole or part by a CEG, a CES, a translation circuit, and/or the CND, and in certain embodiments operations depicted inFIG.68may be regulated by the CND. The first example message translation6802includes a message from a first network having a payload6810and other frame information6808. The other frame information may include headers, trailing aspects and/or termination bits, and further may be determined by the relevant protocol, network type, source end point, destination end point, or other aspects as known in the art. In certain embodiments, the payload6810may be the message data, a data value expressed by the message, or other information considered to be the content of the message. However, in certain embodiments, for certain operations, during certain operating conditions, and/or for certain end points, the payload6810may be some other aspect of the message. For example, a network monitoring operation may utilize a time stamp, acknowledgement information, source and/or destination information, or other portions of the message as the payload. The example message translation6802includes separating the payload6810, and packaging the payload into a new frame (or packet)6812, with information configured for the target network. 
Additionally or alternatively, the new frame6812may include adjustment of an identifier (e.g., a source or destination), a time stamp, or other information allowing end points on disparate networks to be abstracted from knowledge about each other. In certain embodiments, the payload6810may be processed, for example to change units utilized, bit depth (e.g., 2 bytes versus 4 bytes), expressed precision, floating point or fixed point conversions, or the like. The second example message translation6804includes the original message6808,6810, fully encapsulated within a new frame6812, for example to provide a target end point with the original message as provided by the original source (e.g., allowing a previously developed algorithm to operate as-is, without having to translate to a new message; to allow for certain network monitoring operations utilizing the full original message, etc.). In certain embodiments, either the original payload6810or message frame6808may be processed, for example processing the payload as described preceding, updating a source identifier, time stamp, or the like to a new convention that is translated to abstract end points from each other, but providing otherwise equivalent or systematically adjusted information. The third example message translation6806includes the original message6808,6810, with an adjusted payload6814. The adjustment to the payload6814can include translation of the payload6814in some manner (e.g., a corrected value, a virtually sensed or modeled value based on the original payload6810, an up-sampled or down-sampled payload6810, or the like), and may additionally or alternatively include processing of the payload. The third example message translation6806describes an adjusted payload6814, although an adjustment may additionally or alternatively be performed on other portions of the message frame6808. In the third example message, a new frame6812is applied for communication to another network. 
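The three translation styles of FIG. 68 can be sketched schematically as below. The frame layout (a dict carrying `frame` metadata and a `payload`) is an assumption introduced for illustration; real frames are protocol-specific bit layouts, and the adjustment function stands in for any correction, virtual-sensing, or re-sampling operation.

```python
# Illustrative sketches of the FIG. 68 translation examples. The message
# representation is an assumption for the sketch only.
def translate_payload(msg, target_frame):
    """First example (6802): separate the payload and package it into a
    new frame configured for the target network."""
    return {"frame": target_frame, "payload": msg["payload"]}

def encapsulate(msg, target_frame):
    """Second example (6804): fully encapsulate the original message,
    frame information and payload, within a new frame."""
    return {"frame": target_frame, "payload": msg}

def translate_adjusted(msg, target_frame, adjust):
    """Third example (6806): apply an adjustment (e.g., a corrected or
    re-scaled value) to the payload before framing for the target network."""
    return {"frame": target_frame, "payload": adjust(msg["payload"])}
```

The second form preserves the original message so a previously developed algorithm at the target end point can consume it as-is; the other two abstract the end points from each other's framing.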
ReferencingFIG.69, an operation to down-sample a sequence of messages6902is schematically depicted. In the example ofFIG.69, a message sequence6902(e.g., a series of five communications, in the example) is received, for example, at a network interface circuit of one of the network gateway devices. In the example ofFIG.69, the down-sampling operation is responsive to any down-sampling operations described herein, for example to match a receiving end point data rate, to provide the data represented by the messages6902at a scheduled rate, to manage bandwidth on a network of the vehicle and/or for extra vehicle communications, to preserve buffer memory, or for any other purpose, including any down sampling operations of the present disclosure. In the example ofFIG.69, the down-sampling device6904, which may be a translation circuit, network interface circuit, the CND, a circuit associated with the CND, a circuit regulated by the CND, or the like, generates a translated sequence of messages6908(e.g., processed as depicted inFIG.68and the related disclosure, and/or according to any other message translation and/or message processing operations set forth herein). The example ofFIG.69depicts the translated sequence of messages6908for clarity of the description. However, the translated sequence of messages6908may not all be present at the same time, for example as messages are translated and sent they may be removed, deleted, expire from a cache, etc. The sequence of messages6908is depicted to illustrate aspects of the present disclosure. Additionally or alternatively, translation of the messages6908may be performed after down-sampling operations are performed, for example to reduce utilization of processing resources. 
For example, some of the messages may be eliminated as a part of the down-sampling before the translation operations (e.g., replacement of frame portions or metadata, encapsulation, processing of the payload and/or frame portions, etc.) are performed. In the example ofFIG.69, a down-sampled sequence of messages6906is provided and communicated, for example to a different network gateway device, to a different network of the vehicle from which the first sequence of messages6902is received, to an external device (e.g., service tool, cloud server, operator's mobile device, etc.), and/or stored on a memory storage device on the vehicle (e.g., for later data collection operations, as a part of stored vehicle data, etc.). In the example, the five messages of the original sequence6902are down-sampled to three messages of the down-sampled sequence6906. The down-sampling operations can include converting selected messages from the original sequence6902, for example changing an original 10 ms data stream6902to a down-sampled 20 ms data stream6906by utilizing every other data message. The down-sampling operations may, additionally or alternatively, include interpolation of data messages between original values. For example, where the original data stream6902is a 40 ms data stream, and the down-sampled data stream6906is a 100 ms data stream, the down-sampling may include either taking the closest-in-time messages, or performing an interpolation operation (e.g., applying a linear fit, spline fit, polynomial fit, or other interpolation operation for spanning data points), to be utilized as the down-sampled messages6906. Spanning data points or values, as utilized herein, indicate data values in the down-sampled messages6906that do not align in time with a corresponding original data message6902. 
Non-spanning data points or values, as utilized herein, indicate data values in the down-sampled messages6906that align in time, or are synchronized, with the corresponding original data message6902. It will be understood that messages of the original data message6902and down-sampled messages6906may additionally or alternatively have a phase difference, and accordingly, in certain embodiments, any or all of the original data messages6902may be non-spanning messages. In certain embodiments, even where a phase difference between the original data message6902and the down-sampled messages6906is present, certain messages of the original data messages6902may be treated as non-spanning or synchronized data messages, for example to provide a baseline down-sampled message6906stream that follows the progression character (e.g., in the time domain) of the original data message6902stream, and/or where any phase difference can be ignored for the purpose of devices or operations utilizing the down-sampled message6906(e.g., where such devices or operations have a response time, a required reaction time, or the like, that is significantly greater than the magnitude of any such phase difference). In a further example, synchronized data values (e.g., every fifth data value when converting from 40 ms to 100 ms) may be utilized directly, or may also utilize a fitting function (e.g., to provide a smooth, filtered, or otherwise processed stream of data values). In certain embodiments, it may be desirable to utilize actual data values provided from the first data stream6902as the down-sampled data values6906, where minor transient behavior from the different time steps is either not relevant to how the down-sampled data value6906is utilized, or where time stamp data is also communicated with the messages and accordingly the differential time steps between messages can be accounted for in processes that utilize the down-sampled data6906. 
In certain embodiments, it may be desirable to utilize smoothed data values that simulate the time response behavior of the underlying data, which may be managed utilizing interpolated data for spanning data values (e.g., processes that are responsive to a rate-of-change in the down-sampled data6906, such as threshold checks on the rate-of-change). In certain embodiments, for example where a downstream process is particularly sensitive to time variation of the data messages6902(e.g., a derivative portion of a PID controller), it may be desirable to ensure that all down-sampled data messages6906are generated from the same process, and interpolation operations (or smoothing, filtering, or moving average values) may be performed to generate both spanning and non-spanning data values6906. In certain embodiments, down-sampled data messages6906may further include metadata or other embedded information indicating whether the message corresponds directly to an original data message6902or is a processed message (e.g., allowing more than one use for the down-sampled data messages6906, diagnostic operations for a device providing the original data message6902, and/or for any other purpose). It can be seen that the down-sampling operations ofFIG.69allow for communication between devices and/or procedures having differing data rate capabilities, expectations, and/or usage rates of the down-sampled data. Additionally, down-sampling operations ofFIG.69allow for reduction in network utilization while providing sufficient data for devices and/or procedures to perform the intended functions, and with expected time domain response (e.g., derivative behavior, integrating behavior, step change response, etc.) for proper functionality of devices and procedures that may rely upon the time dynamics of communicated data values. 
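The two down-sampling strategies described above (decimation for integer rate ratios, such as the 10 ms to 20 ms example, and interpolation for spanning values, such as the 40 ms to 100 ms example) can be sketched as below. Messages are modeled as `(time_ms, value)` pairs, which is an assumption for the sketch; real messages carry full frames and payloads.

```python
# Minimal down-sampling sketches under the conditions described above.
def decimate(messages, keep_every):
    """Keep every Nth message, e.g. keep_every=2 converts a 10 ms
    stream into a 20 ms stream by utilizing every other data message."""
    return messages[::keep_every]

def downsample_interpolated(messages, out_period_ms):
    """Produce one value per out_period_ms, linearly interpolating when
    an output time falls between originals (a spanning value)."""
    out, t = [], messages[0][0]
    while t <= messages[-1][0]:
        # find the bracketing pair of original messages for this output time
        for (t0, v0), (t1, v1) in zip(messages, messages[1:]):
            if t0 <= t <= t1:
                frac = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
                out.append((t, v0 + frac * (v1 - v0)))
                break
        t += out_period_ms
    return out
```

A spline, polynomial, or filtered fit could replace the linear interpolation here without changing the structure of the sketch.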
It can be seen that the down-sampling operations ofFIG.69allow for a progressive updating of communication aspects (e.g., components, devices, procedures, and/or operations each communicatively interacting with a network and/or other components, devices, procedures, and/or operations) of a mobile application having a mixed network configuration and/or a mix of legacy communication aspects (e.g., having a lower data rate capability and/or data rate expectation, and/or distinct network protocols, characteristics, message types, and the like) with updated communication aspects (e.g., having a higher data rate capability and/or data rate expectation, and/or distinct network protocols, characteristics, message types, and the like). ReferencingFIG.70, an operation to up-sample a sequence of messages7002is schematically depicted. In the example ofFIG.70, a message sequence7006(e.g., a series of three communications, in the example) is received, for example, at a network interface circuit of one of the network gateway devices. In the example ofFIG.70, the up-sampling operation is responsive to any up-sampling operations described herein, for example to match a receiving end point data rate, to provide the data represented by the messages7006at a scheduled rate, to manage bandwidth on a network of the vehicle and/or for extra vehicle communications, to preserve buffer memory, or for any other purpose, including any up sampling operations of the present disclosure. In the example ofFIG.70, the up-sampling device7004, which may be a translation circuit, network interface circuit, the CND, a circuit associated with the CND, a circuit regulated by the CND, or the like, generates a translated sequence of messages7008(e.g., processed as depicted inFIG.68and the related disclosure, and/or according to any other message translation and/or message processing operations set forth herein). 
The example ofFIG.70depicts the translated sequence of messages7008for clarity of the description. However, the translated sequence of messages7008may not all be present at the same time, for example as messages are translated and sent they may be removed, deleted, expire from a cache, etc. The sequence of messages7008is depicted to illustrate aspects of the present disclosure. Additionally or alternatively, translation of the messages7008may be performed after up-sampling operations are performed, for example to reduce utilization of processing resources. For example, some of the messages may be eliminated or adjusted as a part of the up-sampling before the translation operations (e.g., replacement of frame portions or metadata, encapsulation, processing of the payload and/or frame portions, etc.) are performed. In the example ofFIG.70, an up-sampled sequence of messages7002is provided and communicated, for example to a different network gateway device, to a different network of the vehicle from which the first sequence of messages7006is received, to an external device (e.g., service tool, cloud server, operator's mobile device, etc.), and/or stored on a memory storage device on the vehicle (e.g., for later data collection operations, as a part of stored vehicle data, etc.). In the example, the three messages of the original sequence7006are up-sampled to five messages of the up-sampled sequence7002. The up-sampling operations can include converting selected messages from the original sequence7006, for example changing an original 50 ms data stream7006to an up-sampled 20 ms data stream7002by inserting one or more generated messages7010. The up-sampling operations may, additionally or alternatively, include interpolation and/or extrapolation of data messages between original values. 
For example, where the original data stream7006is a 50 ms data stream, and the up-sampled data stream7002is a 20 ms data stream, the up-sampling may include either taking the closest-in-time messages, or performing an interpolation and/or extrapolation operation (e.g., applying a linear fit, spline fit, polynomial fit, moving average, and/or a low-pass filtered progression between available data points and/or between an available data point and a predicted next data point), to be utilized as the up-sampled messages7002. Spanning data points or values, as utilized herein, indicate data values in the up-sampled messages7002that do not align in time with a corresponding original data message7006. Non-spanning data points or values, as utilized herein, indicate data values in the up-sampled messages7002that align in time, or are synchronized, with the corresponding original data message7006. It will be understood that messages of the original data message7006and up-sampled messages7002may additionally or alternatively have a phase difference, and accordingly, in certain embodiments, any or all of the original data messages7006may be non-spanning messages. In certain embodiments, even where a phase difference between the original data message7006and the up-sampled messages7002is present, certain messages of the original data messages7006may be treated as non-spanning or synchronized data messages, for example to provide a baseline up-sampled message7002stream that follows the progression character (e.g., in the time domain) of the original data message7006stream, and/or where any phase difference can be ignored for the purpose of devices or operations utilizing the up-sampled message7002(e.g., where such devices or operations have a response time, a required reaction time, or the like, that is significantly greater than the magnitude of any such phase difference). 
In a further example, synchronized data values (e.g., every other data value when converting from 50 ms to 20 ms, such as the 0 ms phase value and the 100 ms phase value) may be utilized directly, or may also utilize a fitting function (e.g., to provide a smooth, filtered, or otherwise processed stream of data values). In certain embodiments, it may be desirable to utilize actual data values provided from the first data stream7006as the up-sampled data values7002, for example where minor transient behavior from the different time steps is either not relevant to how the up-sampled data value7002is utilized, or where time stamp data is also communicated with the messages and accordingly the differential time steps between messages can be accounted for in processes that utilize the up-sampled data7002. Accordingly, in certain embodiments, each message of the up-sampled data values7002may correspond directly to one or more of the first data stream7006values (e.g., selecting a synchronized one, a closest one, and/or a most recent one (e.g., holding the communicated value until a next value is available) of the first data stream7006values). In certain embodiments, it may be desirable to utilize smoothed data values that simulate the time response behavior of the underlying data (e.g., original messages7006), which may be managed utilizing interpolated/extrapolated data for spanning data values (e.g., processes that are responsive to a rate-of-change in the up-sampled data7002, such as threshold checks on the rate-of-change), and/or also for non-spanning data values. 
In certain embodiments, for example where a downstream process is particularly sensitive to time variation of the data messages7006(e.g., a derivative portion of a PID controller), it may be desirable to ensure that all up-sampled data messages7002are generated from the same process, and interpolation/extrapolation operations (and/or smoothing, filtering, and/or moving average values) may be performed to generate both the spanning and non-spanning up-sampled data values7002. In certain embodiments, non-spanning up-sampled data values7002are utilized directly (e.g., to provide an up-sampled data7002stream having the actual content of the data messages7006to the extent possible), and spanning up-sampled data values are processed as described herein. In certain embodiments, all original messages7006are provided in the up-sampled data7002stream, with additional non-spanning messages added to achieve the data rate of the up-sampled data7002stream (e.g., to provide all of the original messages7006, and additionally support the up-sampling rate). In certain embodiments, up-sampled data messages7002may further include metadata or other embedded information indicating whether the message corresponds directly to an original data message7006or is a processed message (e.g., allowing more than one use for the up-sampled data messages7002, diagnostic operations for a device providing the original data message7006, and/or for any other purpose). In certain embodiments, spanning up-sampled data values7002may be determined based on predicted values between non-spanning data values, which may be performed based on a virtual sensor (e.g., a model of the value utilizing other information available in the system) and/or an extrapolation fitting operation. 
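Two of the up-sampling strategies described above for the 50 ms to 20 ms example, sample-and-hold (use the most recent original value until the next arrives) and linear interpolation for spanning values, can be sketched as below. As before, the `(time_ms, value)` message representation is an assumption for illustration.

```python
# Minimal up-sampling sketches under the conditions described above.
def upsample_hold(messages, out_period_ms, end_ms):
    """Hold the most recent original value at each output time
    (holding the communicated value until a next value is available)."""
    out, idx = [], 0
    for t in range(messages[0][0], end_ms + 1, out_period_ms):
        while idx + 1 < len(messages) and messages[idx + 1][0] <= t:
            idx += 1                      # advance to the most recent original
        out.append((t, messages[idx][1]))
    return out

def upsample_interpolated(messages, out_period_ms):
    """Linearly interpolate spanning values between original messages;
    non-spanning values come through unchanged (frac == 0 or 1)."""
    out = []
    for t in range(messages[0][0], messages[-1][0] + 1, out_period_ms):
        for (t0, v0), (t1, v1) in zip(messages, messages[1:]):
            if t0 <= t <= t1:
                frac = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
                out.append((t, v0 + frac * (v1 - v0)))
                break
    return out
```

The hold variant preserves the actual content of the original messages at every output time; the interpolated variant better preserves the time-domain progression for rate-of-change-sensitive consumers.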
In certain embodiments, determination of spanning up-sampled data values7002additionally or alternatively includes providing predicted and/or interpolated/extrapolated values that provide an expressed rate of change of the up-sampled data values7002determined according to the original data values7006and/or adjusted according to the characteristics of a device, component, operation, and/or procedure utilizing the up-sampled data values7002. For example, up-sampling operations may include performing a predictive operation and/or interpolation/extrapolation to determine a rate of change for the value, and providing a final spanning up-sampled data value7002that provides the predicted rate of change for the up-sampled data value7002. In certain embodiments, operations to provide the up-sampled data values7002include an operation to identify a rate of change (or derivative) determination operation performed in a device utilizing the up-sampled data values7002, and to adjust the rate of change of the up-sampled data values7002in response to parameters of the rate of change determination in the device, for example interpreting data related to a time step utilized for the derivative operation (e.g., ΔT/5 ms, or change-in-temperature per 5 milliseconds) and/or a time constant (e.g., a time constant of a low-pass filter, a time constant implicit in a moving average calculation, etc.), where the up-sampled data value7002is adjusted to provide a desired response in the rate of change calculations that will be performed on the up-sampled data values7002.
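As one non-limiting illustration of adjusting the expressed rate of change, spanning values between two original data values may be shaped with a first-order response matched to an assumed filter time constant of the consuming device, rather than a straight line; the specific first-order form and all names below are assumptions:

```python
import math

def shape_spanning_values(v_prev, v_next, span_ms, out_period_ms, tau_ms):
    """Generate spanning up-sampled values between two original data values,
    shaped with a discrete first-order (low-pass) response toward the next
    original value.  Relative to linear interpolation, the expressed rate of
    change starts steeper and tapers off, approximating the trajectory a
    consumer's filter (time constant tau_ms) would see with true fast data.
    Illustrative sketch only; the disclosure does not mandate this form."""
    # Per-step gain of a discrete first-order response over one output period.
    alpha = 1.0 - math.exp(-out_period_ms / tau_ms)
    values = []
    v = v_prev
    t = out_period_ms
    while t < span_ms:
        v = v + alpha * (v_next - v)  # step toward the next original value
        values.append(v)
        t += out_period_ms
    return values
```

For a 50 ms span emitted at a 10 ms period, the four spanning values rise toward the next original value more steeply at first than a straight line would, then flatten, which changes the rate of change seen by downstream derivative or filter calculations.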
For example, where up-sampling operations have a significant difference in time steps between the original data value7006and the up-sampled data value7002(e.g., 50 ms to 5 ms), operations such as a linear interpolation/extrapolation of data values may provide significant distortion to the output of, for example, a low-pass filter operated by a device utilizing the up-sampled data value7002, which may be configured to process true 5-ms data. Accordingly, in the example, operations to up-sample the original data values7006may include adjusting the original data values7006in accordance with a predicted response of a device determining the values, which may provide significant differences in trajectory of the up-sampled data value7002between non-spanning data points relative to simple linear extrapolation, moving averages, or the like. Operations to adjust the expressed rate of change may be performed for up-sampled data7002, and/or for down-sampled data6906, or may be omitted. In certain embodiments, configuration information for up-sampling and/or down-sampling operations, such as: whether non-spanning original data values6902,7006are to be utilized directly; metadata to be stored with up-sampled and/or down-sampled data7002,6906; processing operations to be performed on spanning and/or non-spanning data values; whether all original data values6902,7006are to be communicated; operations to provide an expressed rate of change in the up-sampled and/or down-sampled data7002,6906; and/or parameters of a rate of change determination in a device utilizing the up-sampled and/or down-sampled data7002,6906(e.g., filter constants, derivative operations, etc.), may be provided in a memory storage location accessible to a controller and/or circuit performing up-sampling and/or down-sampling operations. 
Any such configuration information may be provided in whole or part at design time, such as when configuring a mobile application and devices communicating with various networks of the mobile application, and/or may be provided or updated during run-time operations. In certain embodiments, one or more aspects of the configuration information for up-sampling and/or down-sampling operations may be provided as a part of a policy, configuration instructions, and/or a configuration table, which may be accessible to a CND108regulating communications between devices on separate networks of the mobile application. In certain embodiments, one or more aspects of the configuration information for up-sampling and/or down-sampling operations may include default values which may be adjusted and/or updated, including as a part of a policy, configuration instructions, and/or a configuration table. ReferencingFIG.71, an example system for controlling inter-network communications, intra-network communications, and/or extra-vehicle communications utilizing a scheduled policy scheme is schematically depicted. The example system includes a vehicle102having at least one network (a first network zone7102and a second network zone7104, in the example ofFIG.71), a policy manager circuit7106that interprets a policy7108including external data communication parameters, such as an external data routing description and/or an external data service description. The example system includes a configuration circuit7110that configures a gatekeeper interface circuit7120in response to the policy7108, and that regulates communications between end points of the network zones7102,7104and an external communication portal7116. The external communication portal7116is selectively coupled to an external device7118. 
The external communication portal7116may be any external communication portal as set forth herein, including at least any one or more of the examples depicted in relation toFIG.41and the related description. In the example ofFIG.71, the gatekeeper interface circuit7120is depicted as coupled to the external communication portal(s)7116. However, the gatekeeper interface circuit7120may regulate communications in any manner, for example by further configuring the network interface circuit(s)7112,7114to allow selected communications, and/or communications having a selected processing, encapsulation, data file format, communication protocol, authorization, and/or any other regulation descriptions as described throughout the present disclosure. In the example ofFIG.71, the policy manager circuit7106, configuration circuit7110, and network interface circuit(s)7112,7114are depicted as positioned on the CND108. As described elsewhere herein, the CND108may provide instructions or otherwise regulate components, and the depicted components (and/or the CND108) may be distributed elsewhere on the vehicle102separate, in whole or part, from the CND108. ReferencingFIG.72, an example policy7108includes one or more of a secondary policy value7206, a primary policy value7204, and/or a default policy value7202. An example configuration circuit7110configures the gatekeeper interface circuit7120in response to the default policy value7202if there is no primary policy value7204and/or secondary policy value7206present (and/or if the primary policy value7204and/or secondary policy value7206are not valid), in response to the primary policy value7204if there is no secondary policy value7206present (and/or valid), and utilizing the secondary policy value7206if present (and valid).
An example configuration circuit7110applies the policies if present (and/or if determined to be valid) in the order described (e.g., using the secondary policy value7206if present, and ignoring any remaining policy values7204,7202). An example configuration circuit7110applies more than one policy value if the policy values are compatible and/or consistent (e.g., applying a secondary policy value7206, and applying portions of the primary policy value7204that are not in conflict with the secondary policy value7206). In the example ofFIG.72the default policy value7202may be a permanent storage policy (e.g., a policy stored with main executable instructions stored on a computer readable medium that include instructions for at least a portion of operations of the CND108and/or associated circuits therefore). In certain embodiments, the primary policy value7204and/or the secondary policy value7206include policy values that are readily updated in real time, for example stored as data files (e.g., provided at selected memory locations, selected OS logic location, according to certain naming conventions, and/or stored with selected header information, metadata, or the like identifying each policy value as a primary policy value7204or a secondary policy value7206), stored as a part of a calibration set, trim set, or the like. An example primary policy7204is a tool supplied policy, such as a manufacturer tool, OEM tool, service tool, or the like. In certain embodiments, the secondary policy value7206is a downloaded policy value, for example a policy value received from an external device through an external communications portal, and from a web based tool, cloud application, or the like. The recited examples are non-limiting, and any of the policy values may be received from any external communications portal. 
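The scheduled selection and composition of the policy values7202,7204,7206described above may be sketched as follows, where the dict-based policy representation and the validity hook are assumptions for illustration:

```python
def select_policy(default, primary=None, secondary=None, is_valid=lambda p: True):
    """Return the governing policy value and its schedule level: the
    secondary value if present and valid, else the primary value if present
    and valid, else the default value (assumed always present)."""
    if secondary is not None and is_valid(secondary):
        return secondary, "secondary"
    if primary is not None and is_valid(primary):
        return primary, "primary"
    return default, "default"

def effective_policy(default, primary=None, secondary=None):
    """Compose compatible policy portions as an overlay: default values are
    overridden by any non-conflicting primary values, which are in turn
    overridden by any secondary values (the highest-precedence level)."""
    merged = dict(default)
    merged.update(primary or {})
    merged.update(secondary or {})
    return merged
```

The overlay composition corresponds to applying a higher-precedence policy value while retaining portions of lower-precedence values that it does not contradict.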
An example implementation includes the default policy value7202provided at a time of initialization of the CND108or related control components (e.g., a first image file applied to a controller housing executable portions of the CND108, policy manager circuit7106, or the like), and which is not generally updated except, for example, as a part of an entire instruction set update (e.g., updating the executable instructions provided for the CND108and/or portions thereof). An example implementation includes the primary policy value7204provided at a time of manufacture, assembly, or other initial pre-mission service or assembly operation on the vehicle. An example implementation includes the secondary policy value7206provided as a downloaded operation, and/or provided during a service operation, trimming and/or application configuration operation (e.g., by an OEM, body builder, or the like). The utilization of the scheduled policy values7202,7204,7206allows for the implementation of a minimum capability (and/or lowest risk) policy, providing sufficient capability for devices of the vehicle to communicate externally, for example to download and/or act on a replacement policy such as a primary policy value7204and/or secondary policy value7206. The utilization of the scheduled policy values allows for various stakeholders in a manufacture, remanufacture, re-configuration, service, sale or transfer, mission change, or other vehicle related operation to ensure that policy requirements (e.g., permissions for local communicating devices to communicate within a network, across a network, to store data, and/or to communicate with external devices) are met, while allowing for ease of policy updates, implementations, and interfaces for third-parties, owner/operators, fleet owners, and the like to adjust policy values and resulting communication regulation operations. 
The utilization of the scheduled policy values7202,7204,7206allows for ease of policy updates, verification, and implementation. The utilization of scheduled policy values7202,7204,7206allows for re-configuration of a policy and/or regulatory response of communications to be adjusted in real time with a low impact to the mission of the vehicle (e.g., without controller reset operations, adjustment of primary executable instruction files, or the like), for example to adjust policies in response to regulatory characteristics such as geography (e.g., location of the vehicle), jurisdiction (e.g., jurisdictional location of the vehicle), and/or operations where direct control of the vehicle may not be available (e.g., after an accident, towing event, sale or other transfer, etc.). In certain embodiments, the scheduled policy values7202,7204,7206may be applied by one of a number of devices at different times, for example a default policy value7202applied by a first device, the primary policy value7204applied by a second device, and the secondary policy value7206applied by a third device. In certain embodiments, a given external device may apply more than one of the scheduled policy values7202,7204,7206, and/or apply a later version of one of the scheduled policy values7202,7204,7206at a later time relative to application of an earlier version. In certain embodiments, more than one version of a given policy value may be present (e.g., a secondary policy value7206) with a selected one of the versions utilized in response to operating conditions (e.g., vehicle operating conditions, geography, jurisdiction, off-nominal conditions and/or fault code conditions, etc.). 
In certain embodiments, a given policy value7206may include more than one version of an aspect of the policy, for example providing for different data collection operations for a given local communicating device, controller, flow, application, end point, etc., and selecting a version of the aspect of the policy in response to operating conditions. ReferencingFIG.73, an example policy7108includes a local DNS7302(e.g., including local addresses to be utilized by end points on any network zone, and/or including non-local addresses to be utilized by external devices, applications, or the like, and/or including external addresses to be utilized by end points on any network zone, etc.). The example policy7108further includes an authorization description7304, which may include authorizations of any type as referenced throughout the present disclosure, including authorizations of network utilization, data access descriptions, subscription authorizations, external access authorizations, policy change and/or update authorizations, and the like. The authorization description7304may reference flows, local communicating devices, external devices, end points, network zones, applications, service groups, vehicle controllers, source addresses, destination addresses, any other regulated components, and/or entities, users, and/or user roles associated with any of these. The example policy7108includes a firewall configuration description7306, which may include, for example, descriptions utilized by a firewall implementing device (e.g., a gateway interface circuit, CND, and/or an external communication portal) to determine how to operate firewall operations.
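A firewall implementing device consuming such a firewall configuration description7306might evaluate a communication roughly as follows, with blocking entries taking precedence over access entries and a default behavior applied to unspecified communications; the dict and set representation and all field names are assumptions:

```python
def firewall_decision(source, destination, config):
    """Decide whether to allow or block a communication against a firewall
    configuration description.  Data blocking entries are checked first,
    then data access (allow) entries, then the default behavior for unknown
    or unspecified communications."""
    if source in config.get("blocked_sources", set()) \
            or (source, destination) in config.get("blocked", set()):
        return "block"
    if (source, destination) in config.get("allowed", set()):
        return "allow"
    return config.get("default", "block")
```

A default of "block" models the common posture of rejecting communications from unknown external devices or addresses.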
In certain embodiments, the firewall configuration description7306includes a default behavior description (e.g., handling for unknown or unspecified communications, such as blocking communications from unknown external devices or addresses), a data access description (e.g., components of the system having permissions to contact certain addresses, certain communication types such as external devices that are responding to a request by a component, and/or scheduled access according to permissions or authorizations according to the component), and/or a data blocking description (e.g., components of the system that do not have permissions to access external devices or addresses, selected external devices or addresses, external devices or addresses that are specifically blocked, and/or specific communication types that are specifically blocked such as incoming communications requesting access to certain data types, flows, applications, vehicle functions, vehicle controllers, end points, or the like). ReferencingFIG.74, an example policy7108includes a local DNS7302and an external data quantity description7402. The external data quantity description7402may include any aspects of an external data quantity description as referenced throughout the present disclosure, including at least data caps for regulated components, data limits (e.g., bandwidth, utilization, amount of data per regulating event such as per unit time, per trip, etc.), data caps or limits associated with APNs and/or data service providers, associated with particular external communication portals, or the like. The example policy7108includes an external data service description7406, which may include any aspects of an external data service description7406as referenced throughout the present disclosure (e.g., referenceFIGS.42,64, and67, and the related descriptions). ReferencingFIG.75, an example procedure7500for regulating external communications is depicted schematically. 
The example procedure7500includes an operation7502to utilize, in order, a secondary policy value if present, a primary policy value if present, and a default policy value (e.g., if neither a secondary policy value nor a primary policy value is present). The example procedure7500further includes an operation7504to interpret a policy according to the utilized policy value(s), where the policy includes an external data routing description and an external data service description. The example procedure7500includes an operation7506to configure a gatekeeper interface circuit in response to the policy, and an operation7508to operate the gatekeeper interface circuit to regulate communications between networks of a vehicle and external communication portal(s) of the vehicle, thereby regulating communications between end points of network zones of the vehicle and external devices. ReferencingFIG.76, an example procedure7600for regulating external communications is schematically depicted. The example procedure7600includes an operation7602to interpret a policy including an external data quantity description, and an operation7604to determine a destination and/or source IP address (or other address), a destination and/or source port, and/or a destination and/or source identifier for a regulated communication and/or according to addresses, ports, and/or identifiers provided in the policy. The example procedure7600includes an operation7606to configure a gatekeeper interface circuit in response to the policy and the determined addresses, ports, and/or identifiers. The example procedure7600includes an operation7608to operate the gatekeeper interface circuit to regulate communications between networks of a vehicle and external communication portal(s) of the vehicle, thereby regulating communications between end points of network zones of the vehicle and external devices. ReferencingFIG.77, an example procedure7700for regulating external communications is schematically depicted.
The example procedure7700includes an operation7702to interpret a policy including a data routing description, and an operation7704to configure a gatekeeper interface circuit in response to the policy and/or determined values (e.g., reference operation7604), and an operation7706to operate the gatekeeper interface circuit to regulate communications between networks and/or end points of a vehicle and external communication portal(s) of the vehicle, thereby regulating communications between end points of network zones of the vehicle and external devices. An example procedure7700provides for routing communications to or from an end point, and switching the external communication portal (e.g., switching from cellular communications to WiFi communications) during communications, where the end point or other regulated component is not aware of, and is not interrupted by, the switch. ReferencingFIG.78, an example procedure7706to operate the gatekeeper interface circuit to regulate communications between networks and/or end points of a vehicle and external communication portal(s) of the vehicle is schematically depicted. The example procedure7706includes an operation7802to determine a selected external communication portal for a portion of a set of related communications (e.g., communications to or from an end point and/or a regulated component over a period of time, communications related as a block of data and/or data file, communications performed during a set of operating conditions, etc.). 
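The per-portion portal selection of operation7802, re-evaluated across a set of related communications so that the portal can switch mid-set without the end point being aware of, or interrupted by, the switch, may be sketched as follows; the callback signatures and names are assumptions for illustration:

```python
def send_related_communications(portions, select_portal, send):
    """Send a set of related communications portion by portion, re-selecting
    the external communication portal before each portion based on current
    conditions, so the portal can change mid-set (e.g., cellular to WiFi).

    `portions` is a sequence of (portion, conditions) pairs; `select_portal`
    maps conditions to a portal name; `send` transmits one portion over the
    chosen portal.  Returns the portal used for each portion.
    """
    used = []
    for portion, conditions in portions:
        portal = select_portal(conditions)  # re-evaluated for each portion
        send(portal, portion)
        used.append(portal)
    return used
```

In this sketch the selection criterion (cost, capability, data rate availability, connection availability, etc.) is entirely encapsulated in `select_portal`, so the sending loop, and hence the end point, is indifferent to which portal carries a given portion.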
The example procedure7706further includes an operation7804to send (or receive) the portion of the set of related communications utilizing the selected external communications portal, and an operation7806to determine if the set of related communications is complete (e.g., if the period of time has elapsed, the block of data and/or data file is sent, and/or if the operating conditions have changed—such as an operating condition related to the regulated component such as cruise control operation, vehicle backing, an indication from an application, flow, service group, external device, etc. that an operation has completed, etc.). In response to operation7806indicating NO, the procedure7706includes repeating operation7802, where the external communication portal may be changed for a next portion of the set of related communications. Operation7802may determine the external communications portal according to any criteria, including at least one or more of: a cost of using the external communications portal; a capability of the external communications portal based on current operating conditions and/or the regulated communications; a data rate availability of the external communications portal; an operating condition of the vehicle; a connection availability of the external communications portal; a data quantity limit of the external communications portal; a service delivery value of the external communications portal and/or associated with the regulated communications; and/or a time to complete the communication operations of the set of related communications utilizing the external communications portal. ReferencingFIG.79, an example procedure7900for regulating external communications is schematically depicted. The example procedure7900includes an operation7902to interpret a policy including a local DNS, an authorization description, and a firewall configuration description.
The example procedure7900includes an operation7904to configure a gatekeeper interface circuit in response to the policy, and an operation7908to operate the gatekeeper interface circuit to regulate communications between networks and/or end points of a vehicle and external communication portal(s) of the vehicle, thereby regulating communications between end points of network zones of the vehicle and external devices. An example procedure7900optionally includes an operation7906to further configure the gatekeeper interface circuit in response to one or more of an external data quantity description, an external data service description, and/or a non-local DNS (e.g., from the policy). ReferencingFIG.80, an example procedure8000for regulating external communications is schematically depicted. The example procedure8000includes an operation8002to interpret a policy including a local DNS, an external data quantity description, and an external data service description. The example procedure8000includes an operation8004to configure a gatekeeper interface circuit in response to the policy, and an operation8008to operate the gatekeeper interface circuit to regulate communications between networks and/or end points of a vehicle and external communication portal(s) of the vehicle, thereby regulating communications between end points of network zones of the vehicle and external devices. An example procedure8000optionally includes an operation8006to further configure the gatekeeper interface circuit in response to one or more of a firewall configuration description, an authorization description, and/or a non-local DNS (e.g., from the policy). ReferencingFIG.81, an example system8100is depicted for regulating network communications on a vehicle, and/or for regulating extra-vehicle communications, using a CND that is externally configured. 
The example system8100includes a vehicle102having a first network zone8102and a second network zone8104, for example network zones of a different type, such as in a vehicle having a mixed network. The example system8100includes a CND108interposed (physically and/or logically) between the network zones8102,8104, and further interposed (physically and/or logically) between the network zones and one or more external devices8103. The example CND108includes a policy manager circuit8106that interprets a policy8114, where the policy8114is communicated to the CND108from an external device8103(e.g., with the external device8103providing a policy communication8120, where the CND108determines the policy8114in response to the policy communication8120). The example system8100includes a configuration circuit8108that configures network interface circuit(s) (e.g., a first network interface circuit8110and a second network interface circuit8112) in response to the policy8114. In certain embodiments, the configuration circuit8108may configure a gatekeeper interface circuit (not shown) in response to the policy8114. The system8100includes the network interface circuit(s)8110,8112regulating communications between end points of the first network zone8102and the second network zone8104, for example as configured by the configuration circuit8108. In certain embodiments, the system8100includes a gatekeeper interface circuit that regulates communications between end points of either or both network zones8102,8104and external communication portal(s) (not shown) and/or external devices8103. The external devices8103for regulated communications may be the same, or distinct, external devices8103that provide the policy8114. 
Regulating operations may be performed on inter-network communications (e.g., between network zones), intra-network communications (e.g., between devices on a given network zone), extra-vehicle communications, or other communications (e.g., communications to external devices, service tools, user devices, etc.). Any regulating operations described throughout the present disclosure are contemplated for system8100. The example ofFIG.81includes the policy communication8120having aspects such as inter-network regulation8122parameters, intra-network regulation8124parameters, extra-vehicle communication parameters (not shown), permissions and/or authorizations8126related to the policy, and/or data collection parameters8128related to the policy. The example aspects of the policy communication8120, and the corresponding implementation of these aspects in the policy8114on-vehicle, are non-limiting examples provided for illustration. A given embodiment may include additional aspects of the policy, and/or may omit one or more of the depicted aspects. An example system8100includes the external device8103being a cloud application (e.g., operating on a cloud server or other computing device, at least intermittently in communication with the vehicle), a web based tool, combinations of these, and/or having portions of the external device8103being one of these, with other portions provided through other implementations (e.g., a service tool, fleet tool, operator mobile device, etc.). An example external device8103includes a policy development interface8115that accepts policy input value(s)8132from a number of users (e.g., via user input device(s)8132), a policy formulation engine8116that compiles the policy input value(s)8132into a policy8114(and/or into one or more aspects of a policy communication8120utilized to provide the policy to the CND108), and a policy application engine8118that communicates the policy8114(and/or the policy communication8120) to the CND108.
An example policy development interface8115interacts with user devices8132to accept policy input value(s)8132, for example operating a GUI with the user devices8132, operating an interacting application such as a web based tool, cloud application, mobile application, etc. to receive the policy input value(s)8132. In certain embodiments, the policy development interface8115accepts a configuration file (e.g., an XML file, standardized format file, etc.) from a user device8132as a policy input value8132. In certain embodiments, accepting the policy input value(s)8132includes operations such as: determining whether a policy input value8132is proper (e.g., formatting, permissions associated with the user device and/or entity associated with the user device, compatibility of the policy input with available parameters, functions, sampling rates, etc. on the vehicle, and the like); parsing the policy input value8132into portions (e.g., data collection, network usage permission, external vehicle communication permissions, associations such as flows, applications, vehicle functions, service groups, and the like for policy portions, etc.); associating metadata with the policy input value8132or portions thereof (e.g., time stamps; versions of a policy, related applications, etc.; identifiers associated therewith, such as a user, user role, related entity, user device identifier, etc.); and/or prioritizing between policy input values8132(e.g., such as when policy input values8132are not compatible, and/or cannot all be included such as when an aggregate policy size limitation would be exceeded, and which may be according to any aspect of the policy input value such as data type or related vehicle function, and/or according to any association with the policy input value8132such as an associated entity, etc.). 
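The acceptance operations enumerated above (propriety checks, parsing into portions, metadata association, and prioritization) may be sketched as a small pipeline; the field names, permission model, and size limit below are assumptions for illustration:

```python
import time

def accept_policy_input(policy_input, authorized_users, max_portions):
    """Accept a policy input value: verify it is proper (permissions check),
    parse it into portions, prioritize portions against an aggregate size
    limit, and associate metadata with each accepted portion.  Returns the
    accepted portions, or None if the input is rejected."""
    # Propriety check: the submitting user must be authorized.
    if policy_input.get("user") not in authorized_users:
        return None
    portions = policy_input.get("portions", [])
    if not isinstance(portions, list):
        return None
    # Prioritize: keep the highest-priority portions within the limit.
    portions = sorted(portions, key=lambda p: p.get("priority", 0), reverse=True)
    portions = portions[:max_portions]
    # Associate metadata (submitter identifier and acceptance time stamp).
    meta = {"user": policy_input["user"], "accepted_at": time.time()}
    return [dict(p, **meta) for p in portions]
```

A real implementation would also validate formatting and compatibility with available vehicle parameters, functions, and sampling rates; those checks are elided here.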
An example system8100includes a policy interaction engine8119that generates policy interaction code8134, such as header file(s), parameter definition(s), and/or an API declaration. The policy interaction engine8119facilitates a user-friendly development of a policy and/or portions of a policy by users, applications, and/or tools, allowing users to conveniently interact with aspects of the policy that they are authorized to develop, to select available parameters, functions, control commands, and the like, and to minimize vehicle-specific knowledge requirements for users developing the policy and/or aspects of the policy. An example system8100includes a policy8114having a data collection definition (e.g., data parameters to be collected, and/or including information such as processing to be performed, data formats for individual data elements, data formats for storage of the data such as a file type for the stored data, communication parameters such as data rates, timeliness, treatment of aging data and/or expiration of data, etc., including any data collection parameters set forth throughout the present disclosure). An example data collection definition includes at least one local communicating device (e.g., an end point, flow, application, network zone, vehicle function, service group, etc. as described throughout the present disclosure) corresponding to at least one data collection parameter. An example system8100further includes a user entering an identifier, address, and/or port for a source and/or for a destination of the collected data (e.g., identifying the local communicating device(s) that is(are) the source for the collected data, and/or identifying a destination for the collected data)—e.g., by the user providing the data collection definition as a policy input value8132that is thereby implemented as a part of the policy8114. 
An example system8100includes the CND108performing a data collection operation utilizing the data collection definition, thereby collecting data from the vehicle according to the user entered parameters for the generated data source and/or destination. An example system8100includes an operation to provide all or a portion of the data collection definition, which may be performed instead of utilizing user-defined portions (e.g., where addresses or other information are intentionally hidden from the user for security purposes and/or to facilitate ease of implementation of user entry of policy input values), and/or in addition to utilizing user-defined portions (e.g., to correct a user-defined portion that may have an incorrect value, to translate a user-defined portion that may be utilizing a legacy addressing value for an end point, etc.). In certain embodiments, the CND108may perform operations to provide all or a portion of the data collection definition, for example utilizing translating information provided in the policy8114available to the CND108, to translate addresses where an end point of the vehicle has moved (e.g., between network zones and/or to a different address), or the like. In certain embodiments, the policy formulation engine8116may perform operations to provide all or a portion of the data collection definition, for example to mask addresses from a user device, to allow reference to data parameters according to an industry standard, simplified description, or the like, and/or where certain responsibilities to perform operations for providing, updating, and/or correcting the data collection definition are divided between the CND108and the policy formulation engine8116. 
For example, the CND108may perform certain operations to provide, update, and/or correct the data collection definition (e.g., local, vehicle-specific operations such as local address translations), and the policy formulation engine8116may perform other operations to provide, update, and/or correct the data collection definition (e.g., server-side operations such as data destination locations off-vehicle, providing scheduled information availability and/or capability to different users, user devices, applications, entities, and the like, etc.). ReferencingFIG.82, an example visualization management controller8212is depicted, which is configured to functionally execute operations to depict data flows on the vehicle, and/or to provide visualizations of the vehicle network and aspects of the network utilization, CND, end points, or the like. The example visualization management controller8212may be utilized with any system throughout the present disclosure, and/or to perform one or more aspects of operations throughout the present disclosure. The visualization management controller8212may be distributed across one or more vehicle controllers, the CND, and/or an external device, and/or may be provided on a single one of these. The aspects of the visualization management controller8212that are provided on-vehicle and/or external to the vehicle may vary depending upon the characteristics of the system, the entities (e.g., controllers, applications, flows, external devices, third-party applications, etc.) 
that are expected to access vehicle network data (and/or that will have capability to access vehicle network data), the communication plan (e.g., the scheme to communicate network data and/or visualization data from the vehicle and/or from a cloud storage location), and/or the processing plan (e.g., the scheme to process monitoring data into visualization data, the types of processing to be performed, and the number of distinct types of processing to be performed for various clients of the visualization data). A visualization management controller8212may be utilized to monitor vehicle networks (e.g., to diagnose issues on one or more networks, to monitor communications from local communicating devices, and/or to diagnose secondary issues that may be presented by unusual network utilization and/or data flow on the vehicle). The example visualization management controller8212includes a vehicle communication circuit8202that interprets vehicle communications data8208(e.g., data flow on a network zone, between network zones, through the CND or other regulating components, and/or related to particular end points, flows, service groups, vehicle controllers, vehicle functions, applications, etc.). Example vehicle communications data8208includes one or more of the following: communications between end points of a network zone of the vehicle (e.g., on the same or on different network zones); and/or communications between local communicating devices (e.g., on the same or on different network zones, and/or distributed across more than one network zone). 
The example visualization management controller8212includes a visualization circuit8204that generates visualization data8210(e.g., referenceFIGS.20-27and the related descriptions), and a display interface circuit8206that transmits the visualization data8210, for example to an external device, to a user device (e.g., a service tool, network monitoring tool, a third-party application, and/or an application utilized by a user monitoring the network(s) of the vehicle and/or other aspects of the vehicle related to the networks and/or data flows of the vehicle). An example visualization management controller8212includes the vehicle communication circuit8202positioned, in whole or part, on the vehicle (e.g., on the CND, on a vehicle controller, and/or on a network interface circuit), where the vehicle communication data8208is provided to a port of a network zone (e.g., a monitoring port, a mirrored port, and/or a port otherwise accessible to an external device). An example visualization management controller8212includes the visualization circuit positioned on an external device, where the display interface circuit8206provides the visualization data8210to a user device communicatively coupled to the external device. Without limitation to any other aspect of the present disclosure, example visualization data8210includes one or more of the following: a graphical representation of at least a portion of communications between local communicating devices of the vehicle; a graphical flow representation of at least a portion of communications passing through the CND; a graphical flow representation of at least a portion of communications regulated by at least one of the first network interface circuit or the second network interface circuit; and/or a graphical flow representation of at least a portion of communications passing between the first network zone and the second network zone. 
Example and non-limiting graphical flow representations include a data table depicting data flows, and/or any aspects of data flows as described throughout the present disclosure. ReferencingFIG.83, an example procedure8300to transmit visualization data to an external device and/or a user device is schematically depicted. The example procedure8300includes an operation8302to interpret a policy from an external device, and an operation8304to configure network interface circuit(s) and/or a gatekeeper interface circuit in response to the policy. The example procedure8300includes an operation8308to regulate communications on the vehicle (inter-network, intra-network, and/or extra-vehicle communications), and an operation8308to determine source and/or destination definitions for data collection. The example procedure8300includes an operation8308to determine visualization data in response to the vehicle communications data (e.g., collected in response to the policy, and the source/destination definitions for the collected data), and an operation8312to transmit the visualization data (e.g., to an external device, user device, data storage, application, etc.). ReferencingFIG.84, an example procedure8302to interpret a policy for configuring regulation of inter-network, intra-network, and/or extra-vehicle communications is schematically depicted. The example procedure8302includes an operation8402to generate a policy interaction code, an operation8404to accept policy input value(s) in response to the policy interaction code, and an operation8406to generate a policy in response to the accepted input value(s). The example procedure8302further includes an operation8408to communicate the generated policy to a CND using an external device. 
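The policy-generation flow of procedure8302above (operations8402-8408) can be sketched end to end. The function names and the policy contents below are hypothetical stand-ins for illustration only.

```python
# A minimal end-to-end sketch of procedure 8302 (operations 8402-8408).
# Function names and policy contents are illustrative assumptions,
# not the actual implementation.

def generate_interaction_code():
    # Operation 8402: expose only the aspects the user may edit.
    return {"editable": ["data_rate", "destination"]}

def accept_policy_inputs(interaction_code):
    # Operation 8404: in practice these values come from a user device;
    # fixed here for illustration.
    return {"data_rate": 50, "destination": "cloud://fleet-store"}

def generate_policy(inputs):
    # Operation 8406: fold the accepted input values into a policy.
    return {"version": 1, **inputs}

def communicate_to_cnd(policy):
    # Operation 8408: communicate the generated policy to the CND.
    return f"policy v{policy['version']} -> CND"

policy = generate_policy(accept_policy_inputs(generate_interaction_code()))
print(communicate_to_cnd(policy))
```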
The methods and systems described herein may be deployed in part or in whole through a machine having a computer, computing device, processor, circuit, and/or server that executes computer readable instructions, program codes, instructions, and/or includes hardware configured to functionally execute one or more operations of the methods and systems herein. The terms computer, computing device, processor, circuit, and/or server (“computing device”) as utilized herein, should be understood broadly. An example computing device includes a computer of any type, capable of accessing instructions stored in communication therewith, such as upon a non-transient computer readable medium, whereupon the computer performs operations of the computing device upon executing the instructions. In certain embodiments, such instructions themselves comprise a computing device. Additionally or alternatively, a computing device may be a separate hardware device, one or more computing resources distributed across hardware devices, and/or may include such aspects as logical circuits, embedded circuits, sensors, actuators, input and/or output devices, network and/or communication resources, memory resources of any type, processing resources of any type, and/or hardware devices configured to be responsive to determined conditions to functionally execute one or more operations of systems and methods herein. Network and/or communication resources include, without limitation, local area network, wide area network, wireless, internet, or any other known communication resources and protocols. Example and non-limiting hardware and/or computing devices include, without limitation, a general purpose computer, a server, an embedded computer, a mobile device, a virtual machine, and/or an emulated computing device. 
A computing device may be a distributed resource included as an aspect of several devices, included as an interoperable set of resources to perform described functions of the computing device, such that the distributed resources function together to perform the operations of the computing device. In certain embodiments, each computing device may be on separate hardware, and/or one or more hardware devices may include aspects of more than one computing device, for example as separately executable instructions stored on the device, and/or as logically partitioned aspects of a set of executable instructions, with some aspects comprising a part of one of a first computing device, and some aspects comprising a part of another of the computing devices. A computing device may be part of a server, client, network infrastructure, mobile computing platform, stationary computing platform, or other computing platform. A processor may be any kind of computational or processing device capable of executing program instructions, codes, binary instructions and the like. The processor may be or include a signal processor, digital processor, embedded processor, microprocessor or any variant such as a co-processor (math co-processor, graphic co-processor, communication co-processor and the like) and the like that may directly or indirectly facilitate execution of program code or program instructions stored thereon. In addition, the processor may enable execution of multiple programs, threads, and codes. The threads may be executed simultaneously to enhance the performance of the processor and to facilitate simultaneous operations of the application. By way of implementation, methods, program codes, program instructions and the like described herein may be implemented in one or more threads. 
The thread may spawn other threads that may have assigned priorities associated with them; the processor may execute these threads based on priority or any other order based on instructions provided in the program code. The processor may include memory that stores methods, codes, instructions and programs as described herein and elsewhere. The processor may access a storage medium through an interface that may store methods, codes, and instructions as described herein and elsewhere. The storage medium associated with the processor for storing methods, programs, codes, program instructions or other type of instructions capable of being executed by the computing or processing device may include but may not be limited to one or more of a CD-ROM, DVD, memory, hard disk, flash drive, RAM, ROM, cache and the like. A processor may include one or more cores that may enhance speed and performance of a multiprocessor. In embodiments, the processor may be a dual core processor, a quad core processor, another chip-level multiprocessor, or the like that combines two or more independent cores (called a die). The methods and systems described herein may be deployed in part or in whole through a machine that executes computer readable instructions on a server, client, firewall, gateway, hub, router, or other such computer and/or networking hardware. The computer readable instructions may be associated with a server that may include a file server, print server, domain server, internet server, intranet server and other variants such as secondary server, host server, distributed server and the like. The server may include one or more of memories, processors, computer readable transitory and/or non-transitory media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other servers, clients, machines, and devices through a wired or a wireless medium, and the like. 
The methods, programs, or codes as described herein and elsewhere may be executed by the server. In addition, other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the server. The server may provide an interface to other devices including, without limitation, clients, other servers, printers, database servers, print servers, file servers, communication servers, distributed servers, and the like. Additionally, this coupling and/or connection may facilitate remote execution of instructions across the network. The networking of some or all of these devices may facilitate parallel processing of program code, instructions, and/or programs at one or more locations without deviating from the scope of the disclosure. In addition, all the devices attached to the server through an interface may include at least one storage medium capable of storing methods, program code, instructions, and/or programs. A central repository may provide program instructions to be executed on different devices. In this implementation, the remote repository may act as a storage medium for methods, program code, instructions, and/or programs. The methods, program code, instructions, and/or programs may be associated with a client that may include a file client, print client, domain client, internet client, intranet client and other variants such as secondary client, host client, distributed client and the like. The client may include one or more of memories, processors, computer readable transitory and/or non-transitory media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other clients, servers, machines, and devices through a wired or a wireless medium, and the like. The methods, program code, instructions, and/or programs as described herein and elsewhere may be executed by the client. 
In addition, other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the client. The client may provide an interface to other devices including, without limitation, servers, other clients, printers, database servers, print servers, file servers, communication servers, distributed servers, and the like. Additionally, this coupling and/or connection may facilitate remote execution of methods, program code, instructions, and/or programs across the network. The networking of some or all of these devices may facilitate parallel processing of methods, program code, instructions, and/or programs at one or more locations without deviating from the scope of the disclosure. In addition, all the devices attached to the client through an interface may include at least one storage medium capable of storing methods, program code, instructions, and/or programs. A central repository may provide program instructions to be executed on different devices. In this implementation, the remote repository may act as a storage medium for methods, program code, instructions, and/or programs. The methods and systems described herein may be deployed in part or in whole through network infrastructures. The network infrastructure may include elements such as computing devices, servers, routers, hubs, firewalls, clients, personal computers, communication devices, routing devices and other active and passive devices, modules, and/or components as known in the art. The computing and/or non-computing device(s) associated with the network infrastructure may include, apart from other components, a storage medium such as flash memory, buffer, stack, RAM, ROM and the like. The methods, program code, instructions, and/or programs described herein and elsewhere may be executed by one or more of the network infrastructural elements. 
The methods, program code, instructions, and/or programs described herein and elsewhere may be implemented on a cellular network having multiple cells. The cellular network may be either a frequency division multiple access (FDMA) network or a code division multiple access (CDMA) network. The cellular network may include mobile devices, cell sites, base stations, repeaters, antennas, towers, and the like. The methods, program code, instructions, and/or programs described herein and elsewhere may be implemented on or through mobile devices. The mobile devices may include navigation devices, cell phones, mobile phones, mobile personal digital assistants, laptops, palmtops, netbooks, pagers, electronic book readers, music players and the like. These devices may include, apart from other components, a storage medium such as a flash memory, buffer, RAM, ROM and one or more computing devices. The computing devices associated with mobile devices may be enabled to execute methods, program code, instructions, and/or programs stored thereon. Alternatively, the mobile devices may be configured to execute instructions in collaboration with other devices. The mobile devices may communicate with base stations interfaced with servers and configured to execute methods, program code, instructions, and/or programs. The mobile devices may communicate on a peer to peer network, mesh network, or other communications network. The methods, program code, instructions, and/or programs may be stored on the storage medium associated with the server and executed by a computing device embedded within the server. The base station may include a computing device and a storage medium. The storage medium may store methods, program code, instructions, and/or programs executed by the computing devices associated with the base station. 
The methods, program code, instructions, and/or programs may be stored and/or accessed on machine readable transitory and/or non-transitory media that may include: computer components, devices, and recording media that retain digital data used for computing for some interval of time; semiconductor storage known as random access memory (RAM); mass storage typically for more permanent storage, such as optical discs, forms of magnetic storage like hard disks, tapes, drums, cards and other types; processor registers, cache memory, volatile memory, non-volatile memory; optical storage such as CD, DVD; removable media such as flash memory (e.g. USB sticks or keys), floppy disks, magnetic tape, paper tape, punch cards, standalone RAM disks, Zip drives, removable mass storage, off-line, and the like; other computer memory such as dynamic memory, static memory, read/write storage, mutable storage, read only, random access, sequential access, location addressable, file addressable, content addressable, network attached storage, storage area network, bar codes, magnetic ink, and the like. Certain operations described herein include interpreting, receiving, and/or determining one or more values, parameters, inputs, data, or other information (“receiving data”). Operations to receive data include, without limitation: receiving data via a user input; receiving data over a network of any type; reading a data value from a memory location in communication with the receiving device; utilizing a default value as a received data value; estimating, calculating, or deriving a data value based on other information available to the receiving device; and/or updating any of these in response to a later received data value. In certain embodiments, a data value may be received by a first operation, and later updated by a second operation, as part of the receiving a data value. 
For example, when communications are down, intermittent, or interrupted, a first receiving operation may be performed, and when communications are restored an updated receiving operation may be performed. Certain logical groupings of operations herein, for example methods or procedures of the current disclosure, are provided to illustrate aspects of the present disclosure. Operations described herein are schematically described and/or depicted, and operations may be combined, divided, re-ordered, added, or removed in a manner consistent with the disclosure herein. It is understood that the context of an operational description may require an ordering for one or more operations, and/or an order for one or more operations may be explicitly disclosed, but the order of operations should be understood broadly, where any equivalent grouping of operations to provide an equivalent outcome of operations is specifically contemplated herein. For example, if a value is used in one operational step, the determining of the value may be required before that operational step in certain contexts (e.g. where the time delay of data for an operation to achieve a certain effect is important), but may not be required before that operational step in other contexts (e.g. where usage of the value from a previous execution cycle of the operations would be sufficient for those purposes). Accordingly, in certain embodiments an order of operations and grouping of operations as described is explicitly contemplated herein, and in certain embodiments re-ordering, subdivision, and/or different grouping of operations is explicitly contemplated herein. The methods and systems described herein may transform physical and/or intangible items from one state to another. The methods and systems described herein may also transform data representing physical and/or intangible items from one state to another. 
The methods and/or processes described above, and steps thereof, may be realized in hardware, program code, instructions, and/or programs or any combination of hardware and methods, program code, instructions, and/or programs suitable for a particular application. The hardware may include a dedicated computing device or specific computing device, a particular aspect or component of a specific computing device, and/or an arrangement of hardware components and/or logical circuits to perform one or more of the operations of a method and/or system. The processes may be realized in one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors or other programmable device, along with internal and/or external memory. The processes may also, or instead, be embodied in an application specific integrated circuit, a programmable gate array, programmable array logic, or any other device or combination of devices that may be configured to process electronic signals. It will further be appreciated that one or more of the processes may be realized as a computer executable code capable of being executed on a machine readable medium. The computer executable code may be created using a structured programming language such as C, an object oriented programming language such as C++, or any other high-level or low-level programming language (including assembly languages, hardware description languages, and database programming languages and technologies) that may be stored, compiled or interpreted to run on one of the above devices, as well as heterogeneous combinations of processors, processor architectures, or combinations of different hardware and computer readable instructions, or any other machine capable of executing program instructions. 
Thus, in one aspect, each method described above and combinations thereof may be embodied in computer executable code that, when executing on one or more computing devices, performs the steps thereof. In another aspect, the methods may be embodied in systems that perform the steps thereof, and may be distributed across devices in a number of ways, or all of the functionality may be integrated into a dedicated, standalone device or other hardware. In another aspect, the means for performing the steps associated with the processes described above may include any of the hardware and/or computer readable instructions described above. All such permutations and combinations are intended to fall within the scope of the present disclosure. While the disclosure has been disclosed in connection with certain embodiments shown and described in detail, various modifications and improvements thereon will become readily apparent to those skilled in the art. Accordingly, the spirit and scope of the present disclosure is not to be limited by the foregoing examples, but is to be understood in the broadest sense allowable by law.
DETAILED DESCRIPTION The embodiments set forth below represent the information to enable those skilled in the art to practice the embodiments and illustrate the best mode of practicing the embodiments. Upon reading the following description in light of the accompanying drawing figures, those skilled in the art will understand the concepts of the disclosure and will recognize applications of these concepts not particularly addressed herein. It should be understood that these concepts and applications fall within the scope of the disclosure and the accompanying claims. Any flowcharts discussed herein are necessarily discussed in some sequence for purposes of illustration, but unless otherwise explicitly indicated, the embodiments are not limited to any particular sequence of steps. The use herein of ordinals in conjunction with an element is solely for distinguishing what might otherwise be similar or identical labels, such as “first message” and “second message,” and does not imply a priority, a type, an importance, or other attribute, unless otherwise stated herein. The term “about” used herein in conjunction with a numeric value means any value that is within a range of ten percent greater than or ten percent less than the numeric value. As used herein and in the claims, the articles “a” and “an” in reference to an element refers to “one or more” of the element unless otherwise explicitly specified. The word “or” as used herein and in the claims is inclusive unless contextually impossible. As an example, the recitation of A or B means A, or B, or both A and B. Large-scale production networks, such as those implemented by national service providers to service customers, often require the addition of sub-networks that may impact the large-scale production network. A sub-network, however, may negatively impact the large-scale network if the sub-network is not properly designed. 
Accordingly, the sub-network may first be implemented and tested in a laboratory, prior to implementation in a production environment. This process is time-consuming and requires costly, specially skilled engineers who are a finite resource. The embodiments disclosed herein implement automated network generation, validation and implementation in a production environment. The embodiments receive configuration information that identifies a plurality of components that define a network. In some implementations, the configuration information may be generated by a network simulator. A plurality of virtual network functions (VNFs) are caused to be generated on a plurality of general-purpose computing devices to implement the network on the plurality of general-purpose computing devices, each VNF corresponding to one of the plurality of components. Each VNF is then configured based on the configuration information. Testing data and testing criteria are obtained, and submitted to the network. Results of the testing data and the testing criteria are output, and based on the results, it may be determined that the network is ready for implementation in a production environment, or that the network needs to be modified prior to implementation in a production environment. FIG.1is a block diagram of an environment10in which embodiments may be practiced. The environment10includes a computing system12that comprises one or more computing devices14. Each of the computing devices14includes one or more processor devices16and a memory18. The memory18may include one or more components, including, by way of non-limiting example, a network generator20, a simulated network implementor22, a network tester24, and a production network implementor26. The environment10also includes a simulation environment28, which includes a plurality of general-purpose computing devices30-1-30-N (generally, computing device30). 
The term "general-purpose" in this context refers to computing devices that are capable of implementing a desired functionality based on executable code files, rather than specialized computing devices that are manufactured to implement a specific functionality. Each of the computing devices30includes one or more processor devices32and a memory34. The simulation environment28also includes one or more physical switch devices36-1-36-M (generally, switch devices36) that provide layer two switching functionality. The simulation environment28includes one or more storage devices38-1-38-P (generally, storage devices38). The computing system12is communicatively coupled to the simulation environment28. Each computing device30may have physical connections to one or more of the switch devices36and may also be directly or indirectly communicatively coupled to the storage devices38. A user40may interact with the network generator20to define a desired network that has a specific plurality of components. The network generator20may store the defined network in configuration information42on a storage device44. The configuration information42, in this example, defines a network46designed by the user40. The configuration information42identifies a plurality of components that make up the network46, including, in this example, a dynamic host configuration protocol (DHCP) server48, a controller50, routers52-1and52-2, and four switches54-1-54-4. The configuration information42identifies connections between the routers52-1and52-2and each of the four switches54-1-54-4, as indicated by solid lines55. In some implementations, the configuration information42may include, by way of non-limiting example, information regarding the means for bringing a network element online, and various settings of the network element. The configuration information42identifies a first virtual local area network (VLAN)59-1between the switches54-1and54-3, as indicated by dashed lines. 
The configuration information42identifies a second virtual local area network (VLAN)59-2between the switches54-2and54-4, as indicated by dashed lines. The simulated network implementor22implements the network46identified by the configuration information42on the computing devices30. In some embodiments, the computing devices30-1-30-N each include corresponding software defined network (SDN) layers56-1-56-N and corresponding network function virtualization (NFV) layers58-1-58-N. The SDN layer56includes technology that uses software-based controllers and/or application programming interfaces (APIs) to communicate with underlying hardware infrastructure and direct traffic on a network. SDN is an architecture that abstracts the control plane from the data forwarding function. The network function virtualization layer58facilitates the virtualization of network services, such as routers, firewalls, load balancers, and the like, that are conventionally implemented in specialized hardware devices. The embodiments herein utilize SDN and NFV, via the SDN layer56and the NFV layer58, to implement a network61that corresponds to the network46identified by the configuration information42in the simulation environment28. In particular, the simulated network implementor22receives the configuration information42that defines the plurality of components that define the network46, and causes a plurality of virtual network function (VNF) instances60-1-60-4to be generated on the computing devices30-1-30-N. The VNF instances60-1-60-4correspond to components defined by the configuration information42. In particular, the simulated network implementor22causes a controller VNF instance60-1to be generated on the computing device30-1. The controller VNF instance60-1corresponds to the controller50of the defined network46. The simulated network implementor22causes a DHCP server VNF instance60-2to be generated on the computing device30-1. 
The DHCP server VNF instance60-2corresponds to the DHCP server48of the defined network46. The simulated network implementor22causes a router VNF instance60-3to be generated on the computing device30-1. The router VNF instance60-3corresponds to the router52-1of the defined network46. The simulated network implementor22causes a router VNF instance60-4to be generated on the computing device30-N. The router VNF instance60-4corresponds to the router52-2of the defined network46. The simulated network implementor22also, based on the configuration information42, configures the VNF instances60as appropriate. The configuration may occur at the time of generation of the respective VNF instance60or may comprise additional commands submitted subsequent to the generation of the respective VNF instance60. For example, the simulated network implementor22may provide commands similar to those illustrated below in Table 1 to generate and configure the router VNF instance60-3on the computing device30-1to utilize an inbound interface on one of the switch devices36and utilize an outbound interface on four of the switch devices36. The four switch devices36correspond to the switches54-1-54-4in the network46.

TABLE 1
OpenFlow Protocol
#Define router
{
CPU == 2.2 GHZ
Line speed == 2 Gbps
Interface S0/1 inbound
Interface S1/1 outbound
Interface S1/2 outbound
Interface S1/3 outbound
Interface S1/4 outbound
}

As another example, the simulated network implementor22may submit commands similar to those illustrated below in Table 2 to the router VNF instance60-3to generate and configure a VLAN that corresponds to the VLAN59-1. In some embodiments, the simulated network implementor22may utilize the NETCONF protocol to implement the VLAN. 
TABLE 2
#create vlan for users between switch 1 and 3
Def Func_vlan( )
Switch 1# config terminal
interface x0/x
switchport mode access
switchport mode access vlan y

After the network46is implemented in the simulation environment28, the network tester24may obtain testing data62and testing criteria64, and submit the testing data62to the network61. The network tester24receives results from the testing data62and, based on the testing criteria64and the results, outputs network test results that can be used to determine whether the network61operates as expected and is ready to be implemented in a production environment66, or requires additional configuration to improve performance prior to implementation in the production environment66. The testing may comprise any suitable network tests, including, by way of non-limiting example, ping tests to verify connectivity, throughput tests, link capacity tests, and the like. The testing may also include application-level testing, such as, by way of non-limiting example, voice testing, security testing, and the like. If the network test results indicate that the network61is ready to be implemented in the production environment66, the production network implementor26causes the network61to be implemented in the production environment66. It is noted that, solely for purposes of illustration and ease of explanation, the network generator20, the simulated network implementor22, the network tester24and the production network implementor26are illustrated as separate components, but it will be appreciated that the functionality described herein could be implemented in a single component, or a number of components greater than four. Moreover, because the network generator20, the simulated network implementor22, the network tester24and the production network implementor26are components of the computing system12, the functionality implemented by such components may be attributed to the computing system12generally. 
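The network tester24's pass/fail decision — submit the testing data62, collect results, and compare each result against the testing criteria64— might be sketched as follows. The metric names and thresholds here are invented for illustration; the patent does not prescribe a particular criteria format.

```python
# Sketch of comparing network test results against testing criteria.
# Metric names and threshold ranges are illustrative assumptions.
def evaluate(results, criteria):
    """Return per-test pass/fail based on (low, high) criteria ranges."""
    report = {}
    for name, value in results.items():
        low, high = criteria.get(name, (float("-inf"), float("inf")))
        report[name] = low <= value <= high
    return report

results = {"ping_loss_pct": 0.0, "throughput_mbps": 940.0, "latency_ms": 12.5}
criteria = {"ping_loss_pct": (0.0, 1.0),            # connectivity check
            "throughput_mbps": (900.0, float("inf")),
            "latency_ms": (0.0, 10.0)}              # unmet here -> fails
report = evaluate(results, criteria)

# The network is ready for the production environment only if every
# criterion is satisfied; otherwise additional configuration is needed.
ready_for_production = all(report.values())
```

In this sketch the latency criterion fails, so the network would be flagged for additional configuration rather than promoted to production.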
In examples where the network generator20, the simulated network implementor22, the network tester24and the production network implementor26comprise software instructions that program the processor devices16to carry out functionality discussed herein, functionality implemented by such components may be attributed herein to the processor devices16. Finally, while for purposes of illustration and ease of explanation the network generator20, the simulated network implementor22, the network tester24and the production network implementor26are illustrated as being implemented on a single computing device14of the computing system12, in other embodiments the components may be implemented on different computing devices14of the computing system12. FIG.2is a flowchart of a method for automated network generation, validation and implementation in a production environment according to one embodiment.FIG.2will be discussed in conjunction withFIG.1. The computing system12receives the configuration information42that identifies the plurality of components that define the network46(FIG.2, block1000). The computing system12causes, based on the configuration information42, the plurality of virtual network function (VNF) instances60to be generated on the plurality of general-purpose computing devices30to implement the network61on the plurality of general-purpose computing devices30, each VNF instance60corresponding to one of the plurality of components of the network46(FIG.2, block1002). The computing system12configures, based on the configuration information42, at least some of the VNF instances60of the plurality of VNF instances60(FIG.2, block1004). The computing system12obtains the testing data62and the testing criteria64(FIG.2, block1006). The computing system12submits the testing data62to the network61(FIG.2, block1008). The computing system12outputs network test results based on the testing data62and the testing criteria64(FIG.2, block1010). 
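Blocks 1000-1002 above — receive the configuration information and generate one VNF instance per defined component on the general-purpose computing devices — might look like the following sketch. The class and function names, and the round-robin placement policy, are assumptions for illustration; the patent itself places specific instances on specific devices.

```python
# Illustrative sketch: one VNF instance per component in the
# configuration information, placed round-robin across computing devices.
from dataclasses import dataclass

@dataclass
class VNFInstance:
    component_id: str     # component of the defined network it corresponds to
    component_type: str   # e.g. "controller", "dhcp_server", "router"
    host: str             # computing device hosting the instance

def implement_network(config, hosts):
    """Generate a VNF instance for each component (blocks 1000-1002)."""
    instances = []
    for i, comp in enumerate(config["components"]):
        instances.append(VNFInstance(comp["id"], comp["type"],
                                     hosts[i % len(hosts)]))
    return instances

config = {"components": [{"id": "ctrl-50", "type": "controller"},
                         {"id": "router-52-1", "type": "router"},
                         {"id": "router-52-2", "type": "router"}]}
vnfs = implement_network(config, hosts=["device-30-1", "device-30-N"])
```

Block 1004 (configuration) would then iterate over the returned instances and apply per-component settings from the same configuration information.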
FIG.3is a block diagram of the environment10illustrating a command translation function (CTF)82that may be used to facilitate the generation of a simulated network according to one implementation. In this example, the CTF82is illustrated as being part of the simulated network implementor22, but in other implementations the CTF82may be separate from the simulated network implementor22and the simulated network implementor22may utilize the CTF82at an appropriate time when configuring the simulation environment28. In this example, requirements of the simulation environment28, such as VLAN requirements, link speed requirements, latency requirements, and the like may be provided to the CTF82. The CTF82accesses a commands database84to identify the appropriate commands for a given “flow”, such as a particular path through a plurality of network components. The commands database84may comprise a plurality of different sets86-1,86-2of commands for each different manufacturer of a network component88-1,88-2that is being simulated. Such sets86-1,86-2of commands may be updated by information obtained from the respective manufacturers as the manufacturers implement new components and/or functionality. When a new component manufacturer is added, the CTF82need not be reprogrammed as the CTF82interfaces with the commands database84via a standard protocol, and accesses the appropriate set86-1,86-2of commands based on the particular manufacturer of the network component being simulated. The CTF82may utilize the Simple Network Management Protocol (SNMP) protocol to configure the network components88-1,88-2via an SDN controller90. As another example, the CTF82may utilize a programming language, such as Python, to call a function that corresponds to a desired function to be implemented and send the result to the SDN controller90utilizing a NETCONF protocol91for configuration of the network components88-1,88-2. 
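The CTF82's database-driven lookup can be sketched as below: the command set is selected per manufacturer, so supporting a new manufacturer means adding database entries rather than reprogramming the CTF. The vendor names, command templates, and function names are invented for illustration.

```python
# Sketch of a commands database (84) holding per-manufacturer command
# sets (86-1, 86-2), and a translation function that fills in parameters.
# All entries here are illustrative assumptions.
commands_db = {
    "vendor-a": {"create_vlan": "vlan database; vlan {vid}"},
    "vendor-b": {"create_vlan": "config t; vlan {vid}; exit"},
}

def translate(manufacturer, operation, **params):
    """Resolve the manufacturer's command template and apply parameters.
    New manufacturers only require new database entries, not new code."""
    template = commands_db[manufacturer][operation]
    return template.format(**params)

cmd = translate("vendor-b", "create_vlan", vid=59)
```

The resulting command string would then be handed to the SDN controller90(e.g., via SNMP or NETCONF) for configuration of the simulated network component.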
FIG.4is a block diagram illustrating the generation of a simulation environment according to another implementation. In this example, the network generator20generates a simulation model92. The simulated network implementor22(sometimes referred to herein as a test network generation function (TNGF)) interacts with the CTF82, which in turn interacts with the simulation model92to generate a test network environment, such as the simulation environment28. The test network environment may comprise a plurality of resources for emulating a test network, such as SDN servers that can be configured to be a switch, a router, an attenuator, end user emulators, or the like. A programming language with a library-like structure may be used to set this up. Applications may be used to generate different tests to test the test network environment (block94). If the tests are satisfactory, a real network96may be set up either automatically or semi-automatically. An artificial intelligence function98may record the configuration for future deployments as well as existing networks. FIG.5is a block diagram of an environment99illustrating mechanisms for generating a simulation environment100according to another implementation. The environment99includes the simulated network implementor22and the commands database84. A simulation environment100includes a plurality of pools of resources, such as, by way of non-limiting example, a server pool102of servers, an attenuator pool104of attenuators, a processor device pool106of processor devices, and a link pool108of links. FIG.6is a block diagram of a computing device14suitable for implementing examples according to one example. The computing device14may comprise any computing or electronic device capable of including firmware, hardware, and/or executing software instructions to implement the functionality described herein, such as a computer server, a desktop computing device, a laptop computing device, a smartphone, a computing tablet, or the like. 
The computing device14includes the processor device16, the memory18, and a system bus68. The system bus68provides an interface for system components including, but not limited to, the memory18and the processor device16. The processor device16can be any commercially available or proprietary processor. The system bus68may be any of several types of bus structures that may further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and/or a local bus using any of a variety of commercially available bus architectures. The memory18may include non-volatile memory70(e.g., read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), etc.), and volatile memory72(e.g., random-access memory (RAM)). A basic input/output system (BIOS)74may be stored in the non-volatile memory70and can include the basic routines that help to transfer information between elements within the computing device14. The volatile memory72may also include a high-speed RAM, such as static RAM, for caching data. The computing device14may further include or be coupled to a non-transitory computer-readable storage medium such as the storage device44, which may comprise, for example, an internal or external hard disk drive (HDD) (e.g., enhanced integrated drive electronics (EIDE) or serial advanced technology attachment (SATA)) for storage, flash memory, or the like. The storage device44and other drives associated with computer-readable media and computer-usable media may provide non-volatile storage of data, data structures, computer-executable instructions, and the like. 
A number of modules can be stored in the storage device44and in the volatile memory72, including an operating system and one or more program modules, such as the network generator20, the simulated network implementor22, the network tester24, and the production network implementor26, which may implement the functionality described herein in whole or in part. All or a portion of the embodiments may be implemented as a computer program product76stored on a transitory or non-transitory computer-usable or computer-readable storage medium, such as the storage device44, which includes complex programming instructions, such as complex computer-readable program code, to cause the processor device16to carry out the steps described herein. Thus, the computer-readable program code can comprise software instructions for implementing the functionality of the examples described herein when executed on the processor device16. An operator may also be able to enter one or more configuration commands through a keyboard (not illustrated), a pointing device such as a mouse (not illustrated), or a touch-sensitive surface such as a display device. Such input devices may be connected to the processor device16through an input device interface78that is coupled to the system bus68but can be connected by other interfaces such as a parallel port, an Institute of Electrical and Electronic Engineers (IEEE) 1394 serial port, a Universal Serial Bus (USB) port, an IR interface, and the like. The computing device14may also include a communications interface80suitable for communicating with other computing devices as appropriate or desired. Those skilled in the art will recognize improvements and modifications to the preferred embodiments of the disclosure. All such improvements and modifications are considered within the scope of the concepts disclosed herein and the claims that follow.
11943111
DETAILED DESCRIPTION OF THE INVENTION Various aspects of the disclosure herein are embodied as a system, a method, or a non-transitory, computer-readable storage medium having one or more computer-readable program codes stored thereon. Accordingly, various embodiments of the disclosure herein take the form of an entirely hardware embodiment, an entirely software embodiment comprising, for example, microcode, firmware, software, etc., or an embodiment combining software and hardware aspects that are referred to herein as a “system”, a “module”, an “engine”, a “circuit”, or a “unit”. FIGS.1A-1Billustrate a flowchart of an embodiment of a method for implementing packet processing network functions as a service. The method employs a data plane service implementation (DPSI) platform defining computer program instructions executable by at least one processor core for implementing packet processing network functions as a service. In the method disclosed herein, the data plane service implementation (DPSI) platform is operably accessed via multiple application programming interfaces (APIs). Different users access the DPSI platform and invoke service creation via the APIs. The APIs are implemented101to allow creation, customization, and configuration of network service-supporting application logics and network service packages. As used herein, “application logics” refers to logics comprising one or more packet processing primitives in a particular configuration for implementing an application. The packet processing primitives are packet processing modules developed for use as building elements or building blocks of the application logics. As used herein, the term “packet processing primitive” refers to a source code or a computer-executable code implemented to be interpreted or compiled and rendered by the DPSI platform as a data plane packet processing function to a user, for example, a field engineer, for inclusion in application logics. 
Each packet processing primitive is a self-contained, functional module implemented to serve as a relatively independent part of a whole packet processing network function. The packet processing primitives constitute the basic building blocks of a data plane packet processing network function. Also, as used herein, “network service package” refers to a software package comprising one or more application logics created based on service specifications for a customer use case. The data plane service implementation (DPSI) platform allows users and customers to create, customize, and configure their specific data plane packet processing network solutions from end to end through the APIs manually or programmatically. In an embodiment, the APIs are role-based APIs configured to separate and control access of users with different privileges for facilitating creation, configuration, control, management, monitoring, and reporting actions to be applied to layered resources and layered entities of the system. The layered entities comprise, for example, a network interface and programmable switch physical layer, a processing unit physical layer, a network stack layer, a packet processing primitive implementation layer, an application logic layer, a service creation layer, and a control and application plane as disclosed in the description ofFIG.4. The APIs are accessible based on roles of the users categorized, for example, as authorized development users such as developers, authorized administrators such as field engineers, authorized operators such as customer administrators, etc. In an example, the APIs comprise application logic creation APIs accessible by field engineers associated with the DPSI platform for creating application logics. In another example, the APIs comprise service creation APIs accessible by the field engineers and the customer administrators for creating offerable network service packages. 
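The role-based separation of APIs described above — developers, field engineers, and customer administrators each seeing a different subset — might be modeled as a simple role-to-API authorization table. The role names and API names below are illustrative assumptions.

```python
# Sketch of role-based API access control. Each API is tagged with the
# roles allowed to invoke it; the entries are illustrative assumptions.
API_ROLES = {
    "create_application_logic": {"developer", "field_engineer"},
    "create_service": {"field_engineer", "customer_admin"},
    "order_service": {"customer_admin", "customer"},
}

def authorize(role, api_name):
    """Return True if the caller's role may invoke the named API."""
    return role in API_ROLES.get(api_name, set())

ok = authorize("field_engineer", "create_service")
denied = authorize("customer", "create_application_logic")
```

A gate like this would sit in front of each API call so that, for example, a customer can order and configure services but cannot create application logics.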
As used herein, “offerable network service packages” refer to network service packages that are configured to be offered to customers by the DPSI platform based on the customers' service specifications. In an embodiment, an offerable network service package is a package of one or more application logics created and/or selected by a customer. The offerable network service package is an external facing package with specifications understood and specified by the customer. In another example, the APIs comprise service creation, ordering, and configuration APIs accessible by the customers for creating, ordering, and configuring services. In the method disclosed herein, the data plane service implementation (DPSI) platform creates102a packet processing primitive by applying a use-case-specific primitive specification to a corresponding one of multiple primitive templates retrieved from a primitive repository, and stores the created packet processing primitive in the primitive repository. As used herein, “primitive specification” refers to a set of specific values to be applied to a packet processing primitive of a corresponding primitive template. For example, the primitive specification for a classifier primitive comprises classifier criteria, 3-tuple or 5-tuple from a packet header, values and/or ranges of the 3-tuple or 5-tuple parameters, etc. Other packet processing primitives have different primitive specifications. The primitive specifications are structured data input to be provided by users, for example, developers, when users, for example, customers, invoke a service creation API call. Also, as used herein, “use-case-specific primitive specification” refers to a primitive specification defined per use case. A general function can be used to support different use cases. When realizing the general function for a specific use case, a use-case-required specification, also referred to as a “use-case-specific” specification is applied. 
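As a concrete illustration of the classifier example above — a primitive specification carrying 5-tuple parameters with values and/or ranges — a classifier primitive might be driven by a spec such as the following. The spec layout, field names, and traffic classes are assumptions for illustration.

```python
# Sketch of a classifier primitive driven by a 5-tuple primitive
# specification. The spec format and class names are illustrative.
classifier_spec = {
    "tuple": ["src_ip", "dst_ip", "src_port", "dst_port", "protocol"],
    "match": {"dst_port": 443, "protocol": "tcp"},   # values to match
    "class": "secure-web",
}

def classify(packet, spec):
    """Return the traffic class if the packet header matches the spec."""
    if all(packet.get(k) == v for k, v in spec["match"].items()):
        return spec["class"]
    return "default"

pkt = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.9",
       "src_port": 51514, "dst_port": 443, "protocol": "tcp"}
cls = classify(pkt, classifier_spec)
```

A different use case would supply a different primitive specification to the same classifier template, which is exactly the template/specification split the method relies on.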
Also, as used herein, “primitive template” refers to a framework logic implemented to be customized with different primitive specifications to define different implementations of the same type of primitives. In an embodiment, the packet processing primitive is configured in a structured format comprising, for example, a name, a type, an identifier, and a specification of an instance of the packet processing primitive as disclosed in the descriptions ofFIGS.6-7. Each instance of the packet processing primitive is implemented as an independent, self-contained, functional part of a whole data plane packet processing network function of a specific deployment of a specific use case. The data plane packet processing function comprises, for example, any one or more of traffic parsing, traffic classification, class of service (CoS)/quality of service (QoS), access control list (ACL) or firewall, network access control (NAC), and traffic routing or switching functions. Traffic parsing is a traffic header and payload interpretation process for extracting traffic metadata specified in a primitive specification for a traffic parser. Traffic classification is a classification process configured to categorize network traffic or traffic flows based on multiple parameters referred to as the traffic metadata comprising, for example, source and/or destination internet protocol (IP) addresses, source and/or port number, protocol, etc. CoS/QoS provides a method for managing multiple traffic profiles over a network by prioritizing particular types of network traffic over other types in various priority categories, for example, latency, loss, jitter, etc. CoS marks the network traffic to allow the QoS to use the marking as a means to manipulate the network according to defined policies. 
While CoS prioritizes network traffic by allocating different levels of priority to different types of network traffic, QoS manages the network traffic by allocating different network resources to different types of network traffic. The ACL or firewall is disposed between a network element, for example, a router, and application servers to provide rule-based and policy-based access control. Network access control (NAC) is used for applying more sophisticated authentication and authorization policies to users and/or devices accessing network resources. Traffic routing or switching is used for appropriately dispatching packets through a network device towards their destination. The data plane packet processing functions further comprise, for example, traffic manipulation, traffic shaping and policing, traffic correlation/synchronization, queuing, and scheduling functions, employed in computer networking. Traffic manipulation comprises, for example, network address translation (NAT), encapsulation, decapsulation, encryption, decryption, replication, multicasting, broadcasting, etc. Traffic shaping and policing is used for regulating user traffic behavior in accordance with a traffic class and a network promise. Traffic correlation/synchronization is used for correlating/synchronizing relevant traffic flows in a specified manner to guarantee use case requirements to all the relevant traffic flows. Traffic queuing is used for queuing the traffic according to predetermined criteria for emitting priority and order. Scheduling provides the way and order in which the traffic queues are served for traffic emission. 
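The structured format described above for a packet processing primitive instance — a name, a type, an identifier, and a specification — might be modeled as a small record type. The field types and sample values are illustrative assumptions.

```python
# Sketch of the structured format of a packet processing primitive
# instance: name, type, identifier, specification. Values are illustrative.
from dataclasses import dataclass, field

@dataclass
class PrimitiveInstance:
    name: str                      # e.g. "edge-classifier"
    type: str                      # e.g. "classifier", "parser", "acl"
    identifier: str                # unique id of this instance
    specification: dict = field(default_factory=dict)

p = PrimitiveInstance(
    name="edge-classifier",
    type="classifier",
    identifier="prim-001",
    specification={"tuple": "5-tuple", "match": {"protocol": "udp"}},
)
```

Each such instance stays self-contained, so it can serve as an independent building block within a larger data plane packet processing network function.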
For the creation102of the packet processing primitive, the data plane service implementation (DPSI) platform creates102amultiple primitive templates and stores the created primitive templates in the primitive repository; renders102ba first user interface for creating the use-case-specific primitive specification; retrieves102cone of the created primitive templates corresponding to the use-case-specific primitive specification, from the primitive repository; and creates102dthe packet processing primitive by applying the use-case-specific primitive specification to the retrieved primitive template. The primitive templates are developed as software functional modules and/or configurations of hardware functional modules, including programmable hardware modules, and/or a combination of software functional modules and configurations of hardware functional modules, and stored in the primitive repository. In an embodiment, the primitive templates are created as software functional modules. In another embodiment, the primitive templates are created as configurations of hardware functional modules. In another embodiment, the primitive templates are created as a combination of software functional modules and configurations of hardware functional modules. In an embodiment, a user, for example, a developer, develops the use-case-specific primitive specification via the first user interface. The DPSI platform stores the created packet processing primitive in the primitive repository. In an embodiment, the DPSI platform derives the use-case-specific primitive specifications from user-specific service creation requests and applies the use-case-specific primitive specifications to the corresponding primitive templates to spawn out the use-case-required primitive instances for a requesting user. 
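The creation steps102a-102dabove — store primitive templates in the repository, retrieve the template matching the use case, apply the use-case-specific primitive specification, and store the resulting primitive back — reduce to a sketch like the following. The repository layout and function names are illustrative assumptions.

```python
# Sketch of steps 102a-102d: template storage, retrieval, specification
# application, and storage of the created primitive. Names are illustrative.
primitive_repository = {
    "templates": {                                   # 102a: stored templates
        "classifier": {"defaults": {"tuple": "5-tuple"}},
        "shaper": {"defaults": {"rate_mbps": 100}},
    },
    "primitives": {},
}

def create_primitive(prim_id, template_type, use_case_spec):
    template = primitive_repository["templates"][template_type]   # 102c
    spec = {**template["defaults"], **use_case_spec}              # 102d
    primitive = {"id": prim_id, "type": template_type, "spec": spec}
    primitive_repository["primitives"][prim_id] = primitive       # store back
    return primitive

prim = create_primitive("prim-7", "classifier", {"match": {"dst_port": 53}})
```

The same template thus spawns differently specified primitive instances for different requesting users, matching the spawn-out behavior described above.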
In an embodiment, the data plane service implementation (DPSI) platform implements the packet processing primitives on a dedicated packet processor, for example, a programming protocol-independent packet processors (P4), as hardware functions. P4 refers to an open source, domain-specific programming language for programming the data plane of network devices, for example, switches, network interface controllers (NICs), routers, filters, etc. P4 specifies methods of processing packets of data by the network devices and for controlling packet-forwarding planes in the network devices. In another embodiment, the DPSI platform implements the packet processing primitives on one or more processing units, for example, a central processing unit (CPU) and/or a network processing unit (NPU), as software functions. In another embodiment, the DPSI platform implements the packet processing primitives on both the dedicated packet processor as hardware functions and on the CPU and/or the NPU as software functions. The DPSI platform optimizes implementation of the packet processing primitives based on complexity of logic of the packet processing primitives and running environment. In an embodiment, the DPSI platform optimizes implementation of the packet processing primitives in programmable hardware. In another embodiment, the DPSI platform optimizes implementation of the packet processing primitives as one or more P4 modules. In another embodiment, the DPSI platform optimizes implementation of the packet processing primitives in general software to be run on one or more processing units, for example, a CPU and/or an NPU. In another embodiment, the DPSI platform optimizes implementation of the packet processing primitives in programmable hardware, and/or as one or more P4 modules, and/or in general software to be run on the CPU and/or the NPU as disclosed in the descriptions ofFIGS.9-10. 
In an embodiment, the DPSI platform selects one or more of the packet processing primitives as building blocks for a customer use case based on user input. Furthermore, in the method disclosed herein, the data plane service implementation (DPSI) platform creates103an application logic based on a use-case-specific application logic specification using one or more of multiple packet processing primitives corresponding to the use-case-specific application logic specification, and stores the created application logic in an application logic repository. The application logic is an end-to-end technical solution to realize a specific customer use case function. Moreover, the application logic is an internal implementation of the technical solution with details independent of the customer. Furthermore, the application logic is composed of various relevant packet processing primitives logically connected via appropriate connectors and orders. As used herein, “use-case-specific application logic specification” refers to an application logic specification comprising the packet processing primitives required and their logical connections as defined per use case. The application logic is created as an internal implementation of a whole data plane packet processing network function required by one or more user-specific data plane packet processing services as disclosed in the descriptions ofFIGS.2-3. For the creation103of the application logic, the data plane service implementation (DPSI) platform renders103aa second user interface for creating the use-case-specific application logic specification; retrieves103bone or more of the packet processing primitives corresponding to the use-case-specific application logic specification, from the primitive repository; and creates103cthe application logic based on the use-case-specific application logic specification using the retrieved packet processing primitive(s). 
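The application logic creation steps103a-103cabove — retrieve the required primitives from the repository and connect them in logical order — can be sketched as a pipeline in which each primitive's output feeds the next. The primitive implementations and names here are illustrative assumptions.

```python
# Sketch of steps 103a-103c: primitives retrieved from a repository and
# chained in order to form an application logic. Names are illustrative.
def make_parser():
    # Traffic parsing: mark the packet's metadata as extracted.
    return lambda pkt: {**pkt, "parsed": True}

def make_classifier():
    # Traffic classification by a single header field, for illustration.
    return lambda pkt: {**pkt, "class": "video" if pkt.get("dst_port") == 554
                        else "default"}

primitive_repo = {"parser": make_parser, "classifier": make_classifier}

def create_application_logic(primitive_names):
    stages = [primitive_repo[n]() for n in primitive_names]   # 103b: retrieve
    def logic(pkt):
        for stage in stages:           # primitives connected in logical order
            pkt = stage(pkt)
        return pkt
    return logic                                              # 103c: create

app_logic = create_application_logic(["parser", "classifier"])
out = app_logic({"dst_port": 554})
```

The connectors here are simple sequential calls; a real application logic could also branch or fan out between primitives as its specification requires.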
In an embodiment, the packet processing primitives are pre-built packet processing primitives. In another embodiment, the packet processing primitives are created by applying primitive specifications derived from the use-case-specific application logic specification to the primitive templates retrieved from the primitive repository. In an embodiment, a user, for example, a developer or a field engineer, develops the use-case-specific application logic specification via the second user interface. In another embodiment, a user, for example, a field engineer, inputs the use-case-specific application logic specification into the DPSI platform via an API call. In an embodiment, the DPSI platform creates the application logics using pre-built application logics. In another embodiment, the DPSI platform creates the application logics using one or more of the packet processing primitives configured and linked together per the application logic specifications via one or more of the APIs. In another embodiment, the DPSI platform creates the application logics using both the pre-built application logics and one or more APIs. The DPSI platform stores the created application logic(s) in the application logic repository. The data plane service implementation (DPSI) platform creates104a network service package comprising one or more application logics corresponding to a use-case-specific service specification as disclosed in the descriptions ofFIGS.2-3andFIGS.15-16. As used herein, “use-case-specific service specification” refers to a service specification defined per use case. The use-case-specific service specification comprises, for example, a set of application logics selected by a customer to serve their function, fields to be enabled or disabled, a field range, a required application logic flow, required packet processing primitives, and the use-case-specific primitive specifications. 
For the creation104of the network service package, the DPSI platform renders104aa third user interface for creating the use-case-specific service specification containing application logic specifications; retrieves104bone or more application logics corresponding to the use-case-specific service specification, from the application logic repository; and creates104cthe network service package comprising the retrieved application logic(s). The DPSI platform stores the created network service package in a service repository. In an embodiment, a user, for example, a developer or a field engineer, develops the use-case-specific service specification via the third user interface. In another embodiment, a user, for example, a field engineer or a customer administrator, inputs the use-case-specific service specification into the DPSI platform via an API call. In an embodiment, for the creation of an offerable network service package, the data plane service implementation (DPSI) platform selects one or more pre-built services stored in the service repository. In another embodiment, for the creation of the network service package, the DPSI platform maps a custom-built service to one or more created application logics, where the custom-built service is developed, for example, by a field engineer, at deployment time using one or more of the APIs. The mapping comprises a determination of a relationship between the service required and the application logic(s) that would render the service. In an embodiment, the mapping is performed by the DPSI platform based on inputs received from a user, for example, a field engineer or a customer administrator. To implement the mapping process, the DPSI platform renders a list of existing application logics retrieved from the application logic repository, on a user interface, based on allowed visibility privileges assigned to the user. 
In an embodiment, the DPSI platform assigns different visibility privileges to different users, for example, field engineers, customer administrators, etc., thereby allowing different users to view different subsets of the application logics stored in the application logic repository. The user views the rendered list of existing application logics, selects the required application logics, and creates a use-case-specific service specification. The DPSI platform then executes a service creation logic to map the use-case-specific service specification to the application logics and creates a network service package accordingly. In another embodiment, for the creation of the network service package, the DPSI platform maps a customer-built service to one or more created application logics, where the customer-built service is developed by a customer at operation time using one or more of the APIs. In an embodiment, the DPSI platform flexibly implements logic of a customer use case using a fully integrated packet processing primitive based on the use-case-specific service specification as disclosed in the description ofFIG.5A. In another embodiment, the DPSI platform flexibly implements logic of a customer use case using multiple separated packet processing primitives based on the use-case-specific service specification as disclosed in the description ofFIG.5B. In an embodiment, the data plane service implementation (DPSI) platform stores: (a) primitive templates built at development time in the primitive repository; (b) pre-built application logics and customized application logics built at deployment time in the application logic repository; and (c) pre-built services, custom-built services developed at deployment time and offered to a customer, and customer-built network service packages published by customers in the service repository. 
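Putting the repositories and the service creation steps104a-104ctogether, creating a network service package amounts to selecting the application logics named in the use-case-specific service specification from the application logic repository and packaging them. The repository contents and names below are illustrative assumptions.

```python
# Sketch of steps 104a-104c: map a use-case-specific service specification
# to stored application logics and package them. Names are illustrative.
application_logic_repository = {
    "web-filtering": "<logic binary>",
    "voice-qos": "<logic binary>",
    "nat": "<logic binary>",
}

def create_service_package(service_spec):
    selected = {name: application_logic_repository[name]          # 104b
                for name in service_spec["application_logics"]}
    return {"service": service_spec["name"], "logics": selected}  # 104c

package = create_service_package(
    {"name": "branch-office-security",
     "application_logics": ["web-filtering", "nat"]})
```

The created package would then be stored in the service repository, alongside pre-built and customer-built services, for subsequent ordering and configuration.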
The pre-built application logics refer to application logics created by authorized development users, for example, developers, of the DPSI platform using predefined primitive specifications and primitive templates comprising source codes. The customized application logics refer to the application logics created by authorized administrators, for example, field engineers, using application logic creation APIs. The pre-built services are created by the authorized development users, for example, the developers, of the DPSI platform. The custom-built services are created by the authorized administrators, for example, the field engineers, using service creation APIs. The customer-built network service packages are created and published by the authorized operators, for example, the customer administrators, using service creation APIs. In an embodiment, the DPSI platform is operably coupled to an API server. The API server is communicatively coupled to the APIs for receiving and handling calls to the APIs as disclosed in the descriptions ofFIGS.12-13andFIGS.14B-14C. In an embodiment, the data plane service implementation (DPSI) platform implements a tiered service creation capability. In this embodiment, the DPSI platform renders service creation building elements, namely, the packet processing primitives, the application logics, and pre-built services in tiered forms. The tiered forms comprise, for example, a modularized source code form, a modularized binary form, and a customization service form. The modularized source code form is configured to invoke code compiling, linking, and packaging for creation of offerable network service packages. The DPSI platform invokes the modularized source code form for internal development tasks, for example, pre-building services for well-known common use cases, performed by developers. In the modularized source code form, the DPSI platform invokes compiling, linking, and packaging for service creation. 
The modularized binary form is configured to render pre-built application logic binaries with linking and service required wrapping, for the creation of the offerable network service packages. The DPSI platform invokes the modularized binary form for assisting field engineers in creating customer required services at deployment time. In the modularized binary form, the DPSI platform provides pre-built application logic binaries that are ready for being packaged into offerable services, where only necessary linking and service required wrapping is needed for creating services. The customization service form is configured to render the pre-built services for customization via configuration by customers directly. In the customization service form, only service configuration is required for creating customer required services. In an embodiment, the data plane service implementation (DPSI) platform comprises a set of service creation APIs, for example, representational state transfer (REST) APIs or RESTful APIs, for creating network service packages. In an embodiment, the DPSI platform further comprises a service creation user interface, for example, a graphical user interface (GUI), configured as an overlay of one or more of the APIs, for example, the service creation APIs, for creating network service packages. The service creation user interface is configured to be rendered on user devices for creating the network service packages as illustrated inFIG.17. The DPSI platform renders the packet processing primitives, the application logics, the network service packages, and logic connectors as interface elements, for example, GUI elements, on the service creation user interface for selection and configuration by users. FIG.2illustrates an exemplary schematic showing an architectural view of an application programming interface (API)-centric, customizable data plane service implementation (DPSI) solution provided by the DPSI platform. 
In an embodiment, the DPSI platform implements fully API-centric service creation in three levels from bottom to the top as illustrated inFIG.2, where a first level comprises creation of packet processing primitives202; a second level comprises creation of application logics203; and a third level comprises creation of offerable services204. As illustrated inFIG.2, the first level at the bottom shows a primitive pool, the second level shows an application logic pool on top of the primitive pool, and the third level shows an offerable service pool on top of the application logic pool. In the first level, the DPSI platform allows users, for example, development engineers or developers associated with the DPSI platform, to develop primitive templates as software, or hardware, or software-hardware-combined functional modules and store the primitive templates in the primitive repository. The DPSI platform realizes the packet processing primitives202by applying use-case-specific primitive specifications to corresponding primitive templates pulled from the primitive repository. In the second level, the data plane service implementation (DPSI) platform creates an application logic203as an internal implementation of a whole data plane packet processing network function required by one or more user-specific data plane packet processing services. The DPSI platform builds the application logics203from various required packet processing primitives202that were created by applying the use-case-specific primitive specifications to the corresponding primitive templates pulled from the primitive repository. The DPSI platform executes application creation logic for deriving the use-case-specific primitive specifications from user-specific service creation requests. In an embodiment, the DPSI platform allows users, for example, development engineers, to create the application logics203by direct software development. 
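The bottom two levels described above — applying a use-case-specific primitive specification to a primitive template to realize a packet processing primitive, then composing primitives into an application logic — might be sketched as below. The template functions and parameter names are hypothetical, used only to show the composition pattern:

```python
# Hypothetical sketch: realize primitives by applying specifications to
# templates, then chain primitives into an application logic.

def make_primitive(template, specification):
    """Apply a use-case-specific specification to a primitive template,
    yielding a concrete packet processing primitive (a callable)."""
    def primitive(packet):
        return template(packet, **specification)
    return primitive

def build_application_logic(primitives):
    """Chain packet processing primitives into one application logic."""
    def logic(packet):
        for p in primitives:
            packet = p(packet)
        return packet
    return logic

# Example templates (assumptions): tag packets, then strip a field.
def tag_template(packet, tag):
    return {**packet, "tag": tag}

def drop_field_template(packet, field):
    return {k: v for k, v in packet.items() if k != field}

parser = make_primitive(tag_template, {"tag": "parsed"})
scrubber = make_primitive(drop_field_template, {"field": "payload"})
logic = build_application_logic([parser, scrubber])

out = logic({"header": "h1", "payload": "data"})
```

The same `build_application_logic` pattern composes any ordered selection of primitives pulled from the primitive repository, which is the essence of the second level.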
In another embodiment, the DPSI platform allows users, for example, field engineers, to create the application logics203via application logic creation APIs. The DPSI platform stores the created application logics203in the application logic repository for further offerable service creation. In the third level, the data plane service implementation (DPSI) platform creates offerable services204as customer-friendly packages of internal packet processing application logics to be offered to end customers. For example, offerable service 1 in the offerable service pool shown in the third level is a quality of service (QoS)/class of service (CoS)-based routing/switching service. In another example, offerable service 2 in the offerable service pool is a service to execute an access control list (ACL) function on qualified traffic, to execute a network access control (NAC) function on other qualified traffic, and then to perform appropriate routing/switching of the traffic through queuing and scheduling. In another example, offerable service 3 in the offerable service pool is a service to apply NAC rules to incoming traffic, then apply appropriate traffic shaping according to the NAC rules, and then perform routing/switching of the traffic via queuing and scheduling. In another example, offerable service 4 in the offerable service pool is a service to apply traffic shaping and network address translation (NAT) to qualified traffic while allowing other traffic to pass through to routing/switching, queuing, and scheduling. In an embodiment, the data plane service implementation (DPSI) platform allows users, for example, software engineers or developers, to create offerable services204by direct software development. In another embodiment, the DPSI platform allows users, for example, field engineers or customer administrators, to create offerable services204via service creation APIs based on use-case-specific service specifications. 
The use-case-specific service specifications are used to derive a required application logic flow, and the required packet processing primitives and corresponding specifications. The DPSI platform creates and offers services204to customers by packaging the created application logics203with use-case-specific service specifications. The DPSI platform performs customer use cases to offerable services mapping205using the APIs201. The mapping205comprises a determination of what offerable services204would match customer business logic206and the corresponding customer use case. In an embodiment, the mapping205is performed by the DPSI platform based on inputs received from a user, for example, a field engineer or a customer administrator. To implement the mapping process, the DPSI platform renders a list of existing offerable services204retrieved from the service repository, on a user interface, based on allowed visibility privileges assigned to the user. In an embodiment, the DPSI platform assigns different visibility privileges to different users, for example, field engineers, customer administrators, etc., thereby allowing different users to view different subsets of the offerable services204stored in the service repository. The user views the rendered list of offerable services and selects the required offerable services. The DPSI platform then executes a service creation logic to map the customer use cases to the offerable services204based on a use-case-specific service specification. The service specifications are defined in the customer business logic206that is communicated to the data plane service implementation (DPSI) platform via the APIs201. The customer business logic206comprises requirements, for example, separate long-term evolution (LTE) control and user traffic, provide quality of service (QoS) class identifier (QCI) support, provide allocation and retention priority (ARP) support, etc. 
QCI is a mechanism provided by LTE networks to ensure carrier traffic is allocated appropriate QoS, which indicates overall performance of a service. ARP is an LTE mechanism for indicating a priority level for the allocation and retention of bearers or network resources. A mobile network uses ARP to decide whether to accept a request to establish a bearer, or reject the request when network resources are limited. When performing admission control and the network resources are limited, the mobile network uses the ARP to prioritize establishing or modifying bearers with a higher ARP over bearers with a lower ARP. The APIs201comprise, for example, service creation and management APIs, configured to perform service creation, configuration, and runtime management in conjunction with key performance indicators (KPIs), alarms, logging, accounting, etc. Examples of the packet processing primitives202comprise a parser function, a classifier function, an access control list (ACL)/firewall function, a network access control (NAC) function, a network address translation (NAT) function, a traffic shaping function, a routing/switching function, a queuing function, a scheduling function, etc. The DPSI platform builds the application logics203using the packet processing primitives202. For example, the DPSI platform builds application logic 1 using the parser function and the classifier function; application logic 2 using the ACL/firewall function; application logic 3 using the NAT function; application logic 4 using the NAC function; application logic 5 using the traffic shaping function; application logic 6 using the queuing and scheduling function; and application logic 7 using the routing/switching function as illustrated inFIG.2. In an embodiment, the DPSI platform builds a single application logic comprising any combination of the application logics 1, 2, 3, 4, 5, 6, and 7. 
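The ARP-based admission control described above may be illustrated with a minimal sketch. Following the description, which prioritizes bearers with a higher ARP over those with a lower ARP when resources are limited, the decision might look like the following (field names, values, and the preemption policy details are assumptions for illustration only):

```python
# Illustrative sketch of ARP-based admission control as described above:
# when network resources are limited, a new bearer request is admitted
# only if its ARP priority exceeds that of some existing bearer, which
# may then be preempted. All names and values are assumptions.

def admit_bearer(request_arp, existing_bearers, capacity):
    """Return (admitted, preempted_bearer_or_None)."""
    if len(existing_bearers) < capacity:
        return True, None          # resources available: accept
    # Resources limited: find the lowest-priority existing bearer.
    lowest = min(existing_bearers, key=lambda b: b["arp"])
    if request_arp > lowest["arp"]:
        return True, lowest        # preempt the lower-ARP bearer
    return False, None             # reject the request

bearers = [{"id": 1, "arp": 5}, {"id": 2, "arp": 9}]
admitted, preempted = admit_bearer(request_arp=7,
                                   existing_bearers=bearers, capacity=2)
```

With a full network, a request at ARP 7 displaces the ARP 5 bearer, while a request at ARP 3 would be rejected outright.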
The data plane service implementation (DPSI) platform then proceeds to create offerable network service packages comprising offerable services204using one or more of the application logics203. For example, the DPSI platform creates an offerable service 1 using application logic 1, application logic 7, and application logic 6; an offerable service 2 using application logic 1, application logic 2, application logic 4, application logic 7, and application logic 6; an offerable service 3 using application logic 1, application logic 4, application logic 5, application logic 7, and application logic 6; and offerable service 4 using application logic 1, application logic 5, application logic 3, application logic 7, and application logic 6, as illustrated inFIG.2. The DPSI platform packages one or more application logics in a user-friendly manner as an offerable service to customers. The DPSI platform creates the sequence of the application logics in each of the offerable services204using the customer business logic206. The DPSI platform performs the customer use cases to offerable services mapping205to create the offerable network service packages. The DPSI platform stores the offerable services204in the service repository for future deployment. Customer administrators can then retrieve the created services from the service repository and deploy the retrieved services through service orchestration logic typically executed in computing and computer networking arts. Furthermore, customer administrators can then configure the deployed services through service configuration logic typically executed in computing and computer networking arts. 
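The service-to-application-logic compositions enumerated above can be captured directly as data. A sketch, labeling each application logic by its number from FIG.2 (1: parser/classifier, 2: ACL/firewall, 3: NAT, 4: NAC, 5: traffic shaping, 6: queuing/scheduling, 7: routing/switching):

```python
# The offerable-service compositions described above, expressed as an
# ordered mapping from each service to its sequence of application
# logics. The dictionary and function names are illustrative only.

OFFERABLE_SERVICES = {
    "offerable service 1": [1, 7, 6],
    "offerable service 2": [1, 2, 4, 7, 6],
    "offerable service 3": [1, 4, 5, 7, 6],
    "offerable service 4": [1, 5, 3, 7, 6],
}

def application_logics_for(service_name):
    """Return the ordered application logics packaged into a service."""
    return OFFERABLE_SERVICES[service_name]
```

Note that every sequence begins with application logic 1 (parse/classify) and ends with application logic 6 (queuing/scheduling), reflecting the common ingress and egress stages of the four example services.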
In an embodiment, the DPSI platform of the system300is a solution configured to render a user space300a, a backend server space300b, and a deployment field space300cfor implementing a role-based service creation capability. The role-based service creation capability of the DPSI platform facilitates creation of offerable network service packages based on roles of different users301,302, and303, for example, authorized development users, authorized administrators, and authorized operators. The DPSI platform allows the authorized development users, for example, developers301, to create packet processing primitives, application logics, and offerable services; and build and store additional packet processing primitives as needed into the primitive repository305, for creation of application logics. In an embodiment, the primitive repository305is hosted on a code hosting platform, for example, GitHub® of GitHub, Inc. The DPSI platform allows the developers301to pre-build and store well-known application logics307in the application logic repository308for service creation. The DPSI platform also allows the developers301to pre-build and store well-known common services309in the service repository310for retrieval by customers. Moreover, the DPSI platform allows the authorized administrators, for example, field engineers302, to create application logics and offerable services. The DPSI platform allows the field engineers302to build customer-required application logics at deployment time and store the customer-required application logics in the application logic repository308for service creation. The DPSI platform also allows the field engineers302to build customer-required services at deployment time and store the customer-required services in the service repository310for retrieval by customers. 
Furthermore, the DPSI platform allows the authorized operators, for example, customer administrators303, to create required services on their own and optionally publish to the service repository310as candidates offered to other customers. As illustrated inFIG.3, a developer301, using a developer device301ain the user space300a, develops and stores primitive templates304in the primitive repository305. The developer301also defines primitive specifications and pulls one or more primitive templates from the primitive repository305for creation of packet processing primitives. The data plane service implementation (DPSI) platform comprises a compiler/builder, also referred to as a primitive builder306, for compiling the primitive specifications with the primitive template(s) for building the packet processing primitives, in the backend server space300b. The primitive builder306stores the created packet processing primitives in the primitive repository305in the backend server space300b. In an embodiment, the DPSI platform allows the developer301to create the application logics by direct software development and store the created application logics in the application logic repository308in the backend server space300b. The application logic repository308also stores application logics307pre-built by the developer301. The developer301also creates and stores pre-built services309in the service repository310. A field engineer302, using a field engineer device302ain the user space300a, creates application logics312via an application logic creation application programming interface (API)311at operation time and stores the created application logics312in the application logic repository308in the backend server space300b. The field engineer302also creates offerable services313via a service creation API311at operation time and stores the created offerable services313in the service repository310. 
In an embodiment, the DPSI platform further comprises a service creation module, also referred to as a service builder316, that performs service to application logic mapping and creates offerable services for storage in the service repository310, in the backend server space300b. The service builder316retrieves selected application logics from the application logic repository308and utilizes services309pre-built by the developer301and/or offerable services313built by the field engineer302for creating offerable network service packages. The service builder316stores the created offerable network service packages in the service repository310in the backend server space300b. In an embodiment, the DPSI platform also allows the field engineer302to perform platform installation/orchestration, as typically performed in the computing and computer networking arts, on a networking/computing hardware platform318in the deployment field space300c. The networking/computing hardware platform318illustrated inFIG.3indicates the hardware deployed by the field engineer302. The field engineer302then loads the software into the networking/computing hardware platform318to execute the packet processing network functions. The networking/computing hardware platform318with the loaded software is represented as networking/computing nodes319inFIG.3. A customer administrator303, using a customer device303ain the user space300a, creates offerable services315via a service creation API314at operation time and publishes the created offerable services315for storage in the service repository310. In an embodiment, the DPSI platform provides a service ordering API314for allowing the customer administrator303to perform service ordering, that is, place an order for an offerable service. In an embodiment, during service ordering, the customer administrator303defines customer service specifications for service creation. 
The DPSI platform further comprises a service orchestrator317configured to receive a service order from the customer administrator303and perform service orchestration as typically performed in the computing and computer networking arts, in the backend server space300b. The service orchestrator317executes operational and functional processes involved in designing, creating, and delivering an end-to-end service. The service orchestrator317controls interactions involved in deploying the network service packages on networking/computing nodes319in the deployment field space300c. The service orchestrator317provides a robust bridge linking network automation and orchestration tools with the underlying physical and virtual infrastructure. The service orchestrator317comprises a set of software interfaces and APIs that allow service deployment on the networking/computing nodes319in the deployment field space300c. In an embodiment, the DPSI platform provides a service configuration API314for allowing the customer administrator303to perform service configuration. The DPSI platform further comprises a service configurator320configured to receive service configuration information from the customer administrator303and perform service configuration as typically performed in the computing and computer networking arts, in the backend server space300b. The service configurator320configures the created service, which is represented as the networking/computing nodes321in service in the deployment field space300c. The networking/computing nodes321in service illustrated inFIG.3represent the deployment of the service with the customer's configuration. FIG.4illustrates an architectural block diagram of an exemplary implementation of the system for implementing packet processing network functions as a service, in a layered form. 
In an embodiment, the packet processing primitives provided by the data plane service implementation (DPSI) platform are implemented on programmable hardware, for example, a programming protocol-independent packet processors (P4) programmable packet processor as hardware solutions. In another embodiment, the packet processing primitives are implemented on a processing unit, for example, a central processing unit (CPU), as software solutions. In another embodiment (not shown), the packet processing primitives are implemented on a processing unit, for example, a network processing unit (NPU), as software solutions. In another embodiment, the packet processing primitives are implemented on both programmable hardware as hardware solutions and on a processing unit, for example, a CPU, as software solutions. In an embodiment, when implemented as CPU software solutions, the packet processing primitives are built on either a data plane development kit (DPDK)403aor an operating system (OS) network stack, for example, a Linux® network stack403b. The DPDK403acomprises libraries and drivers configured for fast packet processing. The DPDK403areceives and sends packets within a minimum number of CPU cycles; allows development of fast packet capture algorithms; and executes third-party fast path stacks optimally. An exemplary implementation of the system with the P4 programmable packet processor is illustrated inFIG.4. In this exemplary implementation considered from a perspective of packet processing, the architecture400of the system comprises a network interface and P4 programmable switch hardware layer401, a higher physical layer (PHY-H)402, a DPDK/OS kernel layer403, a primitive implementation layer404, an application logic layer405, an offerable service layer406, and an application and control plane layer407. 
The network interface and P4 programmable switch hardware layer401is a low layer, namely, a physical-low (PHY-L) layer, comprising an ingress network interface controller (NIC)401aand an egress NIC401b. The ingress NIC401areceives external traffic and routes traffic to a service. The egress NIC401bcontrols egress traffic that exits a host network. In an embodiment, the network interface and P4 programmable switch hardware layer401further comprises a P4 programmed packet processor chip401c, for example, the Intel® Tofino™ chip of Intel Corporation. The P4 programmed packet processor chip401cperforms packet manipulation without invoking a CPU402aand implements the packet processing primitives in programmable hardware. In an embodiment, the P4 programmed packet processor chip401ccomprises an arbiter401d, a programmable parser401e, a parser runtime block401f, a programmable match-action pipeline401g, a programmable deparser401h, and a demultiplexer (demux)/queue401iin accordance with the P4 specification. The arbiter401dreceives packets of data from the ingress NIC401aand schedules the packets for entry into the programmable match-action pipeline401g. In an embodiment, the arbiter401dalso receives packets from the CPU402aand/or through recirculation from the demux/queue401i. The arbiter401ddirects packet flows arriving from different inputs to an appropriate output. In an embodiment, the arbiter401dswitches between input queues in a round robin sequence, and each time, selects a non-empty queue and moves one packet from the non-empty queue to the next stage in the data path. The arbiter401dtransfers each scheduled packet to the programmable parser401e. The programmable parser401eparses each packet and extracts input headers contained in the packets. The programmable parser401etransfers the input headers containing destination addresses and other field data to the programmable match-action pipeline401g. 
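The arbiter's scheduling behavior described above — cycle through the input queues in round-robin order and, on each turn, move one packet from the next non-empty queue to the next stage of the data path — can be sketched as follows. This is a simplification of the hardware arbiter401d, intended only to show the round-robin selection:

```python
from collections import deque

# Simplified sketch of the round-robin arbiter behavior described
# above: each turn selects the next non-empty input queue and moves
# one packet from it to the next pipeline stage.

class RoundRobinArbiter:
    def __init__(self, num_queues):
        self.queues = [deque() for _ in range(num_queues)]
        self.next_queue = 0

    def enqueue(self, queue_index, packet):
        self.queues[queue_index].append(packet)

    def schedule(self):
        """Return the next packet, or None if all queues are empty."""
        n = len(self.queues)
        for offset in range(n):
            i = (self.next_queue + offset) % n
            if self.queues[i]:
                self.next_queue = (i + 1) % n  # resume after this queue
                return self.queues[i].popleft()
        return None

arb = RoundRobinArbiter(3)
arb.enqueue(0, "p0")
arb.enqueue(2, "p2a")
arb.enqueue(2, "p2b")
order = [arb.schedule(), arb.schedule(), arb.schedule()]
```

Because the pointer always resumes just past the last-served queue, a busy queue cannot starve the others, which is the fairness property round-robin arbitration provides.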
The parser runtime block401foperates in concert with the programmable parser401eand provides an error code to the programmable match-action pipeline401g, based on actions of the programmable parser401e. The parser runtime block401falso provides information about packet payload, for example, size of the remaining payload data, to the demux/queue401i. Completion of processing of a packet by the programmable parser401einvokes the programmable match-action pipeline401g. The programmable parser401efeeds associated metadata, for example, input headers, user-defined metadata, etc., as inputs to the programmable match-action pipeline401g. The programmable match-action pipeline401gcomprises match-action tables comprising predefined rules, configured to match an input header of each packet against the rules and perform a corresponding action on the packet. Actions use the packet processing primitives to modify non-persistent resources, that is, the headers or metadata, of each packet. The programmable deparser401hreceives output headers from the programmable match-action pipeline401gand constructs the outgoing packet by reassembling the output headers as computed by the programmable match-action pipeline401g. The demux/queue401ireceives the headers for the outgoing packet from the programmable deparser401hand the packet payload from the programmable parser401e, and assembles the output headers and the packet payload into a new packet to send the result to the correct output port of the egress NIC401b. In an embodiment, the higher physical (PHY-H) layer402comprises the CPU402aconfigured to execute the software-implemented P4 or non-P4 packet processing primitives for packets coming from the ingress NIC401aor from the programmable packet processor through either the DPDK403aor the Linux® network stack403b. The DPDK/OS kernel layer403serves as the network stacks comprising, for example, the DPDK403aand the Linux® network stack403bin a conventional software-based packet processing manner. 
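The match-action behavior described above — match a packet's header fields against predefined rules and, on a hit, apply an action that modifies the packet's non-persistent resources (headers or metadata) — can be illustrated with a minimal sketch. The rule contents, field names, and addresses below are assumptions, not part of any real table program:

```python
# Minimal sketch of a match-action table: each rule matches on header
# fields and, on a hit, its action modifies the packet's headers.
# Rules, field names, and addresses are illustrative assumptions.

def set_field(field, value):
    def action(packet):
        packet["headers"][field] = value
    return action

MATCH_ACTION_TABLE = [
    # (match criteria on headers, action to perform on a hit)
    ({"dst": "10.0.0.1"}, set_field("egress_port", 1)),
    ({"dst": "10.0.0.2"}, set_field("egress_port", 2)),
]

def apply_table(packet, table, default_action=None):
    """Apply the first matching rule's action; fall back to the
    default action (if any) on a table miss."""
    headers = packet["headers"]
    for criteria, action in table:
        if all(headers.get(k) == v for k, v in criteria.items()):
            action(packet)
            return packet
    if default_action:
        default_action(packet)
    return packet

pkt = {"headers": {"dst": "10.0.0.2"}, "payload": b"..."}
apply_table(pkt, MATCH_ACTION_TABLE)
```

Only the headers extracted by the parser are touched; the payload passes through unchanged, matching the pipeline's separation of headers (to the match-action stages) from payload (to the demux/queue).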
The primitive implementation layer404comprises a collection of packet processing primitives implemented using various individual or combined capabilities of the data plane service implementation (DPSI) platform. The packet processing primitives comprise, for example: (a) packet processing primitives that are directly implemented in the P4-programmable packet processor in a hardware manner, and that are relatively simple packet processing primitives suitable for low latency and high performance; and/or (b) packet processing primitives that are implemented in general software and run on the CPU402a. The packet processing primitives comprising relatively complicated logic are implemented in general software and run on the CPU402a. The packet processing primitives that are implemented in general software and run on the CPU402acomprise P4-programmed, general software primitives and non-P4, general software-implemented primitives. The P4-programmed, general software primitives comprise, for example: (a) DPDK-based, P4-programmed, general software primitives; and (b) Linux® kernel-based, P4-programmed, general software primitives. The Linux® kernel is a primary component of the Linux® operating system and provides a core interface between hardware of a device, for example, a network device, and processes of the device. The non-P4, general software-implemented primitives comprise, for example: (a) DPDK-based, non-P4-programmed, general software primitives; and (b) Linux® kernel-based, non-P4-programmed, general software primitives. 
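The split described above — relatively simple, latency-sensitive primitives realized directly in the P4-programmable hardware, moderately complex or control-plane-coupled logic as P4 program modules on the CPU, and the most complex logic as general software on the CPU — might be expressed as a small dispatch. The complexity categories and returned target names are assumptions chosen for the sketch:

```python
# Sketch of the primitive-placement split described above. Complexity
# categories and the returned target labels are illustrative
# assumptions, not DPSI platform identifiers.

def select_implementation(complexity, needs_control_plane=False,
                          needs_external_entities=False):
    """Choose where a packet processing primitive should run."""
    if needs_external_entities or complexity == "high":
        return "general software on CPU"
    if needs_control_plane or complexity == "moderate":
        return "P4 program module on CPU"
    if complexity == "simple":
        return "P4 programmable hardware"
    raise ValueError(f"unknown complexity: {complexity!r}")
```

Keeping moderate-complexity primitives as P4 program modules, rather than generic software, preserves the option of later migrating them into hardware without rewriting them, which is the transition path the description emphasizes.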
The primitive implementation layer404performs implementation of packet processing primitives over general DPDK404a; implementation of packet processing primitives over P4-DPDK404b; implementation of packet processing primitives over P4 behavioral model version 2 (BMv2)404cand DPDK; implementation of packet processing primitives over P4 BMv2404cand Linux®; and implementation of packet processing primitives over general Linux®404eas illustrated inFIG.4. BMv2 allows developers to implement their own P4-programmable architecture as a software switch. The primitive implementation layer404communicates with the CPU402avia a P4-programmable switch hardware interface404d. In an embodiment, the primitive implementation layer404communicates with the P4-programmed packet processor chip401cvia the P4-programmable switch hardware interface404d. In terms of performance and capability, the following implementations are listed in descending order of performance and ascending order of complexity/capability: (a) P4-programmable packet processor hardware-implemented primitives; (b) DPDK-based, P4 or non-P4 software-implemented primitives; and (c) Linux® kernel-based, P4 or non-P4 software-implemented primitives. Pure software P4 capability allows seamless transition from a general server implementation to a P4 packet processor enhanced implementation. An initial pure software solution allows for early product introduction, for example, based on cost, P4 chip availability, etc. Seamless transition allows migration of software implementations of P4-based primitives to hardware implementations without significant changes. A common P4-based solution allows compatibility with all P4 chips. On top of the above types of packet processing primitives, the application logic layer405comprises a set of common, pre-built, and/or runtime customized, independently functional application logics405aconfigured to satisfy end user needs. 
The application logic layer405is built from the packet processing primitives implemented at the primitive implementation layer404disposed below the application logic layer405. The application logics405aserve as functional units for creating various offerable services406a. Built on top of the application logic layer405, and in contrast to the internal, implementation-facing nature of that layer, the offerable service layer406is external and customer-facing, and offers meaningful service solutions to customers. In an embodiment, the offerable services406acomprise individual application logics405a. In another embodiment, the offerable services406acomprise combinations of application logics wrapped by appropriate application programming interfaces (APIs) offered to customers. The DPSI platform provides the APIs in a role-based API set410as illustrated inFIG.4. The application and control plane layer407comprises an integrated computing/networking node control and application plane407a. The integrated computing/networking node control and application plane407acomprises advanced applications409and data plane packet processing functions, for example, routing protocol409a, policy and charging rules functions (PCRFs)409b, quality-of-service (QoS) functions409c, Communications Assistance for Law Enforcement Act (CALEA) functions409d, firewall functions409e, and miscellaneous functions409fsuch as fault, configuration, accounting, performance, and security (FCAPS), control, interface, and protocol functions, required for a fully functional, integrated computing/networking operation. All the layers401,402,403,404,405,406, and407of the architecture400of the system disclosed herein are accessible via role-based APIs. The role-based API set410comprises APIs for building packet processing primitives410a, configuring packet processing primitives410b, building application logics410c, creation and orchestration of services410d, and service control and management410eas illustrated inFIG.4. 
The role-based APIs allow creation, configuration, control, management, monitoring, and reporting to be applied to corresponding layered resources and layered entities of the system. The role-based APIs separate and control the scope of accessibilities of users with different privileges as follows. The data plane service implementation (DPSI) platform provides primitive creation users with first level privileges that allow the primitive creation users to create and manage all levels of entities, for example,405,406, and407, down to the primitive implementation layer404. The DPSI platform provides application creation users with second level privileges that allow the application creation users to create and manage all levels of entities, for example,406and407, down to the application logic layer405. The DPSI platform provides service creation users with third level privileges that allow the service creation users to create and manage all levels of entities at the offerable service layer406. The DPSI platform provides customer service creation users with fourth level privileges that allow the customer service creation users to create and manage owned entities at the offerable service layer406. The DPSI platform provides customer service administration (admin) users with fifth level privileges that allow the customer service admin users to manage owned entities at the offerable service layer406. The DPSI platform provides customer service operation users with sixth level privileges that allow the customer service operation users to view owned entities at the offerable service layer406. The DPSI platform provides customer service monitor users with seventh level privileges that allow the customer service monitor users to monitor and view subsets of owned entities at the offerable service layer406. The DPSI platform provides general users with the lowest eighth level privileges that allow the general users to only view publicly accessible information, if any. 
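The eight privilege levels described above can be sketched as a simple lookup. This is a hypothetical illustration only; the role labels, the use of FIG. 4 reference numerals as layer identifiers, and the allowed() helper are not part of the DPSI platform's actual API.

```python
# Hypothetical sketch of the eight DPSI privilege levels. Layers are the
# FIG. 4 reference numerals: 404 = primitives, 405 = application logic,
# 406 = offerable services, 407 = application/control plane.
ROLES = {
    # level: (role, manageable layers, allowed operations)
    1: ("primitive creation",         {404, 405, 406, 407}, {"create", "manage", "view"}),
    2: ("application creation",       {405, 406, 407},      {"create", "manage", "view"}),
    3: ("service creation",           {406},                {"create", "manage", "view"}),
    4: ("customer service creation",  {406},                {"create", "manage", "view"}),  # owned entities only
    5: ("customer service admin",     {406},                {"manage", "view"}),            # owned entities only
    6: ("customer service operation", {406},                {"view"}),                      # owned entities only
    7: ("customer service monitor",   {406},                {"monitor", "view"}),           # subsets of owned entities
    8: ("general",                    set(),                {"view-public"}),               # public info only
}

def allowed(level: int, operation: str, layer: int) -> bool:
    """Check whether a privilege level permits an operation on a layer."""
    _, layers, ops = ROLES[level]
    return layer in layers and operation in ops

print(allowed(1, "create", 404))  # True: primitive creators reach the primitive layer
print(allowed(3, "create", 405))  # False: service creators stop at the offerable service layer
print(allowed(6, "manage", 406))  # False: operation users may only view owned entities
```

The table deliberately separates scope (which layers) from capability (which operations), mirroring how the role-based APIs separate and control user access.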
In an embodiment, the DPSI platform allows administration411and enterprise information technology (IT) administration412via the role-based API set410. The data plane service implementation (DPSI) platform optimizes primitive implementations based on complexity of the primitive logic and suitable running environments. The DPSI platform implements simple and high-performance primitive logics in a programmable hardware form, for example, in Intel® Tofino™ P4 chips. The DPSI platform implements packet processing primitives with moderate complexity and/or requiring interactions with the control plane, for example, a P4 control plane408that normally runs on the CPU402a, as P4 program modules for offloading the P4 hardware burden and preserving options for future hardware implementation. In an embodiment, the P4 control plane408operates with an open network operating system (ONOS), which provides and exposes a set of high-level abstractions and models to users, for example, the developers, for extension by applications at runtime. The ONOS enables creation of next-generation network applications that use software-defined networks. The DPSI platform implements packet processing primitives with high complexity and/or requiring heavy interactions with the control plane408and/or external networking entities, for example, authentication services, etc., in software forms to be run on the CPU402a. FIG.5Aillustrates a block diagram showing an implementation of logic of a customer use case using a fully integrated packet processing primitive500in a nested function form. In an embodiment, the data plane service implementation (DPSI) platform provides a flexible implementation framework of packet processing primitives, application logics, and offerable services, where the DPSI platform implements a use case-required application logic and offerable service in a fully integrated packet processing primitive500as illustrated inFIG.5A.
Consider an example where a customer requests an access control list (ACL) function for a customer use case. Based on user input, the DPSI platform integrates a parser primitive501and a classifier primitive502into an ACL function-specific primitive503with associated function-specific actions as illustrated inFIG.5A, and implements the ACL function logic using the fully integrated packet processing primitive500. The selections of the packet processing primitives are controlled at service creation time based on service creation specifications defined by the customer. FIG.5Billustrates a block diagram showing an implementation of logic of a customer use case using multiple separated packet processing primitives501,502, and503connected in an order required by a packet processing network function. In an embodiment, the data plane service implementation (DPSI) platform provides a flexible implementation framework of packet processing primitives, application logics, and offerable services, where the DPSI platform implements a use case-required application logic and offerable service with multiple fully separated packet processing primitives501,502, and503as illustrated inFIG.5B. Consider an example where a customer requests an access control list (ACL) function for a customer use case. Based on user input, the DPSI platform selects a parser primitive501, a classifier primitive502, and an ACL function-specific primitive503with associated function-specific actions and connects the packet processing primitives501,502, and503in the order illustrated inFIG.5B, at service creation time based on service creation specifications defined by the customer. The DPSI platform implements the ACL function logic using the fully separated packet processing primitives501,502, and503as illustrated inFIG.5B.
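The two composition styles of FIGS. 5A and 5B can be contrasted with a small sketch in which plain Python functions stand in for the parser, classifier, and ACL primitives. The packet model and the deny-telnet rule are purely illustrative assumptions, not part of the DPSI platform.

```python
# Stand-ins for the parser, classifier, and ACL function-specific primitives.
def parse(pkt):
    return {**pkt, "parsed": True}

def classify(pkt):
    return {**pkt, "class": "deny" if pkt["dport"] == 23 else "allow"}

def acl(pkt):
    return pkt if pkt["class"] == "allow" else None   # None = packet dropped

# FIG. 5A style: one fully integrated, function-specific primitive in
# nested function form.
def acl_integrated(pkt):
    return acl(classify(parse(pkt)))

# FIG. 5B style: fully separated primitives connected in the required
# order at service creation time.
pipeline = [parse, classify, acl]

def acl_separated(pkt):
    for primitive in pipeline:
        pkt = primitive(pkt)
        if pkt is None:          # a primitive dropped the packet
            return None
    return pkt

print(acl_integrated({"dport": 80}))   # {'dport': 80, 'parsed': True, 'class': 'allow'}
print(acl_separated({"dport": 23}))    # None (dropped)
```

Both styles yield the same packet-level behavior; the integrated form trades the reconfigurability of a pipeline for a single fused primitive, matching the tradeoff between FIGS. 5A and 5B.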
FIG.6illustrates a block diagram showing a generic structured packet processing primitive601and ways primitives601,605,606,607, and608are connected to provide a desired packet processing network function. For achieving the implementation flexibility disclosed in the descriptions ofFIGS.5A-5B, the data plane service implementation (DPSI) platform structures service creation primitives in a uniform manner as generic structured packet processing primitives. Each generic structured packet processing primitive601comprises a name, a type, an identifier (ID), and a specification of a primitive instance. The name is an optional field to specify the name of the primitive instance. The type is a field denoting the type of the primitive instance. The ID is a global unique reference ID of the primitive instance. The specification is a field providing a detailed specification of the primitive instance. The detailed specification of the primitive instance comprises an array of components602,603,604, etc., incorporated in a tier1primitive logic block as illustrated inFIG.6. Each component comprises: (a) name: name of the component; and (b) enable: boolean to indicate whether the component is to be concerned in a particular primitive instance. Each component comprises an array of elements incorporated in a tier2primitive logic block. For example, the component602comprises an array of elements602a,602b,602c, etc.; the component603comprises an array of elements603a,603b,603c, etc.; and the component604comprises an array of elements604a,604b,604c, etc.; as illustrated inFIG.6. Each element comprises: (a) name: name of the element; (b) enable: boolean to indicate whether the element is to be concerned in a particular primitive instance; and (c) match actions: a list of matching criteria and corresponding handles of the element in the particular primitive instance. 
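The tiered structure just described, a primitive holding tier-1 components, each holding tier-2 elements with match actions, can be summarized in a minimal dataclass sketch. The field names follow the description of FIG. 6; the Python modeling itself is an illustrative assumption.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Element:
    name: str
    enable: bool = True                          # concerned in this instance?
    match_actions: List[dict] = field(default_factory=list)  # criteria + handles

@dataclass
class Component:
    name: str
    enable: bool = True
    elements: List[Element] = field(default_factory=list)    # tier-2 logic

@dataclass
class Primitive:
    type: str                                    # e.g. "parser", "classifier"
    id: str                                      # global unique reference ID
    components: List[Component] = field(default_factory=list)  # tier-1 logic
    name: str = ""                               # optional

parser = Primitive(
    type="parser",
    id="123e4567-e89b-12d3-a456-426614174000",
    components=[Component("layer2", elements=[
        Element("dest_mac"), Element("src_mac"), Element("cfi", enable=False),
    ])],
)
print(parser.components[0].elements[0].name)   # dest_mac
```

Disabling an element (here, cfi) keeps it in the structure but marks it as not concerned in this particular primitive instance, as the enable boolean above specifies.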
An exemplary list of matching criteria and corresponding handles of an element in a particular primitive instance is disclosed below:
(a) range: a match range of the relevant element
(i) [start, end]: a range from start to end
(ii) [value]: a single value range
(b) action: action to be taken
(i) noop: no operation. If “action” does not exist, treat as “noop”
(ii) drop: drop the packet
(iii) pass-up: pass the packet up to the upper tier; e.g., when the first packet is received by a network access control (NAC) primitive, the extensible authentication protocol (EAP) authentication needs to be invoked, and the EAP authentication is done by a general software implementation. Other examples comprise deep packet inspection (DPI)-based firewall, correlation-based classification, sophisticated packet processing, etc.
(iv) primitive specific action: follow the action instructions of the current primitive for the matching; e.g., do network address translation (NAT), access control list (ACL), routing/switching, etc.
(c) next: next step of processing
(i) noop: no more processing, forward to a network interface609for transmitting. If “next” does not exist, treat as “noop”
(ii) continue: continue and forward to the next element in the array, or to the next component if last element.
(iii) break: break out of the element array and forward to the referenced next component in the current primitive
(iv) primitive: forward to the next downstream primitive605,606,607, or608
(v) handle: detailed specification of the next processing handle
(vi) for primitive type handle: reference ID of the receiving primitive, receiving primitive type, inputs to receiving primitive
(vii) for break type handle: reference to the receiving component within the primitive is provided.
If “handle” does not exist, the default reference is the next component in sequence within the primitive.
(viii) for continue type handle: reference to the receiving element within the component, or the receiving component within the primitive if last element in the array. If no reference is provided, continue to the next element or component in sequence.
FIG.7illustrates a schematic showing creation of an executable primitive instance, for example,706,707, etc., from a primitive template701and a primitive specification702or703. The data plane service implementation (DPSI) platform implements each primitive instance706,707, etc., as an independent, self-contained, functional part of a whole data plane packet processing network function of different use cases. The DPSI platform creates a packet processing primitive by applying a use-case-specific primitive specification, for example,702or703, to a corresponding primitive template701retrieved from the primitive repository. The primitive template701is a framework logic that is customizable with different primitive specifications, for example,702and703, to define different implementations, for example,704and705, of the same type of primitives. Examples of primitive templates comprise templates for a packet parser primitive, a traffic classification primitive, an access control list (ACL)/firewall primitive, a network access control (NAC) primitive, a network address translation (NAT) primitive, a traffic shaping/policing primitive, a traffic routing/switching primitive, a traffic manipulation primitive, a traffic queuing and scheduling primitive, a traffic correlation/synchronization primitive, etc. A primitive specification, for example,702or703, is a set of primitive-specific parameters configured to define and scope coverage of functionalities of the primitive implementation, for example,704or705, out of the corresponding primitive template701.
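The match-action semantics disclosed above, range matching plus the “action” and “next” defaults, can be sketched as a toy lookup. The rule set, packet values, and helper names below are illustrative assumptions, not DPSI platform code.

```python
def in_range(value, rng):
    """Match per the range forms above: [start, end], [value], or nil (any)."""
    if rng is None:                 # default handle: all remaining values
        return True
    if len(rng) == 1:               # single-value range
        return value == rng[0]
    return rng[0] <= value <= rng[1]

def handle_element(value, match_actions):
    """Return (action, next) for the first matching entry; missing keys
    default to "noop" as the list above specifies."""
    for entry in match_actions:
        if in_range(value, entry.get("range")):
            return entry.get("action", "noop"), entry.get("next", "noop")
    return "noop", "noop"

rules = [
    {"range": [0, 1023], "action": "drop", "next": "noop"},
    {"range": [8080], "action": "nat", "next": "primitive"},
    {"range": None, "next": "continue"},   # default handle
]
print(handle_element(22, rules))     # ('drop', 'noop')
print(handle_element(8080, rules))   # ('nat', 'primitive')
print(handle_element(50000, rules))  # ('noop', 'continue')
```

The first matching entry wins, and the default (nil range) handle catches all remaining values, mirroring the explicit-ranges-then-default layout of the specification examples that follow.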
A user, for example, developer, creates the primitive specification by setting various parameters associated with a primitive. The developer may enable or disable different parameters in the primitive specification. The primitive specification allows customization of a primitive template701to serve a function. For example, a classifier primitive classifies incoming traffic based on different fields of a packet header such as source and/or destination internet protocol (IP) addresses, source and/or port number, protocol, etc. The DPSI allows the developer to control the parameters in the primitive specification by enabling or disabling different fields of the packet header for classifying the incoming traffic as per the use case. A primitive implementation, for example,704, is the combination of a primitive template701and a primitive specification, for example,702, that can be used to spawn a primitive instance706,707, etc. A primitive instance, for example,706,707, etc., is a realization of a primitive implementation704with necessary configurations. Disclosed below is a general definition of a primitive specification. Details for different types of primitives are differently customized in the general format disclosed below.{“name”: “optional name of the primitive”,“type”: “type of the primitive, e.g. classifier and so on”,“id”: “global unique identification of the primitive”,“specification”: {// per type specific primitive specification as defined in the following sections “components”: [// array of logical components of the primitive,// e.g. 
the protocol layers in traffic classification primitive{“name”: “name or tag of the first component”,“enable”: true/false, // boolean to indicate if this component shall be enabled“elements”: [// array of elements within this component{“name”: “name or tag of the first element”,“enable”: true/false, // boolean to indicate if this element shall be enabled“match actions”: [// array of matching criteria and corresponding handling instructions for this element{“range”: [“range1-start”,“range1-end”], // matching range of this element“action”: “any primitive specific action”, // do primitive specific action“next”: “continue”, // continue the packet processing to the next referenced element within this component“handle”: {“$ref”: “#/specification/components[0][elements[1]”// continue to next element{},{“range”: [“range2-start”,“range2-end”], // matching range of this element“action”: “any primitive specific action”,“next”: “continue”, // continue the packet processing to the next element in sequence},{“range”: [“range3-start”,“range3-end”], // matching range of this element“action”: “any primitive specific action”,“next”: “break”, // continue the packet processing to the next referenced component within this primitive“handle”: {“$ref”: “#/specification/components[0]”// continue to next component}},,{“range”: [“range4-start”,“range4-end”], // matching range of this element“action”: “any primitive specific action”,“next”: “break”, // continue the packet processing to the next component in sequence within this primitive},{“range”: [“range5-start”,“range5-end”], // matching range of this element“action”: “any primitive specific action”, // do primitive specific action, e.g. 
NAT“next”: “primitive”, // continue the packet processing by the specified downstream primitive“handle”: {// handling primitive when match found“handling_primitive_type”: “type of the receiving primitive”,“handling_primitive_id”: “global instance id of the receiving primitive”“handling_primitive_inputs”: {// meta data set from parser“meta data”: {// meta data set object output from parser primitive}// per primitive type specific inputs required by the receiving primitive}}},// more match-action items if any{// last match-action item“range”: [“range6-start”,“range6-end”], // matching range of this element“action”: “any primitive specific action”, // do primitive specific action, e.g. NAT“next”: “noop”// no more operation, forward to NIC for transmission},{// default handle“range”: nil, // all remaining values out of above explicitly specified ranges“action”: “ . . . ”,“next”: “noop”// no more operation, forward to NIC for transmission}]},{// second element if exist},// more elements if any{// last element}]], // first component{// second component// specification of the second component// . . . . . .}, // second component// more components if any{// specification of the last component}]}} A user, for example, a developer, creates a use-case-specific primitive specification by customizing details, for example, name, type, identifier, specification, etc., in the general format of the above-disclosed primitive specification for different types of primitives. Disclosed below is an example of a primitive specification created for a packet parser primitive. 
// Packet parser primitive specification example { “name”: “enterprise-A-parser”,“type”: “parser”,“id”: “123e4567-e89b-12d3-a456-426614174000”,“specification”: {“components”: [// components [0]—layer 2{“name”: “layer2”,“enable”: true,“elements”: [// first element—layer 2 dest mac address{“name”: “dest_mac”,“enable”: true,“next”: “continue”// continue to check the next element in sequence},// second element—layer 2 source mac address{“name”: “src_mac”,“enable”: true,“next”: “continue”// continue to check the next element in sequence},// 3rd element{“name”: “tpid”,“enable”: true,“next”: “continue”// continue to check the next element in sequence},// 4th element{“name”: “priority”,“enable”: true,“next”: “continue”// continue to check the next element in sequence},// 5th element{“name”: “cfi”,“enable”: false,“next”: “continue”// continue to check the next element in sequence},// . . .// last element{“name”: “vlan”,“enable”: true,“next”: “continue”// continue to check the next element in sequence}]}, // components[0]// components[1] layer 3 version{“name”: “layer3-version”,“enable”: true,“elements”: [{“name”: “version”, // IPv4 or IPv6“enable”: true,“match-actions”: [{“range”: [“ipv4”],“next”: “continue”,“handle”: {“$ref”: “#/specification/components[2]”// point to the layer3-v4 component}},{“range”: [“ipv6”],“next”: “continue”,“handle”: {“$ref”: “#/specification/components[3]”// point to the layer3-v6 component}}]}]}, // components[1]// components[2] layer 3 IPv4{“name”: “layer3-v4”,“enable”: true,“elements”: [{“name”: “tos”, // type of service“enable”: true,“next”: “continue”// continue to check the next element in sequence}, // tos{“name”: “identification”,“enable”: false}, // identification{“name”: “flags_df”,“enable”: false},{“name”: “flags_mf”,“enable”: false},{“name”: “ttl”,“enable”: false},{“name”: “protocol”,“enable”: true,“next”: “continue”// continue to check the next element in sequence}, // protocol{“name”: “dest_addr”,“enable”: true,“next”: “continue”// 
continue to check the next element in sequence}// dest_addr{“name”: “src_addr”,“enable”: true,“next”: “continue”// continue to check the next element in sequence}// src_addr]// elements}// components[2]// components[3] layer 3 IPv6{“name”: “layer3-v6”,“enable”: true,“elements”: [{“name”: “traffic_class”, // type of service“enable”: true,“next”: “continue”// continue to check the next element in sequence},{“name”: “flow_label”,“enable”: true,“next”: “continue”// continue to check the next element in sequence},{“name”: “next_hdr”,“enable”: true,“next”: “continue”// continue to check the next element in sequence},{“name”: “hop_limit”,“enable”: false},{“name”: “dest_addr”,“enable”: true,“next”: “continue”// continue to check the next element in sequence}// dest_addr{“name”: “src_addr”,“enable”: true,“next”: “continue”// continue to check the next element in sequence}// src_addr]}, // components[3]// components[4] layer4 TCP{“name”: “layer4-tcp”,“enable”: true,“elements”: [// . . . . . .]},// components[5] layer4 UDP{“name”: “layer4-udp”,“enable”: true,“elements”: [// . . . . . .]},// components[6] layer4 SCTP{“name”: “layer4-sctp”,“enable”: true,“elements”: [// . . . . . .},// any other layer 4 protocol specific component can be defined in a similar manner// e.g. components[7] through components[n]// components[n+1]{“name”: “layer5-gtpu”,“enable”: true,“elements”: [// . . . . . .],// additional components if any// . . . . . .// last component{“name”: “last-component”,“enable”: true,“elements”: [// . . . . . .]}]// components}// specification} Another example of a primitive specification created for a network address translation (NAT) primitive is disclosed below. 
{ “name”: “enterprise-A-nat-1”,“type”: “nat”,“id”: “123e4567-e89b-12d3-a456-426614174010”,“specification”: {“components”: [{// component[0]“rule”: “nat”,“enable”: true/false, // boolean to indicate if this component shall be enabled“elements”: [// array of elements within this component{“nat table”: “nat table id”, // governing NAT table“enable”: true/false, // boolean to indicate if this element shall be enabled“match actions”: [// array of matching criteria and corresponding handling instructions for this element{“range”: [“nat_table_1”], // matching value of this element“action”: “follow/bypass”, // follow or bypass the NAT instructions“next”: “primitive”, // next handling entity (e.g. a downstream primitive) of fall through traffic“handle”: {// handling primitive when match found“handling_primitive_type”: “type of the receiving primitive”,“handling_primitive_id”: “global instance id of the receiving primitive”“handling_primitive_inputs”: {“meta_data”: {// meta data set object output from parser primitive}// per primitive type specific inputs required by the receiving primitive}}},{“range”: [“nat_table_3”,“nat_table_4”], // matching range of this element, ?? do we have such use case??“action”: “follow/bypass”// follow or bypass the NAT instructions“next”: “primitive”, // next handling entity (e.g. a downstream primitive) of fall through traffic“handle”: {// handling primitive when match found“handling_primitive_type”: “type of the receiving primitive”,“handling_primitive_id”: “global instance id of the receiving primitive”“handling_primitive_inputs”: {“meta_data”: {// meta data set object output from parser primitive}// per primitive type specific inputs required by the receiving primitive}}},// more match-action items if any{// last match-action item},{// default handle“range”: nil, // all remaining values out of above explicitly specified ranges“handle”: {// . . . . . 
.}}]}]}// component[0]]} } Another example of a primitive specification created for a traffic shaping and policing primitive is disclosed below. // example of traffic shaping primitive { “name”: “enterprise-A-1-shaping-1”,“type”: “shaping”,“id”: “123e4567-e89b-12d3-a456-426614174001”,“specification”: {“components”: [// components[0]—rate control{“algorithm”: “rate”, // rate control“enable”: true/false, // indicate if this algorithm is enabled“elements”: [// parameters of the algorithm// first element—max allowed average rate{“enable”: true/false, // optional—indicate if average rate control is enabled, if false, rate is unlimited“rate”: 5000000// max allowed rate 5 Mbps}]}, // components[0]—rate control// components [1]—burst rate control{“algorithm”: “burst”, // burst rate control“enable”: true/false, // indicate if this algorithm is enabled“elements”: [// parameters of the algorithm// first element—max allowed burst rate{“enable”: true/false, // optional—indicate if burst rate is enabled, if false, no limit to the burst rate“burst”: 10000000// max allowed burst rate 10 Mbps}]}, // components[1]—burst rate control// any other components{// . . . . . .}]// components}// specification } As illustrated inFIG.7, the data plane service implementation (DPSI) platform creates a primitive implementation1704by combining the primitive template701and the primitive specification1702. The DPSI platform then uses this combination, that is, the created primitive implementation1704to spawn the primitive instance1706through to the primitive instance n707using different configurations of the primitive implementation1704. Similarly, the DPSI platform creates a primitive implementation2705by combining the primitive template701and the primitive specification2703. 
FIG.8illustrates an exemplary flow diagram showing handling of a long-term evolution (LTE)/fifth generation (5G) quality of service (QoS) flow by execution of an application logic created through the data plane service implementation (DPSI) platform. The DPSI platform creates an application logic based on a use-case-specific application logic specification using one or more packet processing primitives corresponding to the use-case-specific application logic specification. In an example, for LTE/5G QoS flow handling, the DPSI platform creates an application logic responsible for: (a) classifying and separating control and user traffic, and (b) classifying user plane traffic based on mobile QoS settings comprising, for example, QoS class identifier (QCI), allocation and retention priority (ARP), guaranteed bit rate (GBR)/maximum bit rate (MBR)/aggregate maximum bit rate (AMBR), etc. In the step of classifying and separating control and user traffic, the application logic is responsible for delivering control traffic to a mobile core, for example, a 5G/LTE core805, and based on the user plane traffic mobile QoS settings, for delivering user traffic to a packet data network (PDN)807or808via a packet data network gateway (PGW)806. In the step of classifying user plane traffic based on mobile QoS settings, the application logic is responsible for handling different user plane traffic differently via queuing and scheduling as illustrated inFIG.8. The LTE/5G QoS flow from user equipment801comprising mobile devices or other network-enabled devices, to an evolved node B (eNB)803via dedicated bearers802, and then to an integrated computing/networking node804that applies the application logic to ingress traffic is illustrated inFIG.8. An ingress network interface controller (NIC)401areceives the ingress traffic and routes the ingress traffic to the integrated computing/networking node804. 
The application logic is applied to the ingress traffic in the integrated computing/networking node804. As illustrated inFIG.8, the ingress traffic undergoes protocol-based classification, from where control traffic undergoes priority queuing and priority scheduling to pass output traffic to an egress NIC401b. Furthermore, user traffic undergoes QCI-based classification and optionally, priority queuing and priority/weighted round robin (WRR) scheduling, prior to undergoing ARP-based classification and optionally, queuing and WRR scheduling. After ARP-based classification and optional queuing and WRR scheduling, the output traffic undergoes priority queuing and priority scheduling and passes to the egress NIC401b. The egress NIC401bcontrols egress traffic that exits the integrated computing/networking node804. Based on the application logic, the integrated computing/networking node804delivers control traffic to the 5G/LTE core805, and based on the user plane traffic mobile QoS settings, delivers user traffic to the PDN807or808via the PGW806as illustrated inFIG.8. FIG.9illustrates a flow diagram showing execution of exemplary access control list (ACL), firewall, and correlation-based network access control (NAC) application logics to be applied to different application traffic.FIG.9shows a customized solution for sophisticated traffic flow-based traffic control, shaping, queuing, and scheduling by employing three application logics 1, 2, and 3 created through the data plane service implementation (DPSI) platform. The three application logics 1, 2, and 3 are disclosed below. 
Application logic 1
1.1 do layer 2 through 4 access control list (ACL)
1.2 composed of 4 packet processing primitives
(a) header-based traffic classifier for traffic classification
(b) ACL based on layer 2-4 rules
(c) multiple priority First-in, First-out (FIFO) queues for classified pass-through traffic
(d) do priority scheduling on priority FIFO queues
1.3 handles all incoming traffic and forwards some of the qualified traffic to Application logic 2 for further processing
Application logic 2
2.1 composed of 4 packet processing primitives
(a) Deep packet inspection (DPI)-based traffic classifier for in-depth traffic classification
(b) Firewall based on layer 5-7 information out of DPI
(c) multiple push-in, first-out (PIFO) queues to assist traffic shaping/policing at enqueuing
(d) Round Robin (RR) scheduling
2.2 Handles traffic deferred from Application logic 1 and forwards further qualified traffic to Application logic 3.
Application logic 3
3.1 composed of 4 packet processing primitives
(a) correlation-based traffic classification for further traffic classification
(b) do correlation-based network access control (NAC)
(c) multiple priority FIFO queues
(d) Weighted Round Robin (WRR) scheduling
Output traffic from Application logics 2 and 3 is enqueued into queues in Application logic 1 for sending to a network interface. In an exemplary implementation, packet processing primitives that constitute the application logics 1, 2, and 3 disclosed above are implemented on programmable hardware, for example, on a programming protocol-independent packet processors (P4) hardware switch901; as a P4 module, for example, on a P4 software switch902to be run on a processing unit, for example, a central processing unit (CPU)402a; and in general software903as a general software module to be run on the CPU402a, respectively, as illustrated inFIG.9. As illustrated inFIG.9, header-based traffic classification is performed by a P4 hardware primitive implementation on the P4 hardware switch901.
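The queuing and scheduling primitives used by the application logics above, strict priority scheduling over FIFO queues and weighted round robin (WRR), can be sketched as follows. The queue contents and weights are illustrative assumptions.

```python
from collections import deque

def priority_schedule(queues):
    """Dequeue one packet from the highest-priority non-empty FIFO queue;
    queues are ordered from highest to lowest priority."""
    for q in queues:
        if q:
            return q.popleft()
    return None

def wrr_schedule(queues, weights):
    """One WRR round: serve up to weights[i] packets from queues[i]."""
    out = []
    for q, w in zip(queues, weights):
        for _ in range(w):
            if q:
                out.append(q.popleft())
    return out

high = deque(["ctrl-1"])                  # e.g. control traffic
low = deque(["user-1", "user-2", "user-3"])
print(priority_schedule([high, low]))     # 'ctrl-1': strict priority drains high first
print(wrr_schedule([high, low], [2, 2]))  # ['user-1', 'user-2']: high is now empty
```

Strict priority (application logic 1) guarantees control traffic goes first, while WRR (application logic 3) shares the remaining capacity across classes in proportion to their weights.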
That is, a header-based traffic classifier is implemented on the P4 hardware switch901. Header-based traffic classification is a data plane packet processing function configured to classify headers of each incoming packet. Moreover, an additional DPI-based traffic classification is performed by a P4 software primitive implementation on the P4 software switch902running on the CPU402a. That is, a DPI-based traffic classifier is implemented on the P4 software switch902running on the CPU402a. DPI-based traffic classification is a data plane packet processing function configured to classify each incoming packet based on a detailed examination of content and a large range of metadata of the packet as the packet passes through a checkpoint on a network. Furthermore, an additional correlation-based traffic classification is performed by a general software primitive implementation as general software903running on the CPU402a. That is, a correlation-based traffic classifier is implemented as general software903running on the CPU402a. Correlation-based traffic classification is a data plane packet processing function configured to classify each incoming packet based on, for example, mathematically-defined similarity and dissimilarity measures that correlate different components to each other, where the resulting correlation coefficients reflect the magnitude of the co-linear relationship of the components. The correlation logic is executed in the CPU402aas a general software module. As exemplarily illustrated inFIG.9, on receiving the incoming traffic, also referred to as “ingress traffic”, from an ingress network interface controller (NIC)401a, the header-based traffic classifier on the P4 hardware switch901selects packets to be forwarded to the P4 software switch902running on the CPU402a, based on the header-based traffic classification performed on the ingress traffic. 
Furthermore, according to the application logic 1 disclosed above, the processed packets output from the header-based traffic classifier undergo ACL rules based on layer 2 through 4 metadata. Classified pass-through traffic is then sent to the FIFO queues, from where priority scheduling is performed on priority FIFO queues. The output traffic from the P4 hardware switch901is forwarded to the network interface, for example, an egress NIC401b. The P4 software switch902, implemented on a data plane development kit (DPDK)403aor on an operating system (OS) network stack, for example, a Linux® network stack403bas illustrated inFIG.4, receives the selected packets from the P4 hardware switch901. The DPI-based traffic classifier on the P4 software switch902further selects packets to be forwarded to the general software903running on the CPU402a, based on an in-depth traffic classification, also referred to as the DPI-based traffic classification, performed on the previously selected packets. Furthermore, the processed packets output from the DPI-based traffic classifier undergo processing by the remaining packet processing primitives, that is, the firewall based on layer 5-7 metadata, multiple PIFO queues that assist traffic shaping/policing at enqueuing, and round robin (RR) scheduling that constitute the application logic 2 disclosed above. The output traffic from the P4 software switch902undergoes FIFO queuing and priority scheduling as specified in the application logic 1 in the P4 hardware switch901and is forwarded to the egress NIC401b. The general software903running on the CPU402a, implemented on a DPDK403aor on an OS network stack, for example, a Linux® network stack403bas illustrated in FIG.4, receives the further selected packets from the P4 software switch902. The correlation-based traffic classifier in the general software903processes the further selected packets by performing the correlation-based traffic classification on the further selected packets. 
Furthermore, the processed packets output from the correlation-based traffic classifier undergo a correlation-based NAC, priority FIFO queuing, and weighted round robin (WRR) scheduling as specified in the application logic 3 disclosed above. The output traffic from the general software903undergoes FIFO queuing and priority scheduling as specified in the application logic 1 in the P4 hardware switch901, from where the output traffic is forwarded to the egress NIC401b. FIG.10illustrates an exemplary flow diagram showing an optimized implementation of packet processing primitives in an advanced traffic classification use case in which deep packet inspection (DPI) and traffic flow correlation-based classification logic are applied to selected traffic that needs such a level of classification per user functional requirements. The data plane service implementation (DPSI) platform defines traffic classification goals for the advanced traffic classification use case. The traffic classification goals comprise, for example: (1) classifying incoming traffic by headers; (2) performing additional packet payload, DPI-based traffic classification on selected packets; (3) performing further correlation-based traffic classification on qualified packets received in step (2); and (4) processing packets according to the classification. In an exemplary implementation of the traffic classification use case, the packet processing primitives are implemented on programmable hardware, for example, on a programming protocol-independent packet processors (P4) hardware switch901; as a P4 module, for example, on a P4 software switch902to be run on a central processing unit (CPU)402a; and in general software903to be run on the CPU402a, as illustrated inFIG.10. 
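The weighted round robin (WRR) scheduling mentioned above for application logic 3 can be sketched as follows. This is a minimal sketch with assumed per-queue weights; it is not the patented scheduling logic.

```python
from collections import deque

# Minimal sketch of weighted round robin (WRR) scheduling: the scheduler
# visits each FIFO queue in a fixed cycle and may dequeue up to that queue's
# weight in packets per visit, giving each queue bandwidth proportional to
# its weight. The weights are assumed for illustration.

def wrr_schedule(queues, weights):
    """Drain FIFO queues in weighted round robin order; return the packet list."""
    out = []
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if not q:
                    break
                out.append(q.popleft())
    return out
```

Unlike the strict priority scheduling of application logic 1, WRR never starves a lower-weight queue: every queue is served on every cycle, just less often.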
Furthermore, in this exemplary implementation, the packet processing primitives comprise a header-based traffic classifier configured to perform header-based traffic classification; an additional deep packet inspection (DPI)-based traffic classifier configured to perform DPI-based traffic classification; and an additional correlation-based traffic classifier configured to perform correlation-based classification as disclosed in the description ofFIG.9. The header-based traffic classifier is implemented on the P4 hardware switch901. The DPI-based traffic classifier is implemented on the P4 software switch902running on the CPU402a. The correlation-based traffic classifier is implemented as general software903running on the CPU402a. As exemplarily illustrated inFIG.10, on receiving the incoming traffic, also referred to as “ingress traffic”, from the ingress network interface controller (NIC)401a, the header-based classifier implemented on the P4 hardware switch901selects packets to be forwarded to the P4 software switch902running on the CPU402a, based on the header-based traffic classification performed on the incoming packets. The output traffic from the P4 hardware switch901undergoes packet handling1001and is forwarded to the egress NIC401b. The P4 software switch902, implemented on a data plane development kit (DPDK)403aor on an operating system (OS) network stack, for example, a Linux® network stack403bas illustrated inFIG.4, receives the selected packets from the P4 hardware switch901. The DPI-based traffic classifier implemented on the P4 software switch902further selects packets to be forwarded to the general software903running on the CPU402a, based on the additional DPI-based traffic classification performed on the previously selected packets. The output traffic from the P4 software switch902undergoes DPI-based packet handling1002and is forwarded to the egress NIC401b. 
The general software903running on the CPU402a, implemented on a DPDK403aor on an OS network stack, for example, a Linux® network stack403bas illustrated inFIG.4, receives the further selected packets from the P4 software switch902. The correlation-based traffic classifier in the general software903processes the further selected packets by performing the correlation-based traffic classification on the further selected packets. The output traffic from the general software903undergoes correlation-based packet handling1003and is forwarded to the egress NIC401b. FIG.11illustrates a schematic showing primitive, application logic, and service creation flows performed by a developer user301of the data plane service implementation (DPSI) platform. A developer user301, herein referred to as a “developer”, typically understands technical details of overall service creation procedures and is, therefore, capable of creating primitive templates, packet processing primitives herein referred to as “primitives”, application logics, and network service packages herein referred to as “offerable services”, through direct creation procedures without dependency on application programming interfaces (APIs) for efficiency. These direct creation procedures are more suitable for building primitive templates, primitives, and all commonly used pre-built application logics and offerable services. As such, the DPSI platform allows the developer301to perform service creation related activities comprising developing primitive templates, building an application logic, and building an offerable service, using a developer device301a, for example, a desktop computer, a laptop, a tablet computing device, etc. The developer301accesses user interfaces rendered by the DPSI platform on the developer device301afor performing the service creation related activities. 
The DPSI platform allows the developer301to develop and store primitive templates in a source code form and/or a binary form in the primitive repository305. To build an application logic, the DPSI platform allows the developer301to build primitives, and then build the application logic using one or more of the primitives. The DPSI platform allows the developer301to define1101an application logic and define1102relevant primitive specifications via one or more user interfaces rendered on the developer device301a. The DPSI platform retrieves or pulls relevant primitive templates corresponding to the defined primitive specifications from the primitive repository305. The DPSI platform allows the developer301to apply1103the primitive specifications to the primitive templates to build the primitives. The DPSI platform then allows the developer301to build1104the application logic based on the defined application logic using the built primitives and stores the built application logic in the application logic repository308. To build an offerable service, the DPSI platform allows the developer301to define1105the offerable service and specify1106one or more application logics of the offerable service via one or more user interfaces rendered on the developer device301a. The DPSI platform retrieves or pulls the specified application logic(s) required by the offerable service from the application logic repository308. The DPSI platform then allows the developer301to build1107the offerable service comprising the application logic(s) and stores the offerable service in the service repository310. FIG.12illustrates a schematic showing application logic and service creation flows performed by a field engineer user302of the data plane service implementation (DPSI) platform. 
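The build chain above (primitive specification applied to a primitive template to build a primitive, primitives assembled into an application logic, application logics assembled into an offerable service) can be sketched as below. The data shapes, template name, and repository dictionaries are hypothetical illustrations, not the platform's actual formats.

```python
# Minimal sketch of the developer build flow: a primitive specification is
# applied to a primitive template pulled from the primitive repository; built
# primitives are assembled into an application logic stored in the
# application logic repository; application logics are assembled into an
# offerable service stored in the service repository. All names and dict
# shapes here are assumed for illustration.

primitive_repository = {
    "classifier_template": {"kind": "classifier", "params": {"fields": None}},
}
application_logic_repository = {}
service_repository = {}

def build_primitive(template_name, spec):
    # Apply the specification's parameter values to the template's parameters.
    template = primitive_repository[template_name]
    primitive = {"kind": template["kind"], "params": dict(template["params"])}
    primitive["params"].update(spec)
    return primitive

def build_application_logic(name, primitives):
    # Store the built logic in the application logic repository.
    application_logic_repository[name] = {"primitives": primitives}
    return application_logic_repository[name]

def build_service(name, logic_names):
    # Assemble the offerable service from previously built application logics.
    service_repository[name] = {
        "application_logics": [application_logic_repository[n] for n in logic_names]
    }
    return service_repository[name]
```

The template/specification split is what lets one stored template yield many differently parameterized primitives.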
A field engineer user302, herein referred to as a “field engineer”, typically has less knowledge of service creation internals than a developer, but has more knowledge of service creation internals than a customer administrator. The field engineer302is, therefore, capable of creating primitives, application logics, and offerable services using the service creation application programming interfaces (APIs). The DPSI platform allows the field engineer302to pre-build primitives, application logics, and offerable services in accordance with customer requests. The DPSI platform allows the field engineer302to perform service creation related activities comprising creation of an application logic and creation of an offerable service using an API client deployed on a field engineer device302a, for example, a desktop computer, a laptop, a tablet computing device, etc. The API client communicates with an API server1208via a network, for example, the internet. In an embodiment, the API server1208comprises at least one processor and is communicatively coupled to the service creation APIs and operably coupled to the data plane service implementation (DPSI) platform. The API server1208receives and handles calls to the APIs. The API server1208interprets use-case-specific primitive specifications, use-case-specific application logic specifications, and use-case-specific service specifications inputted into the DPSI platform, via the calls to one or more of the APIs. The API server1208derives primitive specifications from a use-case-specific application logic specification and a use-case-specific service specification. 
The API server1208triggers the modules of the DPSI platform to create the packet processing primitive, the application logic, and the network service package herein referred to as the “offerable service”, based on a use-case-specific primitive specification, the use-case-specific application logic specification, and the use-case-specific service specification, respectively. The API server1208responds to the calls to the APIs with a status message, for example, a success or fail message. To create an application logic, the data plane service implementation (DPSI) platform allows the field engineer302to define and specify1201an application logic via one or more user interfaces rendered by the DPSI platform on the field engineer device302a. The DPSI platform allows the field engineer302to create the application logic via the APIs in operable communication with the API server1208, with the application logic specification as an input. The API server1208receives an API call with the application logic specification from the API client on the field engineer device302avia one or more of the APIs. The API server1208extracts1202relevant primitive specifications from the application logic specification. The API server1208then retrieves or pulls relevant primitive templates corresponding to the extracted primitive specifications from the primitive repository305. The API server1208then applies1203the extracted primitive specifications to the retrieved primitive templates to build the primitives. In an embodiment, the API server1208triggers the primitive builder306illustrated inFIG.3, to build the primitives as disclosed above. Furthermore, the API server1208builds1204the application logic based on the application logic specification using the built primitives. The API server1208stores the built application logic in the application logic repository308and sends an application logic creation response to the API client on the field engineer device302avia the API(s). 
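The API server flow just described (receive a call carrying an application logic specification, extract the embedded primitive specifications, build and store the logic, respond with a success or fail status) can be sketched as below. The request and response dictionary shapes are hypothetical, not the platform's actual API schema.

```python
# Minimal sketch of the API server handling an application logic creation
# call: the application logic specification arrives in the request, the
# server extracts the primitive specifications it embeds, builds primitives
# and the application logic, stores the result, and responds with a status
# message. Request/response shapes are assumed for illustration.

application_logic_repository = {}

def handle_create_application_logic(request):
    spec = request.get("application_logic_spec")
    if not spec or "name" not in spec:
        return {"status": "fail", "reason": "missing application logic specification"}
    # Extract the primitive specifications embedded in the logic specification.
    primitive_specs = spec.get("primitive_specs", [])
    primitives = [{"built_from": ps} for ps in primitive_specs]
    application_logic_repository[spec["name"]] = {"primitives": primitives}
    return {"status": "success", "application_logic": spec["name"]}
```

The status-message response is what lets the API client on the field engineer device treat creation as a single round trip.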
In an embodiment, the API server1208triggers the application logic builder1402illustrated inFIGS.14B-14C, to build and store the application logic as disclosed above. To create an offerable service, the data plane service implementation (DPSI) platform allows the field engineer302to define1205the offerable service via one or more user interfaces rendered by the DPSI platform on the field engineer device302a. In response to an API call from the API client on the field engineer device302ato the API server1208, the API client gets an application logic list via one or more of the APIs from the API server1208. The API server1208gets the application logic list from the application logic repository308by query filters and sends the application logic list to the API client on the field engineer device302avia the API(s). The DPSI platform allows the field engineer302to select1206one or more application logics from the application logic list to build a service specification. The DPSI platform allows the field engineer302to create the offerable service via the API(s) in operable communication with the API server1208, with the service specification as an input. The API server1208receives an API call with the service specification from the API client on the field engineer device302avia the API(s). The API server1208retrieves or pulls one or more application logics required by the offerable service from the application logic repository308. The API server1208builds1207the offerable service comprising the application logic(s) and stores the built offerable service in the service repository310. The API server1208sends a service creation response to the API client on the field engineer device302avia the API(s). In an embodiment, the API server1208triggers the service builder316illustrated inFIG.3, to build and store the offerable service as disclosed above. 
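The query-filtered listing used above, where the API server gets the application logic list from the repository by query filters before returning it to the API client, can be sketched as follows. The filter fields are hypothetical examples.

```python
# Minimal sketch of listing application logics by query filters: only
# entries whose attributes match every supplied filter are returned. The
# "category" attribute used in the example is assumed for illustration.

def list_application_logics(repository, **filters):
    """Return names of application logics whose attributes match all filters."""
    return sorted(
        name
        for name, logic in repository.items()
        if all(logic.get(k) == v for k, v in filters.items())
    )
```

With no filters supplied, the full list is returned, which matches the list-then-select flow the field engineer follows when building a service specification.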
FIG.13illustrates a schematic showing a service creation flow performed by a customer administrator user303of the data plane service implementation (DPSI) platform. A customer administrator user303, herein referred to as a “customer administrator”, typically has no knowledge of service creation internals and only understands service requirements from a customer side. The customer administrator303is, therefore, only capable of creating services from existing application logics, ordering and deploying the services, and configuring the services using the service creation application programming interfaces (APIs). The DPSI platform allows the customer administrator303to perform service creation, ordering, and configuration related activities comprising creating an offerable service, selecting an offerable service, ordering a service, and configuring a service using an API client deployed on a customer device303a, for example, a desktop computer, a laptop, a tablet computing device, etc. The API client communicates with an API server1208via a network, for example, the internet. To create an offerable service, the data plane service implementation (DPSI) platform allows the customer administrator303to define1301the offerable service via one or more user interfaces rendered by the DPSI platform on the customer device303a. In response to an API call from the API client on the customer device303ato the API server1208, the API client gets an application logic list via one or more of the APIs from the API server1208. The API server1208gets the application logic list from the application logic repository308by query filters and sends the application logic list to the API client on the customer device303avia the API(s). The DPSI platform allows the customer administrator303to select1302one or more application logics from the application logic list to build a service specification. 
The DPSI platform allows the customer administrator303to create the offerable service via the API(s) in operable communication with the API server1208, with the service specification as an input. The API server1208receives an API call with the service specification from the API client on the customer device303avia the API(s). The API server1208retrieves or pulls one or more application logics required by the offerable service from the application logic repository308. The API server1208builds1303the offerable service comprising the application logic(s) and stores the built offerable service in the service repository310. The API server1208sends a service creation response to the API client on the customer device303avia the API(s). In an embodiment, the API server1208triggers the service builder316illustrated inFIG.3, to build and store the offerable service as disclosed above. In addition to creating an offerable service, the DPSI platform allows the customer administrator303to select an existing offerable service. To select an offerable service, in response to an API call from the API client on the customer device303ato the API server1208, the API client gets a service list via one or more of the APIs from the API server1208. The API server1208gets the service list via query filters from the service repository310and sends the service list to the API client on the customer device303avia the APIs. The DPSI platform allows the customer administrator303to select1304a service that matches the requirement from the service list. Furthermore, the data plane service implementation (DPSI) platform allows the customer administrator303to order a created or selected service via one or more of the APIs. In an embodiment, the customer administrator303orders a created or selected service by selecting or providing a service identifier (ID) of the service. 
The API server1208communicates with the service orchestrator317of the DPSI platform illustrated inFIG.3, to orchestrate1305the ordered service. The API server1208retrieves or pulls the required service from the service repository310and sends a service order response to the API client on the customer device303avia the API(s). The service orchestrator317deploys the orchestrated service on a service hosting platform, for example, a networking/computing hardware platform318as illustrated inFIG.3. When the service is deployed1306on the networking/computing hardware platform318, the data plane service implementation (DPSI) platform allows the customer administrator303to configure the ordered service via one or more of the APIs. The API server1208receives a configuration for the ordered service from the customer device303avia an API call. The API server1208communicates with the service configurator320of the DPSI platform illustrated inFIG.3, to configure1307the ordered service. In response to a trigger from the API server1208, the service configurator320applies the configuration to the orchestrated service on the networking/computing hardware platform318, thereby configuring1308the orchestrated service on the networking/computing hardware platform318. The orchestrated and configured service is then fully functional on the networking/computing hardware platform318. The API server1208sends a service configuration response to the API client on the customer device303avia the API(s). FIGS.14A-14Cillustrate flow diagrams showing architectural components of the data plane service implementation (DPSI) platform and procedures performed by different users of the DPSI platform in different environments. 
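The order-to-configuration lifecycle above (order, orchestrate, deploy to the hosting platform, then apply a customer-supplied configuration) can be sketched as below. The state model and data shapes are assumptions made for illustration, not the platform's actual orchestration interfaces.

```python
# Minimal sketch of the service lifecycle: an ordered service is pulled from
# the service repository and orchestrated; the orchestrated service is
# deployed on the service hosting platform; a configuration received from the
# customer device is then applied to the deployed service. The "state" field
# and repository contents are hypothetical.

service_repository = {"qci_service": {"application_logics": ["qci_logic"]}}
hosting_platform = {}

def order_service(service_id):
    # Pull the required service from the service repository and orchestrate it.
    service = dict(service_repository[service_id])
    service["state"] = "orchestrated"
    return service

def deploy_service(service_id, service):
    # Deploy the orchestrated service on the hosting platform.
    service["state"] = "deployed"
    hosting_platform[service_id] = service

def configure_service(service_id, configuration):
    # Apply the customer-supplied configuration to the deployed service.
    service = hosting_platform[service_id]
    service["configuration"] = configuration
    service["state"] = "configured"
    return {"status": "success"}
```

Only after the final step does the service become fully functional on the hosting platform, which mirrors the configure-after-deploy ordering in the text.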
In an embodiment, the system and the method disclosed herein render three environments, namely, a service creation environment1400afor primitive, application logic, and service creation tasks; a service deployment and management environment1400bfor orchestrating, deploying, and configuring services; and a service hosting/execution environment1400cfor executing the services. The packet processing primitives, herein referred to as “primitives”, the application logics, and the services are created in the service creation environment1400a. The service creation environment1400acomprises a collection of functional elements that can be run on one or more physical servers. The functional elements comprise one or more development elements, one or more primitive repository elements, one or more primitive builder elements, one or more application logic repository elements, one or more application logic builder elements, one or more service repository elements, one or more service builder elements, and one or more application programming interface (API) elements.FIG.14A,FIG.14B, andFIG.14Cillustrate the corresponding environments and procedures from perspectives of a developer301, a field engineer302, and a customer administrator303, respectively. FIG.14Aillustrates a flow diagram showing architectural components of the data plane service implementation (DPSI) platform and procedures performed by a developer301associated with the DPSI platform in the service creation environment1400a. In an embodiment, the DPSI platform comprises a development server1401, the primitive repository305, the primitive builder306, an application logic builder1402, the application logic repository308, the service builder316, and the service repository310, which constitute the functional elements of the service creation environment1400aas illustrated inFIG.14A. 
Service creation procedures are manual and/or automated processes that invoke the functional elements of the service creation environment1400ato fulfill the service creation required functions as disclosed herein. Consider an example where a developer301performs service creation related activities comprising developing primitive templates, building a primitive, building an application logic, and building a network service package also referred to as an “offerable service”, using a developer device301a. The developer301creates a primitive template via a user interface rendered on the developer device301aby the DPSI platform. For creating the primitive template, the developer301implements the primitive template on the development server1401and publishes the primitive template to the primitive repository305. For creating a primitive in the service creation environment1400a, the developer301first creates a primitive specification via the user interface. The primitive builder306then pulls a primitive template that corresponds to the created primitive specification from the primitive repository305; builds the primitive by applying the primitive specification to the primitive template; and publishes the primitive to the primitive repository305. The primitive repository305stores primitive templates and primitives. In an embodiment, the data plane service implementation (DPSI) platform allows creation of an application logic using primitive templates. For creating an application logic using primitive templates in the service creation environment1400a, the developer301first creates an application logic specification via a user interface rendered on the developer device301aby the DPSI platform. 
The application logic builder1402then derives relevant primitive specifications from the application logic specification; pulls relevant primitive templates from the primitive repository305; builds the application logic in accordance with the application logic specification using primitives built by applying the relevant primitive specifications to the relevant primitive templates; and publishes the application logic to the application logic repository308. In another embodiment, the DPSI platform allows creation of an application logic using existing primitives. For creating an application logic using existing primitives, the developer301first creates an application logic specification via the user interface. The application logic builder1402then pulls relevant primitives from the primitive repository305; builds the application logic in accordance with the application logic specification using the relevant primitives; and publishes the application logic to the application logic repository308. For creating an offerable service in the service creation environment1400a, the developer301first creates a service specification containing required application logic specifications via a user interface rendered on the developer device301aby the DPSI platform. The service builder316then pulls relevant application logics from the application logic repository308; builds the offerable service from the relevant application logics; and publishes the offerable service to the service repository310. FIG.14Billustrates a flow diagram showing architectural components of the data plane service implementation (DPSI) platform and procedures performed by a field engineer302associated with the DPSI platform in the service creation environment1400a. 
In an embodiment, the DPSI platform comprises the application programming interface (API) server1208, the primitive repository305, the primitive builder306, the application logic builder1402, the application logic repository308, the service builder316, and the service repository310, which constitute the functional elements of the service creation environment1400aas illustrated inFIG.14B. The API server1208is configured to receive and handle user API calls; interpret input primitive, application logic, and service specifications; derive primitive specifications from application logic and service specifications; build or trigger building of a primitive, an application logic, and a service as per corresponding specifications; and respond to API calls with a success or fail message. Consider an example where the field engineer302performs service creation related activities comprising creation of a primitive, creation of an application logic, and creation of an offerable service using an API client deployed on a field engineer device302a. For creating a primitive, the field engineer302first inputs a primitive specification via an API call to the API server1208. The API server1208then triggers the primitive builder306to pull a primitive template corresponding to the primitive specification from the primitive repository305. The primitive builder306then builds the primitive by applying the primitive specification to the primitive template and publishes the primitive to the primitive repository305. In an embodiment, the data plane service implementation (DPSI) platform allows the field engineer302to create an application logic using primitive templates. For creating an application logic using primitive templates, the field engineer302first inputs an application logic specification via an API call to the API server1208. The API server1208then triggers the application logic builder1402to derive relevant primitive specifications from the application logic specification. 
The application logic builder1402then pulls primitive templates relevant to the derived primitive specifications from the primitive repository305; builds the application logic in accordance with the application logic specification using primitives built by applying the derived primitive specifications to the relevant primitive templates; and publishes the application logic to the application logic repository308. In another embodiment, the data plane service implementation (DPSI) platform allows the field engineer302to create an application logic using existing primitives. For creating an application logic using existing primitives, the field engineer302first inputs an application logic specification via an API call to the API server1208. The API server1208then triggers the application logic builder1402to pull primitives relevant to the application logic specification from the primitive repository305. The application logic builder1402then builds the application logic in accordance with the application logic specification using the relevant primitives and publishes the application logic to the application logic repository308. For creating an offerable service, the field engineer302first inputs a service specification containing all required application logic specifications via an API call to the API server1208. The API server1208then triggers the service builder316to pull application logics relevant to the application logic specifications from the application logic repository308. The service builder316then builds the offerable service from the application logics and publishes the offerable service to the service repository310. 
FIG.14Cillustrates a flow diagram showing architectural components of the data plane service implementation (DPSI) platform and procedures performed by a customer administrator303associated with the DPSI platform in the service creation environment1400a, the service deployment and management environment1400b, and the service hosting/execution environment1400c. In an embodiment, the DPSI platform comprises the application programming interface (API) server1208, the application logic repository308, the service builder316, and the service repository310, that constitute the functional elements of the service creation environment1400aas illustrated inFIG.14C. The service deployment and management environment1400bcomprises one or more service orchestration elements and one or more service configuration elements. Service deployment and management procedures are manual and/or automated processes that invoke the service deployment and management environment elements to fulfill the service orchestration, deployment, and configuration-required functions as disclosed herein. In this embodiment, the DPSI platform further comprises a service orchestrator317operably coupled to the API server1208. The service orchestrator317, in the service deployment and management environment1400b, orchestrates a network service package based on an order received from a customer device303avia an API call; retrieves the network service package from the service repository310; and deploys the retrieved network service package to a service hosting platform1403in the service hosting/execution environment1400c. In this embodiment, the DPSI platform further comprises a service configurator320operably coupled to the API server1208. 
The service configurator320, in the service deployment and management environment1400b, receives a configuration for the network service package from the customer device303avia an API call; in response to a trigger from the API server1208, configures the network service package based on the received configuration; and applies the configuration to the network service package deployed on the service hosting platform1403in the service hosting/execution environment1400c. The service orchestrator317and the service configurator320that are operably coupled to the API server1208constitute the functional elements of the service deployment and management environment1400bas illustrated inFIG.14C. The service hosting platform1403is configured to execute packet processing network functions of the network service package in the service hosting/execution environment1400c. The services in the network service package are deployed and run in the service hosting/execution environment1400c. The network service package is herein referred to as an “offerable service”. The service hosting/execution environment1400ccomprises an edge packet processing element as the core service execution element and one or more surrounding network elements supporting the operation of the edge packet processing element in monitoring the system operation, collecting the system operation related events, alarms, statistics (stats) and notifications, and interfacing to other external network functions such as wireless core interfaces, data network interfaces, enterprise network interfaces, etc. Service execution procedures are the actual execution of the created services within one or more edge packet processing elements. Consider an example where a customer administrator303performs service creation, ordering, and configuration related activities comprising creating an offerable service, ordering a service, and configuring a service using an API client deployed on a customer device303a. 
For creating the offerable service, the customer administrator303first inputs a service specification containing required application logic specifications via an API call to the API server1208. The API server1208then triggers the service builder316to pull relevant application logics from the application logic repository308. The service builder316then builds the offerable service from the relevant application logics and publishes the offerable service to the service repository310. The data plane service implementation (DPSI) platform also allows the customer administrator303to order an offerable service. The customer administrator303orders a created offerable service or an existing offerable service via an API call to the API server1208. The API server1208orchestrates the required offerable service through the service orchestrator317in the service deployment and management environment1400b. The service orchestrator317pulls the required offerable service from the service repository310and deploys the offerable service to the service hosting platform1403in the service hosting/execution environment1400c. The DPSI platform also allows the customer administrator303to configure an offerable service. The customer administrator303configures an offerable service via an API call to the API server1208. The API server1208triggers the service configurator320to configure the offerable service and apply the configuration to the offerable service deployed on the service hosting platform1403in the service hosting/execution environment1400c. FIG.15illustrates a flow diagram of an embodiment of a method for creating an offerable network service package comprising an application logic1505based on a use-case-specific service specification1502for an example customer use case1501. Consider an example of a customer use case1501where a customer requires a mobile quality of service (QoS) class identifier (QCI) traffic handling service. 
The mobile QCI traffic handling service is a well-known service and is, therefore, pre-built and published in the service repository310illustrated inFIG.3. In this example, the customer requires the service to help separate long-term evolution (LTE) control and user plane traffic by delivering control traffic to a mobile core; adhere to user plane traffic mobile QoS settings comprising, for example, QCI, allocation and retention priority (ARP), guaranteed bit rate (GBR)/maximum bit rate (MBR)/aggregate maximum bit rate (AMBR), etc.; and deliver user traffic to a packet data network (PDN) via a packet data network gateway (PGW) accordingly. The customer creates a use-case-specific service specification1502comprising a QCI handling specification and an ARP handling specification via user interfaces rendered on a customer device by the data plane service implementation (DPSI) platform. The GBR/MBR/AMBR settings are handled in accordance with the Third Generation Partnership Project (3GPP) standard. The DPSI platform allows the customer to select/order the pre-built mobile QCI traffic handling service via the service creation application programming interfaces (APIs). In the use-case-specific service specification1502, the customer specifies service capabilities and configures the service according to a local network environment via a provided service template. The API server in the DPSI platform interprets the customer's service selection and the service specifications created by the customer and orchestrates and configures the pre-built service accordingly. As illustrated inFIG.15, the exemplary customer use case1501comprises the following requirements: (1) separate long-term evolution (LTE) control and user traffic; (2) QoS class identifier (QCI) support as per the use-case-specific service specification1502; and (3) allocation and retention priority (ARP) support as per the use-case-specific service specification1502. 
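A use-case-specific service specification of this kind is, in essence, structured data. The sketch below shows one hypothetical shape for the QCI and ARP handling specifications, with a validation step; the field names are assumptions, not the platform's schema. The value ranges are from the 3GPP standard: LTE defines standardized QCIs 1 through 9 (later releases add more) and ARP priority levels 1 through 15.

```python
# Hypothetical use-case-specific service specification for the mobile QCI
# traffic handling example. Field names are illustrative assumptions.

service_spec = {
    "service": "mobile_qci_traffic_handling",
    "qci_handling": {
        # QCI value -> treatment (3GPP standardizes QCIs 1-9 for LTE)
        1: {"resource_type": "GBR",     "priority": 2, "queue": "priority"},
        5: {"resource_type": "non-GBR", "priority": 1, "queue": "priority"},
        9: {"resource_type": "non-GBR", "priority": 9, "queue": "wrr"},
    },
    "arp_handling": {
        # ARP priority levels 1 (highest) through 15 (lowest)
        "priority_levels": list(range(1, 16)),
        "preemption_capability": True,
        "preemption_vulnerability": False,
    },
    # GBR/MBR/AMBR settings are handled per the 3GPP standard, so the
    # specification carries no custom fields for them.
}

def validate_spec(spec):
    """Reject specs with QCI or ARP values outside the 3GPP-defined ranges."""
    for qci in spec["qci_handling"]:
        if not 1 <= qci <= 9:
            raise ValueError(f"QCI {qci} outside LTE range 1-9")
    for level in spec["arp_handling"]["priority_levels"]:
        if not 1 <= level <= 15:
            raise ValueError(f"ARP priority level {level} outside range 1-15")
    return True
```

Validating the specification at creation time, before the API server orchestrates the pre-built service, keeps bad configurations out of the deployed data plane.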
The data plane service implementation (DPSI) platform receives the use-case-specific service specification1502comprising the required customer-specific service settings and configurations from the customer device via the service creation and management APIs201and proceeds to create an offerable network service package based on the received use-case-specific service specification1502. The DPSI platform maintains a pool1503of pre-built, common offerable service templates in the service repository310as illustrated inFIG.3. The common offerable service templates are built by authorized development users, for example, developers, of the DPSI platform. The pool1503of pre-built, common offerable service templates comprises, for example, an LTE traffic processing service template1503aassociated with classification, QCI, ARP, etc., and other offerable service templates1503b. For service creation from the pre-built, common offerable service templates, the DPSI platform applies customer-specific configurations to the pre-built, common offerable service templates. The DPSI platform performs a mapping1504of the service templates to application logics stored in the application logic repository308illustrated inFIG.3, and proceeds to create an offerable network service package from the mapped application logic1505for realizing the customer use case1501. In this example, the application logic1505in the offerable network service package for the above example customer use case1501comprises a protocol-based classification1506, a QCI-based classification1507, and an ARP-based classification1510, all performed, for example, by P4 hardware primitive implementations, with optional queuing and scheduling steps as illustrated inFIG.15. For example, on receiving incoming traffic, the protocol-based classification1506is performed on the incoming traffic by a P4 hardware primitive implementation, for example, on a P4 hardware switch. 
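The classification, queuing, and scheduling stages of this application logic can be modeled in a few lines of software. This is a toy model of the FIG.15 pipeline, not the P4 hardware primitive implementations themselves: the packet fields, the SCTP-based control/user split, the QCI threshold, and the queue weights are all illustrative assumptions.

```python
from collections import deque

# Toy software model of the FIG. 15 pipeline: protocol-based classification
# splits control from user traffic; user traffic is classified by QCI and
# scheduled by weighted round robin; control traffic takes strict priority.

def protocol_classify(packet):
    # Assumption: S1-MME (SCTP) carries LTE control plane; everything else
    # is treated as user plane here.
    return "control" if packet["protocol"] == "sctp" else "user"

def qci_classify(packet):
    # Assumption: lower QCI priority values map to the higher-priority queue.
    return "high" if packet.get("qci", 9) <= 4 else "low"

class WrrScheduler:
    """Weighted round robin over named queues."""
    def __init__(self, weights):
        self.queues = {name: deque() for name in weights}
        self.weights = weights

    def enqueue(self, queue_name, packet):
        self.queues[queue_name].append(packet)

    def drain(self):
        out = []
        while any(self.queues.values()):
            for name, weight in self.weights.items():
                for _ in range(weight):
                    if self.queues[name]:
                        out.append(self.queues[name].popleft())
        return out

def run_pipeline(packets):
    control, scheduler = [], WrrScheduler({"high": 3, "low": 1})
    for pkt in packets:
        if protocol_classify(pkt) == "control":
            control.append(pkt)          # control plane bypasses QCI handling
        else:
            scheduler.enqueue(qci_classify(pkt), pkt)
    # Control traffic gets strict priority over WRR-scheduled user traffic.
    return control + scheduler.drain()
```

In the actual design these stages run as P4 hardware primitives at line rate; the model only makes the ordering decisions visible.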
The output user traffic from the protocol-based classification1506then undergoes the QCI-based classification1507, optional priority queuing1508, and optional priority and/or weighted round robin (WRR) scheduling1509, prior to undergoing the ARP-based classification1510. The QCI-based classification1507and the ARP-based classification1510are both performed by P4 hardware primitive implementations, for example, on a P4 hardware switch. The output user traffic from the ARP-based classification1510then undergoes optional queuing1511and optional WRR scheduling1512, prior to priority queuing1513and priority scheduling1514for passing egress traffic. Furthermore, control traffic output from the protocol-based classification1506also undergoes priority queuing1513and priority scheduling1514for passing egress traffic. FIG.16illustrates a flow diagram of an embodiment of a method for creating an offerable network service package comprising an application logic based on a use-case-specific service specification1602for another example customer use case1601. Consider an example of a customer use case1601where a customer requires a sophisticated, multi-level access control list (ACL), firewall, and correlation-based network access control (NAC) service to control network access security and provide specified quality of service (QoS) handling to the allowed traffic. 
The requirements of the service are disclosed below:

(1) do layer 2 through layer 4 ACL to incoming traffic
1.1 deny identified harmful traffic according to ACL rules
1.2 do additional layer 5-7 deep packet inspection (DPI)-based firewall to qualified traffic that requires this additional handling
1.2.1 deny identified harmful traffic according to DPI-based firewall rules
1.2.2 do additional correlation-based NAC to further qualified traffic that requires this additional handling
1.2.2(a) deny identified harmful access according to NAC authentication and authorization rules
1.2.2(b) apply specified QoS rules to allowed traffic
1.2.1(c) apply specified QoS rules to allowed traffic
1.2.3 apply specified QoS rules to allowed traffic

For this customer use case1601, the data plane service implementation (DPSI) platform allows the customer to build the service via the service creation APIs. To build the service, the customer selects three pre-built application logics and specifies the application logics via provided application logic templates. The customer also configures the service according to a local network environment via a provided service template. The API server in the DPSI platform interprets the customer's service selection and specification and triggers orchestration and configuration of the application logics and the service accordingly. As illustrated inFIG.16, the customer use case1601comprises a requirement for a sophisticated multi-level access control list (ACL), firewall, and network access control (NAC) solution with various traffic shaping and quality of service (QoS) handlings. The data plane service implementation (DPSI) platform receives a use-case-specific service specification1602comprising the required customer-specific service settings and configurations from a customer device via the service creation and management APIs201and proceeds to create an offerable network service package based on the received use-case-specific service specification1602. 
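The nested requirement list above is a decision chain: each level either drops the packet or hands it to the next, stricter check, and allowed traffic receives QoS treatment at whichever level it exits. A minimal sketch of that chain, with illustrative rule predicates and packet fields (the blocked ports, payload signature, and authentication flag are assumptions, not the customer's actual rules):

```python
# Sketch of the nested ACL -> DPI firewall -> NAC decision chain.
# Rule predicates and packet fields are illustrative assumptions.

def acl_l2_l4(packet):
    """Layer 2-4 ACL: deny known-bad destination ports (requirement 1.1)."""
    return "deny" if packet.get("dst_port") in {23, 445} else "allow"

def dpi_firewall(packet):
    """Layer 5-7 DPI-based firewall (requirement 1.2.1)."""
    return "deny" if b"exploit" in packet.get("payload", b"") else "allow"

def correlation_nac(packet):
    """Correlation-based NAC authentication check (requirement 1.2.2(a))."""
    return "allow" if packet.get("authenticated") else "deny"

def process(packet):
    if acl_l2_l4(packet) == "deny":
        return "dropped:acl"
    if packet.get("needs_dpi"):
        if dpi_firewall(packet) == "deny":
            return "dropped:dpi"
        if packet.get("needs_nac"):
            if correlation_nac(packet) == "deny":
                return "dropped:nac"
            return "allowed:qos_nac"      # QoS after full NAC (1.2.2(b))
        return "allowed:qos_dpi"          # QoS after DPI only
    return "allowed:qos_acl"              # QoS after ACL only (1.2.3)
```

The structure makes clear why the platform splits this use case into three application logics: each nesting level has a different inspection cost and so, as described next, runs on a different execution tier.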
In an embodiment, the DPSI platform provides a service building workspace1603to allow the customer to develop their own service template, herein referred to as a customer-built service template1604. The DPSI platform stores the customer-built service template1604in the service repository310illustrated inFIG.3. The DPSI platform performs a mapping1605of the customer-built service template1604to application logics stored in the application logic repository308illustrated inFIG.3. For service creation from pre-built application logics/primitives, mapping of a service to the application logics is required. The DPSI platform proceeds to create an offerable network service package from the mapped application logics, for example, application logic 1, application logic 2, and application logic 3 in accordance with the use-case-specific service specification1602for realizing the customer use case1601as illustrated inFIG.16. As illustrated inFIG.16, the application logic 1 comprises, for example, a header-based classification1607aperformed by a P4 hardware primitive implementation; an ACL1607bbased on layers 2, 3, and 4 metadata; First-in, First-out (FIFO) queuing priority queues1607c; and priority scheduling1607d. A header-based classifier is implemented on a P4 hardware switch1606to perform the header-based classification1607a. The ACL1607bcontains a set of rules that apply to ingress traffic. The P4 hardware switch1606receives the ingress traffic from an ingress network interface controller (NIC)401a. The received ingress traffic undergoes header-based classification1607a, from where the selected packets are forwarded to a P4 software switch1608running on a central processing unit (CPU)402afor execution of the application logic 2. Furthermore, the processed packets from the header-based classifier undergo the rules of the ACL1607bbased on layers 2, 3, and 4 metadata, and thereafter pass through the FIFO queuing priority queues1607c. 
The output packets from the FIFO queuing priority queues1607cundergo priority scheduling1607dand exit to an egress NIC401b. The P4 software switch1608, implemented on a data plane development kit (DPDK)403aor on an operating system (OS) network stack, for example, a Linux® network stack403bas illustrated inFIG.4, receives the selected packets from the header-based classifier implemented on the P4 hardware switch1606, for execution of the application logic 2. As illustrated inFIG.16, the application logic 2 comprises, for example, an additional deep packet inspection (DPI)-based classification1609aperformed by a P4 software primitive implementation, a layer 5-7 firewall1609b, Push-in, First-out (PIFO) queuing fair queues1609c, and round robin (RR) scheduling1609d. A DPI-based classifier is implemented on the P4 software switch1608to perform the DPI-based classification1609a. The selected packets undergo the DPI-based classification1609a, which forwards further selected packets to general software1610running on the CPU402afor execution of the application logic 3. Furthermore, the processed packets from the DPI-based classifier are passed to the layer 5-7 firewall1609b, and thereafter to the PIFO queuing fair queues1609c. The output packets from the PIFO queuing fair queues1609cundergo RR scheduling1609dand pass through the FIFO queuing priority queues1607cin the P4 hardware switch1606, and thereafter undergo priority scheduling1607dprior to exit to the egress NIC401b. The general software1610, implemented on a data plane development kit (DPDK)403aor on an operating system (OS) network stack, for example, a Linux® network stack403bas illustrated inFIG.4, receives the further selected packets from the DPI-based classifier implemented on the P4 software switch1608, for execution of the application logic 3. 
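The hand-off between the three execution tiers — P4 hardware switch1606, P4 software switch1608, and general software1610 — can be summarized in a toy model: each tier fully handles most of its traffic and escalates only the selected subset to the next, slower tier. The selection predicates and packet fields below are illustrative assumptions.

```python
# Toy model of the three-tier offload in FIG. 16. Each tier processes what
# it can and escalates only selected packets. Fields are illustrative.

def hardware_tier(packets):
    """Application logic 1 on the P4 hardware switch: header-based selection."""
    fast_path = [p for p in packets if not p.get("needs_dpi")]
    selected = [p for p in packets if p.get("needs_dpi")]
    return fast_path, selected

def software_tier(packets):
    """Application logic 2 on the P4 software switch: DPI-based selection."""
    done = [p for p in packets if not p.get("needs_nac")]
    selected = [p for p in packets if p.get("needs_nac")]
    return done, selected

def general_software_tier(packets):
    """Application logic 3 in general software: correlation-based NAC."""
    return [p for p in packets if p.get("authenticated")]

def egress(packets):
    # All three tiers feed back through the hardware switch's priority
    # queues before exiting to the egress NIC.
    fast, to_sw = hardware_tier(packets)
    sw_done, to_gen = software_tier(to_sw)
    gen_done = general_software_tier(to_gen)
    return fast + sw_done + gen_done
```

The design choice this models is the usual one for programmable data planes: keep the high-volume path on hardware primitives and pay the CPU cost only for the small fraction of traffic that needs deep inspection or correlation.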
As illustrated inFIG.16, the application logic 3 comprises, for example, an additional correlation-based classification1611aperformed by a general software primitive implementation, correlation-based network access control (NAC)1611b, First-in, First-out (FIFO) queuing priority queues1611c, and weighted round robin (WRR) scheduling1611d. A correlation-based classifier is implemented on the general software1610to perform the correlation-based classification1611a. The further selected packets undergo correlation-based classification1611a. The processed packets from the correlation-based classifier undergo the correlation-based NAC1611band pass through the FIFO queuing priority queues1611c. The output packets from the FIFO queuing priority queues1611cundergo the WRR scheduling1611dand pass through the FIFO queuing priority queues1607cin the P4 hardware switch1606, and thereafter undergo priority scheduling1607dprior to exit to the egress NIC401b. FIG.17illustrates an exemplary screenshot of a graphical user interface (GUI)1700provided by the data plane service implementation (DPSI) platform for implementing packet processing network functions as a service. In an embodiment, the DPSI platform renders a user interface, for example, a GUI1700illustrated inFIG.17, to users for creating offerable network service packages. The GUI1700serves as a service creation interface configured as an overlay of one or more application programming interfaces (APIs). In an embodiment, the DPSI platform renders programmable APIs to users as service creation interfaces. The DPSI platform renders the GUI1700on user devices for creating offerable network service packages. The GUI1700illustrated inFIG.17, displays a service creation workbench with a menu1701comprising packet processing primitives, application logics, service templates, logic connectors, etc., for selection by users for creating the offerable network service packages. 
A user may select one or more packet processing primitives from the menu1701for creation of application logics, for example, application logics 1, 2, and 3 as disclosed in the description ofFIG.16. For example, the user may select packet processing primitives such as a header-based classification1607aor First-in, First-out (FIFO) queuing priority queues1607cand position the packet processing primitives at required positions in the application logic 1 by performing a drag and drop action on the GUI1700. The GUI1700also displays options to set1702parameters for logic connectors between the packet processing primitives in the application logics and to specify1703criteria and parameters for the packet processing primitives. The GUI1700illustrated inFIG.17displays the application logics 1, 2, and 3 created by a user based on a use-case-specific service specification for creation of an offerable network service package for the example customer use case disclosed in the description ofFIG.16. The GUI1700is, for example, any one of an online web interface, a web-based downloadable application interface, a mobile-based downloadable application interface, etc. In an embodiment, the GUI1700is a user interface, for example, a webpage, rendered by the computing and networking server(s) of the DPSI platform. In this embodiment, the GUI1700displays a menu of packet processing primitives, application logics, services, logic connectors, etc., for selection by users as illustrated inFIG.17, for creating offerable network service packages. FIG.18illustrates an architectural block diagram of an exemplary implementation of the system300for implementing packet processing network functions as a service. 
The system300disclosed herein comprises multiple computing and networking servers1804that constitute the data plane service implementation (DPSI) platform along with the application programming interface (API) server1208, the primitive repository305, the application logic repository308, and the service repository310. The computing and networking servers1804of the DPSI platform are accessible to users through a broad spectrum of technologies and user devices, for example, a developer device301a, a field engineer device302a, a customer device303a, etc. Each computing and networking server1804is responsible for a particular portion of the DPSI platform procedures and functions as a backend enabler of the corresponding frontend APIs201. The computing and networking servers1804of the DPSI platform are accessible to the user devices301a,302a, and303avia a network1803. The user devices301a,302a, and303aare electronic devices, for example, one or more of personal computers with access to the internet, tablet computing devices, mobile computers, mobile phones, internet-enabled cellular phones, smartphones, portable computing devices, laptops, personal digital assistants, touch-centric devices, workstations, client devices, portable electronic devices, network-enabled computing devices, interactive network-enabled communication devices, devices capable of running web browsers, any other suitable computing equipment, combinations of multiple pieces of computing equipment, etc. In an embodiment, the user devices, for example, the field engineer device302aand the customer device303a, access the computing and networking servers1804of the DPSI platform via APIs201, for example, service creation and management APIs, as disclosed in the descriptions ofFIGS.2-4andFIGS.15-16. The APIs201are implemented to allow creation, customization, and configuration of network service-supporting application logics and network service packages. 
The user devices302aand303acomprise API clients1801and1802, respectively, configured to communicate with the API server1208via the APIs201as illustrated inFIG.18. Each user may log in to the DPSI platform, for example, using their unique identifier or any other authentication mechanism provided by the system300. In an embodiment, the computing and networking servers1804of the DPSI platform access the primitive repository305, the application logic repository308, and the service repository310via a network1813. In an embodiment, the DPSI platform is implemented in a cloud computing environment. In another embodiment, the DPSI platform is a cloud-based platform implemented as a service for implementing packet processing network functions as a service. For example, the DPSI platform is configured as a software as a service (SaaS) platform or a cloud-based software as a service (CSaaS) platform that implements packet processing network functions as a service. In an embodiment, the data plane service implementation (DPSI) platform is configured as a server or a network of servers in a cloud computing platform, for example, the Amazon Web Services (AWS®) platform of Amazon Technologies, Inc., the Microsoft Azure® platform of Microsoft Corporation, etc. In another embodiment, the DPSI platform is configured as a cluster of computing and networking servers1804that is maintained at a fixed location. In another embodiment, the DPSI platform is implemented locally as an on-premise platform comprising on-premise software installed and run on client systems on the premises of an organization to meet privacy and security requirements. The networks1803and1813that connect the user devices301a,302a, and303aand the repositories305,308, and310, respectively, to the computing and networking servers1804of the data plane service implementation (DPSI) platform are short-range networks or long-range networks. 
For example, each of the networks1803and1813is any one of: the internet, satellite internet, an intranet, a wired network, a wireless network, a communication network that implements Bluetooth® of Bluetooth Sig, Inc., a network that implements Wi-Fi® of Wi-Fi Alliance Corporation, an ultra-wideband (UWB) communication network, a wireless universal serial bus (USB) communication network, a communication network that implements ZigBee® of ZigBee Alliance Corporation, a general packet radio service (GPRS) network, a mobile telecommunication network such as a global system for mobile (GSM) communications network, a code division multiple access (CDMA) network, a third generation (3G) mobile communication network, a fourth generation (4G) mobile communication network, a fifth generation (5G) mobile communication network, a long-term evolution (LTE) mobile communication network, a public telephone network, etc., a local area network, a wide area network, an internet connection network, an infrared communication network, etc., or a network formed from any combination of these networks. The computing and networking servers1804of the data plane service implementation (DPSI) platform interface with the user devices301a,302a, and303a, networking elements on which offerable network service packages are deployed, and in an embodiment, with one or more database systems comprising the repositories305,308, and310to implement the packet processing network functions as a service, and therefore more than one specifically programmed computing system is used for implementing the packet processing network functions as a service. One of the networking elements is, for example, a networking/computing hardware platform318as illustrated inFIG.18. 
In an embodiment, the computing and networking servers1804, the user devices301a,302a, and303a, the repositories305,308, and310, and the networking elements on which the offerable network service packages are deployed, constitute interconnected components of the system300that are deployed at different locations, but all coordinate with each other through the networks1803and1813, to provide an improvement in implementing packet processing network functions. Each of the computing and networking servers1804of the data plane service implementation (DPSI) platform is a computer system programmable using high-level computer programming languages. In an exemplary embodiment illustrated inFIG.18, each of the computing and networking servers1804of the DPSI platform comprises at least one processor1805and a memory unit1810. The memory unit1810is operably and communicatively coupled to the processor(s)1805and configured to store the computer program instructions defined by one or more modules1811of the DPSI platform. The modules of the DPSI platform comprise a primitive creation module, an application logic creation module, and a service creation module configured to define computer program instructions executable by the processor(s)1805. The primitive creation module, the application logic creation module, and the service creation module are also referred to as the primitive builder306, the application logic builder1402, and the service builder316, respectively, and are implemented in the service creation environment1400aas illustrated inFIGS.14A-14C. The modules of the DPSI platform further comprise the service orchestrator317and the service configurator320configured to define computer program instructions executable by the processor(s)1805. The service orchestrator317and the service configurator320are implemented in the service deployment and management environment1400bas illustrated inFIG.14C. 
In an embodiment, multiple databases comprising the primitive repository305, the application logic repository308, and the service repository310are remotely accessible to the computing and networking servers1804of the data plane service implementation (DPSI) platform via the network1813as illustrated inFIG.18. In another embodiment (not shown), the repositories305,308, and310are stored locally in the memory unit(s)1810of one or more of the computing and networking servers1804of the DPSI platform. The primitive repository305stores primitive templates and multiple packet processing primitives comprising, for example, pre-built packet processing primitives, provided by the DPSI platform as building blocks for customer use cases. The application logic repository308stores application logics, for example, pre-built application logics, custom-built application logics developed by field engineers, etc. The service repository310stores services comprising, for example, pre-built services, custom-built services, and customer-built services that constitute the offerable network service packages. Each of the repositories305,308, and310is a storage area or a storage medium configured to be used for storing data and files. Each of the repositories305,308, and310can be, for example, any of a structured query language (SQL) data store or a not only SQL (NoSQL) data store such as the Microsoft® SQL Server®, the Oracle® servers, the MySQL® database of MySQL AB Limited Company, the mongoDB® of MongoDB, Inc., the Neo4j graph database of Neo Technology Corporation, the Cassandra database of the Apache Software Foundation, the HBase® database of the Apache Software Foundation, etc. In an embodiment, each of the repositories305,308, and310can also be a location on a file system. In another embodiment, each of the repositories305,308, and310is configured as a cloud-based database implemented in a cloud computing environment. 
The memory unit1810is a storage unit used for recording, storing, and reproducing data, program instructions, and applications. In an embodiment, the memory unit1810comprises a random-access memory (RAM) or another type of dynamic storage device that serves as a read and write internal memory and provides short-term or temporary storage for information and instructions executable by the processor(s)1805. The memory unit1810also stores temporary variables and other intermediate information used during execution of the instructions by the processor(s)1805. In another embodiment, the memory unit1810further comprises a read-only memory (ROM) or another type of static storage device that stores firmware, static information, and instructions for execution by the processor(s)1805. In an embodiment, the platform modules1811, namely, the primitive builder306, the application logic builder1402, the service builder316, the service orchestrator317, and the service configurator320, are stored in the memory unit(s)1810of any one or more of the computing and networking servers1804of the data plane service implementation (DPSI) platform. For purposes of illustration, the platform module(s)1811is exemplarily shown to be a part of an in-memory system of the computing and networking server1804; however, the scope of the system300disclosed herein is not limited to the platform module(s)1811being part of an in-memory system, but extends to the platform module(s)1811being distributed across a cluster of multiple computer systems, for example, computers, servers, virtual machines, containers, nodes, etc., coupled to the network1813, where the computer systems operate as a team and coherently communicate and coordinate with each other to share resources, distribute workload, and execute different portions of the logic to implement packet processing network functions as a service. 
Each computer system in the cluster executes a part of the logic, and coordinates with other computer systems in the cluster to provide the complete functionality of the system300and the method disclosed herein. The processor(s)1805in any one or more of the computing and networking servers1804is configured to execute the platform module(s)1811for implementing packet processing network functions as a service. The platform module(s)1811, when loaded into the memory unit1810and executed by the processor(s)1805, transforms the corresponding computing and networking server1804into a specially-programmed, special purpose computing device configured to implement the functionality disclosed herein. The processor(s)1805refers to one or more microprocessors, central processing unit (CPU) devices, finite state machines, computers, microcontrollers, digital signal processors, logic, logic devices, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), chips, etc., or any combination thereof, capable of executing computer programs or a series of commands, instructions, or state transitions. In an embodiment, the processor(s)1805is implemented as a processor set comprising, for example, a programmed microprocessor and a math or graphics co-processor. The computing and networking server(s)1804is not limited to employing the processor(s)1805. In an embodiment, the computing and networking server(s)1804employs a controller or a microcontroller. Also illustrated inFIG.18is a data bus1812, a display unit1806, a network interface1807, one or more storage devices1808, and common modules1809of the computing and networking server(s)1804. The data bus1812permits communications and exchange of data between the components, for example,1805,1806,1807,1808,1809, and1810of the computing and networking server(s)1804. The data bus1812transfers data to and from the memory unit1810and into or out of the processor(s)1805. 
The display unit1806, via a user interface, for example, a graphical user interface (GUI), displays user interface elements such as input fields, menus, etc., for allowing a user, for example, a developer, a field engineer, etc., of the data plane service implementation (DPSI) platform to input data into the DPSI platform. The network interface1807is configured to connect the computing and networking server(s)1804of the data plane service implementation (DPSI) platform to the networks1803and1813. In an embodiment, the network interface1807is provided as an interface card also referred to as a line card. The network interface1807is, for example, one or more of infrared interfaces, interfaces implementing Wi-Fi® of Wi-Fi Alliance Corporation, universal serial bus (USB) interfaces, Ethernet interfaces, frame relay interfaces, cable interfaces, digital subscriber line interfaces, token ring interfaces, peripheral component interconnect (PCI) interfaces, local area network (LAN) interfaces, wide area network (WAN) interfaces, interfaces using serial protocols, interfaces using parallel protocols, asynchronous transfer mode interfaces, fiber distributed data interfaces (FDDI), interfaces based on transmission control protocol (TCP)/internet protocol (IP), interfaces based on wireless communications technology such as satellite technology, radio frequency technology, near field communication, etc. The storage device(s)1808comprise non-transitory, computer-readable storage media, for example, fixed media drives such as hard drives for storing an operating system, application programs, data files, etc.; removable media drives for receiving removable media; etc. As used herein, “non-transitory, computer-readable storage medium” refers to all computer-readable media that contain and store computer programs and data. 
Examples of the computer-readable media comprise hard drives, solid state drives, optical discs or magnetic disks, memory chips, a read-only memory (ROM), a register memory, a processor cache, a random-access memory (RAM), etc. The common modules1809of the DPSI platform comprise, for example, input/output (I/O) controllers, input devices, output devices, etc. The output devices output the results of operations performed by the platform module(s)1811. Computer applications and programs are used for operating the platform module(s)1811. The programs are loaded onto fixed media drives and into the memory unit1810via the removable media drives. In an embodiment, the computer applications and the programs are loaded into the memory unit1810directly via the network1813. The platform module(s)1811is deployed and implemented in the computing and networking server(s)1804of the data plane service implementation (DPSI) platform using programmed and purposeful hardware as exemplarily illustrated inFIG.18. In an embodiment, the platform modules1811are computer-embeddable systems that implement packet processing network functions as a service. As disclosed above, the platform modules1811comprise the primitive creation module, the application logic creation module, the service creation module, the service orchestrator317, and the service configurator320. The primitive creation module, also referred to as the primitive builder306, defines computer program instructions for creating a packet processing primitive by applying a use-case-specific primitive specification to a corresponding one of multiple primitive templates retrieved from the primitive repository305, and for storing the created packet processing primitive in the primitive repository305as disclosed in the descriptions ofFIGS.1A-1B,FIGS.11-12, andFIGS.14A-14B. 
For the creation of the packet processing primitive, the primitive creation module creates multiple primitive templates and stores the created primitive templates in the primitive repository305; renders a first user interface for creating the use-case-specific primitive specification; retrieves one of the created primitive templates corresponding to the use-case-specific primitive specification, from the primitive repository305; and creates the packet processing primitive by applying the use-case-specific primitive specification to the retrieved primitive template. In an embodiment, the primitive creation module provides user interfaces that allow users, for example, development engineers, to develop and store primitive templates as software functional modules and/or as configurations of hardware functional modules in the primitive repository305. The primitive creation module allows an authorized development user, for example, a developer using the developer device301a, to develop the use-case-specific primitive specification via the first user interface. Through the primitive creation module, the developers create and store packet processing primitives into the primitive repository305, customize packet processing primitives, and select one or more of the packet processing primitives from the primitive repository305as building blocks for a customer use case based on user input. The application logic creation module, also referred to as the application logic builder1402, defines computer program instructions for creating an application logic based on a use-case-specific application logic specification using one or more of multiple packet processing primitives corresponding to the use-case-specific application logic specification, and for storing the created application logic in the application logic repository308as disclosed in the descriptions ofFIGS.1A-1B,FIGS.11-12, andFIGS.14A-14B. 
For the creation of the application logic, the application logic creation module renders a second user interface for creating the use-case-specific application logic specification; retrieves one or more of the packet processing primitives corresponding to the use-case-specific application logic specification, from the primitive repository305; and creates the application logic based on the use-case-specific application logic specification using the retrieved packet processing primitive(s). In an embodiment, the application logic creation module allows users, for example, a developer using a developer device301aand/or a field engineer using a field engineer device302a, to develop the use-case-specific application logic specification via the second user interface. In another embodiment, the field engineer inputs the use-case-specific application logic specification into the application logic creation module via an API call to the API server1208. In an embodiment, the application logic creation module provides user interfaces that allow users, for example, authorized development users such as developers and authorized administrators such as field engineers to create application logics using pre-built application logics stored in the application logic repository308. The application logic creation module creates application logics as internal implementations of a whole data plane packet processing network function required by one or more user-specific data plane packet processing services, using the created packet processing primitive(s) for realizing the customer use case. In an embodiment, the application logic creation module allows users, for example, development engineers, to create the application logics by direct software development and store the created application logics in the application logic repository308. 
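The composition of an application logic from packet processing primitives, as described above, can be sketched as a chain of processing stages. The stages, packet fields, and port value below are hypothetical examples, not the platform's actual primitives.

```python
# Hypothetical sketch: an application logic is composed from one or
# more packet processing primitives, each modeled here as a callable
# stage applied to a packet dict in order. Illustrative only.

def drop_fragments(packet):
    # Primitive: discard fragmented packets.
    return None if packet.get("fragment") else packet

def mark_priority(packet):
    # Primitive: tag control-plane traffic (e.g. BGP on port 179)
    # as high priority.
    if packet.get("port") == 179:
        packet["priority"] = "high"
    return packet

def build_application_logic(primitives):
    """Chain primitives into a single packet processing function."""
    def logic(packet):
        for stage in primitives:
            packet = stage(packet)
            if packet is None:  # an earlier stage dropped the packet
                return None
        return packet
    return logic

logic = build_application_logic([drop_fragments, mark_priority])
out = logic({"port": 179, "fragment": False})
```

The application logic itself stores no packet handling code of its own; it is entirely a composition of the selected primitives, which is the building-block relationship the passage describes.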
In an embodiment, the application logic creation module allows field engineers to create application logics via one or more of the APIs201, for example, application logic creation APIs, and store the created application logics in the application logic repository308. The service creation module, also referred to as the service builder316, defines computer program instructions for creating a network service package comprising one or more application logics corresponding to a use-case-specific service specification, and storing the created network service package in the service repository310as disclosed in the descriptions ofFIGS.1A-1B,FIGS.11-13, andFIGS.14A-14C. For the creation of the network service package, the service creation module renders a third user interface for creating the use-case-specific service specification containing application logic specifications; retrieves one or more application logics corresponding to the use-case-specific service specification, from the application logic repository308; and creates the network service package comprising the retrieved application logic(s). In an embodiment, the service creation module allows a user, for example, a developer using a developer device301a, a field engineer using a field engineer device302a, and/or a customer using a customer device303a, to develop the use-case-specific service specification via the third user interface. In another embodiment, the field engineer and/or the customer inputs the use-case-specific service specification into the service creation module via an API call to the API server1208. In an embodiment, the service creation module maps a pre-built service, and/or a custom-built service, and/or a customer-built service stored in the service repository310to the created application logic(s). In an embodiment, the custom-built service and the customer-built service are typically developed using one or more of the APIs201. 
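The service-building step described above, in which application logics matching a service specification are retrieved and bundled, can be sketched as follows. The repository contents and specification format are hypothetical illustrations.

```python
# Hypothetical sketch of a service builder: a use-case-specific service
# specification names the application logics it needs; the builder
# retrieves them from a repository and bundles them into a network
# service package. All names and fields are illustrative.

application_logic_repository = {
    "firewall_logic": {"name": "firewall_logic", "version": 2},
    "nat_logic": {"name": "nat_logic", "version": 1},
    "lb_logic": {"name": "lb_logic", "version": 3},
}

def create_service_package(service_spec):
    """Bundle the application logics listed in the spec into a package."""
    missing = [n for n in service_spec["logics"]
               if n not in application_logic_repository]
    if missing:
        raise KeyError(f"unknown application logics: {missing}")
    return {
        "service": service_spec["name"],
        "logics": [application_logic_repository[n]
                   for n in service_spec["logics"]],
    }

pkg = create_service_package({
    "name": "edge_security_service",
    "logics": ["firewall_logic", "nat_logic"],
})
```

Validating the specification against the repository before bundling reflects the text's point that the service specification is interpreted and mapped to already-created application logics rather than to arbitrary code.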
In an embodiment, the service creation module provides user interfaces that allow users, for example, software engineers, to create offerable services by direct software development. In another embodiment, the service creation module allows users, for example, field engineers or customer administrators, to create offerable services via one or more of the APIs201, for example, service creation APIs, based on use-case-specific service specifications. The API server1208of the data plane service implementation (DPSI) platform comprises at least one processor communicatively coupled to the APIs201and operably coupled to the computing and networking server(s)1804of the DPSI platform. The API server1208receives and handles calls to the APIs201. The API server1208interprets each of the use-case-specific primitive specification, the use-case-specific application logic specification, and the use-case-specific service specification inputted into the primitive creation module, the application logic creation module, and the service creation module, respectively, via the calls to one or more of the APIs201. The API server1208derives primitive specifications from the use-case-specific application logic specification and the use-case-specific service specification. The API server1208triggers the primitive creation module, the application logic creation module, and the service creation module to create the packet processing primitive, the application logic, and the network service package, respectively, based on the use-case-specific primitive specification, the use-case-specific application logic specification, and the use-case-specific service specification, respectively. The API server1208responds to the calls to the APIs201with a status message, for example, a success or fail message. The service orchestrator317of the data plane service implementation (DPSI) platform is operably coupled to the API server1208as illustrated inFIG.14C. 
The service orchestrator317orchestrates the network service package based on an order received from a customer device303avia an API call; retrieves the network service package from the service repository310; and deploys the retrieved network service package to a service hosting platform, for example, the networking/computing hardware platform318. The service configurator320of the DPSI platform is operably coupled to the API server1208. The service configurator320receives a configuration for the network service package from a customer device303avia an API call; in response to a trigger from the API server1208, configures the network service package based on the received configuration; and applies the configuration to the network service package deployed on the service hosting platform. The service hosting platform, for example, the networking/computing hardware platform318, is configured to execute packet processing network functions of the network service package. The processor(s)1805in the computing and networking server(s)1804of the data plane service implementation (DPSI) platform retrieves the computer program instructions defined by the primitive creation module, the application logic creation module, the service creation module, the service orchestrator317, and the service configurator320from the memory unit1810for executing the respective functions disclosed above. Each platform module(s)1811in the computing and networking server(s)1804is disclosed above as software executed by the processor(s)1805. In an embodiment, the platform modules1811are implemented completely in hardware. In another embodiment, the platform modules1811are implemented by logic circuits to carry out their respective functions disclosed above. In another embodiment, the DPSI platform is also implemented as a combination of hardware and software including one or more processors, for example,1805, that are used to implement the platform modules1811. 
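The orchestration and configuration sequence described above — order received, package retrieved from the service repository, package deployed to the hosting platform, configuration applied via a later API call — can be sketched end to end. The deployment-identifier scheme and configuration keys below are hypothetical.

```python
# Hypothetical end-to-end sketch of the orchestrator/configurator flow.
# An order triggers retrieval of a network service package from the
# service repository and deployment to a hosting platform; a separate
# API call later applies a customer configuration. Illustrative only.

service_repository = {
    "edge_service": {"name": "edge_service", "logics": ["nat_logic"]},
}
hosting_platform = {}  # deployed services keyed by deployment id

def orchestrate(order):
    """Deploy the ordered network service package to the hosting platform."""
    package = service_repository[order["service"]]
    deployment_id = f'{order["customer"]}/{package["name"]}'
    hosting_platform[deployment_id] = {"package": package, "config": {}}
    return deployment_id

def configure(deployment_id, config):
    """Apply a customer-supplied configuration to a deployed package."""
    hosting_platform[deployment_id]["config"].update(config)

dep = orchestrate({"customer": "acme", "service": "edge_service"})
configure(dep, {"max_sessions": 10000})
```

Keeping `orchestrate` and `configure` as separate entry points mirrors the division of labor in the text: the orchestrator deploys, and the configurator independently applies configurations to the already-deployed package.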
In an embodiment, the platform module(s)1811is run locally on a single computing and networking server1804of the data plane service implementation (DPSI) platform; however, the scope of the system300and the method disclosed herein is not limited to the platform module(s)1811being run locally on a single computing and networking server1804via an operating system and the processor(s)1805, but extends to running the platform module(s)1811remotely over the network1813by employing a web browser, one or more remote servers1804, computers, mobile phones, and/or other electronic devices. In an embodiment, one or more modules, databases, processing elements, memory elements, storage elements, etc., of the system300disclosed herein are distributed across a cluster of computer systems (not shown), for example, computers, servers, virtual machines, containers, nodes, etc., coupled to the network1813, where the computer systems coherently communicate and coordinate with each other to share resources, distribute workload, and execute different portions of the logic to implement packet processing network functions as a service. Disclosed herein is also a non-transitory, computer-readable storage medium configured to store computer program instructions executable by the processor(s)1805for implementing packet processing network functions as a service. The computer program instructions implement the processes of various embodiments disclosed above and perform additional steps that may be required and contemplated for implementing packet processing network functions as a service. When the computer program instructions are executed by the processor(s)1805, the computer program instructions cause the processor(s)1805to perform the steps of the method for implementing packet processing network functions as a service as disclosed in the descriptions ofFIGS.1A-17. 
In an embodiment, a single piece of computer program code comprising computer program instructions performs one or more steps of the method disclosed in the descriptions ofFIGS.1A-17. The processor(s)1805retrieves these computer program instructions and executes them. A module, or an engine, or a unit, as used herein, refers to any combination of hardware, software, and/or firmware. As an example, a module, or an engine, or a unit includes hardware such as a microcontroller, associated with a non-transitory, computer-readable storage medium to store computer program codes adapted to be executed by the microcontroller. Therefore, references to a module, or an engine, or a unit, in an embodiment, refer to the hardware that is specifically configured to recognize and/or execute the computer program codes to be held on a non-transitory, computer-readable storage medium. In an embodiment, the computer program codes comprising computer-readable and executable instructions are implemented in any programming language, for example, programming protocol-independent packet processors (P4), C, C++, C#, Java®, JavaScript®, Fortran, Ruby, Perl®, Python®, hypertext preprocessor (PHP), Microsoft® .NET, Objective-C®, etc., for performing one or more steps of the method disclosed in the descriptions ofFIGS.1A-17. In another embodiment, other object-oriented, functional, scripting, and/or logical programming languages are also used. In an embodiment, the computer program codes or software programs are stored on or in one or more mediums as object code. In another embodiment, the term "module" or "engine" or "unit" refers to the combination of the microcontroller and the non-transitory, computer-readable storage medium. Boundaries between modules, engines, or units that are illustrated as separate commonly vary and potentially overlap. 
For example, a module or an engine or a unit may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware. In various embodiments, a module or an engine or a unit includes any suitable logic. The system300and the method disclosed herein provide an improvement in implementing packet processing network functions with customer-facing API-centric customizability. In the system300and the method disclosed herein, the design and the flow of interactions between the primitive builder306, the application logic builder1402, the service builder316, the service orchestrator317, and the service configurator320of the data plane service implementation (DPSI) platform illustrated inFIG.3andFIGS.14A-14C, are deliberate, designed, and directed. Every use-case-specific primitive specification, use-case-specific application logic specification, and use-case-specific service specification received by the DPSI platform via one or more user interfaces are configured by the DPSI platform to steer the input towards a finite set of outcomes. Furthermore, every selection of a packet processing primitive, and/or an application logic, and/or a service received by the DPSI platform via the GUI1700illustrated inFIG.17, is configured by the DPSI platform to steer the input towards a finite set of outcomes. The computing and networking server(s)1804of the DPSI platform implements one or more specific computer programs to direct each input towards a set of end results. 
The interactions designed by the DPSI platform allow users to create packet processing primitives by applying use-case-specific primitive specifications to corresponding primitive templates retrieved from the primitive repository305; create application logics based on use-case-specific application logic specifications using one or more of the packet processing primitives corresponding to the use-case-specific application logic specifications; and from these application logics, through the use of other, separate and autonomous computer programs, create offerable network service packages comprising one or more of the application logics corresponding to use-case-specific service specifications. The creation or selection of the packet processing primitives and the application logics constituted therefrom are used as triggers to create offerable network service packages. Performing the above disclosed method steps requires multiple separate computer programs and subprograms, the execution of which cannot be performed by a person using a generic computer with a generic program. The focus of the system300and the method disclosed herein is on an improvement to the implementation of packet processing functions with customer-facing API-centric customizability itself, and not on economic or other tasks for which a generic computer is used in its ordinary capacity. Accordingly, the system300and the method disclosed herein are not directed to an abstract idea. 
Rather, the system300and the method disclosed herein are directed to a specific improvement to the way the packet processing primitives, application logics, the services, and the APIs201in the system300operate, embodied in, for example, creating packet processing primitives by applying use-case-specific primitive specifications to corresponding primitive templates retrieved from the primitive repository305; creating application logics based on use-case-specific application logic specifications using one or more of the packet processing primitives corresponding to the use-case-specific application logic specifications; creating offerable network service packages comprising one or more of the application logics corresponding to use-case-specific service specifications, etc. The system300and the method disclosed herein implement packet processing network functions as services at a more granular level, allowing customers to create, customize, and configure their specific data plane packet processing network solutions from end to end through APIs201manually or programmatically. The system300and the method disclosed herein allow customers to purchase offerable network service packages comprising one or more application logics created from one or more pre-built packet processing primitives, which increases profitability, improves customer engagement, and invokes new lines of business. The data plane service implementation (DPSI) platform provides a computing platform for packet processing solutions and transforms network services into services that may be reused, recycled, repaired, and redistributed. The flexible implementation of the architecture400illustrated inFIG.4, is capable of satisfying the large and also ever-increasing diversity in customer demands, for example, in traffic classification, quality of service (QoS) support, security, availability, and reliability, from both a coverage dimension and a life span dimension. 
The system300and the method disclosed herein leverage software intermediaries, for example, the APIs201, for providing functional modules as a service and allowing customers to build, customize, and configure their respective solutions. The DPSI platform allows development of customized solutions based on variable dimensions and different specific requirements from specific customers in a highly performing integrated manner that satisfies all potential and ever-changing needs of the customers. The system300and the method disclosed herein ensure high performance, while implementing the packet processing functions in programmable hardware and/or software and/or combined hardware and software forms and integrating these packet processing functions as desired. It is apparent in different embodiments that the various methods, algorithms, and computer-readable programs disclosed herein are implemented on non-transitory, computer-readable storage media appropriately programmed for computing devices. The non-transitory, computer-readable storage media participate in providing data, for example, instructions that are read by a computer, a processor, or a similar device. In different embodiments, the “non-transitory, computer-readable storage media” also refer to a single medium or multiple media, for example, a centralized database, a distributed database, and/or associated caches and servers that store one or more sets of instructions that are read by a computer, a processor, or a similar device. The “non-transitory, computer-readable storage media” also refer to any medium capable of storing or encoding a set of instructions for execution by a computer, a processor, or a similar device and that causes a computer, a processor, or a similar device to perform any one or more of the steps of the method disclosed herein. 
In an embodiment, the computer programs that implement the methods and algorithms disclosed herein are stored and transmitted using a variety of media, for example, the computer-readable media in various manners. In an embodiment, hard-wired circuitry or custom hardware is used in place of, or in combination with, software instructions for implementing the processes of various embodiments. Therefore, the embodiments are not limited to any specific combination of hardware and software. Various aspects of the embodiments disclosed herein are implemented in a non-programmed environment comprising documents created, for example, in a hypertext markup language (HTML), an extensible markup language (XML), or other format that render aspects of a graphical user interface (GUI) or perform other functions, when viewed in a visual area or a window of a browser program. Various aspects of the embodiments disclosed herein are implemented as programmed elements, or non-programmed elements, or any suitable combination thereof. Where databases are described such as the primitive repository305, the application logic repository308, and the service repository310illustrated inFIG.3,FIGS.11-13,FIGS.14A-14C, andFIG.18, it will be understood by one of ordinary skill in the art that (i) alternative database structures to those described may be employed, and (ii) other memory structures besides databases may be employed. Any illustrations or descriptions of any sample databases disclosed herein are illustrative arrangements for stored representations of information. In an embodiment, any number of other arrangements are employed besides those suggested by tables illustrated in the drawings or elsewhere. In another embodiment, despite any depiction of the databases as tables, other formats including relational databases, object-based models, and/or distributed databases are used to store and manipulate the data types disclosed herein. 
In an embodiment, object methods or behaviors of a database are used to implement various processes such as those disclosed herein. In another embodiment, the databases are, in a known manner, stored locally or remotely from a device that accesses data in such a database. In embodiments where there are multiple databases, the databases are integrated to communicate with each other for enabling simultaneous updates of data linked across the databases, when there are any updates to the data in one of the databases. The embodiments disclosed herein are configured to operate in a network environment comprising one or more computers that are in communication with one or more devices via a network. In an embodiment, the computers communicate with the devices directly or indirectly, via a wired medium or a wireless medium such as the Internet, satellite internet, a local area network (LAN), a wide area network (WAN) or the Ethernet, or via any appropriate communications mediums or combination of communications mediums. Each of the devices comprises processors that are adapted to communicate with the computers. In an embodiment, each of the computers is equipped with a network communication device, for example, a network interface card, a modem, or other network connection device suitable for connecting to a network. Each of the computers and the devices executes an operating system. While the operating system may differ depending on the type of computer, the operating system provides the appropriate communications protocols to establish communication links with the network. Any number and type of machines may be in communication with the computers. The embodiments disclosed herein are not limited to a particular computer system platform, processor, operating system, or network. 
One or more of the embodiments disclosed herein are distributed among one or more computer systems, for example, servers configured to provide one or more services to one or more client computers, or to perform a complete task in a distributed system. For example, one or more of the embodiments disclosed herein are performed on a client-server system that comprises components distributed among one or more server systems that perform multiple functions according to various embodiments. These components comprise, for example, executable, intermediate, or interpreted code, which communicate over a network using a communication protocol. The embodiments disclosed herein are not limited to being executable on any particular system or group of systems, and are not limited to any particular distributed architecture, network, or communication protocol. The foregoing examples and illustrative implementations of various embodiments have been provided merely for explanation and are in no way to be construed as limiting the embodiments disclosed herein. While the embodiments have been described with reference to various illustrative implementations, drawings, and techniques, it is understood that the words, which have been used herein, are words of description and illustration, rather than words of limitation. Furthermore, although the embodiments have been described herein with reference to particular means, materials, techniques, and implementations, the embodiments herein are not intended to be limited to the particulars disclosed herein; rather, the embodiments extend to all functionally equivalent structures, methods and uses, such as are within the scope of the appended claims. 
It will be understood by those skilled in the art, having the benefit of the teachings of this specification, that the embodiments disclosed herein are capable of modifications and other embodiments may be effected and changes may be made thereto, without departing from the scope and spirit of the embodiments disclosed herein.
U.S. Pat. No. 11,943,112
DETAILED DESCRIPTION Certain aspects and features relate to a customizable user interface for providing customized access to holistic build data about an overall computer network for controlling capacity of the overall computer network. The holistic build data can be generated by aggregating one or more sets of network build sheets representing subparts (e.g., subnets) of the overall computer network. The network build sheets may each model or otherwise represent a proposed network architecture for at least a portion of the overall computer network. A proposed network architecture can be different from an existing network architecture, in that a proposed network architecture can be a network architecture that is to be constructed at a future point in time to achieve a goal. A proposed network architecture can include a proposed hardware architecture for the corresponding subpart of the overall computer network. In some examples, the network build sheets can be generated by one or more system architects using a network design tool that allows the system architect to generate the proposed network architecture. Once the network build sheets have been created and processed by the system, the customizable user interface may be accessed by a user, such as a system architect, a manager or executive, or other suitable entity for viewing at least a portion of the holistic build data. The user interface may be configured to display a customized graphical representation of at least a portion of the holistic build data. The graphical user interface can be customized based on the access level of the user. For example, a first user with a first level of access may access the user interface having a first graphical representation of the holistic build data, and a second user with a second level of access may access the user interface having a second graphical representation of the holistic build data that is different than the first graphical representation, etc. 
The customized graphical representation may be used to control a hardware acquisition process for acquiring computer hardware to meet the computing demand anticipated for the proposed network architectures of the overall computer network. A computer network can undergo various changes through time. For example, an amount of computing demand may increase as the number of entities that use the computer network also increases. Determining projected or proposed architectures for the computer network can facilitate meeting the projected or otherwise changing computing demand. The system architects may tend to work on their own parts of the project isolated from the other system architects, which can create problems. For example, designs may not be compatible with one another, and this isolation makes it difficult to obtain any visibility into the overall consumption of the computer network as a whole. For example, each system architect may be assigned a different subset (e.g., subnet, subpart, etc.) of the computer network, and each proposed network architecture may correspond to the different subset of the computer network. A system can be used to generate and provide holistic build data based on the proposed network architectures. For example, the system can receive network build sheets based on proposed network architectures from a set of different system architects or other suitable entities, and the system can aggregate the proposed network architectures to generate the holistic build data about the computer network. The system architects can generate the network build sheets using a network design tool, which can then automatically generate the network build sheets based on the network architecture designs. For example, a system architect can use the network design tool to design a proposed network architecture for a subpart of an overall computer network based on certain project specifications. 
The design process may be a dynamic one in which the system architect is designing and refining the proposed network architecture over time. As the system architect continues the design process, the network design tool can automatically generate and store corresponding build sheets at various checkpoints throughout the process. These build sheets can be considered partial build sheets, because they may be incomplete and represent only a part of the proposed network architecture. As a result of this dynamic process, a single proposed network architecture may have multiple corresponding build sheets generated and stored at different points along the design process. In some examples, the system can aggregate such build sheets corresponding to one or more proposed network architectures and use them to construct holistic build data. The holistic build data may indicate important information about the overall computer network, such as a proposed network architecture of the overall computer network, projected computing demand of the overall computer network, or a combination thereof. The system can provide the holistic build data using a graphical user interface, and the graphical user interface can be customized. For example, the graphical user interface can include one or more customizable graphical representations. In some examples, the system can generate the customizable graphical representations using the holistic build data. For example, each customizable graphical representation of the customizable graphical representations may correspond to or otherwise represent a different subset of the computer network. Additionally or alternatively, each customizable graphical representation may include different levels of detail relating to its corresponding subset of the computer network. The system can generate the customizable graphical representations based on a level of access of a user requesting access to the holistic build data. 
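The aggregation step described above — collapsing multiple partial build sheets per subpart to the latest checkpoint, then combining them into holistic build data — can be sketched as follows. The sheet fields, checkpoint numbering, and hardware categories are hypothetical illustrations.

```python
# Hypothetical sketch of aggregating partial network build sheets into
# holistic build data. Each sheet lists projected hardware demand for
# one subnet; only the latest checkpoint per subnet is kept before
# totals are computed. Field names are illustrative.

def aggregate_build_sheets(sheets):
    """Aggregate build sheets into holistic build data."""
    # Keep only the most recent checkpoint for each subnet, since a
    # single proposed architecture may have several partial sheets
    # saved at different points in the design process.
    latest = {}
    for sheet in sheets:
        key = sheet["subnet"]
        if key not in latest or sheet["checkpoint"] > latest[key]["checkpoint"]:
            latest[key] = sheet
    # Sum projected hardware demand across all subnets.
    totals = {}
    for sheet in latest.values():
        for hw, count in sheet["demand"].items():
            totals[hw] = totals.get(hw, 0) + count
    return {"subnets": sorted(latest), "projected_demand": totals}

holistic = aggregate_build_sheets([
    {"subnet": "10.1.0.0/16", "checkpoint": 1, "demand": {"switch": 2}},
    {"subnet": "10.1.0.0/16", "checkpoint": 2, "demand": {"switch": 4}},
    {"subnet": "10.2.0.0/16", "checkpoint": 1,
     "demand": {"switch": 3, "server": 8}},
])
```

Deduplicating by checkpoint before summing reflects the passage's point that a single proposed architecture may have multiple stored build sheets, only the most recent of which should contribute to the holistic totals.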
For example, a first user may request access to the graphical user interface and may be associated with a first level of access, and a second user may request access to the graphical user interface and may be associated with a second level of access. The system can generate, for the first user, the graphical user interface with a first graphical representation based on the first level of access, and the system can generate, for the second user, the graphical user interface with a second graphical representation based on the second level of access. In some examples, the first graphical representation can be different than the second graphical representation. The graphical representations in the graphical user interface can be used to control a hardware acquisition process for obtaining computer hardware for the computer network so as to satisfy the projected computing demands of the computer network. For example, the graphical user interface can include a graphical representation of holistic build data based on network build sheets of the computer network. The network build sheets can correspond to proposed network architectures that indicate respective projected computing demand for respective subsets of the computer network. Based on the projected computing demand, an entity can determine whether or how much (and of what type of) computing hardware to acquire to meet the projected computing demand. In some examples, the system can automatically control the hardware acquisition process or generate support tickets for acquiring the necessary computing hardware. These illustrative examples are given to introduce the reader to the general subject matter discussed here and are not intended to limit the scope of the disclosed concepts. In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of various implementations and examples. 
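The access-level-dependent customization described above can be sketched as filtering the holistic build data before rendering. The numeric access levels, field names, and demand figures below are hypothetical, not the system's actual access model.

```python
# Hypothetical sketch of tailoring the graphical representation to a
# user's access level: a lower level sees only aggregate totals, while
# a higher level also sees the per-subnet breakdown. Illustrative only.

HOLISTIC_BUILD_DATA = {
    "projected_demand": {"switch": 7, "server": 8},
    "per_subnet": {
        "10.1.0.0/16": {"switch": 4},
        "10.2.0.0/16": {"switch": 3, "server": 8},
    },
}

def build_view(access_level):
    """Return the portion of holistic build data visible at this level."""
    # Every level sees the network-wide projected demand totals.
    view = {"projected_demand": HOLISTIC_BUILD_DATA["projected_demand"]}
    # Only higher access levels (e.g. system architects) also see the
    # per-subnet breakdown used to drill into individual designs.
    if access_level >= 2:
        view["per_subnet"] = HOLISTIC_BUILD_DATA["per_subnet"]
    return view

manager_view = build_view(access_level=1)
architect_view = build_view(access_level=2)
```

Both users query the same underlying holistic build data; only the projection differs, which matches the passage's description of two users receiving different graphical representations of the same data.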
Various implementations may be practiced without these specific details. The figures and description are not intended to be restrictive. FIG.1is a schematic of a network environment100in which network build sheets can be generated and aggregated according to one aspect of the present disclosure. Included in the network environment100are a computing device110and system architects130a-d. The network environment100may correspond to a Wide Area Network (“WAN”) environment, such as the Internet, through which the system architects130, for example via individual computing devices, may communicate with the computing device110via web browsers or client-side applications, to establish communication sessions, request and receive various computing resources, and access other features of applications or services such as a network design tool. In some examples, the computing devices used by the system architects130may include or otherwise provide the network design tool. The system architects130need not be formally trained system architects and can include any users that facilitate construction of a proposed network architecture. The computing device110may be or otherwise include any type of computing device such as a personal computer, tablet, mobile device, or server that can include, for example, a rack server, a tower server, a miniature server, a blade server, a mini rack server, a mobile server, an ultra-dense server, a super server, etc. The computing device110may include various hardware components such as, for example, a motherboard, processing units, memory systems, hard drives, network interfaces, power supplies, etc. In examples in which the computing device110is or otherwise includes a server, the computing device110may include one or more server farms, clusters, or any other appropriate arrangement or combination of computer servers. 
Additionally, the computing device 110 may act according to stored instructions located in a memory subsystem of the computing device 110 and may execute an operating system or other suitable applications. In some examples, the computing device 110 may be or otherwise include a cloud-hosted system that exists in a server-less, cloud-based environment. The computing device 110 may implement several different applications and services, and perform additional server-side functionality, including via an aggregation module 114, a user interface module 116, etc. For example, the computing device 110 can include network build sheets 112a-b that can be received from one or more of the system architects 130a-d. The aggregation module 114 can receive the network build sheets 112a-b and can aggregate or otherwise combine the network build sheets 112a-b to generate the holistic build data relating to a computer network. Additionally, the computing device 110 can use the user interface module 116 to provide at least a portion of the holistic build data via a user interface that can include a customizable graphical representation of holistic build data based on the network architectures represented by the network build sheets 112a-b. The system architects 130a-d can use one or more user devices, which can include suitable user devices for accessing web-based resources or application-based resources and can be capable of accessing and establishing communication sessions with the computing device 110 through the communication networks 140, to generate the proposed network architectures. 
In some examples, the user devices of the system architects130a-dcan include or otherwise correspond to mobile devices, including tablet computers, smartphones, and smart watches, which may access the computing device110via a Local Area Network (“LAN”) or Wide Area Network (WAN), as well as mobile telecommunication networks, short-range wireless networks, or various other communication network types such as cable or satellite networks. Although certain examples herein are described in terms of mobile devices, in other examples, the system architects130a-dmay additionally or alternatively use other mobile or non-mobile computing devices, such as desktop computers, laptop computers, and the like, capable of accessing the computing device110via the communications network140and executing the network design tool for generating the network build sheets112a-b. Although certain components and entities are illustrated inFIG.1, other suitable entities and suitable, compatible, network hardware components and network architecture designs may be implemented in various embodiments to support providing the holistic build data via the customizable user interface. The communication network140, or any suitable alternative, may be any type of network that can support data communications using any of a variety of commercially-available protocols, including, without limitation, TCP/IP (transmission control protocol/Internet protocol), SNA (systems network architecture), IPX (Internet packet exchange), Secure Sockets Layer (SSL) or Transport Layer Security (TLS) protocols, Hyper Text Transfer Protocol (HTTP) and Secure Hyper Text Transfer Protocol (HTTPS), Bluetooth®, Near Field Communication (NFC), and the like. Merely by way of example, the network(s) connecting the user devices of the system architects130a-dand the computing device110inFIG.1may be local area networks (LANs), such as one based on Ethernet, Token-Ring or the like. 
Such network(s) also may be wide-area networks, such as the Internet, or may include financial/banking networks, telecommunication networks such as public switched telephone networks (PSTNs), cellular or other wireless networks, satellite networks, television/cable networks, or virtual networks such as an intranet or an extranet. Infrared and wireless networks (e.g., using the Institute of Electrical and Electronics Engineers (IEEE) 802.11 protocol suite or other wireless protocols) also may be included in these communication networks. FIG. 2 is a block diagram of an example of a computing device 110 configured to aggregate network build sheets 112 and provide holistic build data 214 via a customizable user interface 216 according to one example of the present disclosure. The computing device 110 may be a network device and may include a processor 202, a bus 204, a communications interface 206, a memory 208, and other suitable components. In some examples, the components illustrated in FIG. 2 may be integrated into a single structure. For example, the components can be within a single housing. In other examples, the components illustrated in FIG. 2 can be distributed, for example in separate housings and in electrical communication with each other. The processor 202 may execute one or more operations for implementing various examples and embodiments described herein. The processor 202 can execute instructions stored in the memory 208 to perform the operations. The processor 202 can include one processing device or multiple processing devices. Non-limiting examples of the processor 202 include a Field-Programmable Gate Array (“FPGA”), an application-specific integrated circuit (“ASIC”), a microprocessor, etc. The processor 202 may be communicatively coupled to the memory 208 via the bus 204. The memory 208 may be non-volatile and may include any type of memory device that retains stored information when powered off. 
Non-limiting examples of the memory208include electrically erasable and programmable read-only memory (“EEPROM”), flash memory, or any other type of non-volatile memory. In some examples, at least some of the memory208may include a medium from which the processor202can read instructions. A computer-readable medium may include electronic, optical, magnetic, or other storage devices capable of providing the processor202with computer-readable instructions or other program code. Non-limiting examples of a computer-readable medium include magnetic disk(s), memory chip(s), ROM, random-access memory (“RAM”), an ASIC, a configured processor, optical storage, or any other medium from which a computer processor may read instructions. The instructions may include processor-specific instructions generated by a compiler or an interpreter from code written in any suitable computer-programming language, including, for example, C, C++, C#, Java, Perl, Python, etc. The communications interface206may interface other network devices or network-capable devices to analyze and receive information related to the network build sheets112, the holistic build data214, and the like. Information received from the communications interface206may be sent to the memory208via the bus204. The memory208can store any information received from the communications interface206. The memory208may include program code for receiving the network build sheets112and aggregating the network build sheets112for generating the holistic build data214. The program code may cause the computing device110, or any suitable component thereof, to generate a user interface216based on a request for accessing the holistic build data214. The computing device110, or any suitable component thereof, may determine a level of access associated with the request, and the computing device110can customize the user interface216, or any elements thereon, based on the determined level of access. 
The memory208may additionally include program code for a data store module210, a control access module212, and any other suitable modules for performing the operations described herein. In some examples, the user interface216may be a customizable user interface that may be presented to a user on a user device for providing at least a portion of the holistic build data214. In some examples, providing at least a portion of the holistic build data214may facilitate decisions about acquiring computing hardware for the computer network. The data store module210may store information relating to a user account (e.g., username and password, security information, etc.) for a number of users and associated client devices, including originating IP addresses of login attempts, browser settings of login attempts, etc. In some examples, the data store module210may store levels of access associated with the user account for the number of users. For example, the data store module210may store indications of a level of access to the holistic build data214for respective users. The control access module212may include or be communicatively coupled to an authentication service and may validate whether a user access attempt to the holistic build data214or the user interface216has been successfully authenticated in response to receiving a request for access. In some examples, the control access module212may determine the level of access associated with the request based on data included in or otherwise received from the data store module210. The results from the control access module212may be used by the computing device110to generate the user interface216. For example, the computing device110can generate the user interface216and can customize the graphical representation of the holistic build data214based on the determined level of access of the user making the request. 
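The interplay between the stored account data and the access-control check described above can be sketched as follows. This is only an illustrative assumption of how such a lookup might work; the account table, level values, and function name are not from the patent.

```python
# Hypothetical stand-in for the data store module 210: per-user credentials
# and an associated level of access to the holistic build data.
DATA_STORE = {
    # username: (password, access_level)
    "planner": ("s3cret", 1),
    "admin": ("hunter2", 2),
}

def authenticate_and_get_level(username, password):
    """Validate a user access attempt (control access module 212 role) and
    return the stored level of access, or None if authentication fails."""
    record = DATA_STORE.get(username)
    if record is None or record[0] != password:
        return None  # authentication failed; no access granted
    return record[1]  # level of access used to customize the user interface
```

The returned level could then drive which graphical representation the user interface module generates for the requesting user.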
FIG. 3 is a data flow diagram 300 of a process to aggregate network build sheets 112 and provide holistic build data 214 according to one example of the present disclosure. As illustrated, the data flow diagram 300 includes a first flow 302 and a second flow 304 that may be related. The first flow 302 involves the solution or network architecture for a computer network, and the second flow 304 involves capacity management for the computer network. The first flow 302 can begin with a planning phase 306. The planning phase 306 may involve one or more system architects 130. The system architects 130 can generate one or more proposed network architectures 305. As illustrated, four proposed network architectures 305a-d are included in the planning phase 306, but other suitable numbers of proposed network architectures 305 (fewer or more than four) are possible. Each proposed network architecture 305 may be generated by the system architect 130, who may input the proposed network architecture 305 into a network design tool that can generate the network build sheet 112 (e.g., via the first flow 302). The proposed network architecture 305 may indicate an anticipated computing demand for a subset (e.g., a subpart or subnet) of the computer network. Upon the system architect 130 saving a draft of a proposed network architecture 305, the system architect 130 may be prompted, for example via the system described herein, to indicate whether the draft is complete (e.g., in the sense that it includes all of the required hardware components, even if the draft is not “final”). In some examples, the system architect 130, at block 308, can indicate either that the draft is complete (even if the draft is not “final”) or that the draft is incomplete. In some examples in which the system architect 130 indicates that the draft is incomplete, the computing device 110 may not generate a corresponding build sheet. 
Alternatively, the computing device110or other suitable computing entity may generate, at block310, a partial build sheet corresponding to the incomplete draft. In some examples in which the system architect130indicates that the draft is complete, the computing device110or other suitable computing entity may generate, at block310, a network build sheet112based on the proposed network architecture305. At block312, the computing device110or other suitable computing entity can upload the saved network build sheet112to a database314, which may be configured to store one or more drafts of network build sheets112. Any number and combination of computing devices can save network build sheets112to the database314. In some examples, multiple network build sheets corresponding to multiple proposed network architectures associated with an overall computer network can be stored in the database314. In some examples, the computing device110can receive the network build sheets112from the database314and aggregate them to generate the holistic build data214. Accordingly, the first flow302may involve a flow for planning or otherwise projecting a network architecture and computing demand for the computer network. The second flow304may involve providing the holistic build data214for facilitating decisions about hardware of the computer network for meeting demand of the computer network. For example, at block316, a user can request access to a user interface216that is configured to display or otherwise provide at least a portion of the holistic build data214. At block318, the computing device110can determine a level of access associated with the user. The level of access may be used to determine which portion or portions of the holistic build data214to display on the user interface216. Additionally or alternatively, the level of access may be used to determine an amount of detail to provide for the provided portion of the holistic build data214. 
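The draft-handling and upload steps of the first flow (blocks 308-312) can be sketched as below. The data shapes, field names, and demand arithmetic are assumptions for illustration, not the patent's actual format.

```python
# Hypothetical stand-in for database 314, which stores saved build sheets.
database = []

def generate_build_sheet(proposed_architecture, draft_complete):
    """Generate a (possibly partial) build sheet from a proposed network
    architecture and upload it to the database (blocks 310 and 312)."""
    sheet = {
        "architecture": proposed_architecture,
        "partial": not draft_complete,  # incomplete draft -> partial build sheet
        "projected_demand": sum(d["demand"] for d in proposed_architecture),
    }
    database.append(sheet)  # block 312: upload the saved sheet
    return sheet

# A toy proposed architecture with two devices and their anticipated demand.
arch = [{"device": "server-1", "demand": 10}, {"device": "server-2", "demand": 5}]
sheet = generate_build_sheet(arch, draft_complete=True)
```

Multiple such sheets, saved by any number of computing devices, would then be available in the database for aggregation.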
At block320, the computing device110generates the user interface216with a graphical representation of the holistic build data214. The computing device110can generate the graphical representation based on the determined level of access. The information included in the graphical representation can be used by an entity, such as the computing device110or a user, to control a hardware acquisition process. In some examples, the hardware acquisition process may or may not be automated. The hardware acquisition process can involve obtaining the computer hardware described in the holistic build data214to meet a projected or anticipated computing demand for the computer network. For example, the user can determine, from the graphical representation, an amount of projected or anticipated computing demand for the computer network, and the user can request or initiate a request for acquiring computing hardware for meeting the projected or anticipated computing demand with respect to the computer network. In some examples, the user interface216can include an amount of computer hardware for meeting the projected computing demand. In some examples, the computing device110automatically determines the amount of projected or anticipated computing demand for the computer network and can automatically request or initiate the request for acquiring computing hardware for meeting the computing demand. FIG.4is a flowchart of a process400for aggregating network build sheets112and providing holistic build data214via a customizable user interface216according to one example of the present disclosure. At block402, the computing device110receives a set of network build sheets112based on a set of proposed network architectures of a computer network. The set of network build sheets112may be generated, either directly or indirectly, by one or more system architects130. 
For example, a system architect 130 can input one or more proposed network architectures into a network design tool that can be used to generate one or more network build sheets 112 based on the one or more proposed network architectures. In some examples, a proposed network architecture can represent a subset (e.g., of the hardware components) of the computer network. The proposed network architecture can indicate an anticipated computing demand associated with that subset of the computer network. At block 404, the computing device 110 aggregates the network build sheets 112. Any suitable number of network build sheets 112 (e.g., one, two, three, four, five, or more) can be received by the computing device 110. The computing device 110 can aggregate the network build sheets 112 to generate the holistic build data 214. In some examples, the holistic build data 214 may represent a union of the subsets of the computer network represented by the aggregated network build sheets 112. Aggregating the network build sheets 112 can involve combining the network build sheets 112 or otherwise suitably generating the holistic build data 214 of the computer network based on data included in the network build sheets 112. At block 406, the computing device 110 receives a request to access a user interface 216 that is configured to provide at least a portion of the holistic build data 214. The request may be submitted by a user or other suitable entity for viewing or otherwise suitably using the holistic build data 214. In some examples, the computing device 110 receives authentication data from the user in addition to the request. The authentication data may indicate a type of user or other identifying information about the user that can be used to provide the holistic build data 214 via the user interface 216. 
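The aggregation at block 404, where the holistic build data is treated as a union of the subsets described by the individual build sheets, can be sketched as follows. The per-sheet fields (a device set and a projected demand figure) are illustrative assumptions.

```python
def aggregate(build_sheets):
    """Combine network build sheets into holistic build data: a union of the
    device subsets plus the summed projected demand across sheets."""
    holistic = {"devices": set(), "projected_demand": 0}
    for sheet in build_sheets:
        holistic["devices"] |= set(sheet["devices"])  # union of subsets
        holistic["projected_demand"] += sheet["projected_demand"]
    return holistic

# Two build sheets covering overlapping subsets of the computer network.
sheets = [
    {"devices": {"r1", "r2"}, "projected_demand": 10},
    {"devices": {"r2", "r3"}, "projected_demand": 7},
]
holistic = aggregate(sheets)
```

Note that the device union avoids double-counting hardware that appears in more than one proposed subset, while the demand figures are simply summed; a real system might reconcile overlapping demand differently.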
In some examples, the request can include a request to view a portion of the holistic build data214or other suitable information for requesting the holistic build data214via the user interface216. At block408, the computing device110determines an access level associated with the request received at the block406. The computing device110may determine the access level based on the request, based on the user associated with the request, based on other suitable information relating to the request or the user, or based on any suitable combination thereof. For example, the computing device110can receive authentication data with the request, and the computing device110can use the authentication data to retrieve, for example from the data store module210or via the control access module212, information about the request, the user, or a combination thereof. The retrieved information about the request or the user can be used to determine the access level associated with the request or the user. At block410, the computing device110generates the user interface216that includes a customized graphical representation of the holistic build data214. In some examples, the graphical representation represents a portion of the holistic build data214. The graphical representation can be customized, for example, based on the determined access level. For example, the computing device110can generate a first graphical representation of the holistic build data214based on a first access level, and the computing device110can generate a second graphical representation (e.g., different than the first graphical representation) of the holistic build data214based on a second access level. The first graphical representation and the second graphical representation may display different portions of the holistic build data214or different information of the same or similar portion of the holistic build data214. 
In some examples, the first graphical representation can exclude information in the second graphical representation, and vice-versa. In some examples, the graphical representation indicates, or can otherwise be used to determine, anticipated computing demand for the computer network. The computing device110can determine or otherwise generate a demand forecast with respect to the computer network. The demand forecast can include a projected computing demand for one or more subsets of the computer network at a time in the future. For example, the demand forecast can include the projected computing demand one, two, three, four, or more years in the future. Additionally, the computing device110can determine the demand forecast that can include computing demand of the computer network, or any subset thereof, at different intervals between a present time and the time in the future. For example, if the projected computing demand is for two years in the future, the computing device110may generate a demand forecast that includes projected computing demand for the computer network, or any subset thereof, every three months or any other suitable time interval between the present time and the time in the future. In some examples, the time intervals may be irregularly spaced apart. Additionally, the demand forecast can indicate a trend of computing demand from a present computing demand at the present time to the projected computing demand at the time in the future. The customized graphical representation of the holistic build data214may be used to automatically control a computing-hardware-acquisition operation. For example, a user viewing the graphical representation of the holistic build data214may determine a type of computer hardware, an amount of computer hardware, and other suitable determinations for meeting the projected computing demand based on the graphical representation. 
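The interval-based demand forecast described above can be sketched as below, using the example of a two-year horizon sampled every three months. The linear trend from present demand to projected demand is an assumed shape for illustration; the patent does not specify the trend model.

```python
def demand_forecast(present, projected, horizon_months, interval_months):
    """Return projected computing demand at each interval from the present
    time to the time in the future, assuming a linear trend."""
    steps = horizon_months // interval_months
    return [present + (projected - present) * i / steps for i in range(steps + 1)]

# Two years in the future, at three-month intervals, as in the example above.
forecast = demand_forecast(present=100.0, projected=180.0,
                           horizon_months=24, interval_months=3)
```

The resulting list has one entry per interval (including the present), which a user interface could plot to show the trend from present to projected demand.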
In some examples, the computing device110may receive, as input, the graphical representation and may output a command or other suitable output for initiating or otherwise automatically controlling an operation for acquiring the type of computer hardware, the amount of computer hardware, and the like for meeting the projected computing demand of the computer network. In some examples, the computing device110can implement machine-learning techniques to perform one or more of the operations of the process400or other suitable operations. For example, the computing device110can use a trained machine-learning model to determine the projected computing demand for the computer network, or any suitable component thereof, at the time in the future. The computing device110can input data that includes the network build sheets112, the holistic build data214, and the like into the trained machine-learning model. The computing device110can receive, as an output from the trained machine-learning model, the projected computing demand for at least a subset of the computer network at the time in the future. Additionally, the computing device110can train the machine-learning model. For example, the computing device110can generate, retrieve, or otherwise receive a training dataset that includes historical network build sheets and historical computing demand corresponding to the historical network build sheets. The computing device110can apply the training dataset to the machine-learning model to train the machine-learning model to output projections of computing demand for the computer network. In some examples, the computing device110can use the trained machine-learning model to control or otherwise initiate a real-world operation. For example, the computing device110can use the trained machine-learning model to determine the type or amount of computer hardware to acquire for at least a subset of the computer network for meeting a projected computing demand at the time in the future. 
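The training step described above, fitting a model on historical build sheets and their corresponding historical demand, can be sketched with a simple closed-form least-squares fit. The single feature (total requested capacity per sheet) and the toy numbers are fabricated for illustration; the patent does not prescribe a particular machine-learning model.

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Training dataset: historical build-sheet capacity -> observed computing
# demand (toy values standing in for historical network build sheets).
capacity = [10.0, 20.0, 30.0, 40.0]
demand = [15.0, 25.0, 35.0, 45.0]
slope, intercept = fit_linear(capacity, demand)

# Apply the trained model to new holistic build data to project demand.
projected_demand = slope * 50.0 + intercept
```

The projected value could then feed the hardware-acquisition decision, e.g., determining the amount of computer hardware needed at the time in the future.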
Additionally, the computing device110can use the machine-learning model to initiate a computer hardware-acquisition operation using the determined amount of computer hardware for satisfying the projected computing demand. FIG.5is an example of a design tool500usable to generate network build sheets112for a computer network according to one example of the present disclosure. As illustrated, the design tool500includes a graphical representation of computing devices corresponding to at least a subset of a computer network. The graphical representation can include a set of visual objects that represent different computing devices either included in the computer network or proposed to be included in the computer network. For example, and as illustrated, the graphical representations include visual objects502a-c. The visual object502amay represent a first computing device proposed to be in the computer network, the visual object502bmay represent a second computing device proposed to be in the computer network, and the visual object502cmay represent a third computing device proposed to be in the computer network. Each of the visual objects502a-cmay include data relevant to the respective computing device. For example, each of the visual objects502a-cmay include (i) a visual indicator of whether the computing device is existing, requested, or proposed, and (ii) details relating to the computing device such as an amount of existing or projected computing memory, existing or projected processing power, existing or projected usage, and the like. The visual objects502a-ccan include or otherwise illustrate any other suitable information about the respective computing devices of the computer network. A system architect or other suitable entity can use the design tool500to drag-and-drop visual objects representing the computing devices on a canvas and arrange them into a network architecture design. 
The design tool500can then automatically generate one or more network build sheets112for the network architecture design based on the types, locations, and interconnections between the visual objects. For example, the system architect can generate a proposed network architecture for at least a subset of the computer network using the design tool500. The system architect can design the proposed network architecture by positioning and interconnecting, on the canvas, one or more visual objects502representing proposed (or existing) computing devices for the subset of the computer network. The system architect can use the design tool500to create the proposed network architecture, and, upon receiving an input from the system architect indicating that the proposed network architecture is substantially complete (e.g., encompasses the desired subset of the computer network), the design tool500or any component of the computing device110can automatically generate the network build sheet112. In some examples, the network build sheet112can be a processed version of the proposed network architecture. For example, the network build sheet112can include the proposed network architecture and other data and metadata (e.g., projected capacity, usage, etc.) for the subset of the computer network. The additional data and metadata may be derived from property values associated with the visual objects in the design tool500. FIG.6is an example of a customizable user interface600for providing holistic build data about a computer network according to one example of the present disclosure. As illustrated, the user interface600provides holistic build data for computer network602. The computer network602includes subnetwork A604and subnetwork B605, but other numbers of subnetworks for the computer network602are possible. 
Additionally, the subnetwork A604and the subnetwork B605are illustrated with four different computing devices606each, though the subnetworks604-605may include any suitable amounts (and different amounts) of computing devices606. And, each computing device606illustrated in the user interface600includes a respective processing power608, memory availability610, and usage612. In some examples, the respective processing power608, memory availability610, and usage612may be included in the holistic build data and may be determined based on aggregating the network build sheets. For example, the respective usage612may be a projected usage at the time in the future for the respective computing device606, etc. In some examples, the user interface600may be generated for a first access level and separately for a second access level. The first access level may include view615, which includes only subnetwork B605and excludes subnetwork A604(i.e., the first access level may not have access to the subnetwork A604). Additionally, the second access level may include view620, which includes all of the holistic build data displayed on the user interface600. In a non-depicted example, a third access level may include a third view that is different from the view615and the view620. For example, the third view may provide access to all of the subnetworks but may provide fewer details. As an example, the third view may provide the subnetwork A604and the subnetwork B605but may exclude one or more of the respective processing power608, memory availability610, or usage612. Different access levels may be assigned different viewing privileges, so that the view presented can be tailored to each access level. This may allow different entities in an organization to view the information most relevant to them, and may hide certain information from unauthorized entities (e.g., for security or privacy reasons). 
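The three views described above can be sketched as an access-level filter over the holistic build data: one level sees only subnetwork B, another sees everything, and a third sees all subnetworks with the per-device details stripped. The level-to-view mapping and the data shape are assumptions for illustration.

```python
def build_view(holistic, access_level):
    """Tailor the displayed portion of the holistic build data to a level
    of access, as with views 615 and 620 and the third view above."""
    if access_level == 1:
        # Like view 615: only subnetwork B, excluding subnetwork A.
        return {k: v for k, v in holistic.items() if k == "subnetwork_b"}
    if access_level == 3:
        # Third view: all subnetworks, but with details removed.
        return {k: [{"device": d["device"]} for d in v]
                for k, v in holistic.items()}
    return holistic  # like view 620: full detail

holistic = {
    "subnetwork_a": [{"device": "a1", "usage": 0.7}],
    "subnetwork_b": [{"device": "b1", "usage": 0.4}],
}
view_1 = build_view(holistic, 1)
view_3 = build_view(holistic, 3)
```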
Although the subject matter has been described in language specific to structural features or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed only for the purpose of illustration and description and they are not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Modifications, adaptations, and uses thereof will be apparent to those skilled in the art without departing from the scope of the disclosure. For instance, any examples described herein can be combined with any other examples.
11943113
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS For a clearer illustration of the objectives, technical features and effects of the present disclosure, a clear and complete description of the embodiments of the present disclosure will be set forth with reference to the drawings. Obviously, the described embodiments are only a part, rather than all, of the embodiments of the present disclosure. All other embodiments derived by persons skilled in the art from the embodiments of the present disclosure without making inventive efforts shall fall within the scope of the present disclosure. Reference throughout this specification to “one embodiment,” “an embodiment,” “an example,” “some embodiments,” “some examples,” or similar language means that a particular feature, structure, or characteristic described is included in at least one embodiment or example. Features, structures, elements, or characteristics described in connection with one or some embodiments are also applicable to other embodiments, unless expressly specified otherwise. The terms “module,” “sub-module,” “circuit,” “sub-circuit,” “circuitry,” “sub-circuitry,” “unit,” or “sub-unit” may include memory (shared, dedicated, or group) that stores code or instructions that can be executed by one or more processors. A module may include one or more circuits with or without stored code or instructions. The module or circuit may include one or more components that are directly or indirectly connected. These components may or may not be physically attached to, or located adjacent to, one another. In one aspect, the present disclosure provides a method for constructing a synchronous network, which specifically includes the following embodiments. FIG.1illustrates a flow diagram of a method for constructing a synchronous network according to an embodiment of the present disclosure. 
As illustrated in FIG. 1, when being applied to a shelf label system that includes a server, base stations and an electronic shelf label, the method specifically includes:

step S101: establishing an initial network topological structure of base stations in a store according to ranging results among all the base stations.

It should be noted that, as illustrated in FIG. 2, in this embodiment, the shelf label system in each store includes a server, a plurality of base stations and a plurality of electronic shelf labels. The server transmits trigger information or a control instruction to the electronic shelf labels through the base stations, and each base station periodically transmits a synchronization signal and a data signal that includes the trigger information or the control instruction. In this embodiment, the server controls all the base stations in the store to transmit ranging signals in turn, and obtains a ranging result from each base station to the other base stations according to a strength of a feedback signal transmitted by the other base stations and received by the base station; the server then establishes an initial network topological structure of the base stations according to the ranging results among all the base stations, as illustrated in FIG. 3. The ranging result includes, but is not limited to, no communication, a long distance and a short distance. In the initial network topological structure, a length of a line segment between the base stations may be expressed as the long distance or the short distance. In FIG. 3, AP1 represents base station 1, AP2 represents base station 2, . . . , and AP11 represents base station 11.

step S102: performing topology depth traversal by taking each base station as a root node according to the initial network topological structure, to obtain a topology depth corresponding to each base station. 
In this embodiment, performing topology depth traversal by taking each base station as the root node according to the initial network topological structure to obtain the topology depth corresponding to each base station specifically includes: taking a current base station as a root node, and taking the base stations directly connected to the current base station as first-hierarchy depth base stations; taking, when there is a lower-hierarchy base station connected to a first-hierarchy depth base station, the lower-hierarchy base station as a second-hierarchy depth base station, and so on, until an Mth-hierarchy depth base station is traversed; and setting, when there is no base station connected to the Mth-hierarchy depth base station, the topological depth of the current base station as M, where M may be a positive integer greater than 1. For example, as illustrated in FIG. 3, when AP1 is the root node, AP2, AP3 and AP4 are the first-hierarchy depth base stations, AP5, AP6, AP7, AP8 and AP9 are the second-hierarchy depth base stations, and AP10 and AP11 are the third-hierarchy depth base stations, so the topological depth of AP1 is 3. When AP2 is the root node, AP5, AP6, AP4 and AP1 are the first-hierarchy depth base stations, AP10, AP7 and AP3 are the second-hierarchy depth base stations, and AP11, AP8 and AP9 are the third-hierarchy depth base stations, so the topological depth of AP2 is 3. By analogy, when AP11 is the root node, AP7 is the first-hierarchy depth base station, AP4 and AP3 are the second-hierarchy depth base stations, AP6, AP1, AP8 and AP9 are the third-hierarchy depth base stations, AP2 is the fourth-hierarchy depth base station, AP5 is the fifth-hierarchy depth base station, and AP10 is the sixth-hierarchy depth base station, so the topological depth of AP11 is 6.
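The hierarchy-by-hierarchy traversal described above is an ordinary breadth-first search from each root. A minimal Python sketch follows; the graph encoding, the function name, and the toy four-station topology are illustrative assumptions, not part of the disclosure:

```python
from collections import deque

def topology_depth(graph, root):
    """Breadth-first traversal from `root`: direct neighbors form the
    first-hierarchy depth base stations, their unvisited neighbors the
    second hierarchy, and so on. A station already placed at a shallower
    (or equal) hierarchy is never counted again at a deeper one, which
    matches the target-depth-base-station rule of the disclosure."""
    level = {root: 0}
    queue = deque([root])
    deepest = 0
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, ()):
            if neighbor not in level:  # skip stations already placed
                level[neighbor] = level[node] + 1
                deepest = max(deepest, level[neighbor])
                queue.append(neighbor)
    return deepest

# Hypothetical diamond of four stations: depth from any corner is 2.
toy = {"AP1": ["AP2", "AP3"], "AP2": ["AP1", "AP4"],
       "AP3": ["AP1", "AP4"], "AP4": ["AP2", "AP3"]}
```

Running the traversal once per station, as in step S102, then yields the per-root topological depths.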
It should be noted that when there is a target depth base station belonging to an Xth-hierarchy depth base station or an (X−1)th-hierarchy depth base station among the lower-hierarchy base stations connected to the Xth-hierarchy depth base station, the target depth base station is not counted as an (X+1)th-hierarchy depth base station, where X may be a positive integer greater than 1. For example, when AP1 is the root node, AP2, AP3 and AP4 are the first-hierarchy depth base stations, and AP4 is among the lower-hierarchy base stations connected to AP2, then AP4 should, in principle, belong to the second-hierarchy depth base stations together with AP5 and AP6. However, AP4 already belongs to the first-hierarchy depth base stations, so AP4 is not included in the second-hierarchy depth base stations. In this embodiment, AP4 is the target depth base station mentioned above.

Step S103: taking the base station with a shallowest topological depth as a master base station, and adjusting the initial network topological structure according to the ranging result from each base station to its superior base station to obtain a target network topological structure.

In this embodiment, each base station in the target network topological structure is only connected to one superior base station, i.e., each base station in the target network topological structure has one single superior base station, and the base stations within a same hierarchy are not connected to each other. The superior base station may be a base station that is capable of receiving a ranging signal during a ranging process of a base station in a topological structure. It should be noted that the shallowest topological depth in this embodiment means that the topological depth is the smallest. Assuming that the topological depth of AP1 is 3, the topological depth of AP2 is 4, the topological depth of AP3 is 4, . . .
, and the topological depth of AP11 is 6, then the topological depth of AP1 is the smallest, so the base station AP1 is taken as the master base station. In order to prevent communication interference between the base stations, it is necessary to adjust the initial network topological structure. Taking the base station with the shallowest topological depth as the master base station and the shortest distance from each base station to its superior base station as the adjustment principle, when one base station has a plurality of superior base stations, the closest superior base station is selected as the target superior base station, and the base stations within the same hierarchy are not connected to each other, thereby obtaining the target network topological structure as illustrated in FIG. 4.

Step S104: allocating a transceiving time slot to each base station according to the target network topological structure to obtain a transceiving time slot periodic table, and making all the base stations transmit and receive synchronous signals according to the transceiving time slot periodic table to obtain a synchronous network of the shelf label system.
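The adjustment principle of step S103 — shallowest depth wins the master role, every other station keeps only its closest superior — can be sketched as follows. The data shapes and names here are assumptions for illustration:

```python
def build_target_topology(depths, superiors):
    """depths: {station: topological depth from step S102}.
    superiors: {station: [(candidate superior, ranged distance), ...]}.
    The station with the shallowest depth becomes the master; every other
    station keeps only its closest superior, so each station ends up with
    one single superior base station, as the target topology requires."""
    master = min(depths, key=depths.get)
    parent = {}
    for station, candidates in superiors.items():
        if station == master or not candidates:
            continue
        # Closest candidate (smallest ranged distance) is the target superior.
        parent[station] = min(candidates, key=lambda c: c[1])[0]
    return master, parent

master, parent = build_target_topology(
    {"AP1": 3, "AP2": 4, "AP4": 4},
    {"AP2": [("AP1", 2.0), ("AP4", 5.0)], "AP4": [("AP1", 3.0)]},
)
```

In this toy input, AP1 has the smallest depth and therefore becomes the master, while AP2 drops its farther candidate AP4 in favor of AP1.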
In this embodiment, allocating the transceiving time slot to each base station according to the target network topological structure to obtain the transceiving time slot periodic table includes: taking a lower-hierarchy base station directly connected to the master base station as a first-hierarchy auxiliary base station, taking a lower-hierarchy base station connected to the first-hierarchy auxiliary base station as a second-hierarchy auxiliary base station, and so on, to obtain an Nth-hierarchy auxiliary base station; and taking the master base station as a time slot starting point and allocating transceiving time slots to the N hierarchy auxiliary base stations from the first-hierarchy auxiliary base station to the Nth-hierarchy auxiliary base station in turn to obtain the transceiving time slot periodic table, where N may be a positive integer greater than 1. It should be noted that when the time slots are allocated to the base stations according to the target network topological structure, the master base station is taken as the time slot starting point, then the time slots are allocated to each base station among the first-hierarchy auxiliary base stations in turn, thereafter the time slots are allocated to the second-hierarchy auxiliary base stations, and so on, until the time slots are allocated to the last-hierarchy auxiliary base stations, thus obtaining the transceiving time slot periodic table as illustrated inFIG.5. Therefore, all the base stations in the target network topological structure can transmit and receive the synchronous signals according to the transceiving time slot periodic table, to synchronize the base stations in the electronic shelf label system to obtain the synchronous network of the electronic shelf label system. InFIG.5, T1indicates a transmission period of the synchronization signal, and T2indicates a transmission duration of the synchronization signal. 
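The slot allocation of step S104 reduces to a breadth-first numbering of the target topology tree: master first, then each hierarchy of auxiliary base stations in turn. A sketch under assumed names:

```python
from collections import deque

def allocate_slots(children, master):
    """children: {station: [direct lower-hierarchy stations]} of the target
    topology. The master gets the starting slot; each hierarchy of auxiliary
    base stations is then numbered in turn, yielding one period of the
    transceiving time slot periodic table."""
    slots = {}
    queue = deque([master])
    next_slot = 0
    while queue:
        station = queue.popleft()
        slots[station] = next_slot
        next_slot += 1
        queue.extend(children.get(station, ()))  # next hierarchy follows
    return slots

table = allocate_slots({"AP1": ["AP2", "AP4"], "AP2": ["AP5"]}, "AP1")
```

With this toy tree the order is AP1, AP2, AP4, AP5: all first-hierarchy auxiliaries are numbered before any second-hierarchy one.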
Compared with the prior art, this embodiment has the following advantageous effects: According to this embodiment, an initial network topological structure of base stations is established through ranging results among all the base stations. Next, the initial network topological structure is adjusted according to topological depths and ranging results to obtain a target network topological structure. Finally, a transceiving time slot is allocated to each base station according to the target network topological structure, so that all the base stations transmit and receive synchronization signals according to the allocated transceiving time slots, thereby obtaining a synchronous network of the shelf label system. This embodiment solves the problem that since the existing asynchronous cellular network cannot uniformly schedule the time slots for a plurality of base stations, it is easy to cause a communication failure between the base station and the shelf label due to a time slot conflict and an interference between signals. This embodiment makes full use of time resources to maximize the data throughput of the system communication. This embodiment keeps time synchronization between the plurality of base stations by establishing a synchronous network, and prevents a signal interference by a time-slotted data transmission and reception based on the synchronous network, thereby improving the stability and the data throughput of the electronic shelf label system. 
In another embodiment of the present disclosure, after all the base stations transmit and receive the synchronization signals according to the transceiving time slot periodic table, the method further includes: starting, by a current base station, a ranging mode to obtain a second target base station closest to the current base station, if the current base station does not receive a synchronization signal transmitted by a first target base station within a preset time duration, or if the current base station receives a synchronization signal with an intensity less than a preset threshold, wherein the first target base station is a superior base station of the current base station in the target network topological structure; adjusting the target network topological structure by taking the second target base station as the superior base station of the current base station to obtain a first updated network topological structure, when the first target base station and the second target base station belong to a same hierarchy in the target network topological structure; and transmitting and receiving, by all the base stations, the synchronization signal in the first updated network topological structure according to the transceiving time slot periodic table. It should be noted that, taking AP6inFIG.4as an example, when AP6moves, and/or AP2moves, and/or AP2fails, AP6cannot receive the synchronization signal transmitted by AP2or receives the synchronization signal with a weak intensity. In order not to affect the normal communication of AP6, AP6starts the ranging mode to find the closest AP4, and modifies the superior base station of AP6into AP4. 
Since both AP2 and AP4 belong to the first-hierarchy auxiliary base stations in the target network topological structure, the topological depth of the target network topological structure and the hierarchy of the base station are not changed although the superior base station of AP6 is modified, so it is still possible to transmit and receive the synchronization signals according to the previous transceiving time slot periodic table. Alternatively, when the first target base station and the second target base station do not belong to a same hierarchy in the target network topological structure, the method further includes: adjusting the target network topological structure by taking the second target base station as the superior base station of the current base station, to obtain a second updated network topological structure; re-allocating a transceiving time slot to each base station according to the second updated network topological structure, to obtain an updated transceiving time slot periodic table; and transmitting and receiving, by all the base stations, the synchronization signal in the second updated network topological structure according to the updated transceiving time slot periodic table. It should be noted that, taking AP7 in FIG. 4 as an example, when AP7 moves, and/or AP4 moves, and/or AP4 fails, AP7 cannot receive the synchronization signal transmitted by AP4, or receives the synchronization signal with a weak intensity. In order not to affect the normal communication of AP7, AP7 starts the ranging mode to find the closest station AP6, and changes the superior base station of AP7 to AP6. Since AP4 and AP6 do not belong to the same hierarchy in the target network topological structure, the superior base station of AP7, the topological depth of the target network topological structure and the hierarchy of the base station are all modified.
Therefore, the transceiving time slot periodic table needs to be redistributed based on the modified network topological structure to obtain the updated transceiving time slot periodic table, so that all the base stations in the modified network topological structure can transmit and receive the synchronous signal according to the updated transceiving time slot periodic table. Therefore, in the synchronous network in this embodiment, the base station can select an optimal superior base station for communication after the base station moves by monitoring synchronization signals or ranging signals of a plurality of base stations, thereby solving the problem that the base station cannot communicate normally after moving. Referring toFIG.2, an electronic shelf label1-N is a communication shelf label of a base station1, and also falls within a communication range of a base station2. In fact, the electronic shelf label1-N can also communicate with the base station2. However, in the existing asynchronous cellular network, the electronic shelf label1-N can only receive a data signal of the base station1, and cannot monitor the signal of any other base station, so it is impossible to realize a movement across the base stations and select the optimal base station for communication. In order to solve the above problem, the electronic shelf label may also monitor the synchronization signal transmitted by the base station to determine whether it is necessary to change the connected base station, and the change process is the same as that for the base station to change the superior base station, which will not be repeated here. 
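The two re-parenting cases above — new superior in the same hierarchy (keep the old slot table) versus a different hierarchy (re-allocate) — can be sketched as one decision helper. All names and the sample hierarchy values are assumptions:

```python
def switch_superior(parent, hierarchy, station, new_superior):
    """parent: {station: superior}; hierarchy: {station: hierarchy level}.
    Points `station` at `new_superior` and reports whether the existing
    transceiving time slot periodic table can still be used: True when the
    old and new superiors sit in the same hierarchy, False otherwise (the
    station's hierarchy then changes and slots must be re-allocated)."""
    old_superior = parent[station]
    parent[station] = new_superior
    if hierarchy[old_superior] == hierarchy[new_superior]:
        return True                      # reuse the old slot table
    hierarchy[station] = hierarchy[new_superior] + 1
    return False                         # re-allocate slots afterwards

parent = {"AP6": "AP2", "AP7": "AP4"}
hierarchy = {"AP1": 0, "AP2": 1, "AP4": 1, "AP6": 2, "AP7": 2}
keep_table = switch_superior(parent, hierarchy, "AP6", "AP4")        # AP2 -> AP4, same hierarchy
must_realloc = not switch_superior(parent, hierarchy, "AP7", "AP6")  # AP4 -> AP6, deeper hierarchy
```

The first call mirrors the AP6 example (table kept), the second the AP7 example (table re-allocated, AP7 one hierarchy deeper).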
In another embodiment of the present disclosure, after establishing the initial network topological structure of base stations in the store according to ranging results among all the base stations, the method further includes: when there are at least two independent sub-networks in the initial network topological structure, obtaining a target sub-network topological structure corresponding to each sub-network; and obtaining a transceiving time slot period sub-table corresponding to each sub-network according to the target sub-network topological structure, so that the base station in each sub-network transmits and receives a synchronous signal according to the corresponding transceiving time slot period sub-table to obtain a synchronous network of the shelf label system. It should be noted that in order to further adapt to more complex store scenarios, such as scenarios of multiple floors and multiple separation areas, the synchronous network may be constructed by dividing the network into a plurality of sub-networks, and the specific flow is illustrated in FIG. 6:

Step S201: a server starts a ranging process for all base stations in a store;

Step S202: the server establishes an initial network topological structure of the base stations according to ranging results;

Step S203: determining whether all the base stations in the store are communicable, and performing step S204 if it is determined that all the base stations in the store are communicable, or performing step S205 if it is determined that the base stations in the store are not all communicable;

Step S204: the server traverses a topological depth of each base station as a root node, and selects the base station with a shallowest topological depth as a master base station and the other base stations as auxiliary base stations;

Step S205: regarding each group of mutually communicable base stations as a sub-network, there being L sub-networks in total;

Step S206: the server controls the master base station to transmit a synchronization
signal, and the auxiliary base stations synchronously track a strongest base station according to a topological relationship;

Step S207: the server allocates transceiving time slots to the base stations according to the topological relationship, under an allocation principle that the time slots of adjacent base stations are adjacent to each other;

Step S208: determining whether all the sub-networks have been processed, and performing step S209 if it is determined that all the sub-networks have been processed, or performing step S204 if it is determined that not all the sub-networks have been processed;

Step S209: the base stations periodically transmit synchronization signals, periodically receive synchronization signals from their respective superior base stations and maintain synchronization, according to the allocated transceiving time slots;

Step S210: shelf labels periodically track and monitor the synchronization signals of the base stations, maintain synchronization, and determine whether it is necessary to change the base station for communication.

That is, when there are at least two independent sub-networks in the initial network topological structure, the network topological structure is adjusted and the transceiving time slot periodic table is established for each sub-network according to step S101 to step S104, so that the base stations in each sub-network transmit and receive synchronization signals according to the corresponding transceiving time slot periodic table, thereby taking the network topological structure composed of all the sub-networks as the synchronous network of the shelf label system. In addition, the time slot allocation after the multi-base station synchronization can also be adopted for business data transmission and reception to avoid interference on the data transmission and reception, thereby improving the data throughput of the system communication.
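Splitting the store into independent sub-networks (step S205) is a connected-components computation over the initial topology. A minimal sketch with an assumed adjacency encoding:

```python
def split_subnetworks(graph):
    """graph: {station: [stations it can communicate with]}. Returns the
    list of independent sub-networks (connected components); each one is
    then given its own master base station and transceiving time slot
    period sub-table."""
    seen, subnetworks = set(), []
    for start in graph:
        if start in seen:
            continue
        component, stack = set(), [start]
        while stack:                 # depth-first flood fill of one component
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(graph.get(node, ()))
        seen |= component
        subnetworks.append(component)
    return subnetworks

# Hypothetical two-floor store: AP3 cannot reach the AP1/AP2 floor.
floors = split_subnetworks({"AP1": ["AP2"], "AP2": ["AP1"], "AP3": []})
```

Each returned component is then processed by steps S101 to S104 independently, exactly as the flow of FIG. 6 loops back to step S204 per sub-network.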
The method for constructing a synchronous network of the electronic shelf label system according to this embodiment solves the problem that since the existing asynchronous cellular network cannot uniformly schedule the time slots for a plurality of base stations, it is easy to cause a communication failure between the base station and the shelf label due to a time slot conflict and an interference between signals. The method makes full use of time resources to maximize data throughput of the system communication. In addition, the synchronous network can provide the synchronization signal, and the shelf label can monitor the synchronization signal of a plurality of base stations to find a cross-base station movement and select an optimal base station for communication, thereby solving the problem that the shelf label cannot communicate normally after moving. In another aspect, the present disclosure provides a shelf label system, which specifically includes a server, base stations and an electronic shelf label. The server is configured to establish an initial network topological structure of base stations in a store according to ranging results among all the base stations. The server is further configured to perform topology depth traversal by taking each base station as a root node according to the initial network topological structure, to obtain a topology depth corresponding to each base station. The server is further configured to take the base station with a shallowest topological depth as a master base station, and adjust the initial network topological structure according to the ranging result from each base station to its superior base station to obtain a target network topological structure, where each base station in the target network topological structure is only connected to one superior base station. 
The server is further configured to allocate a transceiving time slot to each base station according to the target network topological structure to obtain a transceiving time slot periodic table. The base stations are configured to transmit and receive synchronous signals according to the transceiving time slot periodic table to obtain a synchronous network of the shelf label system. In another aspect, an embodiment of the invention provides a computer device, which includes a memory, a processor and a computer program stored in the memory and executable on the processor. The processor is configured to execute the computer program to implement the steps of: establishing an initial network topological structure of base stations in a store according to ranging results among all the base stations; performing topology depth traversal by taking each base station as a root node according to the initial network topological structure, to obtain a topology depth corresponding to each base station; taking the base station with a shallowest topological depth as a master base station, and adjusting the initial network topological structure according to the ranging result from each base station to its superior base station to obtain a target network topological structure, where each base station in the target network topological structure is only connected to one superior base station; and allocating a transceiving time slot to each base station according to the target network topological structure to obtain a transceiving time slot periodic table, and making all the base stations transmit and receive synchronous signals according to the transceiving time slot periodic table to obtain a synchronous network of the shelf label system. In another aspect, an embodiment of the present disclosure provides a readable storage medium storing a computer program. 
When being executed by a processor, the computer program implements the steps of: establishing an initial network topological structure of base stations in a store according to ranging results among all the base stations; performing topology depth traversal by taking each base station as a root node according to the initial network topological structure, to obtain a topology depth corresponding to each base station; taking the base station with a shallowest topological depth as a master base station, and adjusting the initial network topological structure according to the ranging result from each base station to its superior base station to obtain a target network topological structure, where each base station in the target network topological structure is only connected to one superior base station; and allocating a transceiving time slot to each base station according to the target network topological structure to obtain a transceiving time slot periodic table, and making all the base stations transmit and receive synchronous signals according to the transceiving time slot periodic table to obtain a synchronous network of the shelf label system. According to the present disclosure, an initial network topological structure of all base stations is established through ranging results among all the base stations; next, the initial network topological structure is adjusted according to topological depths and the ranging results to obtain a target network topological structure; finally, a transceiving time slot is allocated to each base station according to the target network topological structure, so that all the base stations transmit and receive synchronization signals according to the allocated transceiving time slots, thereby obtaining a synchronous network of the shelf label system. 
The present disclosure solves the problem that since the existing asynchronous cellular network cannot uniformly schedule time slots for a plurality of base stations, it is easy to cause a communication failure between the base station and the shelf label due to a time slot conflict and an interference between signals. The present disclosure makes full use of time resources to maximize the data throughput of the system communication. By establishing a synchronous network, the present disclosure keeps time synchronization between the plurality of base stations, and prevents a signal interference by a time-slotted data transmission and reception based on the synchronous network, thereby improving the stability and the data throughput of the electronic shelf label system. Those of ordinary skill in the art can understand that all or part of the flows in the aforementioned method embodiments can be completed by instructing relevant hardware through a computer program that may be stored in a nonvolatile computer-readable storage medium, and when executed, the program can include the flows of the aforementioned method embodiments. Any reference to a memory, a storage, a database or any other medium adopted in the embodiments of the present disclosure can include a non-volatile memory and/or a volatile memory. The nonvolatile memory may include a read-only memory (ROM), a programmable ROM (PROM), an electrically programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM) or a flash memory. The volatile memory may include a random-access memory (RAM) or an external cache memory.
By way of illustration rather than limitation, the RAM is available in various forms, such as a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDRSDRAM), an enhanced SDRAM (ESDRAM), a synch-link DRAM (SLDRAM), a memory bus (Rambus) direct RAM (RDRAM), a direct memory bus dynamic RAM (DRDRAM) and a memory bus dynamic RAM (RDRAM). It should be noted that herein, the relational terms such as ‘first’ and ‘second’ are only used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or sequence between these entities or operations. In addition, the term “comprise,” “include” or any other variation thereof is intended to cover non-exclusive inclusions, so that a process, method, article or device that includes a series of elements includes not only those elements, but also other elements not explicitly listed, or further includes elements inherent to such process, method, article or device. Without further restrictions, an element defined by a statement “comprising a . . . ” does not exclude the existence of other identical elements in a process, method, article or device that includes said element.
DETAILED DESCRIPTION OF THE EMBODIMENTS

The present disclosure is further described by, but not limited to, the accompanying drawings and embodiments of the specification.

Embodiment 1

An active edge caching method based on community discovery and weighted federated learning includes: aggregating users into different user groups in a service scope of a base station by using a community discovery algorithm; selecting a most important user from each user group as a caching node to provide content distribution services; and training a DL model with a content popularity prediction framework based on attention weighted federated learning, and then predicting the users' content preferences at the next moment by using the trained DL model to cache hot content on a selected user.

Embodiment 2

According to the active edge caching method based on community discovery and weighted federated learning described in Embodiment 1, the differences are as follows: As shown in FIG. 1, an operating system for the active edge caching method includes a base station and U users, where each user has content storage capability and requests content from a content library F = {1, 2, …, f, …, F} according to personal preferences. In order to avoid network congestion caused by repeated content downloading requests and reduce content downloading latency, operators rent some user terminals as caching nodes to provide caching services. In order to select the best caching users and place the best hot content, the present disclosure provides an active edge caching method to reduce network transmission latency and network operation cost.
The aggregating users into different user groups in a service scope of a base station by using a community discovery algorithm includes: (1) obtaining physical distances between different users by means of GPS or the like, and simultaneously obtaining social relationships between different users from social platforms such as microblog, WeChat, and Tik Tok; and constructing a D2D content sharing graph G = {U, E} according to the physical distances and social relationships between different users, where U is a user node and E represents whether a D2D communication link has been established between users for content transmission, with E = 1 representing that a D2D communication link has been established. A specific construction flow is shown in FIG. 2: when the physical distance between user u and user v in the base station is less than a communication distance threshold (the communication distance threshold is generally determined by the transmitting power of a user terminal, and a higher transmitting power indicates a longer transmission distance) and the two users have a social relationship, namely, the two users keep in touch or are in the same social circle, a D2D communication link has been established between user u and user v for content transmission, and E_{u,v} = 1; otherwise, E_{u,v} = 0; and (2) dividing the users into different user groups C = {C1, C2, …, Ci, …, CI} based on the constructed D2D content sharing graph G = {U, E} by using a Louvain community discovery algorithm. The method used for user grouping in the present disclosure is not limited to the Louvain algorithm, but may alternatively be a graph partitioning algorithm such as Kernighan-Lin, or Newman and Girvan. The dividing the users into different user groups by using a Louvain community discovery algorithm, as shown in FIG. 3, includes:

A. treating each node in the D2D content sharing graph G as an independent community;

B.
assigning each node in turn to the community where one of its neighbor nodes is located, computing a modularity change ΔQ before and after the assignment, recording the maximum gain maxΔQ and the neighbor node achieving it, and assigning the node to the community of that neighbor node if maxΔQ > 0, otherwise leaving the node unchanged, where ΔQ is computed by formula (I):

ΔQ_{u→Ci} = [(A_{u,v} + a_{u→Ci})/(2m) − ((a_{Ci} + a(u))/(2m))²] − [A_{u,v}/(2m) − (a_{Ci}/(2m))² − (a(u)/(2m))²] = a_{u→Ci}/(2m) − a_{Ci}·a(u)/(2m²)   (I)

where m = ΣE_{u,v} is a sum of all edge weights in the D2D content sharing graph G; A_{u,v} = ΣE_{u,v}, ∀u, v ∈ Ci, is a sum of edge weights between all nodes in community Ci; a_{Ci} represents a sum of weights of connecting edges between community Ci and other communities; a_{u→Ci} is a sum of edge weights between node u and the nodes in community Ci after node u is moved into community Ci; and a(u) is a sum of weights of all edges connected to node u;

C. repeating step B until the communities of all nodes do not change;

D. compressing the nodes in the same community into a new node to convert the edge weights between nodes within the community into an edge weight of the new node and convert the edge weights between communities into edge weights between new nodes; and

E. repeating steps A-D until the modularity of the entire D2D content sharing graph G does not change.
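The right-hand side of formula (I) collapses to the usual Louvain modularity gain, which is what an implementation actually evaluates per candidate move. A sketch of that simplified form (argument names are assumptions):

```python
def modularity_gain(a_u_ci, a_ci, a_u, m):
    """Simplified gain from formula (I):
    a_u_ci: sum of edge weights between node u and community Ci,
    a_ci:   sum of weights of edges incident to community Ci,
    a_u:    sum of weights of all edges connected to node u, i.e. a(u),
    m:      sum of all edge weights in the D2D content sharing graph.
    Returns dQ = a_u_ci/(2m) - a_ci*a_u/(2m^2); in step B, node u joins
    the neighboring community only when the best such gain is positive."""
    return a_u_ci / (2 * m) - (a_ci * a_u) / (2 * m ** 2)
```

For example, with a_u_ci = 2, a_ci = 4, a_u = 3 and m = 10, the gain is 2/20 − 12/200 = 0.04, so the move would be accepted.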
The selecting a most important user from each user group as a caching node to provide content distribution services includes:

(3) in each user group, first computing a centrality B_u of each user, where B_u is computed by formula (II):

B_u = Σ_{i=1}^{U} Σ_{i<j} g_{i,j}(u)/G_{i,j}   (II)

where G_{i,j} represents the number of shortest path links between all users in the D2D content sharing graph, and g_{i,j}(u) is the number of those links that pass through user u; then obtaining caching capacities S_u of different users according to the storage capacities of different user devices, where the caching capacities S_u are the magnitudes of the inherent storage spaces of terminals such as the users' mobile phones; and finally, computing degrees of importance V_u = γ·B_u + (1 − γ)·S_u of different users in different user groups according to the centralities B_u and the caching capacities S_u, where γ is a weight assigned to the centralities B_u and the caching capacities S_u, with a value between 0 and 1;

(4) sorting the users in different user groups according to the degrees of importance V_u, and selecting the most important user as the user caching node in each user group; and

(5) after the caching nodes are selected, selecting hot content according to the popularity of different content in the user groups for caching on the selected users. The popularity of different content in different user groups will be predicted through deep learning models.
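Steps (3) and (4) amount to ranking each group's users by a weighted importance score combining centrality and caching capacity. A small sketch; the weight value and the sample numbers are arbitrary assumptions:

```python
def select_caching_node(group, gamma=0.7):
    """group: {user: (centrality B_u, caching capacity S_u)}, both assumed
    normalized to [0, 1]. Importance V_u = gamma*B_u + (1 - gamma)*S_u,
    with gamma in (0, 1) trading centrality off against storage; the user
    with the highest importance becomes the group's caching node."""
    importance = {u: gamma * b + (1 - gamma) * s
                  for u, (b, s) in group.items()}
    return max(importance, key=importance.get)

# u1 is highly central, u2 has more storage; with gamma = 0.7 centrality wins.
node = select_caching_node({"u1": (0.8, 0.2), "u2": (0.1, 0.9)})
```

Raising gamma favors well-connected users; lowering it favors high-capacity devices, which is exactly the trade-off the weight in the text controls.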
In the present disclosure, the training a content popularity deep learning prediction model, namely, a DL model, with an attention weighted federated learning framework includes:

(6) selecting a certain proportion of user terminals to participate in the training process of the content popularity deep learning prediction model, where the number of selected user terminals is determined by the current communication environment, and more user terminals may be selected to participate in federated training when communication resources are relatively idle; training the content popularity deep learning prediction model with the locally recorded historical content request data of the users; and, after the content popularity deep learning prediction model is trained on the different user terminals, transmitting the models to a base station for model aggregation, where the entire process is shown in FIG.4; and

(7) in the process of model aggregation, assigning, by the base station, different weights to the content popularity deep learning prediction models, namely, the local models, of the different user terminals according to user activities and terminal computing capabilities (as shown in FIG.5), where the weights are computed by formulas (III) and (IV):

$$a_{r+1}^{u}=\frac{\left|d_{[r-\delta,\,r]}^{u}\right|}{\sum_{u=1}^{U}\left|d_{[r-\delta,\,r]}^{u}\right|}\quad\text{(III)}$$

$$q_{r+1}^{u}=\frac{\log\left(e_{r+1}^{u}\right)}{\log\left(\max\{e\}\right)}\quad\text{(IV)}$$

where $a_{r+1}^{u}$ represents the activity of the selected user terminal u in the (r+1)th federated training process, and $\left|d_{[r-\delta,\,r]}^{u}\right|$ represents the quantity of requests for different content by the selected terminal u within the time window $[r-\delta,r]$, $\delta$ being the length of the observation window; $q_{r+1}^{u}$ represents the computing capability of the selected user terminal u in the (r+1)th federated training process, $e_{r+1}^{u}$ represents the number of local training rounds that the computing capability of the selected terminal u allows in the (r+1)th federated process, log( ) is the logarithmic computation, and max{e} is the maximum number of local training rounds over the selected terminals; after the weight computation is completed, performing weighted
aggregation on the different local models to obtain a global content popularity deep learning prediction model, where the weighted aggregation formulas are shown as formulas (V) and (VI):

$$w_{r+1}^{g}=\sum_{u=1}^{U}a_{r+1}^{u}\,q_{r+1}^{u}\,w_{r+1}^{u}\quad\text{(V)}$$

$$b_{r+1}^{g}=\sum_{u=1}^{U}a_{r+1}^{u}\,q_{r+1}^{u}\,b_{r+1}^{u}\quad\text{(VI)}$$

where $w_{r+1}^{g}$ and $b_{r+1}^{g}$ are the weight and bias parameters of the content popularity deep learning prediction model after aggregation in the base station; $w_{r+1}^{u}$ and $b_{r+1}^{u}$ are the weight and bias parameters of the local model obtained on the selected user terminal u after local training; and $a_{r+1}^{u}$ and $q_{r+1}^{u}$ are the weights, computed by formulas (III) and (IV), of the local model trained by user terminal u. After the content popularity deep learning prediction model is trained with the foregoing weighted federated framework, the caching user selected in step (4) uses the obtained model to predict user preferences for different content at the next moment and to cache hot content, including:

(8) in each user group, inputting, by the caching user selected in step (4), the locally recorded request data $d_{[r-\delta,\,r]}^{u}$ for different content in the previous $\delta$ time windows into the trained content popularity deep learning prediction model, and outputting the user preferences $\hat{Y}_{r+1}$ for different content at the next moment; subsequently, computing the caching probabilities of different content at the selected caching users according to the predicted results, as shown in formula (VII):

$$p_{r+1}^{f}=\frac{\hat{Y}_{r+1}^{f}}{\sum_{f=1}^{F}\hat{Y}_{r+1}^{f}}\quad\text{(VII)}$$

where $p_{r+1}^{f}$ represents the probability that content f is cached at the next moment, $\hat{Y}_{r+1}^{f}$ represents the user preference for content f at the next moment, and $\sum_{f=1}^{F}\hat{Y}_{r+1}^{f}$ represents the sum of user preferences over all content; and

(9) after the caching probabilities of different content at the selected caching users are computed, sorting the caching probabilities by a sorting algorithm, and selecting the first S (S being the number of content items that the selected caching user may store) pieces of
content with the highest probability for caching at the selected user terminals, so as to reduce network transmission latency and network operation cost. The content popularity deep learning prediction model used in the present disclosure is a bidirectional long short-term memory (BiLSTM) network model, with a structure shown in FIG.6. The prediction model is not limited to a bidirectional long short-term memory network; it may also be another deep learning network model such as a convolutional neural network model or a graph neural network model. FIG.7 is a performance analysis diagram of the content popularity deep learning prediction model based on the weighted federated learning framework in this embodiment, where the horizontal coordinates represent the indexes of different request content, and the vertical coordinates represent the number of times the users have requested the different content. In FIG.7, AWFL is the predicted value of the content popularity model based on the weighted federated learning framework, and Group True is the true value. It may be seen that the AWFL method of the present disclosure can accurately predict the users' future requests for different content. The combination of weighted federated learning and a bidirectional long short-term memory network, provided in this embodiment, fits the users' preferences for different content well.
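The attention-weighted aggregation of formulas (III)-(VI) can be illustrated numerically. The three toy terminals, their request counts and feasible local training rounds, and the flat two-parameter "models" below are all illustrative assumptions:

```python
import math

def activity_weights(request_counts):
    """Formula (III): a_u = |d_u| / sum over terminals of |d_u|."""
    total = sum(request_counts)
    return [c / total for c in request_counts]

def capability_weights(local_rounds):
    """Formula (IV): q_u = log(e_u) / log(max{e})."""
    top = math.log(max(local_rounds))
    return [math.log(e) / top for e in local_rounds]

def aggregate(local_params, a, q):
    """Formulas (V)/(VI): global parameter = sum_u a_u * q_u * local parameter,
    applied elementwise to flat parameter vectors."""
    dim = len(local_params[0])
    return [sum(a[u] * q[u] * local_params[u][k] for u in range(len(local_params)))
            for k in range(dim)]

# Three selected terminals: content requests in the window, feasible local
# training rounds, and locally trained parameter vectors.
requests = [10, 30, 60]
rounds = [2, 4, 8]
locals_w = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]

a = activity_weights(requests)      # activity weights, formula (III)
q = capability_weights(rounds)      # capability weights, formula (IV)
w_global = aggregate(locals_w, a, q)
```

The most active and most capable terminal dominates the aggregate, which is the intended attention effect of the weighting.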
FIG.8(a) is a latency performance analysis diagram of the active edge caching method based on community discovery and weighted federated learning in this embodiment under different caching capacities, where the horizontal coordinates represent the quantities of content that may be cached by different user terminals, the vertical coordinates represent the content downloading latency, and CAFLPC is the active edge caching method based on community discovery and weighted federated learning provided in the present disclosure. FIG.8(a) demonstrates that the provided CAFLPC method reduces the content downloading latency well and obtains approximately optimal policy performance under different caching capacities compared with other methods. FIG.8(b) is an analysis diagram of the system benefit per unit cost of the active edge caching method based on community discovery and weighted federated learning in this embodiment under different caching capacities. The horizontal coordinates represent the quantities of content that may be cached by different user terminals, and the vertical coordinates represent the system benefit per unit cost. FIG.8(b) shows that the provided CAFLPC method reduces more content downloading latency per unit cost than other methods; that is, the provided method achieves the goals of reducing network transmission latency and network operation cost.

Embodiment 3

A computer device includes a memory and a processor; the memory stores a computer program, and the processor implements the steps of the active edge caching method based on community discovery and weighted federated learning when executing the computer program.

Embodiment 4

A computer-readable storage medium stores a computer program, and the computer program implements the steps of the active edge caching method based on community discovery and weighted federated learning when executed by a processor.
11943115
DETAILED DESCRIPTION Some embodiments will be described in more detail with reference to the accompanying drawings, in which the embodiments of the present disclosure have been illustrated. However, the present disclosure can be implemented in various manners, and thus should not be construed to be limited to the embodiments disclosed herein. It is to be understood that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein are not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models. Characteristics are as follows: On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider. Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. 
There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported providing transparency for both the provider and consumer of the utilized service. Service Models are as follows: Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. 
The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations. Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls). Deployment Models are as follows: Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises. Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises. Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds). A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure that includes a network of interconnected nodes. 
Referring now toFIG.1, a schematic of an example of a cloud computing node is shown. Cloud computing node10is only one example of a suitable cloud computing node and is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the invention described herein. Regardless, cloud computing node10is capable of being implemented and/or performing any of the functionality set forth hereinabove. In cloud computing node10there is a computer system/server12or a portable electronic device such as a communication device, which is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system/server12include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like. Computer system/server12may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Computer system/server12may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices. 
As shown inFIG.1, computer system/server12in cloud computing node10is shown in the form of a general-purpose computing device. The components of computer system/server12may include, but are not limited to, one or more processors or processing units16, a system memory28, and a bus18that couples various system components including system memory28to processor16. Bus18represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus. Computer system/server12typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server12, and it includes both volatile and non-volatile media, removable and non-removable media. System memory28can include computer system readable media in the form of volatile memory, such as random access memory (RAM)30and/or cache memory32. Computer system/server12may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system34can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to bus18by one or more data media interfaces. 
As will be further depicted and described below, memory28may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention. Program/utility40, having a set (at least one) of program modules42, may be stored in memory28by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. Program modules42generally carry out the functions and/or methodologies of embodiments of the invention as described herein. Computer system/server12may also communicate with one or more external devices14such as a keyboard, a pointing device, a display24, etc.; one or more devices that enable a user to interact with computer system/server12; and/or any devices (e.g., network card, modem, etc.) that enable computer system/server12to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interfaces22. Still yet, computer system/server12can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter20. As depicted, network adapter20communicates with the other components of computer system/server12via bus18. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system/server12. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc. Referring now toFIG.2, illustrative cloud computing environment50is depicted.
As shown, cloud computing environment50includes one or more cloud computing nodes10with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone54A, desktop computer54B, laptop computer54C, and/or automobile computer system54N may communicate. Nodes10may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof. This allows cloud computing environment50to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It is understood that the types of computing devices54A-N shown inFIG.2are intended to be illustrative only and that computing nodes10and cloud computing environment50can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser). Referring now toFIG.3, a set of functional abstraction layers provided by cloud computing environment50(FIG.2) is shown. It should be understood in advance that the components, layers, and functions shown inFIG.3are intended to be illustrative only and embodiments of the invention are not limited thereto. As depicted, the following layers and corresponding functions are provided: Hardware and software layer60includes hardware and software components. Examples of hardware components include: mainframes61; RISC (Reduced Instruction Set Computer) architecture based servers62; servers63; blade servers64; storage devices65; and networks and networking components66. In some embodiments, software components include network application server software67and database software68. 
Virtualization layer70provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers71; virtual storage72; virtual networks73, including virtual private networks; virtual applications and operating systems74; and virtual clients75. In one example, management layer80may provide the functions described below. Resource provisioning81provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing82provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal83provides access to the cloud computing environment for consumers and system administrators. Service level management84provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment85provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. Workloads layer90provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation91; software development and lifecycle management92; virtual classroom education delivery93; data analytics processing94; transaction processing95; and local arranging of a remote deployment96. Typically, a microservice can be deployed and then debugged based on a remote deployment, for example, in a remote virtualization entity in a cloud computing environment. 
The remote virtualization entity may be, for example, a Virtual machine, a Docker container, a cluster container, or the like. A cluster container may be described below with reference toFIG.4as an example remote virtualization entity. FIG.4depicts a block diagram showing a cluster400for debugging microservices in a cloud computing environment according to some embodiments of the present disclosure. In some embodiments, the cluster400may comprise a plurality of pods (not shown), each of which may run at least one container. A microservice may be deployed in the container. Generally, a microservice may have its own database and communicate with other microservices via an Application Programming Interface (API) gateway. Development and debugging of each microservice can be done separately. For illustrative purposes only, a container410for debugging a microservice is shown and described according to the embodiments. Some other containers430specific to other microservices, related or unrelated to the microservice deployed in the container410, may be arranged in the cluster400as needed. AsFIG.4shows, the container410may contain a file system412, a database414, a client416, a server endpoint418, and/or the like. The database414may store logs and data associated with the microservice. Accordingly, the file system412may read/write the logs and data associated with the microservice from/to the database414. Moreover, the cluster400may also contain a storage420, a service endpoint440, and/or the like. In the embodiments, the container410may access the storage420and the other container430, and may be accessed by an external application or microservice via the service endpoint440. In some embodiments, the file system412may retrieve contents from the storage420, and may mount the logs and data from the database414to the storage420. For example, the storage420may be implemented as a container, a network file system, or the like.
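The container structure just described, with the file system reading and writing logs through the database and mounting them out to cluster storage, might be modeled as in the following sketch; all class names, method names, and sample data are illustrative assumptions, not the disclosed implementation:

```python
class Database:
    """Stores logs and data associated with the microservice (cf. database414)."""
    def __init__(self):
        self.records = {}

    def write(self, key, value):
        self.records[key] = value

    def read(self, key):
        return self.records.get(key)

class FileSystem:
    """Reads/writes logs via the database and mounts them to cluster storage
    (cf. file system412 and storage420)."""
    def __init__(self, database):
        self.database = database

    def write_log(self, name, text):
        self.database.write(name, text)

    def mount_to(self, storage):
        # Expose the database records in the shared cluster storage.
        storage.update(self.database.records)

db = Database()
fs = FileSystem(db)
fs.write_log("request.log", "GET /api/items 200")
cluster_storage = {}          # stands in for storage420, e.g. a network file system
fs.mount_to(cluster_storage)
```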
Moreover, the client416may access a microservice deployed in the other container430, such that the microservice deployed in the container410may call a function of the microservice deployed in the other container430. For example, the client416may be a Representational State Transfer Application Programming Interface (REST API) client, a Graph Query Language (GraphQL) client, a MongoDB client, or the like. Accordingly, the microservice deployed in the other container430may be a REST API microservice, a GraphQL microservice, a MongoDB, or the like. As an example, the REST API client may access the REST API microservice in a same cluster. The GraphQL client may access the GraphQL microservice. The MongoDB client may access the MongoDB microservice. In addition, the service endpoint440may be exposed, as an interface, to an external application or microservice. The service endpoint440may be an endpoint of a network proxy, for example, a Kubernetes (K8S) service. The external application or microservice may then access the server endpoint418via the service endpoint440. In this manner, a function of the microservice deployed in the container410may be called by another application or microservice. It can be noted that the cluster container may be created based on configuration files, such as YAML configuration files. Specifically, the YAML configuration files may define the following items: apiVersion, specifying a version from which K8S configuration meta will be fetched; kind, specifying a type of K8S objects; and the like. The most common K8S objects are pods, replica sets, deployments, and services. An exemplary YAML configuration file for creating a cluster container (for example, the container410) may be provided below, in Table 1, as an Example 1.
TABLE 1

apiVersion: apps/410
kind: Deployment
metadata:
  name: Example 1
  labels:
    app: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      ...
      - resources:
          limits:
            cpu: 4000m
            memory: 4096Mi
          requests:
            cpu: 2000m
            memory: 2048Mi
        name: example
        image: container:410

Therefore, the microservices may be deployed and debugged based on the created cluster in a cloud computing environment. However, deployment and debugging on the cloud is a time-consuming process, as it may involve a complete deployment and debugging procedure, including code building, image building, image pushing and pulling, pod restarting, and the like. Meanwhile, cloud resources are more expensive than local resources. For microservices using high CPU and memory, a debugging process that saves time and resource consumption is desired. Embodiments of the present disclosure provide a debug system which may arrange a remote deployment of a remote virtualization entity into a local system, such that a microservice debugging process can be implemented locally. FIG.5depicts a block diagram showing an example debug system500for debugging microservices locally according to some embodiments of the present disclosure. As shown inFIG.5, the debug system500may comprise a remote virtualization entity proxy510, a local virtualization entity520, a debug client530, and the like. The local virtualization entity520and the debug client530can be arranged in a local system540. In some embodiments, the remote virtualization entity proxy510and the local virtualization entity520may be created (and/or started) with respect to the remote virtualization entity. For example, the remote virtualization entity proxy510may have a same or a corresponding identification with the local virtualization entity520, such that they may have a one-to-one correspondence to each other. The remote virtualization entity proxy510may be implemented in a cloud computing environment, as a SaaS.
The remote virtualization entity proxy510may be, for example, a Virtual machine, a Docker container, a cluster container, or the like. The local virtualization entity520may be implemented locally as a container. Similar to the cluster400inFIG.4as the remote virtualization entity, for example, the remote virtualization entity proxy510may be a remote cluster proxy. Moreover, the local virtualization entity520may be a local container. FIG.6depicts a block diagram showing a remote cluster proxy600according to some embodiments of the present disclosure. FIG.7depicts a block diagram showing a local container700according to some embodiments of the present disclosure. In some embodiments, the remote cluster proxy600and the local container700may be created (and/or started) with respect to the cluster400inFIG.4. As shown inFIG.6, the remote cluster proxy600may contain a remote container proxy610, a storage620, other container630, a service endpoint640, and/or the like, in some embodiments. The remote container proxy610may comprise at least one component, for example, a file system612and one or more servers, such as a Network File System (NFS) server614, a Virtual Private Network (VPN) server616, a Linux Virtual Server (LVS)618, and the like. In some embodiments, the file system612may be configured similarly to the file system412inFIG.4. Moreover, the storage620, the other container630, and the service endpoint640(such as a K8S service) may respectively be configured similarly to the storage420, the other container430, and the service endpoint440inFIG.4. In some embodiments, the remote cluster proxy600may be deployed in a cloud computing environment. Moreover, the remote container proxy610may be provided as a SaaS. On the other side, as shown inFIG.7, the local container700may comprise a database710, a client720, a server endpoint730, and/or the like.
In addition to being arranged locally, the database710, the client720, and the server endpoint730may respectively be configured similarly to the database414, the client416, and the server endpoint418inFIG.4. In some embodiments, the local container700may be deployed in a local system, for example, a computing device used by a developer. The local container700may be started with a start script based on configuration information associated with the local container700. The configuration information may include a local directory, start parameters, an identification of the local container700, and/or the like. Moreover, the remote cluster proxy600and the local container700may be assigned with a same identification or corresponding identifications. An exemplary YAML configuration file for creating a remote container proxy (for example, the remote container proxy610) may be provided below, in Table 2, as an Example 2.

TABLE 2

apiVersion: apps/610
kind: Deployment
metadata:
  name: Example 2
  labels:
    app: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      ...
      - resources:
          limits:
            cpu: 400m
            memory: 256Mi
          requests:
            cpu: 100m
            memory: 128Mi
        name: example
        image: remote container proxy:610

In the YAML configuration file of Example 2, a new image (remote container proxy:610) is configured to replace the image (container:410) in the YAML configuration file of Example 1. Therefore, as described above, the remote cluster proxy600may be created to further include the servers. Moreover, compared with Example 1, the limits and requests for resources (CPU and memory) may be lowered in Example 2, as the remote cluster proxy600may be an input/output application. The other configuration parameters may be maintained the same.
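The start script for the local container, built from the configuration information (local directory, start parameters, identification), might look like the following sketch. The flag names follow the real docker CLI (--name, -v, -p), but the configuration keys, values, and helper name are illustrative assumptions:

```python
def build_start_command(config):
    """Assemble a docker-style start command from the configuration
    information associated with the local container."""
    parts = ["docker", "run",
             "--name", config["identification"],              # container identification
             "-v", f"{config['local_directory']}:/workspace"]  # mount the local directory
    parts += config.get("start_parameters", [])               # extra start parameters
    parts.append(config["image"])
    return " ".join(parts)

config = {
    "identification": "local-container-700",   # corresponds to the remote proxy's id
    "local_directory": "/home/dev/project",
    "start_parameters": ["-p", "8080:8080"],
    "image": "local-container:dev",
}
cmd = build_start_command(config)
```

A start script would then execute this command to bring the local container up with the same identification the remote proxy expects.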
Back toFIG.5, in some embodiments, the debug client530may be configured to connect the remote virtualization entity proxy510and the local system540, and thus the remote virtualization entity proxy510may be connected to the local virtualization entity520. The debug client530may be further configured to arrange the remote deployment of the remote virtualization entity proxy510into the local virtualization entity520through a tunneling technique. In this way, a microservice debugging process may be implemented in the local system540. For example, the microservice deployed in the container410inFIG.4may also be deployed in the debug system500. Thus, the debugging of the microservice may be locally performed in the local virtualization entity520based on the remote deployment of the remote virtualization entity proxy510. In some embodiments, the debug client530may be installed in the local system540as a plugin. According to further embodiments of the present disclosure, for a secure connection of the remote virtualization entity proxy and the local virtualization entity, a debug system may be provided to further include a mapping manager. FIG.8depicts a block diagram showing a debug system800according to some embodiments of the present disclosure. As shown inFIG.8, the debug system800may comprise a remote virtualization entity proxy810, a local virtualization entity820and a debug client830in a local system840, a mapping manager850, and the like. The remote virtualization entity proxy810, the local virtualization entity820, and the debug client830may be configured respectively similarly to the remote virtualization entity proxy510, the local virtualization entity520, and the debug client530inFIG.5. In some embodiments, the mapping manager850can be arranged to connect the remote virtualization entity proxy810with the debug client830in a secure manner.
For example, the mapping manager850may be assigned environment variables, for example, a key and an address of the mapping manager850. In such a case, the remote virtualization entity proxy810(for example, the remote cluster proxy, including the remote container proxy) may be created further based on the environment variables associated with the mapping manager. Therefore, the remote virtualization entity proxy810may be linked to the mapping manager850based on the environment variables associated with the mapping manager850. An exemplary YAML configuration file for creating a remote container proxy based on the environment variables is provided below, in Table 3, as Example 3.

TABLE 3
  apiVersion: apps/X
  kind: Deployment
  metadata:
    name: example-deployment
    labels:
      app: example
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: example
    template:
      metadata:
        labels:
          app: example
      spec:
        containers:
        ...
        - resources:
            limits:
              cpu: 400m
              memory: 256Mi
            requests:
              cpu: 100m
              memory: 128Mi
          name: example
          image: remote container proxy: X
          env:
          - name: MAPPING_MANAGER_URL
            value: https://mapping.manager.com/manager
          - name: MAPPING_MANAGER_KEY
            value: a47db531b6b3879d854f46130d5e883

In some embodiments, a code modification plugin may be provided to modify a first configuration file (such as Example 1) for creating a container into a second configuration file (such as Example 2 or Example 3) for creating a remote container proxy. For example, the code modification plugin may be installed in the local system to automatically transform the first configuration file into the second configuration file. In an alternative embodiment, the configuration file may also be manually modified by a developer to replace the image file, add the environment variables, and/or the like. Compared with Example 2, the environment variables (MAPPING_MANAGER_URL and MAPPING_MANAGER_KEY) are added in Example 3. The environment variables associated with the mapping manager850may also be retrieved by the debug client830.
Then, the debug client830may be registered with the mapping manager850based on the environment variables. Accordingly, the remote virtualization entity proxy810may send, to the mapping manager850, a request to connect to the debug client830based on the environment variables. The mapping manager850may then request the debug client830to connect with the remote virtualization entity proxy810based on the environment variables. In this way, the remote virtualization entity proxy810can be connected with the debug client830via the mapping manager850, thereby improving the security of the connection between them. Moreover, in some embodiments, the mapping manager850can be arranged in a cloud computing environment, as a SAAS. Embodiments of the present disclosure provide a method for arranging a remote deployment into a local system. Thus, a microservice debugging process running in a remote virtualization entity can be implemented locally. Embodiments will be described in detail below in conjunction withFIG.9andFIG.10. FIG.9depicts a schematic flowchart of a method900for local arrangement of remote deployment according to some embodiments of the present disclosure. In some embodiments, the method900can be implemented based on a debug system, for example, the debug system500inFIG.5, the debug system800inFIG.8, or the like. In some embodiments, the processing of local arrangement of remote deployment according to the embodiments could be implemented by a debug client, for example, the computer system/server12ofFIG.1. The debug client, for example, the debug client530inFIG.5, the debug client830inFIG.8, or the like, may be arranged in a local system, for example, the local system540inFIG.5, the local system840inFIG.8, or the like. FIG.10depicts an illustrative flowchart showing a process of local arrangement of a remote deployment based on a debug system (such as the debug system500or the debug system800) according to some embodiments of the present disclosure.
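The registration and connection flow described above (the debug client registers with the mapping manager; the remote proxy later asks the manager to connect it to that client) can be modeled with a minimal in-memory sketch. The class and method names below are illustrative assumptions, not the disclosure's actual API; the key value reuses the sample key from Example 3:

```python
# Minimal model of the mapping-manager handshake; names are illustrative.
class MappingManager:
    def __init__(self, key):
        self.key = key
        self.clients = {}            # registered debug clients, keyed by id

    def register(self, client_id, client, key):
        if key != self.key:          # the environment-variable key gates registration
            raise PermissionError("bad key")
        self.clients[client_id] = client

    def request_connect(self, client_id, proxy, key):
        # the remote virtualization entity proxy asks the manager to connect
        # it to a previously registered debug client
        if key != self.key:
            raise PermissionError("bad key")
        self.clients[client_id].on_starting_request(proxy)

class DebugClient:
    def __init__(self):
        self.connected_proxy = None

    def on_starting_request(self, proxy):
        self.connected_proxy = proxy  # accept the starting request

manager = MappingManager(key="a47db531b6b3879d854f46130d5e883")
client = DebugClient()
manager.register("client-830", client, key=manager.key)
manager.request_connect("client-830", proxy="proxy-810", key=manager.key)
```

Because both parties must present the shared key, neither side accepts a connection from an unknown peer, which is the security property the mapping manager provides.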
In the debug system, a debug client1010(such as the debug client530or830) may be configured to perform the method900inFIG.9. A remote virtualization entity proxy1030(such as the remote virtualization entity proxy510or810) and a local virtualization entity1040(such as the local virtualization entity520or820) may be created for implementing the function of the remote virtualization entity (such as the cluster400) described above. The debug client1010and the local virtualization entity1040may be provided in a local system. As shown inFIG.9, at block910, the debug client receives a starting request to connect a local system with a remote virtualization entity proxy. In some embodiments, the remote virtualization entity proxy1030may send the starting request to the debug client1010directly at stage A. Correspondingly, the debug client1010may receive the starting request from the remote virtualization entity proxy1030. In some other embodiments, for the debug system with the mapping manager1020(such as the mapping manager850), stages a, b, and c can be performed instead of stage A. The mapping manager1020may be assigned environment variables, such as a key, an address, and/or the like. In some embodiments, in a preparation stage, for example, before stage a, the debug client1010may retrieve the environment variables associated with the mapping manager1020, and may register itself with the mapping manager1020based on the environment variables. Moreover, the remote virtualization entity proxy1030may be assigned the environment variables. At stage a, the debug client1010may listen to the mapping manager1020for the starting request. At stage b, the remote virtualization entity proxy1030may request, based on the environment variables, that the mapping manager1020connect it to the debug client1010.
In response to the request, the mapping manager1020may be linked to the remote virtualization entity proxy1030, and may send the starting request to the debug client1010at stage c. Accordingly, the debug client1010may receive the starting request from the mapping manager1020. Therefore, the debug client1010may be connected to the remote virtualization entity proxy1030via the mapping manager1020in a secure manner. At block920, the debug client initiates a network tunnel between the local system and the remote virtualization entity proxy, for example, in response to the starting request. The network tunnel may be configured for data forwarding between a port of the local system and a port of the remote virtualization entity proxy. In some embodiments, as shown inFIG.10, the network tunnel can be established between the remote virtualization entity proxy1030and the debug client1010, which is included in the local system, at stage B. For example, the network tunnel may comprise a port forwarding for at least one component in the remote virtualization entity proxy. In some embodiments, the remote virtualization entity proxy may be provided with a Network File System (NFS) server, a Virtual Private Network (VPN) server, and/or the like. Thus, the network tunnel may comprise, for example, a first port forwarding for the NFS server, a second port forwarding for the VPN server, and/or the like. At block930, the debug client arranges at least one component in the remote virtualization entity proxy into a local virtualization entity in the local system via the network tunnel. In some embodiments, the debug client1010may start the local virtualization entity1040at stage C, as shown inFIG.10. For example, the debug client1010may run a start script based on configuration information associated with the local virtualization entity1040.
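At the socket level, a port forwarding of the kind the network tunnel comprises can be sketched as a relay that copies bytes between a local port and a remote host/port. This is a simplified, single-connection illustration under the assumption of a plain TCP relay; real tunnels (e.g., SSH tunnels or Kubernetes port forwarding) additionally provide authentication and encryption:

```python
# Illustrative single-connection TCP port forwarder (not the disclosure's
# actual tunnel implementation).
import socket
import threading

def _pipe(src, dst):
    """Copy bytes from src to dst until EOF, then half-close dst."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass  # peer already closed

def start_forwarder(target_host, target_port):
    """Listen on an ephemeral local port; relay one connection to the target."""
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.listen(1)

    def serve():
        conn, _ = listener.accept()
        remote = socket.create_connection((target_host, target_port))
        threading.Thread(target=_pipe, args=(remote, conn), daemon=True).start()
        _pipe(conn, remote)

    threading.Thread(target=serve, daemon=True).start()
    return listener.getsockname()[1]   # the local port to connect to
```

A client that connects to the returned local port then talks, transparently, to the remote component (e.g., the NFS or VPN server) behind the forwarder.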
The configuration information may include a local directory, start parameters, an identification of the local virtualization entity, and/or the like. The identification may be the same as or correspond to an identification of the remote virtualization entity proxy1030. After the local virtualization entity1040is started, it can be securely connected with the remote virtualization entity proxy1030via the network tunnel. Then, at stage D, the debug client1010may map the at least one component (as the remote deployment) of the remote virtualization entity proxy1030into the local virtualization entity1040via the port forwarding. For example, the at least one component may comprise a file system. In some embodiments, the debug client1010may map the file system into the local virtualization entity1040via the first port forwarding for the NFS server. Then, the local virtualization entity1040may share the file system with the remote virtualization entity proxy. Moreover, for example, the at least one component may comprise the VPN server. In some embodiments, the debug client1010may map the VPN server into the local virtualization entity1040via the second port forwarding for the VPN server, to form a VPN client in the local virtualization entity1040. Therefore, a VPN network can be formed based on the VPN server and the VPN client. In this case, a client in the local virtualization entity may access a remote microservice via the VPN client and the VPN server, i.e., via the VPN network. In a further embodiment, a Linux Virtual Server (LVS) may also be included in the remote virtualization entity proxy1030. In some embodiments, a server endpoint in the local virtualization entity may be accessible by an external application via the LVS and the VPN client. In an alternative embodiment, stage D can be implemented before stage C inFIG.10.
Specifically, after the network tunnel is initiated at stage B, the debug client1010may first map the at least one component into the local system at stage D. The debug client1010may then start the local virtualization entity1040in the local system at stage C, for example, in the manner described above. In embodiments, the started local virtualization entity1040has the same identification as, or an identification corresponding to, that of the remote virtualization entity proxy1030. Further, the debug client1010may cause the local virtualization entity1040to include the at least one component. It can be noted that the sequence of the stages described in the embodiments is merely for illustrative purposes. Any other appropriate sequence (including the addition, deletion, and/or modification of at least one stage) can also be implemented to realize the corresponding embodiments. FIG.11depicts a schematic block diagram of a debug system1100using the method of local arrangement of remote deployment inFIG.9according to some embodiments of the present disclosure. In the embodiments, the debug system1100may be implemented as the debug system500or the debug system800. In some embodiments, as shown inFIG.11, the debug system1100may comprise a debug client1110(for example, the debug client530,830), the local container700(as shown inFIG.7), and the remote cluster proxy600(as shown inFIG.6). The debug client1110and the local container700may be provided in a local system1120(for example, the local system540,840). The remote cluster proxy600may be provided in a cloud computing environment as a SAAS. As described above, the remote cluster proxy600and the local container700may be created (and/or started) with respect to the cluster400. Thus, the debugging process for the microservice deployed in the container410in the cluster400can be equivalently implemented by the debug system1100according to embodiments of the present disclosure.
In some embodiments, the remote cluster proxy600may be connected with the debug client1110by means of any appropriate connection technique. Additionally, the debug system1100may further comprise a mapping manager1140(for example, the mapping manager850), as the dotted block shows. In some embodiments, the debug client1110may retrieve environment variables associated with the mapping manager1140, for example, a key and an address of the mapping manager1140. The debug client1110may be registered with the mapping manager1140based on the environment variables. For example, a user interface, such as an interface of a browser console, may be provided for the registering process. The debug client1110may then listen to the mapping manager1140for a starting request. The remote cluster proxy600may be created with the environment variables. The remote cluster proxy600may be linked to the mapping manager1140based on the environment variables. In response to a request from the remote cluster proxy600to connect with the debug client1110, the mapping manager1140may send the starting request to the corresponding debug client1110based on the environment variables. Accordingly, the debug client1110may receive the starting request and connect to the remote cluster proxy600in a secure manner. Upon the secure connection, the debug client1110may initiate a network tunnel1130between the remote cluster proxy600and the local system1120. The network tunnel1130can be configured for data forwarding between a port of the remote cluster proxy600and a port of the local system1120. For example, the network tunnel1130may comprise a first port forwarding for the NFS server614, a second port forwarding for the VPN server616, and/or the like. The debug client1110may start the local container700corresponding to the remote cluster proxy600based on configuration information. The configuration information may include a local directory, start parameters, an identification, and/or the like.
Accordingly, the remote cluster proxy600may be connected to the local container700by means of the network tunnel1130. In some embodiments, the debug client1110may map the file system612from the remote container proxy610into the local container700via the first port forwarding, as a shared file system712. That is, the file system612/712can be shared between the remote container proxy610and the local container700. The shared file system712may read/write logs and data from/to the database710. Moreover, the shared file system712may retrieve contents from the storage620in the remote cluster proxy600, and mount the logs and data to the storage620. As an example, a data processing related microservice can be debugged in the local system1120based on the shared file system712. In some embodiments, the debug client1110may map the VPN server616from the remote container proxy610into the local container700via the second port forwarding, to form a VPN client1160in the local container700. A VPN network may be established based on the VPN server616and the VPN client1160. Alternatively, the above mapping process and the local container starting process can be implemented in another order. For example, the debug client may map a component (such as the file system612, the VPN server616, or the like) from the remote cluster proxy600into the local container700, and then start the local container700to include the component. In some embodiments, the client720may access the other container630in the remote cluster proxy600or an external application via the VPN network. Some other microservices (or applications) may be deployed in the other container630. For example, the client720may be a REST API client, a GraphQL client, a MongoDB client, or the like. Accordingly, the microservice deployed in the other container630may be a REST API microservice, a GraphQL microservice, a MongoDB microservice, or the like.
As an example, the REST API client in the local container700may access the REST API microservice in the remote cluster proxy600through the VPN network. Therefore, the microservice to be debugged may call functions of other microservices or applications. Further, the service endpoint640(such as a K8S service) can be exposed to an external application, such that the external application may access the server endpoint730via the K8S service, the LVS server618and the VPN client1160. That is, an access to the server endpoint418in the cluster400inFIG.4can be redirected to the server endpoint730in the local container700through the LVS server and the VPN client. Therefore, the microservice to be debugged may provide a calling interface for other microservices or applications, which may then call functions of the microservice via the calling interface. Therefore, the debug system1100may debug microservices by running the corresponding code instantly and locally. The developers may run, compile and modify microservice code with compiling tools at any time. Moreover, the time costs for image building, image pushing, and pod restarting may be saved. Thus, the local debugging process according to the embodiments may save both developers' time and cloud resources, and is more convenient than running on the cloud. Additionally, in some embodiments of the present disclosure, an apparatus for local arrangement of a remote deployment may be provided. The apparatus may comprise one or more processors, a memory coupled to at least one of the one or more processors, and a set of computer program instructions stored in the memory. The set of computer program instructions may be executed by at least one of the one or more processors to perform the above method. In some other embodiments of the present disclosure, a computer program product for local arrangement of a remote deployment may be provided.
The computer program product may comprise a computer readable storage medium having program instructions embodied therewith. The program instructions, executable by one or more processors, cause the one or more processors to perform the above method. The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). 
In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. 
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be accomplished as one step, executed concurrently, substantially concurrently, in a partially or wholly temporally overlapping manner, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. 
Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
DETAILED DESCRIPTION OF THE DRAWINGS The various embodiments described herein generally provide apparatus, systems and methods which facilitate the mapping and controlling of network-enabled IoT devices housed in an environment via a graphical user interface (GUI) of an electronic device. More particularly, the method (e.g., implemented by an application program configured to run on the electronic device) includes receiving information including spatial locations of these devices in the environment and associating each IoT device with a respective three-dimensional (3D) or two-dimensional (2D) position in the environment based on the spatial information. The method also includes generating a digital map representing the 3D or the 2D position of the IoT devices in the environment. In some embodiments, the digital map can be overlaid on a layout of a physical floorplan of the environment. In some embodiments, the layout can be selected in a user-friendly manner from one or more pre-loaded floorplan templates corresponding to different types of floorplans. In some embodiments, new floorplan templates can be added to an existing set of floorplan templates. In some embodiments, physical objects (e.g., walls, furniture, plants, paintings, lamps, staircase, bathroom fixtures, plumbing fixtures, etc.) in the environment can be represented on the floorplan template by preloaded visual indicia that provide realistic displays of the physical objects. In some embodiments, a floorplan template can be modified by adding new indicia representing physical objects that are newly-added in the environment. In some embodiments, the indicia can be customized, if a user so desires.
In some embodiments, the IoT devices can be represented on the digital map using industry-standard icons or user-defined icons. Such a representation can allow configuration/registration of the IoT devices to various areas/locations of the house. For example, a floorplan can depict a living room and via a user interface a user can tag IoT devices such as motion sensors or remote smoke detectors in the floorplan to assign these devices to the “living room area.” Similarly, a user can tag icons corresponding to an oven, a microwave, a refrigerator, and smart appliances via a user interface to assign these IoT devices in the “dining area.” Thus, embodiments of the present technology provide the option of stepwise registration or assignment of IoT devices during installation. In some embodiments, the digital map provides settings/configurations, network addresses, and operational details of the different IoT devices installed in the environment. In some embodiments, a control signal can be sent to an IoT device to query the operational settings of the IoT device. In some embodiments, the digital map provides an option to modify the settings/configurations of the IoT devices via a graphical user interface (GUI) of an electronic device that is located remotely from the IoT devices. Such modification, for example, can be implemented by communicating control signals from the electronic device to the IoT devices, wherein the control signals include updated settings/configurations. In some embodiments, updated settings/configurations can be sent to an IoT device based on a request from the user and/or based on operational settings of the IoT device. The electronic device displaying the GUI can include a transceiver to provide various commands and control signals to a transceiver included in an IoT device. These commands and control signals can be sent using various wireless communication protocols and techniques, such as Bluetooth, cellular, infrared, Wi-Fi, etc. 
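A control signal carrying updated settings/configurations, as described above, might be structured as a small JSON message sent over one of the listed transports. The field names and example settings below are assumptions for illustration; actual formats are device- and protocol-specific:

```python
# Hypothetical control-signal format for pushing updated settings to an
# IoT device; field names are illustrative assumptions.
import json

def build_control_signal(device_id, settings):
    """Serialize updated settings/configurations for one IoT device."""
    return json.dumps({
        "type": "update_settings",
        "device_id": device_id,
        "settings": settings,      # e.g., {"target_temp_f": 68}
    })

def parse_control_signal(payload):
    """What device-side firmware might do on receipt of the signal."""
    msg = json.loads(payload)
    assert msg["type"] == "update_settings"
    return msg["device_id"], msg["settings"]
```

A query for the current operational settings could reuse the same envelope with a different "type" field.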
In some embodiments, the electronic device can communicate with the same IoT device using multiple wireless communication protocols and techniques. In certain applications, the communication channel that allows remote control of an IoT device can include a wired communication channel. Thus, it will be appreciated that embodiments of the present disclosure provide the ability to monitor and control a plurality of different types of IoT devices from different manufacturers via a single GUI common to all the IoT devices in the environment, without the necessity to make use of separate manufacturer-provided means of communicating with the IoT devices. Different IoT devices in an environment can be controlled via a “central” or a “master” GUI that is easily usable/understandable by a user who desires to leverage functionalities of the different IoT devices within a commonly-implemented GUI. For example, a process associated with a central GUI can be configured to call individual software/firmware/APIs associated with individual IoT devices. In some embodiments, the GUI provides the ability to select settings that would impact a group of IoT devices in the environment. Such a group can include every single device in the environment or can be a subset of the IoT devices in the environment. Such a group can, for example, be formed by selecting individual IoT devices to be part of the group, and assigning a group name (e.g., by typing a desired group name via the GUI). Once a group is formed, various functionalities of the IoT devices within the group can be activated and/or adjusted. These functionalities can be identical for all the IoT devices in the group, or they can be different across the devices in the group. For example, group timer selections, group time settings, and group network settings can be made in a manner similar to the settings that would impact an individual IoT device.
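The group-formation and group-settings behavior described above can be sketched as follows. The class, group name, and setting key are hypothetical; the point is that devices of mixed types (and, per the disclosure, mixed manufacturers) can share a named group and receive one setting together:

```python
# Illustrative sketch of named device groups and group-wide settings.
class IoTDevice:
    def __init__(self, name, kind):
        self.name = name
        self.kind = kind
        self.settings = {}

groups = {}

def form_group(group_name, devices):
    """Form a group under a user-assigned name."""
    groups[group_name] = list(devices)

def apply_group_setting(group_name, key, value):
    """Push the same setting to every member of the group."""
    for device in groups[group_name]:
        device.settings[key] = value

lamp = IoTDevice("living-room-lamp", "light")
thermostat = IoTDevice("hall-thermostat", "thermostat")
form_group("downstairs", [lamp, thermostat])          # mixed device types
apply_group_setting("downstairs", "timer_off", "23:00")
```

A group-wide disconnect could be implemented the same way, iterating over the group's members.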
For example, a software update for a group can be pushed out to multiple IoT devices in the group. Additionally, if desired, a single disconnect button on the GUI can sever communications with all devices within the group or groups. IoT devices forming a group are not necessarily constrained to be devices of the same type or the same manufacturer but rather can be different types of devices and/or can be from different manufacturers. Furthermore, according to embodiments disclosed herein, when a new IoT device is deployed in the environment, the GUI can be configured to initially discover (e.g., based on a registration signal received from the new IoT device) and subsequently control the new IoT device. In some embodiments, information pertaining to the new IoT device is broadcast to the one or more IoT devices deployed in the environment. In conventional systems, IoT devices at best report their presence based on a listing of some kind of information identifying the IoT devices. Such a listing is a static listing, which makes it difficult for a user to determine which IoT device is installed in which location (e.g., where) with respect to an environment; such a listing does not involve a GUI, and it also offers limited functionality for remote management/control of IoT devices. Also, the information reported by IoT devices is in different formats, which are manufacturer-developed, proprietary, and not necessarily compatible with one another. Furthermore, conventional systems also have cumbersome naming conventions for IoT devices, which make it difficult for a user to identify IoT devices. In contrast, disclosed embodiments facilitate the mapping and controlling of network-enabled IoT devices housed in an environment via a “central” graphical user interface (GUI) that communicates with various types of IoT devices and associates each IoT device with a particular location relative to the environment.
Instead of identifying a door sensor with a merely generic device name, disclosed embodiments can display a visual representation of a door sensor on a map of the environment where the door sensor is located. Thus, if there are four doors in a home and each one has a sensor, a user is able to identify/locate the door sensor of a bedroom as distinct from the door sensor in the guest room. That is, the graphical representations of IoT devices are relative to the environment. Not only does the disclosed technology facilitate a topological display of IoT devices deployed in an environment, but it also identifies “IoT holes” due to the absence of expected IoT devices in the environment and further offers recommendations to fill the gaps arising due to the holes. These, and several other features, functionalities, and advantages of the disclosed technology will be better understood in the discussions that follow. FIG.1Aillustrates an example floor plan100A of a home depicting IoT devices installed in different locations in the home environment. For example, IoT devices in the living/dining area of a home102include a gateway, smart lighting, a thermostat, a television, a remote motion detector, and a remote smoke detector. The bedroom of the home102includes IoT devices104such as door and window sensors, a remote smoke detector, a router, a room thermostat, a television, and smart lighting. The kitchen of the home102includes IoT devices104such as a refrigerator, a washer/dryer, door and window sensors, and a remote smoke detector. FIG.1Billustrates a graphical user interface100B of a user's computing device (e.g., a mobile phone) showing the floor plan ofFIG.1Awith IoT devices and their respective locations with respect to the environment. For example, the interface displays a home environment that includes a living/dining area, a bedroom, a kitchen, and a bathroom. IoT devices are displayed installed in these areas. 
Region110of the interface displays a listing of the various areas of the home and region120of the interface displays the various IoT devices installed in the home. If a user clicks on the bedroom icon in region110, the interface would display all the IoT devices in the bedroom. If a user clicks on the bathroom icon of the interface, the interface would display all the IoT devices in the bathroom. Thus, the interface can filter/display information pertaining to IoT devices by rooms or areas of the home. If a user clicks on the thermostats icon in region120of the interface, the interface would display all rooms or areas of the home where thermostats are installed. Thus, the interface can also filter/display location information for all areas where an IoT device is installed, based on a selection of a type of IoT device by a user. In some embodiments, the disclosed technology can identify a location of an IoT device based on a type of IoT device. Thus, if a refrigerator (i.e., an IoT device) registers itself with the application program rendering the interface, the application program is able to associate the refrigerator with a kitchen location in the environment. Similarly, if a garage door (i.e., an IoT device) registers itself with the application program rendering the interface, the application program is able to associate the garage door with a garage location in the environment. Disclosed embodiments provide the option of filtering IoT devices installed in an environment based on a type of device, a location or an area (e.g., a kitchen or a bedroom), a status type reflecting connectivity of an IoT device, an alert/notification associated with an IoT device, a security protocol of an IoT device, a type of manufacturer of an IoT device, or a type of communication protocol associated with an IoT device. 
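The two filtering directions described above (devices by room, and rooms by device type) can be sketched with a small inventory. This is an illustrative sketch only; the inventory records and helper names are hypothetical, not part of the disclosed interface.

```python
# Hypothetical inventory: each entry pairs a device type with its room.
devices = [
    {"type": "thermostat", "room": "living room"},
    {"type": "smoke detector", "room": "living room"},
    {"type": "door sensor", "room": "bedroom"},
    {"type": "thermostat", "room": "bedroom"},
    {"type": "refrigerator", "room": "kitchen"},
]

def by_room(inventory, room):
    # Clicking a room icon: list every device installed in that room.
    return [d["type"] for d in inventory if d["room"] == room]

def by_type(inventory, device_type):
    # Clicking a device-type icon: list every room where it is installed.
    return [d["room"] for d in inventory if d["type"] == device_type]
```

For example, selecting the bedroom would yield the door sensor and thermostat, while selecting thermostats would yield the living room and bedroom.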
It will be understood that a status type reflecting connectivity of an IoT device can identify whether the IoT device is in an ON state, an OFF state, or a SLEEP (e.g., for power savings) state. An alert/notification associated with an IoT device can come from the device itself or from a peer IoT device in the event that the device is exhibiting anomalous behavior, has a low battery, is consuming high power, is undergoing intermittent failures, is sending malicious data packets, is flooding the communication network with redundant packets, etc. The security compliance protocols of an IoT device are the one or more security protocols with which the IoT device is compliant. Such protocols can be industry standard protocols or proprietary protocols. FIG.2illustrates an embodiment of a system200for mapping and controlling network-enabled IoT devices housed in an environment via a graphical user interface (GUI) of an electronic device. Examples of an environment can include an office, a factory, a school, a residence, a laboratory, or otherwise any physical facility. The system200includes an electronic device202and one or more IoT devices210coupled to communication networks206and208. Examples of an electronic device can include a set top box, a phone, a tablet computer, a router, a gateway, or an IoT controller/base station. In some embodiments, the GUI is associated with an application program (e.g., stored on a non-transitory machine-readable storage medium) configured to run on the electronic device. Thus, in some embodiments, the application program allows the electronic device to communicate with the IoT devices installed in the environment. In some examples, the IoT devices can be of different types and/or can be produced by different manufacturers. In some embodiments, an IoT device210can communicate commands and control signals to electronic device202via more than one communication network, e.g., using multiple wireless communication protocols and techniques. 
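The connectivity states and alert conditions described above can be sketched as follows. The enum values and the `needs_alert` heuristic (including its threshold defaults) are hypothetical illustrations, not the disclosed logic.

```python
from enum import Enum

class DeviceStatus(Enum):
    # The three connectivity states named in the text;
    # SLEEP is a power-savings state.
    ON = "on"
    OFF = "off"
    SLEEP = "sleep"

def needs_alert(battery_level, packets_per_second, flood_threshold=1000):
    # Hypothetical peer-side check: flag a low battery (below 10%)
    # or a device flooding the network with redundant packets.
    return battery_level < 0.1 or packets_per_second > flood_threshold
```

A peer IoT device running a check like this could raise the alert/notification on behalf of a misbehaving neighbor.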
The commands and control signals can be sent using various wireless communication protocols and techniques, such as Bluetooth, cellular, infrared, Wi-Fi, etc. The electronic device202may access a storage medium connected to the electronic device202using a local communication bus, such as parallel ATA (PATA, also called IDE or EIDE), Serial ATA (SATA), SCSI, Serial Attached SCSI (SAS), Fibre Channel, IEEE 1394 (also called FireWire), USB, or Thunderbolt. For example, each of a computer, a remote control device configured to operate a DVR or a set-top box, a PDA, a mobile phone, a consumer wearable device, a head-mounted display device, or generally, any mobile communication device may serve as the electronic device202. In some embodiments, the electronic device202may access a remote server (e.g., a physical server or a server located in the cloud) to download information, operational settings, parameters, network settings, or specific instructions (such as software development kits or SDKs) to communicate with each of the IoT devices210. In certain applications, the communication channel that allows remote control of an IoT device can include a wired communication channel, including, but not limited to, public switched telephone networks (PSTN), local area networks (LAN), and wide area networks (WAN) providing data communication services and/or Internet access. In the system200, also shown is a graphical user interface226that displays settings220for IoT device1, settings222for IoT device2, and settings224for IoT device3. In some embodiments, graphical user interface226also displays a digital map representing the 3D or the 2D position of IoT device1, IoT device2and IoT device3in the environment. In some embodiments, the digital map can be overlaid on a layout of a physical floorplan of the environment. 
FIG.3illustrates an embodiment of an electronic device302in a system300for displaying a digital map that includes location information and operational settings for one or more IoT devices managed by electronic device302. The electronic device302may be an example of the electronic device202described in reference toFIG.2. Examples of an electronic device can include a set top box, a phone, a tablet computer, a router, a gateway, or an IoT controller/base station. The electronic device302includes a communication module312, a floor plan editor module318, control logic316, a storage unit310storing floorplan templates320, and a GUI rendering module322. The communication module312receives IoT device information308(e.g., spatial information and operational parameters/network settings) from one or more IoT devices via the communication network306and/or communication network308. In some applications, the IoT device information308can be a registration signal for an IoT device that is newly-deployed in the environment. The communication module312also includes one or more radios (e.g., cellular, Wi-Fi, infrared, etc.) for communicating with IoT devices using one or more communication protocols or standards associated with the radios. The communication module conveys the IoT device information308to control logic316for subsequent processing. Control logic316generates command and control signals306that are communicated to IoT devices. The command and control signals306can be used to manage and/or modify the operational settings of the IoT device. In some applications, the command and control signals are broadcast signals directed at changing group settings, e.g., a group of IoT devices. In some applications, the command and control signals are directed at a single IoT device, e.g., an update signal sent to an IoT device. 
In some embodiments, control logic316associates each IoT device in the environment with a respective position in the environment based on the spatial information received in the IoT device information308. Control logic316conveys the outcome of processing IoT device information308to GUI rendering module322. GUI rendering module322creates and displays a GUI for display on a screen coupled to the electronic device302. The GUI can include a digital map showing the physical locations of the IoT devices, based on the received IoT device information308. The GUI can also have an option to provide information (in real time, near real time, or otherwise) pertaining to the settings and operational parameters of the IoT devices that are displayed on the digital map. In some applications, the GUI provides an option to change the settings and operational parameters of the IoT devices. For example, the digital map can display the 3D or the 2D position (e.g., based on IoT device information308) of the IoT devices in the environment. In some embodiments, the GUI rendering module322is based on the iCMS control and monitor application. In some embodiments, the digital map indicates a name of a room (e.g., a living room) that houses the IoT device in the environment. In some implementations, when a user selects an IoT device displayed on the GUI, the name and/or other information about the selected IoT device may be displayed as a label on the digital map. In some embodiments, IoT device information and/or operational settings of an IoT device is displayed as metadata on the label. In some embodiments, the digital map can be overlaid on a layout of a physical floorplan of the environment. In some embodiments, the physical floorplan can be selected from a set of floorplan templates stored in one or more storage units310(e.g., memory) coupled to the electronic device. One or more storage units310may be included internal and/or external to the electronic device302. 
The storage unit310may store floorplan templates (e.g., one or more templates for a living room, one or more templates for a bedroom, one or more templates for a child's nursery room, one or more templates for laboratories or research facilities, and the like), which can be selected based on a simple drag-and-drop mechanism. In some embodiments, the electronic device302may generate floorplan templates320or can receive/retrieve them periodically or intermittently from a remote server. In some embodiments, a floorplan editor module318included in electronic device302allows a floorplan template to be modified by adding additional elements or features to the floorplan or the digital map. For example, the floorplan editor module318allows adding new indicia representing physical objects in the environment. In some embodiments, the indicia can be customized, if a user so desires. For example, a floorplan template can be modified in a manner such that a physical floorplan can include one or more indicia representing one or more physical objects (e.g., a wall, furniture, a plant, a painting, a lamp, a staircase, a bathroom fixture, or a plumbing fixture) in the environment. These indicia can be downloaded or received from a server and saved in one or more storage units310for inclusion in the floorplan templates. Furthermore, the electronic device302and system300may include other components, modules, elements, or devices not illustrated for the sake of brevity. FIG.4is a flow diagram representing a process that includes method steps for creating a digital map that indicates IoT devices deployed in an environment, thereby allowing remote management/control of the IoT devices deployed in the environment. The steps, for example, can be implemented by an application program configured to run on an electronic device. Examples of an electronic device can include a set top box, a phone, a tablet computer, a router, a gateway, or an IoT controller/base station. 
Starting at step402, the process receives information that includes spatial locations of one or more IoT devices deployed in an environment (e.g., a school, an office, a building, or otherwise any facility indoors or outdoors). For example, the received information can include metadata such as a device name, a MAC address, an operational status of the device, an IP address of the device, and/or other configurational parameters of an IoT device. In some embodiments, the process can receive information about a first IoT device installed in an environment from a second IoT device installed in the environment. Based on the information received from the second IoT device, the process can predict what the first IoT device is, an accurate or approximate spatial location where the first IoT device is installed, and other parameters/information relating to the first IoT device. Thus, embodiments of the present disclosure allow for both self-discovery and peer-based discovery of IoT devices installed in an environment. At step404, the process associates each IoT device with a respective position in the environment based on the spatial information. The position can be based on a global co-ordinate system (e.g., based on latitude/longitude information). In some embodiments, the position can be with respect to a landmark, a room or a floor, a distance to a wall, or generally based on a reference point within the environment. The spatial location of an IoT device can be determined by a location-based sensor (e.g., an on-board GPS receiver) within the IoT device and then sent by the IoT device to the electronic device. In some embodiments, the spatial information can be determined based on an electronic signal (e.g., triangulation mechanism using Wi-Fi, Bluetooth, or cellular signals) sent by the IoT device. In some applications, a second IoT device in the environment can relay or convey the spatial information of a first IoT device to the electronic device. 
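Step404's association of each device with a position relative to a reference point can be sketched as below. This is a simplified illustration under stated assumptions: the reference coordinate is made up, and the meters-per-degree conversion is a small-area approximation rather than a proper map projection.

```python
import math

# Hypothetical reference point within the environment (lat, lon).
REFERENCE = (37.7749, -122.4194)

def to_local_position(lat, lon, reference=REFERENCE):
    # Convert a global coordinate to rough meters east/north of the
    # reference point (small-area approximation, not a full projection).
    meters_per_deg = 111_320.0
    east = (lon - reference[1]) * meters_per_deg * math.cos(math.radians(reference[0]))
    north = (lat - reference[0]) * meters_per_deg
    return round(east, 1), round(north, 1)

def associate(device_coords):
    # Step 404 sketch: map each device name to its local position,
    # whether the coordinate came from on-board GPS or triangulation.
    return {name: to_local_position(lat, lon)
            for name, (lat, lon) in device_coords.items()}

positions = associate({"camera": (37.77495, -122.41945)})
```

The resulting east/north offsets are what a digital map could plot against a floorplan anchored at the same reference point.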
Such a situation can arise, for example, when the first IoT device is located in a “dead zone” characterized by weak radio signals or when the first IoT device lacks the capability to determine its own spatial information. In some embodiments, the process uses the spatial information to provide a corrective recommendation. Based on the spatial information of a first IoT device and a second IoT device, the process can determine the distance between the two IoT devices. If the distance between the two IoT devices is more (or less) than a threshold, then the process can provide a corrective recommendation to bring the first IoT device and the second IoT device closer to one another. For example, if the process determines that an IoT camera is too far from an IoT router, then the process can indicate, via an audio-visual alert on the map, to bring the IoT camera and the IoT router closer to one another. At step406, the process generates a graphical user interface (GUI) displaying a digital map that includes a respective position of each IoT device in the environment. Such a digital map can be rendered on a screen or a monitor coupled to the electronic device. For example, the GUI can display a digital map automatically populated with a graphical representation of the one or more IoT devices that are deployed, such that the graphical representation corresponds to the spatial information of the one or more IoT devices. At step408, the process overlays the digital map on a physical floorplan of the environment. In some applications, a physical floorplan can include one or more indicia representing one or more physical objects (e.g., walls, furniture, plants, paintings, lamps, staircase, bathroom fixtures, plumbing fixtures, etc.) included in the environment. In some embodiments, the disclosed technology can identify missing IoT devices in the environment based on spotting IoT “holes” in a digital map. 
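The distance-threshold check behind the corrective recommendation described above can be sketched as follows. The function names, the message wording, and the 25-meter threshold are hypothetical; only the idea of comparing inter-device distance to a threshold comes from the text.

```python
import math

def distance(pos_a, pos_b):
    # Euclidean distance between two device positions in the same
    # local coordinate frame (e.g., meters from a reference point).
    return math.hypot(pos_a[0] - pos_b[0], pos_a[1] - pos_b[1])

def corrective_recommendation(name_a, pos_a, name_b, pos_b, max_meters):
    # If two cooperating devices (e.g., a camera and its router) are
    # farther apart than the threshold, recommend moving them closer.
    if distance(pos_a, pos_b) > max_meters:
        return f"Move {name_a} and {name_b} closer together"
    return None

msg = corrective_recommendation(
    "IoT camera", (0, 0), "IoT router", (30, 40), max_meters=25)
```

Here the camera sits 50 meters from the router, exceeding the 25-meter threshold, so a recommendation is produced; a GUI could surface it as the audio-visual alert on the map.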
Thus, if there is a remote camera (i.e., an IoT device) on one side of a house and no cameras on the other side(s) of the house, the program that displays the GUI can detect an absence of a camera on the other side(s) of the house, based on identifying a “hole” on a digital map, and notify a user that a hole is identified. Thus, for certain locations in an environment, the disclosed technology can detect the absence of expected IoT devices that are typically meant to be positioned in those locations. Based on detecting a type of a location (e.g., a living room, a bedroom, a kitchen, a garage, a basement, a classroom, an office, or any other location), the system can recommend one or more IoT devices that can be deployed or installed in a particular type of location. In some embodiments, the program can recommend one or more IoT devices that can be installed in such locations to fill up the voids created by the “IoT holes.” For example, the system can recommend a list of standard IoT devices depending on various locations in an environment. Thus, there can be a first list of IoT devices for the interior(s) of an environment and a second list of IoT devices for the exterior(s) of an environment. In some embodiments, one or more IoT devices can be commonly included in multiple lists for different locations in an environment. In some implementations, the disclosed technology offers recommendations for IoT devices based on IoT devices that already exist at a particular location or in nearby locations. Thus, for example, based on the IoT devices detected in a bedroom, the system can recommend IoT devices that can be installed or deployed in the kitchen and/or the living room. In some implementations, the recommendations for the IoT devices solely depend on a particular type of location in an environment. 
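The "IoT hole" detection described above amounts to comparing installed devices against an expected list for the location type. The sketch below assumes a hypothetical per-location expectation table; the actual lists of standard devices per location would be system-defined.

```python
# Hypothetical per-location-type lists of expected IoT devices.
EXPECTED = {
    "kitchen": {"smoke detector", "refrigerator"},
    "garage": {"camera", "garage door"},
}

def find_holes(location_type, installed):
    # An "IoT hole": expected devices for this location type that are
    # absent from the digital map. The missing devices double as the
    # recommendation list for filling the hole.
    return sorted(EXPECTED.get(location_type, set()) - set(installed))

recommended = find_holes("garage", ["garage door"])
```

For the garage above, only the door is installed, so the hole detector recommends adding a camera; a fully equipped kitchen yields no recommendations.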
Some of the embodiments described herein are described in the general context of methods or processes, which may be implemented in one embodiment by a computer program product, embodied in a computer-readable medium, including computer-executable instructions, such as program code, executed by computers in networked environments. A computer-readable medium may include removable and non-removable storage devices including, but not limited to, Read Only Memory (ROM), Random Access Memory (RAM), compact discs (CDs), digital versatile discs (DVD), etc. Therefore, the computer-readable media can include a non-transitory storage media. Generally, program modules may include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Computer- or processor-executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps or processes. Some of the disclosed embodiments can be implemented as devices or modules using hardware circuits, software, or combinations thereof. For example, a hardware circuit implementation can include discrete analog and/or digital components that are, for example, integrated as part of a printed circuit board. Alternatively, or additionally, the disclosed components or modules can be implemented as an Application Specific Integrated Circuit (ASIC) and/or as a Field Programmable Gate Array (FPGA) device. Some implementations may additionally or alternatively include a digital signal processor (DSP) that is a specialized microprocessor with an architecture optimized for the operational needs of digital signal processing associated with the disclosed functionalities of this application. 
Similarly, the various components or sub-components within each module may be implemented in software, hardware or firmware. The connectivity between the modules and/or components within the modules may be provided using any one of the connectivity methods and media that is known in the art, including, but not limited to, communications over the Internet, wired, or wireless networks using the appropriate protocols. The foregoing description of embodiments has been presented for purposes of illustration and description. The foregoing description is not intended to be exhaustive or to limit embodiments of the present invention to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of various embodiments. The embodiments discussed herein were chosen and described in order to explain the principles and the nature of various embodiments and its practical application to enable one skilled in the art to utilize the present invention in various embodiments and with various modifications as are suited to the particular use contemplated. The features of the embodiments described herein may be combined in all possible combinations of methods, apparatus, modules, systems, and computer program products.
The drawings and the description provided herein illustrate and describe various examples of the inventive methods, devices, and systems of the present disclosure. However, the methods, devices, and systems of the present disclosure are not limited to the specific examples as illustrated and described herein, and other examples and variations of the methods, devices, and systems of the present disclosure, as would be understood by one of ordinary skill in the art, are contemplated as being within the scope of the present application. DETAILED DESCRIPTION In general, the disclosure is directed to devices, systems, and methods for generating and displaying graphical user interfaces depicting statuses and/or metrics associated with the communications between various resources of virtualization infrastructure. In some examples, the resources depicted in the graphical user interfaces include the physical devices, such as servers (also referred to herein as “nodes,” “compute nodes,” and “hosts”), that communicate via an underlay computer network that transports communications among the resources. In some examples, one or more of the resources depicted in the graphical user interfaces include virtual resources, such as one or more virtual servers and/or one or more virtual routers. As described in this disclosure, a policy agent (“agent”) associated with a server in the computer network may generate and transmit communication probes to one or more other agents located in other servers of the computer network. The agent issuing the communication probes, referred to as the source agent, issues each communication probe to a specific and different agent, referred to as the destination agent, and monitors the computer network for a reply to the issued probe returned by the destination agent. One or more metrics may be measured and/or determined based on received responses, and/or a lack of a response, resulting from the issuance of each of the communication probes. 
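The source-agent role described above (issue a probe to each destination agent, then record either a round-trip measurement or a missing reply) can be sketched as follows. This is a hedged illustration: the class and callback names are hypothetical, and the simulated network stands in for real probe transport.

```python
import time

class SourceAgent:
    # Sketch of the source-agent role: issue one probe per destination
    # agent and record a round-trip time, or note the lack of a reply.
    def __init__(self, send_probe):
        # send_probe(dest) returns True if a reply arrived, False on loss.
        # In a real deployment this would transmit over the underlay network.
        self.send_probe = send_probe

    def measure(self, destinations):
        metrics = {}
        for dest in destinations:
            start = time.monotonic()
            replied = self.send_probe(dest)
            rtt = time.monotonic() - start
            # A missing reply is itself a metric (probe loss).
            metrics[dest] = {"replied": replied,
                             "rtt_s": rtt if replied else None}
        return metrics

# Simulated network: serverB replies to probes, serverC does not.
agent = SourceAgent(lambda dest: dest != "serverC")
report = agent.measure(["serverB", "serverC"])
```

The per-destination report is the kind of data a source agent could forward to the policy controller for display in the dashboard's user interfaces.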
The measured and/or determined metrics may be communicated from a source agent to a policy controller, or simply “controller,” that uses the data and other information reported to the controller from the source agents to generate, for example through a dashboard, various versions of a user interface that depict statuses and/or values for the metrics that may be displayed on a display device, such as a display screen of a user interface device such as a computer. FIG.1is a block diagram illustrating an example computer network system8in which examples of the techniques described herein may be implemented. Computer network system8in the example ofFIG.1includes data centers10A-10X (collectively, “data centers10”) interconnected with one another and with customer networks associated with customers11via a service provider network7.FIG.1illustrates one example implementation of computer network system8and a data center10A that hosts one or more cloud-based computing networks, computing domains or projects, generally referred to herein as cloud computing cluster. The cloud-based computing clusters may be co-located in a common overall computing environment, such as a single data center, or distributed across environments, such as across different data centers. Cloud-based computing clusters may, for example, be different cloud environments, such as various combinations of OpenStack cloud environments, Kubernetes cloud environments or other computing clusters, domains, networks and the like. Other implementations of computer network system8and data center10A may be appropriate in other instances. Such implementations may include a subset of the components included in the example ofFIG.1and/or may include additional components not shown inFIG.1. Data centers10B-10X may include the same or similar features and be configured to perform the same or similar functions as described herein with respect to data center10A. 
In the example shown inFIG.1, data center10A provides an operating environment for applications and services for customers11coupled to data center10A by service provider network7through gateway108. Although functions and operations described in connection with computer network system8ofFIG.1may be illustrated as being distributed across multiple devices inFIG.1, in other examples, the features and techniques attributed to one or more devices inFIG.1may be performed internally, by local components of one or more of such devices. Similarly, one or more of such devices may include certain components and perform various techniques that may otherwise be attributed in the description herein to one or more other devices. Further, certain operations, techniques, features, and/or functions may be described in connection withFIG.1or otherwise as performed by specific components, devices, and/or modules. In other examples, such operations, techniques, features, and/or functions may be performed by other components, devices, or modules. Accordingly, some operations, techniques, features, and/or functions attributed to one or more components, devices, or modules may be attributed to other components, devices, and/or modules, even if not specifically described herein in such a manner. Data center10A hosts infrastructure equipment, such as networking and storage systems, redundant power supplies, and environmental controls. Service provider network7may be coupled to one or more networks administered by other providers, and may thus form part of a large-scale public network infrastructure, e.g., the Internet. In some examples, data center10A may represent one of many geographically distributed network data centers. As illustrated in the example ofFIG.1, data center10A is a facility that provides network services for customers11. Customers11may be collective entities such as enterprises and governments or individuals. 
For example, a network data center may host web services for several enterprises and end users. Other exemplary services may include data storage, virtual private networks, traffic engineering, file service, data mining, scientific, or super-computing, and so on. In some examples, data center10A is an individual network server, a network peer, or otherwise. In the example ofFIG.1, data center10A includes a set of storage systems and application servers, including server12A through server12X (collectively “servers12”) interconnected via high-speed switch fabric20provided by one or more tiers of physical network switches and routers. Servers12function as physical compute nodes of the data center. For example, each of servers12may provide an operating environment for execution of one or more customer-specific virtual machines36(“VMs” inFIG.1) or other virtualized instances, such as containers. Each of servers12may be alternatively referred to as a host computing device or, more simply, as a host. A server12may execute one or more virtualized instances, such as virtual machines, containers, or other virtual execution environment for running one or more services, such as virtualized network functions (VNFs). Switch fabric20may include top-of-rack (TOR) switches16A-16N coupled to a distribution layer of chassis switches18A-18M, and data center10A may include one or more non-edge switches, routers, hubs, gateways, security devices such as firewalls, intrusion detection, and/or intrusion prevention devices, servers, computer terminals, laptops, printers, databases, wireless mobile devices such as cellular phones or personal digital assistants, wireless access points, bridges, cable modems, application accelerators, or other network devices. Data center10A includes servers12A-12X interconnected via the high-speed switch fabric20provided by one or more tiers of physical network switches and routers. 
Switch fabric20is provided by the set of interconnected top-of-rack (TOR) switches16A-16N (collectively, “TOR switches16”) coupled to the distribution layer of chassis switches18A-18M (collectively, “chassis switches18”). Although not shown, data center10A may also include, for example, one or more non-edge switches, routers, hubs, gateways, security devices such as firewalls, intrusion detection, and/or intrusion prevention devices, servers, computer terminals, laptops, printers, databases, wireless mobile devices such as cellular phones or personal digital assistants, wireless access points, bridges, cable modems, application accelerators, or other network devices. In this example, TOR switches16and chassis switches18provide servers12with redundant (multi-homed) connectivity to gateway108and service provider network7. Chassis switches18aggregate traffic flows and provide high-speed connectivity between TOR switches16. TOR switches16may be network devices that provide layer 2 (MAC) and/or layer 3 (e.g., IP) routing and/or switching functionality. TOR switches16and chassis switches18may each include one or more processors and a memory and are capable of executing one or more software processes. Chassis switches18are coupled to gateway108, which may perform layer 3 routing to route network traffic between data center10A and customers11by service provider network7. Switch fabric20may perform layer 3 routing to route network traffic between data center10A and customers11by service provider network7. Gateway108acts to forward and receive packets between switch fabric20and service provider network7. Data center10A includes an overlay network that extends switch fabric20from physical switches18,16to software or “virtual” switches. For example, virtual routers30A-30X located in servers12A-12X, respectively, may extend the switch fabric20by communicatively coupling with one or more of the physical switches located within the switch fabric20. 
Virtual switches may dynamically create and manage one or more virtual networks usable for communication between application instances. In one example, virtual routers30A-30X execute the virtual network as an overlay network, which provides the capability to decouple an application's virtual address from a physical address (e.g., IP address) of the one of servers12A-12X on which the application is executing. Each virtual network may use its own addressing and security scheme and may be viewed as orthogonal from the physical network and its addressing scheme. Various techniques may be used to transport packets within and across virtual network(s) over the physical network. Software-Defined Networking (“SDN”) controller132provides a logically and in some cases physically centralized controller for facilitating operation of one or more virtual networks within data center10A in accordance with one or more examples of this disclosure. The terms SDN controller and Virtual Network Controller (“VNC”) may be used interchangeably throughout this disclosure. In some examples, SDN controller132operates in response to configuration input received from orchestration engine130via northbound API131, which in turn operates in response to configuration input received from an administrator24operating user interface device129. Additional information regarding SDN controller132operating in conjunction with other devices of data center10A or other software-defined networks is found in International Application Number PCT/US2013/044378, filed Jun. 5, 2013, and entitled PHYSICAL PATH DETERMINATION FOR VIRTUAL NETWORK PACKET FLOWS, which is incorporated by reference as if fully set forth herein. User interface device129may be implemented as any suitable computing system, such as a mobile or non-mobile computing device operated by a user and/or by administrator24. 
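For purposes of illustration only, the decoupling of a virtual address from the physical address of the hosting server, with each virtual network keeping its own orthogonal addressing scheme, can be sketched as follows. The class name, virtual network identifiers, and addresses below are assumptions for the sketch and are not taken from the disclosure:

```python
class VirtualRouter:
    """Minimal sketch of per-virtual-network address resolution.

    Each virtual network identifier (VNI) maps to its own routing
    instance, so two tenants may reuse the same virtual address without
    conflict -- the addressing schemes are orthogonal."""

    def __init__(self):
        # vni -> {virtual IP -> physical IP of the hosting server}
        self.routing_instances = {}

    def advertise(self, vni, virtual_ip, physical_ip):
        self.routing_instances.setdefault(vni, {})[virtual_ip] = physical_ip

    def resolve(self, vni, virtual_ip):
        # Resolution happens per routing instance, never globally.
        return self.routing_instances.get(vni, {}).get(virtual_ip)

vr = VirtualRouter()
vr.advertise(vni=100, virtual_ip="10.0.0.5", physical_ip="192.168.1.12")
vr.advertise(vni=200, virtual_ip="10.0.0.5", physical_ip="192.168.1.47")

# The same virtual address resolves differently under different VNIs.
assert vr.resolve(100, "10.0.0.5") == "192.168.1.12"
assert vr.resolve(200, "10.0.0.5") == "192.168.1.47"
```

The sketch shows only the lookup side of the decoupling; an actual virtual router would also encapsulate packets with the tunnel headers described below.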
User interface device129may, for example, represent a workstation, a laptop or notebook computer, a desktop computer, a tablet computer, or any other computing device that may be operated by a user and/or present a user interface in accordance with one or more aspects of the present disclosure. In some examples, orchestration engine130manages functions of data center10A such as compute, storage, networking, and application resources. For example, orchestration engine130may create a virtual network for a tenant within data center10A or across data centers. Orchestration engine130may attach virtual machines (VMs) to a tenant's virtual network. Orchestration engine130may connect a tenant's virtual network to an external network, e.g. the Internet or a VPN. Orchestration engine130may implement a security policy across a group of VMs or to the boundary of a tenant's network. Orchestration engine130may deploy a network service (e.g. a load balancer) in a tenant's virtual network. In some examples, SDN controller132manages the network and networking services such as load balancing and security, and allocates resources from servers12to various applications via southbound API133. That is, southbound API133represents a set of communication protocols utilized by SDN controller132to make the actual state of the network equal to the desired state as specified by orchestration engine130. For example, SDN controller132implements high-level requests from orchestration engine130by configuring physical switches, e.g. TOR switches16, chassis switches18, and switch fabric20; physical routers; physical service nodes such as firewalls and load balancers; and virtual services such as virtual firewalls in a VM. SDN controller132maintains routing, networking, and configuration information within a state database. 
Typically, the traffic between any two network devices, such as between network devices (not shown) within switch fabric20or between servers12and customers11or between servers12, for example, can traverse the physical network using many different paths. For example, there may be several different paths of equal cost between two network devices. In some cases, packets belonging to network traffic from one network device to the other may be distributed among the various possible paths using a routing strategy called multi-path routing at each network switch node. For example, the Internet Engineering Task Force (IETF) RFC 2992, “Analysis of an Equal-Cost Multi-Path Algorithm,” describes a routing technique for routing packets along multiple paths of equal cost. The techniques of RFC 2992 analyze one particular multipath routing strategy involving the assignment of flows to bins by hashing packet header fields that sends all packets from a particular network flow over a single deterministic path. For example, a “flow” can be defined by the five values used in a header of a packet, or “five-tuple,” i.e., the protocol, Source IP address, Destination IP address, Source port, and Destination port that are used to route packets through the physical network. For example, the protocol specifies the communications protocol, such as TCP or UDP, and Source port and Destination port refer to source and destination ports of the connection. A set of one or more packet data units (PDUs) that match a particular flow entry represent a flow. Flows may be broadly classified using any parameter of a PDU, such as source and destination data link (e.g., MAC) and network (e.g., IP) addresses, a Virtual Local Area Network (VLAN) tag, transport layer information, a Multiprotocol Label Switching (MPLS) or Generalized MPLS (GMPLS) label, and an ingress port of a network device receiving the flow. 
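For purposes of illustration only, the hash-to-bin assignment of flows described in RFC 2992, in which every packet of a given five-tuple flow follows a single deterministic path, can be sketched as follows. The particular hash function and path names are assumptions for the sketch, not part of the described system:

```python
import hashlib

def pick_path(five_tuple, paths):
    """Assign a flow to one of several equal-cost paths by hashing its
    five-tuple (protocol, source IP, destination IP, source port,
    destination port), so all packets of the flow take the same path."""
    key = "|".join(str(field) for field in five_tuple).encode()
    digest = hashlib.sha256(key).digest()
    # Map the hash into one of the available "bins" (paths).
    bin_index = int.from_bytes(digest[:4], "big") % len(paths)
    return paths[bin_index]

paths = ["path-A", "path-B", "path-C", "path-D"]
flow = ("TCP", "10.1.1.1", "10.2.2.2", 49152, 443)

# Deterministic: the same flow always maps to the same path.
assert pick_path(flow, paths) == pick_path(flow, paths)
assert pick_path(flow, paths) in paths
```

Distinct flows spread across paths statistically, while per-flow packet ordering is preserved because no single flow is split across paths.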
For example, a flow may be all PDUs transmitted in a Transmission Control Protocol (TCP) connection, all PDUs sourced by a particular MAC address or IP address, all PDUs having the same VLAN tag, or all PDUs received at the same switch port. Virtual routers (virtual router30A to virtual router30X, collectively “virtual routers30” inFIG.1) execute multiple routing instances for corresponding virtual networks within data center10A and route the packets to appropriate virtual machines36executing within the operating environment provided by servers12. Each of servers12may include a virtual router. Packets received by virtual router30A of server12A, for instance, from the underlying physical network fabric may include an outer header to allow the physical network fabric to tunnel the payload or “inner packet” to a physical network address for a network interface of server12A. The outer header may include not only the physical network address of the network interface of the server but also a virtual network identifier such as a VxLAN tag or Multiprotocol Label Switching (MPLS) label that identifies one of the virtual networks as well as the corresponding routing instance executed by the virtual router. An inner packet includes an inner header having a destination network address that conforms to the virtual network addressing space for the virtual network identified by the virtual network identifier. In some aspects, the virtual router buffers and aggregates multiple tunneled packets received from the underlying physical network fabric prior to delivery to the appropriate routing instance for the packets. That is, a virtual router executing on one of servers12may receive inbound tunnel packets of a packet flow from one or more TOR switches16within switch fabric20and, prior to routing the tunnel packets to a locally executing virtual machine, process the tunnel packets to construct a single, aggregate tunnel packet for forwarding to the virtual machine. 
That is, the virtual router may buffer multiple inbound tunnel packets and construct the single, aggregate tunnel packet in which the payloads of the multiple tunnel packets are combined into a single payload and the outer/overlay headers on the tunnel packets are removed and replaced with a single header having the virtual network identifier. In this way, the aggregate tunnel packet can be forwarded by the virtual router to the virtual machine as if a single inbound tunnel packet was received from the virtual network. Moreover, to perform the aggregation operation, the virtual router may leverage a kernel-based offload engine that seamlessly and automatically directs the aggregation of tunnel packets. Further example techniques by which the virtual routers forward traffic to the customer-specific virtual machines executing on servers12are described in U.S. patent application Ser. No. 14/228,844, entitled “PACKET SEGMENTATION OFFLOAD FOR VIRTUAL NETWORKS,” incorporated herein by reference. In some example implementations, virtual routers30executing on servers12steer received inbound tunnel packets among multiple processor cores to facilitate packet processing load balancing among the cores when processing the packets for routing to one or more virtual and/or physical machines. As one example, server12A includes multiple network interface cards and multiple processor cores to execute virtual router30A, and steers received packets among multiple processor cores to facilitate packet processing load balancing among the cores. For instance, a particular network interface card of server12A may be associated with a designated processor core to which the network interface card directs all received packets. 
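For purposes of illustration only, the aggregation of buffered tunnel packets into a single aggregate tunnel packet can be sketched as follows. The dictionary-based packet representation is an assumption for the sketch; a real implementation operates on raw packet buffers in the kernel offload engine:

```python
def aggregate_tunnel_packets(tunnel_packets):
    """Combine several buffered inbound tunnel packets into one aggregate
    packet: the payloads are concatenated into a single payload, and the
    per-packet outer/overlay headers are removed and replaced with a
    single header carrying the virtual network identifier (VNI)."""
    vnis = {p["outer"]["vni"] for p in tunnel_packets}
    if len(vnis) != 1:
        # Only packets belonging to the same virtual network are merged.
        raise ValueError("cannot aggregate packets across virtual networks")
    return {"outer": {"vni": vnis.pop()},
            "payload": b"".join(p["payload"] for p in tunnel_packets)}

packets = [
    {"outer": {"vni": 100, "src": "192.168.1.12"}, "payload": b"abc"},
    {"outer": {"vni": 100, "src": "192.168.1.12"}, "payload": b"def"},
]
agg = aggregate_tunnel_packets(packets)
assert agg == {"outer": {"vni": 100}, "payload": b"abcdef"}
```

The receiving virtual machine then sees one inbound packet, reducing per-packet processing overhead on the delivery path.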
The various processor cores, rather than processing each of the received packets, offload flows to one or more other processor cores, in accordance with a hash function applied to at least one of the inner and outer packet headers, for processing to take advantage of available work cycles of the other processor cores. In the example ofFIG.1, data center10A further includes a policy controller23that provides monitoring, scheduling, and performance management for data center10A. Policy controller23interacts with policy agents35A-35X (collectively “policy agents35” or “agents35”), which are deployed within at least some of the respective physical servers12for monitoring resource usage of the physical compute nodes as well as any virtualized host, such as VM36, executing on the physical host. In this way, policy agents35provide distributed mechanisms for collecting a wide variety of usage metrics as well as for local enforcement of policies installed by policy controller23. In example implementations, policy agents35run on the lowest level “compute nodes” of the infrastructure of data center10A that provide computational resources to execute application workload. A compute node may, for example, be a bare-metal host of server12, a virtual machine36, a container or the like. In addition, each agent within one of servers12A-12X may be communicatively coupled to other policy agents located in the same server or in a different server. The policy agents may be configured to issue a communication “ping,” also referred to as a “communication probe” or simply a “probe,” to one or more different policy agents. The communication link between the policy agents is illustratively shown as line206inFIG.1. The policy agent issuing the probe may direct the probe to another policy agent, for example a policy agent in a different server. 
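For purposes of illustration only, offloading a flow to a core chosen by a hash over the packet headers can be sketched as follows. The header tuples below are assumptions for the sketch, not a wire format from the disclosure:

```python
def steer_to_core(outer_header, inner_header, num_cores):
    """Pick the processor core for a received packet by hashing fields of
    the outer and inner headers: packets of the same flow always land on
    the same core, while distinct flows spread across the cores."""
    key = (outer_header, inner_header)
    # hash() is stable within one process run, which is all that
    # per-packet steering requires.
    return hash(key) % num_cores

outer = ("vni", 100)
inner = ("TCP", "10.0.0.5", "10.0.0.9", 5000, 80)
core = steer_to_core(outer, inner, 8)

# Flow affinity: the same headers always select the same core.
assert core == steer_to_core(outer, inner, 8)
assert 0 <= core < 8
```

Flow-to-core affinity avoids packet reordering within a flow while still letting the designated receive core take advantage of available work cycles on the other cores.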
The probe may be configured to be routed through the same devices within the switch fabric20that would be used for regular communications between the servers, for example the TOR switches16and chassis switches18of the switch fabric. The policy agent issuing the probe or probes may monitor and collect results related to responses, or lack thereof, received in reply to the issued probes. For example, a probe may not return, indicating non-connectivity between the server hosting the policy agent issuing the probe and the server hosting the policy agent that is a destination for the probe. A probe may return after a time, indicating a round-trip time (two-way latency). In some cases, a probe expected to be received by the policy agent that is a destination for the expected probe is not received within an expected time frame, indicating non-connectivity between the server hosting the policy agent issuing the probe and the server hosting the policy agent that is a destination for the probe. In some cases, policy agents may use the probes to determine one-way trip times between servers (latency). The monitored and collected results may include non-connectivity, round-trip times, and latencies among the servers. Communication probes may conform to various protocols, such as Internet Control Message Protocol (ICMP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Hypertext Transfer Protocol (HTTP). For example, an issued probe may be an ICMP Echo Request message (also known as a Ping) or TCP SYN packet, while a returned probe may be an ICMP Echo Reply message or TCP ACK packet. Other example types of probes may include Operations, Administration, and Management (OAM) messages, such as those that conform to Y.1731 or IEEE 802.1ag. 
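For purposes of illustration only, classifying the result of a single issued probe into either a round-trip time or a non-connectivity indication can be sketched as follows. The timeout value and field names are assumptions for the sketch and are not specified in the disclosure:

```python
PROBE_TIMEOUT = 1.0  # seconds; an assumed expected time frame

def classify_probe(sent_at, replied_at):
    """Turn one issued probe into a monitored result: a round-trip time
    (two-way latency) when the reply arrived within the expected time
    frame, otherwise a non-connectivity indication (probe lost, or reply
    received too late)."""
    if replied_at is None or (replied_at - sent_at) > PROBE_TIMEOUT:
        return {"connectivity": False, "rtt": None}
    return {"connectivity": True, "rtt": replied_at - sent_at}

# Reply within the window: connectivity with a measured round-trip time.
assert classify_probe(10.0, 10.5) == {"connectivity": True, "rtt": 0.5}
# No reply at all: non-connectivity between the two agents' hosts.
assert classify_probe(10.0, None) == {"connectivity": False, "rtt": None}
```

An agent would collect these per-probe results over time and forward them, as described next, rather than raw packets.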
The collected results may then be forwarded to policy controller23for further processing, including for example for use in generating graphical user interfaces depicting various metrics and/or determined statuses for communications, related to the current state of communication links that may exist between various servers12within data center10A. Policy controller23obtains the usage metrics and metrics related to communication links between servers12from policy agents35, and constructs a dashboard203(e.g., a set of user interfaces) to provide visibility into operational performance and infrastructure resources of data center10A. Policy controller23may, for example, communicate dashboard203to UI device129for display to administrator24. In addition, policy controller23may apply analytics and machine learning to the collected metrics to provide real-time and historic monitoring, performance visibility and dynamic optimization to improve orchestration, security, accounting and planning within data center10A. As shown in the example ofFIG.1, policy controller23may define and maintain a rule base as a set of policies202. Policy controller23may administer control of each of servers12based on the set of policies202maintained by policy controller23. Policies202may be created or derived in response to input by administrator24and/or in response to operations performed by policy controller23. Policy controller23may, for example, observe operations of data center10A over time and apply machine learning techniques to generate one or more policies202. Policy controller23may periodically, occasionally, or continually refine policies202as further observations about data center10A are made. Policy controller23(e.g., an analytics engine within policy controller23) may determine how policies are deployed, implemented, and/or triggered at one or more of servers12. 
For instance, policy controller23may be configured to push one or more policies202to one or more of the policy agents35executing on servers12. Policy controller23may receive information about internal processor metrics from one or more of policy agents35, and determine if conditions of a rule for the one or more metrics are met. Policy controller23may analyze the internal processor metrics received from policy agents35, and based on the analysis, instruct or cause one or more policy agents35to perform one or more actions to modify the operation of the server associated with a policy agent. For example, policy controller23may push one or more policies including configurations for communication protocols to be used, intervals for issuing the communication probes, and/or metrics to be monitored for use in conjunction with the agents issuing the communication probes between agents. Information collected from the policy agents related to the issuance and monitoring of these communication probes may be used to generate a dashboard including graphical user interfaces that visually depict one or more statuses related to the metrics associated with and/or derived from the collected information and data associated with the issuance of the communication probes. In another example, policy controller23may be configured to gather information about the devices and arrangement of devices included in the switch fabric20, and generate a set of graphical user interfaces that visually depict these devices and the interconnections between these devices based on a set of predefined and/or user selectable rules. In some examples, policy controller23may be configured to determine and/or identify elements in the form of the virtual machines, containers, services, and/or applications executing on each of servers12. 
As used herein, a resource generally refers to a consumable component of the virtualization infrastructure, i.e., a component that is used by the infrastructure, such as CPUs, memory, disk, disk I/O, network I/O, virtual CPUs, and Contrail vrouters. A resource may have one or more characteristics each associated with a metric that is analyzed by the policy agent35(and/or policy controller23) and optionally reported. In general, an infrastructure element, also referred to herein as an element, is a component of the infrastructure that includes or consumes consumable resources in order to operate. Example elements include hosts, physical or virtual network devices, instances (e.g., virtual machines, containers, or other virtual operating environment instances), and services. In some cases, an element may be a resource for another element. Virtual network devices may include, e.g., virtual routers and switches, vRouters, vSwitches, Open Virtual Switches, and Virtual Tunnel Forwarders (VTFs). A metric may be a value that measures the amount of a resource, for a characteristic of the resource, that is consumed by an element. Policy controller23may also analyze internal processor metrics received from policy agents35, and classify one or more virtual machines36based on the extent to which each virtual machine uses shared resources of servers12(e.g., classifications could be CPU-bound, cache-bound, memory-bound). Policy controller23may interact with orchestration engine130to cause orchestration engine130to adjust, based on the classifications of virtual machines36executing on servers12, the deployment of one or more virtual machines36on servers12. Policy controller23may be further configured to report information about whether the conditions of a rule are met to a client interface associated with user interface device129. 
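For purposes of illustration only, the resource/element/metric vocabulary defined above can be sketched as a small data model. The class and field names are assumptions chosen for the sketch:

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    """A consumable component of the infrastructure, e.g. CPU, memory,
    disk, disk I/O, or network I/O."""
    name: str

@dataclass
class Metric:
    """A value measuring the amount of a resource, for one characteristic
    of that resource, consumed by an element."""
    resource: str
    characteristic: str  # e.g. "utilization"
    value: float

@dataclass
class Element:
    """An infrastructure component (host, network device, instance, or
    service) that consumes resources in order to operate."""
    name: str
    consumes: list = field(default_factory=list)  # list of Metric

host = Element("server12A", consumes=[Metric("cpu", "utilization", 42.0)])
assert host.consumes[0].resource == "cpu"
assert host.consumes[0].value == 42.0
```

Under this vocabulary, an element may itself act as a resource for another element, e.g. a host consumed by the instances it runs.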
Alternatively, or in addition, policy controller23may be further configured to report information about whether the conditions of a rule are met to one or more policy agents35and/or orchestration engine130. Policy controller23may be implemented as or within any suitable computing device, or across multiple computing devices. Policy controller23, or components of policy controller23, may be implemented as one or more modules of a computing device. In some examples, policy controller23may include a number of modules executing on a class of compute nodes (e.g., “infrastructure nodes”) included within data center10A. Such nodes may be OpenStack infrastructure service nodes or Kubernetes master nodes, and/or may be implemented as virtual machines. In some examples, policy controller23may have network connectivity to some or all other compute nodes within data center10A, and may also have network connectivity to other infrastructure services that manage data center10A. One or more policies202may include instructions to cause one or more of policy agents35to issue communication probes to one or more different agents35of the computer network system8, and to monitor for responses received in reply to issuing the communication probes. One or more policies202may include instructions to cause one or more policy agents35to monitor one or more metrics associated with servers12. One or more policies202may include instructions to cause one or more policy agents35to analyze one or more metrics associated with servers12to determine if connectivity exists between the source and destination agents associated with a communication probe, and for example to measure latency associated with the time for transmission and reception of the issued communication probes. 
One or more policies202may alternatively, or in addition, include instructions to cause policy agents35to report one or more metrics to policy controller23, including whether those metrics satisfy the conditions of a rule associated with one or more policies202. The reported information may include raw data, summary data, and sampling data as specified or required by one or more policies202. Dashboard203may represent a collection of graphical user interfaces presenting information about metrics, alarms, notifications, reports, connectivity and latency related to communication connections between network devices, network topologies, and other information about data center10A. Dashboard203may include one or more graphical user interfaces that are presented by user interface device129. User interface device129may detect interactions with dashboard203as user input (e.g., from administrator24). Dashboard203may, in response to user inputs, cause configurations to be made to aspects of data center10A or projects executing on one or more virtual machines36of data center10A relating to network resources, data transfer limitations or costs, storage limitations or costs, and/or accounting reports. Various aspects of the graphical user interfaces related to the issuance of communication probes may be varied and/or changed in response to user inputs received at dashboard203. Dashboard203may include a graphical view that provides a quick, visual overview of resource utilization by instance using histograms. The bins of such histograms may represent the number of instances that used a given percentage of a resource, such as CPU utilization. By presenting data using histograms, dashboard203presents information in a way that allows administrator24, if dashboard203is presented at user interface device129, to quickly identify patterns that indicate under-provisioned or over-provisioned instances. 
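For purposes of illustration only, binning per-instance resource utilization into a histogram as described above can be sketched as follows; the bin width is an assumption for the sketch:

```python
def utilization_histogram(cpu_percentages, bin_width=10):
    """Bin per-instance CPU utilization percentages so that each bin
    counts the number of instances that used a given percentage range
    of the resource."""
    bins = [0] * (100 // bin_width)
    for pct in cpu_percentages:
        # Clamp so that exactly 100% falls into the last bin.
        index = min(int(pct) // bin_width, len(bins) - 1)
        bins[index] += 1
    return bins

# A skewed shape is easy to spot at a glance: most instances nearly idle
# (possibly over-provisioned), a few saturated (possibly under-provisioned).
hist = utilization_histogram([3, 5, 7, 8, 95, 97])
assert hist[0] == 4 and hist[9] == 2
```

Rendering these counts as bars gives the quick visual overview the dashboard is described as providing.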
In some examples, dashboard203may highlight resource utilization by instances on a particular project or host, or total resource utilization across all hosts or projects, so that administrator24may understand the resource utilization in context of the entire infrastructure. Dashboard203may include information relating to costs for use of compute, network, and/or storage resources, as well as costs incurred by a project. Dashboard203may also present information about the health and risk for one or more virtual machines36or other resources within data center10A. In some examples, “health” may correspond to an indicator that reflects a current state of one or more virtual machines36. For example, an example virtual machine that exhibits a health problem may be currently operating outside of a user-specified performance policy. “Risk” may correspond to an indicator that reflects a predicted future state of one or more virtual machines36, such that an example virtual machine that exhibits a risk problem may be unhealthy in the future. Health and risk indicators may be determined based on monitored metrics and/or alarms corresponding to those metrics. For example, if policy agent35is not receiving heartbeats from a host, then policy agent35may characterize that host and all of its instances as unhealthy. Policy controller23may update dashboard203to reflect the health of the relevant hosts, and may indicate that the reason for the unhealthy state is one or more “missed heartbeats.” One or more policy agents35may execute on one or more of servers12to monitor some or all of the performance metrics associated with servers12and/or virtual machines36executing on servers12. Policy agents35may analyze monitored information and/or metrics and generate operational information and/or intelligence associated with an operational state of servers12and/or one or more virtual machines36executing on such servers12. 
Policy agents35may interact with a kernel operating on one or more servers12to determine, extract, or receive internal processor metrics associated with use of shared resources by one or more processes and/or virtual machines36executing at servers12. Policy agents35may perform monitoring and analysis locally at each of servers12. In some examples, policy agents35may perform monitoring and/or analysis in a near and/or seemingly real-time manner. In the example ofFIG.1, and in accordance with one or more aspects of the present disclosure, policy agents35may monitor servers12. For example, policy agent35A of server12A may interact with components, modules, or other elements of server12A and/or one or more virtual machines36executing on servers12. Policy agent35A may, as a result of such interactions, collect information about one or more metrics associated with servers12and/or virtual machines36. Such metrics may be raw metrics, which may be based directly or read directly from servers12, virtual machines36, and/or other components of data center10A. In other examples, one or more of such metrics may be calculated metrics, which include those derived from raw metrics. In some examples, metrics may correspond to a percentage of total capacity relating to a particular resource, such as a percentage of CPU utilization, or CPU consumption, or Level 3 cache usage. However, metrics may correspond to other types of measures, such as how frequently one or more virtual machines36read and write to memory. Policy controller23may configure policy agents35to monitor for conditions that trigger an alarm. For example, policy controller23may detect input from user interface device129that policy controller23determines corresponds to user input. Policy controller23may further determine that the user input corresponds to information sufficient to configure a user-specified alarm that is based on values for one or more metrics. 
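For purposes of illustration only, deriving a calculated metric (a percentage of total capacity) from raw counters read directly from a host can be sketched as follows; the counter names are assumptions for the sketch:

```python
def cpu_utilization_pct(busy_ticks, total_ticks):
    """Derive a calculated metric -- CPU consumption as a percentage of
    total capacity -- from raw tick counters of the kind an agent could
    read directly from the kernel."""
    if total_ticks == 0:
        return 0.0  # no capacity observed yet; avoid division by zero
    return 100.0 * busy_ticks / total_ticks

# Raw counters in, calculated percentage metric out.
assert cpu_utilization_pct(25, 100) == 25.0
assert cpu_utilization_pct(0, 0) == 0.0
```

The same raw-to-calculated pattern applies to other shared-resource metrics, such as cache usage or memory bandwidth expressed against total capacity.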
Policy controller23may process the input and generate one or more policies202that implement the alarm settings. In some examples, such policies202may be configured so that the alarm is triggered when values of one or more metrics collected by policy agents35at servers12exceed a certain threshold. Policy controller23may communicate information about the generated policies202to one or more policy agents35executing on servers12. Policy agents35may monitor servers12for conditions on which the alarm is based, as specified by the policies202received from policy controller23. For instance, policy agent35A may monitor one or more metrics at server12A. Such metrics may involve server12A, all virtual machines36executing on server12A, and/or specific instances of virtual machines36. Policy agent35A may determine, based on the monitored metrics, that one or more values exceed a threshold set by one or more policies202received from policy controller23. For instance, policy agent35A may determine whether CPU usage exceeds a threshold set by a policy (e.g., server12A CPU usage>50%). In other examples, policy agent35A may evaluate whether one or more metrics is less than a threshold value (e.g., if server12A available disk space <20%, then raise an alert), or is equal to a threshold value (e.g., if the number of instances of virtual machines36equals 20, then raise an alert). If policy agent35A determines that the monitored metric triggers the threshold value, policy agent35A may raise an alarm condition and communicate information about the alarm to policy controller23. Policy controller23and/or policy agent35A may act on the alarm, such as by generating a notification. Policy controller23may update dashboard203to include the notification. Policy controller23may cause updated dashboard203to be presented at user interface device129, thereby notifying administrator24of the alarm condition. 
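For purposes of illustration only, the threshold checks described above (greater-than, less-than, and equality comparisons against configured thresholds) can be sketched as a small rule evaluator. The rule field names are assumptions for the sketch; the disclosure does not define a concrete policy format at this level:

```python
# Example rules mirroring the thresholds in the text:
# CPU usage > 50%, available disk space < 20%, VM instance count == 20.
RULES = [
    {"metric": "cpu_pct",       "op": ">",  "threshold": 50},
    {"metric": "disk_free_pct", "op": "<",  "threshold": 20},
    {"metric": "vm_instances",  "op": "==", "threshold": 20},
]

OPS = {
    ">":  lambda value, threshold: value > threshold,
    "<":  lambda value, threshold: value < threshold,
    "==": lambda value, threshold: value == threshold,
}

def evaluate(rules, metrics):
    """Return the rules whose alarm condition is met for the monitored
    metric values; an agent would report each as an alarm."""
    return [rule for rule in rules
            if rule["metric"] in metrics
            and OPS[rule["op"]](metrics[rule["metric"]], rule["threshold"])]

alarms = evaluate(RULES, {"cpu_pct": 72, "disk_free_pct": 35, "vm_instances": 20})
assert [a["metric"] for a in alarms] == ["cpu_pct", "vm_instances"]
```

Evaluating the rules locally at each agent, as the text describes, means only the (small) alarm records travel to the controller rather than the raw metric stream.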
In some examples, policy controller23may generate policies and establish alarm conditions without user input. For example, policy controller23may apply analytics and machine learning to metrics collected by policy agents35. Policy controller23may analyze the metrics collected by policy agents35over various time periods. Policy controller23may determine, based on such analysis, information sufficient to configure an alarm for one or more metrics. Policy controller23may process the information and generate one or more policies202that implement the alarm settings. Policy controller23may communicate information about the policy to one or more policy agents35executing on servers12. Each of policy agents35may thereafter monitor conditions and respond to conditions that trigger an alarm pursuant to the corresponding policies202generated without user input. In accordance with techniques described herein, policy controller23generates profiles213for elements of data center10A. A profile is associated with an element or group of elements and is a collection of alarms to be evaluated against corresponding metrics for the alarms to determine whether the associated element or group of elements is “active” or “inactive.” For example, policy controller23, in response to input received via UI device129, may generate profiles213for respective hosts, instances (e.g., VMs36), network devices, groups thereof, and resources thereof (e.g., CPU, memory, disk, network interface, etc.). Moreover, a user or administrator configures elements of data center10A to be members of one or more groups of elements such that the elements and groups have a “member-of” relationship. As examples, an OpenStack host (e.g., any of servers12) can be a member of one or more “host aggregates” that are each groups of one or more hosts. A Kubernetes container can be a member of (1) a pod, (2) a replication controller, (3) a namespace, and (4) several different services. 
A virtual machine36can be configured as a member of one or more “instance aggregates” that are each groups of one or more instances. A network device can be configured as a member of one or more “network device aggregates” that are each groups of one or more network devices. In each of the above examples, a user or agent may define profiles213for each element and group of elements. The term “aggregate” as used herein may refer to a set of one or more hosts, a set of one or more network devices, or a set of one or more instances. In general, the techniques make use of certain internal processor metrics relating to resources that are shared inside of a physical processor, such as metrics relating to an internal cache of the processor that is shared by software executing on one or more cores within the processor or memory bus bandwidth that is consumed by the cores within the physical processor. Such metrics relating to resources shared inside a physical microprocessor may provide insights as to how virtual machines36(or processes within virtual machines36) executing on each of servers12are contending for or otherwise using shared resources internal to the processors. Such information may be used to pinpoint bottlenecks, instances of resource contention, and performance issues that might not otherwise be apparent from other metrics, such as CPU utilization or CPU load metrics. In some examples, the manner in which one or more virtual machines36operate and/or use such shared resources (such as a shared cache or memory bus) on a given server may adversely affect the operation of other virtual machines36on that same server. However, by monitoring only CPU usage, for example, it may be difficult to identify which particular virtual machine is causing performance problems for other virtual machines36. 
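For purposes of illustration only, profiles as collections of alarms evaluated over the “member-of” relationships described above can be sketched as follows. The group names, alarm predicates, and the rule that an element is “active” when any applicable alarm fires are all assumptions chosen for the sketch:

```python
# "member-of" relationships: an element may belong to several aggregates.
MEMBER_OF = {
    "server12A": ["rack-1-hosts", "gpu-hosts"],
    "server12B": ["rack-1-hosts"],
}

# A profile is a collection of alarms, here (metric name, predicate) pairs,
# attached to an element or to a group of elements.
PROFILES = {
    "server12A":    [("cpu_pct", lambda v: v is not None and v > 90)],
    "rack-1-hosts": [("mem_pct", lambda v: v is not None and v > 80)],
}

def profiles_for(element):
    """Collect the alarms from an element's own profile plus the profiles
    of every group the element is a member of."""
    names = [element] + MEMBER_OF.get(element, [])
    return [alarm for name in names for alarm in PROFILES.get(name, [])]

def is_active(element, metrics):
    # In this sketch, the element is "active" when any alarm in any
    # applicable profile evaluates true against its metrics.
    return any(pred(metrics.get(metric)) for metric, pred in profiles_for(element))

assert is_active("server12A", {"cpu_pct": 95, "mem_pct": 10}) is True   # own profile
assert is_active("server12B", {"mem_pct": 85}) is True                  # via group
assert is_active("server12B", {"mem_pct": 50}) is False
```

Attaching a profile to an aggregate lets one rule cover every current and future member of the group without per-element configuration.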
Yet by monitoring metrics of resources internal to the processor of each of servers12, it may be possible not only to identify which virtual machine may be causing performance problems for other virtual machines36on a given processor, but to also take steps to improve the policy control of all virtual machines36executing on the processors of one or more servers12. If appropriate policy control is applied across data center10A, it may be possible to improve the operation, efficiency, and consistent performance of data center10A in the aggregate, and more effectively comply with service level agreements and performance guarantees. By monitoring internal processor metrics to identify resources shared within the processor of a server as consumed by elements including software processes executing on hardware cores internal to the processor, policy controller23of data center10A may identify virtual machines36, containers, and/or processes that are consuming shared resources in a manner that may adversely affect the performance of other virtual machines36, containers, and/or processes executing on that server. By identifying processes that may be adversely affecting the operation of other processes, policy controller23of data center10A may take steps to address how such processes operate or use shared resources, and as a result, improve the aggregate performance of virtual machines, containers, and/or processes executing on any given server, and/or improve the operation of all servers12collectively. Accordingly, as a result of identifying processes adversely affecting the operation of other processes and taking appropriate responsive actions, virtual machines36may perform computing operations on servers12more efficiently, and more efficiently use shared resources of servers12. By performing computing operations more efficiently and more efficiently using shared resources of servers12, data center10A may perform computing tasks more quickly and with less latency. 
Therefore, aspects of this disclosure may improve the function of servers12and data center10A, because identifying and addressing any virtual machines36that are adversely affecting the operation of other virtual machines36may have the effect of enabling servers12and data center10A to perform computing tasks more quickly and with less latency. Further, assessment of metrics or conditions that may trigger an alarm may be implemented locally at each of servers12(e.g., by policy agents35). By performing such assessments locally, performance metrics associated with the assessment can be accessed at a higher frequency, which can permit or otherwise facilitate performing the assessment faster. Implementing the assessment locally may, in some cases, avoid the transmission of information indicative of performance metrics associated with assessment to another computing device (e.g., policy controller23) for analysis. As such, latency related to the transmission of such information can be mitigated or avoided entirely, which can result in substantial performance improvement in scenarios in which the number of performance metrics included in the assessment increases. In another example, the amount of information that is sent from the computing device can be significantly reduced when information indicative or otherwise representative of alarms and/or occurrence of an event is to be sent, as opposed to raw data obtained during the assessment of operational conditions. In yet another example, the time it takes to generate the alarm can be reduced in view of efficiency gains related to latency mitigation. 
Various components, functional units, and/or modules illustrated inFIG.1(e.g., user interface device129, orchestration engine130, SDN controller132, and policy controller23, policy agent35) and/or illustrated or described elsewhere in this disclosure may perform operations described using software, hardware, firmware, or a mixture of hardware, software, and firmware residing in and/or executing at one or more computing devices. For example, a computing device may execute one or more of such modules with multiple processors or multiple devices. A computing device may execute one or more of such modules as a virtual machine executing on underlying hardware. One or more of such modules may execute as one or more services of an operating system or computing platform. One or more of such modules may execute as one or more executable programs at an application layer of a computing platform. In other examples, functionality provided by a module could be implemented by a dedicated hardware device. Although certain modules, data stores, components, programs, executables, data items, functional units, and/or other items included within one or more storage devices may be illustrated separately, one or more of such items could be combined and operate as a single module, component, program, executable, data item, or functional unit. For example, one or more modules or data stores may be combined or partially combined so that they operate or provide functionality as a single module. Further, one or more modules may operate in conjunction with one another so that, for example, one module acts as a service or an extension of another module. Also, each module, data store, component, program, executable, data item, functional unit, or other item illustrated within a storage device may include multiple components, sub-components, modules, sub-modules, data stores, and/or other components or modules or data stores not illustrated. 
Further, each module, data store, component, program, executable, data item, functional unit, or other item illustrated within a storage device may be implemented in various ways. For example, each module, data store, component, program, executable, data item, functional unit, or other item illustrated within a storage device may be implemented as part of an operating system executed on a computing device. Alarms included in a profile213, when triggered or ‘active,’ determine whether the profile213is active. In addition, alarms for groups of elements of which an element is a member may also determine whether the profile213for the element is active. Because an element may be a member of multiple groups that overlap at least with respect to the element, policy controller23that generates and in some cases distributes profiles213to policy agents35may allow users and administrators of virtualization infrastructure of data center10A to express, with a profile213, a combination of alarms to be applied to an element or group by configuring membership of the element or group in another group, without having to manually configure an overall profile213for the element or the group. Further, the techniques may dynamically account for changes in a group membership by evaluating the profile213for the group using the new set of elements that are members of a group without having to reconfigure the profile213for the group. Policy controller23may distribute profiles213to policy agents35executing on computing devices, e.g., server12or network devices of data center10A. A policy agent35evaluates each of the received profiles213based on the one or more alarms included therein and on metrics received by the policy agent35indicating a performance of the element and, at least in some cases where the element is a member of a group, one or more other elements that are also members of the group. Further example techniques of policy controller23are described in U.S. patent application Ser. 
No. 15/797,098, entitled “MICRO-LEVEL MONITORING, VISIBILITY AND CONTROL OF SHARED RESOURCES INTERNAL TO A PROCESSOR OF A HOST MACHINE FOR A VIRTUAL ENVIRONMENT,” incorporated herein by reference in its entirety. FIG.2is a block diagram illustrating an example implementation of data center10A ofFIG.1in further detail. In the example ofFIG.2, data center10A includes an overlay network that extends switch fabric20from physical switches16,18to software or “virtual” switches30A-30X (again, collectively “virtual routers30”). Virtual routers30dynamically create and manage one or more virtual networks34usable for communication between application instances. In one example, virtual routers30execute the virtual network as an overlay network, which provides the capability to decouple an application's virtual address from a physical address (e.g., IP address) of the one of servers12A-12X (“servers12”) on which the application is executing. Each virtual network may use its own addressing and security scheme and may be viewed as orthogonal from the physical network and its addressing scheme. Various techniques may be used to transport packets within and across virtual networks34over the physical network. In some examples, the techniques described in this disclosure provide multicast service within virtual networks34without requiring multicast support in the underlying physical network. Each virtual router30may execute within a hypervisor, a host operating system or other component of each of servers12. Each of servers12may represent an x86 or other general-purpose or special-purpose server capable of executing virtual machines36. In the example ofFIG.2, virtual router30A executes within hypervisor31, also often referred to as a virtual machine manager (VMM), which provides a virtualization platform that allows multiple operating systems to concurrently run on one of servers12. 
In the example ofFIG.2, virtual router30A manages virtual networks34, each of which provides a network environment for execution of one or more virtual machines (VMs)36on top of the virtualization platform provided by hypervisor31. Each VM36is associated with one of the virtual networks VN0-VN1and may represent tenant VMs running customer applications such as Web servers, database servers, enterprise applications, or hosting virtualized services used to create service chains. In some cases, any one or more of servers12or another computing device may host customer applications directly, i.e., not as virtual machines. In some cases, some of VMs36may represent containers, another form of virtualized execution environment. That is, both virtual machines and containers are examples of virtualized execution environments for executing workloads. In general, each VM36may be any type of software application and may be assigned a virtual address for use within a corresponding virtual network34, where each of the virtual networks may be a different virtual subnet provided by virtual router30A. A VM36may be assigned its own virtual layer three (L3) IP address, for example, for sending and receiving communications but may be unaware of an IP address of the physical server12A on which the virtual machine is executing. In this way, a “virtual address” is an address for an application that differs from the logical address for the underlying, physical computer system, e.g., server12A in the example ofFIG.1or2. In one implementation, each of servers12includes a corresponding one of virtual network (VN) agents35A-35X (collectively, “VN agents35”) that controls the overlay of virtual networks34and that coordinates the routing of data packets within server12. In general, each VN agent35communicates with virtual network controller22, which generates commands to control routing of packets through data center10A. 
VN agents35may operate as a proxy for control plane messages between virtual machines36and virtual network controller22. For example, a VM36may request to send a message using its virtual address via the VN agent35A, and VN agent35A may in turn send the message and request that a response to the message be received for the virtual address of the VM36that originated the first message. In some cases, a VM36may invoke a procedure or function call presented by an application programming interface of VN agent35A, and the VN agent35A may handle encapsulation of the message as well, including addressing. In accordance with the techniques of the disclosure, each VN agent35further includes a respective policy agent139A-139X (collectively, “policy agents139”) for applying one or more policies to one or more categories, as will be described in more detail below. In one example, network packets, e.g., layer three (L3) IP packets or layer two (L2) Ethernet packets generated or consumed by the instances of applications executed by virtual machines36within the virtual network domain may be encapsulated in another packet (e.g., another IP or Ethernet packet) that is transported by the physical network. The packet transported in a virtual network may be referred to herein as an “inner packet” while the physical network packet may be referred to herein as an “outer packet” or a “tunnel packet.” Encapsulation and/or de-capsulation of virtual network packets within physical network packets may be performed within virtual routers30, e.g., within the hypervisor or the host operating system running on each of servers12. As another example, encapsulation and de-capsulation functions may be performed at the edge of switch fabric14at a first-hop TOR switch16that is one hop removed from the application instance that originated the packet. This functionality is referred to herein as tunneling and may be used within data center10A to create one or more overlay networks. 
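The inner/outer packet relationship above can be sketched as a simple encapsulation and de-capsulation pair. The outer header layout here (source, destination, virtual network identifier) is a simplified assumption for illustration, not the encoding of any of the real tunneling protocols named below:

```python
# Illustrative sketch of tunneling: the "inner" virtual-network packet is
# carried as the payload of an "outer" physical-network packet. The outer
# header layout is hypothetical, not a real protocol encoding.
import struct

def encapsulate(inner_packet: bytes, outer_src: int, outer_dst: int, vni: int) -> bytes:
    # Outer header: physical source address, physical destination address,
    # and a virtual network identifier, each as a 32-bit big-endian field.
    outer_header = struct.pack("!III", outer_src, outer_dst, vni)
    return outer_header + inner_packet

def decapsulate(tunnel_packet: bytes):
    # Strip the 12-byte outer header and recover the inner packet.
    _src, _dst, vni = struct.unpack("!III", tunnel_packet[:12])
    return vni, tunnel_packet[12:]

inner = b"app-payload"
tunnel = encapsulate(inner, 0x0A000001, 0x0A000002, vni=42)
vni, recovered = decapsulate(tunnel)
assert (vni, recovered) == (42, inner)
```

The virtual network identifier in the outer header is what lets the receiving virtual router deliver the inner packet to the correct virtual network34even though the physical network only ever sees the outer addresses.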
Besides IPinIP, other example tunneling protocols that may be used include IP over GRE, VxLAN, MPLS over GRE, MPLS over UDP, etc. As noted above, virtual network controller22provides a logically centralized controller for facilitating operation of one or more virtual networks within data center10A. Virtual network controller22may, for example, maintain a routing information base, e.g., one or more routing tables that store routing information for the physical network as well as one or more overlay networks of data center10A. Similarly, switches16,18and virtual routers30maintain routing information, such as one or more routing and/or forwarding tables. In one example implementation, virtual router30A of hypervisor31implements a network forwarding table (NFT)32for each virtual network34. In general, each NFT32stores forwarding information for the corresponding virtual network34and identifies where data packets are to be forwarded and whether the packets are to be encapsulated in a tunneling protocol, such as with a tunnel header that may include one or more headers for different layers of the virtual network protocol stack. In accordance with the techniques of the disclosure, virtual network controller22further includes policy controller23. Policy controller23is configured to tag a plurality of objects across a plurality of levels. In some examples, the plurality of levels includes a level of an object, such as a global environment level, a project level, a virtual network level, a virtual machine level, or an interface level of the object. In further examples, policy controller23tags the plurality of objects across a plurality of categories. In some examples, the plurality of categories include applications executing within VMs36, deployments, application tiers, geographic sites, virtual networks, VMs36, interfaces, projects, security requirements, quality requirements, users, or compliance requirements. 
In some examples, policy controller23further distributes, to respective policy agents139of VN agents35, a plurality of policies. Each policy of the plurality of policies includes one or more policy rules for controlling network traffic. Each of the plurality of policies may further include instructions for issuing communication probes from the agent to one or more different agents. Each policy rule of the plurality of policy rules specifies one or more tags, each tag further specifying one or more dimensions of the categories. Upon receiving the plurality of policies, each policy agent139applies the one or more policy rules of each of the plurality of policies to tagged objects corresponding to tags of the one or more policy rules to control network traffic between the tagged objects. For example, policy agents139under the direction of policy controller23express the one or more policy rules at logical interfaces of VMs36that are tagged with tags corresponding to tags of the one or more policy rules. Policy agents139permit or block network traffic to and/or from the logical interfaces based on the one or more policy rules. As an illustration of the above, in the example ofFIG.2, policy agent139A under the direction of policy controller23has tagged virtual machines VM036, VM136, VM236, and VM336as belonging to a category described by tag 1. Further, policy agent139A has categorized virtual networks VN034, VN134, and VN234in a category described by tag 2, respectively. Furthermore, virtual machines VM036, VM136, and VM236belong to a first project33, which policy agent139A has categorized as belonging to a category described by tag 3. Still further, policy agent139A has categorized server112A as belonging to a category described by tag 4. 
In some examples, the one or more policy rules describe security or firewall rules, and based on the one or more security or firewall rules, each policy agent139may allow or deny network traffic between categories described by the one or more tags. In other examples, each of the one or more policy rules describe one or more firewall, network, application, or global policy rules. Thus, by applying the policies as described herein, policy agents139may apply specific policy rules so as to enable workloads with specific dimensions. Accordingly, the techniques of the disclosure may allow for scalable deployment of policies across different environments within a plurality of data centers10. Further, the techniques of the disclosure may reduce the complexity and simplify management of such policies within the plurality of data centers10. In some examples, policies are “inherited” in that policies applied to parent objects are likewise applied to objects that are children of the parent objects in a hierarchical fashion. As one example, policy agent139A receives a first policy rule that includes tag 3 that is applied to project33. Policy agent139A applies the first policy rule to project33. Policy agent139A further applies the first policy rule to children of the first project, such as virtual networks, virtual machines, and interfaces within the first project, but not to virtual networks, virtual machines, and interfaces within a second project, so as to permit or deny network traffic to the first project and objects within the first project as specified by the first policy rule. In the example ofFIG.2, policy agent139A applies the first policy rule to virtual machines VM036, VM136, and VM236because virtual machines VM036, VM136, and VM236are children of project33. However, policy agent139A does not apply the first policy rule to virtual machine VM336, which is not part of project33. 
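The hierarchical inheritance just described, where a rule applied to a project also governs the project's children but never its ancestors, can be sketched as a recursive walk over an object tree. The object model and rule names below are illustrative assumptions:

```python
# Sketch of hierarchical ("inherited") policy application: a rule attached
# to a parent object also applies to all of its descendants, but never
# propagates upward to ancestors. Object and rule names are illustrative.

class Obj:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.rules = []

def apply_rule(obj, rule):
    obj.rules.append(rule)
    for child in obj.children:
        apply_rule(child, rule)  # propagate to descendants only

vm0, vm1, vm3 = Obj("VM0"), Obj("VM1"), Obj("VM3")
project = Obj("project33", children=[vm0, vm1])

apply_rule(project, "first-policy-rule")   # tag on the project: inherited by its VMs
apply_rule(vm0, "second-policy-rule")      # tag on one VM: applies to that VM only

print(vm0.rules)      # ['first-policy-rule', 'second-policy-rule']
print(project.rules)  # ['first-policy-rule']  (no upward inheritance)
print(vm3.rules)      # []  (outside project33)
```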
Continuing the foregoing example, policy agent139A receives a second policy rule that includes tag 1 that describes virtual machines36. Policy agent139A therefore applies both the first policy rule and the second policy rule to virtual machines VM036, VM136, and VM236. Further, policy agent139A applies both the first policy rule and the second policy rule to interfaces of virtual machines VM036, VM136, and VM236(not depicted), which are children objects of virtual machines36. However, policy agent139A does not apply the second policy rule to project33because project33is a parent, not a child, of virtual machines VM036, VM136, and VM236. In some examples of the techniques of the disclosure, the policy framework implements complex regular expressions to apply tags for such policies to specific environments or infrastructure. For example, such policy rules include tag expressions to group and/or segment workloads. As a further example, policy agent139A applies a match condition to match the tag expressions with tagged objects corresponding to the tag expressions. Thus, global policies may be distributed to a plurality of policy agents139, and each policy agent139may selectively apply only those policy rules that include tag expressions that correspond to tagged objects. In some examples, policy controller23distributes one or more policy rules via Border Gateway Protocol (BGP). Such a BGP message may include an action for a particular traffic flow, such as allowing or denying the flow and a list of one or more destination protocols and ports for application of the specified action. In such an example, the BGP message may further specify one or more tags for an object as an extended community for BGP. Additional information with respect to BGP is described in “BGP MPLS-Based Ethernet VPN,” RFC 7432, Internet Engineering Task Force (IETF), February 2015, available at https://tools.ietf.org/html/rfc7432, the entire contents of which is incorporated herein by reference. 
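A match condition over tag expressions, as described above, can be sketched as a small recursive evaluator. The disclosure only states that complex expressions are supported; the AND/OR/NOT grammar and the tag strings below are assumptions for illustration:

```python
# Sketch of matching a policy rule's tag expression against an object's
# tag set. The expression grammar (nested AND/OR/NOT over tag strings)
# is an illustrative assumption.

def matches(expr, tags):
    """expr is either a bare tag string or a nested tuple:
    ('and', a, b), ('or', a, b), or ('not', a)."""
    if isinstance(expr, str):
        return expr in tags
    op = expr[0]
    if op == "and":
        return matches(expr[1], tags) and matches(expr[2], tags)
    if op == "or":
        return matches(expr[1], tags) or matches(expr[2], tags)
    if op == "not":
        return not matches(expr[1], tags)
    raise ValueError(f"unknown operator: {op}")

# A rule that selects development workloads outside the EU site:
rule_expr = ("and", "deployment=dev", ("not", "site=EU"))
print(matches(rule_expr, {"deployment=dev", "site=US"}))  # True
print(matches(rule_expr, {"deployment=dev", "site=EU"}))  # False
```

An agent receiving a globally distributed policy would evaluate such an expression against each local tagged object and apply only the rules whose expressions match, as the text describes.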
FIG.3is a block diagram illustrating an example of a plurality of data centers10in which examples of the techniques described herein may be implemented. Each of data centers10A-10X (collectively “datacenters10”) may operate in a substantially similar fashion to datacenters10ofFIG.1andFIG.2. In some examples, one or more of datacenters10implements a cloud environment executing Openstack, while in another example, one or more of data centers10is a cloud environment executing Kubernetes, while in yet another example, one or more of data centers10implements a cloud environment executing Amazon Web Services, while in yet another example, one or more of data centers10executes on “bare metal” infrastructure. In some examples, each of data centers10executes a different infrastructure than each other data center10. In the example ofFIG.3, each of data centers10comprises a database308and three different environments: a development environment310, a staging environment312, and a production environment314. Each environment of each data center10is capable of executing one or more applications302A-302D (collectively, “applications302”), such as application302A executing within development environment310A of data center10A, application302B executing within production environment314A of data center10A, application302C executing within development environment310B of data center10B, and application302D executing within staging environment312B of data center10B. In one example, each application302is an instance of a finance application. Policies as described herein are robust and extensible for application in many different types of cloud environments described above. Within an environment, each application may include a plurality of processes. In the example ofFIG.3, application302A of development environment310A executes web process304A and Application Program Interface (API) process306A. 
Further, application302B of production environment314A executes web process304B and API process306B. Application302C of development environment310B executes web process304C and API process306C. Still further, application302D of staging environment312B executes web process304D and API process306D. However, in other examples, each application302of a respective environment310A,312A,314A,310B,312B, or314B may execute additional or different processes than each other application302. Furthermore, in some examples, the applications executed within each respective environment310A,310B,312A,312B,314A, and314B may be similar or different from the applications executed within each other environment310A,310B,312A,312B,314A, and314B. It may be desirable to ensure that processes304,306of each application302of a respective environment310,312,314do not exchange network traffic with processes304,306of an application302within a different environment310,312,314, while still permitting network traffic to flow between different processes304,306within the same application302. For example, an administrator may desire to allow network traffic between web process304A and API process306A because each of web process304A and API process306A operate within the same application302A within development environment310A. Further, the administrator may desire to prevent network traffic between web process304A (e.g., operating within development environment310A of data center10A) and API process306B (e.g., operating within production environment314A of data center10A) or between web process304A (e.g., operating within development environment310A of data center10A) and API process306C (e.g., operating within development environment310B of data center10B). 
Further, an administrator may desire to permit processes304,306executing within an application302within the same data center10to access the same database308, regardless of the specific environment310,312, or314within which the particular application302executes. For example, an administrator may desire to allow API process306A executing within application302A and API process306B executing within application302B to each access database308A. If the administrator were to independently manage network traffic policies for each process304,306within each application302within each environment310,312, and314within each data center10, the administrator may have difficulty ensuring that each network traffic policy meets security, reliability, and quality requirements. Furthermore, upgrading one or more of the policies across multiple data centers10may become cumbersome or unmanageable. In accordance with the techniques of the disclosure, one or more tags specifying one or more categories are applied to each process304,306. As depicted in the example ofFIG.3, policy controller23of VNC22applies tag “1A” to web process304A, which specifies that web process304A belongs to development environment310A. Similarly, policy controller23applies tag “2A” to API process306A, which specifies that API process306A belongs to development environment310A. In contrast, policy controller23applies tag “1B” to web process304B and tag “2B” to API process306B, each of which specify that the respective application belongs to production environment314A. Furthermore, tag “1C” is applied to web process304C and tag “2C” is applied to API process306C, each of which specify that the respective application belongs to development environment310B; and tag “1D” is applied to web process304D and tag “2D” is applied to API process306D, each of which specify that the respective application belongs to staging environment312B. 
In one example, a policy agent139of a VN agent35receives, from policy controller23, one or more policies specifying one or more policy rules for allowing or blocking network traffic to and from applications302. Each of the one or more policy rules specify one or more tags, each of the tags specifying one or more categories for application of the policy rules. Policy agent139applies, to each process304,306of each application302, only those policy rules that include tags matching tags applied to respective processes304,306. For example, policy agent139may receive a first policy rule that permits network traffic destined for API process306B and includes tag 1B. Policy agent139may examine tags for each of processes304,306, but apply the policy rule only to web process304B because web process304B is categorized within the same category as specified by tag 1B. VN agent35may subsequently permit network traffic flowing from an interface of a virtual machine executing web process304B to an interface of a virtual machine executing API process306B and block other traffic flowing to the interface of the virtual machine executing API process306B, such as traffic flowing from an interface of a virtual machine executing web process304A. Similarly, policy agent139may receive a second policy rule that permits network traffic destined for database308A and includes tags 2A and 2B. Policy agent139may examine categories applied to each of processes304,306. Policy agent139applies the policy rule only to API processes306A and306B because API processes306A and306B are categorized within the same category as specified by tags 2A and 2B, respectively. 
VN agent35may subsequently permit network traffic flowing from either an interface of a virtual machine executing API process306A or the interface of the virtual machine executing API process306B to an interface of database308A and block other traffic flowing to the interface of database308A, such as traffic flowing from a virtual machine executing API process306D. As depicted in the foregoing example, the second policy rule may specify tags 2A and 2B in a Boolean expression, e.g., (2A OR 2B). However, in accordance with the techniques of the disclosure, each policy rule may further include complex regular expressions that specify tags for one or more objects. For example, a policy rule may include one or more Boolean operators (e.g., AND, OR, NOT), mathematical operators (e.g., addition, subtraction, multiplication, division), bit-shifting operations, matrix operations, text parsing operators, or a combination of any of the foregoing. In the foregoing examples, policy agent139implements whitelisting, in that all traffic between designated interfaces of virtual machines is preemptively blocked, and the one or more policy rules whitelist specific traffic flows (e.g., specify specific traffic flows between interfaces of virtual machines that are permitted). In alternate examples, policy agent139implements blacklisting, in that all traffic between interfaces of virtual machines is preemptively allowed, and the one or more policy rules blacklist specific traffic flows (e.g., specify specific traffic flows between interfaces of virtual machines that are blocked). In alternate examples, the one or more policy rules may include a combination of whitelisting or blacklisting specific traffic flows or groups of traffic flows between interfaces of virtual machines. In the example ofFIG.3, policy agent139is depicted as receiving, from policy controller23, tags specifying the specific environments to which processes304,306belong. 
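The whitelisting and blacklisting modes described above differ only in the default disposition of a flow that matches no rule. A minimal sketch, with flow identifiers borrowed from the example and the rule format assumed for illustration:

```python
# Sketch of the two evaluation modes described above. In whitelist mode
# all flows are blocked unless a rule permits them (default deny); in
# blacklist mode all flows are allowed unless a rule blocks them
# (default allow). The flow/rule representation is illustrative.

def allowed(flow, listed_flows, mode="whitelist"):
    listed = flow in listed_flows
    if mode == "whitelist":
        return listed        # only explicitly permitted flows pass
    if mode == "blacklist":
        return not listed    # only explicitly blocked flows are dropped
    raise ValueError(f"unknown mode: {mode}")

# Flows permitted by the example's rules (source interface, destination interface):
permit = {("web304B", "api306B"), ("api306A", "db308A"), ("api306B", "db308A")}

print(allowed(("web304B", "api306B"), permit))               # True: whitelisted
print(allowed(("web304A", "api306B"), permit))               # False: not listed
print(allowed(("web304A", "api306B"), permit, "blacklist"))  # True: default allow
```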
However, in other examples, policy agent139may receive tags that specify other categories for organizing processes304,306. For example, tags for processes304,306may additionally, or alternatively, specify one or more application types, deployments, application tiers, geographic sites, virtual networks, virtual machines, interfaces, projects, security requirements, quality requirements, users, or compliance requirements. For example, one or more of data centers10implements a cloud environment executing OpenStack. In this example, the cloud environment supports three projects, each providing a finance application comprising three virtual networks. Within such a data center10, policy controller23may assign one or more tags at a project level. For example, policy controller23may assign a tag comprising a key of “application” and a value of “finance” and a tag comprising a key of “deployment” and a value of “development” to a first project. Policy controller23may assign a tag comprising a key of “application” and a value of “finance” and a tag comprising a key of “deployment” and a value of “staging” to a second project. Policy controller23may assign a tag comprising a key of “application” and a value of “finance” and a tag comprising a key of “deployment” and a value of “production” to a third project. Further, within each project, policy controller23may assign tags to each of the three virtual networks. For example, policy controller23assigns a tag comprising a key of “tier” and a value of “web” to a first virtual network, a tag comprising a key of “tier” and a value of “application” to a second virtual network, and a tag comprising a key of “tier” and a value of “database” to a third virtual network. Policy controller23distributes one or more policy rules to policy agent139, which applies the one or more policy rules to interfaces of virtual machines within the data center10based on the tags applied to the projects and virtual networks. 
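The key/value tag scheme in the OpenStack example can be sketched as follows. The keys and values ("application"/"finance", "deployment"/"development", "tier"/"web") come from the example above; the matching logic, where a rule selects objects carrying all of its key/value pairs, is an assumption:

```python
# Sketch of the key/value tagging scheme described above: tags are
# attached to projects and virtual networks, and a rule selects objects
# whose tags include all of the rule's key/value pairs. Keys and values
# follow the example in the text; the match logic is an assumption.

def tag(obj, **kv):
    obj.setdefault("tags", {}).update(kv)
    return obj

def selects(rule_tags, obj):
    return all(obj["tags"].get(k) == v for k, v in rule_tags.items())

first_project = tag({}, application="finance", deployment="development")
third_project = tag({}, application="finance", deployment="production")
web_network = tag({}, tier="web")

rule = {"application": "finance", "deployment": "development"}
print(selects(rule, first_project))      # True
print(selects(rule, third_project))      # False: deployment differs
print(selects({"tier": "web"}, web_network))  # True
```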
In some examples, one or more of data centers10is a cloud environment executing Kubernetes. In this example, the cloud environment supports two namespaces, each providing a finance application comprising three virtual networks. Within such a data center10, policy controller23may assign one or more tags at a namespace level. For example, policy controller23may assign a tag comprising a key of “application” and a value of “finance,” a tag comprising a key of “deployment” and a value of “development,” and a tag comprising a key of “site” and a value of “US” to a first namespace. Policy controller23may assign a tag comprising a key of “application” and a value of “finance,” a tag comprising a key of “deployment” and a value of “production,” and a tag comprising a key of “site” and a value of “US” to a second namespace. Further, within each namespace, policy controller23may assign tags to each of the three virtual networks. For example, policy controller23assigns a tag comprising a key of “tier” and a value of “web” to a first virtual network, a tag comprising a key of “tier” and a value of “application” to a second virtual network, and a tag comprising a key of “tier” and a value of “database” to a third virtual network. Policy controller23distributes one or more policy rules to policy agent139, which applies the one or more policy rules to interfaces of virtual machines within the data center10based on the tags applied to the projects and virtual networks. In some examples, a first data center10A implements a cloud environment executing Amazon Web Services and a second data center10B executes on “bare metal” infrastructure. In this example, data center10A supports a first finance application comprising three virtual networks and executing within a development deployment, while data center10B supports a second finance application comprising three virtual networks and executing within a production deployment. 
In this example, policy controller23may assign one or more tags at an application level. For example, policy controller23may assign a tag comprising a key of “application” and a value of “finance,” a tag comprising a key of “deployment” and a value of “development,” and a tag comprising a key of “site” and a value of “US” to the first application executing within data center10A. Policy controller23may further assign a tag comprising a key of “application” and a value of “finance,” a tag comprising a key of “deployment” and a value of “production,” and a tag comprising a key of “site” and a value of “US” to the second application executing within data center10B. Further, within each application, policy controller23may assign tags to each of the three virtual networks. For example, policy controller23assigns a tag comprising a key of “tier” and a value of “web” to a first virtual network, a tag comprising a key of “tier” and a value of “application” to a second virtual network, and a tag comprising a key of “tier” and a value of “database” to a third virtual network. Policy controller23distributes one or more policy rules to policy agent139, which applies the one or more policy rules to interfaces of virtual machines within the data center10based on the tags applied to the applications and virtual networks. In the example ofFIG.3, policy controller23is described as distributing policy rules including tags for objects at a project level. However, policy controller23may additionally or alternatively distribute policy rules including tags specifying various different object levels, such as a global environment level, a project level, a virtual network level, a virtual machine level, or an interface level. In an example implementation of the techniques of the disclosure, policy controller23uses a plurality of configuration objects to implement the one or more policies. As one example, policy controller23applies a first set of configuration objects at a global level.
The first set of configuration objects includes global application policy sets, global firewall policies, global firewall rules, and global tags across a plurality of levels and/or categories. Policy controller23distributes, to policy agents139, the first set of configuration objects at the global level. Policy agents139match global tags associated with global application policy sets, global firewall policies, and global firewall rules to objects tagged with the global tags. Based on the global application policy sets, global firewall policies, and global firewall rules, policy agents139allow or block network traffic between interfaces of the objects tagged with the global tags. Furthermore, policy controller23applies a second set of configuration objects at a project level. The second set of configuration objects may include project-specific application policy sets, firewall policies, firewall rules, and tags across a plurality of levels. Policy controller23distributes, to policy agents139, the second set of configuration objects at the project level. Policy agents139match project tags associated with project-specific application policy sets, firewall policies, and firewall rules to objects tagged with the project tags. Based on the project-specific application policy sets, firewall policies, and firewall rules, policy agents139allow or block network traffic between interfaces of the objects tagged with the project tags. In further examples, policy controller23may specify lower-level configuration objects, such as application policy sets, firewall policies, firewall rules, and tags defined at a virtual network-specific level, a virtual machine-specific level, and/or an interface-specific level. By doing so, policy controller23may apply a hierarchical set of policies to a plurality of objects within one or more data centers10.
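The hierarchical matching described above can be sketched as follows. This is a simplified illustration under assumed data structures and precedence (global rules consulted first, first match wins); it is not the disclosed implementation.

```python
# Simplified sketch of hierarchical policy evaluation: global rules are
# consulted before project-level rules, and a rule applies only to an object
# whose tags include every tag on the rule. All structures are assumptions.

def rule_matches(rule_tags, object_tags):
    """A rule applies when every one of its tags appears on the object."""
    return all(tag in object_tags for tag in rule_tags)

def evaluate(rule_sets, object_tags, default="allow"):
    """Scan the global rule set first, then lower-level sets; first match wins."""
    for rules in rule_sets:
        for rule in rules:
            if rule_matches(rule["tags"], object_tags):
                return rule["action"]
    return default

global_rules = [
    {"tags": [("application", "hr")], "action": "block"},
]
project_rules = [
    {"tags": [("application", "finance"), ("deployment", "development")],
     "action": "allow"},
]

interface_tags = [("application", "finance"), ("deployment", "development")]

# No global rule matches this interface, so the project-level rule decides.
decision = evaluate([global_rules, project_rules], interface_tags)
```

Lower-level rule sets (virtual network, virtual machine, interface) would simply be appended to the list passed to `evaluate` in this sketch.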
Thus, the techniques of the disclosure allow for distribution of simplified traffic policies that are scalable and robust across many different types of deployments and execution environments. FIG.4is a block diagram illustrating an example computing device100that executes a virtual router120for virtual networks in accordance with one or more aspects of the present disclosure. Computing device100may represent any of servers12ofFIGS.1-3or other device, such as any of TOR switches16. Computing device100includes in this example a system bus142coupling hardware components of a computing device100hardware environment. System bus142couples memory144, network interface cards (NICs)106A-106B (collectively, “NICs106”), storage disk107, and multi-core computing environment102having a plurality of processing cores108A-108J (collectively, “processing cores108”). Network interface cards106include interfaces configured to exchange packets using links of an underlying physical network. Multi-core computing environment102may include any number of processors and any number of hardware cores from, for example, four to thousands. Each of processing cores108includes an independent execution unit to perform instructions that conform to an instruction set architecture for the core. Processing cores108may each be implemented as separate integrated circuits (ICs) or may be combined within one or more multi-core processors (or “many-core” processors) that are each implemented using a single IC (i.e., a chip multiprocessor). Disk107represents computer readable storage media that includes volatile and/or non-volatile, removable and/or non-removable media implemented in any method or technology for storage of information such as processor-readable instructions, data structures, program modules, or other data.
Computer readable storage media includes, but is not limited to, random access memory (RAM), read-only memory (ROM), EEPROM, flash memory, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by cores108. Main memory144includes one or more computer-readable storage media, which may include random-access memory (RAM) such as various forms of dynamic RAM (DRAM), e.g., DDR2/DDR3 SDRAM, or static RAM (SRAM), flash memory, or any other form of fixed or removable storage medium that can be used to carry or store desired program code and program data in the form of instructions or data structures and that can be accessed by a computer. Main memory144provides a physical address space composed of addressable memory locations. Memory144may in some examples present a non-uniform memory access (NUMA) architecture to multi-core computing environment102. That is, cores108may not have equal memory access time to the various storage media that constitute memory144. Cores108may be configured in some instances to use the portions of memory144that offer the lowest memory latency for the cores to reduce overall memory latency. In some instances, a physical address space for a computer-readable storage medium may be shared among one or more cores108(i.e., a shared memory). For example, cores108A,108B may be connected via a memory bus (not shown) to one or more DRAM packages, modules, and/or chips (also not shown) that present a physical address space accessible by cores108A,108B. While this physical address space may offer the lowest memory access time to cores108A,108B of any of the portions of memory144, at least some of the remaining portions of memory144may be directly accessible to cores108A,108B. One or more of cores108may also include an L1/L2/L3 cache or a combination thereof.
The respective caches for cores108offer the lowest-latency memory access of any of the storage media for the cores108. Memory144, network interface cards (NICs)106A-106B (collectively, “NICs106”), storage disk107, and multi-core computing environment102provide an operating environment for a software stack that executes a virtual router120and one or more virtual machines110A-110K (collectively, “virtual machines110”). Virtual machines110may represent example instances of any of virtual machines36ofFIG.2. The computing device100partitions the virtual and/or physical address space provided by main memory144and, in the case of virtual memory, by disk107into user space111, allocated for running user processes, and kernel space112, which is protected and generally inaccessible by user processes. An operating system kernel (not shown inFIG.4) may execute in kernel space and may include, for example, a Linux, Berkeley Software Distribution (BSD), another Unix-variant kernel, or a Windows server operating system kernel, available from Microsoft Corp. Computing device100may in some instances execute a hypervisor to manage virtual machines110(also not shown inFIG.4). An example hypervisor31is illustrated inFIG.2. Example hypervisors include Kernel-based Virtual Machine (KVM) for the Linux kernel, Xen, ESXi available from VMware, Windows Hyper-V available from Microsoft, and other open-source and proprietary hypervisors. In some examples, specialized hardware programmed with routing information such as FIBs124may execute the virtual router120. Eth0114A and Eth1114B represent devices according to a software device model and provide device driver software routines for handling packets for receipt/transmission by corresponding NICs106. Packets received by NICs106from the underlying physical network fabric for the virtual networks may include an outer header to allow the physical network fabric to tunnel the payload or “inner packet” to a physical network address for one of NICs106.
The outer header may include not only the physical network address but also a virtual network identifier such as a VxLAN tag or Multiprotocol Label Switching (MPLS) label that identifies one of the virtual networks as well as the corresponding routing instance122. An inner packet includes an inner header having a destination network address that conforms to the virtual network addressing space for the virtual network identified by the virtual network identifier. For example, virtual router forwarding plane128may receive by Eth1from NIC106a packet having an outer header that includes a VxLAN tag associated in virtual router forwarding plane128with routing instance122A. The packet may have an inner header having a destination network address that is a destination address of VM110A that taps, via tap interface146A, into routing instance122A. Virtual router120in this example includes a kernel space112module: virtual router forwarding plane128, as well as a user space111module: VN agent35. Virtual router forwarding plane128executes the “forwarding plane” or packet forwarding functionality of the virtual router120and VN agent35executes the “control plane” functionality of the virtual router120. VN agent35may represent an example instance of any of VN agents35ofFIG.2. Virtual router forwarding plane128includes multiple routing instances122A-122C (collectively, “routing instances122”) for corresponding virtual networks. Each of routing instances122includes a corresponding one of forwarding information bases (FIBs)124A-124C (collectively, “FIBs124”) and flow tables126A-126C (collectively, “flow tables126”). Although illustrated as separate data structures, flow tables126may in some instances be logical tables implemented as a single table or other associative data structure in which entries for respective flow tables126are identifiable by the virtual network identifier (e.g., a VRF identifier such as a VxLAN tag or MPLS label).
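The demultiplexing step described above, in which the virtual network identifier in the outer header selects a routing instance for the inner packet, can be sketched as follows. The identifier values, field names, and instance names are invented for illustration.

```python
# Sketch of outer-header demultiplexing: the virtual network identifier
# (e.g., a VxLAN tag or MPLS label) selects the routing instance that will
# forward the inner packet. All values and names are illustrative.

routing_instances = {
    100001: "routing-instance-122A",  # identifier for the first virtual network
    100002: "routing-instance-122B",
    100003: "routing-instance-122C",
}

def demultiplex(outer_header):
    """Strip the outer header and pick a routing instance by identifier."""
    vni = outer_header["virtual_network_id"]
    inner_packet = outer_header["payload"]
    return routing_instances.get(vni), inner_packet

instance, inner = demultiplex({
    "physical_dst": "10.1.1.1",           # physical address of the NIC
    "virtual_network_id": 100001,
    "payload": {"dst": "192.168.0.5"},    # address in the virtual network space
})
```

The inner destination address is then looked up only within the selected routing instance's forwarding information, keeping the virtual address spaces isolated from one another.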
FIBs124include lookup tables that map destination addresses to destination next hops. The destination addresses may include layer 3 network prefixes or layer 2 MAC addresses. Flow tables126enable application of forwarding policies to flows. Each of flow tables126includes flow table entries that each match one or more flows that may traverse virtual router forwarding plane128and include a forwarding policy for application to matching flows. For example, virtual router forwarding plane128attempts to match packets processed by routing instance122A to one of the flow table entries of flow table126A. If a matching flow table entry exists for a given packet, virtual router forwarding plane128applies the flow actions specified in a policy to the packet. This may be referred to as “fast-path” packet processing. If a matching flow table entry does not exist for the packet, the packet may represent an initial packet for a new packet flow and virtual router forwarding plane128may request VN agent35to install a flow table entry in the flow table for the new packet flow. This may be referred to as “slow-path” packet processing for initial packets of packet flows and is represented inFIG.4by slow path140. In this example, VN agent35may be a user space111process executed by computing device100. VN agent35includes configuration data134, virtual routing and forwarding instances configurations136(“VRFs136”), and policy table138(“policies138”). VN agent35exchanges control information with one or more virtual network controllers (e.g., VNC22ofFIGS.1-2). Control information may include virtual network routes and low-level configuration state such as routing instances and forwarding policy for installation to configuration data134, VRFs136, and policies138. VN agent35may also report analytics state, install forwarding state to FIBs124of virtual router forwarding plane128, and discover VMs110and attributes thereof.
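The fast-path/slow-path split described above can be sketched as a flow-table lookup: a packet matching an installed entry is handled immediately, while the initial packet of a new flow is punted to the agent, which installs an entry for subsequent packets. The data structures and the "forward" action are assumptions for illustration, not the disclosed design.

```python
# Sketch of fast-path/slow-path packet processing over a flow table.
# Flow keys and the installed policy are illustrative assumptions.

flow_table = {}  # maps a flow key to a forwarding policy

def agent_install_flow(flow_key):
    """Slow path: the agent computes a policy and installs a flow entry."""
    policy = {"action": "forward"}
    flow_table[flow_key] = policy
    return policy

def process_packet(flow_key):
    entry = flow_table.get(flow_key)
    if entry is not None:
        return ("fast-path", entry["action"])
    entry = agent_install_flow(flow_key)   # initial packet of a new flow
    return ("slow-path", entry["action"])

flow = ("10.0.0.1", "10.0.0.2", 6, 49152, 80)   # 5-tuple flow key
first = process_packet(flow)    # slow path installs the entry
second = process_packet(flow)   # subsequent packets take the fast path
```

Only the first packet of the flow pays the cost of the round trip to the agent; every later packet hits the installed entry directly.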
As noted above, VN agent35further applies slow-path packet processing for the first (initial) packet of each new flow traversing virtual router forwarding plane128and installs corresponding flow entries to flow tables126for the new flows for fast path processing by virtual router forwarding plane128for subsequent packets of the flows. In accordance with the techniques of the disclosure, VN agent35further includes tag database141. As described above, policy controller23of VNC22assigns one or more tags specifying one or more levels and/or one or more categories for the objects. Policy controller23may apply tags to objects across a plurality of levels or categories, as well as apply the tags to objects across a plurality of data centers. Policy controller23distributes these tags to VN agent35, which stores the assigned tags in tag database141. VN agent35further includes a policy agent139. Policy agent139receives, from policy controller23, one or more policies and stores such policies in policies138. Each of policies138may include one or more policy rules that specify whether particular network traffic should be allowed or blocked. Further, each policy rule includes one or more policy tags that specify to which objects, such as applications executing on VMs110, the policy rules apply. For example, policy agent139may cross reference the one or more tags of a policy rule with one or more tags in tag database141to determine one or more objects to which the policy rule should be applied. Upon determining the one or more objects to which the policy rule should be applied, policy agent139allows or blocks network traffic originating from or destined to the one or more objects in accordance with the policy rule. In some examples, policy agent139allows or blocks network traffic at an interface level of VMs110. In this example, interfaces146inherit tags from parent objects, such as VMs110, virtual networks, parent projects, or global policy sets.
Thus, policy agent139allows or blocks network traffic destined for or originating from interfaces146of VMs110by matching tags of one or more policy rules to tags applied to interfaces146. As one example, policy agent139receives, from policy controller23, a first policy rule that permits network traffic originating from interface146C and includes tag 1. Policy agent139may examine categories applied to interfaces146, determine that a category of interface146A matches tag 1 of the first policy rule, and apply the policy rule only to interface146A. VN agent35may subsequently permit network traffic originating from interface146C and destined for interface146A. As another example, policy agent139receives, from policy controller23, a second policy rule that blocks network traffic originating from interface146C and includes tag 2. Policy agent139may examine categories applied to interfaces146, determine that the category of interface146B matches tag 2 of the second policy rule, and apply the policy rule only to interface146B. VN agent35may subsequently block network traffic originating from interface146C and destined for interface146B. In addition, the policy agent139may receive from the policy controller23instructions related to communication probes to be issued by the policy agent139. The policy controller23may include instructions regarding the communication protocol to be used in issuing the communication probes, the interval for issuing the probes, and what data and other information the policy agent139would monitor for and collect in conjunction with the issuance of the probes. The policy controller23may also instruct the policy agent139on when and how to report any results related to the issuance of the communication probes back to the policy controller23.
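The interface-level matching in the two examples above can be sketched briefly. The tag sets below, including any inherited tags, are invented for illustration.

```python
# Illustrative sketch of interface-level rule matching: a policy rule
# carries a tag and is applied only to interfaces whose (possibly inherited)
# tags include that tag. Interface names and tags are assumptions.

interface_tags = {
    "interface-146A": {"tag 1"},   # tags inherited from its VM, VN, or project
    "interface-146B": {"tag 2"},
    "interface-146C": set(),
}

def interfaces_for_rule(rule_tag):
    """Return the interfaces whose tags match the rule's tag."""
    return [name for name, tags in interface_tags.items() if rule_tag in tags]

# The first rule (tag 1) lands only on interface 146A; the second rule
# (tag 2) lands only on interface 146B, mirroring the examples above.
permit_targets = interfaces_for_rule("tag 1")
block_targets = interfaces_for_rule("tag 2")
```

Because matching is by tag rather than by enumerated interface, the same rule automatically covers any new interface that later inherits the tag.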
To update policies for a particular object or group of objects, an administrator may instruct policy controller23to distribute new policies to policy agents139of respective VN agents35of a plurality of data centers10. In some examples, policy controller23distributes the new policies via a BGP message. In this fashion, an administrator may manage a scalable, multi-dimensional policy framework for a plurality of data centers. Further, each policy agent139may automatically apply the relevant policies without requiring the administrator to individually manage or update policies for each VN agent35. Thus, the techniques of the disclosure may greatly reduce the complexity and simplify management of such policies across the plurality of data centers10. FIG.5is a block diagram of an example network500in which a controller23manages servers or compute nodes and provides a visualization of various metrics associated with the performance of a computer network through a dashboard203in accordance with one or more aspects of the present disclosure. In the example illustrated inFIG.5, controller23is shown as part of cluster502. In other examples, however, controller23is not part of cluster502, and in such an example, “cluster502” refers to the compute nodes but not the controller. Cluster502ofFIG.5may represent a cloud-based computing network and/or a computing domain or project, or other types of computing clusters. Where cluster502ofFIG.5represents a cloud environment, such a cloud environment may be an OpenStack cloud environment or a Kubernetes cloud environment. Cluster502may be distributed across multiple environments, such as different data centers. Controller23may operate in the manner described and illustrated in connection withFIGS.1-4. For instance, in some examples controller23may interact with monitoring or policy agents (not shown inFIG.5, but e.g. 
policy agents35A-35X,FIG.1), which are deployed within one or more of servers12(i.e., hosts or compute nodes) for monitoring resource usage and other performance metrics related to the operations of the servers or physical compute nodes as well as any virtualized host or instance, such as VMs or containers, implemented on one or more compute nodes. As described herein, the agents within cluster502may gather and publish information on message bus215about various performance parameters and/or metrics associated with the performance of a computer network, which may include physical devices included in switch fabric20and additional virtual devices that couple servers12. The monitoring/policy agents may provide distributed mechanisms for collecting a wide variety of information related to the computer network coupling servers12, and provide the information to controller23. For example, the policy agents may issue communication probes, in some examples at some predetermined intervals, the probes directed to other policy agents on a same or a different server within the computer network. Various parameters, such as the receipt of a response at the policy agent that was the destination of an issued probe, and a measured time lapse between the time of issuance of the probe and the time of receipt of a response from the destination agent at the source agent, may be measured and recorded for each communication probe that is issued. Various metrics and performance statuses may be measured and/or calculated/determined based on the issuance of the probes and/or based on the information reported to controller23by the policy agents issuing the probes and monitoring the responses to the issued probes. Further, controller23may issue policies and rules, for example saved as policies202(FIG.1), which are communicated to one or more of the policy agents issuing probes.
The policies and rules may be used to control the issuance of the probes and/or to provide rules related to the gathering of information related to the issuance of the probes. For example, the format for a communication protocol to be used for the issued probes and/or the time interval for issuing the probes may be defined by the policies provided to one or more of the policy agents by controller23. Further, the policies issued to the policy agents by controller23may define what types of data and/or other information are to be collected by the policy agents based on the issuance of the probes, and how and when to report the collected data or other information to the controller23. Dashboard203may be implemented in the manner described and illustrated in connection withFIGS.1-4. As described in connection withFIG.1, dashboard203may be created, updated, and/or maintained primarily by controller23and/or by a dashboard module executing on policy controller23. In some examples, a computing device separate from controller23may generate dashboard203. As illustrated inFIG.5, controller23may generate dashboard203, which may represent a collection of graphical user interfaces (e.g., including user interface510) that provide information about topology, structure, hierarchy, utilization, and/or performance statuses and metrics associated with infrastructure elements. Controller23generates and outputs a graphical user interface screen, e.g., user interface510, for display by UI device129. In the example ofFIG.5, user interface510includes a set of application icons arranged in an application selection area511, a view control area512, and a main display area513. A graphical representation of a set of aggregates displayed within the main display area513corresponds to aggregates of policy agents located on servers12, and grouped together to form the individual aggregates shown as the interconnected circles in user interface510.
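The probe issuance and latency measurement described above can be sketched as follows. The probe itself is simulated by a callable; a real agent would use the protocol and interval configured by the controller's policies, and the reporting structure shown is an assumption.

```python
# Sketch of a policy agent issuing a communication probe and recording the
# elapsed time until a response arrives. The probe exchange is simulated;
# the result dictionary layout is an illustrative assumption.

import time

def issue_probe(send_probe):
    """Issue one probe and return (responded, elapsed_seconds)."""
    start = time.monotonic()
    responded = send_probe()           # e.g., an ICMP/TCP/UDP exchange
    elapsed = time.monotonic() - start
    return responded, elapsed

# Simulated destination agent that always responds.
responded, elapsed = issue_probe(lambda: True)
result = {"responded": responded, "latency_s": elapsed}
# An agent would batch such results and report them to the controller on the
# schedule and in the format specified by the controller's policies.
```

Repeating this at the controller-configured interval yields the per-probe response and latency records from which the dashboard metrics are derived.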
To form aggregates, controller23may use tag information configured for servers12and referenced in policies applied by controller23. Such tag information may identify each of servers12as hosting one or more instances each tagged with a label for an aggregate (e.g., “web”, “db”, “EBC105”, etc.). A server12that hosts instances tagged with different labels for aggregates inherits the labels, in which case the server12may belong to multiple aggregates. Controller23associates results data from the policy agents for the servers12with aggregates that include the servers, such as by the servers hosting instances that belong to the aggregates. In addition, one or more of infrastructure element representations provided in main display area513may include an indicator (e.g., a color or other visual indicator) that provides information about utilization, metrics, health, condition, and/or other status information relating to infrastructure of network500. For instance, a graphical indicator, such as an arrowhead, which is associated with an aggregate that is being visually represented in main display area513may be rendered in a particular color that represents a status associated with a status or a metric selected for the current display being rendered as user interface510. For example, a metric associated with a graphical display may relate to latency rates for communications sent from and returned to a particular agent or set of agents within a same aggregate of the computer network. Different colors may be rendered for a graphical indicator associated with the particular agent or set of agents associated with a same aggregate based on a status determined for the agent or set of agents and based on a measurement of the selected metric, e.g., the measured latency for communication probes issued by the agent or set of agents to another agent or set of agents.
A red color indicator might represent a latency rate having a measured time value that exceeds a first threshold value, a yellow color indicator might represent a latency rate that falls within a middle range of threshold time values, and a green color indicator might represent a latency rate that is less than a threshold time value. A color indicator different from the colors used to indicate a status value for a measured latency value may be used for example when the communication link being indicated by the graphical indicator is considered to be “unavailable.” In some examples, controller23may, in response to inputs from a user (e.g., interactions with view control area512within user interface510), generate or update the graphical user interfaces within dashboard203so that infrastructure elements are filtered by device type, interconnections, health, risk, aggregate, project, network (virtual or physical), type, and/or in other ways. In such examples, a filter may cause some infrastructure elements to be hidden within dashboard203or a user interface of dashboard203, while presenting other infrastructure elements within dashboard203. Filters may be applied using functional paradigms. For example, each filter may correspond to a function, so that for a given set of “x” resources, elements, or other items to be filtered, and filter functions “f” and “g,” application of the filter may compute f(g(x)). Where filters follow functional paradigms, applying filters in a different order (e.g., g(f(x))) will have the same result. In some examples, some or all of filter functions are pure, so that no state information outside of the function's context is changed. Every time a filter value (or other user interface component) is changed, controller23may apply an appropriate filter to all the resources in a resource hash table, and then reassign the resource array to the resultant array.
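The functional filter paradigm just described can be sketched briefly: each filter is a pure function from a resource list to a filtered resource list, so composing predicate filters in either order, f(g(x)) or g(f(x)), yields the same result. The resource fields below are illustrative assumptions.

```python
# Sketch of pure, composable filters over a resource list. Filtering by a
# predicate in either order produces the same result, matching the
# f(g(x)) == g(f(x)) property described above. Fields are assumptions.

def by_type(kind):
    return lambda resources: [r for r in resources if r["type"] == kind]

def healthy(resources):
    return [r for r in resources if r["health"] == "ok"]

resources = [
    {"name": "web-1", "type": "server", "health": "ok"},
    {"name": "web-2", "type": "server", "health": "risk"},
    {"name": "sw-1",  "type": "switch", "health": "ok"},
]

f, g = by_type("server"), healthy
same = f(g(resources)) == g(f(resources))   # order-independent composition
```

Because the filters never mutate their input, the original resource list survives intact and the dashboard can reapply any filter combination as user selections change.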
Filters may be applied and/or reapplied when an interaction with user interface510is detected, when one or more infrastructure element representations within main display area513are selected or changed, when metric data is received by controller23, and/or in other situations. In some examples, as part of generating user interface510, the controller23provides a visual display of aggregates as a set of interconnected circles, which may be referred to as the “clusters view,” and which may be displayed as a default version of the user interface provided by dashboard203. The “clusters view” of the graphical user interface may also be displayed based on receiving an input to dashboard203of the selection of the “Clusters” button included in the view control area512, as further illustrated and described below with respect toFIG.6. In other examples, controller23provides an arrangement of a set of status blocks arranged in a grid of one or more columns and one or more rows, as further illustrated and described with respect toFIGS.7-17. Various user inputs may be provided to the user interface being displayed at any given time by dashboard203to allow for more or less detail related to a specific aggregate, a specific agent, or a specific set of agents to be rendered as the user interface being displayed by dashboard203. In addition, user inputs may be used to control the communication protocols used by the agents to issue and receive communication probes, the time intervals for issuing the probes, and the metric(s) to be measured and/or displayed as part of the graphical display being provided at any particular time as the user interface being generated by controller23and provided through dashboard203.
The systems, devices and methods as described in this disclosure allow the generation of these graphical user interfaces that depict, in real-time or near real-time, various metrics associated with the operation of the devices providing communications within a computer network being monitored. In addition, the graphical user interfaces allow display of status associated with a selected metric, such as communication times between agents, at a high or overall level, or at various levels of detail, based on user selectable inputs. Further, inputs to the system generating the graphical user interfaces allow a user to quickly select and change between different communication protocols for generating and transmitting the communication probes between agents and/or servers within the computer network, and to select for display values for various metrics that were measured and/or calculated based on the issuance of these communication probes. Although the graphic symbols as described throughout this disclosure may relate to physical devices comprising the computer network being modeled by the graphical user interfaces, the representations included in the examples of graphic user interfaces are not limited to only physical devices. Graphic symbols as described in this disclosure may also include graphic symbols representing virtual entities, such as virtual servers, virtual routers, and/or virtual instances of other computer/software entities that may have connections to one or more physical devices and/or to other virtual entities that may be graphically rendered and visually represented in a graphical user interface according to any of the examples described in this disclosure.
The graphical user interfaces described in this disclosure, and any equivalents thereof that may be generated and displayed using the systems, devices, and methods described herein, provide a visual tool that may allow a user, such as an information technology (IT) professional or a technician, to quickly and efficiently model, visualize, and locate issues associated with the performance of a computer network in real or near real-time. The user interfaces as described in this disclosure allow a user to view a variety of performance statuses and metrics associated with the operation of a computer network at a high overall level, and to quickly move to a more detailed-level user interface that provides a visual representation of a more specific portion or portions of the computer network based on simple inputs to the system generating the user interfaces, such as inputs to UI device129coupled to controller23. FIG.6illustrates an example graphical user interface610output by a computing device that presents a cluster view of aggregates configured for a computer network in accordance with one or more aspects of the present disclosure. The illustration of user interface610includes features that remain the same or similar to corresponding features illustrated and described with respect to user interface510andFIG.5. These corresponding features as shown inFIG.6in conjunction with user interface610retain the same reference numbers, and may provide the same or similar features and perform the same or similar functions as was described above with respect to user interface510. Such corresponding features may include application selection area511, view control area512, and the portion of the user interface indicated as the main display area513. Examples of differences between user interface510ofFIG.5and user interface610as illustrated and described with respect toFIG.6are further described below.
User interface610as shown inFIG.6includes application selection area511, view control area512, and a main display area513which may be the same or similar to the corresponding areas as described above with respect toFIG.5. As shown inFIG.6, application selection area511includes a plurality of user selectable applications, such as "Dashboard," "Charts," "Alarms," "Heat Map," "Plan," "Reports," "Chargeback," "Network Topology," and "Mesh Connectivity," with a graphical icon associated with each of the user selectable applications. When a user interacts with the user interface610, for example by positioning a cursor (e.g., cursor650) in the area where one of the selectable applications is located and indicating a selection of the particular selectable application, one or more applications may be launched, and various graphical displays associated with the selected application(s) may be presented in the view control area512and/or main display area513of user interface610. Selection of the text or the icon associated with "Mesh Connectivity" in area511as shown inFIG.6may be used to launch the application(s) used to generate and display the graphical user interfaces associated with monitoring computer networks as described throughout this disclosure. The user interface610as illustrated inFIG.6may appear as a default display when area603associated with the "Mesh Connectivity" text or the associated icon is first selected from within the application selection area511, and the application(s) used to generate the graphical user interfaces associated with monitoring a computer network is not already launched. In other examples, a display of the same or similar user interface to that illustrated inFIG.6may be provided when the application(s) used to generate the graphical user interfaces associated with monitoring a computer network is already launched and the user selectable "Clusters" button616provided within the "View" box611is selected. 
When an example of the "clusters view" as shown inFIG.6is being provided as the user interface being output from dashboard203, the view control area512may include a plurality of boxes, including a "View" box611, a "Search Node" box612, a "Filter Groups" box613, and a "Visual Analytics" box614. When the "clusters view" is being provided, the main display area513as shown inFIG.6may include a plurality of graphic symbols representing aggregates, such as aggregates631-638, illustratively shown as circles arranged generally in a circular arrangement relative to one another. Each circle representing one of the aggregates631-638includes a text label included within the circle, the text label indicative of a name assigned to that particular aggregate. As described above, each aggregate is a grouping together of a predefined number of agents that are included in a set of servers included in a computer network being monitored and modeled by the mesh connectivity application(s). Main display area513further includes a graphic depiction of a set of lines, each line of the set of lines extending between one of the circles representing an aggregate and another one of the circles representing a different aggregate. Each line of the set of lines in main display area513represents one or more communication interconnections between policy agent(s) included in the aggregates that the lines, respectively, extend between. For example, line640as shown in main display area513extends between the "web" aggregate638and the "migration" aggregate634. Line640has a first end that includes a first graphical indicator641and a second end that is opposite the first end and that includes a second graphical indicator642. Aggregate638may be referred to as a source aggregate and aggregate634as a destination aggregate. First graphical indicator641and second graphical indicator642as illustrated in user interface610appear as arrowheads located on the respective ends of line640. 
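The arrangement described above, where every pair of aggregate circles is joined by a line carrying a directional status indicator at each end, can be sketched in Python. This is an illustrative data-model sketch only; the function and field names are assumptions, not taken from the disclosure.

```python
from itertools import combinations

def connection_lines(aggregates):
    """Build the set of lines drawn between every pair of aggregates.

    Each line carries two directional indicators: the indicator drawn near
    one aggregate summarizes probes sourced from the aggregate at the
    opposite end of the line (as with indicators 641 and 642).
    """
    lines = []
    for a, b in combinations(aggregates, 2):
        lines.append({
            "endpoints": (a, b),
            # indicator near `b` reflects probes sourced from `a`, and vice versa
            "indicator_near": {b: {"source": a}, a: {"source": b}},
        })
    return lines

aggs = ["web", "migration", "analytics"]
print(len(connection_lines(aggs)))  # 3 lines for 3 aggregates
```

With eight aggregates, as in the figure, this pairwise construction yields 28 lines, each with two indicators.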
However, the graphical indicators641and642are not limited to any particular shape, and may include shapes other than an arrowhead, such as a dot, a square, an ellipse, or some other graphical shape or symbol. First graphical indicator641is positioned at the end of line640proximate to the “migration” aggregate634, and the second graphical indicator642is positioned at the end of line640proximate to the “web” aggregate638. First graphical indicator641may provide a visual indication of a status determined based on at least one metric associated with communications originating from the one or more policy agents of nodes grouped into aggregate638and having a destination or destinations associated with one or a plurality of agents of nodes grouped into aggregate634. For the purpose of illustration, an example of the “web” aggregate638as illustrated in user interface610includes two agents shown as square shapes labeled “A” and “B” and positioned within the circle representing the “web” aggregate638, and the “migration” aggregate634includes two agents shown as square shapes labeled “C” and “D” and positioned within the circle representing “migration” aggregate634. Communication probes originating from aggregate638may be generated from agent “A” as the source, and directed to both agent “C” and to agent “D” of aggregate634as the destinations for these probes. Probes originating from aggregate638may also be generated by agent “B” as the source, and directed to both agent “C” and to agent “D” of aggregate634as the destinations for these probes. One or more metrics may be collected as a set of results (“results data”) based on the issuance of the communication probes having an agent of aggregate638as the source and an agent of aggregate634as the destination for the probes. 
A status that may be determined based on an analysis of the set of results, and the visual indication of that status, may be provided by first graphical indicator641, for example on an updated and real-time or near real-time basis. The visual indication of the status provided by first graphical indicator641in some examples may be provided by rendering the graphic symbol associated with first graphical indicator641in one of a plurality of colors, each of the plurality of colors indicative of a particular status associated with the analysis of the set of results that were collected based on probes originating from the agent(s) of the "web" aggregate638and directed to agent(s) of the "migration" aggregate634. Continuing with the illustrative example described above, a set of probes may be configured to generate communication probes originating from agent "A" and directed to each of agents "C" and "D" of aggregate634, along with communication probes originating from agent "B" and directed to each of the agents "C" and "D" of aggregate634. The communication probes are configured to instruct the destination agents (i.e., agents "C" and "D") to respond to the source agent (i.e., agents "A" and "B"), respectively, with a response communication acknowledging receipt of the respective probes if and when the communication probes are received at the destination agent(s). At least one metric that may be collected with respect to the issuance of these communication probes concerns the availability or unavailability of a communication connection between the source agent and the destination agent. At least one metric that may be collected with respect to these communication probes includes a latency time measured between the time of issuance of the probe from the source agent and the time when a response to the issued probe is received back from the destination agent at the source agent. 
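The probe fan-out described above, in which every source agent probes every destination agent and the per-pair latencies are collected as one round of results, might look like the following minimal Python sketch. The `measure` function merely simulates a latency measurement (or a timeout) and is purely an assumption for illustration; nothing here is the disclosure's actual implementation.

```python
import random

def issue_probe_round(source_agents, dest_agents, timeout_ms=0.87):
    """One round of probes: every source agent probes every destination agent.

    `measure` is a stand-in for actually sending a probe and timing the
    response; it returns a simulated latency in milliseconds, or None when
    no response arrives within the timeout (an 'unavailable' link).
    """
    def measure(src, dst):
        latency = random.uniform(0.1, 1.0)
        return latency if latency <= timeout_ms else None

    results = {}
    for src in source_agents:
        for dst in dest_agents:
            results[(src, dst)] = measure(src, dst)
    return results

# Agents "A" and "B" of the source aggregate probing agents "C" and "D"
round_results = issue_probe_round(["A", "B"], ["C", "D"])
print(len(round_results))  # 4 source/destination pairs
```

Each key of the returned dictionary identifies one source/destination pair, matching the four probe paths in the two-agent example above.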
In some examples, when a response is not received back at the source agent within a predetermined amount of time following the issuance of a probe, the particular communication link between the source agent and the destination agent may be labeled as having an "unavailable" status. The measured response time for each communication probe issued from the agent or agents in a given aggregate may be collected as part of a set of results associated with the communication probes issued by the agent(s) of that particular aggregate. For example, the measured times for each of the probes issued by any of the agents in a particular aggregate to an agent in another particular aggregate may be individually compared to various ranges of threshold values, and a status assigned to each of the probes based on the response time measured for each issued probe that results in a response from the destination agent. In other examples, further analysis of the set of results may be performed to determine one or more overall statuses associated with the communication probes issued by the agent or agents in one aggregate to the agent or agents in another aggregate. For example, if any of the communication probes issued by the agent(s) of a given aggregate to agent(s) of another aggregate was/were determined to be "unavailable," an overall status for a set of results associated with the probes may be determined as "unavailable." In another example, if at least one probe originating from each of the agents included in an aggregate (such as aggregate638) and directed to at least one agent in another aggregate (such as aggregate634) was determined to be "unavailable," an overall status for the set of results associated with the probes may be determined as "unavailable." In another example, all of the probes originating from agents of aggregate638may have been responded to by the destination agents in aggregate634. 
In such instances, the analysis of the set of results associated with the probes may include determining which of the probes resulted in the slowest response time, i.e., which probe had the largest latency. The measured response time for the slowest response may then be compared to the ranges of threshold values for response times, and any overall status indicator for a graphic indicator associated with the source agent(s) may then be determined based on the particular range of time for the slowest communication probe. Setting of an overall status for an indicator associated with a status determined for an aggregate is not limited to the above-described examples, and may be based on other parameters and/or determinations based on the set of results or the analysis thereof associated with a set of communication probes issued by a particular agent or set of agents included in a particular aggregate. For example, an average and/or a median value for the latency times measured for a set of communication probes issued by agent(s) of an aggregate (such as the agents of aggregate638) and directed to agent(s) of another aggregate (such as aggregate634) may be calculated, and the calculated values used to determine an overall status associated with the communication probes being issued by the source aggregate, e.g., agents of aggregate638. Each of the communication probes may be configured based on a policy. The policy may dictate, for example, a communication protocol to be used for the formatting and transmission of the probe. In some examples, the communication protocol may be a user selectable protocol that a system user may provide as an input to the controller of the computer system being used to generate the probes. 
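The reduction rules described above (any missing response makes the whole set "unavailable"; otherwise the slowest probe's latency is bucketed against ascending thresholds) can be sketched as a single function. This is a minimal sketch under the assumptions in the text; the threshold values mirror the legend described later in the disclosure, and the function name is hypothetical.

```python
def overall_status(results,
                   thresholds=((0.57, "green"), (0.78, "yellow"), (0.87, "red"))):
    """Reduce one round of probe results to a single status for the
    source aggregate's indicator.

    `results` maps (source, destination) pairs to a latency in
    milliseconds, or None when the probe went unanswered. Any None makes
    the whole set 'unavailable'; otherwise the worst (largest) latency is
    compared against the ascending threshold bands.
    """
    latencies = list(results.values())
    if any(l is None for l in latencies):
        return "unavailable"
    worst = max(latencies)
    for limit, status in thresholds:
        if worst <= limit:
            return status
    return "unavailable"  # slower than the top of the red band

print(overall_status({("A", "C"): 0.30, ("A", "D"): 0.55}))  # green
print(overall_status({("A", "C"): 0.30, ("A", "D"): None}))  # unavailable
```

The average- or median-based variants mentioned in the text would simply substitute `statistics.mean` or `statistics.median` of the latencies for `max` before the threshold comparison.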
As shown in user interface610, the visual analytics box614includes a pull-down menu labeled “Protocol.” Using the selections available from this pull-down menu, a user may select a protocol, such as the “ICMP” protocol, (i.e., the “Internet Control Message Protocol”), which is then to be used by the system to configure the probes that are to be issued by the agents of the computer network being monitored through the issuance of the probes. In some examples, once a user selection of a particular protocol using the pull-down protocol menu has been received, any probes issued within the computer network being monitored will be configured using the selected communication protocol. In addition, the policy may also determine the rate or interval for issuing the probes within the computer network being monitored through the use of the communication probes. For example, the policy may dictate that communication probes are to be issued by agents of each aggregate, such as the “web” aggregate638, at some predefined interval of time. By way of example, at each time interval determined by the policy, the agent(s) included in the “web” aggregate638issues a set of communication probes to each agent included in the “migration” aggregate634. The responses and/or lack of responses received by the agents of the “web” aggregate638are collected as a set of results associated with that particular round of probes. Based on rules determined by the policy, an overall status is determined for the set of results associated with the particular round of probes, and a visual indication for first graphical indicator641may be rendered to provide a visual indication, such as a particular color indication, based on the determined overall status. The visual indication for first graphical indicator641may be updated in real-time as soon as the set of results for the next and latest set of probes has been analyzed. 
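The policy-driven behavior described here, a user-selected protocol plus a policy-defined issuing interval that triggers each round of probes, might be modeled as follows. `ProbePolicy` and `run_rounds` are hypothetical names invented for this sketch; the sleep between rounds is commented out so the sketch runs instantly.

```python
from dataclasses import dataclass

@dataclass
class ProbePolicy:
    """Hypothetical policy object holding the two knobs the text
    describes: the user-selected protocol and the issuing interval."""
    protocol: str = "ICMP"
    interval_s: float = 30.0

def run_rounds(policy, issue_round, rounds=2):
    """Issue probe rounds at the interval the policy dictates; each
    round's results would refresh the graphical indicators in turn."""
    collected = []
    for _ in range(rounds):
        collected.append(issue_round(policy.protocol))
        # time.sleep(policy.interval_s)  # omitted so the sketch runs instantly
    return collected

results = run_rounds(ProbePolicy(protocol="ICMP", interval_s=30.0),
                     issue_round=lambda proto: {"protocol": proto})
print(results[0]["protocol"])  # ICMP
```

Changing the pull-down selection would correspond to constructing a new `ProbePolicy` with a different `protocol`, after which all subsequently issued probes use the newly selected protocol, as the text describes.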
The visual indication for first graphical indicator641may be maintained until the next set of probes is issued by the agents of aggregate638, and a new set of results is collected and analyzed to determine a new overall status for first graphical indicator641. As such, the visual indication of the overall status for the communications provided by first indicator641includes a real-time or near real-time representation of at least a status or a metric defined by the policy and having agent(s) of the "web" aggregate638as the source and having agent(s) of the "migration" aggregate634as the destination. The above described examples have illustratively described communication probes issued by agents in the "web" aggregate638to agents in the "migration" aggregate634. Examples of the computer network monitoring systems may also include a set of communication probes issued from agents within each of the aggregates631-638to other agents located in different ones of aggregates631-638, and displaying a visual indicator based on the set of results associated with each of the source aggregates for the issued probes. Again referring to the "web" aggregate638and the "migration" aggregate634as shown in user interface610, each of the agents "C" and "D" of aggregate634may be configured to issue probes to each of the agents "A" and "B" of aggregate638. The probes issued by the agents of the "migration" aggregate634may be configured, for example, based on rules included in the policy and/or based on user selections as described for the probes issued by the agents of the "web" aggregate638. For example, the policy or a user selection may be used to set a communication protocol that is then used to configure and transmit the probes issued by the agents included in "migration" aggregate634to agents of the "web" aggregate638. The interval for issuing sets of probes from the agents included in aggregate634may also be determined by rules included in the policy. 
In a manner similar to any of the examples described above with respect to probes issued by the "web" aggregate638, the probes issued by the "migration" aggregate634may be monitored for various aspects, such as response or lack of response and/or for measured response times. The policies and rules provided to each individual agent or set of agents within an aggregate do not necessarily need to be the same set of policies or rules dictated by controller23to other aggregates of the computer network. For example, the time interval over which the agent(s) of one aggregate or the agents within a given aggregate are instructed to issue probes may be different, for example shorter or longer, than the interval over which the agent(s) of a different aggregate are instructed to issue their communication probes. Further, the policies or rules provided by controller23to an aggregate may include an indication of which agent or agents are to issue probes, and, for example, which agent or agents within an aggregate are not to issue probes. The policies or rules provided by controller23to a particular aggregate may include an indication of which other agent or agents the source agent in the particular aggregate is to issue probes to, and/or which agent or agents the source agent is not to issue probes to as a destination agent. In various examples, the policies or rules issued by controller23to an aggregate may include an indication of what metrics and/or performance parameters the agent issuing the probes is required to collect data or other information for, and how this data and/or other information is to be reported to the controller23. 
The measurements made based on monitoring the probes issued by the agents included in "migration" aggregate634may be collected as a set of results associated with the issuance of these probes, and analyzed to determine at least one metric and/or an overall status for the set of results associated with the issuance of the probes. In a manner similar to that described above with respect to any of the examples related to first graphical indicator641, the second graphical indicator642may be located at the second end of line640and proximate to the circle representing the "web" aggregate638, and may be rendered to include a visual indication, for example a particular color of one of a plurality of colors, that represents the determined status associated with the set of results collected for the latest probes issued by the agents of aggregate634. Based on the setting of the visual indications provided by the first graphical indicator641and the second graphical indicator642, a real-time or near real-time visual representation of statuses associated with both the communications originating from and directed to, respectively, the "web" aggregate638and the "migration" aggregate634may be provided as part of user interface610. As further illustrated inFIG.6, each of the circles representing aggregates631-638includes an individual line connecting each aggregate to one other respective aggregate, each line having a first graphical indicator included at a first end of the line proximate to one aggregate and a second graphical indicator included at a second end of the line opposite the first end of the line and proximate to a second aggregate that the line extends to. 
In some examples, the first graphical indicator and the second graphical indicator for each line may be rendered to provide a visual indication of the status determined for communication probes sourced from the respective agents of the aggregate pointed to or otherwise indicated by the graphical indicator in the user interface610and directed to the agent(s) of the aggregate at the opposite end of the line extending between the two aggregates. As such, a communication status for each and every aggregate with every other aggregate included within a computer network being monitored by the use of the probes may be visually provided in a single display, such as the graphical user interface610illustrated byFIG.6. Thus, examples of the "clusters view" as illustratively shown by user interface610may be used to provide a visual representation of a set of overall and/or high-level statuses in real-time or near real-time associated with a group of aggregates forming a computer network that is being monitored by the issuance of communication probes over the network structure used to communicatively couple a plurality of servers forming at least a portion of the computer network. Additional portions of view control area512and/or portions of the main display area513may provide user selectable areas, for example areas selectable by a cursor such as cursor650. The display being provided by dashboard203as user interface610may be manipulated and/or changed based on user inputs to control and change the visual aspects of the user interface610being displayed and/or the parameters used to control the issuance of the probes being used to collect the sets of results being analyzed to provide the displayed results. 
For example, based on user inputs, the "Nodes" button617in the "View" box611may be selected to provide a more detailed view of the monitored computer network that includes only certain aggregates of the available aggregates, e.g., communications between a selected pair of aggregates631-638. The aggregates to be included in the more detailed view may be chosen based on user inputs provided to the "X-Axis" and the "Y-Axis" pull-down menus provided in the "Filter Groups" box613. These inputs used to request a more detailed view may result in different examples of a graphical user interface currently being provided as an output by a computer system, such as the computer system generating the graphical user interface610. In other examples, selecting one of the graphical indicators (e.g., first graphical indicator641, second graphical indicator642), which are associated with one of the lines extending between the circles representing aggregates631-638, may cause a graphical user interface to be generated that displays a more detailed view of the agents of the selected source aggregate and the destination agents in the aggregate coupled to the source aggregate by the line associated with the selected graphical indicator. Examples of these more detailed views are further illustrated and described below with respect toFIGS.7-17. The example graphical user interface610as shown inFIG.6provides an efficient display of the overall statuses that may be associated with an entire computer network being monitored through the use of the probes issued by the agents of the computer network. As such, a user such as an IT specialist or technician, may easily monitor and spot trouble or potential trouble with respect to the communication couplings between the servers within the monitored computer network on a real-time or a near real-time basis. 
This includes providing graphical user interfaces as described herein for computer networks that may be large and expansive, and/or may include computing devices that may be located in different physical locations, such as different geographical locations, but presented together in simple-to-understand graphical user interfaces. In addition, features of the graphical user interface610allow a user to drill down into portions of the monitored computer network to obtain a more detailed view, and thus a better perspective of specific portions and/or devices within the monitored computer network that for example may be experiencing issues with communications between servers. The detailed views may be generated and provided based on one or a small number of user provided inputs, thus allowing speed and efficiency in maneuvering around the displays illustrative of the various portions of the computer network being monitored. FIG.7illustrates an example graphical user interface710output by a computing device that presents a detailed view of aggregates configured for a computer network in accordance with one or more aspects of the present disclosure. The illustration of user interface710includes features that remain the same as or similar to corresponding features illustrated and described with respect to user interface510inFIG.5and user interface610inFIG.6. These corresponding features as shown inFIG.7in conjunction with user interface710retain the same reference numbers, and may provide the same or similar features and perform the same or similar functions as were described above with respect to user interfaces510and610. Such corresponding features may include application selection area511, view control area512, and main display area513. Examples of differences between user interface510ofFIG.5, user interface610ofFIG.6, and user interface710as illustrated and described with respect toFIG.7are further described below. 
The main display area513of user interface710as shown inFIG.7may be generated in response to a user input selecting the "Nodes" button617in the "View" box611, and specifying the aggregates to be displayed by providing inputs to the "X-axis" and the "Y-axis" menu selections made using the pull-down menus in the "Filter Groups" box613. For example, user inputs may be received indicative of a selection of a particular aggregate, for example the "web" aggregate, in the "X-axis" menu of the "Filter Groups" box613, along with user inputs indicative of a selection of another and different aggregate, for example the "migration" aggregate634, in the "Y-axis" menu of the "Filter Groups" box613. As shown in user interface710, the "X-axis" menu displays the word "web" and the "Y-axis" menu displays the word "migration" in the pull-down menus, respectively, included in the "Filter Groups" box613. These selections may result in a display as shown in the main display area513of user interface710. In other examples, main display area513of user interface710as shown inFIG.7may be generated in response to a user input selecting the first graphical indicator641in the user interface610illustrated inFIG.6, for example by using cursor650to select (e.g., click on) the area of the user interface610ofFIG.6that is displaying the first graphical indicator641. As shown inFIG.7, user interface710includes a graphic depiction of a circle representing the "web" aggregate638coupled by line640to a circle representing the "migration" aggregate634. The graphical representation of aggregate638and aggregate634coupled by line640may be positioned across a lower portion, for example within a lower third of main display area513, with line640extending in a horizontal orientation between the circles representing aggregates638and634. 
As shown inFIG.7, the circle representing the "web" aggregate638is positioned toward the lower left-hand side of the main display area513, while the circle representing the "migration" aggregate634is positioned toward the lower right-hand side of the main display area. The first graphical indicator641may be illustrated as part of user interface710, and if illustrated, may be positioned at a first end of line640proximate to the circle representing the "migration" aggregate634. In some examples of user interface710, the first indicator641may be rendered, for example using a particular color, indicative of the overall status determined for probes issued by the agents of the "web" aggregate638to the agents of the "migration" aggregate634. In other examples, line640may be rendered, for example using a particular color, indicative of the overall status determined for probes issued by the agents of the "web" aggregate638to the agents of the "migration" aggregate634. The layout of the circles representing the aggregates coupled by the horizontal line may be referred to as a "dumbbell configuration," in view of a shape similar to an exercise dumbbell. User interface710also includes a group of status blocks731-734positioned above the graphical depictions of aggregates638,634and line640, the status blocks generally indicated by bracket730(also referred to as "status blocks730"). Status blocks730may be positioned approximately in the middle portion of main display area513. The individual status blocks731-734may each comprise a rectangular shape, with the rectangular shapes arranged in a grid pattern including two columns and two rows of status blocks. 
The number of columns included in the grid may correspond to the number of agents included in the source aggregate that is issuing communication probes to other (destination) agents, and the number of rows included in the grid may correspond to the number of destination agents in a different aggregate to which communication probes are being sent. Using user interface710as shown inFIG.7as an illustrative example, the agents of the "web" aggregate638have been selected to be the source of the communication probes, and the "migration" aggregate634has been selected to be the aggregate where the destination agent(s) for the communication probes have been grouped. Again for illustrative purposes, the "web" aggregate638includes two agents "A" and "B" that have issued communication probes to the two agents "C" and "D" grouped in the "migration" aggregate634. Because the "web" aggregate638includes two agents issuing communication probes, status blocks730includes two columns, the left-side column indicated by arrow "A," and the right-side column indicated by the arrow "B". In addition, because the "migration" aggregate634includes two destination agents for the communication probes being issued by agents "A" and "B," status blocks730includes two rows, the upper row indicated by arrow "C," and the lower row indicated by arrow "D." Each of the individual status blocks731-734may be rendered, respectively, for example using a particular color, which is indicative of a status associated with the issuance of communication probe(s) from one particular agent in the "web" aggregate638to one particular destination agent in the "migration" aggregate634. 
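The grid layout described above (one column per source agent, one row per destination agent) can be expressed compactly; the helper name and the callback are illustrative assumptions for this sketch, not the disclosure's implementation.

```python
def status_grid(source_agents, dest_agents, status_for):
    """Lay out the status blocks as a grid: one column per source agent,
    one row per destination agent, matching the two-by-two arrangement of
    blocks 731-734. `status_for(src, dst)` supplies the status to render
    for each source/destination pair."""
    return [[status_for(src, dst) for src in source_agents]
            for dst in dest_agents]

# Agents "A" and "B" as columns, agents "C" and "D" as rows
grid = status_grid(["A", "B"], ["C", "D"], lambda s, d: f"{s}->{d}")
print(len(grid), len(grid[0]))  # 2 rows, 2 columns
print(grid[0][0])  # A->C (column "A", row "C", i.e. block 731's position)
```

Selecting a source aggregate with three agents would simply make each inner list three entries long, yielding three columns.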
In the example user interface710, status block731may be rendered in a particular color, for example in red, yellow, or green, to represent a status measured or determined for a selected metric associated with the communication probe(s) issued by agent "A" of the "web" aggregate638to agent "C" of the "migration" aggregate634as the destination agent. As shown in user interface710, the selected metric is the "Latency" value, measured in milliseconds, as illustrated by the "Metric" menu in the "Visual Analytics" box614shown in the view control area512. The "Visual Analytics" box614also includes a legend, generally indicated by bracket615(referred to as "legend615"), which includes a key showing colored circles620,621,622, and623representing the colors used to depict various statuses determined for each of the status blocks730. For example, in legend615a first circle620may be green in color, and indicates a status for latency associated with the issuance of a communication probe having a measured value between zero and 0.57 milliseconds. A second circle621may be yellow in color, and indicates a status for latency associated with the issuance of communication probes having measured values between 0.57 and 0.78 milliseconds. A third circle622may be red in color, and indicates a status for latency associated with the issuance of communication probes having a measured value of 0.78 to 0.87 milliseconds. In some examples, a fourth circle623may be a color, such as grey, that indicates an "unavailable" status associated with the latency of communication probes. The status of "unavailable" in some examples may indicate a latency value that exceeds 0.87 milliseconds for an issued communication probe, and/or that the issued probe was never responded to by the destination agent. 
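The legend bands just described map directly to a threshold function. The boundary values (0.57, 0.78, 0.87 ms) come straight from the text; the function name is a hypothetical chosen for this sketch.

```python
def latency_color(latency_ms):
    """Map a measured latency to the legend colors described for legend 615.

    A missing response (None), or anything slower than the top of the red
    band (0.87 ms), is rendered grey ('unavailable')."""
    if latency_ms is None or latency_ms > 0.87:
        return "grey"    # circle 623: unavailable
    if latency_ms <= 0.57:
        return "green"   # circle 620: 0 to 0.57 ms
    if latency_ms <= 0.78:
        return "yellow"  # circle 621: 0.57 to 0.78 ms
    return "red"         # circle 622: 0.78 to 0.87 ms

print(latency_color(0.30))  # green
print(latency_color(0.80))  # red
print(latency_color(None))  # grey
```

As the following paragraph notes, these particular bands are one non-limiting example; a different metric or policy could substitute a different number of bands and thresholds.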
The above described example of legend615and the use of colors as the status indicators for the latency is intended to be a non-limiting example, and examples of user interfaces as described in this disclosure are not limited to use of color or to any particular range of values associated with statuses that may be determined for a particular metric. In some examples, another form of visual indication, such as different patterns (e.g., cross-hatch, vertical, horizontal, and diagonal lines) may be used instead of or in conjunction with color to represent a particular status associated with a selected metric. In addition, different metrics may have a different number, for example, two, three, four or more different statuses that may be associated with a measured value for the metric. As such, legend615may change from the illustrative example shown by user interface710to display a number of different statuses and associated text explaining a range or ranges of values associated with the different statuses based on the selected metric being displayed by the status indication being provided by the user interface. In some examples, the number and range of values associated with a measured or determined value for a metric may be predefined as part of the policy being used to generate a user display such as user interface710that provides a visual indication of one or more statuses associated with the communication probes issued within a computer network. Returning to the example of individual status blocks, status block731may be rendered in one of the colors depicted by legend615based for example on the latest measured latency time for a communication probe issued by agent "A" to agent "C" as the destination for the probe. 
The color of status block731may be updated to represent the latest latency value measured from the last probe issued by agent “A” to agent “C,” and thus is updated in real-time or near real-time to depict the latest status associated with latency for these communication probes. In other examples, an average value, for example over some pre-determined number of the most recently issued probes issued by agent “A” to agent “C,” may be calculated as a value used to determine the current status for the latency metric associated with probes issued by agent “A” to agent “C,” and thus to determine the color to be used to render status block731in user interface710. In a similar manner, a status may be determined for each of status blocks732,733, and734, and these blocks may be rendered, respectively, in user interface710to depict the current status determined for each of these blocks. For example, a current status related to latency may be determined for communication probe(s) issued by agent “B” to agent “C,” and block732, which is positioned in column “B” and row “C,” may be rendered for example in a color indicative of the determined status for these probes. A current status related to latency may be determined for communication probe(s) issued by agent “A” to agent “D,” and block733, which is positioned in column “A” and row “D,” may be rendered, for example in a color, which indicates the determined status for these particular probe(s). A current status related to latency may be determined for communication probe(s) issued by agent “B” to agent “D,” and block734, which is positioned in column “B” and row “D,” may be rendered, for example in a color, which indicates the determined status for these particular probe(s). 
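The averaging variant described above, in which the status for a source/destination agent pair is derived from some pre-determined number of the most recent probes, might be sketched as follows. The class name, the default window size, and the use of None for "no probes yet" are illustrative assumptions.

```python
from collections import deque


class ProbeLatencyTracker:
    """Keeps the N most recent latency samples for one
    source/destination agent pair and reports the average
    that would be used to pick a status color."""

    def __init__(self, window=5):
        # deque with maxlen discards the oldest sample automatically
        self.samples = deque(maxlen=window)

    def record(self, latency_ms):
        self.samples.append(latency_ms)

    def average(self):
        if not self.samples:
            return None  # no probes yet -> "unavailable"
        return sum(self.samples) / len(self.samples)
```

A real-time display would instead use only the most recent sample; the window size is a tuning choice between responsiveness and stability.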
Thus, a status for each set of communication probes being issued by the agents in a particular aggregate, such as “web” aggregate638, to each of the individual agents grouped in another aggregate, such as “migration” aggregate634, may be displayed at one time in a compact format provided by the status blocks730. The number of columns and the number of rows provided within the status blocks730may be varied based on the selection of a new source aggregate and/or a new destination aggregate for display as part of user interface710. For example, if a new and different aggregate is selected for display as the aggregate having the source agents issuing the communication probes, for example using the “Search Source” field in the “Search Node” box612, a number of columns included in status blocks730may be reset based on the number of agents located in the newly selected aggregate that is/are issuing communication probes. In another example, a new and different aggregate may be selected for display as the aggregate having the destination nodes for receiving the communication probes, for example using the “Search Destination” field in the “Search Node” box612. A number of rows included in the status blocks730may be reset based on the number of destination agents located in the newly selected aggregate that includes the destination agents. The statuses visually depicted by the reconfigured status blocks730are updated based on the determined status values for the selected metric measured or determined for the newly selected source and/or destination agents. The selection of a new source and/or destination agent may also be made in some examples by providing inputs to the “X-axis” and/or the “Y-axis” menus provided in the “Filter Groups” box613. 
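A minimal sketch of how the grid of status blocks might be resized when a new source and/or destination aggregate is selected: columns track the agents issuing probes, rows track the destination agents. The function names and the "unknown" initial status are illustrative assumptions.

```python
def grid_shape(source_agents, destination_agents):
    """Columns = agents in the selected source aggregate;
    rows = agents in the selected destination aggregate."""
    return len(source_agents), len(destination_agents)


def make_status_grid(source_agents, destination_agents):
    # One cell per (source, destination) pair, keyed so a block
    # can be looked up when it is selected in the UI.
    return {(s, d): "unknown"
            for s in source_agents
            for d in destination_agents}
```

Reselecting an aggregate simply rebuilds the grid: for example, the two-source, two-destination case ofFIG.7yields a 2×2 grid, while the single-source case ofFIG.12yields a 1×2 grid.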
As such, user interface710may provide a quick and convenient way to visually display the status associated with a selected metric for all agents issuing communication probes between any two aggregates within a computer network, and to quickly and conveniently move between different source aggregates and/or different destination aggregates. User interface710may provide one or more additional forms of graphical information that may be helpful to a user viewing the user interface. For example, user interface710may include headers711and712. Header711may include text extending in a horizontal orientation and above the position of the status blocks within main display area513. Header711may include an indication of the agent(s) included in a particular aggregate that includes the agents operating as the source of the communication probe(s) for which statuses are being displayed as part of user interface710. By way of example, header711as shown in user interface710includes text “Hosts in web Aggregate,” indicating that the columns included as status blocks730represent agents within the “web” aggregate of the computer network. Header712may include text extending in a vertical orientation and located to the left-hand side of the position of the status blocks730within main display area513. Header712may include an indication of the agents included in a particular aggregate that are the destination agents for the communication probes being issued by the agents represented by the columns of the status blocks730. By way of example, header712as shown in user interface710includes text stating, “Hosts in migration Aggregate,” indicating that the rows included as status blocks730represent agents within the “migration” aggregate of the computer network. Each of these headers may be automatically updated when a new source aggregate and/or a new destination aggregate is selected for display as part of user interface710. 
In other examples, user interface710includes a general status box740. General status box740is not limited to having any particular format, and may include any type of text and/or graphical elements, such as graphical symbols, that may appear somewhere in the main display area513. As shown inFIG.7, general status box740includes a listing of text and associated numerical values, including text related to “Average latency,” “Max Latency,” and “Min Latency.” Each of these text listings is associated with a numerical value that may be a measured or determined value associated with the text, respectively. For example, as illustrated, the selected metric associated with user interface710is “latency.” The information provided in the general status box740is associated with an average value, a maximum value, and a minimum value measured or determined for latency associated with the communication probes being issued by agents “A” and “B” to destination agents “C” and “D.” The average latency value of 0.21 illustrated in box740may be the average latency value calculated based on latency values associated with each of status blocks731-734. The maximum latency value of 0.22 illustrated in general status box740may be the highest measured latency for any of the communication probes associated with one of status blocks731-734, and the minimum latency value of 0.19 illustrated in the general status box740may be the lowest measured latency value for any of the communication probes associated with one of the status blocks731-734. Different levels of information, and information related to different aggregates at a same level of detail within the computer network, may be provided in response to user inputs to change user interface710to a different user interface displaying a different graphical display. 
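The average, maximum, and minimum values shown in general status box740might be computed as sketched below. The input latencies used in the test are hypothetical values chosen only to be consistent with the figures shown in box740(0.21, 0.22, and 0.19), and the two-decimal rounding is an assumption matching the displayed precision.

```python
def summarize_latencies(latencies_ms):
    """Average, maximum, and minimum over the latency values
    associated with the displayed status blocks, as presented
    in a general status box such as box 740."""
    return {
        "Average Latency": round(sum(latencies_ms) / len(latencies_ms), 2),
        "Max Latency": round(max(latencies_ms), 2),
        "Min Latency": round(min(latencies_ms), 2),
    }
```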
For example, a user input indicating a selection of the “Cluster” button616may cause the user interface being provided by dashboard203to return to a view like user interface610shown inFIG.6and displaying all of the aggregates of a computer network interconnected with graphical lines and status indications, such as graphical indicators641,642,643, and644as shown inFIG.6. In some examples, a cluster and an aggregate may refer to a same entity, e.g., a grouping of servers. A user input (e.g., via cursor650) indicating a selection of any one of the graphical status indicators may return the user interface to a display similar to user interface710as shown inFIG.7, but with the aggregates displayed in the “dumbbell” configuration and the associated status blocks for the source and destination aggregates associated with the selected graphical status indicator being displayed. As such, a user of the graphical user interfaces as described herein may toggle back and forth between a “Clusters view” similar to that shown as interface610inFIG.6and a user interface including a “dumbbell” configuration and status blocks showing more details related to agents grouped into a particular pair of source/destination aggregates. In another example, a user input indicating a selection of the “View all” button750in user interface710as shown inFIG.7may cause the user interface to be rendered to provide a grid of status blocks representing a status for each agent issuing communication probes to any other agent in the computer network. An example of such a user interface is further illustrated and described below with respect toFIGS.14-17. 
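The toggling described above, between the cluster overview, the per-pair "dumbbell" view, and the all-agents grid, might be modeled as a small transition table. The view and input names here are hypothetical shorthand for the UI elements described above, not identifiers from the disclosure.

```python
# (current view, user input) -> next view; unrecognized inputs
# leave the current view unchanged.
TRANSITIONS = {
    ("dumbbell", "Cluster"): "cluster",            # "Cluster" button 616
    ("cluster", "status_indicator"): "dumbbell",   # selecting an indicator
    ("dumbbell", "View all"): "grid",              # "View all" button 750
}


def next_view(current, user_input):
    return TRANSITIONS.get((current, user_input), current)
```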
In a further example, user input indicating a selection of any of the status blocks731-734included in status blocks730of user interface710as shown inFIG.7, or any status block displayed as part of a user interface for any of the agents included in the computer network, may result in additional details being displayed with respect to the communication probes and/or metrics and status information associated with the selected status block. Examples of these additional details being provided as part of a user interface are further illustrated and described below with respect toFIGS.8,10,11,13and15. FIG.8illustrates an example graphical user interface810output by a computing device that presents a detailed view of aggregates configured for a computer network in accordance with one or more aspects of the present disclosure. The illustration of user interface810includes features that remain the same as or similar to corresponding features illustrated and described with respect to user interface510inFIG.5and user interface610inFIG.6. These corresponding features as shown inFIG.8in conjunction with user interface810retain the same reference numbers, and may provide the same or similar features and may perform the same or similar functions as were described above with respect to user interfaces510and610. Such corresponding features may include application selection area511, view control area512, and main display area513. Examples of differences between user interface510ofFIG.5, user interface610ofFIG.6, and user interface810are further described below. In addition, features such as the general status box740, the “View all” button750, and status blocks731-734as illustrated and described with respect to user interface710andFIG.7may also be provided as part of user interface810, and may provide the same or similar features and perform the same or similar functions in user interface810as described with respect to user interface710. 
As shown in user interface810ofFIG.8, status block733is highlighted for example after being selected based on user inputs. Status block733may be selected based on user inputs made by positioning cursor650so that the cursor either hovers over status block733for some amount of time, such as one second, or is positioned over status block733within user interface810, and a selection input, such as an input from a computer mouse button, is received. Once selected, status block733may be highlighted for example by displaying a heavier and/or a different color line extending around the perimeter of the block. In addition or in the alternative, extension lines, such as extension lines815and817, may be generated that extend in vertical and horizontal directions, respectively, from status block733. The extension lines may be a particular color, such as a blue color, that further distinguishes the extension lines815and817from the other objects being displayed in main display area513of user interface810. Extension line815may extend away from block733in at least one direction and terminate in a text tag816. Text tag816may include text displayed within the perimeter of a shape, such as an oval as shown in user interface810. Text tag816may display the name of an agent associated with the column where status block733is located. For example, as shown in user interface810, text tag816includes the text “ace86,” indicating that the policy agent associated with the column where status block733is located is associated with the “ace86” policy agent located in the “web” aggregate638. Similarly, extension line817extends away from status block733, but in a horizontal direction. Text tag818is located near a termination point of extension line817. Text tag818may include text displayed within the perimeter of a shape, such as an oval as shown in user interface810. Text tag818may display the name of a policy agent associated with the row where status block733is located. 
For example, as shown in user interface810, text tag818includes the text “ace32,” indicating that the policy agent associated with the row where status block733is located is associated with the “ace32” policy agent located in the “migration” aggregate634. The shapes used for text tags816and818are not limited to any particular shape, such as an oval, and may be other shapes, such as rectangles, circles, triangles, or squares. Further, the type of line and/or the color used to display extension lines815and817and the outlining used to highlight status block733are not limited to a particular type of line or to a particular color of line, and may include other types of lines, such as dashed or centerline configurations, in one or more colors including black, that may be used to depict the extension lines and text tags as depicted by user interface810. In addition to or instead of generating the extension lines and text tags, selection of status block733may cause a pop-up box814to be displayed as part of user interface810. As shown inFIG.8, pop-up box814displays a table with data enclosed within the border of the pop-up box. In this illustrative example, pop-up box814may include a row labeled “Source,” a row labeled “Destination,” a row labeled “Status,” and a row labeled “Latency.” An associated entry for the “Source” row of pop-up box814indicates that the source agent issuing probes used in determining a status associated with a selected metric being depicted for status block733as part of user interface810is the agent named “ace86,” and an entry for the “Destination” row of pop-up box814indicates that “ace32” was the destination agent, which also corresponds to the labels displayed in text tags816and818, respectively. 
The “Status” row of pop-up box814indicates a status rating or ranking having a value of “1.” The determined latency, which is the selected metric being used in user interface810to determine a status associated with communication probes issued by agent “ace86” and directed to agent “ace32,” has a value of 0.19 (milliseconds) as indicated in the “Latency” row of pop-up box814. Similar but corresponding information, including extension lines and a pop-up text block that includes information related to any selected one of status blocks731-734, may be displayed when the respective block is selected within the main display area513. As such, user interface810is configured to quickly and efficiently display more detailed level information related to any of the status blocks being displayed within the main display area513for a status associated with and/or measured values for a selected metric for a selected pair of source/destination aggregates when status blocks731-734or similar are provided as part of a graphical user interface. FIG.9illustrates an example graphical user interface910output by a computing device that presents a detailed view of aggregates configured for a computer network in accordance with one or more aspects of the present disclosure. The illustration of user interface910includes features that remain the same as or similar to corresponding features illustrated and described with respect to user interface510inFIG.5and user interface610inFIG.6. These corresponding features as shown inFIG.9in conjunction with user interface910retain the same reference numbers, and may provide the same or similar features and may perform the same or similar functions as were described above with respect to user interfaces510and610. Such corresponding features may include application selection area511, view control area512, and main display area513. 
Examples of differences between user interface510ofFIG.5, user interface610ofFIG.6, and user interface910as illustrated and described with respect toFIG.9are further described below. In addition, features such as the “View all” button750as illustrated and described with respect to user interface710andFIG.7may also be provided as part of user interface910, and may provide the same or similar features and perform the same or similar functions in user interface910as described with respect to user interface710. User interface910as shown inFIG.9includes a “dumbbell” configuration in the lower portion of main display area513that includes a graphic depiction of a circle representing the “web” aggregate638coupled by line645to a circle representing the “ace88” aggregate633. The graphical representation of aggregate638and aggregate633coupled by line645may be positioned across a lower portion, for example within a lower third of main display area513, with line645extending in a horizontal orientation between the circles representing aggregates638and633. As shown inFIG.9, the circle representing the “web” aggregate638is positioned toward the lower left-hand side of the main display area513, and represents the aggregate having the agents that are the source for the communication probes being graphically represented by user interface910. The circle representing the “ace88” aggregate633is positioned toward the lower right-hand side of the main display area513, and represents the aggregate having the agent (e.g., a single agent “E” in this example) that is the destination for the communication probes being graphically represented by user interface910. The third graphical indicator643may be illustrated as part of user interface910, and if illustrated, may be positioned at a first end of line645proximate to the circle representing “ace88” aggregate633. 
User interface910also includes a group of status blocks931-932positioned above the graphical depictions of aggregates638,633and line645, the status blocks generally indicated by bracket930(also referred to as “status blocks930”). Status blocks930may be positioned approximately in the middle portion of main display area513. The individual status blocks931-932may each comprise a rectangular shape, with each rectangular shape arranged in a grid pattern including two columns and one row of status blocks. The number of columns included in the grid may correspond to the number of agents included in the source aggregate that is issuing communication probes to other (destination) agent(s), and the number of rows included in the grid may correspond to the number of destination agent(s) to which communication probes are being sent and that are located in a different aggregate. Continuing with the example configuration where the “web” aggregate638includes two agents “A” and “B” and the “ace88” aggregate includes a single agent “E,” status blocks930includes two columns indicated by arrow “A” and arrow “B,” respectively, and a single row indicated by arrow “E.” In a similar manner to that described above with respect to user interface710, user interface910as shown inFIG.9may provide an indication of a status determined for each of status blocks931and932. These status blocks may be rendered, respectively, in user interface910to depict the current status for the selected metric, for example latency, determined on an individual basis for each of status blocks931and932. The indication of that determined status may for example be shown by the color used to render the status blocks in the display of user interface910, which color is keyed to the legend615provided in the view control area512. 
For example, a current status related to latency may be determined for communication probe(s) issued by agent “A” to agent “E,” and status block931, which is positioned in column “A” and in row “E,” may be rendered for example in a color indicative of the determined status for these particular probes. A current status related to latency may also be determined for communication probe(s) issued by agent “B” to agent “E,” and status block932, which is positioned in column “B” and row “E,” may be rendered, for example in a color, which indicates the determined status for these particular probe(s). The determined statuses displayed by user interface910for each of the status blocks930may be based on the latest set of probes sent from the agents of the “web” aggregate638, or may be based on some other calculated metric, such as an average value for a number of previously issued communication probes issued by the “web” aggregate638to the agent of the “ace88” aggregate. Additional information may be provided in the main display area513of user interface910. For example, a header911may be provided that names the aggregate that is the source for the communication probes, and thus determines the number of columns included in status blocks930. Header912may provide the name of the aggregate where the destination agent or agents for the communication probes being depicted by user interface910are grouped. User interface910may include a general status box940similar to the general status box740described above with respect to user interface710andFIG.7. General status box940ofFIG.9may provide any of the types of information described above with respect to general status box740, but with numerical values representing the measured or determined values for one or more metrics, such as latency, associated with the communication probes being issued by the source aggregate to the agents of the destination aggregate being represented in user interface910. 
As shown inFIG.9, general status box940includes a listing of text and associated numerical values, including text related to “Average latency,” “Max Latency,” and “Min Latency” for the communication probes issued by the “web” aggregate638to the agent of the “ace88” aggregate633. User interface910may be generated for example in response to an indication of the selection of the third graphical status indicator643from a “cluster view” such as provided by user interface610ofFIG.6. In other examples, user interface910as shown inFIG.9may be generated in response to inputs to the “Filter Groups” box613indicating that the “web” aggregate is to be displayed as the source aggregate and the “ace88” aggregate is to be displayed as the destination aggregate. User interface910may also be configured to allow additional information related to the status associated with any of the status blocks included in the display of status blocks930to be displayed in a manner similar to that described above with respect toFIG.8and user interfaces710and810. Examples of these additional details being provided as part of a user interface are further illustrated and described below with respect toFIGS.10and11. FIG.10illustrates an example graphical user interface1010output by a computing device that presents a detailed view of aggregates configured for a computer network in accordance with one or more aspects of the present disclosure. User interface1010may be generated by dashboard203as a result of receiving an indication of the selection of status block931in user interface910, for example based on user inputs provided through manipulation of cursor650. When status block931is selected, the status block may be highlighted in user interface1010for example by displaying a heavier and/or a different color line extending around the perimeter of the block. 
In addition or in the alternative, extension lines, such as extension lines1015and1017, may be generated that extend in vertical and horizontal directions, respectively, from status block931. The extension lines may be a particular color, such as a blue color, that further distinguishes the extension lines1015and1017from the other objects being displayed in main display area513of user interface1010. Extension line1015may extend away from status block931in at least one direction and terminate in a text tag1016. Text tag1016may include text displayed within the perimeter of a shape indicative of a name of the agent associated with the column where status block931is located, for example “ace86” as illustrated inFIG.10. Similarly, extension line1017extends away from status block931, but in a horizontal direction. Text tag1018is located near a termination point of extension line1017, and may include text displayed within the perimeter of a shape indicative of the name of a policy agent associated with the row where status block931is located, for example “ace88” as illustrated inFIG.10. The shapes, colors and format of the extension lines and text tags are not limited to the particular example illustrated for user interface1010, and may be formed using other colors and/or other shapes for the extension lines and/or text tags provided by examples of user interface1010. In addition to or instead of generating the extension lines and text tags, selection of block931may cause a pop-up box1014to also be displayed as part of user interface1010. As shown inFIG.10, pop-up box1014includes a display of a table with data enclosed within the border of the pop-up box. 
In this illustrative example, pop-up box1014may include a row labeled “Source,” a row labeled “Destination,” a row labeled “Status,” and a row labeled “Latency.” An associated entry for the “Source” row indicates that the source agent issuing probes used in determining a status being depicted for status block931as part of user interface1010is the agent named “ace86,” and an associated entry for the “Destination” row indicates that the destination agent is “ace88,” which corresponds to the labels displayed in text tags1016and1018, respectively. The “Status” row of pop-up box1014indicates a status rating or ranking having a value of “1.” The determined latency, which is the selected metric being used in user interface1010to determine a status associated with communication probes issued from agent “ace86” and directed to agent “ace88,” has a value of 0.14 (milliseconds) as indicated in the “Latency” row of pop-up box1014. Similar but corresponding information, including extension lines and a pop-up text block that includes information related to any selected one of status blocks931-932, may be displayed when the respective status block is selected within the main display area513. As such, user interface1010is configured to quickly and efficiently display more detailed level information related to any of the status blocks being displayed within the main display area513for a selected metric and a selected pair of source/destination aggregates. User interface1010, when generated by dashboard203, provides additional details and information related to the one or more selected metrics associated with communication probes issued from a source agent to a destination agent, in this example communication probe(s) issued by agent “ace86” and directed to agent “ace88,” for example where the number of agents issuing the probes is different from (e.g., larger than) the number of destination agents the probes are being directed to. 
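Assembling the rows of a selection pop-up such as pop-up box814or1014might be sketched as follows. The numeric status scale (1 for green through 4 for grey) is a hypothetical assumption: the disclosure shows only a value of “1” without defining the full ranking.

```python
# Hypothetical mapping from a block's status color to the numeric
# "Status" rating shown in the pop-up; the disclosure defines only "1".
STATUS_RANK = {"green": 1, "yellow": 2, "red": 3, "grey": 4}


def popup_details(source, destination, latency_ms, color):
    """Build the Source/Destination/Status/Latency rows displayed
    when a status block is selected."""
    return {
        "Source": source,
        "Destination": destination,
        "Status": STATUS_RANK[color],
        "Latency": latency_ms,
    }
```

Under these assumptions, popup_details("ace86", "ace32", 0.19, "green") would reproduce the entries shown in pop-up box814.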
FIG.11illustrates another example graphical user interface1110output by a computing device that presents a detailed view of aggregates configured for a computer network in accordance with one or more aspects of the present disclosure. User interface1110may be generated by dashboard203as a result of receiving an indication of the selection of status block932in user interface910, for example based on user inputs provided through manipulation of cursor650. In a manner similar to that described above with respect to a selection of status block931as illustrated in user interface1010inFIG.10, when status block932is selected, the status block may be highlighted as shown in user interface1110inFIG.11. Highlighting may include any of the techniques described above for highlighting a selected status block, for example by displaying a heavier and/or a different color line extending around the perimeter of the block. In addition or in the alternative, extension lines, such as extension lines1115and1117, may be generated that extend in vertical and horizontal directions, respectively, from status block932. The extension lines may be a particular color, such as a blue color, that further distinguishes the extension lines1115and1117from the other objects being displayed in main display area513of user interface1110. The extension lines may terminate in text tags1116and1118, respectively. Text tag1116may include text displayed within the perimeter of a shape indicative of a name of the agent associated with the column where status block932is located, for example “172.16.85.11” as illustrated inFIG.11. Similarly, extension line1117extends away from block932, but in a horizontal direction. Text tag1118may include text displayed within the perimeter of a shape indicative of the name of a policy agent associated with the row where status block932is located, for example “ace88” as illustrated inFIG.11. 
The shapes, colors and format of the extension lines and text tags are not limited to the particular examples illustrated and described above for user interface1110, and may be formed using other colors and/or other shapes for the extension lines and/or text tags provided by examples of user interface1110. User interface1110may also include a display of pop-up box1114. As shown inFIG.11, pop-up box1114includes a display of a table with data enclosed within the border of the pop-up box. In this illustrative example, pop-up box1114may be formatted using any of the formats, or similar formats, and may include any of the types of information described above with respect to pop-up box1014and user interface1010, but including information specifically related to the agents and the statuses associated with the communication probes being issued by and directed to the agents associated with status block932. User interface1110, when generated by dashboard203, provides additional details and information related to the one or more selected metrics associated with communication probes issued from a source agent to a destination agent, in this example communication probe(s) issued by agent “172.16.85.11” and directed to agent “ace88.” FIG.12illustrates an example graphical user interface1210output by a computing device that presents a detailed view of aggregates configured for a computer network in accordance with one or more aspects of the present disclosure. The illustration of user interface1210includes features that remain the same as or similar to corresponding features illustrated and described with respect to user interface510inFIG.5and user interface610inFIG.6. These corresponding features as shown inFIG.12in conjunction with user interface1210retain the same reference numbers, and may provide the same or similar features and may perform the same or similar functions as were described above with respect to user interfaces510and610. 
Such corresponding features may include application selection area511, view control area512, and main display area513. Examples of differences between user interface510ofFIG.5, user interface610ofFIG.6, and user interface1210as illustrated and described with respect toFIG.12are further described below. In addition, features such as the “View all” button750as illustrated and described with respect to user interface710andFIG.7may also be provided as part of user interface1210, and may provide the same or similar features and perform the same or similar functions in user interface1210as described with respect to user interface710. User interface1210as shown inFIG.12includes a “dumbbell” configuration and a set of status blocks1230along with additional features arranged in a configuration similar to that illustrated and described above with respect to user interface910andFIG.9. For example, user interface1210as shown inFIG.12includes a “dumbbell” configuration in the lower portion of main display area513that includes a graphic depiction of a circle representing the “ace88” aggregate633coupled by line645to a circle representing the “web” aggregate638. However, in contrast to user interface910, user interface1210has the “ace88” aggregate positioned at the left-hand side of the “dumbbell” configuration, and the “web” aggregate positioned at the right-hand side of the “dumbbell” configuration. The fourth graphic indicator644, if provided as part of user interface1210, is positioned along line645and proximate to the circle representing the “web” aggregate638. The layout depicted in user interface1210indicates the single agent of the “ace88” aggregate is the source agent for the communication probes for which the statuses are indicated through the rendering of the status indications for status blocks1230, and the agents of the “web” aggregate638are the destination agents for these communication probes. 
Because, in the illustrative example shown inFIG.12for user interface1210, the source aggregate “ace88” has only one agent (e.g., agent “E”), a single column is included in status blocks1230, represented by arrow “E”. The two rows, represented by arrows “A” and “B,” represent the two agents “A” and “B,” respectively, receiving communication probes from the single agent of the “ace88” aggregate. Status block1231may be rendered in user interface1210to represent a status, in some examples using a color and in real-time or near real-time, determined for the communication probes issued by the agent “E” of the “ace88” aggregate and directed to agent “A” of the “web” aggregate. Status block1232may be located directly below status block1231in the same column where status block1231is located. Status block1232may be rendered in user interface1210to represent a status, in some examples using a color and in real-time or near real-time, determined for the communication probes issued by the agent “E” of the “ace88” aggregate and directed to agent “B” of the “web” aggregate. Additional information may be provided in the main display area513of user interface1210in a same or similar manner as described above with respect to user interface910andFIG.9, but including information corresponding to the source/destination agents being represented by the depiction provided in main display area513of user interface1210. For example, a header1211may be provided that names the aggregate that is the source for the communication probes, and thus determines the number of columns included in status blocks1230. Header1212may provide the name of the aggregate where the destination agent or agents for the communication probes being depicted by user interface1210are grouped, and thus also corresponds to the number of rows included in status blocks1230. User interface1210may include a general status box1240similar to the general status box740described above with respect to user interface710andFIG.7.
General status box1240ofFIG.12may provide any of the types of information described above with respect to general status box740, but with numerical values representing the measured or determined statuses for one or more metrics, such as latency, associated with the communication probes being issued by the source aggregate and directed to the agents of the destination aggregate being represented in user interface1210. As shown inFIG.12, general status box1240includes a listing of text and associated numerical values, including text related to “Average Latency,” “Max Latency,” and “Min Latency” for the communication probes issued by the “ace88” aggregate633and directed to the agents of the “web” aggregate638. User interface1210may be generated for example in response to an indication of the selection of the fourth graphical status indicator644from a “cluster view,” for example as provided by user interface610ofFIG.6. In other examples, user interface1210as shown inFIG.12may be generated in response to inputs to the “Filter Groups” box613indicating that the “ace88” aggregate is to be displayed as the source aggregate and the “web” aggregate is to be displayed as the destination aggregate. User interface1210may also be configured to allow additional information related to the statuses associated with any of the status blocks included in the display of status blocks1230to be displayed, in a manner similar to that described above with respect toFIG.8and user interfaces710and810. An example of these additional details being provided as part of a user interface such as user interface1210is further illustrated and described below with respect toFIG.13. FIG.13illustrates an example graphical user interface1310output by a computing device that presents a detailed view of aggregates configured for a computer network in accordance with one or more aspects of the present disclosure.
User interface1310may be generated by dashboard203as a result of receiving an indication of the selection of status block1231in user interface1210, for example based on user inputs provided through manipulation of cursor650. When status block1231is selected, the status block may be highlighted in user interface1310for example by displaying a heavier and/or a different color line extending around the perimeter of the block. In addition or in the alternative, extension lines, such as extension lines1315and1317, may be generated that extend in vertical and horizontal directions, respectively, from status block1231. The extension lines may be a particular color, such as a blue color, that further distinguishes the extension lines1315and1317from the other objects being displayed in main display area513of user interface1310. Extension line1315may extend away from status block1231in at least one direction and terminate in a text tag1316. Text tag1316may include text displayed within the perimeter of a shape indicative of a name of the agent associated with the column where status block1231is located, for example “ace88” as illustrated inFIG.13. Similarly, extension line1317extends away from status block1231, but in a horizontal direction. Text tag1318is located near a termination point of extension line1317, and may include text displayed within the perimeter of a shape indicative of the name of a policy agent associated with the row where status block1231is located, for example “172.16.85.11” as illustrated inFIG.13. The shapes, colors and format of the extension lines and text tags are not limited to the particular example illustrated for user interface1310, and may be formed using other colors and/or other shapes for the extension lines and/or text tags provided by examples of user interface1310.
In addition to or instead of generating the extension lines and text tags, selection of status block1231may cause a pop-up box1314to also be displayed as part of user interface1310. As shown inFIG.13, pop-up box1314includes a display of a table with data enclosed within the border of the pop-up box. In this illustrative example, pop-up box1314may include a row labeled “Source,” a row labeled “Destination,” a row labeled “Status,” and a row labeled “Latency.” An associated entry for the “Source” row indicates that the source agent issuing probes used in determining a status being depicted for status block1231as part of user interface1310is the agent named “ace88,” and the associated entry for the “Destination” row indicates that the destination agent is “172.16.85.11,” which corresponds to the labels displayed in text tags1316and1318, respectively. The “status” row of pop-up box1314indicates a status rating or ranking having a value of “1.” The determined latency, which is the selected metric being used in user interface1310to determine a status associated with communication probes issued from agent “ace88” and directed to agent “172.16.85.11,” has a value of 0.81 (milliseconds) as indicated in the “Latency” row of pop-up box1314. Similar but corresponding information, including extension lines and a pop-up text block that includes information related to any selected one of status blocks1231and1232, may be displayed with the respective status block that is selected within the main display area513. As such, user interface1310is configured to quickly and efficiently display more detail-level information related to any of the status blocks being displayed within the main display area513for a selected metric and a selected pair of source/destination aggregates.
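The pop-up box contents described above (the “Source,” “Destination,” “Status,” and “Latency” rows) may, in one non-limiting sketch, be assembled from probe results keyed by source/destination pair; the function name, field names, and data layout below are hypothetical assumptions and not part of the disclosed implementation:

```python
# Hypothetical sketch: assembling the rows shown in a pop-up box (such as
# pop-up box 1314) for a selected status block, from probe results keyed by
# (source agent, destination agent). All names here are assumptions.
def popup_details(probe_results, source, destination):
    """Return the Source/Destination/Status/Latency rows for one cell."""
    result = probe_results[(source, destination)]
    return {
        "Source": source,
        "Destination": destination,
        "Status": result["status"],               # e.g., a rank such as 1
        "Latency": f"{result['latency_ms']:.2f} ms",
    }

# Example keyed on the agents shown in the FIG. 13 illustration.
results = {("ace88", "172.16.85.11"): {"status": 1, "latency_ms": 0.81}}
details = popup_details(results, "ace88", "172.16.85.11")
```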
User interface1310, when generated by dashboard203, provides additional details and information related to the one or more selected metrics associated with communication probes issued from a source agent to a destination agent, in this example communication probe(s) issued by agent “ace88” that are directed to agents of the “web” aggregate, for example where the number of agents issuing the probes is different from (e.g., smaller than) the number of destination agents the probes are being directed to. FIG.14illustrates an example graphical user interface1410output by a computing device that presents a detailed view of statuses for policy agents configured for a computer network in accordance with one or more aspects of the present disclosure. The illustration of user interface1410includes features that remain the same as or are similar to corresponding features illustrated and described with respect to user interface510inFIG.5and user interface610inFIG.6. These corresponding features as shown inFIG.14in conjunction with user interface1410retain the same reference numbers, and may provide the same or similar features and perform in the same or a similar manner as described above with respect to user interfaces510and610. Such corresponding features may include application selection area511, view control area512, and main display area513. Examples of differences between user interface510ofFIG.5, user interface610ofFIG.6, and user interface1410as illustrated and described with respect toFIG.14are further described below. As shown inFIG.14, main display area513includes a grid of status blocks1430(referred to as “status blocks1430”), which includes individual status blocks (e.g., blocks1440(TYP)), which are arranged in rows and columns. The columns are generally indicated by arrows1431, and the rows are generally indicated by arrows1432.
Each column in the grid represents a single agent acting as a source of communication probes issued to a plurality of destination agents that are represented by the rows of status blocks. For example, the left-hand most column (generally indicated by arrow1433) of the grid includes status block1434in the upper-most row of the grid (generally indicated by arrow1436), and includes a series of additional status blocks positioned below status block1434within the same column1433. Each status block in column1433may provide a graphical indication of a status associated with the communication probes issued by a same source agent, for example agent “ace88,” and directed to a set of destination agents, each destination agent represented by an individual status block in one row of the grid and located within column1433. In some examples, a color is used to individually and respectively render each of status blocks1430included in the grid. The color chosen to render a given status block may represent the latest status determined for the communication probe(s) associated with the particular source agent and the particular destination agent represented by the given status block. For example, status block1434may be rendered in user interface1410using a red color to fill in the status block. The red color may be representative of a determined latency for communication probe(s) issued by the agent associated with column1433and directed to the destination agent associated with the upper-most row in the grid (generally indicated by arrow1436), e.g., as having a status value of between 0.78 and 0.87 milliseconds and assigned a red color based on the statuses depicted by legend615in the view control area512.
In another example, status block1435, also associated with the same agent acting as the source of the communication probes for column1433, may be rendered in a different color, such as green, indicative of the status determined for the communication probes issued by the source agent associated with column1433and directed to the destination agent associated with the row immediately below row1436(i.e., the second row from the top of the grid). Thus, each status block1440(TYP) provided within status blocks1430may be rendered in a color that is keyed to legend615and that represents a determined status associated with the selected metric, e.g., “latency,” for the particular source/destination agents associated with each individual status block. As such, the graphical depiction of status blocks1430provides a detail-level view of information related to a large number, or in some examples, all of the agents issuing communication probes to other agents in a computer network. The use of the color to render each status block provides a visual representation of the individual statuses that may allow a user viewing user interface1410to quickly determine which of the status blocks represent available and unavailable communication links between agents, and/or to quickly visualize a metric-related status determined for each of the “available” communication links between agents within the computer network being represented by user interface1410.
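The color-keyed rendering of status blocks described above may, in one non-limiting sketch, be implemented as a mapping from a measured latency to a legend color; the threshold bands and color names below are illustrative assumptions rather than values taken from legend615:

```python
# Sketch of mapping a measured latency (in milliseconds) to a legend color
# used to fill a status block. The bands are assumed for illustration; an
# implementation would take them from the legend configuration.
def latency_color(latency_ms, bands=((0.30, "green"), (0.60, "yellow"), (0.90, "red"))):
    """Return the color of the first band whose upper bound covers the latency."""
    for upper_bound, color in bands:
        if latency_ms <= upper_bound:
            return color
    return "red"  # latencies beyond the last band render in the worst color

# Under these assumed bands, a 0.81 ms latency falls in the 0.60-0.90 band
# and renders red, consistent with the red status block described above for
# a status value between 0.78 and 0.87 milliseconds.
```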
The indication of a determined status associated with each of the status blocks1430is not limited to being depicted via a color, and other indications of statuses, such as different patterns used to fill the status blocks, forms of animation such as flashing the graphical shapes or symbols representing one or more of the status blocks1430, and/or use of graphical symbols or text, for example within the perimeter of one or more of the status blocks1430, may be used to indicate a status associated with at least one metric being depicted through the rendering of the status blocks as part of user interface1410. In some examples, the status indications, for example the colors used to render each of the status blocks1430, may be updated in real or near real-time to indicate the status of a selected metric being measured or determined based on the most recent set of the communication probes issued from each of the source agents and directed to a destination agent. In other examples, the status indications provided by user interface1410may be updated on some periodic interval, or may be updated based on a running calculation, such as an averaging of an overall status determined for a set of communication probes associated with each pair of source/destination agents represented by the status blocks provided in user interface1410. Further, the number of columns and rows shown inFIG.14is an illustrative and non-limiting example of a display including status blocks1430. The number of columns and/or rows provided in a grid of status blocks may vary based, for example, on the number of agents issuing communication probes and the number of agents these communication probes are being directed to. In addition, the number and/or arrangement of the columns and rows may be determined based on a policy or a set of rules operated on by dashboard203and used in generating user interface1410.
For example, the order of the agents assigned to the columns included in a grid of status blocks1430may be arranged in an alphabetical order sorted using the names assigned to the agents, respectively, and the order of the rows may similarly be arranged in an alphabetical order based on the names assigned to the destination agents. In some examples, all of the source agents are also destination agents. In other examples, one or more of the source agents are not also destination agents, and one or more of the destination agents are not source agents depicted within an example of user interface1410. Additional information may be provided as part of user interface1410. For example, a first header1411may extend across main display area513in a horizontal orientation above the grid of status blocks1430, and may display text, for example “Host in all aggregates,” indicating that the agents represented in the “X-axis,” and thus by the columns in the grid of status blocks1430, represent all of the agents in the aggregates of a computer network. A second header1412may extend across main display area513in a vertical orientation and positioned along the left-hand side of the grid of status blocks1430, and may display text, for example “Host in all Aggregates,” indicating that the agents represented in the “Y-axis,” and thus by the rows in the grid of status blocks1430, represent all of the destination agents in the aggregates of a computer network. Examples of status blocks1430are not limited to having the individual blocks represented by a rectangular shape. Other shapes, such as circles, squares, triangles, ovals, and ellipses may be used to represent one or more of the status blocks included within status blocks1430.
Regardless of the shape of the status blocks included in status blocks1430, the user interface1410may be configured to display a status indicator of some type associated with at least one selected metric for each of the status blocks included in status blocks1430. In some examples, user interface1410having a configuration of status blocks1430or similar is the default view provided when a user input indicating selection of the “Nodes” button617is received by dashboard203. In other examples, a user interface having a configuration of status blocks1430or similar may be provided as user interface1410when a value of “All” is selected or entered for both the “X-axis” and the “Y-axis” fields included within the “Filter Groups” box613of the view control area512. In still other examples, a user interface having a configuration of status blocks1430or similar may be provided as user interface1410when an input indicative of the selection of the “View all” button750as shown for example inFIGS.7-13is received. In some examples, a user input indicating a selection of any one of the individual status blocks included in status blocks1430and being displayed as part of user interface1410may result in additional details being displayed with respect to the communication probes and/or metrics and status information associated with the selected status block. Examples of these additional details being provided as part of a user interface are further illustrated and described below with respect toFIG.15. FIG.15illustrates an example graphical user interface1510output by a computing device that presents a detailed view of aggregates configured for a computer network in accordance with one or more aspects of the present disclosure. The illustration of user interface1510includes features that remain the same as or are similar to corresponding features illustrated and described with respect to user interface510inFIG.5and user interface610inFIG.6.
These corresponding features as shown inFIG.15in conjunction with user interface1510retain the same reference numbers, and may provide the same or similar features as were described above with respect to user interfaces510and610. Such corresponding features may include application selection area511, view control area512, and main display area513. Examples of differences between user interface510ofFIG.5, user interface610ofFIG.6, and user interface1510as illustrated and described with respect toFIG.15are further described below. In addition, features such as the grid of status blocks1430as illustrated and described with respect to user interface1410andFIG.14may also be provided as part of user interface1510, and may provide the same or similar features and perform the same or similar functions in user interface1510as described with respect to user interface1410. As shown in user interface1510inFIG.15, status block1437of status blocks1430is highlighted within the user interface, for example after being selected based on user inputs. Status block1437may be selected based on user inputs made by positioning cursor650so that the cursor either hovers over status block1437for some amount of time, such as one second, or is positioned over status block1437within user interface1510, and a selection input, such as a computer mouse button input, is received. Once selected, status block1437may be highlighted for example by displaying a heavier and/or a different color line extending around the perimeter of the status block. In addition or in the alternative, extension lines, such as extension lines1515and1517, may be generated that extend in vertical and horizontal directions, respectively, from status block1437. The extension lines may be a particular color, such as a blue color, that further distinguishes the extension lines1515and1517from the other objects being displayed in main display area513of user interface1510.
Extension line1515may extend away from block1437in at least one direction and terminate in a text tag1516. Text tag1516may include text displayed within the perimeter of a shape, such as an oval as shown in user interface1510. Text tag1516may display the name of an agent associated with the column where status block1437is located. For example, as shown in user interface1510, text tag1516includes the text “ace38,” indicating that the policy agent associated with the column where block1437is located is the source agent “ace38” issuing the communication probes having a status being depicted by status block1437. Similarly, extension line1517extends away from status block1437, but in a horizontal direction. Text tag1518is located near one termination point of extension line1517. Text tag1518may include text displayed within the perimeter of a shape, such as an oval as shown in user interface1510. Text tag1518may display the name of a policy agent associated with the row where block1437is located. For example, as shown in user interface1510, text tag1518includes the text “ace85,” indicating the destination agent to which the communication probes issued by the “ace38” agent, and having a status being depicted by status block1437, are directed. The shapes used for text tags1516and1518are not limited to any particular shape or to including an oval shape, and may be other shapes such as rectangles, circles, triangles, or squares. Further, the type of line and/or the color used to display extension lines1515and1517and the outlining used to highlight status block1437are not limited to a particular type of line or to a particular color of line, and may include other types of lines, such as dashed or centerline configurations, in one or more colors including black, that may be used to depict the extension lines and text tags as depicted by user interface1510.
In addition to or instead of generating the extension lines and text tags, selection of status block1437may cause a pop-up box1514to also be displayed as part of user interface1510. As shown inFIG.15, pop-up box1514displays a table with data enclosed within the border of the pop-up box. In this illustrative example, pop-up box1514may include a row labeled “Source,” a row labeled “Destination,” a row labeled “Status,” and a row labeled “Latency.” An associated entry for the “Source” row indicates that the source agent issuing probes used in determining a status for a metric that is being depicted for status block1437is the agent named “ace38,” and the destination agent is “ace85,” which corresponds to the labels displayed in text tags1516and1518, respectively. The “status” row of pop-up box1514indicates a status rating or ranking having a value of “1.” The determined latency, which is the selected metric being used in user interface1510to determine a status associated with communication probes issued from agent “ace38” to agent “ace85,” has a value of 0.21 (milliseconds) as indicated in the “Latency” row of pop-up box1514. Similar but corresponding information, including extension lines and pop-up text boxes that include information related to any selected one of the status blocks1430, may be displayed when the respective status block is selected within the main display area513. As such, user interface1510is configured to quickly and efficiently display more detailed information related to any of the status blocks being displayed within the main display area513for a selected metric and a selected pair of source/destination agents. FIG.16illustrates an example graphical user interface1610output by a computing device that presents a detailed view of statuses for policy agents configured for a computer network in accordance with one or more aspects of the present disclosure.
User interface1610may be an example of user interface1410as illustrated and described above with respect toFIG.14, and may include any of the features configured to perform any combination of the functions described with respect to user interface1410. Examples of differences between user interface1410and user interface1610are further described below. As shown inFIG.16, user interface1610includes an example menu1612. Menu1612may appear as a selection of different communication protocols when a user input to the “Protocol” field within the “Visual Analytics” box614is received by dashboard203that is generating user interface1610. Menu1612may include a listing of different communication protocols that may be selected as the communication protocol used to issue the communication probes issuing from the source agents having statuses depicted in the grid of status blocks1430. Protocol selections available through menu1612are not limited to any particular protocols or to a particular number of protocols, and may include any type and/or number of different protocols available for use by the system controlling the agents issuing the communication probes in a computer network, such as controller23. In the example illustrated inFIG.16, the available list of communication protocols includes ICMP, TCP, and HTTP. In some examples, the currently selected protocol may be indicated by a check-mark, as shown for the ICMP protocol in menu1612. A selection of a new communication protocol may be indicated by highlighting the new protocol in a different color, such as blue, to show a user the new protocol that may then be selected for use in issuing the communication probes going forward. In the example shown inFIG.16, the “TCP” protocol is highlighted, indicating that the “TCP” protocol may be selected as the new and next protocol to be used for formatting and transmitting the communication probes going forward.
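The selection of a new communication protocol through a menu such as menu1612, and its propagation to the agents issuing the probes, may be sketched as follows in one non-limiting example; the `ProbeController` and `Agent` classes and the `use_protocol` method are hypothetical names assumed for illustration and are not part of the disclosed implementation:

```python
# Hedged sketch: a controller tracks the currently selected probe protocol
# and, on a new selection, instructs every source agent to format subsequent
# probes (and responses) using that protocol. All class/method names assumed.
SUPPORTED_PROTOCOLS = ("ICMP", "TCP", "HTTP")

class Agent:
    """Stand-in for a policy agent that issues communication probes."""
    def __init__(self):
        self.protocol = "ICMP"          # assumed initial protocol

    def use_protocol(self, protocol):
        self.protocol = protocol        # agent will format probes with this

class ProbeController:
    """Stand-in for the system controller (such as controller 23)."""
    def __init__(self, agents, protocol="ICMP"):
        self.agents = agents
        self.protocol = protocol        # currently check-marked menu item

    def select_protocol(self, protocol):
        if protocol not in SUPPORTED_PROTOCOLS:
            raise ValueError(f"unsupported protocol: {protocol}")
        self.protocol = protocol
        for agent in self.agents:       # propagate to all source agents
            agent.use_protocol(protocol)

agents = [Agent(), Agent()]
controller = ProbeController(agents)
controller.select_protocol("TCP")       # mirrors highlighting "TCP" in the menu
```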
Once an indication of a new protocol has been made, in some examples the controller (such as controller23,FIG.1) provides instructions to the agents to use the new communication protocol to format and transmit any communication probes directed to the destination agents, and further, that the responses provided by the destination agents to the issued probes are to also be formatted and transmitted using the new protocol. In addition, the status indications provided for example by the color rendered for each of the status blocks1430included in the display provided by user interface1610will now be based on at least one metric measured or determined in response to the issuance of the communication probes using the newly selected communication protocol. Menu1612may also be displayed in any of the other user interfaces described throughout this disclosure, including the “clusters view” illustrated and described with respect to user interface610andFIG.6and/or any of the “dumbbell” configurations illustrated and described with respect to any of the user interfaces ofFIGS.7-13. In a manner similar to that described above with respect to user interface1610, selection of a new communication protocol through any of the user interfaces described throughout this disclosure, and any equivalents thereof, may cause the system monitoring the computer network to instruct the agents issuing the communication probes to use the newly selected protocol for formatting and transmission of any probes going forward, and any status indications provided as part of these user interfaces may be based on at least one metric measured or determined based on the communication probes issued using the newly selected communication protocol. FIG.17illustrates an example graphical user interface1710output by a computing device that presents a detailed view of statuses for policy agents configured for a computer network in accordance with one or more aspects of the present disclosure.
User interface1710may be an example of user interface1410as illustrated and described above with respect toFIG.14, and may include any of the features configured to perform any combination of the functions described with respect to user interface1410. Examples of differences between user interface1410and user interface1710are further described below. As shown inFIG.17, user interface1710includes an example menu1712. Menu1712may appear as a selection of different metrics when a user input to the “Metric” field within the “Visual Analytics” box614is received by dashboard203that is generating user interface1710. Menu1712may include a listing of different metrics that may be selected to be displayed, or to be used to display a status determined for a set of communication probes based on the selected metric. Metric selections available through menu1712are not limited to any particular metrics or to a particular number of metrics, and may include any type and/or number of different metrics available for use by the system controller and the agents issuing the communication probes in a computer network, such as controller23. In the example illustrated inFIG.17, the available list of metrics includes “Connectivity” and “Latency.” In some examples, the currently selected metric may be indicated by a check-mark, as shown for the “Latency” metric in menu1712. A selection of a new metric may be indicated by highlighting the new metric in a different color, such as blue, to show a user the new metric that may then be selected for use in determining a status associated with the communication probes rendered as part of the user interface1710going forward. In the example shown inFIG.17, the “Latency” metric is highlighted, indicating that the “Latency” metric may be selected as the new and next metric to be used for determining statuses associated with the communication probes going forward.
Once an indication of a new metric has been made, in some examples the controller, such as controller23, provides instructions to at least monitor and collect data or other information associated with the communication probes and the newly selected metric, and to report this data and/or information to the controller. In addition, the status indications provided for example by the colors used to render each of the status blocks1430included in the display provided by user interface1710will now be based on a status determined for the communication probes and for the newly selected metric. Menu1712may also be displayed in any of the other user interfaces described throughout this disclosure, including the “clusters view” illustrated and described with respect to user interface610andFIG.6and/or any of the “dumbbell” configurations illustrated and described with respect to any of the user interfaces ofFIGS.7-13. In a manner similar to that described above with respect to user interface1710, selection of a new metric through any of the user interfaces described throughout this disclosure, and any equivalents thereof, may cause the controller gathering information from the agents to provide data and/or information related to the newly selected metric, and to render the user interface being provided by dashboard203to reflect status indications provided as part of these user interfaces based on the newly selected metric. FIG.18illustrates a method1800for generating graphical user interfaces for displaying statuses associated with communication probes issued in a computer network in accordance with one or more aspects of the present disclosure. Method1800is described as being performed, at least in part, by controller23and policy agents35as illustrated and described with respect toFIGS.1-5. However, examples of method1800are not limited to being performed in whole or in part by controller23and/or by policy agents35.
Method1800may be performed by any processing circuitry of a computing device and/or by any processing and communication circuitry located in a plurality of servers coupled to a computer network system, such as computer network system8as illustrated and described with respect toFIG.1. Method1800in some examples includes downloading policies from controller23to a plurality of policy agents35, which are deployed within at least some of the respective physical servers12(FIG.1) of a computer network system (block1802). The policies may include a set of rules that are to be used by the respective policy agents to format and transmit communication probes to one or more other policy agents within the computer network system. The policies may include instructions related to what type of communication format (e.g., ICMP, TCP, HTTP) the policy agent(s) is/are to use to format and transmit communication probes to other policy agents. The policies may include a list of destination agents that a given policy agent, or a group of policy agents grouped together in a common aggregate, is to transmit communication probes to. In some examples, the policies include instructions indicative of a time interval that is to be used by the policy agent for issuing subsequent sets of probes to other policy agents. In some examples, the policies include instructions on what parameter(s) the policy agent(s) is/are to monitor with respect to the issued probes. For example, the policies may instruct a policy agent to issue a communication probe to one or more other policy agents as destination agents, with a request that the destination agent respond back to the policy agent (source agent) issuing the communication probe upon receipt of the probe at the destination agent.
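The per-agent policy contents described above (communication protocol, destination list, probe interval, and monitored parameters) may, in one non-limiting sketch, be represented as a simple data structure; the field names below are assumptions, as the disclosure does not specify a concrete policy format:

```python
# Hedged sketch of the policy contents downloaded from a controller to a
# policy agent in block 1802. Field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ProbePolicy:
    protocol: str                     # e.g., "ICMP", "TCP", or "HTTP"
    destinations: list                # destination agents this agent probes
    interval_seconds: float           # time between subsequent probe sets
    # Parameters the agent is to monitor for the issued probes.
    metrics: list = field(default_factory=lambda: ["connectivity", "latency"])

# A controller might download one such policy per source agent:
policy = ProbePolicy(protocol="ICMP",
                     destinations=["172.16.85.11", "ace85"],
                     interval_seconds=5.0)
```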
The source agent is instructed by the policies to monitor and record whether a response was received in reply to an issued probe, and if so, the time interval between issuance of the probe and receipt of the acknowledgement back from the destination agent that the communication probe was received at the destination agent. Method1800may include the policy agents35issuing communication probes from one or more source agents directed to one or more other policy agents in the computer network system8(block1804). Issuing communication probes may include formatting and transmitting communication probes from each of a plurality of source agents to a list of destination agents for each respective source agent based on the policies downloaded to the source agents from controller23. Issuing the communication probes may include formatting and transmitting communication probes directed to the destination agents using a communication protocol dictated by the policies downloaded to the source agents from controller23. In various examples, the communication probes are configured to be transmitted through the same network devices, such as network devices included in switch fabric20, used by the servers12where the policy agents are deployed to provide communications between the servers. Method1800may include the policy agents monitoring for responses to issued probes (block1806). Monitoring for responses to issued probes may include determining whether a response from a destination agent was received in reply to an issued probe directed from a source agent to the destination agent.
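A minimal sketch of the probe-issuance step (block1804) is given below, assuming a generic `send` callable stands in for the protocol-specific transmit dictated by the policies; recording the issuance timestamp is what later allows the round-trip time to be measured:

```python
import time

def issue_probes(destinations, send, now=time.monotonic):
    """Issue one probe per destination agent and record the send timestamp
    so the round-trip time can be computed when a response arrives.

    `send` is a hypothetical stand-in for the protocol-specific transmit
    step (ICMP/TCP/HTTP) dictated by the downloaded policy.
    """
    pending = {}
    for dest in destinations:
        pending[dest] = now()   # timestamp of issuance, for RTT measurement
        send(dest)
    return pending

# Example usage: collect the destinations that were "transmitted" to.
sent = []
pending = issue_probes(["agent-b", "agent-c"], send=sent.append)
print(sorted(sent))
```

The returned `pending` map of issuance times is what the monitoring step would consult when responses come back.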
Monitoring for responses to issued probes may include receiving, at the source agent, a response from a destination agent indicating that a communication probe issued by the source agent and directed to a particular destination agent was received at the particular destination agent, and determining a time value for the amount of time taken for the response to be received back at the source agent following issuance of the communication probe by the source agent. Method1800may include controller23receiving data and/or other information reported by the source agents back to controller23(block1808). The data and/or other information may be collected by the policy agents35as a result of issuing communication probes, and monitoring for responses in reply to the issued probes. In some examples, the policy agents35report the data and/or other information to the controller23in real-time as the results from the issuance of the communication probes are collected. Method1800may include controller23determining one or more statuses for the communication probes based on the data and/or other information reported to the controller23by the policy agents35(block1810). Determining a status for a given communication probe associated with a source agent may include determining that a response to an issued probe was not received back from the destination agent, and as a result assigning a status of "unavailable" to that communication probe. Determining a status for a given communication probe may include comparing a measured time between the issuance of a communication probe and the receipt of a response from the destination agent back at the source agent to one or more threshold values, and assigning a status to the communication probe based on a range of the one or more threshold values that the measured time falls into. Method1800may include controller23generating a dashboard203including one or more user interfaces (block1812).
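The status determination described for block1810 might be sketched as below. The specific threshold values and the status names other than "unavailable" are assumptions: the disclosure only specifies that a missing response yields an "unavailable" status and that the measured time is compared against one or more threshold ranges.

```python
def probe_status(rtt_ms, thresholds=(50.0, 200.0)):
    """Map a measured round-trip time in milliseconds (or None when no
    response was received) to a status string.

    Threshold values and the non-"unavailable" status names are
    illustrative, not specified by the disclosure.
    """
    if rtt_ms is None:
        return "unavailable"        # no response received from destination
    if rtt_ms <= thresholds[0]:
        return "healthy"            # within the lowest threshold range
    if rtt_ms <= thresholds[1]:
        return "degraded"           # within the middle threshold range
    return "critical"               # above the highest threshold

print(probe_status(None), probe_status(30.0), probe_status(120.0), probe_status(500.0))
```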
A user interface of the one or more user interfaces may be output for display on a display device, such as a display screen of user interface129, for viewing by a user, such as administrator24. The formatting of the user interfaces may include any of the arrangements of a user interface described in this disclosure. For example, the user interface being displayed may include a "clusters view" similar to user interface610ofFIG.6, which includes a graphical display representing a set of aggregates that include the source agents issuing communication probes. The graphical display may include a set of graphic shapes, such as a set of circles, each circle representative of one of the aggregates, the circles coupled to one another in the user interface via a set of graphical lines, each graphical line including one or more graphical indicators providing a visual indication, for example by use of colors, indicative of a determined status associated with the agent or agents of the respective aggregates. In other examples, the user interface being displayed may include a "dumbbell" configuration showing a pair of circles coupled by a line, one circle representing an aggregate of the computer network system that includes an agent or multiple agents issuing communication probes, and the second circle representing an aggregate of the computer network system that includes one or more destination agents for the communication probes being issued by the source agent(s). The user interface may include a set of status blocks (e.g., status blocks730,FIG.7) arranged in a grid of one or more columns and one or more rows, each status block being rendered in the user interface to represent a status, for example the current status in real-time or near real-time, determined for a selected metric, such as latency, for the communication probes being issued by the source agent(s).
In another example, the user interface includes a grid of status blocks only, (e.g., status blocks1430,FIG.14), without the "dumbbell" configuration. The grid of status blocks includes individual status blocks representing a status for one or more communication probes issued from a particular source agent to a particular destination agent, each status block being rendered, for example using a color of a plurality of colors, to indicate a status determined for the communication probe(s) associated with that status block. In some examples, the grid of status blocks represents status blocks associated with source agents grouped into at least two different aggregates, and may include a large number, such as one hundred or more, of individual status blocks. Display of the user interfaces in method1800may include receiving inputs, for example by administrator24provided to user interface device129, to modify the user interface being displayed. User inputs may include any of the inputs to a user interface described throughout this disclosure, including user inputs to select what level of detail and/or which statuses for particular source agents and destination agents are to be displayed, and a request for additional details, including details illustrated as a pop-up box (e.g., pop-up box1514,FIG.15) to be displayed for a particular status block. Other user inputs to a user interface being displayed as part of method1800may allow a user, such as administrator24, to indicate a selection of the communication protocol to be used by the policy agents35for formatting and transmitting the communication probes transmitted between the policy agents. User inputs may also be used to dictate what metric (e.g., latency, connectivity) is to be used in determining the statuses associated with the communication probes being displayed as part of the user interface for one or more source/destination pairs of agents of the computer network system.
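One way to derive such a grid of status blocks from per-pair statuses is sketched below; the color names and the dictionary keyed by (source, destination) pairs are illustrative assumptions rather than the actual rendering logic of dashboard203:

```python
# Hypothetical status-to-color mapping; the disclosure only states that
# colors are used to indicate determined statuses.
STATUS_COLORS = {"healthy": "green", "degraded": "yellow",
                 "critical": "red", "unavailable": "gray"}

def render_grid(statuses):
    """statuses: {(source, dest): status}. Returns rows of color names,
    one row per source agent and one column per destination agent."""
    sources = sorted({s for s, _ in statuses})
    dests = sorted({d for _, d in statuses})
    return [[STATUS_COLORS.get(statuses.get((s, d)), "gray") for d in dests]
            for s in sources]

grid = render_grid({("a1", "b1"): "healthy", ("a1", "b2"): "critical",
                    ("a2", "b1"): "unavailable", ("a2", "b2"): "degraded"})
print(grid)
```

A missing (source, destination) pair defaults to "gray", mirroring the treatment of probes with no determined status.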
The techniques described herein may be implemented in hardware, software, firmware, or any combination thereof. Various features described as modules, engines, units or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices or other hardware devices. In some cases, various features of electronic circuitry may be implemented as one or more integrated circuit devices, such as an integrated circuit chip or chipset. If implemented in hardware, this disclosure may be directed to an apparatus such as a processor or an integrated circuit device, such as an integrated circuit chip or chipset. Alternatively or additionally, if implemented in software or firmware, the techniques may be realized at least in part by a computer-readable data storage medium comprising instructions that, when executed, cause a processor to perform one or more of the methods described above. For example, the computer-readable data storage medium may store such instructions for execution by a processor. A computer-readable medium may form part of a computer program product, which may include packaging materials. A computer-readable medium may comprise a computer data storage medium such as random access memory (RAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), Flash memory, magnetic or optical data storage media, and the like. In some examples, an article of manufacture may comprise one or more computer-readable storage media. In some examples, the computer-readable storage media may comprise non-transitory media. The term "non-transitory" may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in RAM or cache).
The code or instructions may be software and/or firmware executed by processing circuitry including one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, functionality described in this disclosure may be provided within software modules or hardware modules.
11943119
DETAILED DESCRIPTION Referring toFIG.1, in accordance with systems and techniques that are disclosed herein, a cloud service manager60offers and delivers (instantiates, provisions and deploys, for example) services to manage the lifecycles (e.g., manage the building, ongoing management, metering, reporting and so forth) of existing cloud services and combinations of these existing cloud services for end users. More particularly, as disclosed herein, the cloud service manager60orchestrates the use of application programming interfaces (APIs) of existing cloud services for managing the lifecycles of the existing cloud services and combinations of the existing cloud services for users of user end systems50(desktops, portable computers, smartphones, clients, thin clients, servers, and so forth). Depending on the particular implementation, the selection and ordering of the cloud lifecycle management services may be performed by a given user (an administrator, for example) for a group of end users (users of an enterprise, for example); or the selection and ordering of the cloud capabilities may be performed by a given user (an Internet-based user or employee, for example) for the given user's individual use. As depicted inFIG.1, the cloud service manager60may be accessed by a given end user system50via network fabric29(network fabric formed from one or more of local area network (LAN) fabric, wide area network (WAN) fabric, Internet fabric, and so forth). As such, depending on the particular implementation, the cloud service manager60may reside on an Internet server, reside on a server within a private LAN, reside on a server within a WAN, reside on a desktop computer, or may be a web or SaaS (Software as a Service) offering, as just a few examples. In general, the users of the cloud service manager60may select and order "cloud capabilities" through the cloud service manager60.
In general, the “cloud capabilities” refer to user-selected combinations of existing cloud services that are provided by existing cloud resources20, as well as lifecycle management services that are offered and delivered by the cloud service manager60. All of these cloud capabilities (the existing cloud services, the combinations of the existing cloud services and the lifecycle management services) are generally referred to herein as “cloud capabilities” herein. The cloud capabilities are, in general, associated with services that are associated with a “cloud,” which may be, as examples, a public cloud (a cloud formed from an Internet-based network and provides hosted cloud services that are generally available to members of the public); a private cloud (a cloud formed from a private, limited access network, (such as an enterprise network) which provides hosted cloud services to a limited group of members); a virtual private cloud (a cloud formed from a public network providing hosted cloud services to a limited group of members); a hybrid cloud (a cloud formed from a combination of two or more of the aforementioned clouds); and so forth. In general, the cloud service manager60contains a storefront or marketplace module62that, through its user interface63, allows a user to access a service consumption module66(of the cloud service manager60) for purposes of browsing and selecting offered cloud capabilities. 
Moreover, through the access to the service consumption module66, users may further customize (configure, for example) details of the selected cloud capabilities; agree to terms and/or conditions for receiving the selected cloud capabilities; order the cloud capabilities (subscribe to the capabilities, pay for the capabilities, and so forth); potentially build or modify a "recipe," specifying a way to combine multiple cloud capabilities or provide lifecycle management; subsequently update the cloud capability selection(s); scale up and scale down the cloud capabilities; and in general, manage the lifecycle(s) of the ordered cloud capabilities, including retiring the capabilities. To facilitate this user selection and control, the service consumption module66contains one or multiple cloud service catalogs41(depending on the particular implementation) and/or different views of the same catalog(s)41, which describe available cloud capabilities. The catalog41itself may be a federation or aggregation of catalogs. The users may browse through the catalog(s)41using, for example, a graphical user interface (GUI)65of the interface63. In accordance with some implementations, the service consumption module66may contain one or more APIs/interfaces for purposes of permitting users to browse through the catalog(s)41using the GUI65. It is noted that different users may have access to different catalog(s)41or different views of the catalog(s)41(different content or different commercial terms), depending on the agreement/subscription in place. By accessing the service catalog(s)41, users may select, order, customize and combine cloud capabilities; and automate the instantiation and configuration of selected cloud capabilities.
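The select-and-order flow through the catalog(s)41 could be sketched as follows; the catalog entries, the `order` helper and the subscription record are all hypothetical stand-ins for the service consumption module66 and its subscription components:

```python
# Hypothetical in-memory catalog; actual catalogs 41 may be federations
# or aggregations of catalogs with per-user views.
catalog = {
    "db-service": {"kind": "DBaaS", "price_per_month": 30},
    "web-stack": {"kind": "combination", "price_per_month": 120},
}

def order(catalog, offering, subscriptions, user):
    """Record a user subscription to a catalog offering (commercial
    terms, payment and customization steps are elided)."""
    if offering not in catalog:
        raise KeyError(f"{offering!r} is not offered in this catalog view")
    subscriptions.setdefault(user, []).append(offering)
    return subscriptions

subs = order(catalog, "db-service", {}, user="alice")
print(subs)
```

Restricting the lookup to the user's catalog view loosely mirrors the note that different users may see different catalogs or views depending on the agreement in place.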
More specifically, in accordance with example implementations, via the service consumption module66, users may select combinations of various existing cloud resources20to form a selected set of cloud services and, in general, set up a service to manage the lifecycle of this combination for a given user or group of users. As examples, the existing cloud resources20may include such resources as an Infrastructure as a Service (IaaS) resource20-1(a resource that provides hosted equipment, such as servers, storage components and network components, as a service); a Platform as a Service (PaaS) resource20-2(a resource that provides a hosted computing platform, such as an operating system, hardware, storage, and so forth); a Software as a Service (SaaS) resource20-3(a resource that provides hosted applications as a service); a DataBase as a Service (DBaaS) resource20-4(a resource that provides a hosted database as a service); and so forth. The available existing cloud resources20further include, in accordance with example implementations, resources20that provide other services that may be useful for the cloud, such as (as examples) resources20-5,20-6and20-7that provide services derived from their provisioning using the Server Automation (SA), Database and Middleware Automation (DMA), Matrix Operating Environment (MOE), or Operations Orchestration (OO) software available from Hewlett Packard, as well as any other infrastructure provisioning or IaaS provisioning system. Thus, in general, the cloud resources may include these as well as other cloud services/capabilities20-8, in accordance with further implementations. It is noted that one or multiple of the existing cloud resources20may be provided by the cloud service manager60, in accordance with example implementations.
In accordance with exemplary techniques and systems that are disclosed herein, users may access the catalog(s)41to select and order one or more of the following cloud services: services provided by the existing cloud resources20; services provided by combinations of the existing cloud resources20; and services to manage the lifecycle of selected services/combinations of services, including services directed to building, monitoring, metering, and reporting services. Moreover, the cloud service manager60allows agile development of these services, as users may configure various aspects of these services, as further described herein. In addition to presenting the service offerings, the service consumption module66regulates user subscriptions to these services, in accordance with example implementations. In this manner, as depicted inFIG.1, in addition to the catalogs41describing the service offerings, the service consumption module66may contain such other information as user login components42(components containing passwords, login identifications and so forth); user and tenant information; user subscription components35(components describing subscription contract terms, subscription rates, and so forth); and an engine40that contains logic that allows access and modification to the offered services, updating of subscription data, updating of login information and so forth. The cloud service manager60contains a service delivery module68to deliver services that are described in the catalogs41and are selected by the users. More specifically, in accordance with example implementations, using the palette of available cloud resources and their resource offerings and actions, cloud service designers and/or administrators may construct plans, or “service blueprints70,” which are stored in a service repository64and set forth structured plans of automated actions for instantiating and configuring the cloud capabilities that are described and offered in the catalog(s)41. 
Due to these pre-existing service blueprints70, logic of an engine92of the service delivery module68may automatically undertake the actions to instantiate and configure the selected cloud capabilities, thereby avoiding manual actions by the users pertaining to instantiation and configuration of the selected cloud capabilities. In accordance with example implementations, the service blueprint70is a set of workflows/recipes/scripts that correspond to particular lifecycle management actions that may be performed to orchestrate the APIs of the appropriate cloud resources for purposes of managing the lifecycle of a given cloud capability. In this regard, the actions are workflows and calls to resource offering interfaces, in accordance with some implementations. In accordance with example implementations, designers/administrators may use GUIs of the service delivery module68to orchestrate/compose multiple such service blueprints70into service blueprints70of new cloud capabilities. The designers/administrators may also use GUI-based tools of the service delivery module68to modify existing service blueprints70and form new service blueprints70based on combinations of existing service blueprints70. In addition to selecting pre-existing service blueprints70, in accordance with some implementations, the service delivery module68may permit users to construct service blueprints70, modify existing service blueprints70, and/or create new service blueprints70from a combination of existing service blueprints70. In accordance with some implementations, a service blueprint70may be constructed using a workflow189that is illustrated inFIG.4. Pursuant to the workflow189, a cloud service provider190may include a provider interface191that has GUIs and tools that allow a designer/administrator to construct orchestrated flows192-1and192-2, which are defined by associated process definitions194. These orchestrated flows, in turn, create actions196for resource offerings195. 
Thus, for example, the workflow189ofFIG.4may produce one or more service blueprints70that have a design197constructed of service components198and resource bindings199. More specifically, in accordance with example implementations, each service blueprint70is an object (an object formed from machine executable instructions) that performs various actions, or functions, that may be taken in connection with an associated offered cloud capability, or service, and has an associated collection of functions, or "recipes," which may be executed to cause the orchestration of the appropriate cloud service APIs to provision, instantiate and build a cloud service (formed from one or more existing cloud services, for example); manage a cloud service; monitor a cloud service; meter a cloud service; and so forth. A recipe can be a script or workflow or any other executable, in accordance with example implementations, which may be executed by logic of the engine92of the service delivery module68for purposes of performing the actions specified by the service blueprint70. In accordance with example implementations, the service blueprints70may be associated with various commercial terms, such as prices; contract periods; terms associated with a service level agreement (SLA); and so forth, which are stored in subscription components35of the service consumption module66. A service becomes a service offering when associated with these terms. These terms that accompany given service blueprints70may be described in the catalogs41, in accordance with some implementations and, in general, may be set forth by a product designer.
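A blueprint's mapping of lifecycle actions to executable recipes might be modeled as below; the `ServiceBlueprint` class and its decorator-based registration are an illustrative sketch under these assumptions, not the actual structure of a service blueprint70:

```python
class ServiceBlueprint:
    """A blueprint object mapping lifecycle actions to executable
    "recipes" (plain callables here, standing in for the scripts or
    workflows that would orchestrate the underlying cloud-service APIs)."""

    def __init__(self, name):
        self.name = name
        self.recipes = {}

    def recipe(self, action):
        """Decorator registering a callable as the recipe for an action."""
        def register(fn):
            self.recipes[action] = fn
            return fn
        return register

    def run(self, action, *args, **kwargs):
        # The engine would execute the recipe associated with the action.
        return self.recipes[action](*args, **kwargs)

bp = ServiceBlueprint("web-stack")

@bp.recipe("provision")
def provision(size):
    return f"provisioned {size} instance"

print(bp.run("provision", "small"))
```

Executing `bp.run("provision", ...)` loosely corresponds to the engine92 executing a recipe to perform an action specified by the blueprint.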
A given service blueprint70may be instantiated/deployed by executing its associated recipe(s), which results in service instances44that may be tracked by, for example, information technology (IT) management systems by feeding the service instances into an IT service management (ITSM) service, a real time service management (RTSM) service, or a configuration management database (CMDB) with a full topology of how a service instance is supported/implemented. In this manner, in accordance with example implementations, the service delivery module68may contain a service instance service management component44(e.g., RTSM, CMDB or ITSM) for this purpose. If shared across an ITSM system, the component44is available for other management systems to monitor and manage separately the instantiated instances (identified and tracked based on topology information stored in the database). In accordance with some implementations, the actions to set up the monitoring and management are achieved through the use of the service blueprints70. A given service blueprint70may further specify actions that are taken to handle errors associated with a given composition cloud service and actions that are taken to report such errors. In general, other service blueprints70may specify how the lifecycle of a given service composition is monitored and managed during the full lifecycle of the service. For example, a given recipe may notify the owner of the system (the owner of the cloud resources20, for example) about an error; repeat faulty steps with the same or other resource in a pool; track issues and trace back steps and tear down some of the instantiated resources/services; and so forth. A given service blueprint70may also describe a structured plan for usage metering and/or reporting.
For monitoring, the instance and monitoring service may be set up/configured to perform the monitoring tasks; or, alternatively, a CMDB/RTSM may be in place to let a monitoring suite such as ITSM (as an example) automatically discover and monitor. The metering and reporting may be performed the same way by setting up the metering/reporting and adding probes or counters that allow metering (measuring CPU usage, time used, memory used, or traffic used per component, by using a monitoring system to interact with agents, or by configuring the services to do so, to generate charging data records (CDRs) for their use and provide them to metering systems). Reporting may be accomplished by querying the monitoring and/or metering management systems. Thus, to summarize, referring toFIG.2in conjunction withFIG.1, in accordance with exemplary implementations, a technique100includes providing (block104) a catalog to offer a cloud service to manage the lifecycle of a group of at least one existing cloud service and associating (block106) a service blueprint with the offered cloud service to orchestrate API(s) to manage the lifecycle. The technique100includes receiving (block110) user selection of the offered cloud service and executing (block114) recipes associated with the service blueprint to deliver the selected cloud service. In accordance with exemplary implementations, a given recipe may automate the actions that a given user may otherwise undertake for purposes of setting up the ordered cloud service. For example, referring toFIG.3in conjunction withFIG.1, an exemplary recipe150may use, for example, three execution branches160,170and180for purposes of setting up the infrastructure, middleware and application layers, respectively, of an ordered cloud service.
For example, exemplary branch160may include stages162,164and166for purposes of provisioning servers, which include tiers for a database, an application server and a portal and load balancer, respectively; exemplary branch170may include stages172and174for purposes of provisioning the servers with database and middleware, respectively; and branch180may include stages182,184and186for purposes of deploying the applications. As depicted inFIG.3, the branches160,170and180may, in general, be performed in parallel for the different tiers. In accordance with example implementations, a service blueprint70may be at least partially constructed by a user/designer specifying/modifying at least part of a recipe for a given cloud service. In this manner, the user/designer may begin the design starting with "mandatory steps" or "recommended steps" for a given service blueprint70, in accordance with some implementations, for purposes of recommending proper management of the resources. In accordance with some implementations, cloud service designers may design new recipes to build higher level services as executables or workflows/compositions/business processes/scripts (i.e., flows of conditions and actions) of API calls to the resource interfaces and API calls to other functions (calls to activation/provisioning service resources, for example). Moreover, new recipes may be constructed and existing recipes may be modified by the users of the cloud service manager60/designers. It is noted that the recipes may be constructed using, for example, an API of the cloud service manager60to design a script; or the construction of the recipes may be GUI-based. In this regard, in accordance with some implementations, a designer may edit the service blueprint70with GUI objects representing each resource or service involved. The GUI links may represent the workflow (customizable conditions and actions, for example).
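The three parallel branches of the example recipe150 (the infrastructure, middleware and application layers) can be sketched with concurrent execution as below; the branch contents are placeholder return values standing in for the provisioning stages described above:

```python
from concurrent.futures import ThreadPoolExecutor

def infrastructure():   # branch 160: provision the server tiers
    return ["db-tier", "app-tier", "portal/lb-tier"]

def middleware():       # branch 170: provision database and middleware
    return ["database", "middleware"]

def applications():     # branch 180: deploy the applications
    return ["app-1", "app-2", "app-3"]

def run_recipe():
    # The three branches run in parallel, one per layer of the service;
    # results are collected in submission order.
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(b) for b in (infrastructure, middleware, applications)]
        return [f.result() for f in futures]

print(run_recipe())
```

In a real recipe each branch would itself be a sequence of API calls to the resource interfaces rather than a single function.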
By clicking on the object, the designer may then be able to customize each service blueprint of the resource or service (e.g., set variables or link variables to other contexts, etc.). For example, in accordance with some implementations, the designer may use the logic of the engine40of the service consumption module66to add, delete or otherwise modify recipes for a given service blueprint70; or create a new service blueprint70. In accordance with some implementations, the GUI guides the designer through this process. It is noted that, in accordance with some implementations, different GUIs may be provided for the different users and designers. In this regard, the storefront module62may contain various GUIs for designers and possibly for users to modify, delete and create service blueprints70. Moreover, separate screens may be presented in the portal to manage ordered capabilities. Administrators may also use the screens if the user has a problem. In accordance with some implementations, in general, the designer is a different persona from the user. However, it is possible that a designer is made available for a user who has or wants to order a service. For example, in accordance with some implementations, designers use the service consumption module66to generate service blueprints for the different offerings; however, they may leave parts (contextual parameters, for example) of the service blueprints customizable (e.g., select OS of computing resources, or size of storage, make other selections, provide options available, and so forth). A user who has or wants to order a service (typically technical users like developers) may customize the service blueprints they want or have ordered using a designer that may only change what is left unspecified (and within the limits of the options). Thus, in general, certain contextual parameters of one or multiple service blueprints70may be set up at the time of execution and/or may be customized by a user or other persona.
In accordance with some implementations, an instantiated service blueprint70may be captured in an instantiated service repository46. In this regard, data captured in the repository46may be viewed via the user interface63for purposes of displaying reports and statuses of purchased services to the users. The service repository46, in addition to being populated via the designer tools, may ingest/aggregate/federate from different service repositories. It is noted that the users may also use GUI-based tools for purposes of viewing order statuses and managing ordered capabilities, in accordance with further implementations. A corresponding console page may also be used to call other service blueprint-related functions to manage the service instances. It is noted that, to provide information and alerts about the service instances resulting from monitoring, the service blueprint recipes include deployment of the appropriate agent/tool/setup to ensure management, and the management tools associated with the resources are configured to monitor the instances. It is noted that other implementations are contemplated and are within the scope of the appended claims. For example, a given catalog41may ingest or aggregate/federate other catalogs that may or may not be associated with service blueprints70, in accordance with further implementations. Among its other features, the service delivery module68may further include resource provider components42describing the cloud resource providers; resource environment components44describing the cloud resource provider environments; and resource offering components30, which are components that expose the APIs of the existing cloud resources20. In general, the resource offering components30describe offering details, such as the cloud service resources20, the capacities of the resources20, the number of requests that can be made to provision the cloud resources20, and so forth.
The resource offering components30may be automatically updated as requirements and capabilities of the cloud resources20change, in accordance with example implementations. The service delivery module68may offer components that the user may control through the GUI65for purposes of managing an ordered cloud service. For example, the service delivery module68may contain a user accessible lifecycle controller45for purposes of managing the lifecycle (reserve, instantiate, monitor, scale up/scale down, acquire usage details, uninstantiate, unreserve, and so forth) of the service, as well as a scaler47to scale up or down (scale up/down the bandwidth, storage capacity, processing power, and so forth) the cloud service. It is noted the user may see the RTSM (instance repository) for the services/capabilities that the user has ordered/subscribed to and perform actions on them. The actions that are performed execute the corresponding scripts in the service blueprints associated with the capability/service on the instance in question. Referring toFIG.5, in accordance with example implementations, the cloud service manager60includes one or multiple physical machines200(N physical machines200-1. . .200-N being depicted as examples inFIG.5). The physical machine200is a machine that is made of actual hardware210and actual machine executable instructions250. Although the physical machines200are depicted inFIG.5as being contained within corresponding boxes, a particular physical machine200may be a distributed machine, which has multiple nodes that provide a distributed and parallel processing system. In accordance with exemplary implementations, the physical machine200may be located within one cabinet (or rack); or alternatively, the physical machine200may be located in multiple cabinets (or racks).
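The scale-up/scale-down behavior attributed to the scaler47 might look like the following bounded-capacity sketch; the capacity units and the floor/ceiling limits are assumptions, since the disclosure only names the dimensions (bandwidth, storage capacity, processing power, and so forth) that may be scaled:

```python
class Scaler:
    """Hypothetical scale-up/scale-down control for an ordered cloud
    service, clamped between a floor and a ceiling capacity."""

    def __init__(self, capacity=1, floor=1, ceiling=8):
        self.capacity, self.floor, self.ceiling = capacity, floor, ceiling

    def scale_up(self, step=1):
        # Never exceed the ceiling, regardless of the requested step.
        self.capacity = min(self.ceiling, self.capacity + step)
        return self.capacity

    def scale_down(self, step=1):
        # Never drop below the floor, regardless of the requested step.
        self.capacity = max(self.floor, self.capacity - step)
        return self.capacity

s = Scaler(capacity=2)
print(s.scale_up(3), s.scale_down(10))
```

Clamping at both ends keeps a runaway scaling request from tearing down the last instance or over-provisioning past the subscribed limit.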
A given physical machine200may include such hardware210as one or more processors214and a memory220that stores machine executable instructions250, application data, configuration data and so forth. In general, the processor214may be a processing core, a central processing unit (CPU), and so forth. Moreover, in general, the memory220is a non-transitory memory, which may include semiconductor storage devices, magnetic storage devices, optical storage devices, and so forth. The physical machine200may include various other hardware components, such as a network interface216and one or more of the following: mass storage drives; a display; input devices, such as a mouse and a keyboard; removable media devices; and so forth. The machine executable instructions250contained in the physical machine200may, when executed by the processor(s)214, cause the processor(s)214to form one or more components of the cloud service manager60. In general, the physical machines200communicate with each other over a communication link270. This communication link270, in turn, may be coupled to the user end devices50(seeFIG.1) and as such, may form at least part of the network fabric51(seeFIG.1). As non-limiting examples, the communication link270represents one or multiple types of network fabric (e.g., wide area network (WAN) connections, local area network (LAN) connections, wireless connections, Internet connections, and so forth). Thus, the communication link270may represent one or multiple buses or fast interconnects. As an example, the cloud service provider may be an application server farm, a cloud server farm, a storage server farm (or storage area network), a web server farm, a switch, a router farm, and so forth.
Although two physical machines200(physical machines200-1and200-N) are depicted inFIG.5for purposes of a non-limiting example, it is understood that the cloud service manager60may contain a single physical machine200or may contain more than two physical machines200, depending on the particular implementation (i.e., “N” may be “1,” “2,” or a number greater than “2”). Other implementations are contemplated and are within the scope of the appended claims. For example, referring back toFIG.1, in further implementations, the cloud service manager60may provide one or more of the underlying existing cloud services and as such, may function as one of the cloud resources20. As a more specific example, in accordance with some implementations, the cloud service manager60may provide the SA,00and/or MOE service. As examples of further implementations, the cloud service manager60may be a cloud service (SaaS), may be executed by a web server, may be an application executed on a user end system50, and so forth. While a limited number of examples have been disclosed herein, those skilled in the art, having the benefit of this disclosure, will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations.
11943120
DETAILED DESCRIPTION A system, method, and computer-readable medium are disclosed for performing a telemetry aggregation operation, described in greater detail herein. Various aspects of the invention reflect an appreciation that it is common for a typical data center to monitor and manage tens, if not hundreds, of thousands of different assets, such as certain computing and networking devices, described in greater detail herein. Certain aspects of the invention likewise reflect an appreciation that such data center assets are typically implemented to work in combination with one another for a particular purpose. Likewise, various aspects of the invention reflect an appreciation that such purposes generally involve the performance of a wide variety of tasks, operations, and processes to service certain workloads. Certain aspects of the invention likewise reflect an appreciation that the use of cloud-based data center management systems often proves to be advantageous, as it allows monitoring and management functions to be performed from anywhere, at any time, according to the user's particular needs, and typically at a reduced cost. However, various aspects of the invention likewise reflect an appreciation that the use of such cloud-based approaches may pose certain challenges. For example, communication channels are typically one-way and hindered by firewalls, proxies, and complicated network set-ups. Likewise, various aspects of the invention reflect an appreciation that the monitoring, collection, aggregation, and analysis of certain telemetry information associated with large numbers of data center assets may pose yet additional challenges. For example, certain data center assets may generate higher quantities of, or more granular, telemetry information than others.
Likewise, certain data center assets may not be able to generate telemetry information as quickly as desired and, as a result, might impede the timely collection of telemetry information from co-located data center assets. Accordingly, certain aspects of the invention reflect an appreciation that there is a need for an always-connected, bidirectional connection to managed data center assets located on the customer's premises such that management actions based upon pertinent and timely telemetry information can be securely performed in an efficient manner. For purposes of this disclosure, an information handling system may include any instrumentality or aggregate of instrumentalities operable to compute, classify, process, transmit, receive, retrieve, originate, switch, store, display, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, or other purposes. For example, an information handling system may be a personal computer, a network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price. The information handling system may include random access memory (RAM), one or more processing resources such as a central processing unit (CPU) or hardware or software control logic, ROM, and/or other types of nonvolatile memory. Additional components of the information handling system may include one or more disk drives, one or more network ports for communicating with external devices as well as various input and output (I/O) devices, such as a keyboard, a mouse, and a video display. The information handling system may also include one or more buses operable to transmit communications between the various hardware components. FIG.1is a generalized illustration of an information handling system100that can be used to implement the system and method of the present invention.
The information handling system100includes a processor (e.g., central processor unit or “CPU”)102, input/output (I/O) devices104, such as a display, a keyboard, a mouse, a touchpad or touchscreen, and associated controllers, a hard drive or disk storage106, and various other subsystems108. In various embodiments, the information handling system100also includes network port110operable to connect to a network140, which is likewise accessible by a service provider server142. The information handling system100likewise includes system memory112, which is interconnected to the foregoing via one or more buses114. System memory112further comprises operating system (OS)116and in various embodiments may also comprise a data center monitoring and management console118, or a connectivity management system (CMS) client136. In one embodiment, the information handling system100is able to download the data center monitoring and management console118, or the CMS client136, or both, from the service provider server142. In another embodiment, the functionality respectively provided by the data center monitoring and management console118, or the CMS client136, or both, may be provided as a service from the service provider server142. In certain embodiments, the data center monitoring and management console118may include a monitoring module120, a management module122, an analysis engine124, a connectivity management system (CMS)126, a telemetry aggregation system (TAS)130, or a combination thereof. In certain embodiments, the CMS126may be implemented to include a CMS aggregator128. In certain embodiments, the data center monitoring and management console118may be implemented to perform a data center monitoring and management operation. In certain embodiments, the information handling system100may be implemented to include either a CMS126, or a CMS client136, or both.
In certain embodiments, the data center monitoring and management operation may be performed during operation of an information handling system100. In various embodiments, performance of the data center monitoring and management operation may result in the realization of improved monitoring and management of certain data center assets, as described in greater detail herein. In certain embodiments, the CMS126may be implemented in combination with the CMS client136to perform a connectivity management operation, described in greater detail herein. As an example, the CMS126may be implemented on one information handling system100, while the CMS client136may be implemented on another, as likewise described in greater detail herein. FIG.2is a simplified block diagram of a data center monitoring and management environment implemented in accordance with an embodiment of the invention. As used herein, a data center broadly refers to a building, a dedicated space within a building, or a group of buildings, used to house a collection of interrelated data center assets244implemented to work in combination with one another for a particular purpose. As likewise used herein, a data center asset244broadly refers to anything, tangible or intangible, that can be owned, controlled, or enabled to produce value as a result of its use within a data center. In certain embodiments, a data center asset244may include a product, or a service, or a combination of the two. As used herein, a tangible data center asset244broadly refers to a data center asset244having a physical substance, such as a computing or network device. Examples of computing devices may include personal computers (PCs), laptop PCs, tablet computers, servers, mainframe computers, Redundant Arrays of Independent Disks (RAID) storage units, their associated internal and external components, and so forth. Likewise, examples of network devices may include routers, switches, hubs, repeaters, bridges, gateways, and so forth.
Other examples of a tangible data center asset244may include certain data center personnel, such as a data center system administrator, operator, or technician, and so forth. Other examples of a tangible data center asset244may include certain maintenance, repair, and operations (MRO) items, such as replacement and upgrade parts for a particular data center asset244. In certain embodiments, such MRO items may be in the form of consumables, such as air filters, fuses, fasteners, and so forth. As likewise used herein, an intangible data center asset244broadly refers to a data center asset244that lacks physical substance. Examples of intangible data center assets244may include software applications, software services, firmware code, and other non-physical, computer-based assets. Other examples of intangible data center assets244may include digital assets, such as structured and unstructured data of all kinds, still images, video images, audio recordings of speech and other sounds, and so forth. Further examples of intangible data center assets244may include intellectual property, such as patents, trademarks, copyrights, trade names, franchises, goodwill, and knowledge resources, such as data center asset244documentation. Yet other examples of intangible data center assets244may include certain tasks, functions, operations, procedures, or processes performed by data center personnel. Those of skill in the art will recognize that many such examples of tangible and intangible data center assets244are possible. Accordingly, the foregoing is not intended to limit the spirit, scope or intent of the invention. In certain embodiments, the value produced by a data center asset244may be tangible or intangible. As used herein, tangible value broadly refers to value that can be measured. 
Examples of tangible value may include return on investment (ROI), total cost of ownership (TCO), internal rate of return (IRR), increased performance, more efficient use of resources, improvement in sales, decreased customer support costs, and so forth. As likewise used herein, intangible value broadly refers to value that provides a benefit that may be difficult to measure. Examples of intangible value may include improvements in user experience, customer support, and market perception. Skilled practitioners of the art will recognize that many such examples of tangible and intangible value are possible. Accordingly, the foregoing is not intended to limit the spirit, scope or intent of the invention. In certain embodiments, the data center monitoring and management environment200may include a data center monitoring and management console118. In certain embodiments, the data center monitoring and management console118may be implemented to perform a data center monitoring and management operation. As used herein, a data center monitoring and management operation broadly refers to any task, function, procedure, or process performed, directly or indirectly, within a data center monitoring and management environment200to procure, deploy, configure, implement, operate, monitor, manage, maintain, or remediate a data center asset244. In certain embodiments, a data center monitoring and management operation may include a data center monitoring task. As used herein, a data center monitoring task broadly refers to any function, operation, procedure, or process performed, directly or indirectly, within a data center monitoring and management environment200to monitor the operational status of a particular data center asset244. In various embodiments, a particular data center asset244may be implemented to generate an alert if its operational status exceeds certain parameters.
In these embodiments, the definition of such parameters, and the method by which they may be selected, is a matter of design choice. For example, an internal cooling fan of a server may begin to fail, which in turn may cause the operational temperature of the server to exceed its rated level. In this example, the server may be implemented to generate an alert, which provides notification of the occurrence of a data center issue. As used herein, a data center issue broadly refers to an operational situation associated with a particular component of a data center monitoring and management environment200, which if not corrected, may result in negative consequences. In certain embodiments, a data center issue may be related to the occurrence, or predicted occurrence, of an anomaly within the data center monitoring and management environment200. In certain embodiments, the anomaly may be related to unusual or unexpected behavior of one or more data center assets244. In certain embodiments, a data center monitoring and management operation may include a data center management task. As used herein, a data center management task broadly refers to any function, operation, procedure, or process performed, directly or indirectly, within a data center monitoring and management environment200to manage a particular data center asset244. In certain embodiments, a data center management task may include a data center deployment operation, a data center remediation operation, a data center remediation documentation operation, a connectivity management operation, or a combination thereof. As used herein, a data center deployment operation broadly refers to any function, task, procedure, or process performed, directly or indirectly, within a data center monitoring and management environment200to install a software file, such as a configuration file, a new software application, a version of an operating system, and so forth, on a data center asset244.
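The parameter-based alerting described above (e.g., a failing cooling fan driving a server's operational temperature past its rated level) can be sketched in a few lines. The threshold value and alert fields are illustrative design choices, which, as the text notes, are left to the implementer.

```python
# Minimal sketch of threshold-based alert generation: an asset whose
# operational status exceeds a defined parameter generates an alert
# providing notification of a data center issue. Values are illustrative.
def check_operational_status(asset_id, temperature_c, rated_max_c=75.0):
    """Return an alert dict if the reading exceeds the rated level, else None."""
    if temperature_c > rated_max_c:
        return {"asset": asset_id,
                "issue": "temperature above rated level",
                "reading": temperature_c,
                "limit": rated_max_c}
    return None

print(check_operational_status("server-42", 68.0))   # None: within parameters
print(check_operational_status("server-42", 81.5))   # alert dict
```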
As likewise used herein, a data center remediation operation broadly refers to any function, task, procedure, or process performed, directly or indirectly, within a data center monitoring and management environment200to correct an operational situation associated with a component of a data center monitoring and management environment200, which if not corrected, may result in negative consequences. A data center remediation documentation operation, as likewise used herein, broadly refers to any function, task, procedure, or process performed, directly or indirectly, within a data center monitoring and management environment200to retrieve, generate, revise, update, or store remediation documentation that may be used in the performance of a data center remediation operation. Likewise, as used herein, a connectivity management operation (also referred to as a data center connectivity management operation) broadly refers to any task, function, procedure, or process performed, directly or indirectly, to manage connectivity between a particular data center asset244and a particular data center monitoring and management console118. In various embodiments, one or more connectivity management operations may be performed to ensure that data exchanged between a particular data center asset244and a particular data center monitoring and management console118during a communication session is secured. In certain of these embodiments, as described in greater detail herein, various cryptographic approaches familiar to skilled practitioners of the art may be used to secure a particular communication session. In certain embodiments, the data center monitoring and management console118may be implemented to receive an alert corresponding to a particular data center issue. In various embodiments, the data center monitoring and management console118may be implemented to receive certain data associated with the operation of a particular data center asset244.
In certain embodiments, such operational data may be received through the use of telemetry approaches familiar to those of skill in the art. In various embodiments, the data center monitoring and management console118may be implemented to process certain operational data received from a particular data center asset to determine whether a data center issue has occurred, is occurring, or is anticipated to occur. In certain embodiments, the data center monitoring and management console118may be implemented to include a monitoring module120, a management module122, an analysis engine124, a connectivity management system (CMS)126, and a telemetry aggregation system (TAS)130, or a combination thereof. In certain embodiments, the monitoring module120may be implemented to monitor the procurement, deployment, implementation, operation, management, maintenance, or remediation of a particular data center asset244at any point in its lifecycle. In certain embodiments, the management module122may be implemented to manage the procurement, deployment, implementation, operation, monitoring, maintenance, or remediation of a particular data center asset244at any point in its lifecycle. In various embodiments, the monitoring module120, the management module122, the analysis engine124, the CMS126, and the TAS130may be implemented, individually or in combination with one another, to perform a data center asset monitoring and management operation, as likewise described in greater detail herein. In various embodiments, a CMS client136may be implemented on certain user devices204, or certain data center assets244, or a combination thereof. In various embodiments, the CMS126may be implemented in combination with a particular CMS client136to perform a connectivity management operation, as described in greater detail herein. In certain of these embodiments, the CMS126may likewise be implemented with the TAS130to perform the connectivity management operation.
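The telemetry processing described above, where the console aggregates operational data per asset and decides whether a data center issue has occurred, can be sketched as follows. The deviation-from-baseline rule and its threshold are assumptions for illustration; the patent does not prescribe a particular detection method.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical sketch of telemetry aggregation: readings are collected
# per asset, and an issue is flagged when the latest reading deviates
# sharply from the historical mean. The factor 1.5 is an assumed value.
class TelemetryAggregator:
    def __init__(self, deviation_factor=1.5):
        self.samples = defaultdict(list)
        self.deviation_factor = deviation_factor

    def record(self, asset_id, value):
        self.samples[asset_id].append(value)

    def has_issue(self, asset_id):
        history = self.samples[asset_id]
        if len(history) < 3:
            return False  # too little telemetry to judge
        baseline = mean(history[:-1])
        return history[-1] > baseline * self.deviation_factor

tas = TelemetryAggregator()
for v in (40, 42, 41, 90):   # last reading is anomalous
    tas.record("server-7", v)
print(tas.has_issue("server-7"))  # True
```

A real aggregator would also have to cope with assets that emit telemetry at different rates and granularities, the challenge noted earlier in this description.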
In certain embodiments, the data center monitoring and management environment200may include a repository of data center monitoring and management data220. In certain embodiments, the repository of data center monitoring and management data220may be local to the information handling system100executing the data center monitoring and management console118or may be located remotely. In various embodiments, the repository of data center monitoring and management data220may include certain information associated with data center asset data222, data center asset configuration rules224, data center infrastructure data226, data center remediation data228, and data center personnel data230. As used herein, data center asset data222broadly refers to information associated with a particular data center asset244, such as an information handling system100, or an associated workload, that can be read, measured, and structured into a usable format. For example, data center asset data222associated with a particular server may include the number and type of processors it can support, their speed and architecture, minimum and maximum amounts of memory supported, various storage configurations, the number, type, and speed of input/output channels and ports, and so forth. In various embodiments, the data center asset data222may likewise include certain performance and configuration information associated with a particular workload, as described in greater detail herein. In various embodiments, the data center asset data222may include certain public or proprietary information related to data center asset244configurations associated with a particular workload. In certain embodiments, the data center asset data222may include information associated with data center asset244types, quantities, locations, use types, optimization types, workloads, performance, support information, and cost factors, or a combination thereof, as described in greater detail herein.
In certain embodiments, the data center asset data222may include information associated with data center asset244utilization patterns, likewise described in greater detail herein. In certain embodiments, the data center asset data222may include information associated with the allocation of certain data center asset resources, described in greater detail herein, to a particular workload. As likewise used herein, a data center asset configuration rule224broadly refers to a rule used to configure a particular data center asset244. In certain embodiments, one or more data center asset configuration rules224may be used to verify that a particular data center asset244configuration is the most optimal for an associated location, or workload, or to interact with other data center assets244, or a combination thereof, as described in greater detail herein. In certain embodiments, the data center asset configuration rule224may be used in the performance of a data center asset configuration verification operation, a data center remediation operation, or a combination of the two. In certain embodiments, the data center asset configuration verification operation, or the data center remediation operation, or both, may be performed by an asset configuration system250. In certain embodiments, the asset configuration system250may be used in combination with the data center monitoring and management console118to perform a data center asset configuration operation, or a data center remediation operation, or a combination of the two. As used herein, data center infrastructure data226broadly refers to any data associated with a data center infrastructure component.
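The configuration verification described above, where one or more configuration rules are applied against a particular asset's configuration, can be sketched as a rule check. The specific rules below (minimum memory, RAID configured) are assumptions for the example, not rules taken from the patent.

```python
# Illustrative sketch of a configuration verification operation: each
# rule checks one aspect of an asset's configuration, and verification
# passes only when every rule is satisfied.
def verify_configuration(asset_config, rules):
    """Return the list of rule names the configuration violates."""
    return [name for name, check in rules if not check(asset_config)]

rules = [
    ("min_memory_gb", lambda c: c.get("memory_gb", 0) >= 64),
    ("raid_configured", lambda c: c.get("raid_level") is not None),
]

config = {"memory_gb": 128, "raid_level": None}
print(verify_configuration(config, rules))  # ['raid_configured']
```

In the scheme the text describes, a non-empty violation list could then trigger a data center remediation operation.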
As likewise used herein, a data center infrastructure component broadly refers to any component of a data center monitoring and management environment200that may be involved, directly or indirectly, in the procurement, deployment, implementation, configuration, operation, monitoring, management, maintenance, or remediation of a particular data center asset244. In certain embodiments, data center infrastructure components may include physical structures, such as buildings, equipment racks and enclosures, network and electrical cabling, heating, cooling, and ventilation (HVAC) equipment and associated ductwork, electrical transformers and power conditioning systems, water pumps and piping systems, smoke and fire suppression systems, physical security systems and associated peripherals, and so forth. In various embodiments, data center infrastructure components may likewise include the provision of certain services, such as network connectivity, conditioned airflow, electrical power, and water, or a combination thereof. Data center remediation data228, as used herein, broadly refers to any data associated with the performance of a data center remediation operation, described in greater detail herein. In certain embodiments, the data center remediation data228may include information associated with the remediation of a particular data center issue, such as the date and time an alert was received indicating the occurrence of the data center issue. In certain embodiments, the data center remediation data228may likewise include the amount of elapsed time before a corresponding data center remediation operation was begun after receiving the alert, and the amount of elapsed time before it was completed.
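The remediation timing data described above (the time an alert was received, the elapsed time before remediation began, and the elapsed time before it completed) reduces to simple timestamp arithmetic. The field names are illustrative, not from the patent.

```python
from datetime import datetime, timedelta

# Sketch of the remediation timing metrics kept as remediation data:
# two elapsed intervals measured from the moment the alert was received.
def remediation_metrics(alert_at, begun_at, completed_at):
    """Return the elapsed times before remediation was begun and completed."""
    return {
        "time_to_begin": begun_at - alert_at,
        "time_to_complete": completed_at - alert_at,
    }

alert_at = datetime(2024, 1, 5, 9, 0)
metrics = remediation_metrics(alert_at,
                              alert_at + timedelta(minutes=15),
                              alert_at + timedelta(hours=2))
print(metrics["time_to_begin"])     # 0:15:00
print(metrics["time_to_complete"])  # 2:00:00
```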
In various embodiments, the data center remediation data228may include information related to certain data center issues, the frequency of their occurrence, their respective causes, error codes associated with such data center issues, the respective location of each data center asset244associated with such data center issues, and so forth. In various embodiments, the data center remediation data228may include information associated with data center asset244replacement parts, or upgrades, or certain third party services that may need to be procured in order to perform the data center remediation operation. Likewise, in certain embodiments, related data center remediation data228may include the amount of elapsed time before the replacement parts, or data center asset244upgrades, or third party services were received and implemented. In certain embodiments, the data center remediation data228may include information associated with data center personnel who may have performed a particular data center remediation operation. Likewise, in certain embodiments, related data center remediation data228may include the amount of time the data center personnel actually spent performing the operation, issues encountered in performing the operation, and the eventual outcome of the operation that was performed. In certain embodiments, the data center remediation data228may include remediation documentation associated with performing a data center asset remediation operation associated with a particular data center asset244. In various embodiments, such remediation documentation may include information associated with certain attributes, features, characteristics, functional capabilities, operational parameters, and so forth, of a particular data center asset244. 
In certain embodiments, such remediation documentation may likewise include information, such as step-by-step procedures and associated instructions, video tutorials, diagnostic routines and tests, checklists, and so forth, associated with remediating a particular data center issue. In certain embodiments, the data center remediation data228may include information associated with any related remediation dependencies, such as other data center remediation operations that may need to be performed beforehand. In certain embodiments, the data center remediation data228may include certain time restrictions when a data center remediation operation, such as rebooting a particular server, may be performed. In various embodiments, the data center remediation data228may likewise include certain autonomous remediation rules, described in greater detail herein. In various embodiments, certain of these autonomous remediation rules may be used in the performance of an autonomous remediation operation, described in greater detail herein. Those of skill in the art will recognize that many such examples of data center remediation data228are possible. Accordingly, the foregoing is not intended to limit the spirit, scope, or intent of the invention. Data center personnel data230, as used herein, broadly refers to any data associated with data center personnel who may be directly, or indirectly, involved in the procurement, deployment, configuration, implementation, operation, monitoring, management, maintenance, or remediation of a particular data center asset244. In various embodiments, the data center personnel data230may include job title, work assignment, or responsibility information corresponding to certain data center personnel. In various embodiments, the data center personnel data230may include information related to the type, and number, of data center remediation operations currently being, or previously, performed by certain data center personnel. 
In various embodiments, the data center personnel data230may include historical information, such as success metrics, associated with data center remediation operations performed by certain data center personnel, such as data center administrators, operators, and technicians. In these embodiments, the data center personnel data230may be updated as individual data center personnel complete each data center remediation task, described in greater detail herein, they are assigned. In various embodiments, the data center personnel data230may likewise include education, certification, and skill level information corresponding to certain data center personnel. Likewise, in various embodiments, the data center personnel data230may include security-related information, such as security clearances, user IDs, passwords, security-related biometrics, authorizations, and so forth, corresponding to certain data center personnel. Those of skill in the art will recognize that many such examples of data center personnel data230are possible. Accordingly, the foregoing is not intended to limit the spirit, scope, or intent of the invention. In certain embodiments, various data center assets244within a data center monitoring and management environment200may have certain interdependencies. As an example, a data center monitoring and management environment200may have multiple servers interconnected by a storage area network (SAN) providing block-level access to various disk arrays and tape libraries. In this example, the servers, various physical and operational elements of the SAN, as well as the disk arrays and tape libraries, are interdependent upon one another. In certain embodiments, each data center asset244in a data center monitoring and management environment200may be treated as a separate data center asset244and depreciated individually according to its respective attributes.
As an example, a particular rack of servers in a data center monitoring and management environment200may be made up of a variety of individual servers, each of which may have a different depreciation schedule. To continue the example, certain of these data center assets244may be implemented in different combinations to produce an end result. To further illustrate the example, a particular server in the rack of servers may initially be implemented to query a database of customer records. As another example, the same server may be implemented at a later time to perform an analysis of sales associated with those same customer records. In certain embodiments, each data center asset244in a data center monitoring and management environment200may have an associated maintenance schedule and service contract. For example, a data center monitoring and management environment200may include a wide variety of servers and storage arrays, which may respectively be manufactured by a variety of manufacturers. In this example, the frequency and nature of scheduled maintenance, as well as service contract terms and conditions, may be different for each server and storage array. In certain embodiments, the individual data center assets244in a data center monitoring and management environment200may be configured differently, according to their intended use. To continue the previous example, various servers may be configured with faster or additional processors for one intended workload, while other servers may be configured with additional memory for other intended workloads. Likewise, certain storage arrays may be configured as one RAID configuration, while others may be configured as a different RAID configuration. In certain embodiments, the data center monitoring and management environment200may likewise be implemented to include an asset configuration system250, a product configuration system252, a product fabrication system254, and a supply chain system256, or a combination thereof.
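The per-asset depreciation described above, where servers in the same rack may each follow a different schedule, can be illustrated with a small calculation. Straight-line depreciation is an assumed method chosen for the example; the patent does not specify one.

```python
# Hypothetical sketch of depreciating each data center asset
# individually according to its own schedule (straight-line method).
def book_value(purchase_cost, salvage_value, useful_life_years, years_elapsed):
    """Straight-line book value; never falls below the salvage value."""
    annual = (purchase_cost - salvage_value) / useful_life_years
    return max(purchase_cost - annual * years_elapsed, salvage_value)

# Two servers in the same rack with different depreciation schedules:
print(book_value(10000, 1000, 3, 2))  # 4000.0
print(book_value(10000, 1000, 5, 2))  # 6400.0
```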
In various embodiments, the asset configuration system250may be implemented to perform certain data center asset244configuration operations. In certain embodiments, the data center asset244configuration operation may be performed to configure a particular data center asset244for a particular purpose. In certain embodiments, the data center monitoring and management console118may be implemented to interact with the asset configuration system250to perform a particular data center asset244configuration operation. In various embodiments, the asset configuration system250may be implemented to generate, manage, and provide, or some combination thereof, data center asset configuration rules224. In certain of these embodiments, the data center asset configuration rules224may be used to configure a particular data center asset244for a particular purpose. In certain embodiments, a user202may use a user device204to interact with the data center monitoring and management console118. As used herein, a user device204refers to an information handling system such as a personal computer, a laptop computer, a tablet computer, a personal digital assistant (PDA), a smart phone, a mobile telephone, or other device that is capable of processing and communicating data. In certain embodiments, the communication of the data may take place in real-time or near-real-time. As used herein, real-time broadly refers to processing and providing information within a time interval brief enough to not be discernable by a user202. In certain embodiments, a user device204may be implemented with a camera206, such as a video camera known to skilled practitioners of the art. In certain embodiments, the camera206may be integrated into the user device204. In certain embodiments, the camera206may be implemented as a separate device configured to interoperate with the user device204. 
As an example, a webcam familiar to those of skill in the art may be implemented to receive and communicate various image and audio signals to a user device204via a Universal Serial Bus (USB) interface. In certain embodiments, the user device204may be configured to present a data center monitoring and management console user interface (UI)240. In certain embodiments, the data center monitoring and management console UI240may be implemented to present a graphical representation242of data center asset monitoring and management information, which is automatically generated in response to interaction with the data center monitoring and management console118. In certain embodiments, a data center monitoring and management application238may be implemented on a particular user device204. In various embodiments, the data center monitoring and management application238may be implemented on a mobile user device204, such as a laptop computer, a tablet computer, a smart phone, a dedicated-purpose mobile device, and so forth. In certain of these embodiments, the mobile user device204may be used at various locations within the data center monitoring and management environment200by the user202when performing a data center monitoring and management operation, described in greater detail herein. In various embodiments, the data center monitoring and management application238may be implemented to facilitate a user202, such as a data center administrator, operator, or technician, to perform a particular data center remediation operation. In various embodiments, such facilitation may include using the data center monitoring and management application238to receive a notification of a data center remediation task, described in greater detail herein, being assigned to the user. 
In certain embodiments, the data center monitoring and management console118may be implemented to generate the notification of the data center remediation task assignment, and assign it to the user, as likewise described in greater detail herein. In certain embodiments, the data center monitoring and management console118may be implemented to generate the data center remediation task, and once generated, provide it to the data center monitoring and management application238associated with the assigned user202. In certain embodiments, such facilitation may include using the data center monitoring and management application238to receive the data center remediation task from the data center monitoring and management console118. In various embodiments, such facilitation may include using the data center monitoring and management application238to confirm that the user202is at the correct physical location of a particular data center asset244associated with a corresponding data center issue. In certain of these embodiments, the data center monitoring and management application238may be implemented to include certain Global Positioning System (GPS) capabilities, familiar to those of skill in the art, which may be used to determine the physical location of the user202in relation to the physical location of a particular data center asset244. In various embodiments, such facilitation may include using the data center monitoring and management application238to ensure the user202is aware of, or is provided the location of, or receives, or a combination thereof, certain remediation resources, described in greater detail herein, that may be needed to perform a particular data center remediation operation. In various embodiments, such facilitation may include using the data center monitoring and management application238to view certain remediation documentation, or augmented instructions, related to performing a particular data center remediation operation. 
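The GPS-based confirmation described above, in which the physical location of the user202is compared against the physical location of a particular data center asset244, can be sketched as a simple proximity check. The distance formula is the standard haversine great-circle computation; the tolerance value is an illustrative assumption.

```python
import math

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in meters between two GPS coordinates."""
    r = 6_371_000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def at_asset_location(user_pos: tuple, asset_pos: tuple, tolerance_m: float = 25.0) -> bool:
    """Confirm the user is within `tolerance_m` meters of the asset's recorded position.

    The 25 m default is a hypothetical tolerance, roughly a data center room.
    """
    return haversine_m(*user_pos, *asset_pos) <= tolerance_m
```

A mobile application could run such a check before allowing the user to certify that a remediation operation was performed at the correct location.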
In various embodiments, such facilitation may include using the data center monitoring and management application238to certify that a particular data center remediation operation has been performed successfully. In certain embodiments, the UI window240may be implemented as a UI window of the data center monitoring and management application238. In various embodiments, the data center monitoring and management application238may be implemented to include, in part or in whole, certain functionalities associated with the data center monitoring and management console118. In certain embodiments, the data center monitoring and management application238may be implemented to interact in combination with the data center monitoring and management console118, and other components of the data center monitoring and management environment200, to perform a data center monitoring and management operation. In certain embodiments, the user device204may be used to exchange information between the user202and the data center monitoring and management console118, the data center monitoring and management application238, the asset configuration system250, the product configuration system252, the product fabrication system254, and the supply chain system256, or a combination thereof, through the use of a network140. In various embodiments, the asset configuration system250may be implemented to configure a particular data center asset244to meet certain performance goals. In various embodiments, the asset configuration system250may be implemented to use certain data center monitoring and management data220, certain data center asset configuration rules224it may generate or manage, or a combination thereof, to perform such configurations. In various embodiments, the product configuration system252may be implemented to use certain data center monitoring and management data220to optimally configure a particular data center asset244, such as a server, for an intended workload. 
In various embodiments, the data center monitoring and management data220used by the product configuration system252may have been generated as a result of certain data center monitoring and management operations, described in greater detail herein, being performed by the data center monitoring and management console118. In various embodiments, the product configuration system252may be implemented to provide certain product configuration information to a product fabrication system254. In various embodiments, the product fabrication system254may be implemented to provide certain product fabrication information to a product fabrication environment (not shown). In certain embodiments, the product fabrication information may be used by the product fabrication environment to fabricate a product, such as a server, to match a particular data center asset244configuration. In various embodiments, the data center monitoring and management console UI240may be presented via a website (not shown). In certain embodiments, the website may be provided by one or more of the data center monitoring and management console118, the asset configuration system250, the product configuration system252, the product fabrication system254, or the supply chain system256. In certain embodiments, the supply chain system256may be implemented to manage the provision, fulfillment, or deployment of a particular data center asset244produced in the product fabrication environment. For the purposes of this disclosure a website may be defined as a collection of related web pages which are identified with a common domain name and published on at least one web server. A website may be accessible via a public IP network or a private local network. A web page is a document which is accessible via a browser which displays the web page via a display device of an information handling system. In various embodiments, the web page also includes the file which causes the document to be presented via the browser. 
In various embodiments, the web page may comprise a static web page, which is delivered exactly as stored, or a dynamic web page, which is generated by a web application that is driven by software that enhances the web page via user input208to a web server. In certain embodiments, the data center monitoring and management console118may be implemented to interact with the asset configuration system250, the product configuration system252, the product fabrication system254, and the supply chain or fulfillment system256, or a combination thereof, each of which in turn may be executing on a separate information handling system100. In certain embodiments, the data center monitoring and management console118may be implemented to interact with the asset configuration system250, the product configuration system252, the product fabrication system254, and the supply chain or fulfillment system256, or a combination thereof, to perform a data center monitoring and management operation, as described in greater detail herein. FIG.3shows a functional block diagram of the performance of certain data center monitoring and management operations implemented in accordance with an embodiment of the invention. In various embodiments, a data center monitoring and management environment200, described in greater detail herein, may be implemented to include one or more data centers, such as data centers ‘1’346through ‘n’348. As likewise described in greater detail herein, each of the data centers ‘1’346through ‘n’348may be implemented to include one or more data center assets244, likewise described in greater detail herein. In certain embodiments, a data center asset244may be implemented to process an associated workload360. A workload360, as used herein, broadly refers to a measure of information processing that can be performed by one or more data center assets244, individually or in combination with one another, within a data center monitoring and management environment200. 
In certain embodiments, a workload360may be implemented to be processed in a virtual machine (VM) environment, familiar to skilled practitioners of the art. In various embodiments, a workload360may be implemented to be processed as a containerized workload360, likewise familiar to those of skill in the art. In certain embodiments, as described in greater detail herein, the data center monitoring and management environment200may be implemented to include a data center monitoring and management console118. In certain embodiments, the data center monitoring and management console118may be implemented to include a monitoring module120, a management module122, an analysis engine124, a connectivity management system (CMS)126, and a telemetry aggregation system (TAS)130, or a combination thereof, as described in greater detail herein. In various embodiments, a CMS client136, described in greater detail herein, may be implemented on certain user devices ‘A’304through ‘x’314, or certain data center assets244, or within data centers ‘1’346through ‘n’348, or a combination thereof. In certain embodiments, the CMS126may be implemented in combination with a particular CMS client136to perform a connectivity management operation, as likewise described in greater detail herein. As described in greater detail herein, the data center monitoring and management console118may be implemented in certain embodiments to perform a data center monitoring and management operation. In certain embodiments, the data center monitoring and management console118may be implemented to provide a unified framework for the performance of a plurality of data center monitoring and management operations, by a plurality of users, within a common user interface (UI). 
In certain embodiments, the data center monitoring and management console118, and other components of the data center monitoring environment200, such as the asset configuration system250, may be implemented to be used by a plurality of users, such as users ‘A’302through ‘x’312shown inFIG.3. In various embodiments, certain data center personnel, such as users ‘A’302through ‘x’312, may respectively interact with the data center monitoring and management console118, and other components of the data center monitoring and management environment200, through the use of an associated user device ‘A’304through ‘x’314. In certain embodiments, such interactions may be respectively presented to users ‘A’302through ‘x’312within a user interface (UI) window306through316, corresponding to user devices ‘A’304through ‘x’314. In certain embodiments the UI window306through316may be implemented in a window of a web browser, familiar to skilled practitioners of the art. In certain embodiments, a data center monitoring and management application (MMA)310through320, described in greater detail herein, may be respectively implemented on user devices ‘A’304through ‘x’314. In certain embodiments the UI window306through316may be respectively implemented as a UI window of the data center MMA310through320. In certain embodiments, the data center MMA310through320may be implemented to interact in combination with the data center monitoring and management console118, and other components of the data center monitoring and management environment200, to perform a data center monitoring and management operation. In certain embodiments, the interactions with the data center monitoring and management console118, and other components of the data center monitoring and management environment200, may respectively be presented as a graphical representation308through318within UI windows306through316. 
In various embodiments, such interactions may be presented to users ‘A’302through ‘x’312via a display device324, such as a projector or large display screen. In certain of these embodiments, the interactions may be presented to users ‘A’302through ‘x’312as a graphical representation348within a UI window336. In certain embodiments, the display device324may be implemented in a command center350, familiar to those of skill in the art, such as a command center350typically found in a data center or a network operations center (NOC). In various embodiments, one or more of the users ‘A’302through ‘x’312may be located within the command center350. In certain of these embodiments, the display device324may be implemented to be generally viewable by one or more of the users ‘A’302through ‘x’312. In certain embodiments, the data center monitoring and management operation may be performed to identify the location350of a particular data center asset244. In certain embodiments, the location350of a data center asset244may be physical, such as the physical address of its associated data center, a particular room in a building at the physical address, a particular location in an equipment rack in that room, and so forth. In certain embodiments, the location350of a data center asset244may be non-physical, such as a network address, a domain, a Uniform Resource Locator (URL), a file name in a directory, and so forth. Certain embodiments of the invention reflect an appreciation that it is not uncommon for large organizations to have one or more data centers, such as data centers ‘1’346through ‘n’348. Certain embodiments of the invention reflect an appreciation that it is likewise not uncommon for such data centers to have multiple data center system administrators and data center technicians. 
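The physical and non-physical forms of a data center asset location350described above can be captured in a single record with optional fields for each form. The field names below are illustrative assumptions, not part of the disclosed implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AssetLocation:
    """Illustrative location record for a data center asset (244).

    Physical fields identify a rack position; non-physical fields identify
    the asset on the network.
    """
    data_center: Optional[str] = None      # physical: the data center's address or name
    room: Optional[str] = None             # physical: a room in a building at that address
    rack: Optional[str] = None             # physical: an equipment rack in that room
    rack_unit: Optional[int] = None        # physical: a position within the rack
    network_address: Optional[str] = None  # non-physical: e.g., an IP address or domain
    url: Optional[str] = None              # non-physical: e.g., a management URL

    def is_physical(self) -> bool:
        # True when at least one physical-location field is populated.
        return any(v is not None for v in
                   (self.data_center, self.room, self.rack, self.rack_unit))
```

A monitoring and management operation could populate either set of fields, or both, when identifying where an asset resides.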
Likewise, various embodiments of the invention reflect an appreciation that it is common for a data center system administrator to be responsible for planning, initiating, and overseeing the execution of certain data center monitoring and management operations. Certain embodiments of the invention reflect an appreciation that it is common for a data center system administrator, such as user ‘A’302, to assign a particular data center monitoring and management operation to a data center technician, such as user ‘x’312, as a task to be executed. Certain embodiments of the invention reflect an appreciation that it is likewise common for a data center administrator, such as user ‘A’302, to assume responsibility for performing a particular data center monitoring and management operation. As an example, a data center administrator may receive a stream of data center alerts, each of which is respectively associated with one or more data center issues. To continue the example, several of the alerts may have an initial priority classification of “critical.” However, the administrator may notice that one such alert may be associated with a data center issue that is more critical, or time sensitive, than the others and should be remediated as quickly as possible. Accordingly, the data center administrator may elect to assume responsibility for remediating the data center issue, and as a result, proceed to perform an associated data center remediation operation at that time instead of assigning it to other data center personnel. Certain embodiments of the invention reflect an appreciation that the number of data center assets244in a particular data center ‘1’346through ‘n’348may be quite large. Furthermore, it is not unusual for such data center assets244to be procured, deployed, configured, and implemented on a scheduled, or as needed, basis. 
It is likewise common for certain existing data center assets244to be replaced, upgraded, reconfigured, maintained, or remediated on a scheduled, or as-needed, basis. Likewise, certain embodiments of the invention reflect an appreciation that such replacements, upgrades, reconfigurations, maintenance, or remediation may be oriented towards hardware, firmware, software, connectivity, or a combination thereof. For example, a data center system administrator may be responsible for the creation of data center asset244procurement, deployment, configuration, and implementation templates, firmware update bundles, operating system (OS) and software application stacks, and so forth. Likewise, a data center technician may be responsible for receiving a procured data center asset244, transporting it to a particular data center asset location350in a particular data center ‘1’346through ‘n’348, and implementing it in that location350. The same, or another, data center technician may then be responsible for configuring the data center asset244, establishing network connectivity, applying configuration files, and so forth. To continue the example, the same, or another, data center administrator or technician may be responsible for remediating hardware issues, such as replacing a disc drive in a server or Redundant Array of Independent Disks (RAID) array, or software issues, such as updating a hardware driver or the version of a server's operating system. Accordingly, certain embodiments of the invention reflect an appreciation that a significant amount of coordination may be needed between data center system administrators and data center technicians to assure efficient and reliable operation of a data center. In various embodiments, certain data center monitoring and management operations may include a data center remediation operation, described in greater detail herein. 
In certain embodiments, a data center remediation operation may be performed to remediate a particular data center asset244issue at a particular data center asset location350in a particular data center ‘1’346through ‘n’348. In certain embodiments, the data center remediation operation may be performed to ensure that a particular data center asset location350in a particular data center ‘1’346through ‘n’348is available for the replacement or upgrade of an existing data center asset244. As an example, a data center remediation operation may involve deployment of a replacement server that occupies more rack space than the server it will be replacing. In various embodiments, the data center monitoring and management console118, or the data center monitoring and management application310through320, or a combination of the two, may be implemented in a failure tracking mode to capture certain data center asset244telemetry. In various embodiments, the data center asset244telemetry may include data associated with the occurrence of certain events, such as the failure, or anomalous performance, of a particular data center asset244, or an associated workload360, in whole, or in part. In certain embodiments, the data center asset244telemetry may be captured incrementally to provide a historical perspective of the occurrence, and evolution, of an associated data center issue. In various embodiments, the data center monitoring and management console118may likewise be implemented to generate certain remediation operation notes. For example, the data center monitoring and management console118may enter certain data center asset244remediation instructions in the data center remediation operation notes. 
In various embodiments, the data center remediation operation notes may be implemented to contain information related to data center asset244replacement or upgrade parts, data center asset244files that may be needed, installation and configuration instructions related to such files, the physical location350of the data center asset244, and so forth. In certain embodiments, a remediation task344may be generated by associating the previously-generated data center remediation operation notes with the remediation documentation, data center asset files, or other remediation resources342most pertinent to the data center issue, and the administrator, and any data center personnel selected for its remediation. As used herein, a data center remediation task344broadly refers to one or more data center remediation operations, described in greater detail herein, that can be assigned to one or more users ‘A’302through ‘x’312. Certain embodiments of the invention reflect an appreciation that a group of data center personnel, such as users ‘A’302through ‘x’312, will likely possess different skills, certifications, levels of education, knowledge, experience, and so forth. As a result, remediation documentation that is suitable for certain data center personnel may not be suitable for others. For example, a relatively inexperienced data center administrator may be overwhelmed by a massive volume of detailed and somewhat arcane minutiae related to the configuration and administration of multiple virtual machines (VMs) on a large server. However, such remediation documentation may be exactly what a highly skilled and experienced data center administrator needs to remediate subtle server and VM configuration issues. Conversely, the same highly skilled and experienced data center administrator may be hampered, or slowed down, by being provided remediation documentation that is too simplistic, generalized, or high-level for the data center issue they may be attempting to remediate. 
Likewise, an administrator who is moderately skilled in configuring VMs may benefit from having step-by-step instructions, and corresponding checklists, when remediating a VM-related data center issue. Accordingly, as used herein, pertinent remediation documentation broadly refers to remediation documentation applicable to a corresponding data center issue that is most suited to the skills, certifications, level of education, knowledge, experience, and so forth of the data center personnel assigned to its remediation. In various embodiments, the data center monitoring and management console118may be implemented to generate a corresponding notification of the remediation task344. In certain embodiments, the resulting notification of the remediation task344assignment may be provided to the one or more users ‘A’302through ‘x’312assigned to perform the remediation task344. In certain embodiments, the notification of the remediation task344assignment may be respectively provided to the one or more users ‘A’302through ‘x’312within the UI306through316of their respective user devices ‘A’304through ‘x’314. In certain embodiments, the notification of the remediation task344assignment, and the remediation task344itself, may be implemented such that they are only visible to the users ‘A’302through ‘x’312to which they are assigned. In certain embodiments, the data center monitoring and management console118may be implemented to operate in a monitoring mode. As used herein, monitoring mode broadly refers to a mode of operation where certain monitoring information provided by the data center monitoring and management console118is available for use by one or more users ‘A’302through ‘x’312. In certain embodiments, one or more of the users ‘A’302through ‘x’312may be command center350users. In certain embodiments, the data center monitoring and management console118may be implemented to operate in a management mode. 
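The selection of pertinent remediation documentation, i.e., documentation matched to the skill level of the assigned personnel, can be sketched as a simple matching rule. The skill-level scale, the pairing of levels with documents, and the fallback behavior below are all hypothetical assumptions for illustration.

```python
def select_documentation(docs: list, assignee_skill_level: int) -> str:
    """Pick the documentation entry whose target skill level is closest to,
    without exceeding, the assignee's skill level.

    `docs` is a list of (skill_level, doc_name) pairs, where levels run from
    1 (novice) to 5 (expert). Documentation pitched above the assignee's level
    is excluded; among the rest, the most advanced entry is chosen.
    """
    eligible = [d for d in docs if d[0] <= assignee_skill_level]
    if not eligible:
        # Nothing simple enough exists: fall back to the most basic document.
        return min(docs)[1]
    return max(eligible)[1]

# Hypothetical documentation set for a VM-related data center issue.
vm_docs = [
    (1, "step-by-step checklist"),
    (3, "VM configuration guide"),
    (5, "advanced VM internals reference"),
]
```

Under this rule a moderately skilled administrator receives the step-by-step material, while an expert is not slowed down by it, which mirrors the mismatch scenarios described in the surrounding text.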
As used herein, management mode broadly refers to a mode of operation where certain operational functionality of the data center monitoring and management console118is available for use by a user, such as users ‘A’302through ‘x’312. FIG.4shows a block diagram of a connectivity management system implemented in accordance with an embodiment of the invention. In various embodiments, a data center monitoring and management console118, described in greater detail herein, may be implemented to include a connectivity management system (CMS)126, a telemetry aggregation system (TAS)130, and one or more data center services432, or a combination thereof. In various embodiments, the CMS126may be implemented individually, or in combination with a particular CMS client136to perform a connectivity management operation, likewise described in greater detail herein. In various embodiments, one or more connectivity management operations may be performed to initiate, and manage, secure, bi-directional, real-time connectivity between a data center monitoring and management console118and a particular data center asset244, each of which are likewise described in greater detail herein. In various embodiments, the TAS130may likewise be implemented, individually or in combination with a particular TAS plug-in410, to perform a telemetry aggregation operation. In certain embodiments, the CMS126and the TAS130may likewise be implemented in combination with one another to perform a particular connectivity management operation, or a particular telemetry aggregation operation, or a combination of the two. As used herein, a telemetry aggregation operation broadly refers to any function, operation, procedure, or process performed, directly or indirectly, to monitor, collect, aggregate, and analyze, or a combination thereof, certain telemetry and other information associated with the operational status of one or more data center assets244. 
Skilled practitioners of the art will be familiar with the concept of telemetry, which in general usage refers to the automated measurement and collection of data from remote sources. In various embodiments, collecting telemetry information associated with a particular data center asset244may involve the measurement, and subsequent analysis, of certain electrical data (e.g., voltage, current, etc.), physical data (e.g., temperature, pressure, etc.), computational data (e.g., processing throughput, utilization of processor, memory, and network resources, etc.), the status and efficiency of certain workloads, and so forth. Those of skill in the art will recognize that many such examples of data center asset244telemetry information are possible. Accordingly, the foregoing is not intended to limit the spirit, scope, or intent of the invention. In various embodiments, the data center monitoring and management console118may be implemented in a cloud environment familiar to skilled practitioners of the art. In certain of these embodiments, the operator of the data center monitoring and management console118may offer its various functionalities and capabilities in the form of one or more cloud-based data center services432, described in greater detail herein. In various embodiments, one or more data center assets244may be implemented within a data center402, likewise described in greater detail herein. In certain of these embodiments, the data center402may reside on the premises of a user of one or more data center services432provided by the operator of the data center monitoring and management console118. In various embodiments, the connectivity management system126may be implemented to include one or more CMS aggregators128, one or more CMS services422, and a service mesh proxy434, or a combination thereof. In various embodiments, the CMS aggregator128may be implemented to interact with one or more of the CMS services422, as described in greater detail herein. 
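The electrical, physical, and computational telemetry categories described above suggest a straightforward sample structure, with aggregation reduced to a per-metric summary. The metric names, the callback interface, and the use of a simple average are all illustrative assumptions, not the disclosed telemetry aggregation system.

```python
import time

def collect_telemetry(asset_id: str, read_sensor) -> dict:
    """Collect one telemetry sample for a data center asset.

    `read_sensor(name)` is a hypothetical callback returning the current value
    of a named measurement on the asset.
    """
    return {
        "asset_id": asset_id,
        "timestamp": time.time(),
        "electrical": {"voltage_v": read_sensor("voltage"),
                       "current_a": read_sensor("current")},
        "physical": {"inlet_temp_c": read_sensor("inlet_temp")},
        "computational": {"cpu_util_pct": read_sensor("cpu_util"),
                          "mem_util_pct": read_sensor("mem_util")},
    }

def aggregate(samples: list, section: str, metric: str) -> float:
    """Average one metric over a series of samples - a minimal aggregation step."""
    values = [s[section][metric] for s in samples]
    return sum(values) / len(values)
```

In this sketch a plug-in on the asset would call `collect_telemetry` periodically and forward the samples for aggregation and analysis.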
In various embodiments, the data center services432may likewise be implemented to interact with one or more of the CMS services422, and the service mesh proxy434, or a combination thereof. In certain embodiments, the CMS services422may be implemented to include a CMS discovery424service, a CMS authentication426service, a CMS inventory428service, and a CMS authorization430service, or a combination thereof. In various embodiments, one or more data center assets244may be implemented within a data center402, described in greater detail herein. In certain embodiments, the data center402may be implemented to include an associated data center firewall416. In certain embodiments, a CMS client136, or a TAS plug-in module410, or both, may be implemented on one or more data center assets244. In various embodiments, the TAS plug-in module410may be implemented to collect certain telemetry information associated with the data center asset244. In various embodiments, the telemetry information collected by the TAS plug-in module410may be used by the TAS plug-in module410, or the TAS130, or both, to perform a telemetry aggregation operation. In various embodiments, the telemetry information collected by the TAS plug-in module410may be provided to the TAS130via the CMS client136. In various embodiments, a CMS client136or a TAS plug-in module410, or both, implemented on one data center asset244may likewise be implemented to enable one or more connectivity management operations, or one or more telemetry aggregation operations, or a combination thereof, associated with one or more other data center assets444that are not respectively implemented with their own CMS client136or TAS plug-in module410. In certain of these embodiments, the CMS client136, or the TAS plug-in module410, or both, may be implemented to assume the identity, and attributes, of a particular data center asset it is directly, or indirectly, associated with. 
In various embodiments, the CMS client136may be implemented with a proxy management module406. In certain of these embodiments, the proxy management module406may be implemented to manage the CMS client's136connectivity to an external network140through an intermediary proxy server, or the data center firewall416, or both. Those of skill in the art will be familiar with a proxy server, which as typically implemented, is a server application that acts as an intermediary between a client, such as a web browser, requesting a resource, such as a web page, from a provider of that resource, such as a web server. In certain embodiments, the client of a proxy server may be a particular data center asset244requesting a resource, such as a particular data center service432, from the data center monitoring and management console118. Skilled practitioners of the art will likewise be aware that in typical proxy server implementations, a client may direct a request to a proxy server, which evaluates the request and performs the network transactions needed to forward the request to a designated resource provider. Accordingly, the proxy server functions as a relay between the client and a server, and as such acts as an intermediary. Those of skill in the art will be aware that proxy servers also assist in preventing an attacker from invading a private network, such as one implemented within a data center402to provide network connectivity to, and between, certain data center assets244. Skilled practitioners of the art will likewise be aware that proxy servers are often implemented in combination with a firewall, such as the data center firewall416. In such implementations, the proxy server, by acting as an intermediary, effectively hides an internal network from the Internet, while the firewall prevents unauthorized access by blocking certain ports and programs. 
Accordingly, a firewall may be configured to allow traffic emanating from a proxy server to pass through to an external network140, while blocking all other traffic from an internal network. Conversely, a firewall may likewise be configured to allow network140traffic emanating from a trusted source to pass through to an internal network, while blocking traffic from unknown or untrusted external sources. As an example, the data center firewall416may be configured in various embodiments to allow traffic emanating from the CMS client136to pass, while the service provider firewall420may be configured to allow traffic emanating from the CMS aggregator128to pass. Likewise, the service provider firewall420may be configured in various embodiments to allow incoming traffic emanating from the CMS client136to be received, while the data center firewall416may be configured to allow incoming network traffic emanating from the CMS aggregator128to be received. In various embodiments, a particular CMS aggregator128may be implemented in combination with a particular CMS client136to provide a split proxy that allows an associated data center asset244to securely communicate with a data center monitoring and management console118. In various embodiments, the split proxy may be implemented in a client/server configuration. In certain of these embodiments, the CMS client136may be implemented as the client component of the client/server configuration and the CMS aggregator128may be implemented as the server component. In certain of these embodiments, one or more connectivity management operations may be respectively performed by the CMS aggregator128and the CMS client136to establish a secure tunnel connection418through a particular network140, such as the Internet. In various embodiments, the secure tunnel connection418may be initiated by the CMS client136first determining the address of the CMS aggregator128it intends to connect to. 
In these embodiments, the method by which the address of the CMS aggregator128is determined is a matter of design choice. Once the address of the CMS aggregator128is determined, the CMS client136uses it to establish a secure Hypertext Transfer Protocol (HTTPS) connection with the CMS aggregator128itself. In response, the CMS aggregator128sets its HTTPS Transport Layer Security (TLS) configuration to “request TLS certificate” from the CMS client136, which triggers the CMS client136to provide its requested TLS certificate408. In certain embodiments, the CMS authentication426service may be implemented to generate and provision the TLS certificate408for the CMS client136. In certain embodiments, the CMS client136may be implemented to generate a self-signed TLS certificate if it has not yet been provisioned with one from the CMS authentication426service. In various embodiments, the CMS client136may then provide an HTTP header with a previously-provisioned authorization token. In certain embodiments, the authorization token may have been generated and provisioned by the CMS authentication426service once the CMS client136has been claimed. As used herein, a claimed CMS client136broadly refers to a particular CMS client136that has been bound to an account associated with a user, such as a customer, of one or more data center services432provided by the data center monitoring and management console118. In certain embodiments, a CMS client136may be implemented to maintain its claimed state by renewing its certificate408and being provided an associated claim token. In these embodiments, the frequency, or conditions under which, a CMS client's certificate408is renewed, or the method by which it is renewed, or both, is a matter of design choice. Likewise, in these same embodiments, the frequency, or conditions under which, an associated claim token is generated, or the method by which it is provided to a CMS client136, or both, is a matter of design choice. 
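The client-side setup described above — use the CA-provisioned TLS certificate when one is available, fall back to a self-signed certificate otherwise, and attach the previously-provisioned authorization token as an HTTP header — can be sketched in Python. This is an illustrative sketch only: the function names, the bearer-token header format, and the fallback behavior are assumptions for illustration, not details taken from the embodiment.

```python
import ssl


def build_client_context(provisioned_cert=None, provisioned_key=None):
    """Prepare the client-side TLS context: load the CA-provisioned
    certificate when available, otherwise fall back to a self-signed
    one (certificate generation itself is elided here)."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    if provisioned_cert and provisioned_key:
        ctx.load_cert_chain(certfile=provisioned_cert, keyfile=provisioned_key)
        source = "provisioned"
    else:
        source = "self-signed"  # client would generate its own certificate here
    return ctx, source


def auth_headers(token):
    """HTTP header carrying the previously-provisioned authorization token
    (the Bearer scheme is an assumption, not specified by the text)."""
    return {"Authorization": f"Bearer {token}"}


ctx, source = build_client_context()
headers = auth_headers("example-token")
```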
In various embodiments, the CMS client136may be implemented to have a stable, persistent, and unique identifier (ID) after it is claimed. In certain of these embodiments, the CMS client's136unique ID may be stored within the authorization token. In these embodiments, the method by which the CMS client's136unique ID is determined, and the method by which it is stored within an associated authorization token, is a matter of design choice. Once the CMS client136has been claimed, it may be implemented to convert the HTTPS connection to a Websocket connection, familiar to those of skill in the art. After the HTTPS connection has been converted to a Websocket connection, tunnel packet processing is initiated and the CMS aggregator128may then perform a Representational State Transfer (REST) request to the CMS client136to validate its certificate408. In certain embodiments, the validation of the CMS client's136certificate408is performed by the CMS authorization430service. In various embodiments, the validation of the CMS client's136certificate408is performed to determine a trust level for the CMS client136. In certain of these embodiments, if the CMS client's136certificate408is validated, then it is assigned a “trusted” classification. Likewise, if the CMS client's136certificate408fails to be validated, then it is assigned an “untrusted” classification. Accordingly, certain embodiments of the invention reflect an appreciation that “trusted” and “claimed,” as used herein as they relate to a CMS client136, are orthogonal. More specifically, “trusted” means that the channel of communication can be guaranteed. Likewise, “claimed” means that the CMS client136can be authenticated and bound to a user, or customer, of one or more data center services432provided by the data center monitoring and management console118. 
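The orthogonality of the “trusted” and “claimed” states described above can be modeled as a small data structure: each flag varies independently of the other. The class and its fields are a hypothetical illustration, not part of the embodiment; only the state names come from the text.

```python
from dataclasses import dataclass


@dataclass
class ClientState:
    """'Trusted' and 'claimed' are orthogonal: trusted means the channel
    of communication can be guaranteed (certificate validated), while
    claimed means the client is bound to a user account."""
    trusted: bool = False
    claimed: bool = False

    def label(self):
        t = "trusted" if self.trusted else "untrusted"
        c = "claimed" if self.claimed else "unclaimed"
        return f"{t}/{c}"


state = ClientState()          # a new client starts untrusted/unclaimed
state.trusted = True           # certificate validated -> trusted/unclaimed
state.claimed = True           # bound to a customer account -> trusted/claimed
```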
In various embodiments, the resulting secure tunnel connection418may be implemented to provide a secure channel of communication through a data center firewall416associated with a particular data center402and a service provider firewall420associated with a particular data center monitoring and management console118. In various embodiments, the CMS client136, the secure tunnel connection418, and the CMS aggregator128may be implemented to operate at the application level of the Open Systems Interconnection (OSI) model, familiar to those of skill in the art. Skilled practitioners of the art will likewise be aware that known approaches to network tunneling typically use the network layer of the OSI model. In certain embodiments, the CMS client136and the CMS aggregator128may be implemented to send logical events over the secure tunnel connection418to encapsulate and multiplex individual connection streams and associated metadata. In various embodiments, the CMS discovery424service may be implemented to identify certain data center assets244to be registered and managed by the data center monitoring and management console118. In various embodiments, the CMS discovery424service may be implemented to detect certain events published by a CMS aggregator128. In certain embodiments, the CMS discovery424service may be implemented to maintain a database (not shown) of the respective attributes of all CMS aggregators128and CMS clients136. In certain embodiments, the CMS discovery424service may be implemented to track the relationships between individual CMS clients136and the CMS aggregators128they may be connected to. In various embodiments, the CMS discovery424service may be implemented to detect CMS client136connections and disconnections with a corresponding CMS aggregator128. In certain of these embodiments, a record of such connections and disconnections is stored in a database (not shown) associated with the CMS inventory428service. 
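The encapsulation and multiplexing of individual connection streams and their metadata over a single shared tunnel, as described above, can be sketched as simple frame encoding and decoding. The JSON framing, field names, and helper functions here are illustrative assumptions, not the embodiment's actual wire format.

```python
import json


def encapsulate(stream_id, payload, **metadata):
    """Wrap one logical stream's payload, with its stream ID and
    metadata, into a single frame for the shared tunnel."""
    return json.dumps({"stream": stream_id, "meta": metadata, "data": payload})


def demultiplex(frames):
    """Group frames received over the tunnel back into per-stream
    payload lists, restoring each logical connection stream."""
    streams = {}
    for frame in frames:
        msg = json.loads(frame)
        streams.setdefault(msg["stream"], []).append(msg["data"])
    return streams


# Two logical streams share one tunnel; hypothetical payloads.
frames = [
    encapsulate(1, "GET /health", kind="http"),
    encapsulate(2, "SELECT 1", kind="sql"),
    encapsulate(1, "GET /metrics", kind="http"),
]
recovered = demultiplex(frames)
```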
In various embodiments, the CMS discovery424service may be implemented to detect CMS aggregator128start-up and shut-down events. In certain of these embodiments, a record of related Internet Protocol (IP) addresses and associated state information may be stored in a database (not shown) associated with the CMS inventory428service. In various embodiments, the CMS authentication426service may be implemented to include certain certificate authority (CA) capabilities. In various embodiments, the CMS authentication426service may be implemented to generate a certificate408for an associated CMS client136. In various embodiments, the CMS authentication426service may be implemented to use a third party CA for the generation of a digital certificate for a particular data center asset244. In certain embodiments, the CMS inventory428service may be implemented to maintain an inventory of each CMS aggregator128by an associated unique ID. In certain embodiments, the CMS inventory428service may likewise be implemented to maintain an inventory of each CMS client136by an associated globally unique identifier (GUID). In various embodiments, the CMS authorization430service may be implemented to authenticate a particular data center asset244by requesting certain proof of possession information, and then processing it once it is received. In certain of these embodiments, the proof of possession information may include information associated with whether or not a particular CMS client136possesses the private keys corresponding to an associated certificate408. In various embodiments, the CMS authorization430service may be implemented to authenticate a particular CMS client136associated with a corresponding data center asset244. In certain of these embodiments, the CMS authorization430service may be implemented to perform the authentication by examining a certificate408associated with the CMS client136to ensure that it has been signed by the CMS authentication426service. 
In various embodiments, the service mesh proxy434may be implemented to integrate knowledge pertaining to individual data center assets244into a service mesh such that certain data center services432have a uniform method of transparently accessing them. In various embodiments, the service mesh proxy434may be implemented with certain protocols corresponding to certain data center assets244. In certain embodiments, the service mesh proxy434may be implemented to encapsulate and multiplex individual connection streams and metadata over the secure tunnel connection418. In certain embodiments, these individual connection streams and metadata may be associated with one or more data center assets244, one or more data center services432, one or more CMS clients136, and one or more CMS aggregators128, or a combination thereof. FIGS.5athrough5dare a sequence diagram showing the performance of certain connectivity management operations implemented in accordance with an embodiment of the invention. In this embodiment, the CMS client136establishes a secure Hypertext Transfer Protocol (HTTPS) connection with the CMS aggregator128in step502, as described in greater detail herein, followed by the provision of its temporary client ID and its previously-provisioned digital certificate to the CMS aggregator in step504. The CMS aggregator128then provides the CMS client136ID and (self-signed) digital certificate to the CMS authorization430service for authentication in step506. Once the CMS client's136credentials have been validated in step508, notification of their validation is provided to the CMS aggregator128by the CMS authorization430service in step510. In response, the CMS aggregator128announces a new CMS client136to the CMS inventory428service in step512, followed by the CMS aggregator128notifying the CMS client136that its digital certificate has been validated in step514. 
The CMS client136then collects certain information from the data center asset244in step516, followed by establishing a secure tunnel connection with the CMS aggregator128in step518, which is then multiplexed in step520, as described in greater detail herein. Thereafter, the CMS client136announces itself to the CMS aggregator128and provides it the collected data center asset information in step522. In turn, the CMS aggregator128announces the CMS client136as being in an untrusted/unclaimed state, first to the CMS inventory428service in step524, and then to the CMS authorization430service in step526. In turn, the CMS authorization430service then requests the CMS aggregator128to provide proof of possession in step528. In response, the CMS aggregator128authenticates the proof of possession request in step530and the CMS authentication426service generates a CMS-signed digital certificate in step532. The resulting CMS-signed digital certificate is then provided by the CMS authentication426service to the CMS aggregator128in step534. In turn, the CMS aggregator128respectively provides the proof of possession and the CMS-signed digital certificate to the CMS client136in steps536and538. In response, the CMS client136announces itself to be in a trusted/unclaimed state to the CMS aggregator128in step540. In turn, the CMS aggregator128announces the CMS client136to be in a trusted/unclaimed state to the CMS authorization430service in step542and to the CMS inventory428service in step544. The CMS authentication426service then determines ownership of the CMS client136in step546, followed by the CMS aggregator128providing certain location information associated with the management server to the CMS client136in step548. In turn, the CMS client136requests an ownership voucher from the CMS authentication426service in step550. In response, the CMS authorization430service generates an ownership voucher in step552and provides it to the CMS client136in step554. 
Once it receives the ownership voucher, the CMS client136respectively announces itself as trusted/claimed to the CMS authorization430service and the CMS inventory428service in steps556and558. FIG.6is a simplified block diagram showing certain telemetry aggregation operations performed in accordance with an embodiment of the invention to improve the collection and aggregation of backlogged telemetry information from one or more data center assets. Various embodiments of the invention reflect an appreciation that a telemetry aggregation system (TAS)130, described in greater detail herein, may be implemented to collect and aggregate certain telemetry information from one or more data center assets, likewise described in greater detail herein. Various embodiments of the invention reflect an appreciation that a TAS may likewise be implemented to provide the telemetry information it may collect and aggregate to certain other applications and data center services, as described in greater detail herein, for predictive analysis and further processing. However, various embodiments of the invention likewise reflect an appreciation that the provision of such telemetry information may be backlogged due to a variety of reasons, such as loss in network connectivity, or a particular data center asset being unavailable due to routine maintenance, and so forth. Accordingly, various embodiments of the invention reflect an appreciation that current approaches to collecting, aggregating, and providing backlogged telemetry information while meeting service level agreement (SLA) parameters and objectives and minimizing data loss may pose certain challenges. For example, such approaches are primarily oriented towards concepts such as time-to-live, where data is active only until a particular time and is then discarded completely, or applying backpressure on its producer. 
As a result, the limitations of such approaches include the inability to deliver telemetry information based upon prior history and the inability to dynamically readjust processing. In various embodiments, a TAS130may be implemented to perform one or more periodic telemetry aggregation operations to collect and aggregate certain telemetry information from one or more data center assets. As used herein, a periodic telemetry aggregation operation broadly refers to a telemetry aggregation operation, described in greater detail herein, performed on a periodic basis to collect and aggregate pertinent telemetry information associated with certain data center assets. In various embodiments, the TAS130may likewise be implemented to provide the aggregated telemetry information it has collected on a periodic basis to a data center monitoring and management console, likewise described in greater detail herein, for monitoring, management, and analysis. In various embodiments, backlogged telemetry information may be divided into segments based upon certain criteria, such as time series or standardized data segment sizes (e.g., 1 MB, 50 MB, etc.). Likewise, in various embodiments, an average data transfer rate (x) may be calculated from the past ‘n’ successful telemetry information segment transfers. In various embodiments, known time series forecasting approaches, such as the autoregressive integrated moving average (ARIMA) model, or the open source Prophet library, may be used to process the past ‘n’ number of telemetry information segment transfers. In certain of these embodiments, forecasted data transfer rates may be selected to avoid data loss. In various embodiments, the mean of the last ‘n’ number of telemetry information segment transfers, or certain network Quality of Service (QoS) information, may be used until enough data is generated to forecast the associated data transfer rate. 
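The two preparatory steps above — dividing backlogged telemetry into standardized segment sizes, and falling back to the mean of recent transfers until enough history exists for a real time-series forecast (e.g., ARIMA) — can be sketched as follows. The function names and the default rate are hypothetical, introduced only for illustration.

```python
def segment_backlog(total_mb, segment_mb=50):
    """Divide a backlog of total_mb megabytes into standardized
    fixed-size segments, with one smaller trailing segment if needed."""
    full, remainder = divmod(total_mb, segment_mb)
    return [segment_mb] * full + ([remainder] if remainder else [])


def forecast_rate(past_rates_mbps, default_mbps=10.0):
    """Use the mean of the last 'n' observed transfer rates until enough
    data exists to run a real forecasting model; fall back to a default
    (e.g., from network QoS information) when there is no history."""
    if not past_rates_mbps:
        return default_mbps
    return sum(past_rates_mbps) / len(past_rates_mbps)


segments = segment_backlog(120, 50)   # 120 MB backlog, 50 MB segments
rate = forecast_rate([8.0, 12.0])     # mean of two past transfers
```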
Certain embodiments of the invention reflect an appreciation that applying such time series approaches may prove to be of value when network speed is variable due to time of day, bandwidth utilization, and so forth. Accordingly, the forecasted data transfer rate is:

x̄ = ForecastingAlgorithm(past data transfers), in Mbps

and the average segment transfer time, in seconds, is calculated as:

y = (1/n) Σᵢ₌₁ⁿ (sᵢ/x̄) = (1/n)(s₁/x̄ + s₂/x̄ + s₃/x̄ + … + sₙ/x̄)

where sᵢ represents the size of each segment in MB. The total time to transfer ‘i’ backlogged telemetry information segments is then:

Total Time T(i) = i·x̄ + Σⱼ aⱼ, where a₀ = ⌊(i·x̄)/p⌋·x̄, aⱼ = ⌊aⱼ₋₁/p⌋·x̄ for j ≥ 1, p > x̄, and the summation terminates once aⱼ = 0.

For example, with n = 10 backlogged data segments, x̄ = 20 seconds, p = 40 seconds (the transfer interval), and SLA = 120 seconds:

Total Time T(1) = (1·20) + [a₀ = ⌊20/40⌋·20 = 0] = 20 seconds, where T(1) ≤ SLA
Total Time T(2) = (2·20) + [a₀ = ⌊40/40⌋·20 = 20] + [a₁ = ⌊20/40⌋·20 = 0] = 40 + 20 = 60 seconds, where T(2) ≤ SLA
Total Time T(3) = (3·20) + [a₀ = ⌊60/40⌋·20 = 20] + [a₁ = ⌊20/40⌋·20 = 0] = 60 + 20 = 80 seconds, where T(3) ≤ SLA
Total Time T(4) = (4·20) + [a₀ = ⌊80/40⌋·20 = 40] + [a₁ = ⌊40/40⌋·20 = 20] + [a₂ = ⌊20/40⌋·20 = 0] = 80 + 60 = 140 seconds, where T(4) > SLA

Accordingly, the optimal number of telemetry information segments to process is i−1 = 3, and the oldest 7 segments are marked for consideration in the next iteration within an associated SLA limit. Referring now toFIG.6, a calculation may be performed in various embodiments to recursively determine the amount of time it will take to process backlogged telemetry information segments ‘1’614, ‘2’616, and ‘3’618through ‘n’620. In various embodiments, as shown inFIG.6, a TAS130may be implemented to receive and process new telemetry information segments602, as described in greater detail herein. 
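The total-time recursion and its worked example can be reproduced with a short Python sketch (a hypothetical helper written for illustration, not the patent's implementation):

```python
import math


def total_time(i, x_bar, p):
    """Total time T(i) to transfer i backlogged telemetry segments,
    where x_bar is the average per-segment transfer time (seconds)
    and p is the transfer interval (seconds)."""
    t = i * x_bar                                # the i segment transfers
    a = math.floor((i * x_bar) / p) * x_bar      # a_0: overflow past the interval
    while a > 0:
        t += a
        a = math.floor(a / p) * x_bar            # a_j = floor(a_{j-1} / p) * x_bar
    return t


# Worked example from the text: x_bar = 20 s, p = 40 s, SLA = 120 s.
SLA = 120
times = {i: total_time(i, 20, 40) for i in range(1, 5)}
# T(4) exceeds the SLA, so the optimal number of segments to process is 3.
```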
However, as also described in greater detail herein, the TAS130may not be able to process such new telemetry information segments602within a time interval to satisfy the parameters and objectives of an associated SLA. Accordingly, certain of those new telemetry information segments602may then become backlogged telemetry information segments612. For example, as likewise shown inFIG.6, new telemetry information segment ‘1’604may not be able to be processed within a predetermined SLA time interval. Accordingly, it becomes backlogged telemetry information segment ‘1’614when new telemetry information segment ‘2’606is received for processing. As a result, backlogged telemetry information segment ‘1’614is given processing precedence over new telemetry information segment ‘2’606, which then becomes backlogged telemetry information segment ‘2’616. Likewise, new telemetry information segment ‘3’608may not be able to be processed within an associated predetermined SLA time interval. Accordingly, it becomes backlogged telemetry information segment ‘3’618when a subsequent new telemetry information segment is received for processing. As a result, backlogged telemetry information segments ‘1’614and ‘2’616are given processing precedence over new telemetry information segment ‘3’608, which then becomes backlogged telemetry information segment ‘3’618. The process is continued, through new telemetry information segment ‘n’610becoming backlogged telemetry information segment ‘n’620. In various embodiments, data associated with certain new telemetry information segments602and backlogged telemetry information segments612may be processed in step622to determine the average data handling rate ‘x’. In various embodiments, calculation of the average data handling rate ‘x’ includes the average data transfer rate ‘y’ and a data preparation rate ‘z’, which is mostly constant for querying and preparing the telemetry information. 
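The precedence scheme above — backlogged segments are processed before newly arriving ones — amounts to oldest-first (FIFO) ordering, which can be sketched as follows. The class and segment names are illustrative only.

```python
from collections import deque


class TelemetryQueue:
    """Backlogged segments take precedence over newly arriving ones:
    segments are processed oldest-first, so any segment that missed its
    SLA window is handled before newer arrivals."""

    def __init__(self):
        self._segments = deque()

    def arrive(self, segment):
        self._segments.append(segment)    # new segment joins the back

    def next_to_process(self):
        return self._segments.popleft()   # oldest (backlogged) segment first


q = TelemetryQueue()
for name in ["seg1", "seg2", "seg3"]:
    q.arrive(name)
# seg1 became backlogged when seg2 arrived, so it is processed first.
order = [q.next_to_process() for _ in range(3)]
```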
Accordingly: average data handling rate x = y + z, where z is constant. In various embodiments, the time required to transfer new telemetry information segments602and backlogged telemetry information segments612for processing may exceed the parameters and objectives of an associated SLA. If so, then older backlogged telemetry information segments612may be ignored in preference to more recent backlogged telemetry information segments612within the parameters of an associated SLA as follows:

Total Time T(i) = i·x̄ + Σⱼ aⱼ, where a₀ = ⌊(i·x̄)/p⌋·x̄, aⱼ = ⌊aⱼ₋₁/p⌋·x̄ for j ≥ 1, p > x̄, and the summation terminates once aⱼ = 0,

where i = 1, …, n, n is the number of backlogged data segments to process, and p is the data segment transfer interval in seconds, such that: if T > SLA, then n−1 is the optimal number of data segments to process; if T ≤ SLA, then n is the optimal number of data segments to process. In various embodiments, the telemetry information segments may be dynamically adjusted within the parameters of an associated SLA. For example, though it may be determined to initially process ‘i’ telemetry information segments, the same algorithm may be applied after processing each segment in ‘i’ by considering ‘n’ segments to be in the SLA limit. To illustrate the example, suppose the value of ‘n’ is ‘10’. In this example, the algorithm may initially be applied for ‘i’ = 5 segments. To continue the example, after the first telemetry information segment is processed, the algorithm is then reapplied to see whether the value of ‘i’ can be increased or decreased while staying within the parameters of the SLA. Accordingly, if it is determined in step624that i ≤ n, or i = 0, then ‘i’ is the optimal number of segments, and ‘i’ telemetry information segments are dynamically readjusted after each transfer by reapplying the same algorithm to meet the objectives and parameters of any associated SLAs, followed by their provision to one or more data center services432. 
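The selection step described above — choose the largest number of segments whose total transfer time stays within the SLA and defer the rest to the next iteration — can be sketched as follows. This is a hypothetical helper reusing the text's total-time recursion and its example values (x̄ = 20 s, p = 40 s, SLA = 120 s); in practice it would be reapplied after each transfer to readjust ‘i’ dynamically.

```python
import math


def total_time(i, x_bar, p):
    """Total time T(i) for i backlogged segments, per the recursion
    a_0 = floor((i * x_bar) / p) * x_bar, a_j = floor(a_{j-1} / p) * x_bar."""
    t = i * x_bar
    a = math.floor((i * x_bar) / p) * x_bar
    while a > 0:
        t += a
        a = math.floor(a / p) * x_bar
    return t


def optimal_segments(n, x_bar, p, sla):
    """Largest i (1..n) whose total transfer time stays within the SLA;
    older segments beyond i are deferred to the next iteration."""
    best = 0
    for i in range(1, n + 1):
        if total_time(i, x_bar, p) <= sla:
            best = i
        else:
            break
    return best


# With n = 10 backlogged segments, only the 3 most recent fit the SLA;
# the oldest 7 are deferred to the next iteration.
chosen = optimal_segments(10, 20, 40, 120)
```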
FIGS.7aand7bare a generalized flowchart showing certain telemetry aggregation operations performed in accordance with an embodiment of the invention to enhance the collection and aggregation of backlogged telemetry information from one or more data center assets. In this embodiment, backlog telemetry information processing operations are begun in step702, followed by ongoing operations being performed in step704to monitor for backlogged telemetry information. A determination is then made in step706whether backlogged telemetry information has been detected. If not, then a determination is made in step726whether or not to end backlog telemetry information processing operations. If so, then backlog telemetry information processing operations are ended in step728. Otherwise, the process is continued, proceeding with step704. However, if it was determined in step706that backlogged telemetry information was detected, then the average data segment handling rate ‘x’ (e.g., in seconds) is determined in step708. The number of backlogged telemetry data segments ‘n’ is then determined in step710. A determination is then made in step712whether the number of backlogged telemetry data segments ‘i’ is between ‘1’ and ‘n’. If not, then the process is continued, proceeding with step726. However, if it is determined in step712that the number of backlogged telemetry data segments is between ‘1’ and ‘n’, then the time to process backlogged telemetry data segments is determined in step714, where bpt(i) = i·x. The time to process new backlogged telemetry data segments (nbpt) is then recursively determined in step716, as described in greater detail herein. Then, in step718, the total time to process existing and new backlogged telemetry data segments is determined, where TotalTime(i) = bpt(i) + nbpt(i). A determination is then made in step720whether TotalTime(i) > SLA parameters and objectives. 
If so, then the final optimal number of telemetry data segments to be processed is determined in step724, which is ‘i’−1 if ‘i’>1, otherwise, it is ‘0’. The process is then continued, proceeding with step726. However, if it was determined in step720that TotalTime(i) is not greater than SLA parameters and objectives, then a determination is made in step722whether ‘i’>‘n’. If not, then the process is continued, proceeding with step712. Otherwise, the process is continued, proceeding with step724. As will be appreciated by one skilled in the art, the present invention may be embodied as a method, system, or computer program product. Accordingly, embodiments of the invention may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in an embodiment combining software and hardware. These various embodiments may all generally be referred to herein as a “circuit,” “module,” or “system.” Furthermore, the present invention may take the form of a computer program product on a computer-usable storage medium having computer-usable program code embodied in the medium. Any suitable computer usable or computer readable medium may be utilized. The computer-usable or computer-readable medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, or a magnetic storage device. 
In the context of this document, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. Computer program code for carrying out operations of the present invention may be written in an object oriented programming language such as Java, Smalltalk, C++ or the like. However, the computer program code for carrying out operations of the present invention may also be written in conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). Embodiments of the invention are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. 
These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. The present invention is well adapted to attain the advantages mentioned as well as others inherent therein. While the present invention has been depicted, described, and is defined by reference to particular embodiments of the invention, such references do not imply a limitation on the invention, and no such limitation is to be inferred. The invention is capable of considerable modification, alteration, and equivalents in form and function, as will occur to those ordinarily skilled in the pertinent arts. The depicted and described embodiments are examples only, and are not exhaustive of the scope of the invention. 
Consequently, the invention is intended to be limited only by the spirit and scope of the appended claims, giving full cognizance to equivalents in all respects.
11943121
DETAILED DESCRIPTION The following detailed description of example embodiments refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Before explaining at least one embodiment of the disclosure in detail, it is to be understood that the disclosure is not limited in its application to the details of construction, experiments, exemplary data, and/or the arrangement of the components set forth in the following description or illustrated in the drawings unless otherwise noted. The disclosure is capable of other embodiments or of being practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein is for purposes of description and should not be regarded as limiting. As used in the description herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having,” or any other variations thereof, are intended to cover a non-exclusive inclusion. For example, unless otherwise noted, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, “or” refers to an inclusive and not to an exclusive “or”. For example, a condition A or B is satisfied by one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present). In addition, use of the “a” or “an” is employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the inventive concept. This description should be read to include one or more, and the singular also includes the plural unless it is obvious that it is meant otherwise. 
Further, use of the term “plurality” is meant to convey “more than one” unless expressly stated to the contrary. As used herein, qualifiers like “substantially,” “about,” “approximately,” and combinations and variations thereof, are intended to include not only the exact amount or value that they qualify, but also some slight deviations therefrom, which may be due to computing tolerances, computing error, manufacturing tolerances, measurement error, wear and tear, stresses exerted on various parts, and combinations thereof, for example. As used herein, any reference to “one embodiment,” “an embodiment,” “some embodiments,” “one example,” “for example,” or “an example” means that a particular element, feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment and may be used in conjunction with other embodiments. The appearance of the phrase “in some embodiments” or “one example” in various places in the specification is not necessarily all referring to the same embodiment, for example. The use of ordinal number terminology (i.e., “first”, “second”, “third”, “fourth”, etc.) is solely for the purpose of differentiating between two or more items and, unless explicitly stated otherwise, is not meant to imply any sequence or order of importance to one item over another. The use of the term “at least one” or “one or more” will be understood to include one as well as any quantity more than one. In addition, the use of the phrase “at least one of X, Y, and Z” will be understood to include X alone, Y alone, and Z alone, as well as any combination of X, Y, and Z. 
Where a range of numerical values is recited or established herein, the range includes the endpoints thereof and all the individual integers and fractions within the range, and also includes each of the narrower ranges therein formed by all the various possible combinations of those endpoints and internal integers and fractions to form subgroups of the larger group of values within the stated range to the same extent as if each of those narrower ranges was explicitly recited. Where a range of numerical values is stated herein as being greater than a stated value, the range is nevertheless finite and is bounded on its upper end by a value that is operable within the context of the invention as described herein. Where a range of numerical values is stated herein as being less than a stated value, the range is nevertheless bounded on its lower end by a non-zero value. It is not intended that the scope of the invention be limited to the specific values recited when defining a range. All ranges are inclusive and combinable. When values are expressed as approximations, e.g., by use of the antecedent “about,” it will be understood that the particular value forms another embodiment. Reference to a particular numerical value includes at least that particular value, unless the context clearly dictates otherwise. The term “about” when used in reference to numerical ranges, cutoffs, or specific values is used to indicate that the recited values may vary by up to as much as 10% from the listed value. Thus, the term “about” is used to encompass variations of ±10% or less, variations of ±5% or less, variations of ±1% or less, variations of ±0.5% or less, or variations of ±0.1% or less from the specified value. Circuitry, as used herein, may be analog and/or digital components, or one or more suitably programmed processors (e.g., microprocessors) and associated hardware and software, or hardwired logic. Also, “components” may perform one or more functions. 
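The ±10% construction of “about” described above amounts to a simple relative-tolerance check. The following sketch is illustrative only; the helper name is hypothetical and not part of the disclosure.

```python
def within_about(measured: float, nominal: float, tolerance: float = 0.10) -> bool:
    """Return True when `measured` deviates from `nominal` by no more than
    the given fractional tolerance (10% by default, per the text's reading
    of "about"); tighter readings use 0.05, 0.01, 0.005, or 0.001."""
    return abs(measured - nominal) <= tolerance * abs(nominal)

print(within_about(10.9, 10.0))                    # True: within +/-10%
print(within_about(11.5, 10.0))                    # False: deviates by 15%
print(within_about(10.04, 10.0, tolerance=0.005))  # True: within +/-0.5%
```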
The term “component,” may include hardware, such as a processor (e.g., microprocessor), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a combination of hardware and software, and/or the like. The term “processor” as used herein means a single processor or multiple processors working independently or together to collectively perform a task. Software may include one or more computer readable instruction that when executed by one or more component, e.g., a processor, causes the component to perform a specified function. It should be understood that the algorithms described herein may be stored on one or more non-transitory computer-readable medium. Exemplary non-transitory computer-readable mediums may include random access memory (RAM), a read only memory (ROM), a CD-ROM, a hard drive, a solid-state drive, a flash drive, a memory card, a DVD-ROM, a Blu-ray Disc, a disk, an optical drive, combinations thereof, and/or the like. Such non-transitory computer-readable mediums may be electrically based, optically based, magnetically based, and/or the like. Further, the messages described herein may be generated by the components and result in various physical transformations. As used herein, the terms “network-based,” “cloud-based,” and any variations thereof, are intended to include the provision of configurable computational resources on demand via interfacing with a computer and/or computer network, with software and/or data at least partially located on a computer and/or computer network. As used herein, a “route” and/or an “optical route” may correspond to an optical path and/or an optical lightpath. For example, an optical route may specify a path along which light is carried between two or more network entities. Users of optical networks may want to determine one or more performance information associated with the optical network. 
Over time, the user may want to change what performance information is desirable thereby changing the performance metrics requested from a network element of the optical network. Performance information may be difficult to obtain, aggregate, and display. Implementations described herein assist a user in changing the performance metrics and obtaining performance information, such as network information associated with network entities and optical links between the network entities. As used herein, an optical link may be an optical fiber, an optical channel, an optical super-channel, a super-channel group, an optical carrier group, a set of spectral slices, an optical control channel (e.g., sometimes referred to herein as an optical supervisory channel, or an “OSC”), an optical data channel (e.g., sometimes referred to herein as “BAND”), and/or any other optical signal transmission link. In some implementations, an optical link may be an optical super-channel. A super-channel may include multiple channels multiplexed together using wavelength-division multiplexing in order to increase transmission capacity. Various quantities of channels may be combined into super-channels using various modulation formats to create different super-channel types having different characteristics. Additionally, or alternatively, an optical link may be a super-channel group. A super-channel group may include multiple super-channels multiplexed together using wavelength-division multiplexing in order to increase transmission capacity. Additionally, or alternatively, an optical link may be a set of spectral slices. A spectral slice (a “slice”) may represent a spectrum of a particular size in a frequency band (e.g., 12.5 gigahertz (“GHz”), 6.25 GHz, etc.). For example, a 4.8 terahertz (“THz”) frequency band may include 384 spectral slices, where each spectral slice may represent 12.5 GHz of the 4.8 THz spectrum. 
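The slice arithmetic above (a 4.8 THz band divided into 12.5 GHz slices yielding 384 slices) can be checked with a short helper; the function name is a hypothetical illustration, not part of the disclosure.

```python
def slice_count(band_thz: float, slice_ghz: float) -> int:
    """Number of spectral slices of width `slice_ghz` (in GHz) that fit in a
    frequency band of `band_thz` (in THz). 1 THz = 1000 GHz."""
    return int((band_thz * 1000.0) // slice_ghz)

print(slice_count(4.8, 12.5))  # 384 slices, matching the example in the text
print(slice_count(4.8, 6.25))  # 768 slices at the finer 6.25 GHz granularity
```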
A super-channel may include a different quantity of spectral slices depending on the super-channel type. The generation of laser beams for use as optical data carrier signals is explained, for example, in U.S. Pat. No. 8,155,531, entitled “Tunable Photonic Integrated Circuits”, issued Apr. 10, 2012, and U.S. Pat. No. 8,639,118, entitled “Wavelength division multiplexed optical communication system having variable channel spacings and different modulation formats,” issued Jan. 28, 2014, which are hereby fully incorporated in their entirety herein by reference. Referring now to the drawings, and in particular to FIG. 1, shown therein is a diagram of an exemplary embodiment of a system 10 for performance information collection constructed in accordance with the present disclosure. A user 14 may interact with the system 10 using a user device 18 that may be used to request performance information from one or more network element 22, such as a first node 22a and/or a second node 22b of an optical network 26. The user device 18 may communicate with the optical network 26 and/or a cloud-based server 30 via a network 34. In some embodiments, the cloud-based server 30 may comprise a processor and a memory having a data lake that may store copies of performance information such as sensor data, system data, metrics, logs, tracing, etc. in a raw format as well as transformed information that may be used for tasks such as reporting, visualization, analytics, etc. The data lake storing the information (either performance information or transformed information) may include structured data from relational databases, semi-structured data, unstructured data, time-series data, and binary data. The data lake may be a database, a remote accessible storage, or a distributed file system. In some embodiments, the network 34 may be the Internet and/or other network. 
For example, if the network 34 is the Internet, a primary user interface of the system 10 may be delivered through a series of web pages or private internal web pages of a company or corporation, which may be written in hypertext markup language, and accessible by the user device 18. It should be noted that the primary user interface of the system 10 may be another type of interface including, but not limited to, a Windows-based application, a tablet-based application, a mobile web interface, an application running on a mobile device, and/or the like. The network 34 may be almost any type of network. For example, in some embodiments, the network 34 may be a version of an Internet network (e.g., exist in a TCP/IP-based network). In one embodiment, the network 34 is the Internet. It should be noted, however, that the network 34 may be almost any type of network and may be implemented as the World Wide Web (or Internet), a local area network (LAN), a wide area network (WAN), a metropolitan network, a wireless network, a cellular network, a Bluetooth network, a Global System for Mobile Communications (GSM) network, a code division multiple access (CDMA) network, a 3G network, a 4G network, an LTE network, a 5G network, a satellite network, a radio network, an optical network, a cable network, a public switched telephone network, an Ethernet network, combinations thereof, and/or the like. It is conceivable that in the near future, embodiments of the present disclosure may use more advanced networking topologies. Optical network 26 may include any type of network that uses light as a transmission medium. For example, optical network 26 may include a fiber-optic based network, an optical transport network, a light-emitting diode network, a laser diode network, an infrared network, combinations thereof, and/or other types of optical networks. The number of devices and/or networks illustrated in FIG. 1 is provided for explanatory purposes. 
In practice, there may be additional devices and/or networks, fewer devices and/or networks, different devices and/or networks, or differently arranged devices and/or networks than are shown in FIG. 1. Furthermore, two or more of the devices illustrated in FIG. 1 may be implemented within a single device, or a single device illustrated in FIG. 1 may be implemented as multiple, distributed devices. Additionally, or alternatively, one or more of the devices of system 10 may perform one or more functions described as being performed by another one or more of the devices of the system 10. Devices of the system 10 may interconnect via wired connections, wireless connections, or a combination thereof. Referring now to FIG. 2, shown therein is a diagram of an exemplary embodiment of the user device 18 of the system 10 constructed in accordance with the present disclosure. In some embodiments, the user device 18 may include, but is not limited to, implementations as a personal computer, a cellular telephone, a smart phone, a network-capable television set, a tablet, a laptop computer, a desktop computer, a network-capable handheld device, a server, a digital video recorder, a wearable network-capable device, a virtual reality/augmented reality device, and/or the like. In some embodiments, the user device 18 may include one or more input device 50 (hereinafter “input device 50”), one or more output device 54 (hereinafter “output device 54”), one or more processor 58 (hereinafter “processor 58”), one or more communication device 62 (hereinafter “communication device 62”) capable of interfacing with the network 34, one or more non-transitory computer-readable memory 66 (hereinafter “memory 66”) storing processor-executable code and/or software application(s), for example including a web browser capable of accessing a website and/or communicating information and/or data over a wireless or wired network (e.g., the network 34), and/or the like. 
The input device 50, output device 54, processor 58, communication device 62, and memory 66 may be connected via a path 70 such as a data bus that permits communication among the components of user device 18. The memory 66 may store an application 74 that, when executed by the processor 58, causes the user device 18 to perform an action such as communicate with or control one or more component of the user device 18 and/or the network 34. The input device 50 may be capable of receiving information input from the user 14 and/or processor 58, and transmitting such information to other components of the user device 18 and/or the network 34. The input device 50 may include, but is not limited to, implementation as a keyboard, a touchscreen, a mouse, a trackball, a microphone, a camera, a fingerprint reader, an infrared port, a slide-out keyboard, a flip-out keyboard, a cell phone, a PDA, a remote control, a fax machine, a wearable communication device, a network interface, combinations thereof, and/or the like, for example. The output device 54 may be capable of outputting information in a form perceivable by the user 14 and/or processor 58. For example, implementations of the output device 54 may include, but are not limited to, a computer monitor, a screen, a touchscreen, a speaker, a website, a television set, a smart phone, a PDA, a cell phone, a fax machine, a printer, a laptop computer, a haptic feedback generator, combinations thereof, and the like, for example. It is to be understood that in some exemplary embodiments, the input device 50 and the output device 54 may be implemented as a single device, such as, for example, a touchscreen of a computer, a tablet, or a smartphone. It is to be further understood that as used herein the term user (e.g., the user 14) is not limited to a human being, and may comprise a computer, a server, a website, a processor, a network interface, a user terminal, a virtual computer, combinations thereof, and/or the like, for example. 
The network 34 may permit bi-directional communication of information and/or data between the user device 18, the cloud-based server 30, and/or the network element 22. The network 34 may interface with the cloud-based server 30, the user device 18, and/or the network element 22 in a variety of ways. For example, in some embodiments, the network 34 may interface by optical and/or electronic interfaces, and/or may use a plurality of network topologies and/or protocols including, but not limited to, Ethernet, TCP/IP, circuit switched path, combinations thereof, and/or the like. The network 34 may utilize a variety of network protocols to permit bi-directional interface and/or communication of data and/or information between the cloud-based server 30, the user device 18, and/or the network element 22. Referring now to FIG. 3, shown therein is a diagram of an exemplary embodiment of cloud-based server 30 constructed in accordance with the present disclosure. The cloud-based server 30 may include one or more devices that execute one or more microservice in a manner described herein. In the illustrated embodiment, the cloud-based server 30 is provided with one or more database 82 (hereinafter “database 82”), a cloud server software 84, and one or more processor 88 (hereinafter “processor 88”). The cloud server software 84 and the database 82 are stored on a non-transitory computer-readable storage memory 86 (hereinafter “memory 86”) accessible by the processor 88 of the cloud-based server 30. In some embodiments, the cloud-based server 30 may comprise one or more processor 88 working together, or independently, to execute processor-executable code, such as the cloud server software 84, stored on the memory 86. Additionally, each cloud-based server 30 may include at least one input device 90 (hereinafter “input device 90”) and at least one output device 92 (hereinafter “output device 92”). 
Each element of the cloud-based server 30 may be partially or completely network-based or cloud-based, and may or may not be located in a single physical location. The processor 88 may be implemented as a single processor or multiple processors working together, or independently, to execute the cloud server software 84 as described herein. It is to be understood that, in certain embodiments using more than one processor 88, the processors 88 may be located remotely from one another, located in the same location, or comprise a unitary multi-core processor. The processors 88 may be capable of reading and/or executing processor-executable code and/or capable of creating, manipulating, retrieving, altering, and/or storing data structures into the memory 86, such as in the database 82. Exemplary embodiments of the processor 88 may include, but are not limited to, a digital signal processor (DSP), a central processing unit (CPU), a field programmable gate array (FPGA), a microprocessor, a multi-core processor, an application specific integrated circuit (ASIC), combinations thereof, and/or the like, for example. The processor 88 may be capable of communicating with the memory 86 via a path 94 (e.g., data bus). The processor 88 may be capable of communicating with the input device 90 and/or the output device 92. The processor 88 may be further capable of interfacing and/or communicating with the user device 18 and/or the network elements 22 via the network 34 using a communication device 96. For example, the processor 88 may be capable of communicating via the network 34 by exchanging signals (e.g., analog, digital, optical, and/or the like) via one or more ports (e.g., physical or virtual ports) using a network protocol to provide updated information to the application 74 executed on the user device 18. 
The memory 86 may be implemented as a conventional non-transitory memory, such as, for example, random access memory (RAM), CD-ROM, a hard drive, a solid-state drive, a flash drive, a memory card, a DVD-ROM, a disk, an optical drive, combinations thereof, and/or the like, for example. In some embodiments, the memory 86 may be located in the same physical location as the cloud-based server 30, and/or one or more memory 86 may be located remotely from the cloud-based server 30. For example, the memory 86 may be located remotely from the cloud-based server 30 and communicate with the processor 88 via the network 34. Additionally, when more than one memory 86 is used, a first memory 86 may be located in the same physical location as the processor 88, and additional memory 86 may be located in a location physically remote from the processor 88. Additionally, the memory 86 may be implemented as a “cloud” non-transitory computer-readable storage memory (i.e., one or more memory 86 may be partially or completely based on or accessed using the network 34). The input device 90 of the cloud-based server 30 may transmit data to the processor 88 and may be similar to the input device 50 of the user device 18. The input device 90 may be located in the same physical location as the processor 88, or located remotely and/or partially or completely network-based. The output device 92 of the cloud-based server 30 may transmit information from the processor 88 to the user 14, and may be similar to the output device 54 of the user device 18. The output device 92 may be located with the processor 88, or located remotely and/or partially or completely network-based. The memory 86 may store processor-executable code and/or information comprising the database 82 and cloud server software 84. In some embodiments, the cloud server software 84 may be stored as a data structure, such as the database 82 and/or data table, for example, or in non-data structure format such as in a non-compiled text file. 
In one embodiment, the cloud server software 84 is a no-code/low-code analytics tool to analyze data. The data may be retrieved from one or more timeseries database. In some embodiments, the cloud server software 84 may receive a datastream and store the datastream into a timeseries database. In some embodiments, both live data and archival data are stored in the timeseries database. Exemplary no-code/low-code analytics tools may include Grafana (Raintank Inc., dba Grafana Labs, New York, NY) and/or Prometheus database. Grafana may be used to analyze data based on performance metrics, labels, and components associated with nodes 22. Network elements 22 may include one or more devices that gather, process, store, and/or provide information in response to a request in a manner described herein. For example, network elements 22 may include one or more optical data processing and/or traffic transfer devices, such as an optical node, an optical amplifier (e.g., a doped fiber amplifier, an erbium doped fiber amplifier, a Raman amplifier, etc.), an optical add-drop multiplexer (“OADM”), a reconfigurable optical add-drop multiplexer (“ROADM”), a flexibly reconfigurable optical add-drop multiplexer module (“FRM”), an optical source component (e.g., a laser source, or optical laser), an optical source destination (e.g., a laser sink), an optical multiplexer, an optical demultiplexer, an optical transmitter, an optical receiver, an optical transceiver, a photonic integrated circuit, an integrated optical circuit, a computer, a server, a router, a bridge, a gateway, a modem, a firewall, a switch, a network interface card, a hub, and/or any type of device capable of processing and/or transferring optical traffic. In some implementations, network elements 22 may include OADMs and/or ROADMs capable of being configured to add, drop, multiplex, and demultiplex optical signals. 
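As a rough illustration of the timeseries storage described above (live and archival samples kept per metric), the following minimal in-memory store is a sketch under stated assumptions: the class and metric names are hypothetical, and a real deployment would use InfluxDB, Prometheus, or a similar timeseries database rather than a Python dictionary.

```python
import time
from collections import defaultdict

class TimeSeriesStore:
    """Toy stand-in for a timeseries database: each metric name maps to an
    append-only list of (timestamp, value) points, so live samples and
    archival history share one structure."""

    def __init__(self):
        self._series = defaultdict(list)

    def append(self, metric, value, ts=None):
        # Default the timestamp to "now" for live data; archival loads pass ts.
        self._series[metric].append((ts if ts is not None else time.time(), value))

    def query(self, metric, since=0.0):
        # Return all points for `metric` recorded at or after `since`.
        return [(t, v) for (t, v) in self._series[metric] if t >= since]

store = TimeSeriesStore()
store.append("optical_power_dbm", -3.2, ts=100.0)  # archival point
store.append("optical_power_dbm", -3.1, ts=200.0)  # live point
print(store.query("optical_power_dbm", since=150.0))  # [(200.0, -3.1)]
```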
Network elements 22 may process and transmit optical signals to other network elements 22 throughout optical network 26 in order to deliver optical transmissions. Layer 1 specific embodiments of the network element 22 may optionally be provided with additional elements that are not shown in the Figures, such as an optical transceiver, a digital signal processor (DSP), and an additional high-speed integrated circuit (ASIC or FPGA) that is specialized to handle high-speed data frames/packets. Layer 0 specific embodiments of the network element 22 may optionally be provided with additional elements that are not shown in the Figures, such as a Wavelength Selective Switch (WSS), Variable Optical Attenuator (VOA), Erbium Doped Fiber Amplifier (EDFA), or Raman amplifiers, and optical channel monitors, for instance. Referring now to FIG. 4, shown therein is a diagram of an exemplary embodiment of a node 22, such as the first node 22a and/or the second node 22b of FIG. 1, constructed in accordance with the present disclosure. The node 22 generally comprises an embedded device 100 (shown as embedded device 100a and embedded device 100b), a communication device 104 to allow one or more component of the node 22 to communicate to one or more other component of the node 22 or to another node 22 in the system 10 via the network 34, and a controller card 108. In one embodiment, the embedded device 100 includes one or more digital coherent optics module having one or more coherent optical transceiver operable to receive client data from an electrical signal and transmit the client data in an optical signal and/or receive the client data from an optical signal and transmit the client data in an electrical signal, or a combination thereof. In one embodiment, the embedded device 100 may include one or more of the Layer 1 elements and/or Layer 0 elements as detailed above. 
The embedded optical device may have one or more property affecting a function of the embedded device and one or more status indicative of a current state of at least one component of the embedded device. In accordance with the present disclosure, the network element 22 may be a holder, like a chassis, or a contained/logical equipment, like an optical line card within the chassis. In one embodiment, the network element 22 may be a logical entity comprising one or more chassis having one or more pluggable cards (such as one or more embedded device 100 and/or one or more controller card 108) that form the network element 22. For instance, pluggable cards may include traffic carrying (“data plane”) cards (e.g., embedded device 100) that may have customized silicon such as ASICs or FPGAs that process the data plane frames/packets, based on the functionality of the card. Another exemplary traffic carrying card is a router line-card, which has packet processing ASICs or other specialized silicon. Another exemplary embedded device 100 is an optical line card that includes a DSP module and/or optical photonic circuits. Pluggable cards may also refer to control cards (“control and management plane”) that do not process data packets but run all the software that implements the control plane (routing protocols) and management plane (management interfaces such as CLI, NETCONF, gRPC, DHCP, etc.), such as the controller card 108. The controller card 108 typically has an off-the-shelf CPU (such as Intel or ARM) and runs some variant of an operating system (more recently, Linux or QNX or BSD), described below in more detail. Other embedded devices 100 include common cards that may also be added, such as fan trays, power entry modules, and others that provide auxiliary functions of the chassis. It should be noted that the diagram of the node 22 in FIG. 4 is simplified to include one controller card 108 in communication with multiple embedded devices 100. 
It is understood that the node 22 may include more than one controller card 108, and each controller card 108 may be in communication with one or more embedded device 100 via the same or a different communication device 104. The number of devices illustrated in FIG. 4 is provided for explanatory purposes. In practice, there may be additional devices, fewer devices, different devices, or differently arranged devices than are shown in FIG. 4. Furthermore, two or more of the devices illustrated in FIG. 4 may be implemented within a single device, or a single device illustrated in FIG. 4 may be implemented as multiple, distributed devices. Additionally, one or more of the devices illustrated in FIG. 4 may perform one or more functions described as being performed by another one or more of the devices illustrated in FIG. 4. Devices illustrated in FIG. 4 may interconnect via wired connections (e.g., fiber-optic connections). Referring now to FIG. 5, shown therein is an exemplary embodiment of the embedded device 100 constructed in accordance with the present disclosure. In some embodiments, the embedded device 100 may include, but is not limited to, one or more input device 120 (hereinafter “input device 120”), one or more output device 124 (hereinafter “output device 124”), one or more processor 128 (hereinafter “processor 128”), one or more communication device 132 (hereinafter “communication device 132”) operable to interface with the communication device 104, one or more non-transitory computer-readable medium 136 (hereinafter “memory 136”) storing processor-executable code and/or software application(s) (such as a performance monitoring domain agent, PMDA 140, described below in more detail) and a database 148. The input device 120, output device 124, processor 128, communication device 132, and memory 136 may be connected via a path 144 such as a data bus that permits communication among the components of the embedded device 100. 
The input device 120 may be capable of receiving client data and transmitting the client data to other components of the system 10. The input device 120 may include, but is not limited to, implementation as an optical network interface, an electrical network interface, combinations thereof, and/or the like, for example. The output device 124 may be capable of outputting client data. For example, implementations of the output device 124 may include, but are not limited to, implementation as an optical network interface, an electrical network interface, combinations thereof, and/or the like, for example. In some embodiments, the database 148 may be a time series database. The database 148 may be a relational database or a non-relational database. Examples of such databases comprise DB2®, Microsoft® Access, Microsoft® SQL Server, Oracle®, mySQL, PostgreSQL, MongoDB, Apache Cassandra, InfluxDB, Prometheus, Redis, Elasticsearch, TimescaleDB, and/or the like. It should be understood that these examples have been provided for the purposes of illustration only and should not be construed as limiting the presently disclosed inventive concepts. The database 148 can be centralized or distributed across multiple systems. Referring now to FIG. 6, shown therein is an exemplary embodiment of the controller card 108 constructed in accordance with the present disclosure. In some embodiments, the controller card 108 may include, but is not limited to, one or more input device 150 (hereinafter “input device 150”), one or more output device 154 (hereinafter “output device 154”), one or more processor 158 (hereinafter “processor 158”), one or more communication device 162 (hereinafter “communication device 162”) operable to interface with the communication device 104, one or more non-transitory processor-readable memory 166 (hereinafter “memory 166”) storing processor-executable code and/or software application(s) (such as the PMDA 140, described below in more detail) and the database 148. 
The input device 150, output device 154, processor 158, communication device 162, and memory 166 may be connected via a path 170 such as a data bus that permits communication among the components of the controller card 108. The input device 150 may be capable of receiving client data and transmitting the client data to other components of the system 10. The input device 150 may include, but is not limited to, implementation as an optical network interface, an electrical network interface, combinations thereof, and/or the like, for example. The output device 154 may be capable of outputting client data. For example, implementations of the output device 154 may include, but are not limited to, implementation as an optical network interface, an electrical network interface, combinations thereof, and/or the like, for example. The disclosure generally includes the PMDA 140 distributed between various components of the system 10. The PMDA 140 may be stored in the memory 136 and/or the memory 166 and be executed by the processor 128 and/or the processor 158, respectively. In one embodiment, the PMDA 140 may be considered fungible software. For example, the PMDA 140 is compatible with standard container orchestration tools such as Docker Compose and Kubernetes and may be implemented with Docker and/or Kubernetes as a containerized service. In this manner, the design of the PMDA 140 allows for horizontal scaling in a distributed system. In some embodiments, the PMDA 140 uses secure containers, secure (TLS) connections, and/or the like. In some embodiments, the database 148 is a datastore, such as a data lake (described above), a database, a block of memory, and/or the like. Referring now to FIG. 7, shown therein is a process flow diagram of an exemplary embodiment of a telemetry process 200 in accordance with the present disclosure.
The telemetry process 200 generally comprises the steps of: obtaining one or more performance information (step 204); storing the one or more performance information (step 208); and publishing stream data (step 212). In one embodiment, obtaining one or more performance information (step 204) comprises collecting, by the PMDA 140, performance information from one or more other application, for example, another application being executed by the processor 128 and/or the processor 158. For example, the PMDA 140 may request the one or more performance information from one or more other application and, in response to the request, the other application may respond with the one or more performance information. In one embodiment, obtaining one or more performance information (step 204) comprises receiving the one or more performance information from one or more other application, wherein the other application periodically raises one or more event comprising data indicative of the one or more performance information. In one embodiment, obtaining one or more performance information (step 204) comprises subscribing, by the PMDA 140, to one or more message transmitted by one or more other application, where the one or more message transmitted by the one or more other application includes one or more performance information. The one or more message may utilize, for example, a messaging protocol. The messaging protocol may include gRPC (Google, LLC, Mountain View, CA). Additionally, in some embodiments, the one or more message may be encoded based on the protobuf format, i.e., the protobuf encoding standard established by Google, LLC (Mountain View, CA). The protobuf is one example of an open, metadata-based format to encode data in archival logs or streams. In one embodiment, obtaining one or more performance information (step 204) comprises retrieving the one or more performance information from the database 148.
For example, in some embodiments, other applications may store one or more performance information in the database 148. The PMDA 140, executed by one or more processor, may be in communication with the database 148 and retrieve the one or more performance information from the database 148. In one embodiment, obtaining one or more performance information (step 204) comprises receiving, by a performance monitoring collector daemon (PMCD 142), the one or more performance information from the PMDA 140. In other embodiments, the PMCD 142 orchestrates with the PMDA 140 and causes the PMDA 140 to transmit the one or more performance information to the PMCD 142. In one embodiment, obtaining one or more performance information (step 204) further comprises the PMCD 142 being in communication with one or more performance monitoring logger (PMLogger 174) and transmitting, to the PMLogger 174, the one or more performance information. The PMLogger 174 may be processor-executable code and/or a software application stored in the memory 166 and executed by the processor 158. In one embodiment, the PMLogger 174 may communicate, via a performance monitoring proxy (PMProxy 178) and the network 34, with the cloud-based server 30. In one embodiment, the PMCD 142 may obtain the one or more performance information (step 204) from more than one PMDA 140. In one embodiment, obtaining one or more performance information (step 204) may be performed periodically. For example, obtaining one or more performance information (step 204) may be performed every second, every minute, every 5 minutes, every hour, every 12 hours, every day, every week, or every month, or some combination thereof. In one embodiment, obtaining one or more performance information (step 204) may be performed as requested, e.g., as requested by a user or by one or more other application.
In one embodiment, obtaining one or more performance information (step 204) comprises obtaining the one or more performance information based at least in part on selected performance metrics as described below in relation to a metrics configuration process 250 shown in FIG. 8. In one embodiment, storing the one or more performance information (step 208) comprises storing the one or more performance information in the memory 136 if the PMCD 142 is executed by the processor 128 and/or storing the one or more performance information in the memory 166 if the PMCD 142 is executed by the processor 158. In one embodiment, storing the one or more performance information (step 208) comprises storing the one or more performance information from each PMDA 140 received by each PMCD 142 into the memory 166 by the PMLogger 174. In one embodiment, storing the one or more performance information (step 208) comprises storing the one or more performance information in the database 148. In some embodiments, the database 148 is a time-series database and the one or more performance information is stored as time-series data in the database 148. In one embodiment, storing the one or more performance information (step 208) comprises storing the one or more performance information as an archive file. The archive file may be stored in the memory 166 by the processor 158 executing the PMLogger 174. In one embodiment, storing the one or more performance information (step 208) comprises storing, by the processor 158, the one or more performance information in the database 148, wherein the database 148 is a NoSQL database. In one embodiment, storing the one or more performance information (step 208) further comprises using the protobuf schema to publish data in the database 148, for example, when the database 148 is a NoSQL database.
In one embodiment, storing the one or more performance information (step 208) comprises formatting, by the processor 158, the one or more performance information into a predetermined format that is known by both the controller card 108 and each embedded device 100. The predetermined format may be known by each of the PMDA 140, PMCD 142, and PMLogger 174. In one embodiment, the performance information may be stored in the database 148 or in an archive file as performance information formatted with the predetermined format. In one embodiment, publishing stream data (step 212) comprises providing access to the stored one or more performance information from the network 34 via the communication device 162, for example. In some embodiments, publishing stream data (step 212) includes opening or enabling, by the processor 158 executing the PMProxy 178, one or more port on the communication device 162 to allow the communication device 162 to communicate with the cloud-based server 30 via the communication device 96 or the user device 18 via the communication device 62, for example. In one embodiment, publishing stream data (step 212) comprises utilizing one or more hardened performance co-pilot telemetry pipeline, performance co-pilot collector, and/or distribution pipeline. In this manner, the PMCD 142 and the PMProxy 178 are part of a performance co-pilot pipeline. In one embodiment, publishing stream data (step 212) comprises sending a message to a subscriber using the messaging protocol, such as gRPC. The message may be formatted utilizing the protobuf format, for example. Referring now to FIG. 8, shown therein is a process flow diagram for a metrics configuration process 250 constructed in accordance with the present disclosure. The metrics configuration process 250 generally comprises: receiving a metrics metadata file (step 254); parsing the metrics metadata file into selected metrics (step 258); and storing the selected metrics (step 262).
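The obtain/store/publish flow of the telemetry process 200 can be sketched in Python. This is a minimal illustration only: the class names, the sample metric, and the source identifiers are hypothetical stand-ins for the PMDA 140, PMLogger 174, and publishing step, not the patent's implementation.

```python
import time
from collections import defaultdict

class Pmda:
    """Hypothetical performance monitoring domain agent: obtains metrics."""
    def __init__(self, source_name):
        self.source_name = source_name

    def collect(self):
        # Step 204: obtain performance information (values stubbed here)
        return {"source": self.source_name, "timestamp": time.time(),
                "cpu_temp_c": 41.5}

class PmLogger:
    """Hypothetical logger: stores performance information as time series."""
    def __init__(self):
        self.archive = defaultdict(list)  # (source, metric) -> [(ts, value)]

    def store(self, record):
        # Step 208: store each numeric field keyed by (source, metric name)
        for key, value in record.items():
            if isinstance(value, (int, float)) and key != "timestamp":
                self.archive[(record["source"], key)].append(
                    (record["timestamp"], value))

def publish(logger):
    # Step 212: publish stream data (here, serialize the archive as a dict)
    return {f"{src}.{metric}": samples
            for (src, metric), samples in logger.archive.items()}

agents = [Pmda("embedded-100a"), Pmda("embedded-100b")]
logger = PmLogger()
for agent in agents:
    logger.store(agent.collect())
print(sorted(publish(logger)))
```

In a real deployment the publish step would expose the archive over a port via a proxy (as the PMProxy 178 does) rather than returning an in-process dictionary.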
In one embodiment, receiving a metrics metadata file (step 254) comprises receiving, via the network 34, the metrics metadata file. The metrics metadata file comprises one or more performance metric desired to be published. For example, a user may desire an additional performance metric regarding performance information of a component of the node 22, such as a temperature. The user may thus update the metrics metadata file to include a temperature metric, e.g., a particular performance metric. In one embodiment, receiving a metrics metadata file (step 254) comprises receiving a metrics metadata file in a CSV format. The metrics metadata file may include a header identifying each column in the CSV file. In one embodiment, receiving a metrics metadata file (step 254) comprises receiving the metrics metadata file in the CSV format having the header identifying each column in the CSV file. The header may comprise a name for each column in the CSV file and may include, for example, an index ID, a namespace ID, a device ID, an Attribute Name, a message header, a message class name, a value location, and an instance formula. In one embodiment, parsing the metrics metadata file into selected metrics (step 258) comprises transmitting, by the processor 158, to the processor 128 of the embedded device 100, the performance metrics identified in the metrics metadata file as selected metrics. The processor 128 of the embedded device 100, executing the PMCD 142, may select one or more performance information based on the selected metrics, e.g., if the one or more performance information corresponds to at least one of the selected metrics. For example, if one of the selected metrics included a temperature of the processor 128, the PMCD 142 would collect a temperature value from the processor 128 as a performance information and transmit the performance information to the PMLogger 174.
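Parsing a metrics metadata file with the header columns named above can be sketched as follows. The column names follow the text; the row values (namespace, device IDs, attribute names) are invented for illustration.

```python
import csv
import io

# Hypothetical metrics metadata file in the CSV layout described above;
# the header names the columns listed in the text, the rows are examples.
METRICS_CSV = """\
index ID,namespace ID,device ID,Attribute Name,message header,message class name,value location,instance formula
1,ns.hw,100a,cpu_temperature,HDR_TEMP,TempMsg,payload[0],idx
2,ns.hw,100b,fan_speed,HDR_FAN,FanMsg,payload[1],idx
"""

def parse_metrics_metadata(text):
    # Step 258: parse the metrics metadata file into selected metrics,
    # keyed by the header row of the CSV file
    return list(csv.DictReader(io.StringIO(text)))

selected = parse_metrics_metadata(METRICS_CSV)
names = [m["Attribute Name"] for m in selected]
print(names)  # ['cpu_temperature', 'fan_speed']
```

Because the header row drives the parsing, a user can add a new metric row (e.g., a temperature) without changing the parser itself.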
In one embodiment, distributing the selected metrics (step 262) comprises transmitting to each PMDA 140 the selected metrics. In one embodiment, distributing the selected metrics (step 262) comprises updating the selected metrics on each embedded device 100 and the controller card 108. In one embodiment, distributing the selected metrics (step 262) further comprises restarting the PMDA 140. When the PMDA 140 is a containerized service, restarting the PMDA 140 may include destroying, or bringing down, the container executing the PMDA 140 and bringing the container back up, e.g., with docker-compose. In this way, selected metrics may be modified without recompiling software stored in the memory 136 and executed by the processor 128 or stored in the memory 166 and executed by the processor 158. In one embodiment, distributing the selected metrics (step 262) comprises storing the selected metrics in the memory 166. In other embodiments, distributing the selected metrics (step 262) comprises storing the metrics metadata file in the memory 166. In one embodiment, distributing the selected metrics (step 262) comprises transmitting the selected metrics to only one embedded device 100, e.g., to the embedded device 100a, for example. In one embodiment, when the controller card 108 boots up, the controller card 108 will generate the selected metrics from a metrics metadata file stored in the memory 166, or from the selected metrics stored in the memory 166, and transmit the selected metrics to each embedded device 100. In one embodiment, when the controller card 108 boots up, the controller card 108 will request the metrics metadata file, e.g., from the cloud-based server 30, for example. Referring now to FIG. 9, shown therein is a functional diagram of an exemplary embodiment of the system 10 constructed in accordance with the present disclosure.
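The effect of distributing selected metrics, namely that an agent collects only what the current metrics configuration names, can be sketched as a simple filter. The metric names and the raw record are hypothetical; the point is that changing the distributed set changes what is kept without recompiling the collector.

```python
# Hypothetical set of selected metrics, as distributed to each agent
# after the metrics metadata file is parsed (step 258 / step 262).
SELECTED_METRICS = {"cpu_temperature", "fan_speed"}

def filter_performance_info(raw_info, selected=SELECTED_METRICS):
    """Keep only performance information matching the selected metrics."""
    return {name: value for name, value in raw_info.items()
            if name in selected}

raw = {"cpu_temperature": 41.5, "fan_speed": 5200, "uptime_s": 9000}
print(filter_performance_info(raw))
# {'cpu_temperature': 41.5, 'fan_speed': 5200}
```

Restarting a containerized agent with a new `SELECTED_METRICS` set (e.g., by re-reading the metadata file on boot) is what lets the collected metrics change at runtime.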
As shown in FIG. 9, the PMLogger 174 receives performance information from each PMCD 142 of the embedded devices 100a and 100b as well as from the PMCD 142 stored in the memory 166 of the controller card 108. The PMLogger 174 further communicates the performance information to the cloud-based server 30 via the PMProxy 178. Further, when the processor 158 of the controller card 108 generates the selected metrics, the processor 158 may distribute the selected metrics to each PMDA 140 of the controller card 108 and each embedded device 100. As described herein, by utilizing the metrics metadata file to generate performance information, the metrics metadata is defined using a standardized format. From the above description, it is clear that the inventive concept(s) disclosed herein are well adapted to carry out the objects and to attain the advantages mentioned herein, as well as those inherent in the inventive concept(s) disclosed herein. While the embodiments of the inventive concept(s) disclosed herein have been described for purposes of this disclosure, it will be understood that numerous changes may be made and readily suggested to those skilled in the art which are accomplished within the scope and spirit of the inventive concept(s) disclosed herein.
DETAILED DESCRIPTION 5G networks have the capability to support a variety of communication services, such as IoT and eMBB. The increasing flexibility of the networks to support services with diverse requirements may present operational and management challenges. The 5G network management system can therefore benefit from Management Data Analytics (MDA) for improving network performance and efficiency to accommodate and support the diversity of services and requirements. MDA Service(s) (MDAS) can potentially be consumed by various Management Functions (MFs) (e.g., Management Service (MnS) producers/consumers) and Network Functions (NFs) including, for example, Network Data Analytics Function (NWDAF), Self-Organizing Network (SON) functionalities, Network Function Management Function (NFMF), Communication Service Management Function (CSMF), etc. However, the MDA has not yet been completely specified; for example, it is still unclear what MDAS are provided and how they are provided and/or consumed. According to 3GPP TS 28.550 v16.2.0 (2019-09) (“[1]”), MDA can be performed to diagnose ongoing issues impacting the performance of the mobile network and predict any potential issues (e.g., potential failure and/or performance degradation). For diagnosis of network issues, the root cause(s) need to be figured out precisely. One network issue may result in multiple symptoms, such as alarms, performance degradation, user complaints, etc., and the MDAS can analyse these symptoms and provide (or be part of the process of providing, such as SON) the precise root cause indication to the consumer. For instance, repeated coverage holes and interference spots in the radio coverage of a network may result in low data throughput, high packet loss rate and high power consumption, potentially leading to RRC connection setup failures affecting the quality of end user experience.
For prevention of potential network issues (e.g., potential resource shortage), the MDAS can analyse the network status (e.g., measurements related to load and resource usage) in connection with other related data from the past and provide indications of potential issues, so the consumer of the analytics service may take preventative actions to keep the issues from happening. For critical ongoing and potential network issues, the MDAS output can be used to alert the consumer to take immediate actions. The network demand changes more dynamically in 5G networks to support diverse services, and needs to be fulfilled as soon as possible to shorten the time to market for new services while assuring the performance of existing services. The MDAS can make a prediction of the network demand, and the consumer can take necessary actions to fulfil the network demand; for instance, the action could be a network capacity upgrade, deployment of new NF instances, or re-deployment (e.g., relocation) of existing NF instances from a demand-decreasing location to a demand-increasing location. The MDAS may also provide recommendations on the actions besides the indications of the issues or predictions. In addition, the MDAS can also support the optimization of networks, services and functionalities. Furthermore, the MDAS can support the automation of network and service management. However, the scenarios, use cases and potential solutions for the aspects mentioned above have not yet been specified. It is unclear how the output data produced by MDAS can be consumed by the consumer(s). The present disclosure provides the overview, process, MDA relation with SON, and MDA coverage issue analysis use cases. These embodiments may be added to 3GPP technical reference (TR) 28.809 (“[6]”). 1.
MDA Overview MDA is discussed in [1], 3GPP TR 28.861 v1.0.0 (2019-09) (“[2]”), 3GPP TS 28.530 v16.0.0 (2019-09) (“[3]”), 3GPP TS 28.533 v16.1.0 (2019-09) (“[4]”), and 3GPP TR 28.805 v16.0.0 (2019-09) (“[5]”). The MDA provides capabilities of analysing and processing raw data related to network and service events and status (e.g., performance measurements, Trace, Minimization of Drive Tests (MDT), radio link failure (RLF) reports, RRC Connection Establishment Failure (RCEF) reports, quality of experience (QoE) reports, alarms, configuration data, network analytical data, and service experience data from Application Functions (AFs), etc.) to provide analytics reports (including recommended actions) to instruct the necessary actions for network and service operations. The MDA, in conjunction with Artificial Intelligence (AI) and Machine Learning (ML) techniques, brings intelligence and automation to network service management & orchestration. MDA can help to perform management tasks in the preparation, commissioning, operation and termination phases. For example, MDA can support service provisioning by preparing service catalogues, evaluating network requirements for a new service and carrying out a feasibility check. During the operation phase, the MDA can identify ongoing issues impacting the performance of the network and service, and discover in advance potential issues that would cause potential failure and/or performance degradation. The MDA can also assist in predicting the network and service demand to enable timely resource provisioning and deployments, which would allow fast time-to-market network and service deployment.
The MDAS can be consumed by various consumers, for instance, the MFs (e.g., MnS producers/consumers for network and service management), NFs (e.g., NWDAF), SON functions (e.g., Coverage and Capacity Optimization (CCO), etc.), network and service optimization tools/functions, Service Level Specification (SLS) assurance functions, human operators, AFs, etc. The MDA is an enabler for the automation and cognition of network and service management & orchestration. MDAS provides data analytics of different network-related parameters including, for example, load level and/or resource utilisation. For example, the MDAS for an NF can collect the NF's load related performance data, e.g., resource usage status of the NF. The analysis of the collected data may provide a forecast of resource usage information in a predefined future time. This analysis may also recommend appropriate actions, e.g., scaling of resources, admission control, load balancing of traffic, etc. An MDAS for a network slice subnet instance (NSSI) provides NSSI related data analytics. The service may consume the corresponding MDAS of its constituent NFs. The NSSI MDAS may further classify or shape the data in different useful categories and analyse them for different network slice subnet management needs (e.g., scaling, admission control of the constituent NFs, etc.). If an NSSI is composed of multiple other NSSIs, the NSSI MDAS acts as a consumer of the MDAS of the constituent NSSIs for further analysis, e.g., resource usage prediction, failure prediction for an NSSI, etc. An MDAS for a network slice instance (NSI) provides NSI related data analytics. The service may consume the corresponding MDAS of its constituent NSSI(s). The NSI MDAS may further classify or shape the data in different useful categories according to different customer needs, e.g., slice load, constituent NSSI load, communication service loads.
This data can be used for further analysis (e.g., resource usage prediction, failure prediction for an NSI, etc.). In a Service Based Management Architecture (SBMA), the MDAS is an MnS provided by an MnS producer. The SBMA may include one or more MnSs. An MnS is a set of offered capabilities for management and orchestration of network and services. The entity producing an MnS is called the MnS producer. The entity consuming an MnS is called the MnS consumer. An MnS provided by an MnS producer can be consumed by any entity with appropriate authorization and authentication. An MnS producer offers its services via a standardized service interface composed of individually specified MnS components. For purposes of the present disclosure, the MDAF is used to represent the MDAS producer (e.g., the MDAF performs MDA and provides MDAS to its consumers). 2. MDA Functionality FIG. 1 illustrates the functionality provided by MDA according to various embodiments. Depending on the scenario, the MDA may collect data for analysis by acting as an MDAS Consumer (consuming MDASs from other MDAS producers), as an MnS Consumer (consuming MnSs from one or more MnS producers), and/or as an NWDAF subscriber. After analysis, MDA acts as an MDAS Producer to expose the analysis results to MDAS Consumers. The MnS producer shall have the capability of allowing its authorized consumer to obtain information about MnS capabilities. The MDA utilizes the network management data collected from the network including, for example, service, slicing, and/or network function related data and makes the corresponding analytics based on the collected information. For example, the information provided by Performance Management (PM) data analytics services can be used to optimize network performance, and the information provided by Fault Management (FM) data analytics services can be used to predict and prevent failures of the network.
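The nesting described above, where an NSSI composed of other NSSIs consumes the MDAS of its constituents, can be sketched as a recursive aggregation. The class, the load figures (integer percentages), and the slice names are illustrative assumptions, not anything defined by the disclosure.

```python
class Nssi:
    """Hypothetical network slice subnet instance with constituent NSSIs."""
    def __init__(self, name, load=0, constituents=()):
        self.name = name
        self.load = load                      # own load contribution
        self.constituents = list(constituents)

    def analytics(self):
        # An NSSI MDAS acts as a consumer of its constituent NSSIs' MDAS:
        # each constituent reports its analytics, which are aggregated here.
        total = self.load + sum(c.analytics()["load"]
                                for c in self.constituents)
        return {"nssi": self.name, "load": total}

leaf_a = Nssi("nssi-ran", load=30)
leaf_b = Nssi("nssi-cn", load=20)
composite = Nssi("nssi-composite", constituents=[leaf_a, leaf_b])
print(composite.analytics())  # {'nssi': 'nssi-composite', 'load': 50}
```

An NSI MDAS consuming its constituent NSSI(s) follows the same pattern one level up, which is why the recursion terminates at NSSIs with no constituents.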
The MDAS can be deployed at different levels, for example, at a domain level (e.g., Radio Access Network (RAN), core network (CN), Network Slice and/or Network Slice Subnet Instance (NSSI), etc.) and/or in a centralized manner (e.g., at the Public Land Mobile Network (PLMN) level). As examples, the PM data includes measurements collected from NFs, NSSIs and NSIs. Additionally or alternatively, the PM data includes one or more of Radio Resource Control (RRC) connection related measurements (e.g., attempted RRC connection establishments, RRC connection re-establishments, mean/maximum/minimum number of RRC connections, mean/maximum/minimum RRC connection setup times, UE CONTEXT Release data, number of successful RRC connection setups in relation to the time between successful RRC connection setup and last RRC connection release, etc.), Radio Access Bearer (RAB) related measurements (e.g., number of initial E-RABs attempted to setup, releases, modifications, activity, numbers, etc.), handover (HO) related measurements (e.g., intra-RAT/inter-RAT HO measurement reports, number of intra-RAT/inter-RAT HO failures, number of intra-RAT/inter-RAT HO successes, etc.), cell level QoS related measurements (e.g., cell PDCP SDU bit-rate, active UEs, packet delay and drop rate, packet loss rate, IP latency measurements, IP throughput measurements, etc.), radio resource utilization related measurements (e.g., uplink (UL) and/or downlink (DL) physical resource block (PRB) usage, RACH usage, cell unavailable time, transport block (TB) related measurements, power utilization measurements, etc.), paging related measurements, RAN node-related measurements (e.g., RAN node peak/mean processor usage, memory utilization, etc.), radiofrequency (RF) measurements (e.g., Channel Quality Indicator (CQI) distribution, Timing Advance distribution, RSRP measurements and/or statistics, RSRQ measurements and/or statistics, SINR measurements and/or statistics, RLF measurements of Synchronization
Signal Block (SSB) beams, etc.), and/or Charging Data Record (CDR) data. As examples, the CDR data includes one or more of voice service results including voice call drops, Short Message Service (SMS) results, UL/DL Data Volume, Report Time, Presence Reporting Area Status, and/or other like CDR data. 3. MDA Process and Role 3.1. MDA Role in Management Loop The MDA forms a part of the management loop (which can be an open loop or a closed loop (see e.g., 3GPP TS 32.500 v15.0.0 (2018-06-27) (“[32500]”))), and it brings intelligence and generates value by processing and analysis of management and network data, where AI and ML techniques may be utilized. The MDA plays the role of the Analytics element in the management loop illustrated by FIG. 2. In FIG. 2, “observation” refers to the observation of the managed networks and services. The observation involves monitoring and collection of events, status and performance of the managed networks and services, and providing the observed/collected data (e.g., performance measurements, Trace/MDT/RLF/RCEF reports, network analytics reports, QoE reports, alarms, etc.). “Analytics” refers to the data analytics for the managed networks and services. The MDA described in the TR plays the role of Analytics in the management loop. The MDA prepares, processes and analyzes the data related to the managed networks and services, and provides the analytics reports for root cause analysis of ongoing issues, prevention of potential issues and prediction of network or service demands. The analytics report contains the description of the issues or predictions with, optionally, a degree of confidence indicator, the possible causes for the issue and the recommended actions. Techniques such as AI and ML (e.g., an ML model) may be utilized with input data including not only the observed data of the managed networks and services, but also the execution reports of actions (taken by the execution step).
The MDA classifies and correlates the input data (current and historical data), learns and recognizes the data patterns, and makes analyses to derive inference, insight and predictions. “Decision” refers to the decision making for the management actions for the managed networks and services. The management actions are decided based on the analytics reports (provided by MDA) and other management data (e.g., historical decisions made), if necessary. The decision may be made by the consumer of MDAS (in the closed management loop), or a human operator (in the open management loop). The decision includes what actions to take, and when to take the actions. “Execution” refers to the execution of the management actions according to the decisions. During the execution step, the actions are carried out to the managed networks and services, and the reports (e.g., notifications, logs) of the executed actions are provided. 3.2. Management Interaction with NWDAF and gNB There are two types of data analytics services: one is the network data analytics service provided by NWDAF, another is the MDAS provided by the 3GPP management system. The MDAS producer provides the analytics data for management purposes based on the data related to different types of NFs, e.g., data reported from the gNB and other core network functions. The MDAS producer may use the analytics result of NWDAF as input. FIG. 3 illustrates an example of coordination between a Network Data Analytics Function (NWDAF), next generation NodeB (gNB), and a Management Data Analytics Service (MDAS) producer for data analytics, according to various embodiments. In FIG. 3, the NWDAF may consume the MDAS for identified scenarios and provide analytics service for 5GC NFs for control purposes. The CN Domain MDAS producer may consume the service provided by NWDAF and other 5GC NFs and provide analytics data for management purposes. The gNB may consume the MDAS for identified scenarios for RAN control purposes.
The RAN Domain MDAS producer may consume the service provided by the gNB and provide analytics data for management purposes. FIG. 4 shows another example of the coordination between NWDAF and MDAS producer for data analytics purposes. Referring to FIG. 4, the NWDAF may consume the MDAS for identified scenarios and provide analytics service for 5GC NFs for control purposes. The gNB may consume the MDAS for identified scenarios for RAN control purposes, and the Domain MDAS producer may consume the service provided by NWDAF, other 5GC NFs and the gNB, and provide analytics data for management purposes. 3.3. MDA Process The MDA brings intelligence and generates value by processing and analysis of large amounts of data, where AI and ML techniques are widely used. This clause describes the MDA process in conjunction with AI and ML techniques. The MDA process needs to be trackable by the MDAS consumer, so that the consumer can build confidence by supervising and, when necessary, intervening in the MDA process. FIG. 5 illustrates an example MDA process. In FIG. 5, the data preparation entity prepares the (raw) data for analysis. The Data Preparation involves collection of the required data and preparation (e.g., formatting, classification, correlation, etc.) of the collected data to make them ready for analysis. The data includes 1) the raw data (e.g., the data without processing) for the network events and/or status (e.g., performance measurements, Trace/MDT/RLF/RCEF reports, QoE reports, alarms, etc.), 2) the execution reports provided by the MDAS consumer for the actions executed under the instruction of MDAS, and 3) the evaluation results (e.g., KPIs) of the actions executed (by the MDAS consumer). The execution reports and evaluation results are useful for the MDA to train the data analysis model (e.g., ML model) to optimize the accuracy of the output (e.g., analytics report) in conjunction with AI and ML techniques. The analysis entity analyzes the input data to provide the analytics report.
The Analysis is supported by a data analysis model (e.g., ML model) in conjunction with AI and ML techniques. The MDA analyses not only the prepared data (see e.g., Data Preparation above), but also takes into account the execution reports and the evaluation results of the actions (executed by the MDAS consumer) to fine-tune the analysis result. The Analysis may be for root cause analysis of ongoing issues, for prevention of potential issues and for prediction of network or service demands. The analytics report provides sufficient information to enable the MDAS consumer to take corresponding actions. The analytics report may include recommended actions for the MDAS consumer to take into account. The execution entity executes actions (e.g., by the MDAS consumer) corresponding to the analytics report. The executed actions may be generated by the MDAS consumer based on the analytics report, or may be recommended by the analytics report. The execution report is provided back to the MDA so that the MDA can take it into account to fine-tune the analysis result. The MDAS consumer could be a SON function, an MF or a human operator. The evaluation entity evaluates the result of the execution. The execution result can be evaluated by performance measurements, KPIs, QoE reports and/or a human-involved test result (e.g., drive test). The MDA uses the evaluation result in conjunction with AI and ML techniques to train the data analysis model to optimize the accuracy of the analysis result. In various embodiments, the MDA may rely on ML technologies, which may need the consumer to be involved to optimize the accuracy of the MDA results. The MDA process in terms of the interaction with the consumer, when utilizing ML technologies, is shown by FIG. 6. FIG. 6 illustrates another example MDA process according to various embodiments. There are two kinds of processes for MDA: the process for ML model training and the process for management data analysis.
In the process for ML model training, the MDA producer classifies the input data for training purposes, trains the ML model and provides the ML training report. The process for ML model training may also get the consumer involved, i.e., allowing the consumer to provide input for ML model training. The ML model training may be performed on an un-trained ML model or a trained ML model. In the process for management data analysis, the MDA producer classifies the analytics input for management data analysis purposes, analyses the data by the trained ML model, and provides the analytics report to the consumer. InFIG.6, the data classification entity handles the data input to the MDA producer, which could be used for ML model training or for the actual management data analysis. The MDA producer classifies the input data into the category for ML model training and the category for management data analysis, and passes the classified data along to the corresponding step for further processing. In the ML model training entity, the MDAS producer trains the ML model (i.e., trains the algorithm of the ML model) to be able to provide the expected training output by analysis of the training input. For training the ML model of the MDA process, the consumer provides the training data, including the training input and the desired output, to the MDAS producer. The data for ML model training may be the training data (including the training input and the expected output) and/or the validation data provided by the consumer. After the ML model training, the MDAS producer provides an ML model training report. The MDAS producer provides the ML model training report as one kind of output data to the consumer. With a trained ML model for MDA, the MDAS producer can analyze the analytics input and generate the analytics report as output data of the analysis to the consumer. In the management data analysis entity, the trained ML model analyses the classified data and generates the management data analytics report(s).
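The data classification step described above, which routes input data either to ML model training or to management data analysis, can be sketched as follows. The "purpose" tag and the function name are illustrative assumptions, not defined by any specification:

```python
# Sketch of the data classification entity: input records to the MDA
# producer are split into a training stream (for ML model training) and
# an analysis stream (for management data analysis). The "purpose" key
# is an illustrative assumption.

def classify_input(records):
    training, analysis = [], []
    for rec in records:
        if rec.get("purpose") == "training":
            training.append(rec)   # passed to the ML model training step
        else:
            analysis.append(rec)   # passed to the management data analysis step
    return training, analysis


train, infer = classify_input([
    {"purpose": "training", "x": 1, "expected": 2},  # training input + desired output
    {"purpose": "analysis", "x": 3},                 # analytics input
])
```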
The consumer (validation entity) may validate the output data provided by the MDAS producer. The output data to be validated may be the analytics report and/or the ML model training report. The consumer may provide the validation data as feedback to the MDAS producer, and the MDAS producer will use the validation data for further ML model training for MDA, together with the historical data that were used to generate the validated output data. 3.4. Relation Between MDA and SON MDA for 5G networks has been defined in [1] and is also discussed in [3] and [4]. It utilizes both management and network data collected through management services and from network functions (including e.g., service, slicing and/or network function related data) and makes the corresponding analytics based on the collected information. These analytics services (e.g., MDAS) can be made available to and consumed by other management and SON functions.FIG.7gives a high-level illustration of potential interaction and utilization of the MDAS, including interactions between MFs, SON functions, and NFs via a Management Data Analytics Function (MDAF). In this example, SON functions may utilise the services provided by the management data analytics (e.g., MDAS) to conduct their functionalities and control actions. Other potential interactions between the entities are not shown byFIG.7. The MDA and SON functions/elements both contribute to the automation and cognition of network and service management & orchestration. With interworking and interaction between MDA and SON, they can jointly bring seamless automation of network and service operations. The relation between MDA and SON is depicted byFIG.8. InFIG.8, the MDAF provides analytics reports which the SON Function can consume. The analytics report may be for an ongoing network or service issue, for prevention of potential issues and for prediction of network or service demands.
The analytics report may also include recommended actions for the corresponding ongoing/potential issue and/or prediction. Additionally or alternatively, the MDAF may provide an analytics report on capacity planning, resource requirements, resource utilization, resource availability and resource reservation proposals to assist the feasibility check by the network and network slice management system before the provisioning of the communication service. The SON Function may consume the analytics reports provided by the MDAF. The SON Function determines the necessary actions for the corresponding ongoing/potential issue and/or prediction provided in the analytics reports consumed from the MDAF. If the analytics reports include recommended actions, the SON Function also takes these recommended actions into account. The SON Function executes the actions, and provides the execution reports (containing the description/logs of the taken actions) for the corresponding ongoing/potential issue and/or prediction. The MDAF consumes the execution reports from the SON Function. The MDAF evaluates (e.g., by performance measurements) the result of the actions executed by the SON Function. The MDAF takes the execution reports consumed from the SON Function and the evaluation results of the executed actions into account to train the analysis model with AI/ML techniques to optimize the accuracy of the next analytics reports. 4. Coverage Related Issues and Root Cause Analysis 4.1. Coverage Issue Analysis 4.1.1. Use Case The coverage issue may cause various UE and network failures and degrade the network performance offered to the UEs. The coverage issue could be weak coverage, a coverage hole, pilot pollution, overshoot coverage, or a DL and UL channel coverage mismatch as described in clause 5.1.1 of 3GPP TS 37.816 v16.0.0 (2019-07-23).
Weak coverage may result in a low success rate of random access, paging, RRC connection establishment and handover, low data throughput, more abnormal releases of RRC connections, QoS flows and/or PDU session resources, degraded DRB retainability, and dissatisfied QoE. A coverage hole is a more severe problem and would further lead to the UE being out of service in the area. The 5G related coverage issue may exist only in 5G (e.g., a 5G-only issue, with good coverage provided by other RATs) or exist in all RATs (e.g., no RAT provides good coverage in the area). Coverage performance should be assured to guarantee user service experience. It is desirable that the coverage issue can be detected by MDA from the various symptoms, together with the geographical and terrain data and the configuration parameters of the RAN. Once the coverage issue is detected, the MDAF provides the analytics report that precisely describes the coverage issue, and the analytics report needs to contain sufficient information to enable the MDAS consumer (e.g., SON CCO function) to take the remedial actions. The MDAS producer (e.g., MDAF) may also provide the recommended actions to solve the identified coverage issue in the analytics report, so that the MDAS consumer can execute the actions accordingly or by taking the recommended actions into account. The MDAS producer (e.g., MDAF) is informed when the actions are taken by the MDAS consumer (or MDAS producer) to solve the coverage issue described in the analytics report, so that the MDAF can start evaluating the result of the executed actions. The MDAS producer (e.g., MDAF) gets the execution reports describing the actions taken by the MDAS consumer, and takes the execution reports into account to fine-tune the accuracy of the future (new or updated) analytics report. The MDAF also provides update(s) of the analytics report to indicate the status change (e.g., solved, mitigated, or deteriorated) of the coverage issue. 4.1.2.
Potential Requirements REQ-COV_ANA-CON-1: The MDAS producer should have a capability to provide the analytics report describing the coverage issue. REQ-COV_ANA-CON-2: The analytics report describing the coverage issue should contain the following information:
The identifier of the coverage issue described in the analytics report;
Indication of whether the coverage issue is weak coverage, a coverage hole, pilot pollution, overshoot coverage, or a DL and UL channel coverage mismatch;
The start time and end time of the coverage issue;
The geographical area and location where the coverage issue exists;
Root cause of the coverage issue;
Whether the coverage issue exists in 5G only or in all RATs;
The cells affected by the coverage issue;
The severity level (e.g., critical, medium, or cleared) of the coverage issue;
The recommended actions to solve the coverage issue.
REQ-RCA_COV-CON-1: The MDAS producer (e.g., MDAF) should be able to collect the following management data related to the coverage issue with indication of location (see Note 1): performance measurements; KPIs; MDT/RLF/RCEF reports; QoE reports. Note 1: The indication of location may vary for different kinds of data. REQ-RCA_COV-CON-2: The MDAS producer (e.g., MDAF) should be able to get the geographical data and terrain data for a deployed RAN (e.g., NG-RAN). REQ-RCA_COV-CON-3: The MDAS producer (e.g., MDAF) should be able to get the configuration parameters related to coverage of the RAN (e.g., NG-RAN). REQ-RCA_COV-CON-4: The MDAS producer (e.g., MDAF) should be able to provide the analytics report describing the coverage issue.
REQ-RCA_COV-CON-5: The MDAS producer (e.g., MDAF) should be able to provide the following information in the analytics report to describe the coverage issue:
The identifier of the coverage issue described in the analytics report;
Indication of whether the coverage issue is weak coverage or a coverage hole;
The start time of the coverage issue;
The geographical area and location where the coverage issue exists;
Whether the coverage issue exists in 5G only or in all RATs;
The cells affected by the coverage issue;
The severity level (e.g., critical, medium) of the coverage issue;
The recommended actions to solve the coverage issue.
REQ-RCA_COV-CON-6: The MDAF should be able to be informed when the actions are taken by the MDAS consumer (e.g., SON CCO function) to solve the coverage issue described in the analytics report. REQ-RCA_COV-CON-7: The MDAF should be able to get the execution reports of the actions taken (by the MDAS consumer, e.g., SON CCO function) to solve the coverage issue described in the analytics report. REQ-RCA_COV-CON-8: The MDAF should be able to provide the update(s) of the analytics report to indicate the status change (e.g., solved, mitigated or deteriorated) of the coverage issue. 4.1.3. Possible Solutions 4.1.3.1. Solution Description The MDAS producer correlates, processes and analyzes the data described in the following subclause within a time period, on a regular basis or triggered by events (e.g., the RLF reports), to identify the coverage issue, and provides the analytics reports to describe the identified coverage issues (which could be new issues or updates of existing issues). 4.1.3.2. Data Required for Coverage Issue Analysis Table 6.1.1.3.2 describes the data required for coverage issue analysis. In embodiments, the data described by Table 6.1.1.3.2 is the input data used by the MDAS.
TABLE 6.1.1.3.2
Performance measurements: Average/distribution of UE reported RSRPs/RSRQs/SINRs of the serving cell when the TA (Timing Advance) or UE rx-tx applied to the UEs is in a specific range; average/distribution of UE reported RSRPs/RSRQs/SINRs of each neighbour cell when the UE reported RSRPs/RSRQs of the serving cell is in a specific range, measured per NCR (neighbour cell relation), per SSB index and per CSI-RS index of each NCR; number of abnormal releases of DRBs, QoS flows, PDU sessions, and UE contexts in the serving cell measured per SSB index and per CSI-RS index.
MDT reports: MDT reports containing RSRPs and RSRQs of the serving cell and neighbour cells reported by each UE with an anonymous id (e.g., C-RNTI) and location information.
RLF reports: RLF reports containing RSRP(s) and RSRQ(s) of the serving cell and neighbour cells reported by each UE with an anonymous id (e.g., C-RNTI) and location information.
RCEF reports: RCEF reports containing RSRP(s) and RSRQ(s) of the serving cell and neighbour cells reported by each UE with an anonymous id (e.g., C-RNTI) and location information.
UE location reports: UE location information provided by the LCS with the anonymous id (e.g., C-RNTI), which can be used to correlate with the MDT/RLF/RCEF reports.
QoE reports: The level of detail of QoE reports required in this solution is for future study (FFS).
Geographical data and terrain data of the RAN: The geographical information (longitude, latitude, altitude) of the deployed RAN (gNBs and eNodeBs, antennas, sector carrier equipment, etc.); the terrain data for the area of the deployed RAN. Editor's note: which MnS provides this kind of data is FFS.
Configuration data: The current NRMs containing the attributes affecting the RAN coverage, such as maximum transmission power of the cell, directions and tilts of the antennas or beams, etc.; the NRM update reports (notifications or logs) containing the creations or changes of the MOIs (Managed Object Instances) affecting the RAN coverage.
4.1.3.3. Analytics Report for Coverage Issue The analytics report describing the coverage issue contains the following information:
Coverage issue identifier: The identifier of the coverage issue;
Coverage issue type indication: Indication that the coverage issue is weak coverage, a coverage hole, pilot pollution, overshoot coverage, or a DL and UL channel coverage mismatch;
Start time: The start time of the coverage issue;
Stop time: The stop time of the coverage issue;
Location: The geographical area and location where the coverage issue exists;
Root cause: Root cause of the coverage issue, e.g., weak transmission power, blocked by constructions, restricted by terrain, etc.;
RAT indication: Indication that the coverage issue exists in 5G only or in all RATs;
Affected objects: The MOIs of the cells affected by the coverage issue;
Severity level: The severity level (e.g., critical, medium, cleared) of the coverage issue (quantification of severity levels is FFS);
Recommended actions: The recommended actions to solve the coverage issue. The recommended action could be re-configuration of coverage related attributes, creation of new cells or beams, or manual operations to add or change the physical units.
5. Example Systems and Device Configurations and Arrangements Referring now toFIG.9, a network900in accordance with various embodiments is illustrated. The network900may operate in a manner consistent with 3GPP technical specifications for Long Term Evolution (LTE) or 5G/NR systems. However, the example embodiments are not limited in this regard and the described embodiments may apply to other networks that benefit from the principles described herein, such as future 3GPP systems, or the like. The network900includes a UE902, which is any mobile or non-mobile computing device designed to communicate with a RAN904via an over-the-air connection. The UE902is communicatively coupled with the RAN904by a Uu interface, which may be applicable to both LTE and NR systems.
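As a non-normative illustration, the coverage issue analytics report fields enumerated in clause 4.1.3.3 could be represented as a simple record type. All field names and example values here are assumptions chosen for readability, not defined by any specification:

```python
# Non-normative sketch of the coverage issue analytics report of clause
# 4.1.3.3; one attribute per listed field. Field names are illustrative.
from dataclasses import dataclass, field
from typing import List


@dataclass
class CoverageIssueReport:
    issue_id: str                     # coverage issue identifier
    issue_type: str                   # e.g., "weak coverage", "coverage hole"
    start_time: str
    stop_time: str
    location: str                     # geographical area and location
    root_cause: str                   # e.g., "weak transmission power"
    rat_indication: str               # "5G only" or "all RATs"
    affected_objects: List[str] = field(default_factory=list)   # cell MOIs
    severity_level: str = "medium"    # critical / medium / cleared
    recommended_actions: List[str] = field(default_factory=list)


report = CoverageIssueReport(
    issue_id="cov-001", issue_type="coverage hole",
    start_time="2021-01-01T00:00Z", stop_time="2021-01-02T00:00Z",
    location="area-7", root_cause="blocked by constructions",
    rat_indication="5G only", affected_objects=["cell-12"],
    severity_level="critical",
    recommended_actions=["re-configure antenna tilt"])
```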
Examples of the UE902include, but are not limited to, a smartphone, tablet computer, wearable computer, desktop computer, laptop computer, in-vehicle infotainment system, in-car entertainment system, instrument cluster, head-up display (HUD) device, onboard diagnostic device, dashtop mobile equipment, mobile data terminal, electronic engine management system, electronic/engine control unit, electronic/engine control module, embedded system, sensor, microcontroller, control module, engine management system, networked appliance, machine-to-machine (M2M) device, device-to-device (D2D) device, machine-type communication (MTC) device, Internet of Things (IoT) device, and/or the like. The network900may include a plurality of UEs902coupled directly with one another via a D2D, ProSe, PC5, and/or sidelink (SL) interface. These UEs902may be M2M/D2D/MTC/IoT devices and/or vehicular systems that communicate using physical SL channels such as, but not limited to, Physical Sidelink Broadcast Channel (PSBCH), Physical Sidelink Discovery Channel (PSDCH), Physical Sidelink Shared Channel (PSSCH), Physical Sidelink Control Channel (PSCCH), Physical Sidelink Feedback Channel (PSFCH), etc. In some embodiments, the UE902may additionally communicate with an AP906via an over-the-air (OTA) connection. The AP906manages a WLAN connection, which may serve to offload some/all network traffic from the RAN904. The connection between the UE902and the AP906may be consistent with any IEEE 802.11 protocol. Additionally, the UE902, RAN904, and AP906may utilize cellular-WLAN aggregation/integration (e.g., LWA/LWIP). Cellular-WLAN aggregation may involve the UE902being configured by the RAN904to utilize both cellular radio resources and WLAN resources. The UE902may be configured to perform signal and/or cell measurements based on a configuration obtained from the network (e.g., RAN904).
The UE902derives cell measurement results by measuring one or multiple beams per cell as configured by the network. For all cell measurement results, the UE902applies layer 3 (L3) filtering before using the measured results for evaluation of reporting criteria and measurement reporting. For cell measurements, the network can configure Reference Signal Received Power (RSRP), Reference Signal Received Quality (RSRQ), and/or Signal-to-Interference plus Noise Ratio (SINR) as a trigger quantity. Reporting quantities can be the same as the trigger quantity or combinations of quantities (e.g., RSRP and RSRQ; RSRP and SINR; RSRQ and SINR; RSRP, RSRQ and SINR). In other embodiments, other measurements and/or combinations of measurements may be used as a trigger quantity, such as those discussed in 3GPP TS 36.214 v15.3.0 (2018-09-27) (hereinafter "[TS36214]"), 3GPP TS 38.215 v15.5.0 (2019-06-24) (hereinafter "[TS38215]"), Institute of Electrical and Electronics Engineers (IEEE), "Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications", IEEE Std 802.11™-2012 (2012) (hereinafter "[IEEE80211]"), and/or the like. The RAN904includes one or more access network nodes (ANs)908. The ANs908terminate air-interface(s) for the UE902by providing access stratum protocols including Radio Resource Control (RRC), Packet Data Convergence Protocol (PDCP), Radio Link Control (RLC), Medium Access Control (MAC), and physical (PHY/L1) layer protocols. In this manner, the AN908enables data/voice connectivity between the CN920and the UE902.
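The layer 3 (L3) filtering mentioned above is specified in 3GPP TS 38.331 as F_n = (1 - a) * F_(n-1) + a * M_n, where M_n is the latest received measurement, F_n is the updated filtered result, and a = 1/2^(k/4) for the configured filterCoefficient k. A minimal sketch of this recursion (illustrative only, not a UE implementation):

```python
# L3 filtering of cell measurements per 3GPP TS 38.331:
#   F_n = (1 - a) * F_(n-1) + a * M_n,  with  a = 1 / 2**(k / 4),
# where k is the configured filterCoefficient. Illustrative sketch only.

def l3_filter(measurements, k):
    a = 1.0 / (2 ** (k / 4.0))
    filtered = None
    for m in measurements:
        # The first measurement initialises the filter state.
        filtered = m if filtered is None else (1 - a) * filtered + a * m
    return filtered


# With k = 0, a = 1 and the filter simply tracks the latest sample;
# with k = 4, a = 0.5 and each new sample is averaged with the old state.
filtered_k0 = l3_filter([-100.0, -96.0, -98.0], k=0)
filtered_k4 = l3_filter([-100.0, -96.0, -98.0], k=4)
```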
The UE902can be configured to communicate using OFDM communication signals with other UEs902or with any of the ANs908over a multicarrier communication channel in accordance with various communication techniques, such as, but not limited to, an OFDMA communication technique (e.g., for DL communications) or an SC-FDMA communication technique (e.g., for UL and SL communications), although the scope of the embodiments is not limited in this respect. The OFDM signals comprise a plurality of orthogonal subcarriers. The ANs908may be a macrocell base station or a low power base station for providing femtocells, picocells or other like cells having smaller coverage areas, smaller user capacity, or higher bandwidth compared to macrocells; or some combination thereof. In these implementations, an AN908may be referred to as a BS, gNB, RAN node, eNB, ng-eNB, NodeB, RSU, TRxP, etc. One example implementation is a "CU/DU split" architecture where the ANs908are embodied as a gNB-Central Unit (CU) that is communicatively coupled with one or more gNB-Distributed Units (DUs), where each DU may be communicatively coupled with one or more Radio Units (RUs) (also referred to as RRHs, RRUs, or the like) (see e.g., 3GPP TS 38.401 v15.7.0 (2020-01-09)). In some implementations, the one or more RUs may be individual RSUs. In some implementations, the CU/DU split may include an ng-eNB-CU and one or more ng-eNB-DUs instead of, or in addition to, the gNB-CU and gNB-DUs, respectively. The ANs908employed as the CU may be implemented in a discrete device or as one or more software entities running on server computers as part of, for example, a virtual network including a virtual Base Band Unit (BBU) or BBU pool, cloud RAN (CRAN), Radio Equipment Controller (REC), Radio Cloud Center (RCC), centralized RAN (C-RAN), virtualized RAN (vRAN), and/or the like (although these terms may refer to different implementation concepts).
Any other types of architectures, arrangements, and/or configurations can be used. The plurality of ANs may be coupled with one another via an X2 interface (if the RAN904is an LTE RAN or Evolved Universal Terrestrial Radio Access Network (E-UTRAN)910) or an Xn interface (if the RAN904is an NG-RAN914). The X2/Xn interfaces, which may be separated into control/user plane interfaces in some embodiments, may allow the ANs to communicate information related to handovers, data/context transfers, mobility, load management, interference coordination, etc. The ANs of the RAN904may each manage one or more cells, cell groups, component carriers, etc. to provide the UE902with an air interface for network access. The UE902may be simultaneously connected with a plurality of cells provided by the same or different ANs908of the RAN904. For example, the UE902and RAN904may use carrier aggregation (CA) to allow the UE902to connect with a plurality of component carriers, each corresponding to a PCell or SCell. A PCell is an MCG cell, operating on a primary frequency, in which the UE902performs an initial connection establishment procedure and/or initiates a connection re-establishment procedure. An SCell is a cell providing additional radio resources on top of a Special Cell (SpCell) when the UE902is configured with CA. In CA, two or more Component Carriers (CCs) are aggregated. The UE902may simultaneously receive or transmit on one or multiple CCs depending on its capabilities. A UE902with single timing advance capability for CA can simultaneously receive and/or transmit on multiple CCs corresponding to multiple serving cells sharing the same timing advance (multiple serving cells grouped in one timing advance group (TAG)). A UE902with multiple timing advance capability for CA can simultaneously receive and/or transmit on multiple CCs corresponding to multiple serving cells with different timing advances (multiple serving cells grouped in multiple TAGs).
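The grouping of serving cells into timing advance groups (TAGs) described above can be sketched as follows: cells sharing the same timing advance fall into one TAG, so a UE with multiple-TA capability ends up with several groups. The cell names and TA values are illustrative assumptions:

```python
# Sketch of grouping serving cells into TAGs: the key is the timing
# advance value, the value is the list of cells that share it.
# Cell names and timing advance values are illustrative only.

def group_into_tags(cells):
    tags = {}
    for cell, timing_advance in cells.items():
        tags.setdefault(timing_advance, []).append(cell)
    return tags


# Two cells share TA 10 (one TAG); a third cell has its own TAG.
tags = group_into_tags({"PCell": 10, "SCell1": 10, "SCell2": 24})
```

A single-TA-capable UE corresponds to the degenerate case where this mapping contains exactly one group.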
The NG-RAN914ensures that each TAG contains at least one serving cell; a non-CA capable UE902can receive on a single CC and transmit on a single CC corresponding to one serving cell only (one serving cell in one TAG). CA is supported for both contiguous and non-contiguous CCs. When CA is deployed, frame timing and SFN are aligned across cells that can be aggregated, or an offset in multiples of slots between the PCell/PSCell and an SCell is configured to the UE902. In some implementations, the maximum number of configured CCs for a UE902is 16 for DL and 16 for UL. In Dual Connectivity (DC) scenarios, a first AN908may be a master node that provides a Master Cell Group (MCG) and a second AN908may be a secondary node that provides a Secondary Cell Group (SCG). The first and second ANs908may be any combination of eNB, gNB, ng-eNB, etc. The MCG is a subset of serving cells comprising the PCell and zero or more SCells. The SCG is a subset of serving cells comprising the PSCell and zero or more SCells. As alluded to previously, DC operation involves the use of PSCells and SpCells. A PSCell is an SCG cell in which the UE902performs random access (RA) when performing a reconfiguration with Sync procedure; for DC operation, an SpCell is the PCell of the MCG or the PSCell of the SCG; otherwise, the term SpCell refers to the PCell. Additionally, the PCell, PSCells, SpCells, and the SCells can operate in the same frequency range (e.g., FR1 or FR2), or the PCell, PSCells, SpCells, and the SCells can operate in different frequency ranges. In one example, the PCell may operate in a sub-6 GHz frequency range/band and the SCell can operate at frequencies above 24.25 GHz (e.g., FR2). The RAN904may provide the air interface over a licensed spectrum or an unlicensed spectrum. To operate in the unlicensed spectrum, the nodes may use LAA, eLAA, and/or feLAA mechanisms based on CA technology with PCells/SCells.
Prior to accessing the unlicensed spectrum, the nodes may perform medium/carrier-sensing operations based on, for example, a listen-before-talk (LBT) protocol. In some embodiments, the RAN904may be an E-UTRAN910with one or more eNBs912. The E-UTRAN910provides an LTE air interface (Uu) with the following characteristics: subcarrier spacing (SCS) of 15 kHz; cyclic prefix (CP)-OFDM waveform for DL and SC-FDMA waveform for UL; turbo codes for data and TBCC for control; etc. The LTE air interface may rely on channel state information reference signals (CSI-RS) for channel state information (CSI) acquisition and beam management; Physical Downlink Shared Channel (PDSCH)/Physical Downlink Control Channel (PDCCH) Demodulation Reference Signal (DMRS) for PDSCH/PDCCH demodulation; and cell-specific reference signals (CRS) for cell search and initial acquisition, channel quality measurements, and channel estimation for coherent demodulation/detection at the UE. The LTE air interface may operate on sub-6 GHz bands. In some embodiments, the RAN904may be a next generation (NG)-RAN914with one or more gNBs916and/or one or more ng-eNBs918. The gNB916connects with 5G-enabled UEs902using a 5G NR interface. The gNB916connects with a 5GC940through an NG interface, which includes an N2 interface or an N3 interface. The ng-eNB918also connects with the 5GC940through an NG interface, but may connect with a UE902via the Uu interface. The gNB916and the ng-eNB918may connect with each other over an Xn interface. In some embodiments, the NG interface may be split into two parts, an NG user plane (NG-U) interface, which carries traffic data between the nodes of the NG-RAN914and a UPF (e.g., N3 interface), and an NG control plane (NG-C) interface, which is a signaling interface between the nodes of the NG-RAN914and an AMF (e.g., N2 interface).
The NG-RAN914may provide a 5G-NR air interface (which may also be referred to as a Uu interface) with the following characteristics: variable SCS; CP-OFDM for DL, CP-OFDM and DFT-s-OFDM for UL; polar, repetition, simplex, and Reed-Muller codes for control and LDPC for data. The 5G-NR air interface may rely on CSI-RS and PDSCH/PDCCH DMRS similar to the LTE air interface. The 5G-NR air interface may not use a CRS, but may use Physical Broadcast Channel (PBCH) DMRS for PBCH demodulation; Phase Tracking Reference Signals (PTRS) for phase tracking for PDSCH; and tracking reference signals for time tracking. The 5G-NR air interface may operate on FR1 bands that include sub-6 GHz bands or FR2 bands that include bands from 24.25 GHz to 52.6 GHz. The 5G-NR air interface may include a Synchronization Signal Block (SSB) that is an area of a DL resource grid that includes Primary Synchronization Signal (PSS)/Secondary Synchronization Signal (SSS)/PBCH. The 5G-NR air interface may utilize bandwidth parts (BWPs) for various purposes. For example, a BWP can be used for dynamic adaptation of the SCS. A BWP is a subset of contiguous common resource blocks, defined in clause 4.4.4.3 of 3GPP TS 38.211, for a given numerology on a given carrier. For example, the UE902can be configured with multiple BWPs where each BWP configuration has a different SCS. When a BWP change is indicated to the UE902, the SCS of the transmission is changed as well. Another use case example of BWP is related to power saving. In particular, multiple BWPs can be configured for the UE902with different amounts of frequency resources (e.g., PRBs) to support data transmission under different traffic loading scenarios. A BWP containing a smaller number of PRBs can be used for data transmission with a small traffic load while allowing power saving at the UE902and in some cases at the gNB916. A BWP containing a larger number of PRBs can be used for scenarios with higher traffic load.
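The BWP-based power saving described above can be sketched as a simple selection between a narrow and a wide BWP. The PRB counts, SCS values, and the 50 Mbps threshold below are illustrative assumptions only, not values mandated by any specification:

```python
# Sketch of BWP selection for power saving: a BWP with few PRBs serves
# light traffic (saving UE power), a wide BWP serves heavy traffic.
# All BWP definitions and the threshold are illustrative assumptions.

BWPS = [
    {"id": 0, "prbs": 24,  "scs_khz": 15},   # narrow BWP: power saving
    {"id": 1, "prbs": 273, "scs_khz": 30},   # wide BWP: high traffic load
]

def select_bwp(traffic_load_mbps, threshold_mbps=50.0):
    # Switch to the wide BWP only when the offered load justifies it.
    return BWPS[1] if traffic_load_mbps > threshold_mbps else BWPS[0]


light = select_bwp(5.0)     # light load stays on the narrow BWP
heavy = select_bwp(200.0)   # heavy load moves to the wide BWP
```

Note that in this sketch a BWP switch also implies an SCS change (15 kHz to 30 kHz), mirroring the dynamic SCS adaptation use case mentioned above.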
The RAN904is communicatively coupled to CN920, which includes network elements and/or network functions (NFs) to provide various functions to support data and telecommunications services to customers/subscribers (e.g., UE902). The network elements and/or NFs may be implemented by one or more servers921,941. The components of the CN920may be implemented in one physical node or separate physical nodes. In some embodiments, NFV may be utilized to virtualize any or all of the functions provided by the network elements of the CN920onto physical compute/storage resources in servers, switches, etc. A logical instantiation of the CN920may be referred to as a network slice, and a logical instantiation of a portion of the CN920may be referred to as a network sub-slice. The CN920may be an LTE CN922(also referred to as an Evolved Packet Core (EPC)922). The EPC922may include MME, SGW, SGSN, HSS, PGW, PCRF, and/or other NFs coupled with one another over various interfaces (or “reference points”) (not shown). The CN920may be a 5GC940including an AUSF, AMF, SMF, UPF, NSSF, NEF, NRF, PCF, UDM, AF, and/or other NFs coupled with one another over various service-based interfaces and/or reference points (see e.g.,FIGS.10and11). The 5GC940may enable edge computing by selecting operator/3rd party services to be geographically close to a point that the UE902is attached to the network. This may reduce latency and load on the network. In edge computing implementations, the 5GC940may select a UPF close to the UE902and execute traffic steering from the UPF to DN936via the N6 interface. This may be based on the UE subscription data, UE location, and information provided by the AF, which allows the AF to influence UPF (re)selection and traffic routing. The data network (DN)936may represent various network operator services, Internet access, or third party services that may be provided by one or more servers including, for example, application (app)/content server938. 
The DN936may be an operator external public or private PDN, or an intra-operator packet data network, for example, for provision of IMS services. In this embodiment, the server938can be coupled to an IMS via an S-CSCF or the I-CSCF. In some implementations, the DN936may represent one or more local area DNs (LADNs), which are DNs936(or DN names (DNNs)) that is/are accessible by a UE902in one or more specific areas. Outside of these specific areas, the UE902is not able to access the LADN/DN936. Additionally or alternatively, the DN936may be an Edge DN936, which is a (local) Data Network that supports the architecture for enabling edge applications. In these embodiments, the app server938may represent the physical hardware systems/devices providing app server functionality and/or the application software resident in the cloud or at an edge compute node that performs server function(s). In some embodiments, the app/content server938provides an edge hosting environment that provides the support required for an Edge Application Server's execution. In some embodiments, the 5GS can use one or more edge compute nodes to provide an interface and offload processing of wireless communication traffic. In these embodiments, the edge compute nodes may be included in, or co-located with, one or more RANs910,914. For example, the edge compute nodes can provide a connection between the RAN914and a UPF in the 5GC940. The edge compute nodes can use one or more NFV instances instantiated on virtualization infrastructure within the edge compute nodes to process wireless connections to and from the RAN914and a UPF1002. In some implementations, the system900may include an SMSF, which is responsible for SMS subscription checking and verification, and relaying SM messages to/from the UE902to/from other entities, such as an SMS-GMSC/IWMSC/SMS-router.
The SMSF may also interact with the AMF and UDM for a notification procedure that the UE902is available for SMS transfer (e.g., set a UE not reachable flag, and notify the UDM when the UE902is available for SMS). Self-Organizing Network (SON) is introduced to reduce the operating expenditure (OPEX) associated with the management of deployments with a relatively large number of nodes. Automation of some network planning, configuration and optimisation processes via the use of SON functions can help the network operator to reduce OPEX by reducing manual involvement in such tasks. There are four different architectures that are possible for implementing various SON use cases including: Centralised SON (where SON algorithms are executed in the OAM system), and Distributed SON (where SON algorithms are executed at the Network Element level). Centralised SON has two variants: NM-Centralised SON (where SON algorithms are executed at the Network Management (NM) level), and EM-Centralised SON (where SON algorithms are executed at the Element Management (EM) level). 
The SON functions may include, for example, Load Balancing Optimization (LBO), Handover (HO) Parameter Optimization, Capacity and Coverage Optimization (CCO) including Centralized CCO and Distributed CCO, Random Access (RACH) Optimization, SON coordination, self-establishment and self-configuration of 3GPP NFs including automated software management, Automatic Neighbour Relation (ANR) management (including automatic X2 and Xn setup), PCI Configuration, automatic radio configuration data (ARCF), interference coordination including inter-cell interference coordination, self-healing, SON for Active Antenna System (AAS)-based Deployments, trace and MDT reporting, Mobility Robustness Optimization (MRO), Energy Saving Management (ESM) (including intra-LTE Energy Saving in LTE networks, intra-5G Energy Saving in 5G networks, and Inter-RAT Energy Saving), NSI resource allocation optimization, Automatic NSI creation, Multi-vendor Plug and Play of NFs, optimization of the quality of communication services, cross-slice network resource optimization, multi-aspect/multi-domain resource optimization, automatic CSI creation as discussed in [2], [32500], 3GPP TS 32.522 11.7.0 (2013-09-20), and/or the like. FIGS.10and11illustrate example system architectures1000and1100(collectively “5GC1000”) of a 5GC such as CN940ofFIG.9, in accordance with various embodiments. In particular,FIG.10shows an exemplary 5G system architecture1000in a reference point representation where interactions between NFs are represented by corresponding point-to-point reference points Ni, andFIG.11illustrates an exemplary 5G system architecture1100in a service-based representation where interactions between NFs are represented by corresponding service-based interfaces. 
The system1000is shown to include a UE1001, which may be the same or similar to the UEs902discussed previously; a (R)AN1010, which may be the same or similar to the AN908discussed previously; and a DN1003, which may be, for example, operator services, Internet access or 3rd party services, and may correspond with a Packet Data Network in LTE systems; and a 5GC1020. The 5GC1020may include an Access and Mobility Management Function (AMF)1021; an Authentication Server Function (AUSF)1022; a Session Management Function (SMF)1024; a Network Exposure Function (NEF)1023; a Policy Control Function (PCF)1026; an NF Repository Function (NRF)1025; a Unified Data Management (UDM)1027; an Application Function (AF)1028; a User Plane Function (UPF)1002; a Network Slice Selection Function (NSSF)1029; and a Service Communication Proxy (SCP)1030. The reference point representation ofFIG.10shows various interactions between corresponding NFs. For example,FIG.10illustrates the following reference points: N1 (between the UE1001and the AMF1021), N2 (between the RAN1010and the AMF1021), N3 (between the RAN1010and the UPF1002), N4 (between the SMF1024and the UPF1002), N5 (between the PCF1026and the AF1028), N6 (between the UPF1002and the DN1003), N7 (between the SMF1024and the PCF1026), N8 (between the UDM1027and the AMF1021), N9 (between two UPFs1002), N10 (between the UDM1027and the SMF1024), N11 (between the AMF1021and the SMF1024), N12 (between the AUSF1022and the AMF1021), N13 (between the AUSF1022and the UDM1027), N14 (between two AMFs1021), N15 (between the PCF1026and the AMF1021in case of a non-roaming scenario, or between the PCF1026and a visited network and AMF1021in case of a roaming scenario), N16 (between two SMFs; not shown), and N22 (between the AMF1021and the NSSF1029). Other reference point representations not shown inFIG.10can also be used. The service-based representation ofFIG.11represents NFs within the control plane that enable other authorized NFs to access their services. 
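By way of a non-limiting illustration only, the point-to-point wiring enumerated above can be captured as a simple lookup table; the sketch below is not part of any 3GPP API, and the helper name is hypothetical.

```python
# Illustrative sketch: a lookup table for a subset of the FIG. 10
# reference points described above. Names are hypothetical, not a 3GPP API.
REFERENCE_POINTS = {
    "N1": ("UE", "AMF"),
    "N2": ("RAN", "AMF"),
    "N3": ("RAN", "UPF"),
    "N4": ("SMF", "UPF"),
    "N6": ("UPF", "DN"),
    "N7": ("SMF", "PCF"),
    "N8": ("UDM", "AMF"),
    "N11": ("AMF", "SMF"),
    "N22": ("AMF", "NSSF"),
}

def endpoints(ref_point: str) -> tuple:
    """Return the pair of NFs joined by a given reference point."""
    return REFERENCE_POINTS[ref_point]
```

For example, resolving "N4" yields the SMF/UPF pair, consistent with the listing above.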
In this regard, the 5G system architecture1100can include the following service-based interfaces: Namf (a service-based interface exhibited by the AMF1021), Nsmf (a service-based interface exhibited by the SMF1024), Nnef (a service-based interface exhibited by the NEF1023), Npcf (a service-based interface exhibited by the PCF1026), Nudm (a service-based interface exhibited by the UDM1027), Naf (a service-based interface exhibited by the AF1028), Nnrf (a service-based interface exhibited by the NRF1025), Nnssf (a service-based interface exhibited by the NSSF1029), and Nausf (a service-based interface exhibited by the AUSF1022). Other service-based interfaces (e.g., Nudr, N5g-eir, and Nudsf) not shown inFIG.11can also be used. In embodiments, the NEF1023can provide an interface to Edge node1036, which can be used to process wireless connections with the RAN1010. The AUSF1022stores data for authentication of UE1001and handles authentication-related functionality. The AUSF1022may facilitate a common authentication framework for various access types. The AUSF1022may communicate with the AMF1021via an N12 reference point between the AMF1021and the AUSF1022; and may communicate with the UDM1027via an N13 reference point between the UDM1027and the AUSF1022. Additionally, the AUSF1022may exhibit an Nausf service-based interface. The AMF1021allows other functions of the 5GC1000to communicate with the UE1001and the RAN1010and to subscribe to notifications about mobility events with respect to the UE1001. The AMF1021is also responsible for registration management (e.g., for registering UE1001), connection management, reachability management, mobility management, lawful interception of AMF-related events, and access authentication and authorization. The AMF1021provides transport for SM messages between the UE1001and the SMF1024, and acts as a transparent proxy for routing SM messages. The AMF1021also provides transport for SMS messages between the UE1001and an SMSF. 
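As the list above suggests, each service-based interface name is simply "N" followed by the exhibiting NF's abbreviation in lower case. A minimal, non-limiting sketch of that naming convention (the helper name is hypothetical):

```python
def sbi_name(nf: str) -> str:
    """Derive the service-based interface name exhibited by an NF:
    'N' followed by the NF abbreviation in lower case (e.g. AMF -> Namf).
    Illustrative convention drawn from the interface list above; not a
    normative 3GPP function."""
    return "N" + nf.lower()
```

For instance, this yields Namf for the AMF and Nnssf for the NSSF, matching the enumeration above.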
The AMF1021interacts with the AUSF1022and the UE1001to perform various security anchor and context management functions. Furthermore, the AMF1021is a termination point of a RAN-CP interface, which includes the N2 reference point between the RAN1010and the AMF1021. The AMF1021is also a termination point of Non-Access Stratum (NAS) (N1) signaling, and performs NAS ciphering and integrity protection. The AMF1021also supports NAS signaling with the UE1001over an N3IWF interface. The N3IWF provides access to untrusted entities. The N3IWF may be a termination point for the N2 interface between the (R)AN1010and the AMF1021for the control plane, and may be a termination point for the N3 reference point between the (R)AN1010and the UPF1002for the user plane. As such, the N3IWF handles N2 signalling from the SMF1024and the AMF1021for PDU sessions and QoS, encapsulates/de-encapsulates packets for IPsec and N3 tunnelling, marks N3 user-plane packets in the uplink, and enforces QoS corresponding to N3 packet marking taking into account QoS requirements associated with such marking received over N2. The N3IWF may also relay UL and DL control-plane NAS signalling between the UE1001and the AMF1021via an N1 reference point between the UE1001and the AMF1021, and relay uplink and downlink user-plane packets between the UE1001and the UPF1002. The N3IWF also provides mechanisms for IPsec tunnel establishment with the UE1001. The AMF1021may exhibit an Namf service-based interface, and may be a termination point for an N14 reference point between two AMFs1021and an N17 reference point between the AMF1021and a 5G-EIR (not shown byFIG.10). 
The SMF1024is responsible for SM (e.g., session establishment, tunnel management between UPF1002and (R)AN1010); UE IP address allocation and management (including optional authorization); selection and control of UP function; configuring traffic steering at UPF1002to route traffic to the proper destination; termination of interfaces toward policy control functions; controlling part of policy enforcement, charging, and QoS; lawful intercept (for SM events and interface to LI system); termination of SM parts of NAS messages; downlink data notification; initiating AN specific SM information, sent via AMF1021over N2 to (R)AN1010; and determining the SSC mode of a session. SM refers to management of a PDU session, and a PDU session or “session” refers to a PDU connectivity service that provides or enables the exchange of PDUs between the UE1001and the DN1003. The UPF1002acts as an anchor point for intra-RAT and inter-RAT mobility, an external PDU session point of interconnect to data network1003, and a branching point to support multi-homed PDU sessions. The UPF1002also performs packet routing and forwarding and packet inspection, enforces the user plane part of policy rules, lawfully intercepts packets (UP collection), performs traffic usage reporting, performs QoS handling for a user plane (e.g., packet filtering, gating, UL/DL rate enforcement), performs uplink traffic verification (e.g., SDF-to-QoS flow mapping), performs transport level packet marking in the uplink and downlink, and performs downlink packet buffering and downlink data notification triggering. UPF1002may include an uplink classifier to support routing traffic flows to a data network. The NSSF1029selects a set of network slice instances serving the UE1001. The NSSF1029also determines the allowed NSSAI and the mapping to the subscribed S-NSSAIs, if needed. The NSSF1029also determines an AMF set to be used to serve the UE1001, or a list of candidate AMFs1021based on a suitable configuration and possibly by querying the NRF1025. 
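By way of a non-limiting illustration of the SDF-to-QoS-flow mapping mentioned above, each QoS flow may carry a packet filter evaluated in precedence order, with matching uplink packets marked with that flow's QFI. The class and function names below are hypothetical, and the single-field filter is a deliberate simplification of a real SDF template:

```python
# Simplified sketch of uplink SDF-to-QoS-flow mapping at the UPF.
# Names are hypothetical; real filters match full 5-tuples and more.
from dataclasses import dataclass

@dataclass
class QosFlow:
    qfi: int         # QoS Flow Identifier
    dst_port: int    # single-field filter, for illustration only
    precedence: int  # lower value = evaluated first

def classify(dst_port: int, flows: list) -> int:
    """Return the QFI of the first matching flow (by precedence),
    or an assumed default QFI if nothing matches."""
    for f in sorted(flows, key=lambda f: f.precedence):
        if f.dst_port == dst_port:
            return f.qfi
    return 9  # assumed default-flow QFI for unmatched traffic

flows = [QosFlow(qfi=1, dst_port=5060, precedence=10),
         QosFlow(qfi=2, dst_port=443, precedence=20)]
```

Here a packet to port 5060 would be marked with QFI 1, while unmatched traffic falls back to the assumed default flow.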
The selection of a set of network slice instances for the UE1001may be triggered by the AMF1021with which the UE1001is registered by interacting with the NSSF1029; this may lead to a change of AMF1021. The NSSF1029interacts with the AMF1021via an N22 reference point; and may communicate with another NSSF in a visited network via an N31 reference point (not shown). The NEF1023securely exposes services and capabilities provided by 3GPP NFs for third party, internal exposure/re-exposure, AFs1028, edge computing or fog computing systems (e.g., edge compute node1036), etc. In such embodiments, the NEF1023may authenticate, authorize, or throttle the AFs. The NEF1023may also translate information exchanged with the AF1028and information exchanged with internal network functions. For example, the NEF1023may translate between an AF-Service-Identifier and internal 5GC information. The NEF1023may also receive information from other NFs based on exposed capabilities of other NFs. This information may be stored at the NEF1023as structured data, or at a data storage NF using standardized interfaces. The stored information can then be re-exposed by the NEF1023to other NFs and AFs1028, or used for other purposes such as analytics. The NRF1025supports service discovery functions, receives NF discovery requests from NF instances, and provides information of the discovered NF instances to the requesting NF instances. The NRF1025also maintains information of available NF instances and their supported services. The NRF1025also supports service discovery functions, wherein the NRF1025receives an NF Discovery Request from an NF instance or an SCP (not shown), and provides information of the discovered NF instances to the NF instance or SCP. The PCF1026provides policy rules to control plane functions to enforce them, and may also support a unified policy framework to govern network behavior. 
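As a non-limiting illustration of the NSSF's determination of the allowed NSSAI described above, the allowed set may be viewed as the requested S-NSSAIs restricted to those the UE is subscribed to. The function name and slice labels are hypothetical; a real NSSF also weighs slice availability and operator policy:

```python
def allowed_nssai(requested: list, subscribed: set) -> list:
    """Illustrative sketch: restrict the requested S-NSSAIs to the
    subscribed ones, preserving the requested order. Simplified; real
    NSSF selection also considers availability and operator policy."""
    return [s for s in requested if s in subscribed]
```

For a UE requesting eMBB, URLLC and mIoT slices while subscribed only to eMBB and mIoT, the sketch returns the eMBB and mIoT entries.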
The PCF1026may also implement a front end to access subscription information relevant for policy decisions in a UDR of the UDM1027. In addition to communicating with functions over reference points as shown, the PCF1026exhibits an Npcf service-based interface. The UDM1027handles subscription-related information to support the network entities' handling of communication sessions, and stores subscription data of UE1001. For example, subscription data may be communicated via an N8 reference point between the UDM1027and the AMF1021. The UDM1027may include two parts, an application front end and a UDR. The UDR may store subscription data and policy data for the UDM1027and the PCF1026, and/or structured data for exposure and application data (including PFDs for application detection, application request information for multiple UEs1001) for the NEF1023. The Nudr service-based interface may be exhibited by the UDR to allow the UDM1027, PCF1026, and NEF1023to access a particular set of the stored data, as well as to read, update (e.g., add, modify), delete, and subscribe to notification of relevant data changes in the UDR. The UDM may include a UDM-FE, which is in charge of processing credentials, location management, subscription management and so on. Several different front ends may serve the same user in different transactions. The UDM-FE accesses subscription information stored in the UDR and performs authentication credential processing, user identification handling, access authorization, registration/mobility management, and subscription management. In addition to communicating with other NFs over reference points as shown, the UDM1027may exhibit the Nudm service-based interface. The AF1028provides application influence on traffic routing, provides access to the NEF1023, and interacts with the policy framework for policy control. The AF1028may influence UPF1002(re)selection and traffic routing. 
Based on operator deployment, when AF1028is considered to be a trusted entity, the network operator may permit AF1028to interact directly with relevant NFs. Additionally, the AF1028may be used for edge computing implementations. The 5GC1000may enable edge computing by selecting operator/3rd party services to be geographically close to a point that the UE1001is attached to the network. This may reduce latency and load on the network. In edge computing implementations, the 5GC1000may select a UPF1002close to the UE1001and execute traffic steering from the UPF1002to DN1003via the N6 interface. This may be based on the UE subscription data, UE location, and information provided by the AF1028, which allows the AF1028to influence UPF (re)selection and traffic routing. The DN1003may represent various network operator services, Internet access, or third party services that may be provided by one or more servers including, for example, application (app)/content server1040. The DN1003may be an operator external public, a private PDN, or an intra-operator packet data network, for example, for provision of IMS services. In this embodiment, the app server1040can be coupled to an IMS via an S-CSCF or the I-CSCF. In some implementations, the DN1003may represent one or more local area DNs (LADNs), which are DNs1003(or DN names (DNNs)) that is/are accessible by a UE1001in one or more specific areas. Outside of these specific areas, the UE1001is not able to access the LADN/DN1003. In some implementations, the DN1003may be, or include, one or more edge compute nodes1036. Additionally or alternatively, the DN1003may be an Edge DN1003, which is a (local) Data Network that supports the architecture for enabling edge applications. In these embodiments, the app server1040may represent the physical hardware systems/devices providing app server functionality and/or the application software resident in the cloud or at an edge compute node1036that performs server function(s). 
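As a non-limiting sketch of the proximity-based UPF selection described above, the 5GC may be thought of as picking the candidate UPF whose deployment location is nearest to the UE's attachment point. The coordinates, instance names, and distance metric below are purely hypothetical illustration:

```python
# Illustrative sketch only: choose the UPF geographically closest to the
# UE's attachment point. Real selection also uses subscription data,
# AF-provided information, and operator policy (as described above).
import math

def select_upf(ue_pos: tuple, upfs: dict) -> str:
    """upfs: instance name -> (x, y) location; returns nearest UPF."""
    return min(upfs, key=lambda name: math.dist(ue_pos, upfs[name]))
```

With a UE at the origin and candidates "upf-central" at (100, 100) and "upf-edge" at (1, 2), the sketch selects the edge instance.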
In some embodiments, the app/content server1040provides an edge hosting environment that provides support required for Edge Application Server's execution. In some embodiments, the 5GS can use one or more edge compute nodes1036to provide an interface and offload processing of wireless communication traffic. In these embodiments, the edge compute nodes1036may be included in, or co-located with one or more RANs1010. For example, the edge compute nodes1036can provide a connection between the RAN1010and UPF1002in the 5GC1000. The edge compute nodes1036can use one or more NFV instances instantiated on virtualization infrastructure within the edge compute nodes1036to process wireless connections to and from the RAN1010and UPF1002. In embodiments, the edge node1036may include or be part of an edge system (or edge network). The edge node1036may also be referred to as “edge hosts1036” or “edge servers1036.” The edge system includes a collection of edge servers1036and edge management systems (not shown) necessary to run edge computing applications within an operator network or a subset of an operator network. The edge servers1036are physical computer systems that may include an edge platform and/or virtualization infrastructure, and provide compute, storage, and network resources to edge computing applications. Each of the edge servers1036is disposed at an edge of a corresponding access network, and is arranged to provide computing resources and/or various services (e.g., computational task and/or workload offloading, cloud-computing capabilities, IT services, and other like resources and/or services as discussed herein) in relatively close proximity to UEs901,1001. The VI of the edge servers1036provides virtualized environments and virtualized resources for the edge hosts, and the edge computing applications may run as VMs and/or application containers on top of the VI. 
Various edge computing/networking technologies in various combinations and layouts of devices located at the edge of a network may be used. Examples of such edge computing/networking technologies that may implement the embodiments herein include ETSI MEC; CDNs; Mobility Service Provider (MSP) edge computing and/or Mobility as a Service (MaaS) provider systems (e.g., used in AECC architectures); Nebula edge-cloud systems; Fog computing systems; Cloudlet edge-cloud systems; Mobile Cloud Computing (MCC) systems; Central Office Re-architected as a Datacenter (CORD), mobile CORD (M-CORD) and/or Converged Multi-Access and Core (COMAC) systems; and/or the like. Further, the techniques disclosed herein may relate to other IoT edge network systems and configurations, and other intermediate processing entities and architectures may also be used to practice the embodiments herein. The SCP1030(or individual instances of the SCP1030) supports indirect communication (see e.g., 3GPP TS 23.501 section 7.1.1); delegated discovery (see e.g., 3GPP TS 23.501 section 7.1.1); message forwarding and routing to destination NF/NF service(s), communication security (e.g., authorization of the NF Service Consumer to access the NF Service Producer API) (see e.g., 3GPP TS 33.501), load balancing, monitoring, overload control, etc.; and discovery and selection functionality for UDM(s), AUSF(s), UDR(s), PCF(s) with access to subscription data stored in the UDR based on UE's SUPI, SUCI or GPSI (see e.g., 3GPP TS 23.501 section 6.3). Load balancing, monitoring, and overload control functionality provided by the SCP may be implementation specific. The SCP1030may be deployed in a distributed manner. More than one SCP1030can be present in the communication path between various NF Services. The SCP1030, although not an NF instance, can also be deployed distributed, redundant, and scalable. 
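Because the SCP's load-balancing behaviour is implementation specific, as noted above, one non-limiting realization is to route each request to the least-loaded discovered NF producer instance. The function name, instance names, and load scale below are hypothetical illustration only:

```python
# Illustrative sketch of one possible SCP load-balancing policy:
# among discovered NF producer instances, pick the one reporting the
# lowest current load. Names and load values are hypothetical.
def pick_producer(instances: dict) -> str:
    """instances: instance name -> reported load (0..100);
    returns the name of the chosen (least-loaded) instance."""
    return min(instances, key=instances.get)
```

Given three SMF instances with loads 70, 15 and 40, the sketch routes the request to the instance reporting load 15.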
The system architecture1000/1100may also include other elements that are not shown byFIG.10or11, such as a Data Storage system/architecture, a 5G-EIR, a SEPP, and the like. The Data Storage system may include a SDSF, an UDSF, and/or the like. Any NF may store and retrieve unstructured data into/from the UDSF (e.g., UE contexts), via an N18 reference point between any NF and the UDSF (not shown byFIG.10). Individual NFs may share a UDSF for storing their respective unstructured data or individual NFs may each have their own UDSF located at or near the individual NFs. Additionally, the UDSF may exhibit an Nudsf service-based interface (not shown byFIG.11). The 5G-EIR may be an NF that checks the status of PEI for determining whether particular equipment/entities are blacklisted from the network; and the SEPP may be a non-transparent proxy that performs topology hiding, message filtering, and policing on inter-PLMN control plane interfaces. In another example, the 5G system architecture1000includes an IP multimedia subsystem (IMS) as well as a plurality of IP multimedia core network subsystem entities, such as call session control functions (CSCFs) (not shown byFIG.10or11). More specifically, the IMS includes a CSCF, which can act as a proxy CSCF (P-CSCF), a serving CSCF (S-CSCF), an emergency CSCF (E-CSCF), or interrogating CSCF (I-CSCF). The P-CSCF can be configured to be the first contact point for the UE1001within the IMS. The S-CSCF can be configured to handle the session states in the network, and the E-CSCF can be configured to handle certain aspects of emergency sessions such as routing an emergency request to the correct emergency center or public safety answering point (PSAP). The I-CSCF can be configured to function as the contact point within an operator's network for all IMS connections destined to a subscriber of that network operator, or a roaming subscriber currently located within that network operator's service area. 
In some aspects, the I-CSCF can be connected to another IP multimedia network, for example, an IMS operated by a different network operator. Additionally, there may be many more reference points and/or service-based interfaces between the NF services in the NFs; however, these interfaces and reference points have been omitted fromFIGS.10and11for clarity. In one example, the CN1020may include an Nx interface, which is an inter-CN interface between the MME and the AMF1021in order to enable interworking between the system1000and an EPC. Other example interfaces/reference points may include an N5g-EIR service-based interface exhibited by a 5G-EIR, an N27 reference point between the NRF in the visited network and the NRF in the home network; and an N31 reference point between the NSSF in the visited network and the NSSF in the home network. FIG.12illustrates an example of infrastructure equipment1200in accordance with various embodiments. The infrastructure equipment1200(or “system1200”) may be implemented as a base station, radio head, RAN node such as the AN908shown and described previously, application server(s)938, and/or any other element/device discussed herein. In other examples, the system1200could be implemented in or by a UE901. The system1200includes application circuitry1205, baseband circuitry1210, one or more radio front end modules (RFEMs)1215, memory circuitry1220, power management integrated circuitry (PMIC)1225, power tee circuitry1230, network controller circuitry1235, network interface connector1240, satellite positioning circuitry1245, and user interface1250. In some embodiments, the device1200may include additional elements such as, for example, memory/storage, display, camera, sensor, or input/output (I/O) interface. In other embodiments, the components described below may be included in more than one device. For example, said circuitries may be separately included in more than one device for CRAN, vBBU, or other like implementations. 
Application circuitry1205includes circuitry such as, but not limited to one or more processors (or processor cores), cache memory, and one or more of low drop-out voltage regulators (LDOs), interrupt controllers, serial interfaces such as SPI, I2C or universal programmable serial interface module, real time clock (RTC), timer-counters including interval and watchdog timers, general purpose input/output (I/O or IO), memory card controllers such as Secure Digital (SD) MultiMediaCard (MMC) or similar, Universal Serial Bus (USB) interfaces, Mobile Industry Processor Interface (MIPI) interfaces and Joint Test Access Group (JTAG) test access ports. The processors (or cores) of the application circuitry1205may be coupled with or may include memory/storage elements and may be configured to execute instructions stored in the memory/storage to enable various applications or operating systems to run on the system1200. In some implementations, the memory/storage elements may be on-chip memory circuitry, which may include any suitable volatile and/or non-volatile memory, such as DRAM, SRAM, EPROM, EEPROM, Flash memory, solid-state memory, and/or any other type of memory device technology, such as those discussed herein. The processor(s) of application circuitry1205may include, for example, one or more processor cores (CPUs), one or more application processors, one or more graphics processing units (GPUs), one or more reduced instruction set computing (RISC) processors, one or more Acorn RISC Machine (ARM) processors, one or more complex instruction set computing (CISC) processors, one or more digital signal processors (DSP), one or more FPGAs, one or more PLDs, one or more ASICs, one or more microprocessors or controllers, or any suitable combination thereof. In some embodiments, the application circuitry1205may comprise, or may be, a special-purpose processor/controller to operate according to the various embodiments herein. 
As examples, the processor(s) of application circuitry1205may include one or more Intel Pentium®, Core®, or Xeon® processor(s); Advanced Micro Devices (AMD) Ryzen® processor(s), Accelerated Processing Units (APUs), or Epyc® processors; ARM-based processor(s) licensed from ARM Holdings, Ltd. such as the ARM Cortex-A family of processors and the ThunderX2® provided by Cavium™, Inc.; a MIPS-based design from MIPS Technologies, Inc. such as MIPS Warrior P-class processors; and/or the like. In some embodiments, the system1200may not utilize application circuitry1205, and instead may include a special-purpose processor/controller to process IP data received from an EPC or 5GC, for example. In some implementations, the application circuitry1205may include one or more hardware accelerators, which may be microprocessors, programmable processing devices, or the like. The one or more hardware accelerators may include, for example, computer vision (CV) and/or deep learning (DL) accelerators. As examples, the programmable processing devices may be one or more field-programmable devices (FPDs) such as field-programmable gate arrays (FPGAs) and the like; programmable logic devices (PLDs) such as complex PLDs (CPLDs), high-capacity PLDs (HCPLDs), and the like; ASICs such as structured ASICs and the like; programmable SoCs (PSoCs); and the like. In such implementations, the circuitry of application circuitry1205may comprise logic blocks or logic fabric, and other interconnected resources that may be programmed to perform various functions, such as the procedures, methods, functions, etc. of the various embodiments discussed herein. In such embodiments, the circuitry of application circuitry1205may include memory cells (e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, static memory (e.g., static random access memory (SRAM), anti-fuses, etc.)) used to store logic blocks, logic fabric, data, etc. 
in look-up-tables (LUTs) and the like. The baseband circuitry1210may be implemented, for example, as a solder-down substrate including one or more integrated circuits, a single packaged integrated circuit soldered to a main circuit board or a multi-chip module containing two or more integrated circuits. User interface circuitry1250may include one or more user interfaces designed to enable user interaction with the system1200or peripheral component interfaces designed to enable peripheral component interaction with the system1200. User interfaces may include, but are not limited to, one or more physical or virtual buttons (e.g., a reset button), one or more indicators (e.g., light emitting diodes (LEDs)), a physical keyboard or keypad, a mouse, a touchpad, a touchscreen, speakers or other audio emitting devices, microphones, a printer, a scanner, a headset, a display screen or display device, etc. Peripheral component interfaces may include, but are not limited to, a nonvolatile memory port, a universal serial bus (USB) port, an audio jack, a power supply interface, etc. The radio front end modules (RFEMs)1215may comprise a millimeter wave (mmWave) RFEM and one or more sub-mmWave radio frequency integrated circuits (RFICs). In some implementations, the one or more sub-mmWave RFICs may be physically separated from the mmWave RFEM. The RFICs may include connections to one or more antennas or antenna arrays (see e.g., antenna array5111ofFIG.5infra), and the RFEM may be connected to multiple antennas. In alternative implementations, both mmWave and sub-mmWave radio functions may be implemented in the same physical RFEM1215, which incorporates both mmWave antennas and sub-mmWave antennas. 
The memory circuitry1220may include one or more of volatile memory including dynamic random access memory (DRAM) and/or synchronous dynamic random access memory (SDRAM), and nonvolatile memory (NVM) including high-speed electrically erasable memory (commonly referred to as Flash memory), phase change random access memory (PRAM), magnetoresistive random access memory (MRAM), etc., and may incorporate the three-dimensional (3D) cross-point (XPOINT) memories from Intel® and Micron®. Memory circuitry1220may be implemented as one or more of solder down packaged integrated circuits, socketed memory modules and plug-in memory cards. The PMIC1225may include voltage regulators, surge protectors, power alarm detection circuitry, and one or more backup power sources such as a battery or capacitor. The power alarm detection circuitry may detect one or more of brown out (under-voltage) and surge (over-voltage) conditions. The power tee circuitry1230provides for electrical power to be drawn from a network cable to provide both power supply and data connectivity to the infrastructure equipment1200using a single cable. The network controller circuitry1235may provide connectivity to a network using a standard network interface protocol such as Ethernet, Ethernet over GRE Tunnels, Ethernet over Multiprotocol Label Switching (MPLS), or some other suitable protocol. Network connectivity may be provided to/from the infrastructure equipment1200via network interface connector1240using a physical connection, which may be electrical (commonly referred to as a “copper interconnect”), optical, or wireless. The network controller circuitry1235may include one or more dedicated processors and/or FPGAs to communicate using one or more of the aforementioned protocols. In some implementations, the network controller circuitry1235may include multiple controllers to provide connectivity to other networks using the same or different protocols. 
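As a non-limiting illustration of the brown-out and surge detection attributed to the PMIC1225's power alarm circuitry above, a sampled supply voltage may be classified against under- and over-voltage thresholds. The function name, nominal voltage, and tolerance below are assumed values for illustration, not parameters stated in the description:

```python
# Hypothetical sketch of power alarm detection: classify a sampled
# supply voltage as brown-out (under-voltage), surge (over-voltage),
# or nominal. Threshold values are assumed, not from the description.
def power_alarm(voltage: float, nominal: float = 12.0, tol: float = 0.10) -> str:
    low, high = nominal * (1 - tol), nominal * (1 + tol)
    if voltage < low:
        return "brown-out"  # under-voltage condition
    if voltage > high:
        return "surge"      # over-voltage condition
    return "ok"
```

With the assumed 12 V nominal and 10% tolerance, 10.0 V reports a brown-out and 13.5 V reports a surge.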
The positioning circuitry1245includes circuitry to receive and decode signals transmitted/broadcasted by a positioning network of a global navigation satellite system (GNSS). Examples of navigation satellite constellations (or GNSS) include United States' Global Positioning System (GPS), Russia's Global Navigation System (GLONASS), the European Union's Galileo system, China's BeiDou Navigation Satellite System, a regional navigation system or GNSS augmentation system (e.g., Navigation with Indian Constellation (NAVIC), Japan's Quasi-Zenith Satellite System (QZSS), France's Doppler Orbitography and Radio-positioning Integrated by Satellite (DORIS), etc.), or the like. The positioning circuitry1245comprises various hardware elements (e.g., including hardware devices such as switches, filters, amplifiers, antenna elements, and the like to facilitate OTA communications) to communicate with components of a positioning network, such as navigation satellite constellation nodes. In some embodiments, the positioning circuitry1245may include a Micro-Technology for Positioning, Navigation, and Timing (Micro-PNT) IC that uses a master timing clock to perform position tracking/estimation without GNSS assistance. The positioning circuitry1245may also be part of, or interact with, the baseband circuitry1210and/or RFEMs1215to communicate with the nodes and components of the positioning network. The positioning circuitry1245may also provide position data and/or time data to the application circuitry1205, which may use the data to synchronize operations with various infrastructure (e.g., AN908, etc.), or the like. 
The components shown byFIG.12may communicate with one another using interface circuitry, which may include any number of bus and/or interconnect (IX) technologies such as ISA, extended ISA, I2C, SPI, point-to-point interfaces, power management bus (PMBus), PCI, PCIe, PCIx, Intel® UPI, Intel® IAL, Intel® CXL, CAPI, OpenCAPI, Intel® QPI, UPI, Intel® OPA IX, RapidIO™ system IXs, CCIX, Gen-Z Consortium IXs, a HyperTransport interconnect, NVLink provided by NVIDIA®, and/or any number of other IX technologies. The IX technology may be a proprietary bus, for example, used in an SoC based system. FIG.13schematically illustrates a wireless network1300in accordance with various embodiments. The wireless network1300includes a UE1302in wireless communication with an AN1304. The UE1302and AN1304may be the same as, similar to, and/or substantially interchangeable with, like-named components described elsewhere herein such as the UE1001and RAN904ofFIG.9, and/or system1200ofFIG.12. The UE1302may be communicatively coupled with the AN1304via connection1306. The connection1306is illustrated as an air interface to enable communicative coupling, and can be consistent with cellular communications protocols such as an LTE protocol or a 5G NR protocol operating at mmWave or sub-6 GHz frequencies. The UE1302may include a host platform1308coupled with a modem platform1310. The host platform1308may include application processing circuitry1312, which may be coupled with protocol processing circuitry1314of the modem platform1310. The application processing circuitry1312may run various applications for the UE1302that source/sink application data. The application processing circuitry1312may further implement one or more layer operations to transmit/receive application data to/from a data network. 
These layer operations may include transport (for example UDP) and Internet (for example, IP) operations. The protocol processing circuitry1314may implement one or more layer operations to facilitate transmission or reception of data over the connection1306. The layer operations implemented by the protocol processing circuitry1314may include, for example, MAC, RLC, PDCP, RRC and NAS operations. The modem platform1310may further include digital baseband circuitry1316that may implement one or more layer operations that are “below” layer operations performed by the protocol processing circuitry1314in a network protocol stack. These operations may include, for example, PHY operations including one or more of HARQ-ACK functions, scrambling/descrambling, encoding/decoding, layer mapping/de-mapping, modulation symbol mapping, received symbol/bit metric determination, multi-antenna port precoding/decoding, which may include one or more of space-time, space-frequency or spatial coding, reference signal generation/detection, preamble sequence generation and/or decoding, synchronization sequence generation/detection, control channel signal blind decoding, and other related functions. The modem platform1310may further include transmit circuitry1318, receive circuitry1320, RF circuitry1322, and RF front end (RFFE)1324, which may include or connect to one or more antenna panels1326. Briefly, the transmit circuitry1318may include a digital-to-analog converter, mixer, intermediate frequency (IF) components, etc.; the receive circuitry1320may include an analog-to-digital converter, mixer, IF components, etc.; the RF circuitry1322may include a low-noise amplifier, a power amplifier, power tracking components, etc.; RFFE1324may include filters (for example, surface/bulk acoustic wave filters), switches, antenna tuners, beamforming components (for example, phase-array antenna components), etc. 
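As one concrete illustration of a PHY operation named above, scrambling/descrambling can be modeled as an XOR of the payload bits with a pseudo-random sequence; applying the same sequence a second time recovers the original bits. The sketch below is illustrative only: the LFSR width, taps, and seed are arbitrary choices, not the 3GPP-specified scrambling sequence.

```python
def prbs(seed: int, n: int) -> list:
    """Generate n pseudo-random bits from a simple 7-bit LFSR.
    The taps and width here are illustrative, not a 3GPP sequence."""
    state = seed & 0x7F
    out = []
    for _ in range(n):
        bit = state & 1
        out.append(bit)
        feedback = ((state >> 0) ^ (state >> 6)) & 1  # taps at bits 0 and 6
        state = (state >> 1) | (feedback << 6)
    return out

def scramble(bits, seed=0x5A):
    """XOR the payload with the PRBS; the identical call descrambles,
    since XOR with the same sequence is its own inverse."""
    seq = prbs(seed, len(bits))
    return [b ^ s for b, s in zip(bits, seq)]

payload = [1, 0, 1, 1, 0, 0, 1, 0]
tx = scramble(payload)   # scrambled bits sent over the air
rx = scramble(tx)        # receiver applies the same sequence
assert rx == payload
```

The involutive property (scrambling twice is the identity) is what lets transmitter and receiver share a single routine, differing only in the seed they agree on.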
The selection and arrangement of the components of the transmit circuitry1318, receive circuitry1320, RF circuitry1322, RFFE1324, and antenna panels1326(referred to generically as “transmit/receive components”) may be specific to details of a specific implementation such as, for example, whether communication is TDM or FDM, in mmWave or sub-6 GHz frequencies, etc. In some embodiments, the transmit/receive components may be arranged in multiple parallel transmit/receive chains, may be disposed in the same or different chips/modules, etc. In some embodiments, the protocol processing circuitry1314may include one or more instances of control circuitry (not shown) to provide control functions for the transmit/receive components. A UE reception may be established by and via the antenna panels1326, RFFE1324, RF circuitry1322, receive circuitry1320, digital baseband circuitry1316, and protocol processing circuitry1314. In some embodiments, the antenna panels1326may receive a transmission from the AN1304by receive-beamforming signals received by a plurality of antennas/antenna elements of the one or more antenna panels1326. A UE transmission may be established by and via the protocol processing circuitry1314, digital baseband circuitry1316, transmit circuitry1318, RF circuitry1322, RFFE1324, and antenna panels1326. In some embodiments, the transmit components of the UE1302may apply a spatial filter to the data to be transmitted to form a transmit beam emitted by the antenna elements of the antenna panels1326. Similar to the UE1302, the AN1304may include a host platform1328coupled with a modem platform1330. The host platform1328may include application processing circuitry1332coupled with protocol processing circuitry1334of the modem platform1330. The modem platform may further include digital baseband circuitry1336, transmit circuitry1338, receive circuitry1340, RF circuitry1342, RFFE circuitry1344, and antenna panels1346. 
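The receive-beamforming described above can be sketched as weighting each antenna element's signal with a conjugate phase (a spatial filter) so that a wave from the desired direction combines coherently across the elements. The model below is a simplified uniform linear array with an assumed half-wavelength element spacing; it is illustrative, not any particular panel implementation.

```python
import cmath
import math

def steering(n_elems: int, theta: float, d: float = 0.5) -> list:
    """Per-element phase for a plane wave arriving from angle theta
    (uniform linear array, element spacing d in wavelengths)."""
    return [cmath.exp(2j * math.pi * d * k * math.sin(theta))
            for k in range(n_elems)]

def beamform(samples: list, theta: float) -> complex:
    """Apply the conjugate steering vector (the spatial filter) and sum."""
    w = steering(len(samples), theta)
    return sum(s * wk.conjugate() for s, wk in zip(samples, w))

# A unit-amplitude wave arriving from 30 degrees, seen by 8 elements:
theta = math.radians(30)
rx = steering(8, theta)                       # per-element observations
gain = abs(beamform(rx, theta))               # beam steered at the true angle
mis = abs(beamform(rx, math.radians(-40)))    # beam steered elsewhere
assert gain > mis  # coherent combining beats a mis-steered beam
```

Steering at the true angle sums eight unit vectors in phase (magnitude 8), while a mis-steered beam sums them with rotating phases that largely cancel, which is exactly the directional selectivity the antenna panels1326exploit.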
The components of the AN1304may be similar to and substantially interchangeable with like-named components of the UE1302. In addition to performing data transmission/reception as described above, the components of the AN1304may perform various logical functions that include, for example, RNC functions such as radio bearer management, uplink and downlink dynamic radio resource management, and data packet scheduling. Although not shown, the components of UE1302and/or AN1304may communicate with one another using a suitable bus or interconnect (IX) technology, which may include any number of technologies, including ISA, extended ISA, I2C, SPI, point-to-point interfaces, power management bus (PMBus), PCI, PCIe, PCIx, Intel® UPI, Intel® IAL, Intel® CXL, CAPI, OpenCAPI, Intel® QPI, UPI, Intel® OPA IX, RapidIO™ system IXs, CCIX, Gen-Z Consortium IXs, a HyperTransport interconnect, NVLink provided by NVIDIA®, a Time-Trigger Protocol (TTP) system, a FlexRay system, and/or any number of other IX technologies. The IX technology may be a proprietary bus, for example, used in an SoC based system. FIG.14is a block diagram illustrating components, according to some example embodiments, able to read instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically,FIG.14shows a diagrammatic representation of hardware resources1400including one or more processors (or processor cores)1410, one or more memory/storage devices1420, and one or more communication resources1430, each of which may be communicatively coupled via a bus1440. For embodiments where node virtualization (e.g., NFV) is utilized, a hypervisor1402may be executed to provide an execution environment for one or more network slices/sub-slices to utilize the hardware resources1400. The processors1410may include, for example, a processor1412and a processor1414. 
The processor(s)1410may be, for example, a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a DSP such as a baseband processor, an ASIC, an FPGA, a radio-frequency integrated circuit (RFIC), another processor (including those discussed herein), or any suitable combination thereof. The memory/storage devices1420may include main memory, disk storage, or any suitable combination thereof. The memory/storage devices1420may include, but are not limited to, any type of volatile or nonvolatile memory such as dynamic random access memory (DRAM), static random access memory (SRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), Flash memory, solid-state storage, etc. The communication resources1430may include interconnection or network interface components or other suitable devices to communicate with one or more peripheral devices1404or one or more databases1406via a network1408. For example, the communication resources1430may include wired communication components (e.g., for coupling via USB), cellular communication components, NFC components, Bluetooth® (or Bluetooth® Low Energy) components, Wi-Fi® components, and other communication components. Instructions1450may comprise software, a program, an application, an applet, an app, or other executable code for causing at least one of the processors1410to perform any one or more of the methodologies discussed herein. The instructions1450may reside, completely or partially, within at least one of the processors1410(e.g., within the processor's cache memory), the memory/storage devices1420, or any suitable combination thereof. Furthermore, any portion of the instructions1450may be transferred to the hardware resources1400from any combination of the peripheral devices1404or the databases1406. 
Accordingly, the memory of processors1410, the memory/storage devices1420, the peripheral devices1404, and the databases1406are examples of computer-readable and machine-readable media. For one or more embodiments, at least one of the components set forth in one or more of the preceding figures may be configured to perform one or more operations, techniques, processes, and/or methods as set forth in the example section below. For example, the baseband circuitry as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below. For another example, circuitry associated with a UE, base station, network element, etc. as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below in the example section. 6. Network Function Virtualization Management and Orchestration Architecture FIG.15depicts an example management architecture of mobile networks that include virtualized network functions. The management architecture ofFIG.15is a 3GPP management and ETSI Network Function Virtualization (NFV) Management and Orchestration (MANO) architecture. The 3GPP management system ofFIG.15is based on 4G networks, but aspects of the management system may be applicable to 5G networks. The mobile network comprises physical and virtualized network elements. Application-specific aspects of both Virtualized Network Functions (VNFs) and Physical Network Functions (PNFs), corresponding to physical network elements (PNEs), are managed by the 3GPP management system. 
InFIG.15, the Itf-N is an interface between the Network Manager (NM) and the domain manager (DM)/element manager (EM), the Os-Ma-nfvo is the reference point between the Operations Support System (OSS)/Business Support System (BSS) (e.g., the NM) and the NFV Orchestrator (NFVO), the Ve-Vnfm-em is the reference point between the EM and the VNFM, and the Ve-Vnfm-vnf is the reference point between the VNF and the VNF Manager (VNFM). The NFV Infrastructure (NFVI) comprises hardware and software components that together provide the infrastructure resources where VNFs are deployed. The infrastructure resources contain the hardware resources, virtualization layer software and the virtualized resources which the VNF relies on. NFVI resources under consideration are both virtualised and non-virtualised resources, supporting virtualised network functions and partially virtualised network functions. Virtualised resources in-scope are those that can be associated with virtualisation containers, and have been catalogued and offered for consumption through appropriately abstracted services, for example: compute, including machines (e.g., hosts or bare metal) and virtual machines, as resources that comprise both compute (CPU) and memory; storage, including volumes of storage at either block or file-system level; and network, including networks, subnets, ports, addresses, links and forwarding rules, for the purpose of ensuring intra- and inter-VNF connectivity. The NM plays one of the roles of OSS/BSS and is the consumer of reference point Os-Ma-nfvo. The NM provides the functions for the management of the mobile network, which includes virtualized network functions. The NM supports FCAPS management functions of the mobile network (e.g., IMS, EPC, 5GS, etc.) and 3GPP service (e.g., data service, voice service) and supports the management of the mobile network lifecycle. 
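The catalogued virtualised resource types described above (compute comprising CPU and memory, storage volumes at block or file-system level, and network elements for intra-/inter-VNF connectivity) can be sketched as a minimal data model. The class and field names below are illustrative choices, not ETSI-defined information elements.

```python
from dataclasses import dataclass, field

@dataclass
class ComputeResource:
    """Compute resource comprising both compute (CPU) and memory."""
    vcpus: int
    memory_gb: int

@dataclass
class StorageResource:
    """Volume of storage at either block or file-system level."""
    size_gb: int
    level: str  # "block" or "file-system"

@dataclass
class NetworkResource:
    """Networks, subnets, ports, links, etc., ensuring VNF connectivity."""
    kind: str   # e.g. "network", "subnet", "port", "link"
    name: str

@dataclass
class NfviCatalogue:
    """Virtualised resources catalogued and offered for consumption."""
    compute: list = field(default_factory=list)
    storage: list = field(default_factory=list)
    network: list = field(default_factory=list)

cat = NfviCatalogue()
cat.compute.append(ComputeResource(vcpus=2, memory_gb=2))
cat.storage.append(StorageResource(size_gb=100, level="block"))
cat.network.append(NetworkResource(kind="subnet", name="vnf-internal"))
assert cat.compute[0].vcpus == 2
```

Grouping the three resource families under one catalogue object mirrors the text's point that the NFVI offers them through a single abstracted service surface.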
NM initiates the lifecycle management of ETSI-defined NS and VNF, which are related to mobile network, through interaction with NFV-MANO. The NM provides a package of end-user functions with the responsibility for the management of a network, which may include network elements with VNFs, non-virtualized network functions, or both (management of the VNFs may occur via the EM). The EM and/or DM is responsible for FCAPS management functionality for VNFs on an application level and physical NE on a domain and element level. This mainly includes: fault management for VNFs and physical NEs; configuration management for VNF and physical NE; accounting management for VNFs and physical NEs; performance measurement and collection for VNF and physical NE; and security management for VNF and physical NEs. The EM/DM participates in lifecycle management functionality for a VNF, such as by requesting a lifecycle management operation for a VNF to VNFM, and exchanging information regarding the VNF and Virtualized Resources associated with the VNF. The DM/EM (including EM extended functionality) can manage both PNF(s) and VNF(s). The tracking of data from the VNFM and the EM may comprise, for example, PM data used by the VIM or the NFVI. Both the VNFM and the EM can scale up/down the quantity of VNFs of the system. The NFV-MANO comprises an NFV Orchestrator (NFVO), VNF Manager (VNFM), and VIM. The NFVO and VNFM share interfaces with the NM, DM/EM and VNF. The NFVO is responsible for the orchestration of NFVI resources across multiple VIMs, fulfilling the Resource Orchestration functions, and the lifecycle management of Network Services, fulfilling the Network Service Orchestration functions. 
The NFVO coordinates, authorizes, releases, and engages resources of the NFVI in order to provide the requested service(s) (e.g., to execute an NF, AF, component, and/or slice). The management and orchestration of virtualised resources includes handling NFVI resources (e.g., in NFVI Nodes) in NFVI Points of Presence (NFVI-PoPs). Management of non-virtualised resources is restricted to provisioning connectivity to PNFs, necessary when a NS instance includes a PNF that needs to connect to a VNF, or when the NS instance is distributed across multiple NFVI-Point of Presence (PoPs) or Network-PoPs (N-PoPs). The virtualised resources are leveraged for providing VNFs with the resources they need. Resource allocation in the NFVI is a potentially complex task because a lot of requirements and constraints may need to be met at the same time. In particular, requirements for network allocation add new complexity compared to known resource allocation strategies for computing resources in virtualised environments. For example, some VNFs require low latency or high bandwidth links to other communication endpoints. Allocation and release of resources is a dynamic process, in response to consumption of those services by other functions. While the management and orchestration functions for virtualised infrastructure are VNF-unaware, resource allocations and releases may be needed throughout the VNF lifetime. An advantage of NFV is that with increasing load VNFs can dynamically consume services that allocate additional resources when scaling-out is triggered. Services exposing virtualised resources include (non-exhaustive list): discovery of available services; management of virtualised resources availability/allocation/release; and virtualised resource fault/performance management. 
In the case of virtualised resources distributed across multiple NFVI-PoPs, those services could either be exposed directly by the management and orchestration functions for each individual NFVI-PoP, or via a higher-level service abstraction presenting the virtualised resources across multiple NFVI-PoPs. Both types of services could be exposed to the consuming functions. In the case of the higher level service abstraction previously mentioned, the management and orchestration of virtualised resources and non-virtualised networking resources across those NFVI-PoPs falls under the responsibility of the management and orchestration of the virtualised infrastructure that may in turn use the services exposed directly by the management and orchestration functions of a single or across multiple NFVI-PoPs. In order to provide those services, the management and orchestration of the virtualised infrastructure consumes services provided by the NFVI. The NFV management and orchestration functions that coordinate virtualised resources in a single NFVI-PoP and/or across multiple NFVI-PoPs need to ensure exposure of services that support accessing these resources in an open, well known abstracted manner. These services can be consumed by other authenticated and properly authorized NFV management and orchestration functions (e.g., functions that manage and orchestrate virtualised network functions). Network Service Lifecycle management includes, inter alia, on-boarding Network Service, e.g., register a Network Service in the catalogue and ensure that all the templates describing the NS are on-boarded; instantiating Network Service, e.g., create a Network Service using the NS on-boarding artefacts; scaling Network Service, e.g., grow or reduce the capacity of the Network Service; updating Network Service by supporting Network Service configuration changes of various complexity such as changing inter-VNF connectivity or the constituent VNF instances. 
Performing create, delete, query, and update procedures for VNFFGs associated to a Network Service; and terminating Network Services, e.g., request the termination of constituent VNF instances, request the release of NFVI resources associated to NSs, and return them to NFVI resource pool if applicable. The deployment and operational behaviour requirements of each Network Service is captured in a deployment template, and stored during the Network Service on-boarding process in a catalogue, for future selection for instantiation. The deployment template fully describes the attributes and requirements necessary to realize such a Network Service. Network Service Orchestration coordinates the lifecycle of VNFs that jointly realize a Network Service. This includes (not limited to) managing the associations between different VNFs, and when applicable between VNFs and PNFs, the topology of the Network Service, and the VNFFGs associated with the Network Service. During the Network Service lifecycle, the Network Service Orchestration functions may monitor KPIs of a Network Service if such requirements were captured in the deployment template, and may report this information to support explicit request for such operations from other functions. The Network Service Orchestration performs its services by using the VNF Management services and by orchestrating the NFV Infrastructure that supports the interconnection between VNFs functionality, and its functions are exposed in an open, well known abstracted manner as services to other functions. In order to fulfil its responsibilities, the Network Service Orchestration functions consume services exposed by other functions (e.g., Virtualised Infrastructure Management functions). The services provided by Network Service Orchestration can be consumed by authenticated and properly authorized other functions (e.g., OSS, BSS). The VNFM is responsible for the lifecycle management of VNF instances. 
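The Network Service lifecycle operations enumerated above (on-boarding, instantiating, scaling, updating, terminating, with resources returned to the NFVI pool) can be sketched as a simple state machine. The class, state names, and capacity field below are illustrative inventions, not ETSI-defined objects.

```python
class NetworkService:
    """Illustrative NS lifecycle: on-board -> instantiate ->
    (scale/update)* -> terminate."""

    def __init__(self, nsd_id: str):
        self.nsd_id = nsd_id
        self.state = "NOT_ONBOARDED"
        self.capacity = 0

    def onboard(self):
        # Register the NS in the catalogue; its templates are on-boarded.
        assert self.state == "NOT_ONBOARDED"
        self.state = "ONBOARDED"

    def instantiate(self, capacity: int = 1):
        # Create the NS using the on-boarding artefacts.
        assert self.state == "ONBOARDED"
        self.capacity = capacity
        self.state = "INSTANTIATED"

    def scale(self, delta: int):
        # Grow or reduce the capacity of the Network Service.
        assert self.state == "INSTANTIATED"
        self.capacity = max(0, self.capacity + delta)

    def terminate(self):
        # Release NFVI resources and return them to the resource pool.
        assert self.state == "INSTANTIATED"
        self.capacity = 0
        self.state = "TERMINATED"

ns = NetworkService("nsd-001")
ns.onboard()
ns.instantiate(capacity=2)
ns.scale(+3)
assert ns.capacity == 5
ns.terminate()
assert ns.state == "TERMINATED"
```

Encoding the lifecycle as guarded transitions matches the text's ordering: a deployment template must be on-boarded to the catalogue before the NS can be selected for instantiation, and only an instantiated NS can be scaled or terminated.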
Each VNF instance is assumed to have an associated VNF Manager. The VNF manager may be assigned the management of a single VNF instance, or the management of multiple VNF instances of the same type or of different types. Most of the VNFM functions are assumed to be generic common functions applicable to any type of VNF. However, the NFV-MANO architectural framework needs to also support cases where VNF instances need specific functionality for their lifecycle management, and such functionality may be specified in the VNF Package. The VNFs are used to execute EPC/5GC components/functions. The VNFs are configured to perform VNF instantiation, including VNF configuration if required by the VNF deployment template (e.g., VNF initial configuration with IP addresses before completion of the VNF instantiation operation); VNF instantiation feasibility checking; VNF instance software update/upgrade; VNF instance modification; VNF instance scaling out/in and up/down; VNF instance-related collection of NFVI performance measurement results and faults/events information, and correlation to VNF instance-related events/faults; VNF instance assisted or automated healing; VNF instance termination; VNF lifecycle management change notification; management of the integrity of the VNF instance through its lifecycle; and overall coordination and adaptation role for configuration and event reporting between the VIM and the EM. The aforementioned functionalities may be exposed by means of interfaces and consumed by other NFV-MANO functional blocks or by authorised external entities. The deployment and operational behaviour of each VNF is captured in a template called Virtualised Network Function Descriptor (VNFD) that is stored in the VNF catalogue. NFV-MANO uses a VNFD to create instances of the VNF it represents, and to manage the lifecycle of those instances. 
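The scaling operations listed above distinguish scaling out/in (changing the number of VNF component instances) from scaling up/down (changing the resources of each instance). A hypothetical sketch of that distinction, with invented class and method names rather than ETSI-defined interfaces:

```python
class VnfInstance:
    """Illustrative VNFM scaling model; names are assumptions only."""

    def __init__(self, vcpus_per_vnfc: int = 2):
        self.vnfc_count = 1             # number of VNF components
        self.vcpus_per_vnfc = vcpus_per_vnfc

    def scale_out(self, n: int = 1):
        """Scale out: add VNFC instances."""
        self.vnfc_count += n

    def scale_in(self, n: int = 1):
        """Scale in: remove VNFC instances (at least one remains)."""
        self.vnfc_count = max(1, self.vnfc_count - n)

    def scale_up(self, vcpus: int):
        """Scale up: grow the resources of each existing instance."""
        self.vcpus_per_vnfc += vcpus

    def total_vcpus(self) -> int:
        return self.vnfc_count * self.vcpus_per_vnfc

vnf = VnfInstance()
vnf.scale_out(2)   # now 3 VNFCs
vnf.scale_up(2)    # now 4 vCPUs per VNFC
assert vnf.total_vcpus() == 12
```

The two axes are independent, which is why the text names them separately: horizontal scaling changes instance count, vertical scaling changes the per-instance resource footprint.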
A VNFD has a one-to-one correspondence with a VNF Package, and it fully describes the attributes and requirements necessary to realize such a VNF. NFVI resources are assigned to a VNF based on the requirements captured in the VNFD (containing resource allocation criteria, among others), but also taking into consideration specific requirements, constraints, and policies that have been pre-provisioned or are accompanying the request for instantiation and may override certain requirements in the VNFD (e.g., operator policies, geo-location placement, affinity/anti-affinity rules, local regulations). The information elements to be handled by the NFV-MANO, including the VNFD among others, need to guarantee the flexible deployment and portability of VNF instances on multi-vendor and diverse NFVI environments, e.g., with diverse computing resource generations, diverse virtual network technologies, etc. To achieve this, hardware resources need to be properly abstracted and the VNF requirements be described in terms of such abstraction. The VNFM has access to a repository of available VNF Packages and different versions of them, all represented via their associated VNFDs. Different versions of a VNF Package may correspond to different implementations of the same function, different versions to run in different execution environments (e.g., on different hypervisors, dependent on NFVI resources availability information, etc.), or different release versions of the same software. The repository may be maintained by the NFVO or another external entity. The Virtualised Infrastructure Manager (VIM) is responsible for controlling and managing the NFVI resources (e.g., compute, storage and network resources) usually within one operator's Infrastructure Domain (e.g., all resources within an NFVI-PoP, resources across multiple NFVI-POPs, or a subset of resources within an NFVI-PoP) as discussed previously. 
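The resource-assignment rule just described, where NFVI resources follow the VNFD's allocation criteria but pre-provisioned or request-accompanying policies (operator policies, geo-location placement, affinity/anti-affinity rules) may override them, can be sketched as a dictionary merge in which the policy takes precedence. The keys below are invented for illustration.

```python
def effective_requirements(vnfd: dict, policies: dict) -> dict:
    """Start from the VNFD's resource allocation criteria and let
    accompanying policies override matching entries."""
    merged = dict(vnfd)
    merged.update(policies)  # policy values take precedence
    return merged

vnfd = {"vcpus": 4, "memory_gb": 8, "placement": "any"}
policies = {"placement": "eu-region", "anti_affinity": "vnf-group-1"}
req = effective_requirements(vnfd, policies)
assert req["placement"] == "eu-region"   # policy overrides the VNFD
assert req["vcpus"] == 4                 # VNFD value kept where no override
```

Keeping the VNFD as the base layer and the policies as an overlay matches the text's ordering: the descriptor fully describes the VNF, and overrides are the exception applied at instantiation time.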
The VIM manages the life cycle of virtual resources within the NFVI (e.g., creation, maintenance, and tear down of VMs associated with one or more physical resources), tracks VM instances, tracks performance, fault, and security of VM instances and associated physical resources, and exposes VM instances and associated physical resources to other management systems. The VIM may be specialized in handling a certain type of NFVI resources (e.g., compute-only, storage-only, networking-only), or may be capable of managing multiple types of NFVI resources (e.g., in NFVI-Nodes). The VIM's southbound interfaces interface with a variety of hypervisors and Network Controllers in order to perform the functionality exposed through its northbound interfaces. Other VIM implementations may directly expose the interfaces exposed by such compute, storage, and Network Controllers as specialized VIMs. A particular example of a specialized VIM is a WAN Infrastructure Manager (WIM), typically used to establish connectivity between PNF endpoints in different NFVI-PoPs. The VIM implementation is out of scope for NFV-MANO; however, the interfaces exposed by VIMs are in-scope. 
The VIM is operable or configurable to: orchestrate the allocation/upgrade/release/reclamation of NFVI resources (including the optimization of such resources usage), and manage the association of the virtualised resources to the physical compute, storage, networking resources, wherein the VIM keeps an inventory of the allocation of virtual resources to physical resources, e.g., to a server pool and/or the like; support the management of VNF Forwarding Graphs (create, query, update, delete), e.g., by creating and maintaining Virtual Links, virtual networks, sub-nets, and ports, as well as the management of security group policies to ensure network/traffic access control; manage, in a repository, inventory related information of NFVI hardware resources (e.g., compute, storage, networking) and software resources (e.g., hypervisors), and discovery of the capabilities and features (e.g., related to usage optimization) of such resources; manage the virtualised resource capacity (e.g., density of virtualised resources to physical resources), and forwarding of information related to NFVI resources capacity and usage reporting; manage software images (add, delete, update, query, copy) as requested by other NFV-MANO functional blocks (e.g., NFVO)—the VIM maintains repositories for software images, in order to streamline the allocation of virtualised computing resources, and a validation step, performed by VIM, is required for software images before storing the image (e.g., VNF package on-boarding and update). 
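The inventory described above, in which the VIM keeps track of which virtualised resources are allocated to which physical resources (e.g., a server pool) and can reclaim them, can be sketched as a simple allocation table with first-fit placement. The class, host names, and vCPU accounting are illustrative assumptions.

```python
class VimInventory:
    """Minimal sketch of the VIM's allocation inventory:
    which virtual resource runs on which physical host."""

    def __init__(self, hosts: dict):
        self.free = dict(hosts)    # host -> free vCPUs
        self.allocation = {}       # vm_id -> (host, vcpus)

    def allocate(self, vm_id: str, vcpus: int) -> str:
        # First-fit placement onto a host with enough free capacity.
        for host, free in self.free.items():
            if free >= vcpus:
                self.free[host] -= vcpus
                self.allocation[vm_id] = (host, vcpus)
                return host
        raise RuntimeError("no NFVI capacity for " + vm_id)

    def release(self, vm_id: str):
        # Reclaim the resources and return them to the pool.
        host, vcpus = self.allocation.pop(vm_id)
        self.free[host] += vcpus

inv = VimInventory({"host-a": 8, "host-b": 16})
assert inv.allocate("vm-1", 6) == "host-a"
assert inv.allocate("vm-2", 4) == "host-b"  # host-a has only 2 vCPUs free
inv.release("vm-1")
assert inv.free["host-a"] == 8
```

Real placement additionally weighs the security groups, virtual links, and usage-optimization features the text enumerates; the sketch isolates just the virtual-to-physical bookkeeping.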
Image validation operations during run-time, e.g., during instantiation or scaling, are outside the scope of the current version of the present document; collect performance and fault information (e.g., via notifications) of hardware resources (compute, storage, and networking), software resources (e.g., hypervisors), and virtualised resources (e.g., VMs); and forwarding of performance measurement results and faults/events information relative to virtualised resources; and manage catalogues of virtualised resources that can be consumed from the NFVI. The elements in the catalogue may be in the form of virtualised resource configurations (virtual CPU configurations, types of network connectivity (e.g., L2, L3), etc.), and/or templates (e.g., a virtual machine with 2 virtual CPUs and 2 GB of virtual memory). FIG.16depicts another example of a management services architecture. Note: the management services (e.g., NF instantiation services) may be named differently than as shown byFIGS.15and16; the names of the management services are not the focus of the present embodiments, as the present disclosure is related to the functionality of the management services. In the examples ofFIGS.15and/or16, the NF instantiation service producer may provide a management service to a consumer, by receiving a request from a consumer to instantiate a 3GPP NF; checking whether the VNF package(s) of the VNF(s) realizing the virtualized part of the 3GPP NF have been on-boarded to the NFV MANO system, and on-boarding the VNF package(s) that have not been on-boarded yet; interacting with the NFV MANO system to instantiate the VNF(s) that are realizing the virtualized part of subject 3GPP NF; informing the consumer that the 3GPP NF has been instantiated; and creating the Managed Object Instances (MOIs) for the subject 3GPP NF. 
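The NF instantiation flow just described (check which VNF packages are on-boarded, on-board the missing ones, instantiate the VNFs via the NFV MANO system, inform the consumer, create the MOIs) can be sketched as follows. The function names, request fields, and the FakeMano stand-in are hypothetical, not 3GPP- or ETSI-defined APIs.

```python
def instantiate_nf(request: dict, onboarded: set, mano) -> dict:
    """Hypothetical NF instantiation service producer.
    `mano` is any object exposing onboard()/instantiate() calls."""
    # 1. Check whether the VNF package(s) realizing the virtualized
    #    part of the 3GPP NF are on-boarded; on-board the missing ones.
    for pkg in request["vnf_packages"]:
        if pkg not in onboarded:
            mano.onboard(pkg)
            onboarded.add(pkg)
    # 2. Interact with the NFV MANO system to instantiate the VNF(s).
    vnf_ids = [mano.instantiate(pkg) for pkg in request["vnf_packages"]]
    # 3. Create the Managed Object Instances and inform the consumer.
    mois = ["MOI-" + v for v in vnf_ids]
    return {"status": "INSTANTIATED", "vnf_ids": vnf_ids, "mois": mois}

class FakeMano:
    """Stand-in for the NFV MANO system, for illustration only."""
    def __init__(self):
        self.count = 0
    def onboard(self, pkg):
        pass
    def instantiate(self, pkg):
        self.count += 1
        return f"vnf-{self.count}"

result = instantiate_nf({"vnf_packages": ["pkg-a", "pkg-b"]},
                        onboarded={"pkg-a"}, mano=FakeMano())
assert result["status"] == "INSTANTIATED"
assert result["mois"] == ["MOI-vnf-1", "MOI-vnf-2"]
```

The on-boarding check before instantiation mirrors the text's ordering: packages not yet in the MANO system must be on-boarded first, then the virtualized part of the NF is instantiated and represented by MOIs.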
The NF configuration service producer may provide management services to a consumer, by receiving a request from a consumer to configure a 3GPP NF instance; configuring the 3GPP NF instance; and informing the consumer that the 3GPP NF instance has been configured. The request from the consumer to configure a 3GPP NF instance may include a request to create the MOI(s) for the 3GPP NF. The request from the consumer to configure the 3GPP NF instance may be a MOI attribute modification request. Configuring the 3GPP NF instance may include interaction with the ETSI NFV MANO system to update the corresponding VNF instance(s) realizing the virtualized part of the 3GPP NF. The network creation service producer may provide management services to a consumer, by receiving a request from a consumer to create a 3GPP network; preparing the NSD(s) for the NS(s) that are to realize the requested 3GPP network, and on-boarding the NSD(s) to ETSI NFV MANO system; on-boarding the VNF package(s) of the constituent VNFs to ETSI NFV MANO system, if the VNF package has not yet been on-boarded; interacting with ETSI NFV MANO system to instantiate the NS(s); consuming a management service to configure the 3GPP NF instance(s) that are constituting the subject 3GPP network; and creating the MOI(s) for the created network. The interaction with the ETSI NFV MANO system to instantiate the NS(s) may include the instantiation of the constituent VNF(s). The ETSI NFV MANO system may inform the management service producer about the instantiation of VNFs. The management service producer may create the MOI(s) for the newly instantiated VNFs, and may provide the NF configuration service. Additionally, the management service being consumed to configure the 3GPP NF instance(s) may be the NF configuration service. 
The network configuration service producer may provide management services to a consumer, by receiving a request from a consumer to configure a 3GPP network; configuring the 3GPP network; and informing the consumer that the 3GPP network has been configured. The request from the consumer to configure a 3GPP network includes a request to create the MOI(s) for the 3GPP network, and the request from the consumer to configure a 3GPP network is a MOI attribute modification request. Configuring the 3GPP network further may include interaction with the ETSI NFV MANO system to update the corresponding NS instance(s) realizing (fully or partially) the 3GPP network. The interaction with the ETSI NFV MANO system to update the NS(s) may include the instantiation of new VNFs. The ETSI NFV MANO system may inform a management service producer about the instantiation of VNFs. The management service producer may create the MOI(s) for the newly instantiated VNFs, and the management service producer may provide the NF configuration service. In addition, configuring the 3GPP network further includes configuring the 3GPP NFs, and configuring the 3GPP NFs may be via consuming the management service for NF configuration. The NSSI creation service producer (NSSMF) may create a NSSI for a consumer, by consuming a management service to configure the 3GPP network; and/or consuming a management service to configure the 3GPP NF(s). The management service consumed to configure the 3GPP network may be a network configuration service, and the management service consumed to configure the 3GPP NF(s) may be an NF configuration service. 7. Example Implementations FIG.17illustrates process1700for practicing various embodiments herein. While particular examples and orders of operations are illustrated inFIG.17, the depicted orders of operations should not be construed to limit the scope of the embodiments in any way. 
Rather, the depicted operations may be re-ordered, broken into additional operations, combined, and/or omitted altogether while remaining within the spirit and scope of the present disclosure. Process1700may be performed by an MDAS producer. Process1700begins at operation1701wherein the MDAS producer obtains input data related to one or more managed networks and services from one or more data sources. At operation1702, the MDAS producer prepares, processes, and analyzes the input data. At operation1703, the MDAS producer generates an analytics report based on the preparation, processing, and analysis. At operation1704, the MDAS producer sends the analytics report to an MDAS consumer for root cause analysis of ongoing issues, prevention of potential issues, and prediction of network or service demands. Additional examples of the presently described embodiments include the following, non-limiting implementations. Each of the following non-limiting examples may stand on its own or may be combined in any permutation or combination with any one or more of the other examples provided below or throughout the present disclosure. Example A01 includes a service producer for MDA supported by one or more processors configured to: receive input data; prepare the received data; analyze the prepared data; provide an analytics report to an MDA consumer; and evaluate results of actions executed by the MDA consumer. Example A02 includes the service producer of example A01 and/or some other example(s) herein, wherein the MDA is for network management and orchestration and/or service management and orchestration. 
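The four operations of process1700 (obtain input data, prepare and analyze, generate a report, send it to the consumer) can be outlined as a minimal sketch, assuming a simple dictionary record format and a hypothetical low-RSRP threshold as the coverage indicator; none of the class, method, or field names below are defined by 3GPP.

```python
# Illustrative sketch of process1700; all names are assumptions.
from dataclasses import dataclass, field

@dataclass
class AnalyticsReport:
    issue_id: str
    findings: dict = field(default_factory=dict)

class MDASProducer:
    def obtain_input_data(self, sources):
        # Operation 1701: collect input data from one or more data sources.
        return [record for source in sources for record in source]

    def prepare_and_analyze(self, data):
        # Operation 1702: prepare, process, and analyze the input data.
        # Here: count samples below a hypothetical weak-coverage RSRP threshold.
        weak = [r for r in data if r.get("rsrp_dbm", 0) < -110]
        return {"weak_coverage_samples": len(weak)}

    def generate_report(self, analysis):
        # Operation 1703: generate an analytics report from the analysis.
        return AnalyticsReport(issue_id="COV-001", findings=analysis)

    def send_report(self, report, consumer):
        # Operation 1704: deliver the report to the MDAS consumer.
        consumer.append(report)
        return report

producer = MDASProducer()
data = producer.obtain_input_data([[{"rsrp_dbm": -120}, {"rsrp_dbm": -80}]])
report = producer.generate_report(producer.prepare_and_analyze(data))
consumer_inbox = []
producer.send_report(report, consumer_inbox)
```

The consumer would then use the delivered report for root cause analysis, issue prevention, or demand prediction, as described above.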
Example A03 includes the service producer of examples A01 and A02 and/or some other example(s) herein, wherein the input data includes at least one type of the following data: performance measurement; alarm information; configuration data; trace data; MDT reports; RLF reports; RCEF reports; data provided by NWDAF; QoE data; geographical data; terrain data; execution reports provided by the MDA consumer; and evaluation results of the actions executed by the MDA consumer. Example A04 includes the service producer of example A01 and/or some other example(s) herein, wherein the service producer of MDA is an MDAF. Example A05 includes the service producer of example A01 and/or some other example(s) herein, wherein the MDAS consumer is a SON function. Example A06 includes the service producer of examples A04 and A05 and/or some other example(s) herein, wherein the SON function takes actions based on the analytics report provided by the MDAF. Example A07 includes the service producer of examples A04 to A06 and/or some other example(s) herein, wherein the MDAF receives the execution report from the SON function. Example A08 includes the service producer of example A01 and/or some other example(s) herein, wherein the MDA is for coverage issue analysis. Example A09 includes the service producer of examples A01 and A08 and/or some other example(s) herein, wherein the analytics report describes an identified coverage issue. 
Example A10 includes the service producer of examples A01, A08 and A09 and/or some other example(s) herein, wherein the analytics report includes at least one of the following information describing the coverage issue: an identifier of the coverage issue described in the analytics report; an indication of whether the coverage issue is weak coverage or coverage hole; a start time of the coverage issue; a geographical area and location where the coverage issue exists; whether the coverage issue exists in 5G only or in all RATs; cells affected by the coverage issue; severity level (e.g., critical, medium) of the coverage issue; recommended actions to solve the coverage issue. Example A11 includes the service producer of examples A01, A08, A09 and A10 and/or some other example(s) herein, wherein the service producer is informed when the actions are taken by the MDAS consumer to solve the coverage issue described in the analytics report. Example A12 includes the service producer of examples A01, A08, A09 and A10 and/or some other example(s) herein, wherein the service producer gets the execution reports of the actions taken by the MDAS consumer to solve the coverage issue described in the analytics report. Example A13 includes the service producer of examples A01, A08, A09, A10 and A12 and/or some other example(s) herein, wherein the service producer provides the update(s) of analytics report to indicate if the coverage issue is solved, mitigated or deteriorated. 
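One possible in-memory representation of the coverage-issue information enumerated in Example A10 is sketched below; the field names and types are assumptions chosen for illustration, not part of any standardized report format.

```python
# Hypothetical representation of the Example A10 coverage-issue fields.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CoverageIssueReport:
    issue_id: str                 # identifier of the coverage issue
    issue_type: str               # "weak_coverage" or "coverage_hole"
    start_time: str               # start time of the coverage issue
    area: str                     # geographical area/location of the issue
    five_g_only: bool             # True if the issue exists in 5G only
    affected_cells: List[str]     # cells affected by the coverage issue
    severity: str                 # e.g., "critical" or "medium"
    recommended_actions: Optional[List[str]] = None  # actions to solve the issue

report = CoverageIssueReport(
    issue_id="COV-001",
    issue_type="coverage_hole",
    start_time="2021-01-01T00:00:00Z",
    area="cell cluster north",
    five_g_only=True,
    affected_cells=["cell-17", "cell-18"],
    severity="critical",
    recommended_actions=["adjust antenna tilt"],
)
```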
Example B01 includes a method to be performed by a Management Data Analytics service (MDAS) producer, the method comprising: receiving input data; preparing the received input data for analysis; generating an MDAS report based on analysis of the prepared data; sending the MDAS report to an MDAS consumer; receiving an execution report from the MDAS consumer wherein the execution report indicates actions executed by the MDAS consumer based at least in part on the MDAS report; and evaluating results of the actions executed by the MDAS consumer indicated by the execution report. Example B02 includes the method of example B01 and/or some other example(s) herein, wherein the MDAS is for network management and orchestration and/or service management and orchestration. Example B03 includes the method of examples B01 and B02 and/or some other example(s) herein, wherein the input data includes one or more of performance measurements, alarm information, configuration data, trace data, Minimization of Drive Tests (MDT) reports, Radio Link Failure (RLF) reports, RRC Connection Establishment Failure (RCEF) reports; data provided by Network Data Analytics Function (NWDAF); QoE data; geographical data; terrain data; execution reports provided by the MDAS consumer; evaluation results of the actions executed by the MDAS consumer, and/or Charging Data Record (CDR) data. Example B04 includes the method of example B03 and/or some other example(s) herein, wherein the performance measurements include one or more of RSRP measurements and/or statistics, RSRQ measurements and/or statistics, SINR measurements and/or statistics, RLF measurements of Synchronization Signal Block (SSB) beams, and/or handover failures, and the CDR data includes one or more of voice service results including voice call drops, SMS service results, Data Volume Uplink and Downlink, Report Time, Presence Reporting Area Status, and/or other like CDR data. 
Example B05 includes the method of example B01 and/or some other example(s) herein, wherein the MDAS producer is a Management Data Analytics Function (MDAF), and the MDAS consumer is a Self-Organizing Network (SON) function. Example B06 includes the method of examples B01-B05 and/or some other example(s) herein, wherein the SON function is a centralized SON function, distributed SON function, or hybrid SON function. Example B07 includes the method of examples B05-B06 and/or some other example(s) herein, wherein the SON function is an Automatic Neighbor Relation (ANR) function, network function (NF) self-establishment function, PCI configuration function, automatic radio configuration data function (ARCF), inter-cell interference coordination (ICIC) function, RACH optimization function, Centralized or Distributed Capacity and Coverage Optimization (CCO) function, a self-healing function, a trace and MDT function, mobility robustness optimization function, energy savings management (ESM) function, or cross-slice network resource optimization function. Example B08 includes the method of examples B05-B07 and/or some other example(s) herein, wherein the SON function takes actions based on the MDAS report provided by the MDAF. Example B09 includes the method of examples B05-B08 and/or some other example(s) herein, wherein the MDAF receives the execution report from the SON function, and the actions executed by the SON function include one or more SON function actions. Example B10 includes the method of examples B01-B09 and/or some other example(s) herein, wherein the MDAS is for coverage issue analysis. Example B11 includes the method of examples B01-B10 and/or some other example(s) herein, wherein the MDAS report describes an identified coverage issue. 
Example B12 includes the method of examples B01-B11 and/or some other example(s) herein, wherein the MDAS report includes one or more of the following information describing the coverage issue: an identifier of the coverage issue described in the MDAS report; an indication of whether the coverage issue is a weak coverage or a coverage hole; a start time of the coverage issue; a geographical area and/or location where the coverage issue exists or is detected; whether the coverage issue exists in 5G only or in all RATs; number and/or identity of cells affected by the coverage issue; a severity level of the coverage issue; and one or more recommended actions to solve the coverage issue. Example B13 includes the method of examples B01-B12 and/or some other example(s) herein, wherein the actions executed by the MDAS consumer indicated by the execution report include actions executed by the MDAS consumer to solve the coverage issue described in the MDAS report. Example B14 includes the method of example B13 and/or some other example(s) herein, wherein the execution report indicates success or failure of the actions executed by the MDAS consumer to solve the coverage issue described in the MDAS report. Example B15 includes the method of examples B01-B14 and/or some other example(s) herein, further comprising: in response to receipt of the execution report, generating or causing to generate an updated MDAS report to indicate whether the coverage issue is solved, mitigated, or deteriorated. Example Z01 may include an apparatus comprising means to perform one or more elements of a method described in or related to any of examples A01-A13, B01-B15, or any other method or process described herein. 
Example Z02 may include one or more non-transitory computer-readable media comprising instructions to cause an electronic device, upon execution of the instructions by one or more processors of the electronic device, to perform one or more elements of a method described in or related to any of examples A01-A13, B01-B15, or any other method or process described herein. Example Z03 may include an apparatus comprising logic, modules, or circuitry to perform one or more elements of a method described in or related to any of examples A01-A13, B01-B15, or any other method or process described herein. Example Z04 may include a method, technique, or process as described in or related to any of examples A01-A13, B01-B15, or portions or parts thereof. Example Z05 may include an apparatus comprising: one or more processors and one or more computer-readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform the method, techniques, or process as described in or related to any of examples A01-A13, B01-B15, or portions thereof. Example Z06 may include a signal as described in or related to any of examples A01-A13, B01-B15, or portions or parts thereof. Example Z07 may include a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples A01-A13, B01-B15, or portions or parts thereof, or otherwise described in the present disclosure. Example Z08 may include a signal encoded with data as described in or related to any of examples A01-A13, B01-B15, or portions or parts thereof, or otherwise described in the present disclosure. Example Z09 may include a signal encoded with a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples A01-A13, B01-B15, or portions or parts thereof, or otherwise described in the present disclosure. 
Example Z10 may include an electromagnetic signal carrying computer-readable instructions, wherein execution of the computer-readable instructions by one or more processors is to cause the one or more processors to perform the method, techniques, or process as described in or related to any of examples A01-A13, B01-B15, or portions thereof. Example Z11 may include a computer program comprising instructions, wherein execution of the program by a processing element is to cause the processing element to carry out the method, techniques, or process as described in or related to any of examples A01-A13, B01-B15, or portions thereof. Example Z12 may include a signal in a wireless network as shown and described herein. Example Z13 may include a method of communicating in a wireless network as shown and described herein. Example Z14 may include a system for providing wireless communication as shown and described herein. Example Z15 may include a device for providing wireless communication as shown and described herein. Any of the above-described examples may be combined with any other example (or combination of examples), unless explicitly stated otherwise. The foregoing description of one or more implementations provides illustration and description, but is not intended to be exhaustive or to limit the scope of embodiments to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of various embodiments. 8. Terminology The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a,” “an” and “the” are intended to include plural forms as well, unless the context clearly indicates otherwise. 
It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. For the purposes of the present disclosure, the phrase “A and/or B” means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase “A, B, and/or C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C). The description may use the phrases “in an embodiment,” or “In some embodiments,” which may each refer to one or more of the same or different embodiments. Furthermore, the terms “comprising,” “including,” “having,” and the like, as used with respect to embodiments of the present disclosure, are synonymous. The terms “coupled,” “communicatively coupled,” along with derivatives thereof are used herein. The term “coupled” may mean two or more elements are in direct physical or electrical contact with one another, may mean that two or more elements indirectly contact each other but still cooperate or interact with each other, and/or may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other. The term “directly coupled” may mean that two or more elements are in direct contact with one another. The term “communicatively coupled” may mean that two or more elements may be in contact with one another by a means of communication including through a wire or other interconnect connection, through a wireless communication channel or link, and/or the like. The term “circuitry” refers to a circuit or system of multiple circuits configured to perform a particular function in an electronic device. 
The circuit or system of circuits may be part of, or include one or more hardware components, such as a logic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group), an Application Specific Integrated Circuit (ASIC), a field-programmable gate array (FPGA), programmable logic device (PLD), complex PLD (CPLD), high-capacity PLD (HCPLD), System-on-Chip (SoC), System-in-Package (SiP), Multi-Chip Package (MCP), digital signal processor (DSP), etc., that are configured to provide the described functionality. In addition, the term “circuitry” may also refer to a combination of one or more hardware elements with the program code used to carry out the functionality of that program code. Some types of circuitry may execute one or more software or firmware programs to provide at least some of the described functionality. Such a combination of hardware elements and program code may be referred to as a particular type of circuitry. The term “processor circuitry” as used herein refers to, is part of, or includes circuitry capable of sequentially and automatically carrying out a sequence of arithmetic or logical operations, or recording, storing, and/or transferring digital data. The term “processor circuitry” may refer to one or more application processors, one or more baseband processors, a physical central processing unit (CPU), a single-core processor, a dual-core processor, a triple-core processor, a quad-core processor, and/or any other device capable of executing or otherwise operating computer-executable instructions, such as program code, software modules, and/or functional processes. 
The terms “application circuitry” and/or “baseband circuitry” may be considered synonymous to, and may be referred to as, “processor circuitry.” The term “memory” and/or “memory circuitry” as used herein refers to one or more hardware devices for storing data, including random access memory (RAM), magnetoresistive RAM (MRAM), phase change random access memory (PRAM), dynamic random access memory (DRAM) and/or synchronous dynamic random access memory (SDRAM), core memory, read only memory (ROM), magnetic disk storage mediums, optical storage mediums, flash memory devices or other machine readable mediums for storing data. The term “computer-readable medium” may include, but is not limited to, memory, portable or fixed storage devices, optical storage devices, and various other mediums capable of storing, containing or carrying instructions or data. The term “interface circuitry” as used herein refers to, is part of, or includes circuitry that enables the exchange of information between two or more components or devices. The term “interface circuitry” may refer to one or more hardware interfaces, for example, buses, I/O interfaces, peripheral component interfaces, network interface cards, and/or the like. The term “user equipment” or “UE” as used herein refers to a device with radio communication capabilities and may describe a remote user of network resources in a communications network. The term “user equipment” or “UE” may be considered synonymous to, and may be referred to as, client, mobile, mobile device, mobile terminal, user terminal, mobile unit, mobile station, mobile user, subscriber, user, remote station, access agent, user agent, receiver, radio equipment, reconfigurable radio equipment, reconfigurable mobile device, etc. Furthermore, the term “user equipment” or “UE” may include any type of wireless/wired device or any computing device including a wireless communications interface. 
The term “network element” as used herein refers to physical or virtualized equipment and/or infrastructure used to provide wired or wireless communication network services. The term “network element” may be considered synonymous to and/or referred to as a networked computer, networking hardware, network equipment, network node, router, switch, hub, bridge, radio network controller, RAN device, RAN node, gateway, server, virtualized VNF, NFVI, etc. The term “computer system” as used herein refers to any type of interconnected electronic devices, computer devices, or components thereof. Additionally, the term “computer system” and/or “system” may refer to various components of a computer that are communicatively coupled with one another. Furthermore, the term “computer system” and/or “system” may refer to multiple computer devices and/or multiple computing systems that are communicatively coupled with one another and configured to share computing and/or networking resources. The term “architecture” as used herein refers to a computer architecture or a network architecture. A “network architecture” is a physical and logical design or arrangement of software and/or hardware elements in a network including communication protocols, interfaces, and media transmission. A “computer architecture” is a physical and logical design or arrangement of software and/or hardware elements in a computing system or platform including technology standards for interactions therebetween. The term “appliance,” “computer appliance,” or the like, as used herein refers to a computer device or computer system with program code (e.g., software or firmware) that is specifically designed to provide a specific computing resource. A “virtual appliance” is a virtual machine image to be implemented by a hypervisor-equipped device that virtualizes or emulates a computer appliance or otherwise is dedicated to provide a specific computing resource. 
The term “element” refers to a unit that is indivisible at a given level of abstraction and has a clearly defined boundary, wherein an element may be any type of entity including, for example, one or more devices, systems, controllers, network elements, modules, etc., or combinations thereof. The term “device” refers to a physical entity embedded inside, or attached to, another physical entity in its vicinity, with capabilities to convey digital information from or to that physical entity. The term “entity” refers to a distinct component of an architecture or device, or information transferred as a payload. The term “controller” refers to an element or entity that has the capability to affect a physical entity, such as by changing its state or causing the physical entity to move. The term “SMTC” refers to an SSB-based measurement timing configuration configured by SSB-MeasurementTimingConfiguration. The term “SSB” refers to an SS/PBCH block. The term “Primary Cell” refers to the MCG cell, operating on the primary frequency, in which the UE either performs the initial connection establishment procedure or initiates the connection re-establishment procedure. The term “Primary SCG Cell” refers to the SCG cell in which the UE performs random access when performing the Reconfiguration with Sync procedure for DC operation. The term “Secondary Cell” refers to a cell providing additional radio resources on top of a Special Cell for a UE configured with CA. The term “Secondary Cell Group” refers to the subset of serving cells comprising the PSCell and zero or more secondary cells for a UE configured with DC. The term “Serving Cell” refers to the primary cell for a UE in RRC_CONNECTED not configured with CA/DC; there is only one serving cell, comprising the primary cell. The term “serving cell” or “serving cells” refers to the set of cells comprising the Special Cell(s) and all secondary cells for a UE in RRC_CONNECTED configured with carrier aggregation (CA). 
The term “Special Cell” refers to the PCell of the MCG or the PSCell of the SCG for DC operation; otherwise, the term “Special Cell” refers to the PCell. The term “channel” as used herein refers to any transmission medium, either tangible or intangible, which is used to communicate data or a data stream. The term “channel” may be synonymous with and/or equivalent to “communications channel,” “data communications channel,” “transmission channel,” “data transmission channel,” “access channel,” “data access channel,” “link,” “data link,” “carrier,” “radiofrequency carrier,” and/or any other like term denoting a pathway or medium through which data is communicated. Additionally, the term “link” as used herein refers to a connection between two devices through a RAT for the purpose of transmitting and receiving information. As used herein, the term “radio technology” refers to technology for wireless transmission and/or reception of electromagnetic radiation for information transfer. The term “radio access technology” or “RAT” refers to the technology used for the underlying physical connection to a radio based communication network. As used herein, the term “communication protocol” (either wired or wireless) refers to a set of standardized rules or instructions implemented by a communication device and/or system to communicate with other devices and/or systems, including instructions for packetizing/depacketizing data, modulating/demodulating signals, implementation of protocol stacks, and/or the like. 
Examples of wireless communications protocols that may be used in various embodiments include a Global System for Mobile Communications (GSM) radio communication technology, a General Packet Radio Service (GPRS) radio communication technology, an Enhanced Data Rates for GSM Evolution (EDGE) radio communication technology, and/or a Third Generation Partnership Project (3GPP) radio communication technology including, for example, 3GPP Fifth Generation (5G) or New Radio (NR), Universal Mobile Telecommunications System (UMTS), Freedom of Multimedia Access (FOMA), Long Term Evolution (LTE), LTE-Advanced (LTE Advanced), LTE Extra, LTE-A Pro, cdmaOne (2G), Code Division Multiple Access 2000 (CDMA 2000), Cellular Digital Packet Data (CDPD), Mobitex, Circuit Switched Data (CSD), High-Speed CSD (HSCSD), Universal Mobile Telecommunications System (UMTS), Wideband Code Division Multiple Access (W-CDMA), High Speed Packet Access (HSPA), HSPA Plus (HSPA+), Time Division-Code Division Multiple Access (TD-CDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), LTE LAA, MuLTEfire, UMTS Terrestrial Radio Access (UTRA), Evolved UTRA (E-UTRA), Evolution-Data Optimized or Evolution-Data Only (EV-DO), Advanced Mobile Phone System (AMPS), Digital AMPS (D-AMPS), Total Access Communication System/Extended Total Access Communication System (TACS/ETACS), Push-to-talk (PTT), Mobile Telephone System (MTS), Improved Mobile Telephone System (IMTS), Advanced Mobile Telephone System (AMTS), Cellular Digital Packet Data (CDPD), DataTAC, Integrated Digital Enhanced Network (iDEN), Personal Digital Cellular (PDC), Personal Handy-phone System (PHS), Wideband Integrated Digital Enhanced Network (WiDEN), iBurst, Unlicensed Mobile Access (UMA, also referred to as 3GPP Generic Access Network, or GAN standard), Bluetooth®, Bluetooth Low Energy (BLE), IEEE 802.15.4 based protocols (e.g., IPv6 over Low power Wireless Personal Area Networks (6LoWPAN), WirelessHART, MiWi, 
Thread, 802.11a, etc.), WiFi-direct, ANT/ANT+, ZigBee, Z-Wave, 3GPP device-to-device (D2D) or Proximity Services (ProSe), Universal Plug and Play (UPnP), Low-Power Wide-Area-Network (LPWAN), Long Range Wide Area Network (LoRA) or LoRaWAN™ developed by Semtech and the LoRa Alliance, Sigfox, Wireless Gigabit Alliance (WiGig) standard, Worldwide Interoperability for Microwave Access (WiMAX), mmWave standards in general (e.g., wireless systems operating at 10-300 GHz and above such as WiGig, IEEE 802.11ad, IEEE 802.11ay, etc.), V2X communication technologies (including 3GPP C-V2X), Dedicated Short Range Communications (DSRC) communication systems such as Intelligent-Transport-Systems (ITS) including the European ITS-G5, ITS-G5B, ITS-G5C, etc. In addition to the standards listed above, any number of satellite uplink technologies may be used for purposes of the present disclosure including, for example, radios compliant with standards issued by the International Telecommunication Union (ITU), or the European Telecommunications Standards Institute (ETSI), among others. The examples provided herein are thus understood as being applicable to various other communication technologies, both existing and not yet formulated. The term “access network” refers to any network, using any combination of radio technologies, RATs, and/or communication protocols, used to connect user devices and service providers. In the context of WLANs, an “access network” is an IEEE 802 local area network (LAN) or metropolitan area network (MAN) between terminals and access routers connecting to provider services. The term “access router” refers to a router that terminates a medium access control (MAC) service from terminals and forwards user traffic to information servers according to Internet Protocol (IP) addresses. The terms “instantiate,” “instantiation,” and the like as used herein refer to the creation of an instance. 
An “instance” also refers to a concrete occurrence of an object, which may occur, for example, during execution of program code. The term “information element” refers to a structural element containing one or more fields. The term “field” refers to individual contents of an information element, or a data element that contains content. As used herein, a “database object”, “data structure”, or the like may refer to any representation of information that is in the form of an object, attribute-value pair (AVP), key-value pair (KVP), tuple, etc., and may include variables, data structures, functions, methods, classes, database records, database fields, database entities, associations between data and/or database entities (also referred to as a “relation”), blocks and links between blocks in block chain implementations, and/or the like. A “network slice” refers to a logical network that provides specific network capabilities and network characteristics, supporting various service properties for network slice customers. A “Network Slice instance” or “NSI” represents the service view of a network slice, which exposes the root NetworkSliceSubnet instance. A “network slice subnet” is a representation of a set of network functions and the associated resources (e.g., compute, storage and networking resources) supporting the network slice. A “Network Slice Subnet instance” or “NSSI” refers to a Managed Object Instance (MOI) of the Network Slice Subnet Information Object Class (IOC). A “Service Level Specification” or “SLS” refers to a set of service level requirements associated with a Service Level Agreement (SLA) to be satisfied by a network slice. The foregoing description provides illustration and description of various example embodiments, but is not intended to be exhaustive or to limit the scope of embodiments to the precise forms disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of various embodiments. 
Where specific details are set forth in order to describe example embodiments of the disclosure, it should be apparent to one skilled in the art that the disclosure can be practiced without, or with variation of, these specific details. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.
DETAILED DESCRIPTION The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task. As used herein, the term ‘processor’ refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions. A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured. 
Described herein are embodiments of a timeline framework for time-state analytics that arise across several real-world domains including operational management of large-scale systems and digital infrastructures. Embodiments of the techniques described herein may be used to solve a class of problems referred to herein as time-state analytics. Time-state analytics appear in various applications and contexts, such as fitness tracking, healthcare data, mobile app data, video streaming data, infrastructure monitoring, and many other operational analytics and monitoring use cases. For example, time-state analytics are applicable to data that comes in from real-world tracking measurements of particular entities, such as a user, device, video playback session, etc. In an example use case such as fitness tracking, various types of status events are collected from data measured with respect to a user, such as when the subject woke up, when they went for a run, various sensor measurements such as heart rate, etc., over time. For entities being measured, whether devices, users, sessions, etc., suppose that various attributes and values are being measured over time. It would be beneficial to be able to perform analytics on such data collected over time. Examples of such types of analytics include summaries, such as the amount of time spent resting, average stress level, etc. It would be further beneficial to be able to perform deeper analytics to understand the behavior of an entity, such as the user, in a particular status or particular context. This includes determining insights based on both time and state context. In a health and/or fitness example, this includes determining how long a user was in deep sleep, how long the user was in a high stress state, how long the user was in an aerobic range (for heartbeat) when running, whether the average VO2 was increasing when running, etc. 
This includes determining measures that are conditioned on the entity also being in a particular context or set of states. Described herein are techniques for performing time-state analytics, which includes determining time-state metrics. In various embodiments, the time-state metrics are measured in the context of time and state that an entity was in, beyond coarse aggregates (e.g., counts or averages). The time-state metrics include measures of interest of a particular data stream calculated in a particular stateful context. This includes tracking the state or the behavior of a system over time, including measuring statistics and durations of that particular entity over time. For example, determining time-state metrics includes determining, from streaming event data, statistics or measures for entities that are calculated while an entity is in a particular context or state. Other complex examples of time-state metrics and analytics include determining an amount of time or duration that an entity spent in a particular status, the amount of time spent in a certain status while another event was occurring (e.g., how long (duration) a subject was in an aerobic heart range (state) while running (another state)), etc. In such a complex stateful metric example, various states of a user are being tracked, such as whether they are running, the state of the heart rate (e.g., whether it is in an aerobic range), etc. In some embodiments, determining a time-state metric includes determining measurements or statistical summaries of measurements when in a particular status, such as a count of the number of times events of a certain type occurred when in a particular state. In some embodiments, the time-state metrics include behavioral measures (e.g., counts, statistics, durations) or metrics for entities calculated in a particular context (e.g., status/states, time periods). 
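A duration-in-state metric of the kind described above can be illustrated with a minimal sketch, under the assumption that an entity's state changes arrive as sorted (timestamp, state) change points; the function name and data here are illustrative only, not the described system's API.

```python
# Minimal sketch (not the patented implementation): how long an entity
# spent in a given state, from sorted (timestamp, new_state) change events.
def duration_in_state(events, state, session_end):
    """Sum the lengths of the spans during which the entity was in `state`."""
    total = 0
    # Pair each change point with the next timestamp (or session end).
    for (t, s), (nxt, _) in zip(events, events[1:] + [(session_end, None)]):
        if s == state:
            total += nxt - t
    return total

# Example: the user is "running" from t=10 to t=40 and from t=60 to t=70.
events = [(0, "resting"), (10, "running"), (40, "resting"), (60, "running")]
print(duration_in_state(events, "running", session_end=70))  # 40
```

The same call with `"resting"` yields the complementary 30 time units, illustrating that the stateful context (which state, over which spans) is what distinguishes this from a plain count or average.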
Time-state metrics are beneficial to understand behaviors in a variety of applications, examples of which are described below. Time-State Metrics for Food Delivery The following is an example of determining time-state metrics in the context of a food delivery service. Suppose a user orders food via the food delivery service via an application such as a mobile app. A stream of event data is generated by a platform associated with the food delivery service, where the event stream includes various event information such as when the order was placed, when the order was received by the restaurant, when the order was picked up for delivery, when the chef started making particular items in the order, whether a modification to the order occurred during making of an item, when the order was delivered, etc. For example, the event stream includes food delivery data that is collected and associated with corresponding date/timestamps. The following is an example data model for the food delivery example described above. There are various different entities whose behaviors can be tracked, such as the order, each item in the order, the user, the restaurant, etc. For each entity over time, various types of attributes (which may have various types of values) are determined. In various embodiments, attributes that are tracked over time include statuses, events, measurements, etc. In this food delivery example, different entities may have different types of statuses, events, and measurements. For example, for an order or item entity, example statuses include ready, delivered, prepared, etc. Examples of events pertaining to an order or item entity include whether a modification (a type of event) occurred with respect to the order/item entity. An example of a measurement pertaining to the restaurant entity includes the busyness level of the restaurant. In some embodiments, attribute values (e.g., measurements, status, events, etc.) 
pertaining to various entities are extracted from event streams. By evaluating an event stream and tracking attributes for entities over time using such a data model as described above, various time-state metrics related to food delivery may be determined. The following are examples of such time-state metrics for food delivery. As described above, the time-state metrics may be used to understand the behavior of an entity (e.g., user or order) in a contextual manner over time. This allows patterns of behavior for entities to be efficiently determined. One example type of time-state metric behavior pattern is how much time an entity spent in a certain status. In a food delivery context, this includes, for example, determining how long an order (entity) that is ready (example of status of order) was waiting to be picked up by a delivery driver (where the waiting is another example status). Another example type of time-state metric behavior pattern is how much time did an entity spend in a particular status when a certain type of event also occurred. In a food delivery context, this includes, for example, determining the amount of delay (duration measure) that was introduced due to a modification (event) to an order (entity). Determining such behavior would allow the food delivery service to understand why delays occurred, and allow them to debug such issues to avoid them going forward. Another example type of time-state metric behavior pattern is how many occurrences of events of a certain type happened in a given status of an entity. In a food delivery context, this includes, for example, determining what is the number (measure) of modifications (event type) requested before/after the restaurant (entity) started preparing food (status). Time-State Metrics for Fitness Tracking The following is an example of determining time-state metrics in a fitness tracking context described above. One example is a fitness tracker that collects fitness data for a user. 
As one example, the fitness tracker generates an event stream of data indicating various information, such as stress level, activity status, sleep status, etc., output over time. For example, the event stream includes fitness data that is collected and associated with corresponding date/timestamps. The following is an example data model for structuring event stream data to generate time-state metrics for fitness tracking. In this example, time-state metrics are modeled for an entity, which is the user in this example. Values for attributes of various types are extracted from the event stream from the tracker over time. Examples of such types of attributes include statuses, events, and measurements. Examples of statuses for the user (entity) include whether they are asleep, stressed, etc. Examples of events include waking up, starting a run, etc. Examples of measurements include heart rate, VO2, etc. As described above, the various time-state metrics corresponding to different behavior patterns for entities may be tracked using the data modeling and time-state metric computation techniques described herein. One example time-state metric pattern is how much time did the entity spend in a certain status. In the context of a fitness tracker, this includes determining how long a user (entity) was in deep sleep (status) per day. Another example time-state metric pattern is how much time did the entity spend in a certain status when an event or status of a certain type also happened. In the context of a fitness tracker, this includes determining how long the user (entity) was in an “aerobic” heart rate range (status) when running (event type). Another example time-state metric pattern is an (aggregate) measure (e.g., average, peak, minimum, etc.) of a particular attribute when the entity was in a certain status. 
In the context of a fitness tracker, this includes determining what is the peak heart rate (attribute measure) when the user (entity) was running (also an example type of status). Another example time-state metric pattern is how many of a certain type of event occurred when the entity was in a certain status. In the context of a fitness tracker, this includes determining how many stress level transitions (where the transitions are examples of a type of event) occurred when the user (entity) was resting (status). Time-State Metrics for Video Streaming The following is an example of determining time-state metrics in the context of video streaming. Suppose a client device (e.g., user's smartphone, set top box, gaming device, laptop, etc.) collects video streaming data for video sessions played on the client device. As one example, a video streaming application on a mobile device generates an event stream of data indicating various information, such as the occurrence of events, status changes, quality metrics, etc. in heartbeat messages, where the collected video streaming data is associated with corresponding date/timestamps. The following is an example timeline data model for structuring event stream data to generate time-state metrics for video streaming. In this example, time-state metrics are modeled for an entity, such as a video session. Values for attributes of various types are extracted from the event stream from the tracker over time. Examples of such types of attributes include statuses, events, and measurements over time. Examples of statuses or states for the video streaming session (entity) include the player state (playing, buffering, seeking), the current network state of the device (e.g., WiFi vs. cellular), the current delivery service being used (e.g., content distribution network A vs. B), etc. 
Examples of events include user actions (e.g., play, pause, seek), player actions (e.g., bitrate level changes), network changes (e.g., switching to cellular), and service provider actions (e.g., switching the content delivery server). Examples of measurements include the current bitrate level, the current state of the player, the current frames per second, network errors if any, etc. In some embodiments, the different data types that attributes may take are encoded in different types of timeline representations. In some embodiments, timeline objects (generated from observed data values) may be of different timeline types. As described above, various time-state metrics corresponding to different behavior patterns for entities may be tracked using the data modeling and time-state metric computation techniques described herein. As described above, one example time-state metric pattern is how much time did the entity spend in a certain status when an event or status of a certain type also happened. In the context of video streaming, this includes determining how much time the streaming session/player (entity) was buffering (status) when using a particular type of network or app (event types). As shown in the examples above, time-state problems are pervasive in a variety of contexts in which data is collected, example domains of which include fitness, food, video, apps, finance, etc. Embodiments of the time-state metric generation and computation techniques described herein facilitate the determination of time-state metrics in various types of contexts, allowing for the construction of metrics that are measured in a context that is in terms of time and state, beyond coarse aggregates or counts or averages. Embodiments of the time-state metric techniques described herein facilitate actionable insights contextualized in time and status. 
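Two further patterns from the examples above, an aggregate of a measurement while in a status (peak heart rate while running) and a count of events while in a status (stress transitions while resting), can be sketched as follows. All names and data shapes here are illustrative assumptions, not the described platform's interfaces.

```python
# Hedged sketch of "aggregate-in-status" and "count-in-status" patterns.
def status_at(status_changes, t):
    """status_changes: sorted (timestamp, status) change points."""
    current = None
    for ts, s in status_changes:
        if ts <= t:
            current = s
    return current

def peak_in_status(samples, status_changes, status):
    """Peak of (timestamp, value) samples taken while in `status`."""
    values = [v for t, v in samples if status_at(status_changes, t) == status]
    return max(values) if values else None

def count_in_status(events, status_changes, status):
    """Number of (timestamp, event) occurrences while in `status`."""
    return sum(1 for t, _ in events if status_at(status_changes, t) == status)

activity = [(0, "resting"), (8, "running"), (35, "resting")]
heart_rate = [(0, 62), (10, 131), (20, 158), (40, 90)]
stress_shifts = [(2, "up"), (15, "down"), (40, "up")]
print(peak_in_status(heart_rate, activity, "running"))      # 158
print(count_in_status(stress_shifts, activity, "resting"))  # 2
```

Both metrics are conditioned on the same status timeline, which is the essence of the stateful context described above.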
As will be described in further detail below, the time-state metrics capture behavior measures of interest for entities that are calculated in a specific time and state context. Example patterns of behavior that are determined using the time-state metrics described herein include:
How much time did the entity spend in Status X?
How much time did the entity spend in Status X when Y also happened?
What is the average/peak/min of measurement V when entity was in Status X?
How many of Type-X event occurrences happened when status of Y was Z?
As shown in the examples described above, time-state metrics such as those described herein are beneficial in a variety of contexts, including fitness, food delivery, video streaming, e-commerce, fintech, apps, automotive domains, etc. The time-state metrics described above, which include behavioral measures of interest calculated in a time/state context, are beneficial for determining actionable insights in a variety of applications. Embodiments of the time-state metric techniques described herein provide improvements over existing basic summary statistics, which may be too coarse-grained to be actionable. Further, the techniques described herein facilitate the determination of time-state metrics in a manner that is more efficient than existing techniques to determine similar types of actionable insights. For example, existing data processing techniques are based on a tabular or relational model of computation. Suppose a database stores collected data, which is placed in a table. The collected data stored in the database is queried using a structured language such as SQL (Structured Query Language). Such a relational model is beneficial for manipulating and querying tabular data such as individual records in rows, where queries are made to, for example, aggregate properties of a population of interest across a column or multiple columns (e.g., by selecting, grouping-by, filtering, etc.). 
However, it is challenging to use such a relational or tabular model of computation, which does not inherently have a notion of time or state, to track event streams over time. For example, implementing time-state metrics such as those described above using an existing relational or tabular model would result in the use of complicated and expensive queries that are prone to errors. The following is an example of attempting to implement a time-state metric using queries of existing relational or tabular databases. An example involving video streaming is described herein for illustrative purposes. FIG.1Aillustrates an embodiment of raw measurements from a video session. For example, the raw measurements are events received from a client content player. A visual representation of the events over time is shown in the example ofFIG.1A. Shown in this example are various events such as player state changes, bitrate changes, seek events, and CDN (content delivery network) switches that occur over time. FIG.1Billustrates an embodiment of a tabular representation of events. In this example, a tabular representation of the raw events described in conjunction withFIG.1Ais shown. In this example, each row corresponds to a timestamp. The events associated with a given timestamp are populated with corresponding values from the raw event stream. In this example, raw measurements for the attributes such as player state, bitrate, CDN, and seek are shown. In this example, the collected raw events have been stored in a table. The table in this example includes various columns, such as a timestamp column. The columns also include columns for attributes present in the raw event data, such as a player state column, a bitrate column, a CDN column, and a seek column. For each event type, a column is created, where values for the various types of events are stored in the appropriate locations within the table (e.g., appropriate column at cell corresponding to associated timestamp). 
Suppose that in this video streaming scenario, the following metric is desired to be determined: How much time did a session spend in a connection-induced rebuffering state while using CDN C1? Such a metric is an example of a time-state metric, where a measure of a duration of time in a specific context is to be evaluated. For the measure of duration, the temporal and state context includes four sub-components, such as that the player is in a buffering state, the player had already started playing, the user has not recently seeked, and CDN C1 is being used. That is, in order to determine this time-state metric, a count of duration (example of a measure) is determined while in the context of the following four conditions:
1. Currently buffering &&
2. Play has already initialized &&
3. User has not seeked in last 5 seconds &&
4. Using CDN C1
As shown in this example, determination of the time-state metric is dependent on multiple conditions, where determination of when the conditions are met is based on the tracking of the occurrence of multiple types of events that may be occurring at different times, and are separated over time. Given the tabular representation of the raw event data shown in the example ofFIG.1B, in order to generate the connection-induced rebuffering measure given the desired context/state, a complex combination of queries would need to be implemented to ensure that the metric is computed only in the appropriate context, which includes numerous conditions, each of which may be complex to compute in a relational or tabular model. For example, consider the first state component: that the player was currently buffering.FIG.1Cillustrates an embodiment of data to be determined. In this example, to determine the duration of buffering, a query would have to be written to determine the time when the player was in the buffering state (e.g., condition 1 above). 
As shown in this example, this would involve scanning the values of the player state column, and determining the amount of time that the player was in the buffering state, which in this example includes determining the sum of t4−t1, t6−t5, t9−t8, and t14−t12, all of which are non-contiguous intervals. As shown in this example, the value for the player attribute is only provided when the player changes state, where the player attribute value is only populated for specific timestamps. Thus, determining the amount of time that the player was within a specific state requires a number of queries to be written (e.g., to find the timestamp when the player==buffer, to find the next point in time when a change in the player value occurred, and then finding the difference between the two timestamps, repeating such processing, and aggregating such time periods in which the player was in the buffer state). In this case, it would be beneficial to model time in a table, as well as model differences in time when a particular state is occurring. However, existing relational or tabular techniques provide insufficient mechanisms to do so, resulting in complex, time-consuming, and error-prone query code. In addition to generating a complex query to determine the above player state, given the desired constraints/state, the buffering state before the first play would also have to be ignored (to satisfy condition 2 of the context, which is that the player has already initialized), which would in turn involve writing a complex query to track whether play has started, and ignoring buffering (as every player will buffer at start). That is, further code would need to be written to discount buffering before play started. Doing so in a language such as SQL is challenging, as there is not a mechanism by which to track the play state, and remove the buffering before the play. That is, in a tabular or SQL-like model or language, it is difficult to express such intents. 
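The multi-step logic that the tabular model forces here can be sketched as follows, on assumed example rows: pair each player-state change with the next timestamp to recover intervals (the LEAD-style step a window function would perform), then discount buffering that precedes the first play, mirroring conditions 1 and 2 above.

```python
# Sketch only; the rows are hypothetical, not the figures' actual data.
rows = [(0, "buffer"), (4, "play"), (9, "buffer"), (12, "play"), (20, "stop")]

# LEAD-style pairing: each change point with the next change's timestamp.
intervals = [(t, nxt, state) for (t, state), (nxt, _) in zip(rows, rows[1:])]

# Discount startup buffering that occurred before the first play.
first_play = min(t for t, s in rows if s == "play")
rebuffer = sum(end - start for start, end, s in intervals
               if s == "buffer" and start >= first_play)
print(rebuffer)  # 3
```

Even this simplified two-condition version needs pairing, filtering, and an aggregation step; the full four-condition metric multiplies that effort, which is the motivation for the timeline model below.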
FIG.1Dillustrates an embodiment of determining a time-state condition. In this example, determining when CDN C1 was being used (e.g., condition 4, above) is shown. As shown in this example, it is desired to be able to track when C1 was in use. While the tabular representation can be queried to determine in which rows the CDN column value was equal to C1, it is more challenging to determine the collection of rows (timestamps) or interval in which the CDN was C1 until switching occurred. As shown in the above example, determining time-state metrics using tabular representations of collected event data is challenging, as, for example, determining intervals or periods in which a system is in a certain state can be difficult. Further, it can be complex to determine logic when multiple conditions are present in the desired time-state metric and are to be combined. FIG.1Eillustrates an example of an SQL query. In the example ofFIG.1E, a complex SQL query to implement the time-state metric above (with four conditions) is shown. The complex queries result in inefficient computation, in particular when scaled to numerous video-playing sessions and many time-state metrics. That is, there is significant compute effort to implement time-state metrics when the data is maintained in a tabular or relational model. Further, significant effort is needed to develop and debug such code, in which there may be difficult-to-discern semantic errors. The following are embodiments of techniques that facilitate efficient configuration and computation, at scale, of time-state metrics, which are usable to determine state and behavior of a system in context. The techniques described herein provide various benefits over existing data processing tools that are based on tabular frameworks. 
Using embodiments of the time-state framework or model described herein, the evolution of attributes over time is modeled in what is referred to herein as a timeline representation, facilitating efficient configuration and computation of time-state metrics. Modeling the evolution of attributes includes tracking values of attributes over time, rather than, for example, at specific points in time. Further, the techniques described herein include computational operations that operate on the timeline representations described herein, where such timeline operations are used to determine how attributes change in relation to other variables, events, and entities. This includes determining, for example, the evolution of a column, and then also understanding the evolution of that column in the context of other columns, which are also evolving over time. The techniques described herein further allow understanding of group and aggregate behaviors over an entire population, as well as over windows of time. This is an improvement over existing tabular or time series models, which do not track such stateful evolution over time, or require high computational cost and effort in order to implement such a class of time-state analytics. Compared to existing relational-based systems, the techniques described herein are more flexible and efficient. In addition to being more computationally efficient as compared to existing data processing techniques, the techniques described herein facilitate configuration of time-state metrics without requiring coding or writing SQL or other types of code. The following are embodiments of modeling and computing time-state analytics. Embodiments of the time-state analytics techniques described herein include a number of components, including system architecture and integration of time-state metrics, further details of which will be described below. 
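The timeline operations described above, combining the evolution of one attribute with the context of others, can be illustrated with a hedged sketch in which a boolean timeline is represented as a sorted list of (start, end) true-intervals. The operator names and representation are assumptions for illustration, not the described system's API.

```python
# Illustrative timeline operators: AND (interval intersection) and duration.
def tl_and(a, b):
    """Intersect two boolean timelines given as sorted (start, end) intervals."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        start = max(a[i][0], b[j][0])
        end = min(a[i][1], b[j][1])
        if start < end:
            out.append((start, end))
        # Advance whichever interval ends first.
        if a[i][1] <= b[j][1]:
            i += 1
        else:
            j += 1
    return out

def tl_duration(tl):
    """Total time during which a boolean timeline is true."""
    return sum(end - start for start, end in tl)

buffering = [(2, 6), (10, 14)]   # player in buffering state
on_cdn_c1 = [(0, 12)]            # session using CDN C1
print(tl_duration(tl_and(buffering, on_cdn_c1)))  # 6
```

With this representation, the four-condition rebuffering metric discussed earlier reduces to chaining `tl_and` across the condition timelines and applying `tl_duration` once, rather than composing nested SQL queries.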
Time-state analytics are a class of big data computation problems for actionable insights that require stateful context-sensitive processing over event streams. As shown in the examples above, time-state analytics are important for a variety of applications. For example, in video streaming, many quality of experience (QoE) metrics such as connection induced buffering, exit before start, average bitrate, etc. are stateful and context-sensitive, falling under the time-state analytics class of computational problems. Described herein are embodiments of a specialized data/compute model for supporting time-state analytics. Embodiments of the data/compute model are also referred to herein as a timeline model implementation. When determining time-state analytics, the timeline processing techniques described herein provide improvements over existing data processing systems, which are based on legacy tabular, relational, or SQL computation models. The timeline processing techniques described herein support fine-grained metrics in real time, at scale. As described above, example benefits provided by the timeline processing techniques described herein include providing actionable fine-grained metrics at reduced cost, with reduced development time, and increased visibility and clarity. While embodiments of timeline processing and time-state analytics are described below in the context of video streaming, the techniques described herein may be variously adapted to accommodate any other type of time-state metrics as appropriate. System Architecture FIG.2illustrates an embodiment of a system for performing time-state analytics. Embodiments of queries to the timeline processor described herein include two components—types of metrics of interest, and types of dimensional aggregations to be performed. For example, a query may include an intent to understand aggregate properties of a stateful metric in the context of some metadata. 
Embodiments of the time-state analytics platform described herein support stateful metrics, as well as dimensional summarizations. In this example, time-state analytics platform200includes a compositional system architecture. In this example, sensor feed ingestion engine202is configured to ingest feeds or streams of sensor data from client devices. In the example of video streaming, content players on various devices (e.g., laptops, mobile phones, tablets, desktops, set-top boxes, game consoles, etc.) are configured to stream sensor data (collected by the content player for a video streaming session) to platform200. For example, various event data or values measured by the content player are transmitted in messages sent to platform200over a network such as the Internet. Stateful session metrics determination engine204is configured to determine stateful session metrics (e.g., time-state metrics) on the feed of sensor data ingested by ingestion engine202. In some embodiments, stateful session metrics determination engine204is configured to convert the stream of raw session data, which may include measures and events collected for multiple types of attributes, into timeline representations of attributes. In some embodiments, the timeline representations for attributes are updated as new values for the attribute are ingested. Further details regarding conversion of a stream of ingested values of an attribute into a timeline representation of the attribute are described below. In some embodiments, stateful session metrics determination engine204is further configured to determine time-state metrics by applying a set of timeline operators on the timeline representations. This includes logically combining timeline representations of multiple attributes in order to determine the context in which a time-state metric is computed. 
Further details regarding timeline operators and combining of timeline representations to compute a time-state metric are described below. In some embodiments, the time-state metrics that are computed (e.g., for a video streaming session) by stateful session metrics determination engine204are stored to data storage layer206. In some embodiments, a time-state metric is computed for events that are included within a certain scope. One example of such a scope is a streaming session (e.g., in the context of video streaming). In some embodiments, time-state metrics are determined on an individual session-level basis. In some embodiments, analytics on cohorts of sessions may be of interest. For example, an individual session is associated with a set of metadata dimensions, such as the ISP (Internet Service Provider, such as Comcast, AT&T, etc.) of the session, the operating system of the session (e.g., iOS, Android, etc.), device type, etc. In some embodiments, multidimensional analytics engine208is configured to perform aggregations or rollups on groupings of metrics that share a set of dimensions. In some embodiments, time-state metrics and aggregations of such time-state metrics are provided as output of platform200via frontend interfaces210. In this example system200, a system decomposition is shown in which stateful session metrics are computed when sensor feeds come in, and where multidimensional analytics are performed in a backend. In this example, there is a form of decoupling of the two tasks of determining stateful session metrics and determining multidimensional analytics. In this example, stateful session metrics are precomputed, with multidimensional analytics performed on the backend. In some embodiments, the time-state metrics are computed in real-time, as a stream of data is ingested. 
In other embodiments, the timeline representation conversion and manipulation to determine time-state metrics is performed as a batch process (e.g., as a backend process, not only during streaming). Timeline Representation Transformation and Time-State Metric Computation The following are embodiments of computing stateful session metrics. In some embodiments, the computing of stateful session metrics is based on embodiments of the timeline data/compute model for time-state analytics described herein. Using the timeline data structure and computation models described herein, data is processed as a timeline, allowing for modeling of attributes with values that vary over time. As will be shown below, the use of such a timeline data structure and computation model as described herein facilitates the intuitive configuration of queries and metrics, reduces development effort, and allows for various optimizations to reduce resource usage. In some embodiments, time-state metrics are computed based on timeline representations of attributes. The timeline representations of attributes represent the change in state of values of attributes over time. The timeline representation of an attribute is generated by transforming the raw event data collected for an attribute (which is indicated at points in time, or timestamps) into a state representation that models the change in the values of the attribute over spans of time. In some embodiments, time-state metrics are computed on a session level. For example, raw events for a session are collected. The raw events collected for a session are transformed, using operators, into timeline representations of the changes in values for attributes during that session. A session-level time-state metric is then computed by using a set of timeline operators on the generated timeline representations of one or more attributes, further details of which will be described below. 
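The transformation described above, from point-in-time attribute changes into a state representation covering spans of time, can be sketched minimally as follows. The function name and span encoding are assumptions for illustration.

```python
# Sketch: raw change points for one attribute become (start, end, value)
# spans covering the session, i.e., a timeline representation.
def to_timeline(changes, session_end):
    """changes: sorted (timestamp, value) change points for one attribute."""
    spans = []
    for (t, v), (nxt, _) in zip(changes, changes[1:] + [(session_end, None)]):
        spans.append((t, nxt, v))
    return spans

cdn_changes = [(0, "C1"), (18, "C2")]
print(to_timeline(cdn_changes, session_end=30))
# [(0, 18, 'C1'), (18, 30, 'C2')]
```

The resulting spans make "how long was the CDN C1" a single filter-and-sum over the timeline, rather than a search for the next change row as in the tabular model discussed earlier.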
FIG.3illustrates an embodiment of a system for determining time-state metrics. In this example, time-state metrics system300is an embodiment of stateful session metrics determination engine204. In this example, time-state metrics system300includes timeline request configuration engine302and timeline processor310. In this example, timeline request configuration engine302includes configuration file(s)304, compiler306, and operator library308. In this example, timeline processor310includes streaming layer312, operator graph executor engine314, and message layer (to database)316. Further details regarding time-state metrics system300and its various components are described below. In some embodiments, the timeline processor310is configured to implement time-state operators and time-state data structures and model representations. In various embodiments, this includes generating timeline representations of attributes and computing time-state metrics by applying a chain of timeline operators. In some embodiments, the timeline processor is implemented using programming languages such as Scala, Rust, etc. In other embodiments, the timeline processor is implemented as an application programming interface (API) on top of existing analytics databases. Other implementations may be utilized, as appropriate. In some embodiments, the timeline processor takes as input a data stream (e.g., via message layer to input318), computes time-state metrics, and outputs the time-state metrics to a database. In this example, the input data stream (e.g., ingested via message layer318, which is an example of sensor feed ingestion engine202) includes session data provided in the form of heartbeats, which as one example is implemented in a format such as JSON (JavaScript Object Notation). In some embodiments, the stream is from a source such as Amazon S3, where the stream is processed through the timeline processor. 
In some embodiments, the timeline processor receives as input a timeline request configuration from configuration files (304). In some embodiments, the timeline request configuration is a configuration file for individual time-state metrics. Different metrics may be written for processing the data, where each different metric is associated with a corresponding timeline request configuration file. The time-state metrics are to be computed on the stream of raw data received for a session. An ensemble of multiple time-state metrics may be configured to be computed for the session. In some embodiments, the time-state metric configuration files are consolidated through compiler306. For example, the system includes an operator library through which metrics are written. In some embodiments, a time-state metric is implemented as a collection of timeline operators that are applied and combined in a particular sequence. In some embodiments, each time-state metric is represented as a graph, such as a DAG (directed acyclic graph). The collection or ensemble of DAGs is provided to the compiler. The compiler, based on the DAGs, refers to the operator library to obtain the code needed to execute the operators specified on the DAGs. In some embodiments, to execute a time-state metric (e.g., apply a DAG for every session that comes in), the compiler reads the configuration file for a time-state metric. The compiler then instantiates code to execute the graph of operators that form the time-state metric. The compiler synthesizes code for the DAG runtime to execute. For example, the compiler follows the timeline operator graph, identifies the operators to be performed in sequence, as well as retrieves the code for executing the operators and includes any specified operator parameters. For example, the compiler instantiates runtime objects for the runtime to execute. In some embodiments, the runtime has implementations of the operators in a time-state metric configuration. 
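The compile-and-execute flow described above can be sketched minimally: read a configuration naming a linear chain of operators, look each one up in an operator library, and instantiate a single callable for the runtime to apply. The config shape, operator names, and span encoding below are illustrative assumptions, not the platform's actual format.

```python
# Hypothetical sketch of compiling a time-state metric configuration.
# Spans here are (start, end, value) tuples with half-open [start, end) bounds.
OPERATOR_LIBRARY = {
    # Map a state timeline to a Boolean timeline (value == target).
    "to_boolean": lambda spans, target: [(s, e, v == target) for s, e, v in spans],
    # Sum the length of all spans whose value is True.
    "duration_true": lambda spans: sum(e - s for s, e, v in spans if v),
}

def compile_metric(config):
    """Instantiate a runnable pipeline from an operator-chain configuration."""
    steps = []
    for node in config["operators"]:
        fn = OPERATOR_LIBRARY[node["op"]]
        params = node.get("params", {})
        # Bind fn/params as defaults so each step captures its own operator.
        steps.append(lambda tl, fn=fn, params=params: fn(tl, **params))

    def run(timeline):
        for step in steps:
            timeline = step(timeline)
        return timeline

    return run
```

For example, compiling a two-operator chain (`to_boolean` with target "Cellular", then `duration_true`) and applying it to a network state timeline yields the total time spent on Cellular.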
In some embodiments, the timeline configuration file is in a machine-readable format such as YAML. As one example, the configuration is in a JSON format. In some embodiments, the configuration file is the output of an editor, such as a visual UI (user interface) editor used by a metric-designer to configure a time-state metric. In this example, the compiler generates the runtime code corresponding to the DAG representation specified in the timeline configuration file. The operator graph executor engine314(also referred to herein as a DAG executor) is configured to execute the code provided by the compiler. The DAG executor is applied to data processed by the streaming layer312, which is configured to receive a stream of raw data. For example, there is a message queue (e.g., message layer to input318) that takes heartbeats and ingests them into the system. For example, the message layer is configured to stream raw data into the platform. For example, the DAG executor is configured to traverse the nodes of the DAG, and execute the operators at each node according to the parameters and specification of the node (e.g., input arguments, parameters, etc.). Further details regarding graphs of timeline operators used to implement a time-state metric are described below. In some embodiments, the streaming layer312is configured to support complex event processing by performing various tasks such as fault tolerance, checkpointing, watermarking, etc., or any other data quality processing as appropriate. For example, in the real-world, events may not always arrive in sequence due to network delays, failures, data drops, etc. The streaming layer provides a canonical or cleaned up stream of event data that the DAG executor runs on. In some embodiments, every heartbeat of raw data that is ingested passes through the DAG of operators, end-to-end. For example, the time-state metric is updated for every heartbeat (or message with a set of raw event data). 
In some cases, the heartbeat may not have events that are of interest to the time-state metric, and may in part be ignored (where the value of the heartbeat may not be changed, since the raw data is not of interest to the operators in the time-state metric and computations are not performed on them, although the span of a timeline may be updated to extend the span range in some embodiments, further details of which are described below). In some embodiments, the input to an operator is a timeline, and the output of a timeline operator in the operator library is a timeline. In some embodiments, the output provided to the database is in a format that is appropriate for the output database. For example, the message layer to the database is configured to translate or convert the timeline output of the time-state metric into a format applicable to the output database. This includes encoding information in formats acceptable by the output database. In some embodiments, the message layer to database316is configured to format time-state metric outputs into an output database. In some embodiments, the message layer also performs summarizations. For example, for a session, it may be desired to have a value of the metric on a periodic basis, such as every minute or every thirty seconds. In some embodiments, when reporting the session to the database, a summary is generated based on aggregated session value metrics and converted to a format applicable to the database. In this example, what is reported to the database is a “real-time” metric, where the current value of the metric is reported according to some frequency or period. Another example type of metric that is reported is an end-of-lifetime metric. For example, at the end of a session, values for the time-state metrics that were applied are reported to the database. In some embodiments, the messaging layer is an interface to an output database that is configured to report the results of the time-state processing. 
In some embodiments, the interface is configured to determine, for a metric, what to report based on the time-state metric value, when to send the report, how to package the report, etc. The following are embodiments of determining what output data is to be sent. In some embodiments, not only the final result of the metric is provided as output. For example, any node in a DAG (not only the final node) may be tapped, and the corresponding data from that node provided as output. As one example, suppose that the output of the final node in the DAG is tapped into. The output of the time-state metric may be values, as well as timelines themselves (if supported by the database). The following are embodiments of determining when to send or transmit output based on a computed time-state metric. In some embodiments, the results of the metric, which are being applied to data as it is ingested, are used to generate an updated value over time as well. The output of the time-state metric may be provided at the end of a session. The values generated by the metric may also be reported periodically to the database (e.g., every thirty seconds). Metric values may also be provided on demand, as the session raw data is streaming in and processed in real-time. In some embodiments, the processing described herein occurs in a streaming layer, in real-time, operating on raw event data that is being ingested and collected. As described above, the context in which a metric is computed may be based on the combination of measured attributes being in certain respective states. In some embodiments, metadata associated with a session (whose stream of raw data to which the time-state metric is being applied) is stored along with the metric values. For example, when the time-state processor performs a computation on raw data corresponding to time t=X, the time-state processor generates an output value corresponding to time t=X. 
A row with a timestamp corresponding to t=X is sent to the database with the metric value computed at that timestamp along with metadata describing the session whose raw sensor data the time-state metric was applied to. If multiple metric values are being computed, then, for example, the output row includes multiple columns, one for each type of metric. Each row corresponds to a particular timestamp, and the output values computed for the various time-state metrics for that timestamp are included as column values in the row of data provided to the database. In some embodiments, aggregations or rollups can be performed to aggregate information across different time windows. For example, providing output values as they are computed for every time step may be resource intensive. In some embodiments, samples of time-state metric values (which are potentially being updated as new session raw data is received) may be provided periodically. Rollups can also be performed to determine, for example, averages of the output of a time-state metric, where the average is delivered periodically. This is an example of providing a summary of the metric value. In other embodiments, raw timelines are provided as output to the database. As another example, the time-state metric data structure representation is sent to the database. In some embodiments, the reported data is packaged as raw data or sent in a “session summary”-like data structure composed of the session attributes and the associated time-state metrics of interest. In the above example ofFIG.3, the timeline processor facilitates the use of timeline data models in existing data-processing systems. In some embodiments, the timeline processor is configured to translate the events stored in the system's original format to one or more timeline objects/representations. 
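The periodic rollups mentioned above can be sketched as a windowed average over sampled metric values, so that one summarized value per window is reported downstream. The function name and the 30-second window are assumptions for illustration.

```python
from collections import defaultdict

def rollup_average(samples, window=30):
    """Average (timestamp, value) metric samples into fixed-size windows.

    Returns a dict mapping each window's start time to the mean of the
    samples that fell within that window.
    """
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[(ts // window) * window].append(value)
    return {start: sum(vals) / len(vals)
            for start, vals in sorted(buckets.items())}
```

A summary like this is one way to reduce the volume of per-time-step output while still delivering a metric value periodically.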
In some embodiments, producing a timeline object includes determining an encoding of an input and identifying the timestamp or time range associated with each piece of incoming data. The timeline processor is configured to then apply timeline operators. In some embodiments, the timeline operators are selected from a timeline library of pre-defined operators. The timeline processor is also configured to translate the resulting timeline objects to another data format appropriate for exporting. In some embodiments, digesting or translating the timeline objects includes calculating final outputs (e.g., stateful metrics such as connection-induced rebuffering) by evaluating final timeline object data at specified timestamps or time ranges. In some embodiments, digesting timeline objects includes encoding and exporting results of timeline data evaluation in a format appropriate for downstream consumers (e.g., tables, summary statistics, etc.).

Example Time-State Metric Determination

The following is an example of computing a time-state metric. In some embodiments, the processing described in this example is implemented by time-state metrics system300ofFIG.3. In this example, suppose that the following time-state metric is to be computed for each video session:

How much time did each video session spend in connection-induced buffering when using Cellular?

In this example, the amount of time that the video session spent in buffering is determined within the context of when the player was using cellular.
Suppose the following heartbeats are received, streamed from a content player, where heartbeats are sent out including sensor data that is measured for various attributes:

{"timestamp": "1", "UserID": "U1", "SessionID": "S1", "Bitrate": "720p", "Network": "WiFi", "Player": "Init"}
{"timestamp": "3", "UserID": "U1", "SessionID": "S1", "Player": "Init"}
{"timestamp": "7", "UserID": "U1", "SessionID": "S1", "Bitrate": "1080p"}
{"timestamp": "8", "UserID": "U1", "SessionID": "S1", "Network": "Cellular"}
{"timestamp": "10", "UserID": "U1", "SessionID": "S1", "Player": "Buffer"}
{"timestamp": "12", "UserID": "U1", "SessionID": "S1", "Bitrate": "480p", "Player": "Buffer"}
{"timestamp": "14", "UserID": "U1", "SessionID": "S1", "Player": "Play"}
. . .

As shown in this example, an arbitrary sequence of events or raw or observed or reported data values is received. While in this example, a heartbeat is shown corresponding to a certain timestamp, in some embodiments, heartbeats are batched together and may include multiple heartbeats, each corresponding to some particular time (indicated by a timestamp).

Timeline Representation of Raw Measurement Data

In order to determine the duration metric described above, two durations or ranges of time are to be determined. The first duration is the span(s) of time in which the player was in the buffering state. The second duration to be determined is the span(s) of time in which the network was in the cellular state. The end-to-end or overall duration is the amount of time in which the player being in the buffering state overlaps with the network being in the cellular state. That is, the final duration of interest is the amount of time in which the player is in the buffering state AND the network is cellular. In order to determine the time-state metric, two durations are to be determined: (1) when the player is in the buffering state; and (2) when the network is in the cellular state.
A logical AND operation is performed to determine the overlap in time, which in turn is used to determine the final duration metric value. The following is an example of determining the portion of the context state that corresponds to “when using Cellular.” This includes modeling when the player was using a cellular network (versus, for example, WiFi). In this example, the network events are extracted from the above example raw data for the session.FIG.4Aillustrates an embodiment of a visualization of discrete events. In this example, a visualization of the values of the network attribute at their corresponding timestamps in the raw event stream is shown. While the value of the network attribute is shown at selected points in time that correspond to when a sensor measurement (determination of network state) was transmitted, it would be beneficial to determine a timeline of when the player was using WiFi or Cellular.FIG.4Billustrates an embodiment of a timeline representation of an attribute. In this example, the raw network event data is passed through a timeline operator that reconstructs the “network state” of when the player was using WiFi or cellular. In this example, the player transitioned to cellular at timestamp=8. In this example, a state timeline of the status of the network being used is constructed. As shown in the example ofFIG.4B, the network state is now modeled as spans of time in which the network state was a certain value (rather than at discrete points corresponding to timestamps in the raw data). In the example ofFIG.4B, the network state over time is constructed, where in this case, the network state is either cellular or WiFi. With respect to the metric under consideration, the particular context with respect to network state of interest is whether or not the network state was cellular. As shown in this example, the discrete network events have been transformed into a state function over time. 
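The event-to-state reconstruction described above can be sketched as follows. The function name and the convention of leaving the most recent span open (end time `None`) until the next change arrives are illustrative assumptions; bounds are inclusive, matching the span encoding described later.

```python
def events_to_state_spans(events):
    """Convert sorted (timestamp, value) raw events into state spans.

    Each span is a (start_time, end_time, value) tuple with inclusive
    bounds; the latest span is left open (end_time=None) until a change
    in the attribute's value is observed.
    """
    spans = []
    for ts, value in events:
        if spans and spans[-1][2] == value:
            continue  # no state change; the current span stays open
        if spans:
            # Close the previous span just before the new value's timestamp.
            start, _, prev_value = spans[-1]
            spans[-1] = (start, ts - 1, prev_value)
        spans.append((ts, None, value))
    return spans
```

Applied to the example's network events (WiFi at t=1, Cellular at t=8), this yields a WiFi span from t=1 to t=7 and an open Cellular span starting at t=8.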
In this example, to facilitate determination of when the network state was cellular, as well as to facilitate combining with the other context to be determined (when the player was buffering), the network state timeline ofFIG.4Bis passed through another timeline operator to generate a new Boolean timeline representation/model whose value is either True (corresponding to when the network state is cellular) or False (corresponding to when the network state is not cellular). For example, a timeline operator is utilized on the timeline ofFIG.4B, where a True/False value is assigned based on whether the network state value for a span of time was Cellular or WiFi.FIG.4Cillustrates an embodiment of a timeline. As shown in the example ofFIG.4C, the timeline ofFIG.4Bis evaluated to determine that “Is Using Cellular?” is true after timestamp t=8, and false prior. As will be shown below, transforming the raw network events to a True/False timeline facilitates manipulation of multiple timelines in order to determine the overall time-state metric of interest. The following is an example of determining the portion of the time-state metric context that corresponds to connection-induced buffering. FIG.5Aillustrates an embodiment of events at discrete points in time. The example plot ofFIG.5Avisualizes the value of the player events at times corresponding to timestamps in the received raw data above. FIG.5Billustrates an embodiment of a player state timeline. In some embodiments, the player state timeline representation is constructed from raw player events extracted from the raw event data received in heartbeat messages. In this example, the player events have been converted to states, where the player state is represented via spans of time in which the player has a certain state. 
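The state-to-Boolean conversion described above can be sketched as a single pass over the state spans; merging adjacent spans with the same truth value is an implementation choice assumed here for compactness.

```python
def state_to_boolean(spans, target):
    """Map each (start, end, value) state span to a Boolean span.

    The span's value becomes True where the state equals the target
    (e.g., "Cellular" or "Buffer") and False elsewhere; adjacent spans
    with the same truth value are merged.
    """
    out = []
    for start, end, value in spans:
        flag = (value == target)
        if out and out[-1][2] == flag:
            out[-1] = (out[-1][0], end, flag)  # merge with previous span
        else:
            out.append((start, end, flag))
    return out
```

For instance, the player states Init/Play/Buffer collapse into a single False region followed by a True region for "Is Connection Induced Buffering?".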
In order to facilitate determination of the final duration metric, the timeline ofFIG.5Bis further converted into another Boolean timeline that indicates whether the player is in the connection induced buffering state. In this example, the new timeline has Boolean values of either True or False (whereas there were at least three different player states). In this example, the player state timeline is passed through another timeline operator to generate the new Boolean timeline representation/model ofFIG.5C, whose value, for a given span of time, is either True (corresponding to when the player state is buffering) or False (corresponding to when the player state is anything other than buffering). As shown in the example ofFIG.5C, the timeline ofFIG.5Bis evaluated to determine that “Is Connection Induced Buffering?” is true between timestamp t=10 to timestamp t=14, and false otherwise (in the time range of timestamp t=1 to t=15). The following is an example of determining the end-to-end metric, which is the duration of time during which connection-induced buffering occurs while using cellular. In this example, to determine the time-state metric, first the context is determined. In this example, the context is connection-induced buffering when using cellular. In order to do this, the player state and the network state are combined via a logical AND operator. For example, the timelines ofFIGS.4C and5Care combined by applying a logical operation (AND) at each point in time. The following is an example of using timelines to efficiently determine the context by manipulating timeline representations of attribute/variable states.FIG.6Aillustrates an embodiment of manipulating timeline representations of attributes. As shown in the example ofFIGS.4C and5C, Boolean timeline representations were generated from the raw events.
In the example ofFIG.6A, the Boolean timelines corresponding to “Is Using Cellular” and “Is Connection Induced Buffering?” shown inFIGS.4C and5Care reproduced. In this example, by generating Boolean timeline representations for the context of interest, the overall context is determined by determining when both timelines were equal to true. In this example, the timelines ofFIGS.4C and5Care ANDed together to generate a new timeline representing when there was connection-induced buffering AND when the player was using cellular.FIG.6Billustrates an embodiment of a context timeline. As described above, the context timeline ofFIG.6Bis the AND of the timelines ofFIGS.4C and5C. In this example, the timelines ofFIGS.4C and5Chave been combined by applying a logical AND operation at each point in time. Now that a timeline representation of the overall context has been determined (e.g., when the overlap in time was true), the metric of duration is computed as the summation of the duration where the overlap was true. In some embodiments, the duration is determined using the context timeline representation ofFIG.6B. In this example, the summation is represented also as a timeline.FIG.6Cillustrates an embodiment of a duration timeline. As shown in the example ofFIG.6C, if the value of duration were measured at time t=14, then the duration value would be 4 seconds spent in buffering time when also using cellular. The timeline representation ofFIG.6Cis generated by determining the cumulative duration of where the state of the input timeline (that ofFIG.6B) was True. As shown in this example, a time-state metric is computed by manipulating timeline representations of attributes. Generating the time-state metric includes the use of various operators to implement the various desired logical functions.
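The AND combination and duration summation described above can be sketched as below. For arithmetic simplicity this sketch assumes half-open [start, end) spans (the span encoding described later in this document uses inclusive bounds), which reproduces the example's 4 seconds of buffering while on cellular.

```python
def and_timelines(a, b):
    """Pointwise logical AND of two Boolean span timelines.

    Spans are (start, end, value) with half-open [start, end) bounds;
    every overlapping pair of spans produces an output span whose value
    is the AND of the two input values.
    """
    out = []
    for s1, e1, v1 in a:
        for s2, e2, v2 in b:
            start, end = max(s1, s2), min(e1, e2)
            if start < end:
                out.append((start, end, v1 and v2))
    return out

def duration_true(spans):
    """Total time spent in the True state."""
    return sum(end - start for start, end, v in spans if v)
```

Using the example session, "Is Using Cellular" is True from t=8 and "Is Connection Induced Buffering?" is True from t=10 to t=14, so the combined timeline is True on [10, 14) and the duration metric is 4.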
As shown in the above example, in order to determine the end-to-end metric of connection-induced buffering when using cellular, a logical overlap of the buffering Boolean timeline ofFIG.5Cand the network Boolean timeline ofFIG.4Cwas determined. Computing the end-to-end duration metric then in turn involves determining an integral of the combined buffering/network timeline. In some embodiments, time-state metrics are specified through the use of various operators. In some embodiments, the operators are included in a library of operators such as operator library308. As shown in this example, events were converted to timeline representations. The timeline representations were then passed through a sequence of timeline operators. For example, the timeline operators take timelines as input and generate output timelines. Some timeline operators also take multiple timelines as input and combine them into one or more output timelines, which may in turn be fed as input to yet other timeline operators. As shown in this example, to determine a time-state metric, raw events are converted to timeline representations and then passed through a chain of timeline operators that are applied in a sequence (e.g., graph of operators as described above). One example type of operator is one that manipulates raw event data into timeline representations of states. Another example type of operator is one that manipulates the timeline representation of states into a manipulatable timeline representation (e.g., to define Boolean values conditioned on a desired logical operator). Further operators for determining certain types of metrics over time (e.g., duration) are also included. The various types of operators thus provide an efficient mechanism by which to create compact queries for desired time-state metrics. In some embodiments, the specification of specific operators is composed based on the type of attribute to be converted. 
For example, to determine the player state Boolean timeline ofFIG.5C, a first operator is used to extract the raw player state events and generate the player state timeline representation of the player attribute ofFIG.5B. To generate the Boolean timeline representation, the timeline representation ofFIG.5Bis passed through a logical operator which is configured to determine, over time, when the value of the player attribute is equal to buffer. If the value of the player attribute at some time was equal to buffer, then the output value for that time is set to True. If the value of the player attribute at that time was not equal to buffer, then the output value for that time is set to False. Applying the logical operator over the timeline ofFIG.5Bresults in the timeline ofFIG.5C. In this example, the use of timeline representations provides various benefits. Such benefits include more efficient manipulation of data to determine complex state over time, as well as more intuitive visualization of data over time. The use of the representations described herein also simplifies the set of operations used to compose and define complex time-state metrics, as compared to using existing relational or tabular techniques. As shown in the above example, determining a time-state metric from raw events includes multiple components. One component is a data structure/model in which raw events are converted into timeline representations. The timeline representation is a data model for data that appears in time-state problems, such as events, step functions, and measurements. A second component is a computation model which includes executing various operators to manipulate such timeline representations (e.g., combine them) in order to determine the end-to-end time-state metric of interest. In some embodiments, the computation model includes what are referred to herein as time-state operators.
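The full operator chain described above can be sketched end-to-end on a condensed subset of the example heartbeats: extract events per attribute, build state timelines, map them to Boolean timelines, AND them, and sum the duration where the result is True. All names and the half-open [start, end) span convention are assumptions for this sketch, and the session is assumed to end at t=15.

```python
# Condensed subset of the example session's heartbeats (change events only).
HEARTBEATS = [
    {"timestamp": 1, "Network": "WiFi", "Player": "Init"},
    {"timestamp": 8, "Network": "Cellular"},
    {"timestamp": 10, "Player": "Buffer"},
    {"timestamp": 14, "Player": "Play"},
]
SESSION_END = 15  # assumed end of the observed range

def extract(heartbeats, field):
    """Pull out (timestamp, value) events for one attribute."""
    return [(hb["timestamp"], hb[field]) for hb in heartbeats if field in hb]

def to_state_spans(events, end):
    """Convert change events into contiguous half-open state spans."""
    spans = []
    for ts, value in events:
        if spans:
            spans[-1] = (spans[-1][0], ts, spans[-1][2])  # close previous span
        spans.append((ts, end, value))
    return spans

def to_boolean(spans, target):
    return [(s, e, v == target) for s, e, v in spans]

def and_timelines(a, b):
    out = []
    for s1, e1, v1 in a:
        for s2, e2, v2 in b:
            s, e = max(s1, s2), min(e1, e2)
            if s < e:
                out.append((s, e, v1 and v2))
    return out

def duration_true(spans):
    return sum(e - s for s, e, v in spans if v)

cellular = to_boolean(
    to_state_spans(extract(HEARTBEATS, "Network"), SESSION_END), "Cellular")
buffering = to_boolean(
    to_state_spans(extract(HEARTBEATS, "Player"), SESSION_END), "Buffer")
metric = duration_true(and_timelines(cellular, buffering))
```

Running this chain yields a metric of 4, matching the example's 4 seconds of connection-induced buffering while using cellular.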
In some embodiments, time-state operators are configured to take as input one or more input timelines, and produce as output one or more output timelines, according to the specification of the time-state operator. The timeline operators of the computation model are configured to manipulate the aforementioned timeline representation, and provide an efficient mechanism by which to express logic that would be more difficult to implement in tabular models such as SQL. The following are further embodiments regarding time-state operators. As described above, time-state operators are used to manipulate time-state timelines to construct metrics. In some embodiments, time-state operators are configured to take as input one or more timelines, apply one or more transformations, and produce one or more output timelines. In some embodiments, a configuration of a time-state metric involves specification of a chain of time-state operators to be applied in a particular sequence. For example, the time-state operators are primitives from which time-state metrics are composed. In some embodiments, a compositional language is provided that allows users to combine the aforementioned operators into a directed acyclic graph (DAG) to implement the desired or intended time-state metric. One example type of operator is to extract a field or attribute from a heartbeat and add a field or attribute value to a timeline representation of the attribute. As one example, to generate the timeline representation ofFIG.4B, an extraction operator for the network field/attribute is executed. In the example of heartbeats in the JSON format, the network extraction operator is configured to identify the network key in the heartbeat message (e.g., as it is ingested) and extract the corresponding value for that key name. The extracted attribute value is then added to a timeline representation for the attribute. For example, network values are extracted from heartbeats and added to a network timeline. 
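The extraction operator described above can be sketched for JSON heartbeats: identify the key in the ingested message and, when present, append the (timestamp, value) pair to the attribute's timeline. Function names are illustrative.

```python
import json

def extract_attribute(heartbeat_json, field):
    """Parse one JSON heartbeat; return the (timestamp, value) event for
    the given attribute, or None if the heartbeat carries no such event."""
    hb = json.loads(heartbeat_json)
    if field not in hb:
        return None  # heartbeat has no event of interest for this attribute
    return (int(hb["timestamp"]), hb[field])

# Build a network timeline from a stream of heartbeat messages.
timeline = []
for msg in ['{"timestamp": "1", "Network": "WiFi"}',
            '{"timestamp": "3", "Player": "Init"}',
            '{"timestamp": "8", "Network": "Cellular"}']:
    event = extract_attribute(msg, "Network")
    if event is not None:
        timeline.append(event)
```

Note that the heartbeat at t=3 carries no network event and so contributes nothing to the network timeline, consistent with the behavior described earlier for heartbeats without events of interest.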
The following are embodiments of a timeline representation of an attribute. In some embodiments, raw events are converted into a timeline representation. For example, the raw event values are encoded as spans. In some embodiments, the timeline representation includes a representation of the states (e.g., corresponding to different values) that an attribute can be in. In the example of the network attribute, the network attribute may be one of two values, WiFi or Cellular. In this example, these event values are treated as the two states that the network attribute can be in at any given time. In heartbeats, network events include indications of when the player was using WiFi or Cellular, along with corresponding timestamps. In some embodiments, a network event occurs when the network attribute value changes. For example, suppose that at time t=1, the network value was WiFi. At time t=8, the network value changed to Cellular. This change in network value is an event that is included in the heartbeat. However, between time t=1 and t=8, no network values were sent (as the value was WiFi during that period). This is shown in the example ofFIGS.4A and4B. In some embodiments, the network field is tracked over time, where the extracted network values from heartbeats are encoded as spans. In some embodiments, a span includes a start time, an end time, and a value that the attribute had between the start time and the end time. In some embodiments, the timeline representation is a data structure that includes a set of span data structures. For example, an event-to-state operator is executed to convert the raw events shown inFIG.4A(which is based, for example, on the raw heartbeat data shown above) to the representation shown inFIG.4B. 
In some embodiments, the timeline processor is configured to convert raw events recorded at discrete points in time into state functions over time (also referred to herein as a timeline state representation of an attribute, where the timeline representation is generated by performing a transformation on the raw event data). For example, the timeline representation ofFIG.4Bis encoded as a set of spans, where each span is determined when a change in the value of the network attribute was detected from the raw data. For example, the timeline representation ofFIG.4Bis implemented with the following spans: Span_1: {start_time: t=1, end_time: t=7, value: WiFi}, Span_2: {start_time: t=8, end_time: t=14, value: Cellular}. In some embodiments of the timeline data model representation, the start and end times are inclusive. In some embodiments, the timeline representation ofFIG.4Ais also encoded as a set of spans, where each span represents zero or more raw events. In some embodiments, one span contains a list of events (here in this example denoted as a comma-separated list of event values enclosed in square brackets) that happened at the end time of the span. For example, the timeline representation ofFIG.4Ais implemented with the following spans: Span_1: {start_time: t=1, end_time: t=1, value: [WiFi]}, Span_2: {start_time: t=2, end_time: t=8, value: [Cellular]}, Span_3: {start_time: t=9, end_time: t=14, value: []}. The above example is an example event-type timeline object in which events are encoded. In some embodiments, timeline data structures include lists of spans, such as in the example shown above, storing the “span”s of interest where the timeline had a particular value. Various other semantically equivalent encodings of the timeline data structure may be used as appropriate, such as using discretized vector measurements in time, columnar representations akin to columnar analytics databases, and other compressed representations as well.
The above example representations are usable directly as data structures (in addition to other semantically equivalent data structures). In some embodiments, new spans are created, and current spans are closed when a raw event indicates a change in state of the attribute. For example, a new span is generated when an event indicating a change in the value of an attribute is encountered. The now-previous span is closed. For example, at time t=8, a change in the network state value is determined (as it has changed from WiFi to Cellular). Based on this change, the first span is closed, where its end time, which had previously been unspecified and open, is now set to t=7, or the time step just prior to the time of the newly observed value (as this indicates when the network state value stopped being WiFi), thereby closing the span. For example, if the new value was received at time T, then the previous span's end time is now T−1. As one example, the time is modeled discretely, at a granularity of nanoseconds, or any other granularity of units of time or time steps as appropriate. A new span is created to track the span of time that the network attribute is in its new state of Cellular. The start time is set to be when the new network state was detected, which is t=8. The end time for this new span is not set until the next timestamp at which the state of the network changes yet again. In another embodiment, the end time of a current span is set to be the timestamp of the most recent heartbeat. If a heartbeat comes at the next timestamp, and the attribute value has not changed, then the span is extended by updating the end time of the span to be that next timestamp. This is to account for the fact that the timeline processing is occurring in real time, as raw or observed or reported data samples are being streamed (in real-time) and ingested. 
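A minimal sketch of the streaming span maintenance described above (extend the open span when the value is unchanged, close it at T−1 and open a new one on a change); the helper name and list-based span encoding are illustrative assumptions:

```python
def ingest(spans, timestamp, value):
    """Streaming update of a list of state spans ([start, end, value] lists).

    Extends the current open span when the attribute value is unchanged;
    otherwise closes the previous span at timestamp - 1 and opens a new span,
    as described above.
    """
    if spans and spans[-1][2] == value:
        spans[-1][1] = timestamp          # same value: extend the open span
    else:
        if spans:
            spans[-1][1] = timestamp - 1  # close previous span at T-1
        spans.append([timestamp, timestamp, value])
    return spans

# Heartbeats streaming in real time for the network example
spans = []
for t, v in [(1, "WiFi"), (5, "WiFi"), (8, "Cellular"), (14, "Cellular")]:
    ingest(spans, t, v)
```

Note that the heartbeat at t=5 only extends the WiFi span; the change at t=8 closes it at t=7, matching the example.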
In this example, the new span is added to or otherwise included in the sequence of spans of the timeline representing the attribute, and the new span is temporally subsequent to the previous span in the sequence. As shown in the example above, a timeline object representing the attribute over time includes a sequence of spans or span elements. Each span specifies an interval of time and an associated value of the attribute over that time interval. The spans are created and updated as observed values of the attribute (which are associated with corresponding timestamps of when the events occurred, were sampled, were reported, etc.) are streamed in. In one embodiment, a span element includes a span start time, a span end time, and a span value. For example, the span start time and the span end time specify a time interval of the span. In some embodiments, the span value is an encoding of the value(s) of the attribute over that time interval specified by the start time and end time. For example, while raw data values for the network attribute may be received intermittently (e.g., when changes in the network attribute occur) as shown in the above example, such as at time t=1 and time t=8, spans are created that specify the value of the network attribute over all time. For example, the span value in the first span of the sequence encodes that the network attribute is the constant state value of “WiFi” during the entire interval of time specified from the start time to the end time of the span. In this way, the span representation may be queried for the value of the network attribute at any time, such as at time t=5, which would return the span value of “WiFi”, even though a network sample corresponding to time t=5 did not exist in the stream of raw data. Performing such a conversion of observed attribute samples into a compact timeline representation using the techniques described herein provides various benefits. 
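The point-in-time lookup described above (e.g., querying for t=5 returns “WiFi” even though no sample was taken at t=5) might be sketched as follows, assuming the (start, end, value) span encoding from the earlier examples:

```python
def query(spans, t):
    """Return the attribute value at time t, or None if t is not covered.

    spans is a list of (start, end, value) tuples with inclusive endpoints,
    so containment is a simple start <= t <= end check per span.
    """
    for start, end, value in spans:
        if start <= t <= end:
            return value
    return None

network = [(1, 7, "WiFi"), (8, 14, "Cellular")]
```

For example, `query(network, 5)` returns "WiFi" because t=5 falls inside the first span's interval.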
For example, specifying an encoding of value(s) of an attribute that is valid over an interval of time that is determined by a start time and end time specified in a span element allows for a compact representation of the varying of the attribute over time, in contrast to explicit enumerating of the value at each possible timestamp. This provides an improvement in the amount of storage needed to maintain information pertaining to the evolution of values of the attribute over time. The encoding of time-varying attribute values in the compact span representation described herein reduces the amount of storage needed to maintain the information about the attribute over continuous time. Further, the compact representation provides for the benefit of indicating a value for the attribute at points in time where samples were not taken or observed (e.g., at intermediate points in time between times at which samples were taken). The following are further embodiments of determining the parameters (start time, end time, and encoded value) of a span. In the above example, the end time of a most recent span, and a start time of a next span in the sequence of spans in a timeline representation were determined based on receiving of updated values of an attribute. For example, the end time of a current span is updated until an event timestamp corresponding to a change in state of the attribute is received, where the end time is no longer updated for the current span element, and a new span is created in the sequence to encode the new attribute state and the interval of time over which the new attribute state value is valid. In some embodiments, the end times and/or start times of spans are determined using other types of time markers. One example of such a marker of time is based on watermarks that are determined when processing real-time streaming data. 
For example, while a data point or event may be received by the platform at a certain time, the actual event timestamp of a raw data point (timestamp of when the event occurred) would be some time prior. Due to delays, lateness in receipt of data may result in that data not being appropriately incorporated (e.g., where it should have been incorporated based on event time, but was received by the processing system too late to be included). Watermarking may be used to determine thresholds for accounting for late receipt or arrival of events. In some embodiments, the start/end times of spans may be set to timestamps determined based on watermarks. For example, a span end time may be set to a timestamp that is determined according to a watermark threshold (which specifies, for example, an expected or allowed amount of lateness between event time and receipt time), such that for that interval of time specified for the span, no late arrivals of data points are expected (which could potentially indicate changes or updates to the attribute state value that would require, for example, the end time of the span to be retroactively changed based on the late-arriving event). A new, subsequent span element is also created in the sequence of spans, where the start time is based on the timestamp generated according to the watermark threshold, where the value for the new span is set to the value of the prior span (in the case where a new event data value was not received). Existing spans may be closed (e.g., end time is set and no longer changed), and new spans created based on new watermark-based timestamps being determined (which may occur as a batch process over time). In some embodiments, the value of the attribute at a given timestamp or point in time is determined by querying the timeline with the given queried-for timestamp and determining which span (which has a corresponding time span) the queried for timestamp is included in. 
The value of the attribute at that span is returned. As described above, it would be beneficial to determine when a timeline for an attribute has a certain value. In some embodiments, this is performed by using what is referred to as an “Equals” operator, which takes as input an attribute timeline, and generates a Boolean (e.g., true-false) timeline such as that shown in the examples of FIGS. 4C and 5C. For example, each of the spans in the timeline representation is accessed, and the value for each span is compared against a value of interest. A new timeline is generated that has the same spans, but for each span, now has a true or false value based on the comparison between the attribute value for that span and the value of interest.

Time-State Metric Request Configuration

As shown in the above example of determining how much time a video session spent in connection-induced buffering when using Cellular, the computation of the time-state metric involved the execution of a chain of operators applied in a particular sequence to transform raw events to timeline representations, manipulate timeline representations (e.g., perform logical operations on timelines to combine them), as well as determine measures (e.g., cumulative duration) on timelines. In some embodiments, the chain or sequence of operators used as primitives to construct or build the time-state metric is expressed as a directed acyclic graph (DAG) of operators. The processing logic represented by the DAG of operators is registered as a configuration for the time-state metric. Such processing logic is used to perform computations on the event stream in a streaming manner. FIG. 7 illustrates an embodiment of a time-state metric directed acyclic graph. FIG. 7 shows an example composition of operators to create a metric DAG for the time-state metric above (How much time did each video session spend in connection-induced buffering when using Cellular?).
In this example, Get operators 704 and 706 are used to extract specified attributes (network and player attributes, respectively) from the session data 702 (that may be streaming in). Timeline representations are generated by using the “LatestEventToState” operators 708 and 710 to generate the timelines shown in the examples of FIGS. 4B and 5B, respectively. In this example, the “GET” operators are configured to extract specified information (e.g., specific fields, or according to a function) from raw data (e.g., JSON format heartbeats, or any other raw data format such as CSV). The “LatestEventToState” operator is configured to take the extracted values for the field and convert the raw events into a timeline representation with the encoding of spans as described above (and update the timeline representation as new events are received). Equals operators 712 and 714 are used to generate the True/False timelines shown in the examples of FIGS. 4C and 5C, respectively. For example, the Equals operator is configured to compare each update or state with a fixed value specified as a parameter (e.g., network=Cellular and player=buffering), producing True or False. The output timelines provided as outputs of operators 712 and 714 are logically ANDed together using timeline AND operator 716 to create the timeline shown in the example of FIG. 6B. The timeline of FIG. 6B is then passed as input to duration operator 718, which is configured to perform integration and create the timeline shown in the example of FIG. 6C. The DAG representation of a time-state metric is used to capture the sequence of operations to be executed, as well as the combining of sub-operations, which models the combining of multiple contexts in multiple states. As shown in this example, a node of a DAG representation of a time-state metric is an operator (selected from the operator library) with (optional) corresponding parameters.
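Under the span encoding assumed in the earlier sketches, the operator chain of FIG. 7 (Equals on each attribute, a timeline AND, then a duration measure) can be approximated as below; the helper names and the per-time-step AND are illustrative simplifications, not the platform's implementation:

```python
def equals(spans, target):
    """Equals operator: True where the span value matches the target."""
    return [(s, e, v == target) for s, e, v in spans]

def value_at(spans, t, default=False):
    for s, e, v in spans:
        if s <= t <= e:
            return v
    return default

def timeline_and(a, b, t_min, t_max):
    """Logical AND of two Boolean timelines, evaluated per discrete time
    step and re-merged into spans (simple, unoptimized combination)."""
    out = []
    for t in range(t_min, t_max + 1):
        v = value_at(a, t) and value_at(b, t)
        if out and out[-1][2] == v:
            out[-1] = (out[-1][0], t, v)   # extend the current output span
        else:
            out.append((t, t, v))          # start a new output span
    return out

def duration_true(spans):
    """Elapsed time in the True state (end - start per span, matching the
    duration=(time-10) encoding style of FIG. 6C)."""
    return sum(e - s for s, e, v in spans if v)

# Hypothetical session timelines; spans are (start, end, value), inclusive
network = [(1, 7, "WiFi"), (8, 14, "Cellular")]
player = [(1, 9, "playing"), (10, 14, "buffering")]

buffering_on_cellular = timeline_and(
    equals(network, "Cellular"), equals(player, "buffering"), 1, 14)
metric = duration_true(buffering_on_cellular)
```

With these hypothetical spans, buffering and Cellular overlap only from t=10 to t=14, so the metric evaluates to 4 units of time.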
The directionality of the edges between nodes indicates the input to an operator node, and where the output of the operator node proceeds to (e.g., another node in the graph of operators). Further details regarding timeline operators are described below.

Operator Graph Optimizations

In some embodiments, new metrics are registered to the timeline analytics system. For example, an ensemble of metrics may be registered or configured, where each time-state metric is represented as a DAG. In some embodiments, the DAGs of the metrics in an ensemble are evaluated, and DAG consolidation is performed. For example, graphs or subgraphs of operators that are common to multiple metrics are identified so that they are only determined once (rather than being computed multiple times and repeated for the entire ensemble). This is one example type of DAG optimization. The following are further details of performing such operator graph optimization. As described above, a compiler is configured to read timeline metric configurations and execute the metric by implementing the operators in the DAG representation of the timeline metric configuration. In some embodiments, multiple time-state metrics are to be applied to the stream of raw data for a session (where the collection of time-state metrics is referred to herein also as an ensemble of time-state metrics). In some embodiments, the compiler is configured to perform optimizations such as consolidation, which includes determining whether there are any common or overlapping portions of the graph representations of the collection of time-state metrics being computed. When a portion of a graph that is common to two or more time-state metrics is identified, the sequence of operations (e.g., sub-graph of nodes) identified in the common portion of the graph is performed once, rather than being repeated for each metric computation. For example, one global sub-DAG of operators can be performed.
For example, by representing time-state metrics as a directed graph of operators, graph optimizations are performed to prevent common nodes (where nodes are operators, and the common nodes are those operators that are being performed across multiple metrics) from being repeated. Rather, that portion of time-state processing is reused. In some embodiments, such graph optimizations are performed by the compiler. This improves computation efficiency and reduces computation cost. For example, the compiler is configured to identify subgraphs that are common to graph representations of two or more time-state metrics in an ensemble of multiple time-state metrics to be applied to an incoming session's raw data. In some embodiments, the consolidation performed by the compiler includes performing merging of DAGs. FIG. 8 illustrates an embodiment of merging time-state metric graph representations. In this example, suppose two time-state metric graph representations 802 and 804 are in an ensemble of metrics to be applied to a session's stream of raw data. As shown in the DAG representations of the metrics, the two metrics' graphs share common subgraph portions, such as the subgraph 808 with the node for the operator Get with the same parameter of “Event” to get, as well as the next node being the TwoEventDurations operator, also with the same parameters “page_view” and “navigation_start.” At the end of both metrics is also the EvaluateAt( ) operator. As shown in this example, the compiler is configured to evaluate the DAGs of both Metric DAG 1 and Metric DAG 2 and identify overlapping portions of the timeline graph representations to be executed. As an optimization, the compiler merges the two DAGs to generate merged DAG 806. As shown in this example, by having graph representations of time-state metrics (which are composed of sequences of operators), various types of graph optimizations may be performed to reduce computation cost.
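One way to sketch the consolidation idea (deduplicating nodes whose operator, parameters, and inputs all match, so a shared subgraph runs once) is shown below; the node encoding and metric shapes are hypothetical, loosely following the FIG. 8 example:

```python
def consolidate(metric_dags):
    """Merge the DAGs of an ensemble of metrics so that structurally
    identical nodes (same operator, parameters, and inputs) execute once.

    Each node is a dict: {"op": str, "params": tuple, "in": [node, ...]}.
    Returns the number of unique operator nodes after merging.
    """
    seen = {}

    def key(node):
        # Structural identity: operator, parameters, and (recursively) inputs
        return (node["op"], node["params"], tuple(key(i) for i in node["in"]))

    def visit(node):
        k = key(node)
        if k not in seen:
            seen[k] = node
            for i in node["in"]:
                visit(i)

    for root in metric_dags:
        visit(root)
    return len(seen)

# Two hypothetical metrics sharing a Get -> TwoEventDurations prefix
get = {"op": "Get", "params": ("Event",), "in": []}
ted = {"op": "TwoEventDurations",
       "params": ("page_view", "navigation_start"), "in": [get]}
m1 = {"op": "EvaluateAt", "params": ("session_end",), "in": [ted]}
m2 = {"op": "EvaluateAt", "params": ("first_frame",), "in": [ted]}
unique_nodes = consolidate([m1, m2])
```

Without merging, the two metrics would execute six operator nodes; the merged DAG executes four, since the shared two-node prefix runs once.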
For example, the subgraph with nodes Get(“Event”) and TwoEventDurations(“page_view”, “navigation_start”) need not be performed (e.g., executed) twice.

Timeline Configuration File Example

FIG. 9 illustrates an embodiment of a timeline configuration file. In this example, the configuration file is in a JSON format from a particular host. The configuration file may be written in other formats, as appropriate. In this example, at 902, an identifier of the computation scope (session identifier path to events for the session) to which the time-state metric will be applied is shown. At 904, the data that will be used to designate timestamps is also indicated, as different data may be encoded differently. The DAG portion 906 includes named nodes, such as “rawEvents” (908), “events” (910), “attemptTrue” (912), “timeToFirstAttempt” (914), “evaluatedInRealtime” (916), etc. For each node, the following node parameters are specified:

op: the timeline operator to be performed

in: the input to the node that the timeline operator will operate on. The input may be the output of another node, the logical combination of multiple outputs, etc.

In some cases, multiple metrics may share portions of their respective DAGs. In the example of the rawEvents node 908, the operator is “eventSourceTimeline”, which is applied to the source raw data included in heartbeats. Following the node specifications of the inputs, outputs, and operators forms a directed acyclic graph of a processing chain of operations. Each operator refers to what the input node is. For example, the operation called “get” refers to the output “rawEvents.” In this example, the “rawEvents” node is configured to use the operator “eventSourceTimeline” to convert the source data from the heartbeat format to a timeline-compatible format. The heartbeats for a session (identified by the session identifier path) specified in the source parameter are treated as an event source.
The “events” node includes a “get” op(eration) that takes as input the raw event timeline from the “rawEvents” node (where the “$” symbol refers to the output of a node in the DAG). In some embodiments, the node specifies a parameter to be used in conjunction with the operator, which includes, for example, a field name in a data set. Some operators, such as the “attemptTrue,” take as input two timelines. That is, an operator may be configured to take in multiple inputs. Operators may also provide multiple timelines as output. In this example, the time-state metrics configuration also includes a taps section 918. In some embodiments, the taps portion is a specification of where (e.g., output of particular node) in the DAG that output data is to be obtained. For example, the output of any node may be tapped and provided as output. In this example, the output of node 916 (“evaluatedInRealtime”) is to be obtained. A specification of where and how the tapped output is to be provided is shown at destination portion 920. For example, a protocol and server location for the output to be sent to is specified. The outputs of multiple nodes of the DAG may be specified and provided as output.

Operator Taxonomy

The following is an example taxonomy of time-state operators:

Extract: Take in as input raw data or a time-state object (object with timeline representation) and perform manipulations (e.g., get by field, filter by value) to produce time-state objects (e.g., timelines).

Transform: Take in as input one or more timelines and apply different types of arithmetic (e.g., add, subtract, multiply, divide), logic (e.g., AND, OR, Not, etc.), comparisons (e.g., = Equals, < Less than, > Greater than, >= Greater than or equal to, <= Less than or equal to, etc.), accumulations (e.g., count, duration), stateful operations (e.g., time between events) to produce new timeline objects. Example types of transform operators include logic, arithmetic, comparison, stateful model, accumulator, etc.
In some embodiments, applying a transformation to a timeline includes evaluating each span in a timeline, and applying the transformation to the span value in a given span. An output timeline is generated with a corresponding set of spans, but with the span values including transformed versions of the span values of the input timeline. In some embodiments, timeline operations are configured to combine two or more timelines and generate a third timeline as output. In some embodiments, the value for the output timeline at a given point in time is determined by accessing the spans of the input timelines, determining which spans of the input timelines the given point in time belongs to, and determining the values of the spans that the given point in time is included in. The transformation is then applied to the values obtained from the input timelines. The result of the transformation is then included in the output timeline. In this way, for the output timeline, the value of the output timeline at all points in time for the session is determined. For example, for every point in time, an input timeline is queried. The value of the attribute at the queried-for time is returned. For example, querying a timeline at a queried-for time includes identifying a span in which the queried-for time is included. The span value for the identified span in which the queried-for time is included is returned. If the span value is a function, then the value at the queried-for time is determined according to the function specified in the span value. A transformation is applied to the value returned based on the querying of the input timeline with the queried-for time. If there are multiple input timelines, then each of the input timelines is queried at a queried-for time, as described above, and the returned values are combined in accordance with an operator to be performed.
An output timeline is then updated based on the combining of the values returned from the querying of the input timelines. As shown in the example above, in some embodiments, the output of the timeline operator is also a timeline, which also includes a sequence of spans, where the parameters of each span (start time, end time, and encoded value for the interval of time specified by the start and end times) in the output sequence are determined based on the transformation applied to the span elements of the input timeline. The number of spans in the output sequence need not match the number of spans in the input sequence, or have the same start/end times. For example, when a timeline operator evaluates a span in the input sequence of spans, this may result in splitting of the input span into multiple spans in the output sequence. As one example, consider an operator that checks whether a condition is True (value is above a threshold) or False (value is below the threshold). Suppose a span in the input sequence of spans with the following span parameters: Input_span: {start_time: t1, end_time: t3, value: y=m*time+b}. Suppose that based on the encoded value of the input span, which is a linear function of time in this example, the operator determines that the condition is False (value is below the threshold) until t2, and True on/after t2, where time t2 is between t1 and t3. This results in the operator splitting the input span into two output spans, such as the following: Output_span_1: {start_time: t1, end_time: t2−1, value: False}, Output_span_2: {start_time: t2, end_time: t3, value: True}.

Export: Take in as input one or more timeline objects and repackage them into a format/schema that is amenable or compatible for a downstream consumer (e.g., file, database, stream, etc.).

As described above, in some embodiments, metrics are implemented as a DAG of time-state operators.
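The threshold-based span splitting in the example above can be sketched as follows, assuming discrete, inclusive time and a rising linear span value; the function name and parameters are illustrative:

```python
def split_on_threshold(span, m, b, threshold):
    """Split a span whose value is the linear function m*t + b into
    False/True output spans around the threshold crossing (discrete,
    inclusive time, as in the examples above). Assumes m > 0.
    """
    t1, t3 = span
    # Find t2: the first discrete time step at which m*t + b >= threshold
    t2 = t1
    while t2 <= t3 and m * t2 + b < threshold:
        t2 += 1
    if t2 > t3:
        return [(t1, t3, False)]          # never crosses: one False span
    if t2 == t1:
        return [(t1, t3, True)]           # already above: one True span
    return [(t1, t2 - 1, False), (t2, t3, True)]

# Hypothetical span from t=1 to t=10 with value y = 2*t, threshold 9
out = split_on_threshold((1, 10), m=2, b=0, threshold=9)
```

Here 2*t first reaches 9 at t=5, so the single input span splits into a False span (t=1..4) and a True span (t=5..10), mirroring Output_span_1 and Output_span_2 above.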
The operators are implemented to operate in the domain of timeline representations/data models, where the operators may then be composed in a variety of ways to create time-state metrics as desired. For example, the set of operators (except for “GET” and “LatestEventToState,” which encode raw events measured at points in time into spans) takes timelines as input, and outputs timelines. This allows the output of one operator to become the input of the next or subsequent operator. The following are further embodiments of timeline operator taxonomy. In some embodiments, operators are classified along the following dimensions:

Stateful vs. Stateless Operation

Number of Inputs: Unary vs. Binary vs. K-ary

Type of Input/Output Timelines: specified by the temporal type, value type, etc. for input and output timelines

Computation: Mathematical/Algebraic function to apply

FIG. 10 illustrates an embodiment of time-state operators. One example of such an operator is referred to herein as “LatestEventToState.” In some embodiments, the latest event to state operator is configured to perform stateful modeling, including treating event updates as state machine values. This includes interpreting raw event data as a state machine, where updates to events (changes in attribute values) are triggers that are interpreted as state or status changes. In some embodiments, generating a timeline request configuration includes selecting a set of timeline operators and specifying an arrangement (e.g., chain or sequence) of the timeline operators to form a time-state metric. In some embodiments, a library of timeline operators is provided for selection by a designer, where the timeline operators provided form a set of primitives from which time-state metrics are composed. In some embodiments, each timeline operator is associated with a set of code to implement the timeline operator. FIG. 11 illustrates an embodiment of a timeline operator.
In this example, the code shown is configured to take a timeline of events as input and produce a timeline of states, which may be used, for example, to compute a cumulative count of events, or interpret events as state changes.

Handling of Attributes with Continuous Values and Discrete Events

The network attribute described above is an example of a step function or state function-type event data, where the value for the attribute is one of a finite set of discrete states. The events related to the network attribute are converted into states in a timeline representation. Not every event is convertible into a state change (where, for example, the value of an attribute is constrained to being one of a finite set of values). For example, frame rate is a continuous measurement. When a change in frame rate value is received in a heartbeat, this does not necessarily mean that the frame rate was the same value between the current timestamp and the last time a frame rate measurement was received. For example, if at timestamp t=1, the frame rate was reported as 60 frames per second, and at timestamp t=10, it is reported that the frame rate was measured to be 30 frames per second, this does not necessarily mean that the frame rate had stayed at 60 frames per second between t=1 and t=10. That is, for attributes with continuous values, what is received in the raw data may include samples of measurements, which are not necessarily translatable into a finite set of states. That is, not every piece of raw data can necessarily be translated from an event to a state machine. The following are embodiments of operators for transforming attributes with continuous measurement values (of which samples are received over time) into a timeline representation, including encoding continuous measurement values into spans. As one example, suppose that a bandwidth sample measurement is received in heartbeat messages. The timestamp at which a bandwidth sample was taken is recorded.
In some embodiments, for each measurement sample, a span is created and/or a previous span is closed. For example, the timestamp corresponding with a received measurement sample is used to mark the end of one span and the start of a next span. In some embodiments, for the period between two bandwidth samples, interpolation is performed. For example, the values of the attribute during the interval of time are encoded as a time-dependent function. As one example, linear interpolation is performed between the bandwidth values. In this way, according to the linear function, the interpolated bandwidth at a time between the two timestamps is determined. This results in a timeline that has piecewise linear spans. For example, the bandwidth samples are treated as events at discrete points in time. The value for the attribute at intermediate points in time within the span between the times of two bandwidth samples is determined based on interpolation (e.g., linear interpolation, polynomial interpolation, or any other type of interpolation as appropriate). For example, in the data structure for a timeline representation of an attribute with continuous values, the value for a span (e.g., for timestamps in the span between two samples) is specified as a function of the sample values received at the beginning and end of the span. In some embodiments, the sample itself is recorded as a zero-width span, for example, with the start time set to be the same as the end time, and with the value of the zero-width span being the sample measurement that was received in the raw data. In some embodiments, the samples themselves are encoded in an event-type timeline. Duration is another example of a continuously changing value, which may be a native value from raw data, or a derived measurement. Consider for example FIG. 6C. In some embodiments, such a continuous value that changes over time is represented or encoded in a numerical-type timeline object.
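A sketch of the interpolation just described, using the frame rate samples from the earlier example (60 fps at t=1, 30 fps at t=10); the function name is an assumption:

```python
def interpolate(samples, t):
    """Value at time t for a numerical timeline built from (timestamp, value)
    samples, using linear interpolation between adjacent samples as described
    above. Returns None outside the sampled range.
    """
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            # Linear time-dependent function for the span between two samples
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return None

frame_rate = [(1, 60.0), (10, 30.0)]
```

A query at t=4 falls one third of the way through the span, giving an interpolated value of 50.0 fps even though no sample was taken at t=4.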
As one example, the timeline representation for the timeline of FIG. 6C has the following data structure representation: {start_time: 0, end_time: 9, value: 0}, {start_time: 10, end_time: 14, value: duration=(time−10)}. In this example, the possible values of the attribute within the span or interval of time are encoded in a function that, as one example, is determined based on linear extrapolation/interpolation, where the value within a span (start time to end time) is determined according to a linear function of time. In the span from t=0 to t=9, there was no change in value, the slope was 0, and the value is 0. In the span from t=10 to t=14, the value is encoded as a linear function of the form y=m*time+b, where the slope m is 1 and the intercept b is −10. In some embodiments, time is modeled in a discrete manner, such as in the examples described herein. For example, inclusive spans and discrete time are used in various embodiments of the timeline data model described herein. In other embodiments, time is modeled continuously. That is, the timeline implementation described herein may be variously adapted to accommodate discrete or continuous models for time. Another type of timeline is an event-type timeline that is used to encode discrete events. Examples of discrete events include indications of an occurrence of an action. For example, a user clicking on a button is an example of an occurrence of a discrete event, where there is not an associated state. The button press is recorded as a discrete event and provided as raw data. The following is an example of generating a timeline representation of discrete events. For such discrete events, the start time and the end time of the span are the same. In this way, the event is recorded. If the timeline is queried for a time in between the times at which the event was recorded, a null set may be returned (as that event was not known to have occurred in those in-between times).
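The zero-width-span encoding of discrete events described above might be sketched as follows (the helper names are illustrative):

```python
def record_event(spans, timestamp, value):
    """Record a discrete event as a zero-width span (start == end), as in
    the event-type timeline encoding described above."""
    spans.append((timestamp, timestamp, value))
    return spans

def events_at(spans, t):
    """Return the events recorded at time t; empty if none occurred then."""
    return [v for start, end, v in spans if start <= t <= end]

# Hypothetical button-press events at t=3 and t=11
clicks = []
record_event(clicks, 3, "button_press")
record_event(clicks, 11, "button_press")
```

Querying a time between the recorded events (e.g., t=7) returns an empty result, since the event is not known to have occurred at those in-between times.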
As described above, example types of timelines include event timeline objects for encoding discrete events, state-dynamic-type timelines for encoding step (state) functions, and numerical-type timelines for representing continuously evolving values. Span representations are generated for each type of event timeline. In the example of attributes with continuous values, while samples are received at discrete points in time, the value for spans (in the numerical-type timeline representation used to encode such continuous values) is determined as a function of the received samples. For example, rather than treating continuous values such as temperature measurements or humidity measurements as discrete values, they are treated as continuously evolving values, where samples are received, and the continuous nature of the attribute over time is represented by interpolation (where the value over time in some span is computed as a function rather than a specific value). As shown in the above, three example types of timelines supported by the platform described herein include:

StateDynamics: embodiments of this type of timeline capture a state having a value at each point in continuous time but changing at discrete points: e.g., player state and CDN state in the above examples. In some embodiments, this type of timeline captures state/step functions. For example, a step function-based timeline is generated and updated based on the latest measurement.

Numerical: embodiments of this type of timeline represent (continuous) values varying over time. Such types of timelines may also arise as intermediate representations—for example, the cumulative time spent in the connection-induced rebuffering state. Examples of span values for such a type of timeline representation include time-dependent functions.

Event: embodiments of this type of timeline capture a sequence of discrete events.
As one example, user seek events, player state updates, and CDN updates are encoded as Event-type timelines. Event-type timeline objects may entail sparse encodings, where instead of tracking whether events occurred at each timestamp/window, an event-type timeline object stores when events occur. For StateDynamics and Numerical objects, in some embodiments, spans are defined. In some embodiments, a span is an event time interval associated with either the value over that interval (for StateDynamics-type timelines) or an encoding of its evolving numerical values (for Numerical-type timelines, such as encoding via time-dependent functions). In some embodiments, StateDynamics and Numerical-type timeline objects are represented as a compact list of span elements (rather than, for example, enumerating each timestamp/window). As shown in the above examples, different types of timeline objects are used for representing different types of attributes, or are used to determine different types of encodings of values that vary over time. Different types of timeline objects are also associated with different ways of determining span starts/ends and encoding values. The use of such compact data structures facilitates the efficient implementation of semantic-aware operations over such encodings. Multidimensional Analytics The following are embodiments of performing multi-dimensional aggregations and analytics. As described above, in some embodiments, time-state metrics are computed within a certain scope, such as on a session-level basis. After session-level metrics have been computed and stored, aggregations over metadata may then be performed. For example, aggregations of the time-state metric across sessions that share one or more characteristics (e.g., device type, operating system type, location, etc.) may be performed. Examples of such roll-ups or aggregations include averages, counts, etc. 
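A sparse Event-type encoding of the kind described above might be sketched as follows. The class and method names are illustrative assumptions; the key point from the text is that only the timestamps at which events occurred are stored, and a query between recorded events yields an empty (null) set.

```python
# Hypothetical sketch of a sparse Event-type timeline: store when events occur,
# rather than tracking every timestamp/window.

class EventTimeline:
    def __init__(self):
        self.events = {}  # timestamp -> list of event payloads

    def record(self, t, payload):
        """Record a discrete event (span start time == span end time == t)."""
        self.events.setdefault(t, []).append(payload)

    def query(self, t):
        """Return events at time t; empty set for in-between times."""
        return self.events.get(t, [])

clicks = EventTimeline()
clicks.record(3, "seek")
clicks.record(9, "seek")
clicks.query(3)  # -> ['seek']
clicks.query(5)  # -> []  (no event known to have occurred at t=5)
```

This mirrors the sparse-encoding point in the text: storage grows with the number of events, not with the length of the session.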
For example, the system pre-computes the per-session metric (as the stream of raw event data is ingested) using the timeline representation described above. A user may then perform an aggregation to determine what that metric was in aggregate for all sessions that happened for Android in San Francisco, or for any other segment, as desired. That is, in a first stage, individual, per-session metrics are computed. A second stage of processing includes performing aggregation across a segment of sessions that share a set of characteristics. FIG. 12 illustrates an embodiment of determining a time-state metric. In some embodiments, process 1200 is executed by time-state metrics system 300 of FIG. 3. The process begins at 1202 when a stream of raw or observed data values of an attribute is received. Each raw data value is associated with a timestamp (e.g., the event timestamp of when an attribute value was measured, collected, reported, sampled, etc.). At 1204, the received stream of raw data values is converted into a timeline representation of the attribute over time. In some embodiments, the converting includes encoding the received stream of raw data values into a sequence of one or more spans or span elements. In some embodiments, a span element includes a span start time, a span end time, and a span value. In some embodiments, the span value includes an encoding of the value (or values) of the attribute over a time interval determined by the start time and the end time of the span. For example, a span is an event time interval associated with either the value over that interval (e.g., for attributes with finite states), or an encoding of its evolving (over time) numerical values (e.g., for attributes with continuous values). As one example, the value associated with a span is (semantically) encoded as a constant value (e.g., state) for the interval of time.
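The two-stage flow described above (per-session metrics first, then aggregation over a segment sharing dimensional attributes) can be sketched as follows. The session records, field names, and `aggregate` helper are all illustrative assumptions, not the patent's API.

```python
# Hypothetical sketch of stage two: aggregating a precomputed per-session
# metric across the segment of sessions matching given dimensional attributes.

# Stage-one output (assumed): one precomputed metric value per session.
sessions = [
    {"os": "Android", "city": "San Francisco", "rebuffer_s": 4.0},
    {"os": "Android", "city": "San Francisco", "rebuffer_s": 2.0},
    {"os": "iOS",     "city": "San Francisco", "rebuffer_s": 9.0},
]

def aggregate(sessions, metric, **dims):
    """Average a per-session metric over sessions matching all given dimensions."""
    segment = [s for s in sessions
               if all(s[k] == v for k, v in dims.items())]
    return sum(s[metric] for s in segment) / len(segment)

aggregate(sessions, "rebuffer_s", os="Android", city="San Francisco")  # -> 3.0
```

Separating the two stages means the expensive timeline computation happens once per session at ingest, while segment roll-ups (averages, counts, etc.) are cheap queries over the stored per-session values.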
For example, raw events with observed values at sparse, discrete points in time are converted or encoded into values specified over intervals of time, allowing for the value of the attribute to be determined over all time (and not just at the timestamps corresponding to when events were observed or reported). As another example, the value associated with a span is encoded as a time-dependent function that is valid over, or otherwise applicable to, the interval of time (e.g., a function that determines, as a function of time, the value of the attribute at points in time within the span's specified time interval). In some embodiments, the timeline objects representing attributes over time are represented as a sequence of spans. As one example, a timeline object is represented as a compact list of span elements. As another example, a timeline object is represented as a table, where each row represents a span, and the table includes columns for span parameters, such as span start time, span end time, and span value. Further details regarding encoding of raw or observed data values into spans of a timeline representation are described above. At 1206, a time-state metric is computed according to a timeline request configuration. The timeline request configuration includes one or more timeline operations. The time-state metric is computed at least in part by performing a timeline operation on the timeline representation of the attribute. Further details regarding timeline request configurations, composing of time-state metrics using a set of timeline operators, graph representations of time-state metrics, etc. are described above. In some embodiments, determining the time-state metric includes performing computations on, or otherwise combining, timeline representations of multiple different attributes whose values may vary over time (and where the attributes may be of different data types).
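The conversion step at 1204 for continuous attributes can be sketched as follows: sparse samples become spans whose value is a time-dependent linear function, so the attribute has a value over all time. The function name and span tuple layout are assumptions for illustration.

```python
# Hypothetical sketch: encode sparse numerical samples (e.g., temperature
# readings) as spans carrying a linear function of time.

def to_linear_spans(samples):
    """samples: sorted (time, value) pairs.
    Returns (start, end, slope, intercept) spans interpolating between samples."""
    spans = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        m = (v1 - v0) / (t1 - t0)          # slope of value = m * time + b
        spans.append((t0, t1, m, v0 - m * t0))
    return spans

# Two temperature samples, 20.0 at t=0 and 30.0 at t=10
spans = to_linear_spans([(0, 20.0), (10, 30.0)])
t0, t1, m, b = spans[0]
m * 5 + b  # -> 25.0, the interpolated value at t=5, between the two samples
```

This matches the table representation mentioned in the text as well: each tuple here is one row, with columns for span start time, span end time, and the (functional) span value.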
FIG. 13 illustrates an embodiment of performing multi-dimensional aggregation of time-state metrics. In some embodiments, process 1300 is executed by multidimensional analytics engine 208 of FIG. 2. The process begins at 1302 when a first value of a time-state metric computed for a first entity is received. In some embodiments, the first value of the time-state metric is computed using process 1200 of FIG. 12. For example, the first value of the time-state metric is computed at least in part by performing, according to a timeline request configuration, one or more timeline operations on a first timeline representation of an attribute for the first entity, where the first timeline representation is generated at least in part by converting a first stream of raw data values of the attribute associated with the first entity into a first set of span data structures comprised in the first timeline representation of the attribute for the first entity. At 1304, a second value of the time-state metric computed for a second entity is received. In some embodiments, the second value of the time-state metric is computed using process 1200 of FIG. 12. For example, the second value of the time-state metric is computed at least in part by performing, according to the timeline request configuration, the one or more timeline operations on a second timeline representation of the attribute associated with the second entity, generated at least in part by converting a second stream of raw data values of the attribute associated with the second entity into a second set of span data structures comprised in the second timeline representation of the attribute for the second entity. At 1306, an aggregate value for the time-state metric is determined at least in part by combining the first value of the time-state metric and the second value of the time-state metric computed, respectively, for the first entity and the second entity.
For example, the first entity and the second entity are grouped together into a segment based at least in part on one or more shared dimensional attributes. For example, the first entity and the second entity are two video streaming sessions that are grouped together based on the first and second sessions sharing dimensional attributes in common (e.g., common CDN, ISP, device type, etc.). Based on the grouping of the first entity and the second entity into the segment, an aggregate value for the time-state metric is determined by performing an aggregation on the first and second values of the time-state metric. Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways of implementing the invention. The disclosed embodiments are illustrative and not restrictive.
11943124
DETAILED DESCRIPTION A system, method, and computer-readable medium are disclosed for performing a connectivity management operation. Various aspects of the invention reflect an appreciation that it is common for a typical data center to monitor and manage many different assets, such as certain computing and networking devices, described in greater detail herein. Certain aspects of the invention likewise reflect an appreciation that such data center assets are typically implemented to work in combination with one another for a particular purpose. Likewise, various aspects of the invention reflect an appreciation that such purposes generally involve the performance of a wide variety of tasks, operations, and processes to service certain workloads. Certain aspects of the invention likewise reflect an appreciation that the use of cloud-based data center management systems often proves to be advantageous, as they allow monitoring and management functions to be performed from anywhere, at any time, according to the user's particular needs, and typically at a reduced cost. However, various aspects of the invention likewise reflect an appreciation that the use of such cloud-based approaches may pose certain challenges. For example, communication channels are typically one-way and hindered by firewalls, proxies, and complicated network set-ups. Accordingly, certain aspects of the invention reflect an appreciation that there is a need for an always-connected, bidirectional connection to managed data center assets located on the customer's premises such that management actions can be securely performed in real-time.
For purposes of this disclosure, an information handling system may include any instrumentality or aggregate of instrumentalities operable to compute, classify, process, transmit, receive, retrieve, originate, switch, store, display, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, or other purposes. For example, an information handling system may be a personal computer, a network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price. The information handling system may include random access memory (RAM), one or more processing resources such as a central processing unit (CPU) or hardware or software control logic, ROM, and/or other types of nonvolatile memory. Additional components of the information handling system may include one or more disk drives, one or more network ports for communicating with external devices as well as various input and output (I/O) devices, such as a keyboard, a mouse, and a video display. The information handling system may also include one or more buses operable to transmit communications between the various hardware components. FIG.1is a generalized illustration of an information handling system100that can be used to implement the system and method of the present invention. The information handling system100includes a processor (e.g., central processor unit or “CPU”)102, input/output (I/O) devices104, such as a display, a keyboard, a mouse, a touchpad or touchscreen, and associated controllers, a hard drive or disk storage106, and various other subsystems108. In various embodiments, the information handling system100also includes network port110operable to connect to a network140, which is likewise accessible by a service provider server142. The information handling system100likewise includes system memory112, which is interconnected to the foregoing via one or more buses114. 
System memory112further comprises operating system (OS)116and in various embodiments may also comprise a data center monitoring and management console118. In one embodiment, the information handling system100is able to download the data center monitoring and management console118from the service provider server142. In another embodiment, the data center monitoring and management console118is provided as a service from the service provider server142. In certain embodiments, the data center monitoring and management console118may include a monitoring module120, a management module122, an analysis engine124, a connectivity management system (CMS)126, a CMS client130, or a combination thereof. In certain embodiments, the CMS126may be implemented to include a CMS aggregator128. In certain embodiments, the data center monitoring and management console118may be implemented to perform a data center monitoring and management operation. In certain embodiments, the information handling system100may be implemented to include either a CMS126, or a CMS client130, or both. In certain embodiments, the data center monitoring and management operation may be performed during operation of an information handling system100. In various embodiments, performance of the data center monitoring and management operation may result in the realization of improved monitoring and management of certain data center assets, as described in greater detail herein. In certain embodiments, the CMS126may be implemented in combination with the CMS client130to perform a connectivity management operation, described in greater detail herein. As an example, the CMS126may be implemented on one information handling system100, while the CMS client130may be implemented on another, as likewise described in greater detail herein. FIG.2is a simplified block diagram of a data center monitoring and management environment implemented in accordance with an embodiment of the invention. 
As used herein, a data center broadly refers to a building, a dedicated space within a building, or a group of buildings, used to house a collection of interrelated data center assets 244 implemented to work in combination with one another for a particular purpose. As likewise used herein, a data center asset 244 broadly refers to anything, tangible or intangible, that can be owned, controlled, or enabled to produce value as a result of its use within a data center. In certain embodiments, a data center asset 244 may include a product, or a service, or a combination of the two. As used herein, a tangible data center asset 244 broadly refers to a data center asset 244 having a physical substance, such as a computing or network device. Examples of computing devices may include personal computers (PCs), laptop PCs, tablet computers, servers, mainframe computers, Redundant Arrays of Independent Disks (RAID) storage units, their associated internal and external components, and so forth. Likewise, examples of network devices may include routers, switches, hubs, repeaters, bridges, gateways, and so forth. Other examples of a tangible data center asset 244 may include certain data center personnel, such as a data center system administrator, operator, or technician, and so forth. Other examples of a tangible data center asset 244 may include certain maintenance, repair, and operations (MRO) items, such as replacement and upgrade parts for a particular data center asset 244. In certain embodiments, such MRO items may be in the form of consumables, such as air filters, fuses, fasteners, and so forth. As likewise used herein, an intangible data center asset 244 broadly refers to a data center asset 244 that lacks physical substance. Examples of intangible data center assets 244 may include software applications, software services, firmware code, and other non-physical, computer-based assets.
Other examples of intangible data center assets 244 may include digital assets, such as structured and unstructured data of all kinds, still images, video images, audio recordings of speech and other sounds, and so forth. Further examples of intangible data center assets 244 may include intellectual property, such as patents, trademarks, copyrights, trade names, franchises, goodwill, and knowledge resources, such as data center asset 244 documentation. Yet other examples of intangible data center assets 244 may include certain tasks, functions, operations, procedures, or processes performed by data center personnel. Those of skill in the art will recognize that many such examples of tangible and intangible data center assets 244 are possible. Accordingly, the foregoing is not intended to limit the spirit, scope, or intent of the invention. In certain embodiments, the value produced by a data center asset 244 may be tangible or intangible. As used herein, tangible value broadly refers to value that can be measured. Examples of tangible value may include return on investment (ROI), total cost of ownership (TCO), internal rate of return (IRR), increased performance, more efficient use of resources, improvement in sales, decreased customer support costs, and so forth. As likewise used herein, intangible value broadly refers to value that provides a benefit that may be difficult to measure. Examples of intangible value may include improvements in user experience, customer support, and market perception. Skilled practitioners of the art will recognize that many such examples of tangible and intangible value are possible. Accordingly, the foregoing is not intended to limit the spirit, scope, or intent of the invention. In certain embodiments, the data center monitoring and management environment 200 may include a data center monitoring and management console 118.
In certain embodiments, the data center monitoring and management console118may be implemented to perform a data center monitoring and management operation. As used herein, a data center monitoring and management operation broadly refers to any task, function, procedure, or process performed, directly or indirectly, within a data center monitoring and management environment200to procure, deploy, configure, implement, operate, monitor, manage, maintain, or remediate a data center asset244. In certain embodiments, a data center monitoring and management operation may include a data center monitoring task. As used herein, a data center monitoring task broadly refers to any function, operation, procedure, or process performed, directly or indirectly, within a data center monitoring and management environment200to monitor the operational status of a particular data center asset244. In various embodiments, a particular data center asset244may be implemented to generate an alert if its operational status exceeds certain parameters. In these embodiments, the definition of such parameters, and the method by which they may be selected, is a matter of design choice. For example, an internal cooling fan of a server may begin to fail, which in turn may cause the operational temperature of the server to exceed its rated level. In this example, the server may be implemented to generate an alert, which provides notification of the occurrence of a data center issue. As used herein, a data center issue broadly refers to an operational situation associated with a particular component of a data monitoring and management environment200, which if not corrected, may result in negative consequences. In certain embodiments, a data center issue may be related to the occurrence, or predicted occurrence, of an anomaly within the data center monitoring and management environment200. 
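The monitoring pattern described above (an asset generating an alert when an operational parameter exceeds its rated level, as in the cooling-fan example) can be sketched as follows. The rated temperature, function name, and alert fields are all hypothetical; the patent notes that such parameters and their selection are a matter of design choice.

```python
# Illustrative sketch of threshold-based alert generation for a data center
# asset; the threshold and field names are assumptions, not from the source.

RATED_MAX_TEMP_C = 75.0  # hypothetical rated operating temperature

def check_asset(asset_id, temp_c, rated_max=RATED_MAX_TEMP_C):
    """Return an alert describing a data center issue if the reading exceeds
    the rated level, or None if the asset is within its parameters."""
    if temp_c > rated_max:
        return {"asset": asset_id, "issue": "over-temperature",
                "reading_c": temp_c, "rated_max_c": rated_max}
    return None

check_asset("server-42", 82.5)  # alert: operational temperature exceeds rated level
check_asset("server-42", 60.0)  # -> None, no data center issue detected
```

In a deployment of the kind described, such alerts would be forwarded to the monitoring console, which decides whether a data center issue has occurred, is occurring, or is anticipated to occur.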
In certain embodiments, the anomaly may be related to unusual or unexpected behavior of one or more data center assets244. In certain embodiments, a data center monitoring and management operation may include a data center management task. As used herein, a data center management task broadly refers to any function, operation, procedure, or process performed, directly or indirectly, within a data center monitoring and management environment200to manage a particular data center asset244. In certain embodiments, a data center management task may include a data center deployment operation, a data center remediation operation, a data center remediation documentation operation, a connectivity management operation, or a combination thereof. As used herein, a data center deployment operation broadly refers to any function, task, procedure, or process performed, directly or indirectly, within a data center monitoring and management environment200to install a software file, such as a configuration file, a new software application, a version of an operating system, and so forth, on a data center asset244. As likewise used herein, a data center remediation operation broadly refers to any function, task, procedure, or process performed, directly or indirectly, within a data center monitoring and management environment200to correct an operational situation associated with a component of a data monitoring and management environment200, which if not corrected, may result in negative consequences. A data center remediation documentation operation, as likewise used herein, broadly refers to any function, task, procedure, or process performed, directly or indirectly, within a data center monitoring and management environment200to retrieve, generate, revise, update, or store remediation documentation that may be used in the performance of a data center remediation operation. 
Likewise, as used herein, a connectivity management operation (also referred to as a data center connectivity management operation) broadly refers to any task, function, procedure, or process performed, directly or indirectly, to manage connectivity between a particular data center asset 244 and a particular data center monitoring and management console 118. In various embodiments, one or more connectivity management operations may be performed to ensure that data exchanged between a particular data center asset 244 and a particular data center monitoring and management console 118 during a communication session is secured. In certain of these embodiments, as described in greater detail herein, various cryptographic approaches familiar to skilled practitioners of the art may be used to secure a particular communication session. In certain embodiments, the data center monitoring and management console 118 may be implemented to receive an alert corresponding to a particular data center issue. In various embodiments, the data center monitoring and management console 118 may be implemented to receive certain data associated with the operation of a particular data center asset 244. In certain embodiments, such operational data may be received through the use of telemetry approaches familiar to those of skill in the art. In various embodiments, the data center monitoring and management console 118 may be implemented to process certain operational data received from a particular data center asset to determine whether a data center issue has occurred, is occurring, or is anticipated to occur. In certain embodiments, the data center monitoring and management console 118 may be implemented to include a monitoring module 120, a management module 122, an analysis engine 124, and a connectivity management system (CMS) 126, or a combination thereof.
In certain embodiments, the monitoring module 120 may be implemented to monitor the procurement, deployment, implementation, operation, management, maintenance, or remediation of a particular data center asset 244 at any point in its lifecycle. In certain embodiments, the management module 122 may be implemented to manage the procurement, deployment, implementation, operation, monitoring, maintenance, or remediation of a particular data center asset 244 at any point in its lifecycle. In various embodiments, the monitoring module 120, the management module 122, the analysis engine 124, and the CMS 126 may be implemented, individually or in combination with one another, to perform a data center asset monitoring and management operation, as likewise described in greater detail herein. In various embodiments, a CMS client 130 may be implemented on certain user devices 204, or certain data center assets 244, or a combination thereof. In certain embodiments, the CMS 126 may be implemented in combination with a particular CMS client 130 to perform a connectivity management operation, as described in greater detail herein. In certain embodiments, the data center monitoring and management environment 200 may include a repository of data center monitoring and management data 220. In certain embodiments, the repository of data center monitoring and management data 220 may be local to the information handling system 100 executing the data center monitoring and management console 118 or may be located remotely. In various embodiments, the repository of data center monitoring and management data 220 may include certain information associated with data center asset data 222, data center asset configuration rules 224, data center infrastructure data 226, data center remediation data 228, and data center personnel data 230.
As used herein, data center asset data222broadly refers to information associated with a particular data center asset244, such as an information handling system100, or an associated workload, that can be read, measured, and structured into a usable format. For example, data center asset data222associated with a particular server may include the number and type of processors it can support, their speed and architecture, minimum and maximum amounts of memory supported, various storage configurations, the number, type, and speed of input/output channels and ports, and so forth. In various embodiments, the data center asset data222may likewise include certain performance and configuration information associated with a particular workload, as described in greater detail herein. In various embodiments, the data center asset data222may include certain public or proprietary information related to data center asset244configurations associated with a particular workload. In certain embodiments, the data center asset data222may include information associated with data center asset244types, quantities, locations, use types, optimization types, workloads, performance, support information, and cost factors, or a combination thereof, as described in greater detail herein. In certain embodiments, the data center asset data222may include information associated with data center asset244utilization patterns, likewise described in greater detail herein. In certain embodiments, the data center asset data222may include information associated with the allocation of certain data center asset resources, described in greater detail herein, to a particular workload. As likewise used herein, a data center asset configuration rule224broadly refers to a rule used to configure a particular data center asset244. 
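As a concrete illustration of asset data "structured into a usable format" as the passage describes, a record for a server might look like the following. Every field name and value here is a hypothetical example, not a schema from the patent.

```python
# Hypothetical data center asset data record for a server, covering the kinds
# of fields the passage enumerates (processors, memory, I/O, location, workload).

server_asset = {
    "asset_id": "srv-001",
    "asset_type": "server",
    "processors": {"count": 2, "type": "x86-64", "speed_ghz": 3.1},
    "memory_gb": {"min": 16, "max": 1024},          # supported memory range
    "io_ports": [{"type": "10GbE", "count": 4}],    # I/O channels and ports
    "location": "rack-12/u07",
    "workload": "database",                         # associated workload
}

server_asset["memory_gb"]["max"]  # -> 1024
```

Structuring asset data this way makes it straightforward for a monitoring and management console to query asset capabilities, locations, and workload assignments.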
In certain embodiments, one or more data center asset configuration rules 224 may be used to verify that a particular data center asset 244 configuration is the most optimal for an associated location, or workload, or to interact with other data center assets 244, or a combination thereof, as described in greater detail herein. In certain embodiments, the data center asset configuration rule 224 may be used in the performance of a data center asset configuration verification operation, a data center remediation operation, or a combination of the two. In certain embodiments, the data center asset configuration verification operation, or the data center remediation operation, or both, may be performed by an asset configuration system 250. In certain embodiments, the asset configuration system 250 may be used in combination with the data center monitoring and management console 118 to perform a data center asset configuration operation, or a data center remediation operation, or a combination of the two. As used herein, data center infrastructure data 226 broadly refers to any data associated with a data center infrastructure component. As likewise used herein, a data center infrastructure component broadly refers to any component of a data center monitoring and management environment 200 that may be involved, directly or indirectly, in the procurement, deployment, implementation, configuration, operation, monitoring, management, maintenance, or remediation of a particular data center asset 244. In certain embodiments, data center infrastructure components may include physical structures, such as buildings, equipment racks and enclosures, network and electrical cabling, heating, ventilation, and air conditioning (HVAC) equipment and associated ductwork, electrical transformers and power conditioning systems, water pumps and piping systems, smoke and fire suppression systems, physical security systems and associated peripherals, and so forth.
In various embodiments, data center infrastructure components may likewise include the provision of certain services, such as network connectivity, conditioned airflow, electrical power, and water, or a combination thereof. Data center remediation data 228, as used herein, broadly refers to any data associated with the performance of a data center remediation operation, described in greater detail herein. In certain embodiments, the data center remediation data 228 may include information associated with the remediation of a particular data center issue, such as the date and time an alert was received indicating the occurrence of the data center issue. In certain embodiments, the data center remediation data 228 may likewise include the amount of elapsed time before a corresponding data center remediation operation was begun after receiving the alert, and the amount of elapsed time before it was completed. In various embodiments, the data center remediation data 228 may include information related to certain data center issues, the frequency of their occurrence, their respective causes, error codes associated with such data center issues, the respective location of each data center asset 244 associated with such data center issues, and so forth. In various embodiments, the data center remediation data 228 may include information associated with data center asset 244 replacement parts, or upgrades, or certain third party services that may need to be procured in order to perform the data center remediation operation. Likewise, in certain embodiments, related data center remediation data 228 may include the amount of elapsed time before the replacement parts, or data center asset 244 upgrades, or third party services were received and implemented. In certain embodiments, the data center remediation data 228 may include information associated with data center personnel who may have performed a particular data center remediation operation.
Likewise, in certain embodiments, related data center remediation data228may include the amount of time the data center personnel actually spent performing the operation, issues encountered in performing the operation, and the eventual outcome of the operation that was performed. In certain embodiments, the data center remediation data228may include remediation documentation associated with performing a data center asset remediation operation associated with a particular data center asset244. In various embodiments, such remediation documentation may include information associated with certain attributes, features, characteristics, functional capabilities, operational parameters, and so forth, of a particular data center asset244. In certain embodiments, such remediation documentation may likewise include information, such as step-by-step procedures and associated instructions, video tutorials, diagnostic routines and tests, checklists, and so forth, associated with remediating a particular data center issue. In certain embodiments, the data center remediation data228may include information associated with any related remediation dependencies, such as other data center remediation operations that may need to be performed beforehand. In certain embodiments, the data center remediation data228may include certain time restrictions when a data center remediation operation, such as rebooting a particular server, may be performed. In various embodiments, the data center remediation data228may likewise include certain autonomous remediation rules, described in greater detail herein. In various embodiments, certain of these autonomous remediation rules may be used in the performance of an autonomous remediation operation, described in greater detail herein. Those of skill in the art will recognize that many such examples of data center remediation data228are possible. Accordingly, the foregoing is not intended to limit the spirit, scope, or intent of the invention. 
Data center personnel data230, as used herein, broadly refers to any data associated with data center personnel who may be directly, or indirectly, involved in the procurement, deployment, configuration, implementation, operation, monitoring, management, maintenance, or remediation of a particular data center asset244. In various embodiments, the data center personnel data230may include job title, work assignment, or responsibility information corresponding to certain data center personnel. In various embodiments, the data center personnel data230may include information related to the type, and number, of data center remediation operations currently being, or previously, performed by certain data center personnel. In various embodiments, the data center personnel data230may include historical information, such as success metrics, associated with data center remediation operations performed by certain data center personnel, such as data center administrators, operators, and technicians. In these embodiments, the data center personnel data230may be updated as individual data center personnel complete each data center remediation task, described in greater detail herein, they are assigned. In various embodiments, the data center personnel data230may likewise include education, certification, and skill level information corresponding to certain data center personnel. Likewise, in various embodiments, the data center personnel data230may include security-related information, such as security clearances, user IDs, passwords, security-related biometrics, authorizations, and so forth, corresponding to certain data center personnel. Those of skill in the art will recognize that many such examples of data center personnel data230are possible. Accordingly, the foregoing is not intended to limit the spirit, scope, or intent of the invention. 
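The historical success metrics mentioned above, updated as individual personnel complete each assigned task, could be tracked with a structure along these lines. The class, field names, and sample data are hypothetical illustrations:

```python
# Hypothetical sketch of per-technician success metrics that are updated
# as each assigned remediation task completes; all names are assumptions.
class TechnicianRecord:
    def __init__(self, name, certifications):
        self.name = name
        self.certifications = set(certifications)
        self.tasks_completed = 0
        self.tasks_succeeded = 0

    def record_task(self, succeeded: bool):
        # Update the historical metrics as each assigned task finishes.
        self.tasks_completed += 1
        if succeeded:
            self.tasks_succeeded += 1

    @property
    def success_rate(self) -> float:
        if not self.tasks_completed:
            return 0.0
        return self.tasks_succeeded / self.tasks_completed

tech = TechnicianRecord("pat", ["RAID", "VMware"])
tech.record_task(True)
tech.record_task(True)
tech.record_task(False)
print(f"{tech.success_rate:.2f}")  # 0.67
```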
In certain embodiments, various data center assets244within a data center monitoring and management environment200may have certain interdependencies. As an example, a data center monitoring and management environment200may have multiple servers interconnected by a storage area network (SAN) providing block-level access to various disk arrays and tape libraries. In this example, the servers, various physical and operational elements of the SAN, as well as the disk arrays and tape libraries, are interdependent upon one another. In certain embodiments, each data center asset244in a data center monitoring and management environment200may be treated as a separate data center asset244and depreciated individually according to its respective attributes. As an example, a particular rack of servers in a data center monitoring and management environment200may be made up of a variety of individual servers, each of which may have a different depreciation schedule. To continue the example, certain of these data center assets244may be implemented in different combinations to produce an end result. To further illustrate the example, a particular server in the rack of servers may initially be implemented to query a database of customer records. As another example, the same server may be implemented at a later time to perform an analysis of sales associated with those same customer records. In certain embodiments, each data center asset244in a data center monitoring and management environment200may have an associated maintenance schedule and service contract. For example, a data center monitoring and management environment200may include a wide variety of servers and storage arrays, which may respectively be manufactured by a variety of manufacturers. In this example, the frequency and nature of scheduled maintenance, as well as service contract terms and conditions, may be different for each server and storage array. 
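Depreciating each asset individually according to its own schedule, as described above, can be illustrated with a simple straight-line calculation. The figures and function names are assumptions for the example, not values from the disclosure:

```python
# Minimal straight-line depreciation sketch, assuming each asset carries
# its own cost, salvage value, and schedule length; figures are illustrative.
def straight_line_depreciation(cost, salvage, years):
    """Annual depreciation expense for a single asset."""
    return (cost - salvage) / years

def book_value(cost, salvage, years, years_elapsed):
    # Book value never drops below the salvage value, and depreciation
    # stops once the schedule length has elapsed.
    annual = straight_line_depreciation(cost, salvage, years)
    return max(cost - annual * min(years_elapsed, years), salvage)

# Two servers in the same rack, each depreciated individually.
print(book_value(cost=10000, salvage=1000, years=3, years_elapsed=2))  # 4000.0
print(book_value(cost=6000, salvage=600, years=5, years_elapsed=2))    # 3840.0
```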
In certain embodiments, the individual data center assets244in a data center monitoring and management environment200may be configured differently, according to their intended use. To continue the previous example, various servers may be configured with faster or additional processors for one intended workload, while other servers may be configured with additional memory for other intended workloads. Likewise, certain storage arrays may be configured as one RAID configuration, while others may be configured as a different RAID configuration. In certain embodiments, the data center monitoring and management environment200may likewise be implemented to include an asset configuration system250, a product configuration system252, a product fabrication system254, and a supply chain system256, or a combination thereof. In various embodiments, the asset configuration system250may be implemented to perform certain data center asset244configuration operations. In certain embodiments, the data center asset244configuration operation may be performed to configure a particular data center asset244for a particular purpose. In certain embodiments, the data center monitoring and management console118may be implemented to interact with the asset configuration system250to perform a particular data center asset244configuration operation. In various embodiments, the asset configuration system250may be implemented to generate, manage, and provide, or some combination thereof, data center asset configuration rules224. In certain of these embodiments, the data center asset configuration rules224may be used to configure a particular data center asset244for a particular purpose. In certain embodiments, a user202may use a user device204to interact with the data center monitoring and management console118. 
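Applying configuration rules to prepare an asset for a particular purpose, as described above, might look like the following sketch. The rule table, purposes, and settings are hypothetical assumptions, not rules defined by the disclosure:

```python
# Hypothetical purpose-keyed configuration rules; the purposes and
# settings below are illustrative assumptions only.
CONFIGURATION_RULES = {
    "database-query": {"cpu_cores": 32, "memory_gb": 64, "raid_level": "RAID 10"},
    "analytics":      {"cpu_cores": 16, "memory_gb": 256, "raid_level": "RAID 5"},
}

def configure_asset(asset: dict, purpose: str) -> dict:
    # Merge the rule set for the intended purpose into the asset's settings.
    rules = CONFIGURATION_RULES.get(purpose)
    if rules is None:
        raise ValueError(f"no configuration rules for purpose: {purpose}")
    configured = dict(asset)
    configured.update(rules)
    configured["purpose"] = purpose
    return configured

server = {"asset_id": "srv-244", "cpu_cores": 8, "memory_gb": 32}
print(configure_asset(server, "analytics")["memory_gb"])  # 256
```

Keeping the rules in a separate table, rather than hard-coding them per asset, mirrors the idea of an asset configuration system that generates and manages configuration rules independently of the assets they configure.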
As used herein, a user device204refers to an information handling system such as a personal computer, a laptop computer, a tablet computer, a personal digital assistant (PDA), a smart phone, a mobile telephone, or other device that is capable of processing and communicating data. In certain embodiments, the communication of the data may take place in real-time or near-real-time. As used herein, real-time broadly refers to processing and providing information within a time interval brief enough to not be discernable by a user202. In certain embodiments, a user device204may be implemented with a camera206, such as a video camera known to skilled practitioners of the art. In certain embodiments, the camera206may be integrated into the user device204. In certain embodiments, the camera206may be implemented as a separate device configured to interoperate with the user device204. As an example, a webcam familiar to those of skill in the art may be implemented to receive and communicate various image and audio signals to a user device204via a Universal Serial Bus (USB) interface. In certain embodiments, the user device204may be configured to present a data center monitoring and management console user interface (UI)240. In certain embodiments, the data center monitoring and management console UI240may be implemented to present a graphical representation242of data center asset monitoring and management information, which is automatically generated in response to interaction with the data center monitoring and management console118. In certain embodiments, a data center monitoring and management application238may be implemented on a particular user device204. In various embodiments, the data center monitoring and management application238may be implemented on a mobile user device204, such as a laptop computer, a tablet computer, a smart phone, a dedicated-purpose mobile device, and so forth. 
In certain of these embodiments, the mobile user device204may be used at various locations within the data center monitoring and management environment200by the user202when performing a data center monitoring and management operation, described in greater detail herein. In various embodiments, the data center monitoring and management application238may be implemented to facilitate a user202, such as a data center administrator, operator, or technician, to perform a particular data center remediation operation. In various embodiments, such facilitation may include using the data center monitoring and management application238to receive a notification of a data center remediation task, described in greater detail herein, being assigned to the user. In certain embodiments, the data center monitoring and management console118may be implemented to generate the notification of the data center remediation task assignment, and assign it to the user, as likewise described in greater detail herein. In certain embodiments, the data center monitoring and management console118may be implemented to generate the data center remediation task, and once generated, provide it to the data center monitoring and management application238associated with the assigned user202. In certain embodiments, such facilitation may include using the data center monitoring and management application238to receive the data center remediation task from the data center monitoring and management console118. In various embodiments, such facilitation may include using the data center monitoring and management application238to confirm that the user202is at the correct physical location of a particular data center asset244associated with a corresponding data center issue. 
In certain of these embodiments, the data center monitoring and management application238may be implemented to include certain Global Positioning System (GPS) capabilities, familiar to those of skill in the art, which may be used to determine the physical location of the user202in relation to the physical location of a particular data center asset244. In various embodiments, such facilitation may include using the data center monitoring and management application238to ensure the user202is aware of, or is provided the location of, or receives, or a combination thereof, certain remediation resources, described in greater detail herein, that may be needed to perform a particular data center remediation operation. In various embodiments, such facilitation may include using the data center monitoring and management application238to view certain remediation documentation, or augmented instructions, related to performing a particular data center remediation operation. In various embodiments, such facilitation may include using the data center monitoring and management application238to certify that a particular data center remediation operation has been performed successfully. In certain embodiments the UI window240may be implemented as a UI window of the data center monitoring and management application238. In various embodiments, the data center monitoring and management application238may be implemented to include, in part or in whole, certain functionalities associated with the data center monitoring and management console118. In certain embodiments, the data center monitoring and management application238may be implemented to interact in combination with the data center monitoring and management console118, and other components of the data center monitoring and management environment200, to perform a data center monitoring and management operation. 
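Confirming that a user is at the correct physical location of an asset, as described above, could be sketched by comparing GPS coordinates with the haversine formula. The coordinates and the 50-meter threshold are assumptions for the example:

```python
import math

# Sketch of a GPS proximity check; the 50 m threshold and the sample
# coordinates are illustrative assumptions, not from the disclosure.
def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def user_at_asset(user_pos, asset_pos, threshold_m=50.0):
    # True when the user's reported position falls within the threshold
    # distance of the asset's recorded position.
    return haversine_m(*user_pos, *asset_pos) <= threshold_m

# A user standing a few meters from the asset versus several hundred meters away.
print(user_at_asset((30.2672, -97.7431), (30.26725, -97.74315)))  # True
print(user_at_asset((30.2672, -97.7431), (30.2700, -97.7431)))    # False
```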
In certain embodiments, the user device204may be used to exchange information between the user202and the data center monitoring and management console118, the data center monitoring and management application238, the asset configuration system250, the product configuration system252, the product fabrication system254, and the supply chain system256, or a combination thereof, through the use of a network140. In various embodiments, the asset configuration system250may be implemented to configure a particular data center asset244to meet certain performance goals. In various embodiments, the asset configuration system250may be implemented to use certain data center monitoring and management data220, certain data center asset configuration rules224it may generate or manage, or a combination thereof, to perform such configurations. In various embodiments, the product configuration system252may be implemented to use certain data center monitoring and management data220to optimally configure a particular data center asset244, such as a server, for an intended workload. In various embodiments, the data center monitoring and management data220used by the product configuration system252may have been generated as a result of certain data center monitoring and management operations, described in greater detail herein, being performed by the data center monitoring and management console118. In various embodiments, the product configuration system252may be implemented to provide certain product configuration information to a product fabrication system254. In various embodiments, the product fabrication system254may be implemented to provide certain product fabrication information to a product fabrication environment (not shown). In certain embodiments, the product fabrication information may be used by the product fabrication environment to fabricate a product, such as a server, to match a particular data center asset244configuration. 
In various embodiments, the data center monitoring and management console UI240may be presented via a website (not shown). In certain embodiments, the website may be provided by one or more of the data center monitoring and management console118, the asset configuration system250, the product configuration system252, the product fabrication system254, or the supply chain system256. In certain embodiments, the supply chain system256may be implemented to manage the provision, fulfillment, or deployment of a particular data center asset244produced in the product fabrication environment. For the purposes of this disclosure a website may be defined as a collection of related web pages which are identified with a common domain name and published on at least one web server. A website may be accessible via a public IP network or a private local network. A web page is a document which is accessible via a browser which displays the web page via a display device of an information handling system. In various embodiments, the web page also includes the file which causes the document to be presented via the browser. In various embodiments, the web page may comprise a static web page, which is delivered exactly as stored, or a dynamic web page, which is generated by a web application that is driven by software that enhances the web page via user input208to a web server. In certain embodiments, the data center monitoring and management console118may be implemented to interact with the asset configuration system250, the product configuration system252, the product fabrication system254, and the supply chain or fulfillment system256, or a combination thereof, each of which in turn may be executing on a separate information handling system100. 
In certain embodiments, the data center monitoring and management console118may be implemented to interact with the asset configuration system250, the product configuration system252, the product fabrication system254, and the supply chain or fulfillment system256, or a combination thereof, to perform a data center monitoring and management operation, as described in greater detail herein. FIG.3shows a functional block diagram of the performance of certain data center monitoring and management operations implemented in accordance with an embodiment of the invention. In various embodiments, a data center monitoring and management environment200, described in greater detail herein, may be implemented to include one or more data centers, such as data centers ‘1’346through ‘n’348. As likewise described in greater detail herein, each of the data centers ‘1’346through ‘n’348may be implemented to include one or more data center assets244, likewise described in greater detail herein. In certain embodiments, a data center asset244may be implemented to process an associated workload360. A workload360, as used herein, broadly refers to a measure of information processing that can be performed by one or more data center assets244, individually or in combination with one another, within a data center monitoring and management environment200. In certain embodiments, a workload360may be implemented to be processed in a virtual machine (VM) environment, familiar to skilled practitioners of the art. In various embodiments, a workload360may be implemented to be processed as a containerized workload360, likewise familiar to those of skill in the art. In certain embodiments, as described in greater detail herein, the data center monitoring and management environment200may be implemented to include a data center monitoring and management console118. 
In certain embodiments, the data center monitoring and management console118may be implemented to include a monitoring module120, a management module122, an analysis engine124, and a connectivity management system (CMS)126, or a combination thereof, as described in greater detail herein. In various embodiments, a CMS client130, described in greater detail herein may be implemented on certain user devices ‘A’304through ‘x’314, or certain data center assets244, or within data centers ‘1’346through ‘n’348, or a combination thereof. In certain embodiments, the CMS126may be implemented in combination with a particular CMS client130to perform a connectivity management operation, as likewise described in greater detail herein. As described in greater detail herein, the data center monitoring and management console118may be implemented in certain embodiments to perform a data center monitoring and management operation. In certain embodiments, the data center monitoring and management console118may be implemented to provide a unified framework for the performance of a plurality of data center monitoring and management operations, by a plurality of users, within a common user interface (UI). In certain embodiments, the data center monitoring and management console118, and other components of the data center monitoring environment200, such as the asset configuration system250, may be implemented to be used by a plurality of users, such as users ‘A’302through ‘x’312shown inFIG.3. In various embodiments, certain data center personnel, such as users ‘A’302through ‘x’312, may respectively interact with the data center monitoring and management console118, and other components of the data center monitoring and management environment200, through the use of an associated user device ‘A’304through ‘x’314. 
In certain embodiments, such interactions may be respectively presented to users ‘A’302through ‘x’312within a user interface (UI) window306through316, corresponding to user devices ‘A’304through ‘x’314. In certain embodiments the UI window306through316may be implemented in a window of a web browser, familiar to skilled practitioners of the art. In certain embodiments, a data center monitoring and management application (MMA)310through320, described in greater detail herein, may be respectively implemented on user devices ‘A’304through ‘x’314. In certain embodiments the UI window306through316may be respectively implemented as a UI window of the data center MMA310through320. In certain embodiments, the data center MMA310through320may be implemented to interact in combination with the data center monitoring and management console118, and other components of the data center monitoring and management environment200, to perform a data center monitoring and management operation. In certain embodiments, the interactions with the data center monitoring and management console118, and other components of the data center monitoring and management environment200, may respectively be presented as a graphical representation308through318within UI windows306through316. In various embodiments, such interactions may be presented to users ‘A’302through ‘x’312via a display device324, such as a projector or large display screen. In certain of these embodiments, the interactions may be presented to users ‘A’302through ‘x’312as a graphical representation348within a UI window336. In certain embodiments, the display device324may be implemented in a command center350, familiar to those of skill in the art, such as a command center350typically found in a data center or a network operations center (NOC). In various embodiments, one or more of the users ‘A’302through ‘x’312may be located within the command center350. 
In certain of these embodiments, the display device324may be implemented to be generally viewable by one or more of the users ‘A’302through ‘x’312. In certain embodiments, the data center monitoring and management operation may be performed to identify the location350of a particular data center asset244. In certain embodiments, the location350of a data center asset244may be physical, such as the physical address of its associated data center, a particular room in a building at the physical address, a particular location in an equipment rack in that room, and so forth. In certain embodiments, the location350of a data center asset244may be non-physical, such as a network address, a domain, a Uniform Resource Locator (URL), a file name in a directory, and so forth. Certain embodiments of the invention reflect an appreciation that it is not uncommon for a large organization to have one or more data centers, such as data centers ‘1’346through ‘n’348. Certain embodiments of the invention reflect an appreciation that it is likewise not uncommon for such data centers to have multiple data center system administrators and data center technicians. Likewise, various embodiments of the invention reflect an appreciation that it is common for a data center system administrator to be responsible for planning, initiating, and overseeing the execution of certain data center monitoring and management operations. Certain embodiments of the invention reflect an appreciation that it is common for a data center system administrator, such as user ‘A’302, to assign a particular data center monitoring and management operation to a data center technician, such as user ‘x’312, as a task to be executed. Certain embodiments of the invention reflect an appreciation that it is likewise common for a data center administrator, such as user ‘A’302, to assume responsibility for performing a particular data center monitoring and management operation. 
As an example, a data center administrator may receive a stream of data center alerts, each of which is respectively associated with one or more data center issues. To continue the example, several of the alerts may have an initial priority classification of “critical.” However, the administrator may notice that one such alert may be associated with a data center issue that is more critical, or time sensitive, than the others and should be remediated as quickly as possible. Accordingly, the data center administrator may elect to assume responsibility for remediating the data center issue, and as a result, proceed to perform an associated data center remediation operation at that time instead of assigning it to other data center personnel. Certain embodiments of the invention reflect an appreciation that the number of data center assets244in a particular data center ‘1’346through ‘n’348may be quite large. Furthermore, it is not unusual for such data center assets244to be procured, deployed, configured, and implemented on a scheduled, or as-needed, basis. It is likewise common for certain existing data center assets244to be replaced, upgraded, reconfigured, maintained, or remediated on a scheduled, or as-needed, basis. Likewise, certain embodiments of the invention reflect an appreciation that such replacements, upgrades, reconfigurations, maintenance, or remediation may be oriented towards hardware, firmware, software, connectivity, or a combination thereof. For example, a data center system administrator may be responsible for the creation of data center asset244procurement, deployment, configuration, and implementation templates, firmware update bundles, operating system (OS) and software application stacks, and so forth. Likewise, a data center technician may be responsible for receiving a procured data center asset244, transporting it to a particular data center asset location350in a particular data center ‘1’346through ‘n’348, and implementing it in that location350. 
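The triage decision in the example above, surfacing the most critical and most time-sensitive alert first, can be sketched as a simple ordering. The priority classes and sample alerts are illustrative assumptions:

```python
# Illustrative alert triage sketch: order a stream of alerts so the most
# critical, longest-waiting issue surfaces first. The priority classes
# and alert fields are assumptions, not defined by the disclosure.
PRIORITY_RANK = {"critical": 0, "major": 1, "minor": 2}

def triage(alerts):
    # Sort by priority class first, then by how long the alert has waited,
    # so an older critical alert outranks a newer one.
    return sorted(alerts, key=lambda a: (PRIORITY_RANK[a["priority"]],
                                         -a["age_minutes"]))

alerts = [
    {"id": "A1", "priority": "major", "age_minutes": 50},
    {"id": "A2", "priority": "critical", "age_minutes": 5},
    {"id": "A3", "priority": "critical", "age_minutes": 20},
]
print([a["id"] for a in triage(alerts)])  # ['A3', 'A2', 'A1']
```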
The same, or another, data center technician may then be responsible for configuring the data center asset244, establishing network connectivity, applying configuration files, and so forth. To continue the example, the same, or another, data center administrator or technician may be responsible for remediating hardware issues, such as replacing a disk drive in a server or Redundant Array of Independent Disks (RAID) array, or software issues, such as updating a hardware driver or the version of a server's operating system. Accordingly, certain embodiments of the invention reflect an appreciation that a significant amount of coordination may be needed between data center system administrators and data center technicians to assure efficient and reliable operation of a data center. In various embodiments, certain data center monitoring and management operations may include a data center remediation operation, described in greater detail herein. In certain embodiments, a data center remediation operation may be performed to remediate a particular data center asset244issue at a particular data center asset location350in a particular data center ‘1’346through ‘n’348. In certain embodiments, the data center remediation operation may be performed to ensure that a particular data center asset location350in a particular data center ‘1’346through ‘n’348is available for the replacement or upgrade of an existing data center asset244. As an example, a data center remediation operation may involve deployment of a replacement server that occupies more rack space than the server it will be replacing. In various embodiments, the data center monitoring and management console118, or the data center monitoring and management application310through320, or a combination of the two, may be implemented in a failure tracking mode to capture certain data center asset244telemetry. 
In various embodiments, the data center asset244telemetry may include data associated with the occurrence of certain events, such as the failure, or anomalous performance, of a particular data center asset244, or an associated workload360, in whole, or in part. In certain embodiments, the data center asset244telemetry may be captured incrementally to provide a historical perspective of the occurrence, and evolution, of an associated data center issue. In various embodiments, the data center monitoring and management console118may likewise be implemented to generate certain remediation operation notes. For example, the data center monitoring and management console118may enter certain data center asset244remediation instructions in the data center remediation operation notes. In various embodiments, the data center remediation operation notes may be implemented to contain information related to data center asset244replacement or upgrade parts, data center asset244files that may be needed, installation and configuration instructions related to such files, the physical location350of the data center asset244, and so forth. In certain embodiments, a remediation task344may be generated by associating the previously-generated data center remediation operation notes with the remediation documentation, data center asset files, or other remediation resources342most pertinent to the data center issue, and the administrator, and any data center personnel selected for its remediation. As used herein, a data center remediation task344broadly refers to one or more data center remediation operations, described in greater detail herein, that can be assigned to one or more users ‘A’302through ‘x’312. Certain embodiments of the invention reflect an appreciation that a group of data center personnel, such as users ‘A’302through ‘x’312, will likely possess different skills, certifications, levels of education, knowledge, experience, and so forth. 
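Generating a remediation task by bundling the operation notes with the most pertinent resources and the selected personnel, as described above, might be sketched as follows. The task structure and sample values are assumptions for illustration:

```python
# Hypothetical sketch of remediation task generation: operation notes are
# associated with pertinent resources and the selected assignees. The
# structure and sample values below are illustrative assumptions.
def generate_remediation_task(issue_id, notes, resources, assignees):
    return {
        "task_id": f"task-{issue_id}",
        "notes": notes,
        "resources": list(resources),
        "assignees": list(assignees),
    }

task = generate_remediation_task(
    "DC-1042",
    notes="Replace PSU; confirm firmware version after swap.",
    resources=["psu-swap-checklist.pdf", "replacement-psu-part-list.txt"],
    assignees=["user-x"],
)
print(task["task_id"])         # task-DC-1042
print(len(task["resources"]))  # 2
```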
As a result, remediation documentation that is suitable for certain data center personnel may not be suitable for others. For example, a relatively inexperienced data center administrator may be overwhelmed by a massive volume of detailed and somewhat arcane minutiae related to the configuration and administration of multiple virtual machines (VMs) on a large server. However, such remediation documentation may be exactly what a highly skilled and experienced data center administrator needs to remediate subtle server and VM configuration issues. Conversely, the same highly skilled and experienced data center administrator may be hampered, or slowed down, by being provided remediation documentation that is too simplistic, generalized, or high-level for the data center issue they may be attempting to remediate. Likewise, an administrator who is moderately skilled in configuring VMs may benefit from having step-by-step instructions, and corresponding checklists, when remediating a VM-related data center issue. Accordingly, as used herein, pertinent remediation documentation broadly refers to remediation documentation applicable to a corresponding data center issue that is most suited to the skills, certifications, level of education, knowledge, experience, and so forth of the data center personnel assigned to its remediation. In various embodiments, the data center monitoring and management console118may be implemented to generate a corresponding notification of the remediation task344. In certain embodiments, the resulting notification of the remediation task344assignment may be provided to the one or more users ‘A’302through ‘x’312assigned to perform the remediation task344. In certain embodiments, the notification of the remediation task344assignment may be respectively provided to the one or more users ‘A’302through ‘x’312within the UI306through316of their respective user devices ‘A’304through ‘x’314. 
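Selecting pertinent remediation documentation suited to the assignee, as defined above, can be sketched as a match on experience level with a fallback. The document library, levels, and titles are hypothetical assumptions:

```python
# Hedged sketch of matching remediation documentation to the assignee's
# experience level; the library, levels, and titles are illustrative
# assumptions, not content defined by the disclosure.
DOC_LIBRARY = {
    "vm-issue": [
        {"title": "Step-by-step VM checklist", "level": "beginner"},
        {"title": "VM configuration deep dive", "level": "expert"},
    ],
}

def pertinent_documentation(issue_type, assignee_level):
    docs = DOC_LIBRARY.get(issue_type, [])
    # Prefer documents whose level matches the assignee; if none match,
    # fall back to everything available for the issue type.
    matched = [d for d in docs if d["level"] == assignee_level]
    return matched or docs

print(pertinent_documentation("vm-issue", "beginner")[0]["title"])
# Step-by-step VM checklist
```

The fallback reflects the point made above: documentation suited to one skill level may hinder another, so a match is preferred, but some documentation is better than none.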
In certain embodiments, the notification of the remediation task344assignment, and the remediation task344itself, may be implemented such that they are only visible to the users ‘A’302through ‘x’312to which it is assigned. In certain embodiments, the data center monitoring and management console118may be implemented to operate in a monitoring mode. As used herein, monitoring mode broadly refers to a mode of operation where certain monitoring information provided by the monitoring and management console118is available for use by one or more users ‘A’302through ‘x’312. In certain embodiments, one or more of the users ‘A’302through ‘x’312may be command center350users. In certain embodiments, the data center monitoring and management console118may be implemented to operate in a management mode. As used herein, management mode broadly refers to a mode of operation where certain operational functionality of the data center monitoring and management console118is available for use by a user, such as users ‘A’302through ‘x’312. FIG.4shows a block diagram of a connectivity management system implemented in accordance with an embodiment of the invention. In various embodiments, a data center monitoring and management console118, described in greater detail herein, may be implemented to include a connectivity management system (CMS)126. In certain embodiments, the CMS126may be implemented in combination with a CMS client130to perform a connectivity management operation, likewise described in greater detail herein. In various embodiments, one or more connectivity management operations may be performed to initiate, and manage, secure, bi-directional, real-time connectivity between a data center monitoring and management console118and a particular data center asset244, each of which are likewise described in greater detail herein. 
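Restricting a remediation task's visibility to only the users it is assigned to, as described above, amounts to a simple filter. The task and user identifiers are illustrative assumptions:

```python
# Minimal sketch of assignment-scoped task visibility; the task shape
# and user identifiers are illustrative assumptions.
def visible_tasks(all_tasks, user_id):
    # A task is visible to a user only if that user is among its assignees.
    return [t for t in all_tasks if user_id in t["assignees"]]

tasks = [
    {"task_id": "t1", "assignees": {"user-a"}},
    {"task_id": "t2", "assignees": {"user-a", "user-x"}},
    {"task_id": "t3", "assignees": {"user-x"}},
]
print([t["task_id"] for t in visible_tasks(tasks, "user-x")])  # ['t2', 't3']
```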
In various embodiments, the data center monitoring and management console118may be implemented in a cloud environment familiar to skilled practitioners of the art. In certain of these embodiments, the operator of the data center monitoring and management console118may offer its various functionalities and capabilities in the form of one or more cloud-based data center services432, described in greater detail herein. In various embodiments, one or more data center assets244may be implemented within a data center402, likewise described in greater detail herein. In certain of these embodiments, the data center402may reside on the premises of a user of one or more data center services432provided by the operator of the data center monitoring and management console118. In various embodiments, the connectivity management system126may be implemented to include one or more CMS aggregators128, one or more CMS services422, and a service mesh proxy434, or a combination thereof. In various embodiments, the CMS aggregator128may be implemented to interact with one or more of the CMS services422, as described in greater detail herein. In various embodiments, the data center services432may likewise be implemented to interact with one or more of the CMS services422, and the service mesh proxy434, or a combination thereof. In certain embodiments, the CMS services422may be implemented to include a CMS discovery424service, a CMS authentication426service, a CMS inventory428service, and a CMS authorization430service, or a combination thereof. In various embodiments, one or more data center assets244may be implemented within a data center402, described in greater detail herein. In certain embodiments, the data center402may be implemented to include an associated data center firewall416. In certain embodiments, a CMS client130may be implemented on one or more data center assets244. 
In various embodiments, a CMS client130implemented on one data center asset244may likewise be implemented to enable one or more connectivity management operations associated with one or more other data center assets444that are not respectively implemented with their own CMS client130. In certain of these embodiments, the CMS client130may be implemented to assume the identity, and attributes, of a particular data center asset it is directly, or indirectly, associated with. In various embodiments, the CMS client130may be implemented with a proxy management module406. In certain of these embodiments, the proxy management module406may be implemented to manage the CMS client's130connectivity to an external network140through an intermediary proxy server, or the data center firewall416, or both. Those of skill in the art will be familiar with a proxy server, which, as typically implemented, is a server application that acts as an intermediary between a client, such as a web browser, requesting a resource, such as a web page, from a provider of that resource, such as a web server. In certain embodiments, the client of a proxy server may be a particular data center asset244requesting a resource, such as a particular data center service432, from the data center monitoring and management console118. Skilled practitioners of the art will likewise be aware that in typical proxy server implementations, a client may direct a request to a proxy server, which evaluates the request and performs the network transactions needed to forward the request to a designated resource provider. Accordingly, the proxy server functions as a relay between the client and a server, and as such acts as an intermediary. Those of skill in the art will be aware that proxy servers also assist in preventing an attacker from invading a private network, such as one implemented within a data center402to provide network connectivity to, and between, certain data center assets244. 
Skilled practitioners of the art will likewise be aware that proxy servers are often implemented in combination with a firewall, such as the data center firewall416. In such implementations, the proxy server, by acting as an intermediary, effectively hides an internal network from the Internet, while the firewall prevents unauthorized access by blocking certain ports and programs. Accordingly, a firewall may be configured to allow traffic emanating from a proxy server to pass through to an external network140, while blocking all other traffic from an internal network. Conversely, a firewall may likewise be configured to allow network140traffic emanating from a trusted source to pass through to an internal network, while blocking traffic from unknown or untrusted external sources. As an example, the data center firewall416may be configured in various embodiments to allow traffic emanating from the CMS client130to pass, while the service provider firewall420may be configured to allow traffic emanating from the CMS aggregator128to pass. Likewise, the service provider firewall420may be configured in various embodiments to allow incoming traffic emanating from the CMS client130to be received, while the data center firewall416may be configured to allow incoming network traffic emanating from the CMS aggregator128to be received. In various embodiments, a particular CMS aggregator128may be implemented in combination with a particular CMS client130to provide a split proxy that allows an associated data center asset244to securely communicate with a data center monitoring and management console118. In various embodiments, the split proxy may be implemented in a client/server configuration. In certain of these embodiments, the CMS client130may be implemented as the client component of the client/server configuration and the CMS aggregator128may be implemented as the server component. 
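The asymmetric firewall configuration described above can be modeled with a short sketch. The rule structure and component names below are illustrative assumptions for exposition, not part of the described implementation:

```python
# Illustrative model of the firewall rules described above: each firewall
# passes traffic only to or from its expected split-proxy peer.
DATA_CENTER_FIREWALL = {
    "outbound": {"cms_client"},      # traffic emanating from the CMS client may pass
    "inbound": {"cms_aggregator"},   # incoming traffic from the CMS aggregator is received
}

SERVICE_PROVIDER_FIREWALL = {
    "outbound": {"cms_aggregator"},
    "inbound": {"cms_client"},
}

def permits(firewall: dict, direction: str, source: str) -> bool:
    """Return True if the firewall allows traffic from `source` in `direction`."""
    return source in firewall.get(direction, set())
```

Under this model, traffic from an unknown external source is blocked: `permits(DATA_CENTER_FIREWALL, "inbound", "unknown")` evaluates to False, while the CMS client and CMS aggregator can reach each other through both firewalls.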
In certain of these embodiments, one or more connectivity management operations may be respectively performed by the CMS aggregator128and the CMS client130to establish a secure tunnel connection418through a particular network140, such as the Internet. In various embodiments, the secure tunnel connection418may be initiated by the CMS client130first determining the address of the CMS aggregator128it intends to connect to. In these embodiments, the method by which the address of the CMS aggregator128is determined is a matter of design choice. Once the address of the CMS aggregator128is determined, the CMS client130uses it to establish a secure Hypertext Transfer Protocol (HTTPS) connection with the CMS aggregator128itself. In response, the CMS aggregator128sets its HTTPS Transport Layer Security (TLS) configuration to “request TLS certificate” from the CMS client130, which triggers the CMS client130to provide its requested TLS certificate408. In certain embodiments, the CMS authentication426service may be implemented to generate and provision the TLS certificate408for the CMS client130. In certain embodiments, the CMS client130may be implemented to generate a self-signed TLS certificate if it has not yet been provisioned with one from the CMS authentication426service. In various embodiments, the CMS client130may then provide an HTTP header with a previously-provisioned authorization token. In certain embodiments, the authorization token may have been generated and provisioned by the CMS authentication426service once the CMS client has been claimed. As used herein, a claimed CMS client130broadly refers to a particular CMS client130that has been bound to an account associated with a user, such as a customer, of one or more data center services432provided by the data center monitoring and management console118. 
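The two sides of this exchange can be sketched with Python's standard `ssl` module. The header name below is a hypothetical choice; the description specifies only that an HTTP header carries a previously-provisioned authorization token:

```python
import ssl

def aggregator_tls_config() -> ssl.SSLContext:
    # Server-side context sketch: the "request TLS certificate" setting,
    # which triggers the CMS client to present its certificate.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED  # require the client's TLS certificate
    return ctx

def client_auth_headers(auth_token: str) -> dict:
    # Hypothetical header shape for the client's previously-provisioned
    # authorization token.
    return {"Authorization": f"Bearer {auth_token}"}
```

A real deployment would also load the aggregator's own certificate chain and the CA used to validate client certificates before accepting connections.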
In certain embodiments, a CMS client130may be implemented to maintain its claimed state by renewing its certificate408and being provided an associated claim token. In these embodiments, the frequency, or conditions under which, a CMS client's certificate408is renewed, or the method by which it is renewed, or both, is a matter of design choice. Likewise, in these same embodiments, the frequency, or conditions under which, an associated claim token is generated, or the method by which it is provided to a CMS client130, or both, is a matter of design choice. In various embodiments, the CMS client130may be implemented to have a stable, persistent, and unique identifier (ID) after it is claimed. In certain of these embodiments, the CMS client's130unique ID may be stored within the authorization token. In these embodiments, the method by which the CMS client's130unique ID is determined, and the method by which it is stored within an associated authorization token, is a matter of design choice. Once the CMS client130has been claimed, it may be implemented to convert the HTTPS connection to a Websocket connection, familiar to those of skill in the art. After the HTTPS connection has been converted to a Websocket connection, tunnel packet processing is initiated and the CMS aggregator128may then perform a Representational State Transfer (REST) request to the CMS client130to validate its certificate408. In certain embodiments, the validation of the CMS client's130certificate408is performed by the CMS authorization430service. In various embodiments, the validation of the CMS client's130certificate408is performed to determine a trust level for the CMS client130. In certain of these embodiments, if the CMS client's130certificate408is validated, then it is assigned a “trusted” classification. Likewise, if the CMS client's130certificate408fails to be validated, then it is assigned an “untrusted” classification. 
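The idea of storing the client's stable unique ID within the authorization token can be illustrated with a minimal, unsigned token sketch. A real CMS authentication service would sign the token; the field names here are assumptions:

```python
import base64
import json

def issue_claim_token(client_id: str, account: str) -> str:
    # Illustrative, unsigned token binding a claimed CMS client to a
    # customer account, with the client's unique ID stored inside it.
    payload = {"client_id": client_id, "account": account}
    return base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()

def client_id_from_token(token: str) -> str:
    # Recover the stable, persistent, unique ID embedded in the token.
    return json.loads(base64.urlsafe_b64decode(token.encode()))["client_id"]
```

The round trip preserves the ID, which is the property the claimed state depends on; signing and expiry handling are deliberately omitted from the sketch.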
Accordingly, certain embodiments of the invention reflect an appreciation that “trusted” and “claimed,” as used herein as they relate to a CMS client130, are orthogonal. More specifically, “trust” means that the channel of communication can be guaranteed. Likewise, “claimed” means that the CMS client130can be authenticated and bound to a user, or customer, of one or more data center services432provided by the data center monitoring and management console118. In various embodiments, the resulting secure tunnel connection418may be implemented to provide a secure channel of communication through a data center firewall416associated with a particular data center402and a service provider firewall420associated with a particular data center monitoring and management console118. In various embodiments, the CMS client130, the secure tunnel connection418, and the CMS aggregator128may be implemented to operate at the application level of the Open Systems Interconnection (OSI) model, familiar to those of skill in the art. Skilled practitioners of the art will likewise be aware that known approaches to network tunneling typically use the network layer of the OSI model. In certain embodiments, the CMS client130and the CMS aggregator128may be implemented to send logical events over the secure tunnel connection418to encapsulate and multiplex individual connection streams and associated metadata. In various embodiments, the CMS discovery424service may be implemented to identify certain data center assets244to be registered and managed by the data center monitoring and management console118. In various embodiments, the CMS discovery424service may be implemented to detect certain events published by a CMS aggregator128. In certain embodiments, the CMS discovery424service may be implemented to maintain a database (not shown) of the respective attributes of all CMS aggregators128and CMS clients130. 
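Because trust and claim status are orthogonal, a client occupies one of four combined states. A minimal sketch of this state model, with illustrative field names:

```python
from dataclasses import dataclass

@dataclass
class CmsClientState:
    trusted: bool = False  # certificate validated: channel of communication guaranteed
    claimed: bool = False  # authenticated and bound to a customer account

    def label(self) -> str:
        # Renders the combined state the way the sequence diagram names it,
        # e.g. "untrusted/unclaimed" or "trusted/claimed".
        t = "trusted" if self.trusted else "untrusted"
        c = "claimed" if self.claimed else "unclaimed"
        return f"{t}/{c}"
```

Orthogonality means either flag can change without affecting the other: a client can be trusted but not yet claimed, or (in principle) claimed while its current channel is untrusted.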
In certain embodiments, the CMS discovery424service may be implemented to track the relationships between individual CMS clients130and the CMS aggregators128they may be connected to. In various embodiments, the CMS discovery424service may be implemented to detect CMS client130connections and disconnections with a corresponding CMS aggregator128. In certain of these embodiments, a record of such connections and disconnections is stored in a database (not shown) associated with the CMS inventory428service. In various embodiments, the CMS discovery424service may be implemented to detect CMS aggregator128start-up and shut-down events. In certain of these embodiments, a record of related Internet Protocol (IP) addresses and associated state information may be stored in a database (not shown) associated with the CMS inventory428service. In various embodiments, the CMS authentication426service may be implemented to include certain certificate authority (CA) capabilities. In various embodiments, the CMS authentication426service may be implemented to generate a certificate408for an associated CMS client130. In various embodiments, the CMS authentication426service may be implemented to use a third party CA for the generation of a digital certificate for a particular data center asset244. In certain embodiments, the CMS inventory428service may be implemented to maintain an inventory of each CMS aggregator128by an associated unique ID. In certain embodiments, the CMS inventory428service may likewise be implemented to maintain an inventory of each CMS client130by an associated globally unique identifier (GUID). In various embodiments, the CMS authorization430service may be implemented to authenticate a particular data center asset244by requesting certain proof of possession information, and then processing it once it is received. 
In certain of these embodiments, the proof of possession information may include information associated with whether or not a particular CMS client130possesses the private keys corresponding to an associated certificate408. In various embodiments, the CMS authorization430service may be implemented to authenticate a particular CMS client130associated with a corresponding data center asset244. In certain of these embodiments, the CMS authorization430service may be implemented to perform the authentication by examining a certificate408associated with the CMS client130to ensure that it has been signed by the CMS authentication426service. In various embodiments, the service mesh proxy434may be implemented to integrate knowledge pertaining to individual data center assets244into a service mesh such that certain data center services432have a uniform method of transparently accessing them. In various embodiments, the service mesh proxy434may be implemented with certain protocols corresponding to certain data center assets244. In certain embodiments, the service mesh proxy434may be implemented to encapsulate and multiplex individual connection streams and metadata over the secure tunnel connection418. In certain embodiments, these individual connection streams and metadata may be associated with one or more data center assets244, one or more data center services432, one or more CMS clients130, and one or more CMS aggregators128, or a combination thereof. FIGS.5athrough5dshow a sequence diagram of the performance of certain connectivity management operations implemented in accordance with an embodiment of the invention. In this embodiment, the CMS client130establishes a secure Hypertext Transfer Protocol (HTTPS) connection with the CMS aggregator128in step502, as described in greater detail herein, followed by the provision of its temporary client ID and its previously-provisioned digital certificate to the CMS aggregator in step504. 
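The encapsulation and multiplexing of individual connection streams and metadata over the single secure tunnel can be sketched as a simple length-prefixed framing scheme. The frame layout below is an assumption chosen for illustration, not a described wire format:

```python
import json

def encapsulate(stream_id: int, metadata: dict, payload: bytes) -> bytes:
    # One logical frame: 4-byte header length, JSON header carrying the
    # stream id and metadata, then the raw payload for that stream.
    header = json.dumps({"stream": stream_id, "meta": metadata}).encode()
    return len(header).to_bytes(4, "big") + header + payload

def decapsulate(frame: bytes):
    # Inverse operation, performed at the far end of the secure tunnel.
    hlen = int.from_bytes(frame[:4], "big")
    header = json.loads(frame[4:4 + hlen])
    return header["stream"], header["meta"], frame[4 + hlen:]
```

Because each frame carries its own stream identifier, many independent client/service conversations can share the one tunnel and be demultiplexed on arrival.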
The CMS aggregator128then provides the CMS client130ID and (self-signed) digital certificate to the CMS authorization430service for authentication in step506. Once the CMS client's130credentials have been validated in step508, notification of their validation is provided to the CMS aggregator128by the CMS authorization430service in step510. In response, the CMS aggregator128announces a new CMS client130to the CMS inventory428service in step512, followed by the CMS aggregator128notifying the CMS client130that its digital certificate has been validated in step514. The CMS client130then collects certain information from the data center asset244in step516, followed by establishing a secure tunnel connection with the CMS aggregator128in step518, which is then multiplexed in step520, as described in greater detail herein. Thereafter, the CMS client130announces itself to the CMS aggregator128and provides it the collected data center asset information in step522. In turn, the CMS aggregator128announces the CMS client130as being in an untrusted/unclaimed state, first to the CMS inventory428service in step524, and then to the CMS authorization430service in step526. In turn, the CMS authorization430service then requests the CMS aggregator128to provide proof of possession in step528. In response, the CMS aggregator128authenticates the proof of possession request in step530and the CMS authentication426service generates a CMS-signed digital certificate in step532. The resulting CMS-signed digital certificate is then provided by the CMS authentication426service to the CMS aggregator128in step534. In turn, the CMS aggregator128respectively provides the proof of possession and the CMS-signed digital certificate to the CMS client130in steps536and538. In response, the CMS client130announces itself to be in a trusted/unclaimed state to the CMS aggregator128in step540. 
In turn, the CMS aggregator128announces the CMS client130to be in a trusted/unclaimed state to the CMS authorization430service in step542and to the CMS inventory428service in step544. The CMS authentication426service then determines ownership of the CMS client130in step546, followed by the CMS aggregator128providing certain location information associated with the management server to the CMS client130in step548. In turn, the CMS client130requests an ownership voucher from the CMS authentication426service in step550. In response, the CMS authorization430service generates an ownership voucher in step552and provides it to the CMS client130in step554. Once it receives the ownership voucher, the CMS client130respectively announces itself as trusted/claimed to the CMS authorization430service and the CMS inventory428service in steps556and558. FIG.6shows a simplified block diagram of a workload orchestrator implemented in accordance with an embodiment of the invention to orchestrate workloads within a service mesh environment. Skilled practitioners of the art will be aware that it has become common for current software applications to be architected as distributed collections of microservices, each of which performs a discrete function. Those of skill in the art will likewise be familiar with the concept of a service mesh612, which as typically implemented is a dedicated infrastructure layer that allows certain capabilities, such as observability, network traffic management, and security to be added to applications without additional code. Accordingly, as used herein, a service mesh612broadly refers to both the tasks, operations, processes, and software components used to support such an infrastructure layer, and the security or network domain that is created when they are implemented. Certain embodiments of the invention reflect an appreciation that as the number of distributed services grows in size and complexity, they can become more difficult to understand and manage. 
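The progression shown in the sequence diagram, from untrusted/unclaimed through trusted/unclaimed to trusted/claimed, can be sketched as a small event-driven state function. The event names are illustrative labels for the milestones of FIGS. 5a through 5d, not part of the described protocol:

```python
def onboarding_state(events) -> str:
    # Replays the milestones of the sequence diagram: a CMS-signed
    # certificate establishes trust; an ownership voucher, obtained
    # afterward, completes the claim.
    trusted = claimed = False
    for event in events:
        if event == "cms_signed_certificate":
            trusted = True
        elif event == "ownership_voucher" and trusted:
            claimed = True
    t = "trusted" if trusted else "untrusted"
    c = "claimed" if claimed else "unclaimed"
    return f"{t}/{c}"
```

Feeding it the events in diagram order walks the client through the three announced states in sequence.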
Likewise, certain embodiments of the invention reflect an appreciation that such understanding and management may include discovery, load balancing, failure recovery, metrics, and monitoring. Certain embodiments of the invention likewise reflect an appreciation that a service mesh612, as typically implemented, may be used to address more complex operational requirements, such as A/B testing, staged software deployments, rate limiting, access control, encryption, and end-to-end authentication. In various embodiments, a data center monitoring and management console118, described in greater detail herein, may be implemented to include a connectivity management system (CMS)126, or one or more data center services432, or a combination thereof, as described in greater detail herein. In certain embodiments, the CMS126may be implemented to include one or more CMS aggregators128, one or more CMS services422, and a service mesh proxy434, or a combination thereof, likewise as described in greater detail herein. In various embodiments, each CMS aggregator128may be implemented to interact with one or more CMS services422, as described in greater detail herein. In various embodiments, the service mesh proxy434may likewise be implemented to interact with one or more CMS aggregators128, one or more CMS services422, or one or more data center services432, or a combination thereof, as likewise described in greater detail herein. In various embodiments, the data center services432may be implemented to include a workload orchestrator602module. In certain of these embodiments, the workload orchestrator602module may be implemented to include a workload scheduler604module, or a workload repository606, or both. In various embodiments, the workload orchestrator602module may be implemented to perform a workload orchestration operation. 
As used herein, a workload orchestration operation broadly refers to any task, function, operation, procedure, or process performed to automate the assignment and scheduling of a workload, described in greater detail herein, for processing by a particular data center asset, likewise described in greater detail herein. Certain embodiments of the invention reflect an appreciation that the implementation of such workload orchestration operations may be advantageously used to more easily manage complex tasks and workflows within a data center monitoring and management environment. In certain embodiments, one or more workload orchestration operations may be performed to provide one or more associated data center services432. In certain embodiments, a workload orchestration operation may include one or more connectivity management operations, described in greater detail herein. In various embodiments, the workload scheduler604module may be implemented to communicate with the service mesh proxy434module to perform one or more connectivity management operations, described in greater detail herein, within a service mesh612environment. In various embodiments, the service mesh612environment may include one or more data centers ‘1’620through ‘n’640. In certain of these embodiments, one or more of the data centers ‘1’620through ‘n’640may reside on the premises of a user of one or more data center services432provided by the operator of the data center monitoring and management console118. In various embodiments, the data centers ‘1’620through ‘n’640may respectively include data center assets ‘1’622through ‘n’632and ‘a’642through ‘x’652. Likewise, in various embodiments, the data center assets ‘1’622through ‘n’632and ‘a’642through ‘x’652may respectively be implemented with CMS clients ‘1’624through ‘n’634and ‘a’644through ‘x’654. 
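A workload orchestration operation of this kind can be sketched as a scheduler that assigns workloads to data center assets and records each assignment and completion. Round-robin placement is an illustrative policy choice; no particular scheduling policy is prescribed by the description:

```python
class WorkloadScheduler:
    # Minimal sketch of a workload scheduler: assigns workloads to data
    # center assets and keeps a record of assignments and completions.
    def __init__(self, assets):
        self.assets = list(assets)
        self.assignments = {}   # workload -> asset it was assigned to
        self.completed = set()  # workloads whose processing has finished

    def assign(self, workloads):
        # Illustrative round-robin placement across the available assets.
        for i, workload in enumerate(workloads):
            self.assignments[workload] = self.assets[i % len(self.assets)]
        return self.assignments

    def mark_completed(self, workload):
        self.completed.add(workload)
```

The retained `assignments` and `completed` records correspond to the bookkeeping of which workload went to which asset, and which have finished processing.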
In certain of these embodiments, the CMS clients ‘1’624through ‘n’634and ‘a’644through ‘x’654may be respectively implemented in a baseboard management controller (BMC) implemented within data center assets ‘1’622through ‘n’632and ‘a’642through ‘x’652, or within its OS, or a combination thereof, as described in greater detail herein. In certain embodiments, the data center assets ‘1’622through ‘n’632and ‘a’642through ‘x’652may likewise be respectively implemented with a host operating system (OS) agent ‘1’626through ‘n’636and ‘a’646through ‘x’656, as described in greater detail herein. Various embodiments of the invention reflect an appreciation that the service mesh612environment may include hundreds, if not thousands, of data centers ‘1’620through ‘n’640. Likewise, various embodiments of the invention reflect an appreciation that a provider of data center monitoring and management services may be responsible for monitoring and managing hundreds of thousands, if not millions, of data center assets ‘1’622through ‘n’632and ‘a’642through ‘x’652within such a service mesh612environment. Accordingly, various embodiments of the invention reflect an appreciation that such large numbers of data center assets ‘1’622through ‘n’632and ‘a’642through ‘x’652within a service mesh612environment may pose scalability and workload orchestration challenges for the provision of certain data center services432. Various embodiments of the invention likewise reflect an appreciation that one known approach to addressing such scalability challenges is to implement Function as a Service (FaaS), which enables the deployment of event-driven functions and microservices without repetitive coding. Another aspect of FaaS is its inherent characteristic of being able to scale-out on demand, and then scale back to zero. An example of an open source provider of FaaS approaches is OpenFaaS. 
Likewise, another known approach to scalability challenges within a service mesh612environment includes serverless computing, which broadly refers to a cloud computing execution model in which a cloud services provider allocates machine resources on demand. One example of an open source provider of serverless computing approaches is Knative. However, various embodiments of the invention reflect an appreciation that such FaaS and serverless computing approaches are typically compute resource-intensive and generally have a dependency upon other service mesh612solutions, such as Istio. Accordingly, the host OS agents ‘1’626through ‘n’636and ‘a’646through ‘x’656may respectively be implemented in various embodiments to communicate certain configuration and operational information associated with data center assets ‘1’622through ‘n’632and ‘a’642through ‘x’652to CMS clients ‘1’624through ‘n’634and ‘a’644through ‘x’654. In various embodiments, CMS clients ‘1’624through ‘n’634and ‘a’644through ‘x’654may be implemented to provide the configuration and operational information to one or more CMS aggregators128, which in certain embodiments may in turn provide it to the service mesh proxy434. In various embodiments, the service mesh proxy434may be implemented to provide the configuration and operational information to the workload orchestrator602module. In various embodiments, the workload orchestrator602module may be implemented to use the configuration and operational information to respectively orchestrate614the assignment616of workloads ‘1’628through ‘n’638and ‘a’648through ‘x’658to data center assets ‘1’622through ‘n’632and ‘a’642through ‘x’652for processing. 
In various embodiments, the workload orchestrator602module may be implemented to respectively provide certain information associated with the orchestration614and the assignment616of workloads ‘1’628through ‘n’638and ‘a’648through ‘x’658to host OS agents ‘1’626through ‘n’636and ‘a’646through ‘x’656via CMS clients ‘1’624through ‘n’634and ‘a’644through ‘x’654. In various embodiments, the information associated with the orchestration614and the assignment616of workloads ‘1’628through ‘n’638and ‘a’648through ‘x’658may respectively be provided to data center assets ‘1’622through ‘n’632and ‘a’642through ‘x’652by host OS agents ‘1’626through ‘n’636and ‘a’646through ‘x’656. In certain of these embodiments, the configuration and operational information may be used by the workload scheduler604module to schedule the assignment616of workloads ‘1’628through ‘n’638and ‘a’648through ‘x’658for processing. In various embodiments, workloads ‘1’628through ‘n’638and ‘a’648through ‘x’658may be stored in the workload repository606. In various embodiments, workloads ‘1’628through ‘n’638and ‘a’648through ‘x’658stored in the workload repository606may be respectively downloaded616to data center assets ‘1’622through ‘n’632and ‘a’642through ‘x’652after they have been assigned616by the workload orchestrator602module. In various embodiments, workloads ‘1’628through ‘n’638and ‘a’648through ‘x’658may respectively be downloaded to data center assets ‘1’622through ‘n’632and ‘a’642through ‘x’652via the service mesh proxy434, one or more CMS aggregators128, and CMS clients ‘1’624through ‘n’634and ‘a’644through ‘x’654. In various embodiments, the workload scheduler604module may be implemented to maintain a record of which workloads ‘1’628through ‘n’638and ‘a’648through ‘x’658are respectively assigned616to which data center assets ‘1’622through ‘n’632and ‘a’642through ‘x’652for processing. 
Likewise, in various embodiments, the workload scheduler604module may be implemented to maintain a record of when each of the workloads ‘1’628through ‘n’638and ‘a’648through ‘x’658are respectively scheduled for processing by which data center assets ‘1’622through ‘n’632and ‘a’642through ‘x’652. In various embodiments, the workload scheduler604module may likewise be implemented to maintain a record of when the processing of each workload ‘1’628through ‘n’638and ‘a’648through ‘x’658has been completed by which data center assets ‘1’622through ‘n’632and ‘a’642through ‘x’652. In various embodiments, the workload orchestrator602module may be implemented to respectively issue instructions to data center assets ‘1’622through ‘n’632and ‘a’642through ‘x’652when the processing of certain workloads ‘1’628through ‘n’638and ‘a’648through ‘x’658has been completed. FIGS.7aand7brespectively show simplified block diagrams of an embedded and a non-embedded connectivity management system (CMS) client implemented in combination with a workload orchestrator to orchestrate workloads within a service mesh environment in accordance with an embodiment of the invention. As shown inFIG.7a, a data center asset244, described in greater detail herein, may be implemented in certain embodiments to include a baseboard management controller (BMC)726and an operating system (OS)730. Skilled practitioners of the art will be familiar with a BMC726, which is a specialized service processor that monitors the physical state of a computer, network server, or other hardware device using sensors. As typically implemented, the monitoring information is communicated to a system administrator through the use of an independent network connection. One known example of a BMC726is the integrated Dell Remote Access Controller (iDRAC®) produced by the Dell Corporation of Round Rock, Texas. In various embodiments, the BMC726may be implemented as an embedded system within a data center asset244. 
In certain of these embodiments, the BMC726may be implemented to include an embedded CMS client728. As used herein, an embedded CMS client728broadly refers to a CMS client module that is implemented in firmware. In certain embodiments, the embedded CMS client728may be implemented as firmware within the BMC726. Because the embedded CMS client728is implemented in firmware, it provides a layer of security to the data center asset244. Additionally, because the embedded CMS client728is implemented in firmware, it is executable even when the data center asset244has been powered on, but not yet booted. In various embodiments, the BMC726may be implemented to include a Universal Serial Bus (USB) Network Interface Controller (NIC)732, familiar to skilled practitioners of the art. In various embodiments, the OS730may be implemented to include a host OS agent736and one or more workloads738, or a combination thereof. In certain of these embodiments, the OS730may likewise be implemented to include a BMC service module734. In various embodiments, the BMC service module734may be implemented to provide the BMC726access to the OS730. In various embodiments, the host OS agent736may be implemented to communicate with the BMC service module734via a secure Unix socket interface716familiar to skilled practitioners of the art. In various embodiments, the USB NIC732may be implemented in combination with the BMC service module734to provide a secure, primary communication interface between the BMC726and the OS730. In certain of these embodiments, by extension, the USB NIC732may be implemented to provide a secure, primary communication interface between the data center monitoring and management console118and the OS730via the BMC service module734, the embedded CMS client728, and the CMS126, described in greater detail herein. 
Referring now toFIG.7b, the data center asset244may be implemented in various embodiments to not include a BMC726, an embedded CMS client728, a USB NIC732, or a BMC service module734. However, the data center asset may be implemented in certain of these embodiments to include an operating system (OS)730, a host OS agent736, and a non-embedded CMS client130. Various embodiments of the invention reflect an appreciation that not implementing an embedded CMS client728, a USB NIC732, and a BMC service module734may introduce certain security vulnerabilities. Accordingly, the host OS agent736may be implemented in certain embodiments to communicate with the non-embedded CMS client130via a Mutual Transport Layer Security (mTLS) interface746. Those of skill in the art will be familiar with mTLS, which uses X.509 digital certificates to allow two entities, such as the host OS agent736and the non-embedded CMS client130, to authenticate each other and establish an encrypted TLS connection. Referring now to bothFIGS.7aand7b, the workload scheduler604module may be implemented in various embodiments to submit a request712to the host OS agent736to schedule the performance of one or more workload orchestration operations to process a particular workload738by an associated data center asset244. In certain of these embodiments, the request712to schedule the performance of the one or more workload orchestration operations may be in the form of a declarative workload manifest, familiar to those of skill in the art, that includes references to signed container images and a startup script. 
An example of such a declarative workload manifest is provided below:

apiVersion: apps/v1
kind: workload
Spec:
  payload:
  - name: datacollector
    image: datacollector:1.4.2
    type: container | package
  launch:
    script: "COMMAND TO RUN PAYLOAD"
  interface:
  - port: 443
    baseURI: /datacollector
    APISpec: /swagger

In various embodiments, the workload orchestrator602module may be implemented to maintain a copy of the declarative workload manifest in the workload repository606. In certain of these embodiments, the workload orchestrator602module may be implemented to delete a copy of a declarative workload manifest in the workload repository606once processing of its associated workload738has been completed. In various embodiments, the request712to schedule the performance of the one or more workload orchestration operations may be in the form of a Representational State Transfer (REST) Application Programming Interface (API) call. Examples of such REST API calls are as follows:

--Example API call to submit data collection job
$ curl -X POST https://agent.dcm1.endpoint1.site1.com -d @data-collector.yaml

--Example API call to see Workload status
$ curl -X GET https://agent.dcm1.endpoint1.site1.com/jobs/ID

--Example API call to interact with Workload
$ curl -X GET https://datacollector.agent.dcm1.endpoint1.site1.com/swagger

In various embodiments, the request712to schedule the performance of the one or more workload orchestration operations may be routed from the workload scheduler604module to the host OS agent736via the service mesh proxy434, a CMS aggregator128, the embedded CMS client728, the USB NIC732, the BMC service module734, and the Unix socket716. Alternatively, the request712to schedule the performance of the one or more workload orchestration operations may be routed in certain embodiments from the workload scheduler604module to the host OS agent736via the service mesh proxy434, a CMS aggregator128, the non-embedded CMS client130, and the mTLS interface746. 
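The REST API call shapes shown in the curl examples above can be made explicit programmatically. The following is a minimal sketch: the endpoint host names come from the examples, while the helper function names and the request-tuple representation are assumptions for illustration, not part of the described system.

```python
# Hypothetical helpers mirroring the REST API calls shown above.
# The host names come from the curl examples; the function names
# and return format are illustrative assumptions only.

AGENT_BASE = "https://agent.dcm1.endpoint1.site1.com"

def submit_job_request(manifest_file):
    """Request tuple for submitting a data collection job
    (curl -X POST <agent> -d @data-collector.yaml)."""
    return ("POST", AGENT_BASE, manifest_file)

def job_status_request(job_id):
    """Request tuple for querying workload status
    (curl -X GET <agent>/jobs/ID)."""
    return ("GET", f"{AGENT_BASE}/jobs/{job_id}")

def workload_interact_request(workload, path):
    """Request tuple for calling a workload's direct REST interface
    (curl -X GET https://<workload>.<agent>/swagger)."""
    return ("GET", f"https://{workload}.agent.dcm1.endpoint1.site1.com/{path}")
```

A real client would hand these tuples to an HTTP library; they are shown here only to make the call shapes explicit.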
In certain of these embodiments, the routing of the request712to schedule the performance of one or more workload orchestration operations may be determined through the use of Domain Name System (DNS) resolution and dynamic route configuration approaches familiar to those of skill in the art. In various embodiments, the host OS agent736may be implemented to download and configure one or more services from the data center monitoring and management console118to process a particular workload738. In various embodiments, the host OS agent736may be implemented to download a particular workload738from the workload repository606. In certain embodiments, the workload738may be downloaded from the workload repository606via the Unix socket interface716, the BMC service module734, the USB NIC732, the embedded CMS client728, the CMS aggregator128, and the service mesh proxy434. Alternatively, the workload738may be downloaded in certain embodiments via the mTLS interface746, the non-embedded CMS client130, the CMS aggregator128, and the service mesh proxy434. In various embodiments, the host OS agent736may be implemented to receive a request712from the workload scheduler604module to perform one or more workload orchestration operations to process a particular workload738. In certain of these embodiments, the host OS agent736may be implemented to initiate the processing of the workload738according to instructions included in the request to perform the one or more workload orchestration operations. In various embodiments, the host OS agent736may be implemented to monitor the status of the workload as it is being processed. In various embodiments, the host OS agent736may be implemented to provide certain workload status monitoring information to the workload orchestrator602module. 
In various embodiments, the host OS agent736may be implemented to receive instructions from the workload orchestrator602module to delete the workload738once its processing has been completed. In various embodiments, the embedded CMS client728may be implemented to authenticate the request712to perform the one or more workload orchestration operations before authorizing it. Alternatively, the non-embedded CMS client130may be implemented in certain embodiments to authenticate the request712to perform the one or more workload orchestration operations before authorizing it. In various embodiments, the embedded CMS client728, or the non-embedded CMS client130, may likewise be implemented to process the request712to perform one or more workload orchestration operations to determine its final destination. In various embodiments, the final destination of the request712to perform one or more workload orchestration operations may be determined to be the host OS agent736. In various embodiments, if it is determined that the final destination of the request712to perform one or more workload orchestration operations is the host OS agent736, then it may be routed there by the embedded CMS client728via the USB NIC732, the BMC service module734, and the Unix socket interface716, based upon DNS resolution and dynamic route configuration. Alternatively, the request712to perform one or more workload orchestration operations may be routed in certain embodiments to the host OS agent736via the non-embedded CMS client130and the mTLS interface746. In various embodiments, the host OS agent736may be implemented to validate the request712to perform the one or more workload orchestration operations. Once the request712has been validated, the host OS agent736may be implemented in certain embodiments to parse the request712to determine which workloads738need to be downloaded714from the workload repository606. 
In various embodiments, the host OS agent736may be implemented to download714the workloads738from the workload repository606once the request712has been parsed. Once the request712has been parsed by the host OS agent736, it may then be implemented in certain embodiments to request the embedded CMS client728, or the non-embedded CMS client130, to add information to the DNS cache so the workload738can be externally accessed via a Hypertext Transfer Protocol (HTTP) interface while it is being processed. In various embodiments, only workload738processing status calls may be made through a CMS client728,130URL, such as:https://agent.cms1.endpoint1.site1.com In various embodiments, the workload738may be implemented to provide a direct REST interface for certain operations while it is being processed, such as:https://datacollector.agent.cms1.endpoint1.site1.com In various embodiments, the authentication and authorization of a particular workload's738REST interface may be performed by the embedded CMS client728, or the non-embedded CMS client130. In various embodiments, the host OS agent736may be implemented to remove the interface information from the DNS cache once processing of an associated workload738has been completed. In various embodiments, the workloads738downloaded714from the workload repository606may include software containers, software applications, or software packages. In certain of these embodiments, the host OS agent736may be implemented to validate the digital signatures of certain software containers, software applications, or software packages downloaded714from the workload repository606. In certain embodiments, the host OS agent736may be implemented to process the request712to check the feasibility of any startup scripts it may contain. 
In various embodiments, the host OS agent736may be implemented to initiate718the performance of the requested712one or more workload orchestration operations once the digital signatures of the software containers, software applications, or software packages have been validated and any start-up scripts have been checked for feasibility. In certain of these embodiments, the host OS agent736may likewise be implemented to monitor718the status of the one or more workload orchestration operations as they are being performed. In various embodiments, the host OS agent736may be implemented to return a workload Uniform Resource Locator (URL) to the workload orchestrator602module in response to the initial REST API call. In various embodiments, the BMC726may be implemented to provide an API for the CMS126to query the operational status of the USB NIC732. In certain of these embodiments, a response to such a query may provide information associated with the presence, or absence, of the BMC service module734, or the host OS agent736, or both. In various embodiments, the BMC726may be implemented to provide certain data center asset244configuration information. In certain of these embodiments, the provision of such data center asset244configuration information may allow the identification of standard, and non-standard, components the data center asset244may contain. Skilled practitioners of the art will recognize that many such embodiments and examples are possible. Accordingly, the foregoing is not intended to limit the spirit, scope, or intent of the invention. As will be appreciated by one skilled in the art, the present invention may be embodied as a method, system, or computer program product. Accordingly, embodiments of the invention may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in an embodiment combining software and hardware. 
These various embodiments may all generally be referred to herein as a “circuit,” “module,” or “system.” Furthermore, the present invention may take the form of a computer program product on a computer-usable storage medium having computer-usable program code embodied in the medium. Any suitable computer usable or computer readable medium may be utilized. The computer-usable or computer-readable medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, or a magnetic storage device. In the context of this document, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. Computer program code for carrying out operations of the present invention may be written in an object oriented programming language such as Java, Smalltalk, C++ or the like. However, the computer program code for carrying out operations of the present invention may also be written in conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. 
In the latter scenario, the remote computer may be connected to the user's computer through a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). Embodiments of the invention are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks. 
The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. The present invention is well adapted to attain the advantages mentioned as well as others inherent therein. While the present invention has been depicted, described, and is defined by reference to particular embodiments of the invention, such references do not imply a limitation on the invention, and no such limitation is to be inferred. The invention is capable of considerable modification, alteration, and equivalents in form and function, as will occur to those ordinarily skilled in the pertinent arts. The depicted and described embodiments are examples only, and are not exhaustive of the scope of the invention. Consequently, the invention is intended to be limited only by the spirit and scope of the appended claims, giving full cognizance to equivalents in all respects.
DETAILED DESCRIPTION Technology is described for detecting timestamp discontinuities and video resolution discontinuities within a packet stream and marking locations of the detected discontinuities within the packet stream. In some cases, prior to transmission of a packet stream to one or more client devices, a streaming electronic device may perform timestamp discontinuity detection by acquiring a sequence of packets (e.g., a sequence of audio and video packets) corresponding with a packet chunk, identifying a first timestamp associated with an earliest packet within the sequence of packets, identifying a second timestamp associated with a latest packet within the sequence of packets, determining a timestamp time difference between the first timestamp and the second timestamp, identifying a data rate at which the sequence of packets were encoded, determining a maximum chunk time difference based on the data rate at which the sequence of packets were encoded and a data size of the packet chunk, and detecting that a timestamp discontinuity exists within the packet chunk if the timestamp time difference is greater than the maximum chunk time difference. In response to detecting the timestamp discontinuity, the streaming electronic device may identify a location of the timestamp discontinuity within the packet chunk and notify the one or more client devices of the location of the timestamp discontinuity within the packet chunk prior to the one or more client devices receiving the entire packet chunk. In one example, the streaming electronic device may partition the sequence of packets into a first set of packets and a second set of packets based on the location of the timestamp discontinuity and transmit a timestamp discontinuity marker identifying the location of the timestamp discontinuity and/or a special discontinuity packet identifying the location of the timestamp discontinuity between the first set of packets and the second set of packets. 
A timestamp discontinuity may comprise a discontinuity or gap in time between two consecutive timestamps corresponding with two consecutive video packets. The discontinuity or gap in time may comprise an amount of time greater than the video play time or the length of video between the two consecutive timestamps. For example, if the difference in time between two consecutive timestamps corresponding with two consecutive video packets comprises ten seconds and the length of video between the two consecutive timestamps is only three seconds, then the gap in time between the two consecutive timestamps is seven seconds. The two consecutive video packets may comprise video packetized elementary stream (PES) packets within a packet stream or two adjacent video frames in a sequence of video frames. In one embodiment, a packet stream transmitted to a client device may include a first set of video PES packets (or a first set of video frames) corresponding with a video stream between one and 10 seconds and then a second set of video PES packets (or a second set of video frames) corresponding with the video stream between 15 seconds and 45 seconds; in this case, the video stream has a timestamp discontinuity or gap in time between 10 seconds and 15 seconds. Timestamp discontinuities within the packet stream may occur over time due to signal noise or a high bit error rate causing a transcoding pipeline that is transcoding packets to drop or leave out video packets from the packet stream. Timestamp discontinuities within the packet stream may also occur over time due to network issues that prevent one or more video packets of the packet stream from being pushed over a network to a client device. 
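The gap arithmetic in the worked example above can be expressed directly. A minimal sketch, with the function name being an assumption for illustration:

```python
def timestamp_gap_seconds(first_ts, second_ts, video_play_time):
    """Gap in time between two consecutive video timestamps: the
    elapsed timestamp time minus the actual length of video between
    them. A positive result indicates a timestamp discontinuity."""
    return (second_ts - first_ts) - video_play_time
```

Per the example, a ten-second difference between consecutive timestamps with only three seconds of video between them yields a seven-second gap: timestamp_gap_seconds(0.0, 10.0, 3.0) returns 7.0.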
In some cases, a streaming device streaming video to a client device may have a limited or small-sized memory buffer for storing only a small set of packets and the memory buffer may need to be flushed before each packet of the small set of packets has been successfully transferred to the client device. A resolution discontinuity may comprise a change in video resolution between two adjacent video packets (e.g., between two packetized elementary stream packets) within a packet stream or between two adjacent video frames in a sequence of video frames. In some cases, as available network bandwidth changes over time, a transcoding pipeline may change the bit rates and/or video resolution used for encoding video packets. For example, if the available network bandwidth is greater than a threshold bandwidth (e.g., is greater than 20 Mbps), then a video packet with a resolution of 1920×1080 (1080p) may be generated; however, if the available network bandwidth is less than or equal to the threshold bandwidth, then a video packet with a resolution of 1280×720 (720p) may be generated. One technical issue is that if a client device receives a packet stream with a timestamp discontinuity, then the client device may fail to properly decode and display video frames from the packet stream. However, if the client device is notified ahead of time of receiving packets with a timestamp discontinuity or is notified (e.g., using a special packet marking the point of the timestamp discontinuity) when the timestamp discontinuity exists within the packet stream, then the client device may be able to take measures to prevent a failure to properly decode and display video frames from the packet stream. In one embodiment, in order to notify the client device that a timestamp discontinuity exists within the packet stream, a special discontinuity packet or a null packet may be inserted into the packet stream between the two consecutive packets in which the timestamp discontinuity exists. 
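The bandwidth-based resolution selection in the example above can be sketched as follows; the function name and the default threshold value are illustrative assumptions drawn from the example figures:

```python
def select_resolution(bandwidth_mbps, threshold_mbps=20):
    """Choose an encoding resolution from available network bandwidth,
    per the example above: above the threshold, encode at 1920x1080
    (1080p); otherwise, encode at 1280x720 (720p)."""
    if bandwidth_mbps > threshold_mbps:
        return (1920, 1080)
    return (1280, 720)
```

A transcoding pipeline could re-evaluate this choice as measured bandwidth changes, which is the condition under which resolution discontinuities arise.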
The special discontinuity packet may be inserted at the point of the timestamp discontinuity within the packet stream and may indicate a length of the time gap corresponding with the timestamp discontinuity. The client device may receive the special discontinuity packet within the packet stream and detect the timestamp discontinuity based on the contents of the special discontinuity packet. In some cases, the client device may detect the existence of a timestamp discontinuity based on the presence of a discontinuity marker within a received packet that may comprise the last packet before the timestamp discontinuity. In order to detect a timestamp discontinuity, a sequence of video frame packets comprising a packet chunk of a chunk size (e.g., 96 KB) may be acquired and every video frame packet within the packet chunk may be parsed to detect whether a timestamp discontinuity exists within the packet chunk. One technical issue with fully parsing every video frame packet within the packet chunk or identifying timestamp information for every video frame packet within the packet chunk is that processor utilization may be excessive and audiovisual (AV) throughput may be reduced. One technical benefit of identifying two different timestamps at the ends of the packet chunk and detecting that a timestamp discontinuity exists within the packet chunk based on a difference between the two different timestamps is that processor utilization may be reduced and audiovisual (AV) throughput may be increased. 
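The insertion of a special discontinuity packet between the two consecutive packets at the point of the timestamp discontinuity can be sketched as below. The list-of-packets representation and the marker's dictionary format are assumptions for illustration; an actual stream would use a packet format the client device can parse:

```python
def insert_discontinuity_marker(packets, split_index, gap_seconds):
    """Partition the packet sequence at the detected discontinuity and
    insert a special discontinuity packet between the two halves. The
    marker records the length of the time gap so a client device can
    detect the discontinuity from the packet contents."""
    marker = {"type": "discontinuity", "gap_seconds": gap_seconds}
    return packets[:split_index] + [marker] + packets[split_index:]
```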
Furthermore, a technical benefit of identifying a first timestamp corresponding with a first video frame packet closest to the earliest arriving packet within the packet chunk, identifying a second timestamp corresponding with a second video frame packet closest to the latest arriving packet within the packet chunk, determining a timestamp time difference between the first timestamp and the second timestamp, calculating a maximum chunk time difference by dividing the chunk size by a data rate at which each packet of the packet chunk was encoded, and detecting that a timestamp discontinuity exists within the packet chunk in response to detecting that the timestamp time difference is greater than the maximum chunk time difference is that processor utilization may be reduced, memory usage may be reduced, and/or video frame packet throughput may be increased. In one example, the maximum chunk time difference may be calculated by dividing the chunk size by a bit rate (or data rate) at which each packet of the packet chunk was encoded. The bit rate may refer to the amount of data transmitted in a given amount of time (e.g., kilobytes per second). An elementary stream (ES) may contain only one type of data (e.g., only video data). A video elementary stream may be generated using a video encoder. An audio elementary stream may be generated using an audio encoder. The video elementary stream may comprise a sequence of video frames. In one example, the video frames may comprise I-Frames (or intra-coded frames that are coded without reference to any other frames), P-Frames (or predictive-coded frames that are coded using motion compensated prediction from a previous reference frame), and B-Frames (or bi-directionally predictive-coded frames that are coded using motion compensated prediction from previous reference frames and/or future reference frames). A video packetized elementary stream (PES) may correspond with a video ES that has been packetized. 
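The chunk-level detection described above can be sketched as follows; the function and parameter names are illustrative assumptions:

```python
def has_timestamp_discontinuity(first_ts, last_ts, chunk_size, data_rate):
    """Detect a timestamp discontinuity within a packet chunk by
    comparing the chunk's timestamp span against the maximum play time
    the chunk could hold at the rate its packets were encoded.
    chunk_size is in bytes; data_rate is in bytes per second;
    timestamps are in seconds."""
    timestamp_diff = last_ts - first_ts
    max_chunk_time = chunk_size / data_rate
    return timestamp_diff > max_chunk_time
```

For instance, a 96 KB chunk encoded at 32 KB/s can hold at most three seconds of media, so end-to-end timestamps spanning ten seconds indicate a discontinuity, while a span of 2.5 seconds does not. Only two timestamps are inspected per chunk, which is the source of the processor-utilization benefit described above.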
Video PES packets may have a variable length. Each video PES packet may correspond with one coded video frame and include a header with timestamp information for the coded video frame. FIG.1depicts one embodiment of a networked computing environment100in which the disclosed technology may be practiced. Networked computing environment100includes a plurality of computing devices interconnected through one or more networks180. The plurality of computing devices may include mobile computing devices (e.g., a smartphone) and non-mobile computing devices (e.g., a desktop computer). The one or more networks180allow a particular computing device to connect to and communicate with another computing device. The depicted computing devices include mobile smartphone110, laptop computing device112, network-connected digital television114, hardware server116, streaming device140, and client device160. In some embodiments, the plurality of computing devices may include other computing devices not shown. In some embodiments, the plurality of computing devices may include more than or less than the number of computing devices depicted inFIG.1. The one or more networks180may include a cellular network, a mobile network, a wireless network, a wired network, a secure network such as an enterprise private network, an unsecure network such as a wireless open network, a local area network (LAN), a wide area network (WAN), the Internet, or a combination of networks. Each network of the one or more networks180may include hubs, bridges, routers, switches, and wired transmission media such as a wired network or direct-wired connection. In some embodiments, computing devices within the networked computing environment100may comprise real hardware computing devices or virtual computing devices, such as one or more virtual machines. 
Networked storage devices within the networked computing environment100may comprise real hardware storage devices or virtual storage devices, such as one or more virtual disks. The real hardware storage devices may include non-volatile and volatile storage devices. Networked computing environment100may provide a cloud computing environment for one or more computing devices. Cloud computing may refer to Internet-based computing, wherein shared resources, software, and/or information are provided to the one or more computing devices on-demand via the Internet (or other network). The term “cloud” may be used as a metaphor for the Internet, based on the cloud drawings used in computer networking diagrams to depict the Internet as an abstraction of the underlying infrastructure it represents. The client device160may comprise a portable electronic device, a mobile smartphone, a laptop computer, a tablet computer, a desktop computer, a network-connected digital television, or an electronic device with a display screen. One embodiment of client device160includes a network interface165, processor166, memory167, disk168, and display169all in communication with each other. Network interface165allows client device160to connect to one or more networks180. Network interface165may include a wireless network interface and/or a wired network interface. Processor166allows client device160to execute computer readable instructions stored in memory167in order to perform processes discussed herein. Processor166may include one or more processing units, such as one or more CPUs and/or one or more GPUs. Memory167may comprise one or more types of memory (e.g., RAM, SRAM, DRAM, ROM, EEPROM, or Flash). Memory167may comprise a hardware storage device. Display169may comprise a touch-screen display or a light emitting diode (LED) display. In some cases, the server116may comprise a server within a data center. 
The data center may include one or more servers, such as server116, in communication with one or more storage devices. The servers and data storage devices within a data center may be in communication with each other via a networking fabric connecting servers and data storage units within the data center to each other. In general, a “server” may refer to a hardware device that acts as the host in a client-server relationship or a software process that shares a resource with or performs work for one or more clients. Communication between computing devices in a client-server relationship may be initiated by a client sending a request to the server asking for access to a particular resource or for particular work to be performed. The server may subsequently perform the actions requested and send a response back to the client. The streaming device140may comprise a network-connected electronic device. One embodiment of streaming device140includes a network interface145, processor146, memory147, and disk148all in communication with each other. Network interface145allows streaming device140to connect to one or more networks180. Network interface145allows streaming device140to connect to one or more client devices, such as client device160, via the one or more networks180. Network interface145may include a wireless network interface and/or a wired network interface. Processor146allows streaming device140to execute computer readable instructions stored in memory147in order to perform processes discussed herein. Processor146may include one or more processing units, such as one or more CPUs and/or one or more GPUs. Memory147may comprise one or more types of memory (e.g., RAM, SRAM, DRAM, ROM, EEPROM, or Flash). Memory147may comprise a hardware storage device. The processor146and memory147may be configured to allow the streaming device140to encrypt video streams, to reduce the video resolution of video streams, and to perform transcoding operations. 
The streaming device140is connected to an antenna122for receiving over-the-air digital television signals. The streaming device140may stream video streams received from the antenna122to client devices located across a local area network or a wide area network. The antenna122may comprise an over-the-air HDTV antenna for receiving HDTV digital broadcasting signals. HDTV digital broadcasting has several limitations due to the nature of terrestrial systems that broadcast HDTV signals. For example, terrestrial broadcasting systems typically transmit radio frequency (RF) signals from antenna towers that are located in different geographical regions and any obstructing features (e.g., mountains and buildings) between the transmitting antenna (or transmitter) and a receiving antenna (or receiver) may block or weaken the transmitted signals. Moreover, the quality of transmitted signals may be impacted by the presence of electromagnetic interference (EMI) caused by nearby power lines, RF interference caused by nearby radio transmitters, and other sources of RF noise. An HDTV receiver may only be able to reliably decode HDTV content that is received with a signal strength or signal to noise ratio that is sufficient to decode the transmitted HDTV content with an acceptable or correctable number of errors. In one embodiment, the streaming device140may include a wired networking interface (e.g., an Ethernet networking interface), a wireless interface (e.g., a WiFi networking interface), a 1 TB hard drive, a 1 GB SDRAM, one or more OTA tuners, and one or more hardware processors with dedicated video processing and transcoding capability. The streaming device140may include a dual channel transcoder for transcoding high-definition television programs (e.g., from MPEG-2 to MPEG-4 or H.264) from two transport streams. An MPEG transport stream (or transport stream) may comprise a digital container format for the transmission and storage of audio and video information. 
In some cases, the transport stream may carry data for multiple programs and channels and may include packetized elementary streams with error correction and synchronization pattern features for ensuring data transmission integrity. The transport stream may include one or more video packetized elementary streams, one or more audio packetized elementary streams, scheduling information, and closed captioning information. In some embodiments, the streaming device140may detect that a client device, such as client device160, has requested to receive a live video stream captured by antenna122, and in response, the streaming device140may transmit a reduced resolution version of the live video stream to the client device, determine encryption and decryption keys for passing encrypted messages to the client device while the reduced resolution version of the live video stream is transmitted to the client device, communicate a point in time (or a time period) after which an encrypted version of the live video stream will be transmitted to the client device, and then generate and transmit the encrypted version of the live video stream to the client device subsequent to the point in time (or subsequent to the time period). A video stream or the electronic files or packets comprising portions of the video stream may be encrypted using symmetric encryption or asymmetric encryption (e.g., public-key cryptography) prior to being transmitted to a client device. In one example, video stream packets may be encrypted using a symmetric key encryption algorithm (e.g., AES or triple DES) or an algorithm that utilizes symmetric and asymmetric keys to encrypt data (e.g., PGP). In another example, transport stream packet payloads may be encrypted prior to transmission using a symmetric cryptographic algorithm (e.g., AES) or an asymmetric cryptographic algorithm (e.g., RSA). 
A symmetric session key allows the client device and the streaming device to use the same session key to encrypt and decrypt audiovisual content and other data. FIG.2Adepicts one embodiment of a streaming device240for streaming audiovisual content to one or more client devices. In one example, the streaming device240may correspond with the streaming device140inFIG.1. As depicted, the streaming device240includes a receiver220for receiving over-the-air digital television signals from an antenna, a transcoding pipeline222, an encryption engine224, a video resolution reducer228, control circuitry218, a streaming server226, and a digital video recorder (DVR) storage230. The receiver220may receive a video stream encoded using a first video coding format (e.g., MPEG-2) from an over-the-air HDTV antenna. The video resolution reducer228may generate a video stream with a reduced video resolution (e.g., cutting the number of pixels in half). The transcoding pipeline222may be used to generate a video stream encoded using a second video coding format (e.g., MPEG-4 or H.264) different from the first video coding format. The control circuitry218may be implemented using an application-specific integrated circuit, a microcontroller (e.g., by executing appropriate instructions), a field-programmable gate array, or a reconfigurable circuit. The control circuitry218may configure the receiver220, the transcoding pipeline222, and/or the encryption engine224to perform processes discussed herein. The encryption engine224may encrypt packets or packet payloads corresponding with either the video stream encoded using the first video coding format or the video stream encoded using the second video coding format. 
The streaming server226may stream an encrypted version of the video stream encoded using the first video coding format, an encrypted version of the video stream encoded using the second video coding format, an unencrypted version of the video stream encoded using the first video coding format, or a reduced resolution version of the video stream encoded using the first video coding format. In some cases, the streaming device240may convert a video stream from a first video coding format (e.g., from an MPEG-2 or H.262 format) to a second video coding format (e.g., to an MPEG-4 or H.264 format) different from the first video coding format prior to generating an encrypted version of the video stream. The second video coding format may utilize a higher compression rate and/or a lower bit rate compared with the first video coding format. A transcoding process to convert video content from the first video coding format to the second video coding format may decompress the encoded video content into an uncompressed format and then re-compress the uncompressed video content using the second video coding format. The transcoding pipeline222may perform the digital-to-digital conversion in which data that is encoded using the first video coding format is encoded using the second video coding format. The streaming device240may also reduce the resolution of images within a received video stream (e.g., by reducing the number of pixels per image) and/or adjust the image aspect ratio for the video stream (e.g., adjust the aspect ratio from 4:3 to 16:9) prior to generating an encrypted version of the video stream. The streaming device240may also convert a video stream from the first video coding format to a second video coding format and then transmit an unencrypted version of the video stream using the second video coding format prior to generating an encrypted version of the video stream. 
The streaming server226includes a discontinuity detector216for detecting timestamp discontinuities and/or video resolution discontinuities that occur within a transport stream transmitted by the streaming server226. The discontinuity detector216may sample or acquire packets as they are being transmitted to a client device or prior to the packets being transmitted to a client device. In one embodiment, when a timestamp discontinuity or a video resolution discontinuity is detected, the discontinuity detector216may output or transmit a point-in-time marker to one or more client devices identifying a location of the timestamp discontinuity or the video resolution discontinuity within the transport stream. In another embodiment, the discontinuity detector216may insert a special discontinuity packet or a null packet into the transport stream at the location of the discontinuity (e.g., between two consecutive packets in which a timestamp discontinuity exists). FIG.2Bdepicts one embodiment of a digital container format and circuitry for transmitting audiovisual content from a streaming device to one or more client devices. The digital container format may allow the media content streams242to be combined and transported over a single packetized transport stream282. The transport stream282may be received by a client device, such as client device160inFIG.1, and demultiplexed to generate audio and video streams for display on the client device. Circuitry including the transport stream multiplexer (TS MUX)280may be included within a streaming server, such as the streaming server226ofFIG.2A, to generate the transport stream282. Transport stream multiplexing may refer to a process in which two or more elementary streams are converted into a single transport stream. The media content streams242may include audiovisual content for multiple digital television programs received from an over-the-air HDTV antenna. 
The substreams of the media content streams242may include a video stream250, a corresponding audio stream260, and a corresponding metadata stream270. The metadata stream270may include closed captioning and subtitling information. In one example, the media content streams242may include a first audio stream for a first language and a second audio stream for a second language that may be synchronized to a common video stream. Elementary streams (e.g., video and audio streams) may be packetized into packetized elementary streams and each packetized elementary stream (PES) may be divided into transport stream packets for transport over the transport stream282. The media content streams242may comprise multiple packetized elementary streams that are time division multiplexed by the transport stream multiplexer (TS MUX)280to generate the transport stream282. Each packetized elementary stream (PES) may comprise a stream of PES packets and each PES packet252may include a PES packet header254and a PES packet payload256. The PES packet252may have a variable packet length (e.g., up to a maximum length of 64 KB) while transport stream packets may have a fixed length (e.g., each transport stream packet may be 188 bytes in length). Each PES packet may be partitioned into a number of transport stream packets as a PES packet that represents an access unit (a frame) in an elementary stream is usually larger than the transport stream packet size (e.g., PES packet length is usually larger than 188 bytes). Each transport stream packet may include a packet identifier (PID) to associate the transport stream packet with the PES packet from which it originated. The PES packet header254may include packet identifier information that distinguishes between audio and video PES packets. In some embodiments, each video PES packet may correspond with a video frame. 
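The partitioning of a variable-length PES packet into fixed-length transport stream packets described above can be sketched as follows. This is a minimal illustration, not the actual MPEG-2 transport stream bit format: the 4-byte header size matches real TS packets, but the dict-based packet layout and function names are assumptions for illustration only.

```python
TS_PACKET_SIZE = 188   # fixed transport stream packet length in bytes
TS_HEADER_SIZE = 4     # a TS packet header is 4 bytes, leaving 184 for payload

def partition_pes_packet(pes_bytes: bytes, pid: int) -> list:
    """Split a variable-length PES packet into fixed-size transport stream
    packets, each tagged with the PID that associates it with the PES packet
    from which it originated."""
    payload_size = TS_PACKET_SIZE - TS_HEADER_SIZE
    packets = []
    for offset in range(0, len(pes_bytes), payload_size):
        chunk = pes_bytes[offset:offset + payload_size]
        # The final payload is padded up to the fixed size; padding is shown
        # here as 0xFF fill bytes.
        padded = chunk + b"\xff" * (payload_size - len(chunk))
        packets.append({"pid": pid, "payload": padded})
    return packets
```

For example, a maximum-length 64 KB PES packet splits into 357 transport stream packets of 184 payload bytes each, which is why a PES packet is usually larger than any single 188-byte transport stream packet.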
The PES packet header254may also include timestamp information for the PES packet payload256to allow for the synchronization of a program's audiovisual content. Audio-video synchronization may be performed using timestamp information incorporated into the headers of both video and audio PES packets. The transport stream282may contain multiple programs (e.g., associated with different digital television channels) and each program may have its own time base. The timestamp information within the PES packet header254may include a Decoding Timestamp (DTS), a Presentation Timestamp (PTS) (e.g., used with MPEG-2), and a Composite Timestamp (CTS) (e.g., used with MPEG-4). The PES packet header254may also include reference clock information and/or timestamp information corresponding with an elementary stream clock reference (ESCR) from which decoders of PES streams may derive reference timing. In some cases, a reference clock may be periodically transmitted in the transport stream. In one example, a Program Clock Reference (PCR) may be periodically transmitted in the adaption field of a transport stream packet (e.g., every 100 ms). In some cases, audio and video elementary streams may be synchronized by combining the PCR with the PTS (which may specify a time relative to the PCR) that resides in the PES packet header. As depicted inFIG.2B, a separate messaging channel244may be used to transmit and receive messages between a streaming device and one or more client devices. For example, the messaging channel244may be used by a streaming device to set maximum and minimum times for receiving encrypted audiovisual content and a client device may communicate an encryption ready signal to the streaming device to indicate that the client device is configured and available to decrypt encrypted audiovisual content. 
FIG.3Adepicts one embodiment of a streaming device340concurrently streaming audiovisual content to a plurality of client devices including client device360and client device362. The streaming device may concurrently stream different digital television channels or different programs to two or more client devices. In one example, the streaming device340may correspond with the streaming device140inFIG.1and the client device360may correspond with the client device160inFIG.1. Upon detecting a request from a client device for a particular video stream or upon detecting that a new session for streaming the particular video stream to the client device should be established, the streaming device340may determine whether to set a fixed time delay before streaming the requested encrypted video stream or to provide a timing window corresponding with a minimum point in time and a maximum point in time during which the client device may signal via an encryption ready signal to the streaming device that the client device is ready to accept the encrypted video stream. If the client device does not provide the encryption ready signal by the maximum point in time, then the streaming device340may transmit the encrypted video stream starting at the maximum point in time. The streaming device340may simultaneously capture or receive one or more digital television channels using an OTA HDTV antenna, such as antenna122. In one embodiment, the client device360may request a live stream of a first digital television channel from the streaming device340and in response the streaming device340may generate a reduced resolution version of the live stream of the first digital television channel and transmit the reduced resolution video stream302to the client device360. 
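The timing-window decision above can be sketched as a small policy function: the encrypted stream starts when the client signals encryption-ready within the window, and falls back to the maximum point in time if no signal arrives by then. The function name and the rule of never starting before the minimum point in time are illustrative assumptions consistent with, but not mandated by, the description.

```python
def encrypted_stream_start(min_time: float, max_time: float,
                           ready_signal_time=None) -> float:
    """Return the point in time at which the streaming device begins
    transmitting the encrypted video stream to the client device."""
    if ready_signal_time is not None and ready_signal_time <= max_time:
        # An early encryption-ready signal is honored, but never before
        # the minimum point in time of the window (assumed behavior).
        return max(ready_signal_time, min_time)
    # The client never signaled, or signaled too late: start at the
    # maximum point in time.
    return max_time
```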
While the reduced resolution video stream302is streamed to the client device360, the streaming device340may perform an encryption handshake with the client device360to determine encryption and decryption keys or may transmit decryption information and a corresponding start time for receiving an encrypted video stream within a metadata stream, such as metadata stream270inFIG.2B, or via a messaging channel, such as messaging channel244inFIG.2B. In some cases, the stream302may correspond with the video stream250inFIG.2Bor the transport stream282inFIG.2B. The streaming device340includes a discontinuity detector342for detecting timestamp discontinuities and/or video resolution discontinuities occurring within video packets transmitted from the streaming device340. The client device360includes a discontinuity detector364for detecting timestamp discontinuities occurring within video packets received from the streaming device340. In some cases, the streaming device340may communicate to the client device360that the streaming device340is not able to perform timestamp discontinuity detection via a discontinuity detection disabled signal304. Upon detection that the streaming device340is not able to perform timestamp discontinuity detection, the client device360may enable the discontinuity detector364to acquire incoming transport packets or incoming video packets from the streaming device340and detect timestamp discontinuities occurring within the incoming packets. The streaming device340may transmit a stream312to client device362. The stream312may comprise a video stream, such as the video stream250inFIG.2B, or a transport stream that includes video and audio packets, such as the transport stream282inFIG.2B. The stream312may comprise a video stream that is displayed using a display of the client device362. 
In some cases, the discontinuity detector342may acquire packets from a video stream before the video stream is time-division multiplexed by a transport stream multiplexer, such as the transport stream multiplexer (TS MUX)280inFIG.2B. In other cases, the discontinuity detector342may acquire packets from a transport stream, such as the transport stream282inFIG.2B. In some embodiments, the streaming device340and the client device360may perform a handshaking operation in order to determine whether discontinuity detection should be performed by the streaming device340or the client device360. In one example, if the streaming device340is streaming less than a threshold number of video streams, then the discontinuity detector342of the streaming device340may be enabled and the discontinuity detector364of the client device360may be disabled. In another example, if the CPU usage or total processor utilization of the streaming device340is greater than a threshold, then the discontinuity detector342of the streaming device340may be disabled and the discontinuity detector364of the client device360may be enabled. In one embodiment, upon detection of a timestamp discontinuity within the stream312, which may comprise a transport stream or a packet stream transmitted from the streaming device340to the client device360, the streaming device340may communicate a location of the timestamp discontinuity via a discontinuity marker314. The discontinuity marker314may identify the location of the timestamp discontinuity using a location timestamp or a packet identifier corresponding with the last packet transmitted to the client device360before the timestamp discontinuity occurs. In some cases, the streaming device340may communicate a location of the timestamp discontinuity by transmitting a first packet identifier and a second packet identifier corresponding with two consecutive packets that straddle the timestamp discontinuity. 
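The handshaking policy above, which decides whether the streaming device or the client device runs discontinuity detection, can be sketched as below. The specific thresholds (four streams, 80% CPU) are illustrative assumptions; the description only requires that some threshold on stream count and processor utilization be applied.

```python
def detection_on_streaming_device(active_streams: int,
                                  cpu_utilization: float,
                                  max_streams: int = 4,
                                  max_cpu: float = 0.80) -> bool:
    """Return True if the streaming device should enable its discontinuity
    detector, or False if detection should instead be enabled on the
    client device."""
    if cpu_utilization > max_cpu:
        # Processor utilization above the threshold: offload detection
        # to the client device.
        return False
    # Otherwise, detect on the streaming device only while it is streaming
    # fewer than the threshold number of video streams.
    return active_streams < max_streams
```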
FIG.3Bdepicts one embodiment of a packet chunk350that comprises a sequence of packets. The sequence of packets may include video packets, such as video packet352, and audio packets, such as audio packet354. The number of audio packets may be less than the number of video packets within the sequence of packets. The video packet352may comprise a video PES header356and a video PES packet payload357. The audio packet354may comprise an audio PES header358and an audio PES packet payload359. The video packet352may be partitioned into a first set of transport stream packets including transport stream packet351and the audio packet354may be partitioned into a second set of transport stream packets including transport stream packet353. Each transport stream packet may comprise a fixed length of 188 bytes. The first set of transport stream packets and the second set of transport stream packets may be transmitted within the transport stream330. The video PES header356may include timestamp information (e.g., a timestamp) for the video PES packet payload357, which may correspond with a particular video frame. The audio PES header358may include timestamp information for the audio PES packet payload359. FIG.3Cdepicts one embodiment of a packet chunk370and a process of identifying a location of a timestamp discontinuity within the packet chunk370. The packet chunk370includes a sequence of packets. In one example, the packet chunk370may correspond with the packet chunk350inFIG.3B. The packet chunk370may include video packets and audio packets. In one embodiment, packets371-373may comprise video packets within the packet chunk370. In another embodiment, packets371-373may comprise audio packets within the packet chunk370. A first time difference374may comprise a time difference between a first timestamp associated with packet371and a second timestamp associated with packet372. 
A second time difference375may comprise a time difference between the second timestamp associated with packet372and a third timestamp associated with packet373. In some embodiments, the time difference between the first timestamp associated with the packet371and the third timestamp associated with the packet373may correspond with a maximum chunk time difference. As the first timestamp associated with the packet371may correspond with the earliest transmitted packet within the sequence of packets and the third timestamp associated with the packet373may correspond with the latest transmitted packet within the sequence of packets, the time difference between the first timestamp and the third timestamp may be computed and compared with the maximum chunk time difference for the entire packet chunk370to detect the presence of a timestamp discontinuity. In one embodiment, the maximum chunk time difference may be calculated by dividing a chunk size of the packet chunk370by a bit rate (or data rate) at which each audio or video packet of the packet chunk was encoded. If the timestamp time difference is greater than the maximum chunk time difference, then a timestamp discontinuity exists within the packet chunk370. As an example, if the chunk size of the packet chunk370comprises 96 KB and the bit rate at which each video packet was encoded comprises 64 KB per second, then the maximum chunk time difference is 96 KB/64 KBps or 1.5 seconds. If the time difference between timestamps corresponding with the earliest transmitted video packet and the latest transmitted video packet within the sequence of packets is greater than 1.5 seconds, then a timestamp discontinuity exists within the packet chunk370. In some embodiments, after the timestamp discontinuity has been detected, then a binary search may be performed in order to identify the location of the timestamp discontinuity. 
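The chunk-level test described above reduces to two lines of arithmetic: the maximum chunk time difference is the chunk size divided by the encoded bit rate, and a discontinuity is flagged when the spread between the earliest and latest timestamps exceeds that bound. A minimal sketch, with illustrative function names:

```python
def max_chunk_time_difference(chunk_size_bytes: int,
                              bit_rate_bytes_per_s: float) -> float:
    """Upper bound, in seconds, on the time a continuous packet chunk can
    span: chunk size divided by the bit rate at which it was encoded."""
    return chunk_size_bytes / bit_rate_bytes_per_s

def has_timestamp_discontinuity(first_ts: float, last_ts: float,
                                chunk_size_bytes: int,
                                bit_rate_bytes_per_s: float) -> bool:
    """Flag a discontinuity when the timestamp spread between the earliest
    and latest packets exceeds the maximum chunk time difference."""
    bound = max_chunk_time_difference(chunk_size_bytes, bit_rate_bytes_per_s)
    return (last_ts - first_ts) > bound
```

Using the numbers from the example above, a 96 KB chunk encoded at 64 KB per second yields a 1.5 second bound, so a 2.0 second timestamp spread would be flagged while a 1.0 second spread would not.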
A binary search method for identifying the location of the timestamp discontinuity may involve dividing a subset of the sequence of packets into two portions, determining timestamp time differences for the two portions, and then identifying which of the two portions includes the timestamp discontinuity (e.g., the portion with the greatest timestamp time difference may comprise the portion with the timestamp discontinuity). The binary search may be performed recursively (or iteratively) until the location of the timestamp discontinuity is identified. In one example, the timestamp for a packet in the middle of the sequence of packets, such as packet372, may be sampled and the first time difference374may be compared with the second time difference375to determine whether the time discontinuity exists within packets between the first timestamp associated with packet371and the second timestamp associated with packet372or within packets between the second timestamp associated with packet372and the third timestamp associated with packet373. Thereafter, a recursive process may be performed to identify the location of the timestamp discontinuity. In another embodiment, the timestamps of every video packet within the sequence of packets may be sampled to determine the location of the timestamp discontinuity. In another embodiment, the timestamps of every audio packet within the sequence of packets may be sampled to determine the location of the timestamp discontinuity. As depicted inFIG.3C, the packet322may comprise the earliest transmitted packet within the packet chunk370or the earliest arriving packet within the packet chunk370. The packet324may comprise the latest transmitted packet within the packet chunk370or the latest arriving packet within the packet chunk370. 
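The recursive binary search described above can be sketched as follows, assuming the chunk contains exactly one discontinuity (one abnormally large timestamp gap). The list-of-timestamps representation and function name are illustrative assumptions; the timestamps are taken in transmission order, and the half with the greater time difference is recursed into.

```python
def locate_discontinuity(timestamps, lo=0, hi=None):
    """Return the index of the last packet before the timestamp
    discontinuity, found by recursive binary search over sampled
    timestamps (in transmission order)."""
    if hi is None:
        hi = len(timestamps) - 1
    if hi - lo == 1:
        # The discontinuity lies between packets lo and lo + 1.
        return lo
    mid = (lo + hi) // 2
    # The portion with the greater timestamp time difference is the
    # portion containing the discontinuity.
    if timestamps[mid] - timestamps[lo] >= timestamps[hi] - timestamps[mid]:
        return locate_discontinuity(timestamps, lo, mid)
    return locate_discontinuity(timestamps, mid, hi)
```

Each recursion halves the search range, so only O(log n) timestamps need to be sampled, compared with sampling every video or audio packet in the alternative embodiments.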
In one embodiment, a streaming electronic device, such as streaming device226inFIG.2A, may transmit a sequence of packets in which packet322comprising the earliest transmitted packet is transmitted before transmitting packet324comprising the latest transmitted packet. In another embodiment, a client device, such as client device360inFIG.3A, may receive a sequence of packets in which packet322comprising the earliest arriving packet is received before receiving packet324comprising the latest arriving packet. FIG.3Ddepicts one embodiment of two consecutive packet chunks comprising packet chunk370and packet chunk380. As depicted, a timestamp time difference for the packet chunk370may comprise a time difference between a first timestamp associated with the earliest transmitted video packet376and the latest transmitted video packet377. The timestamp time difference for the packet chunk380may comprise a time difference between a first timestamp associated with the earliest transmitted video packet386and the latest transmitted video packet387. FIG.3Edepicts one embodiment of two consecutive packet chunks comprising packet chunk370and packet chunk380. As depicted, a timestamp time difference for the packet chunk370may comprise a time difference between a first timestamp associated with the earliest transmitted audio packet378and the latest transmitted audio packet379. The timestamp time difference for the packet chunk380may comprise a time difference between a first timestamp associated with the earliest transmitted audio packet388and the latest transmitted audio packet389. In some embodiments, the process of identifying the earliest transmitted audio packet within a packet chunk that includes timestamp information may include acquiring the earliest transmitted packet within the packet chunk and determining whether the earliest transmitted packet comprises an audio packet that includes timestamp information. 
Thereafter, the headers of consecutive packets within the sequence of packets are read from the earliest transmitted packet to the latest transmitted packet to identify the earliest transmitted audio packet within the sequence of packets that includes timestamp information. The process of identifying the latest transmitted audio packet within the packet chunk that includes timestamp information may include acquiring the latest transmitted packet within the packet chunk and determining whether the latest transmitted packet comprises an audio packet that includes timestamp information. Thereafter, the headers of consecutive packets within the sequence of packets are read from the latest transmitted packet to the earliest transmitted packet to identify the latest transmitted audio packet that includes timestamp information. FIG.3Fdepicts one embodiment of two consecutive packet chunks comprising packet chunk370and packet chunk380. As depicted, a timestamp time difference for the packet chunk370may comprise a time difference between a first timestamp associated with the earliest transmitted audio packet391and the latest transmitted video packet392. The timestamp time difference for the packet chunk380may comprise a time difference between a first timestamp associated with the earliest transmitted video packet393and the latest transmitted audio packet394. Thus, in some cases, timestamp information may be obtained from both audio packets and video packets within the same packet chunk. FIG.3Gdepicts one embodiment of two consecutive packet chunks comprising packet chunk370and packet chunk380. As depicted, a timestamp time difference for the packet chunk370may comprise a time difference between a first timestamp associated with the earliest transmitted video packet395within the packet chunk370and a second timestamp associated with the earliest transmitted video packet396within the packet chunk380. 
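The header-scanning procedure described above, which walks inward from each end of the chunk until it finds an audio packet whose header carries a timestamp, can be sketched as below. Packets are represented as dicts with illustrative fields ("kind", "timestamp"); not every packet carries timestamp information, so packets without it are skipped.

```python
def earliest_audio_timestamp(packets):
    """Scan forward from the earliest transmitted packet for the first
    audio packet whose header includes timestamp information."""
    for pkt in packets:
        if pkt.get("kind") == "audio" and "timestamp" in pkt:
            return pkt["timestamp"]
    return None  # no audio packet in the chunk carries a timestamp

def latest_audio_timestamp(packets):
    """Scan backward from the latest transmitted packet for the last
    audio packet whose header includes timestamp information."""
    for pkt in reversed(packets):
        if pkt.get("kind") == "audio" and "timestamp" in pkt:
            return pkt["timestamp"]
    return None
```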
In this case, rather than determining the timestamp time difference for the packet chunk370using timestamp information from two video packets within the packet chunk370, only a single timestamp from the earliest transmitted video packet395within the packet chunk370is identified and used to compute the timestamp time difference for the packet chunk370. In some embodiments, a timestamp time difference for the packet chunk380may comprise a time difference between a first timestamp associated with the latest transmitted video packet within the packet chunk370and a second timestamp associated with the latest transmitted video packet within the packet chunk380. FIGS.4A-4Bdepict a flowchart describing one embodiment of a process for detecting timestamp discontinuities within packet chunks. In one embodiment, the process ofFIGS.4A-4Bmay be performed by a streaming electronic device, such as the streaming device140inFIG.1or the streaming device340inFIG.3A. In one embodiment, the process ofFIGS.4A-4Bmay be performed by a client device, such as the client device160inFIG.1or the client device360inFIG.3A. In some embodiments, the process or portions of the process ofFIGS.4A-4Bmay be performed using one or more virtual machines and one or more virtual storage devices. In step402, it is detected that timestamp discontinuity detection should be performed by a streaming device transmitting transport stream packets instead of a client device receiving the transport stream packets. In step404, a number of previously detected timestamp discontinuities is determined. A counter stored in a memory, such as memory147inFIG.1, may increment each time a timestamp discontinuity is detected. In step406, a chunk size for a packet chunk is determined based on the number of previously detected timestamp discontinuities. 
If the number of previously detected timestamp discontinuities exceeds a threshold number (e.g., exceeds 100), then the chunk size for the packet chunk may be decreased (e.g., cut in half). In step408, a sequence of packets corresponding with the packet chunk is acquired. In step410, the sequence of packets corresponding with the packet chunk is buffered. The sequence of packets may be buffered using a memory, such as memory147inFIG.1. In step412, a first timestamp associated with the earliest arriving packet within the sequence of packets that includes timestamp information is identified. In some cases, the earliest arriving packet within the sequence of packets may comprise the earliest transmitted packet within the sequence of packets that was transmitted from the streaming device. In other cases, the earliest arriving packet within the sequence of packets may comprise the earliest arriving packet within the sequence of packets that was received by the client device. In step414, a second timestamp associated with the latest arriving packet within the sequence of packets that includes timestamp information is identified. In some cases, the latest arriving packet within the sequence of packets may comprise the latest transmitted packet within the sequence of packets that was transmitted from the streaming device. In step416, a timestamp time difference is determined using the first timestamp and the second timestamp. The timestamp time difference may comprise a second time corresponding with the second timestamp minus a first time corresponding with the first timestamp. In step418, a bit rate at which the sequence of packets was encoded is identified. In step420, a maximum chunk time difference is determined using the bit rate at which the sequence of packets was encoded and the chunk size of the packet chunk. The maximum chunk time difference may be estimated as or comprise the chunk size divided by the bit rate. 
In step422, it is determined whether the timestamp time difference is greater than the maximum chunk time difference. If the timestamp time difference is not greater than the maximum chunk time difference, then it may be deemed that no timestamp discontinuity exists within the packet chunk, and in response step424is performed in which the process of detecting timestamp discontinuities skips to the next packet chunk. Otherwise, if the timestamp time difference is greater than the maximum chunk time difference, then it may be deemed that a timestamp discontinuity exists within the packet chunk, and in response step426is performed. In step426, it is detected that a timestamp discontinuity exists within the packet chunk based on the timestamp time difference and the maximum chunk time difference. In step428, the sequence of packets is parsed to identify a location of the timestamp discontinuity. In step430, the sequence of packets is partitioned into a first set of packets and a second set of packets based on the location of the timestamp discontinuity. In step432, a timestamp discontinuity marker identifying the location of the timestamp discontinuity is transmitted. In one example, the timestamp discontinuity marker may be transmitted from a streaming device to a client device. In step434, one or more discontinuity packets are transmitted or inserted between the first set of packets and the second set of packets. FIGS.4C-4Ddepict a flowchart describing an alternative embodiment of a process for detecting timestamp discontinuities within packet chunks. In one embodiment, the process ofFIGS.4C-4Dmay be performed by a streaming electronic device, such as the streaming device140inFIG.1or the streaming device340inFIG.3A. In one embodiment, the process ofFIGS.4C-4Dmay be performed by a client device, such as the client device160inFIG.1or the client device360inFIG.3A. 
In some embodiments, the process or portions of the process ofFIGS.4C-4Dmay be performed using one or more virtual machines and one or more virtual storage devices. In step442, it is detected that timestamp discontinuity detection should be enabled. In one embodiment, timestamp discontinuity detection is enabled by enabling a discontinuity detector, such as the discontinuity detector342inFIG.3A, in response to detecting that at least a threshold number of decoder errors have occurred. In step444, a chunk size for a packet chunk is determined. The chunk size may comprise the total data size for the packet chunk. In step446, a sequence of packets corresponding with the packet chunk with an order starting with an earliest transmitted packet and ending with a latest transmitted packet is acquired. In some cases, a sequence of packets corresponding with the packet chunk with an order starting with an earliest arriving packet and ending with a latest arriving packet may be acquired. In one embodiment, the earliest arriving packet may comprise the earliest arriving packet at a client device, such as the client device360inFIG.3A, and the latest arriving packet may comprise the latest arriving packet at the client device. In step448, a first timestamp corresponding with a first packet of the sequence of packets closest to the earliest transmitted packet that includes timestamp information is identified. In step450, a second timestamp corresponding with a second packet of the sequence of packets closest to the latest transmitted packet that includes timestamp information is identified. In some cases, not every packet of the sequence of packets includes timestamp information. In step452, a timestamp time difference between the first timestamp and the second timestamp is determined. In step454, a data rate at which each packet of the sequence of packets was encoded is identified. 
In step456, a maximum chunk time difference is calculated by dividing the chunk size by the data rate. In step458, it is detected that the timestamp time difference is greater than the maximum chunk time difference. In step460, it is detected that a timestamp discontinuity exists within the packet chunk in response to detecting that the timestamp time difference is greater than the maximum chunk time difference. In step462, the sequence of packets is parsed to identify a location of the timestamp discontinuity. In step464, the sequence of packets is partitioned into a first set of packets and a second set of packets based on the location of the timestamp discontinuity. In step466, a timestamp discontinuity marker identifying the location of the timestamp discontinuity is transmitted. In step468, the first set of packets is transmitted. In step470, one or more discontinuity packets are transmitted or inserted between the first set of packets and the second set of packets. In step472, the second set of packets is transmitted. FIGS.4E-4Fdepict a flowchart describing a second alternative embodiment of a process for detecting timestamp discontinuities within packet chunks. In one embodiment, the process ofFIGS.4E-4Fmay be performed by a streaming electronic device, such as the streaming device140inFIG.1or the streaming device340inFIG.3A. In one embodiment, the process ofFIGS.4E-4Fmay be performed by a client device, such as the client device160inFIG.1or the client device360inFIG.3A. In some embodiments, the process or portions of the process ofFIGS.4E-4Fmay be performed using one or more virtual machines and one or more virtual storage devices. In step480, a first sequence of packets with a packet order starting with a starting packet in the first sequence of packets and ending with an ending packet in the first sequence of packets is acquired. The first sequence of packets may be buffered using a memory buffer, such as memory147inFIG.1. 
In step482, a first reference timestamp corresponding with a first reference packet of the first sequence of packets closest to the starting packet that includes timestamp information is identified. In step484, a second sequence of packets with a packet order starting with a second starting packet in the second sequence of packets and ending with a second ending packet in the second sequence of packets is acquired. The second sequence of packets may be buffered using a memory buffer, such as memory147inFIG.1. In step486, a second reference timestamp corresponding with a second reference packet of the second sequence of packets closest to the second starting packet that includes timestamp information is identified. In step488, a timestamp time difference between the first reference timestamp and the second reference timestamp is determined. In step490, a data rate at which the first sequence of packets was encoded is identified. In step492, a maximum chunk time difference is calculated by dividing a data size of the first sequence of packets by the data rate at which the first sequence of packets was encoded. In step493, it is detected that the timestamp time difference is greater than the maximum chunk time difference. In step494, it is detected that a timestamp discontinuity exists within the first sequence of packets in response to detecting that the timestamp time difference is greater than the maximum chunk time difference. In step495, the first sequence of packets is parsed to identify a location of the timestamp discontinuity. In step496, the first sequence of packets is partitioned into a first set of packets and a second set of packets based on the location of the timestamp discontinuity. In step497, a timestamp discontinuity marker identifying the location of the timestamp discontinuity is transmitted. In step498, one or more discontinuity packets are transmitted or inserted between the first set of packets and the second set of packets. 
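The core detection test that the flowcharts share can be sketched as a short calculation: the maximum chunk time difference is the chunk's data size divided by its encoded data rate, and a discontinuity is inferred when the observed timestamp span exceeds that bound. The sketch below is illustrative; the function names, byte-based units, and example numbers are assumptions, not taken from the specification.

```python
# Illustrative sketch of the detection test: a timestamp discontinuity is
# inferred when the observed timestamp span of a chunk exceeds the maximum
# playback time the chunk could cover at its encoded data rate.

def max_chunk_time(chunk_size_bytes, data_rate_bytes_per_sec):
    # Maximum time the chunk can represent: size divided by data rate.
    return chunk_size_bytes / data_rate_bytes_per_sec

def has_discontinuity(first_ts, second_ts, chunk_size_bytes, data_rate):
    timestamp_diff = second_ts - first_ts  # span between the two reference timestamps
    return timestamp_diff > max_chunk_time(chunk_size_bytes, data_rate)

# A 1 MB chunk encoded at 250 kB/s can span at most 4 seconds, so a 9-second
# timestamp gap indicates a discontinuity while a 3-second gap does not.
print(has_discontinuity(10.0, 19.0, 1_000_000, 250_000))  # True
print(has_discontinuity(10.0, 13.0, 1_000_000, 250_000))  # False
```

Note that the test is one-sided: a timestamp span smaller than the maximum chunk time difference is consistent with normal variable-rate encoding and is not flagged.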
One embodiment of the disclosed technology includes acquiring a sequence of packets corresponding with a packet chunk, identifying a first timestamp associated with an earliest packet within the sequence of packets using a streaming device, identifying a second timestamp associated with a latest packet within the sequence of packets using a streaming device, determining a timestamp time difference between the first timestamp and the second timestamp, identifying a data rate at which the sequence of packets were encoded, determining a maximum chunk time difference based on the data rate at which the sequence of packets were encoded and a chunk size of the packet chunk, detecting that a timestamp discontinuity exists within the packet chunk based on the timestamp time difference and the maximum chunk time difference, identifying a location of the timestamp discontinuity within the packet chunk, and transmitting a timestamp discontinuity marker identifying the location of the timestamp discontinuity from the streaming device. In some cases, identifying the first timestamp includes identifying an earliest transmitted packet within the sequence of packets that includes timestamp information. One embodiment of the disclosed technology comprises an electronic device including a storage device (e.g., a semiconductor memory) and one or more processors in communication with the storage device. The storage device configured to store a sequence of packets corresponding with a packet chunk. The one or more processors configured to identify a first timestamp associated with an earliest packet within the sequence of packets and identify a second timestamp associated with a latest packet within the sequence of packets. The one or more processors configured to determine a timestamp time difference between the first timestamp and the second timestamp and determine a data rate at which the sequence of packets were encoded. 
The one or more processors configured to determine a maximum chunk time difference based on the data rate and a chunk size of the packet chunk. The one or more processors configured to detect that a timestamp discontinuity exists within the packet chunk based on the timestamp time difference and the maximum chunk time difference and identify a location of the timestamp discontinuity within the packet chunk. The one or more processors configured to transmit the location of the timestamp discontinuity. One embodiment of the disclosed technology includes acquiring a first sequence of packets with a packet order starting with a starting packet of the first sequence of packets and ending with an ending packet of the first sequence of packets, identifying a first reference timestamp corresponding with a first reference packet of the first sequence of packets closest to the starting packet, acquiring a second sequence of packets with a packet order starting with a second starting packet of the second sequence of packets and ending with a second ending packet of the second sequence of packets, identifying a second reference timestamp corresponding with a second reference packet of the second sequence of packets closest to the second starting packet, determining a timestamp time difference between the first reference timestamp and the second reference timestamp, identifying a data rate at which the first sequence of packets was encoded, determining a maximum chunk time difference by dividing a data size of the first sequence of packets by the data rate at which the first sequence of packets was encoded, detecting that the timestamp time difference is greater than the maximum chunk time difference, detecting that a timestamp discontinuity exists within the first sequence of packets in response to detecting that the timestamp time difference is greater than the maximum chunk time difference, identifying a location of the timestamp discontinuity within the first sequence of 
packets, and transmitting the location of the timestamp discontinuity. The disclosed technology may further include partitioning the first sequence of packets into a first set of packets and a second set of packets based on the location of the timestamp discontinuity and transmitting the first set of packets prior to transmitting the location of the timestamp discontinuity. The disclosed technology may be described in the context of computer-executable instructions being executed by a computer or processor. The computer-executable instructions may correspond with portions of computer program code, routines, programs, objects, software components, data structures, or other types of computer-related structures that may be used to perform processes using a computer. Computer program code used for implementing various operations or aspects of the disclosed technology may be developed using one or more programming languages, including an object-oriented programming language such as Java or C++, a functional programming language such as Lisp, a procedural programming language such as the “C” programming language or Visual Basic, or a dynamic programming language such as Python or JavaScript. In some cases, computer program code or machine-level instructions derived from the computer program code may execute entirely on an end user's computer, partly on an end user's computer, partly on an end user's computer and partly on a remote computer, or entirely on a remote computer or server. The flowcharts and block diagrams in the figures provide illustrations of the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various aspects of the disclosed technology. In this regard, each block in a flowchart may correspond with a program module or portion of computer program code, which may comprise one or more computer-executable instructions for implementing the specified functionality. 
In some implementations, the functionality noted within a block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. In some implementations, the functionality noted within a block may be implemented using hardware, software, or a combination of hardware and software. For purposes of this document, it should be noted that the dimensions of the various features depicted in the figures may not necessarily be drawn to scale. For purposes of this document, reference in the specification to “an embodiment,” “one embodiment,” “some embodiments,” or “another embodiment” may be used to describe different embodiments and do not necessarily refer to the same embodiment. For purposes of this document, a connection may be a direct connection or an indirect connection (e.g., via another part). In some cases, when an element is referred to as being connected or coupled to another element, the element may be directly connected to the other element or indirectly connected to the other element via intervening elements. When an element is referred to as being directly connected to another element, then there are no intervening elements between the element and the other element. For purposes of this document, the term “based on” may be read as “based at least in part on.” For purposes of this document, without additional context, use of numerical terms such as a “first” object, a “second” object, and a “third” object may not imply an ordering of objects, but may instead be used for identification purposes to identify different objects. For purposes of this document, the term “set” of objects may refer to a “set” of one or more of the objects. The various embodiments described above can be combined to provide further embodiments. All of the U.S. patents, U.S. 
patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in the Application Data Sheet are incorporated herein by reference, in their entirety. Aspects of the embodiments can be modified, if necessary to employ concepts of the various patents, applications and publications to provide yet further embodiments. These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.
DETAILED DESCRIPTION The following detailed description is directed to using a traffic monitoring service to detect traffic pattern shifts. Two or more core routers can operate in communication with and/or as part of a network. The core routers can be configured to route traffic to, from, and/or across the network, with the traffic being generated by and/or being routed by or to one or more entities such as ingress entities originating traffic routed to one or more core routers, egress entities receiving traffic routed from one or more core routers, and/or other entities that are responsible for traffic associated with the network. The core routers and/or one or more devices in communication with the network and/or the core routers (e.g., a traffic monitoring device) can be configured to generate netflow data. The netflow data can indicate, for example, devices (e.g., the entities) that are generating and/or sending data associated with traffic flowing to and/or through the core routers, packet attributes associated with the traffic flowing to and/or through the core routers, and/or other aspects of the traffic. A server computer can operate in communication with and/or as part of the network. The server computer can host and/or execute one or more application programs such as the netflow data collector, the traffic matrix generator, and/or the traffic monitoring service. The netflow data collector can be configured to obtain and analyze netflow data associated with devices (e.g., the entities) connecting to the network through and/or via the core routers. It should be understood that these examples are illustrative, and therefore should not be construed as being limiting in any way. The traffic matrix generator can be configured to generate, based on the netflow data and routing data obtained from one or more data sources, an augmented traffic matrix. 
According to various embodiments, the data sources can include an interface inventory and one or more routing tables. The traffic matrix generator can be configured to compile the augmented traffic matrix between pairs of core routers over a time period. The traffic matrix generator can be configured to augment information from the five tuple flows in the netflow data by adding information obtained in the routing data. Specifically, the routing data illustrated and described herein can identify neighboring ingress and egress autonomous systems (e.g., a neighboring ingress and egress entity), which can be obtained from an interface inventory; and the originating and terminating autonomous systems (e.g., originating and terminating entities), which can be determined based on information in a routing table. It should be understood that these examples are illustrative, and therefore should not be construed as being limiting in any way. The traffic monitoring service can be configured to obtain the augmented traffic matrix and perform several levels of analysis on the augmented traffic matrix to identify traffic pattern shifts and entities (e.g., the entities) responsible for the traffic pattern shifts. In particular, the traffic monitoring service can be configured to select a pair of core routers and determine, for the pair of core routers, if traffic between the core routers has shifted in a significant manner. In particular, the traffic monitoring service can be configured to create, for the pair of core routers (any two core routers that are exchanging traffic) a number of time series that can represent a proportion of network traffic that is occurring between the pair of core routers relative to the overall network traffic at a given time period over a number of days, weeks, or the like. 
The time series can be determined for one or more, or each, hour of the day over a number of days for which a rolling history is maintained (based on the netflow data) such as, for example, a time series for traffic during the 9:00 to 10:00 hour over ten days, twenty-one days, or the like. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. After creating separate time series for each hour of the day for the pair of core routers, the traffic monitoring service can calculate for the pair of core routers a standard deviation and the ARIMA regression confidence interval for proportion of traffic between the pair of routers relative to all network traffic. The traffic monitoring service can then determine if any of the determined hourly utilizations for the pair of core routers exceeds the determined standard deviation by a certain factor (either above or below the standard deviation). The traffic monitoring service can also determine if any of the determined hourly utilizations for the pair of core routers is outside of the determined ARIMA confidence interval. If any of the hourly utilizations for the pair of core routers exceeds the standard deviation by a certain factor (either above or below the standard deviation) and also falls outside of the ARIMA confidence interval for more than a set period of time (e.g., for two days, for three days, etc.), then the pair of core routers can be identified as having a traffic pattern shift. The use of the set period of time as illustrated and described herein can ensure that any detected traffic changes are not merely temporary (e.g., event-driven) spikes or one-time changes, but rather are identified as long-term changes in traffic and therefore deserving of attention. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. 
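The router-level check described above can be sketched compactly: for one hour-of-day time series of traffic proportions, a shift is flagged only when recent values deviate from the series by more than a factor of the standard deviation for at least a set number of consecutive days. This is a hedged sketch under assumptions: a real implementation would also test the ARIMA regression confidence interval, for which a simple standard-deviation band stands in here, and all names, thresholds, and example values are hypothetical.

```python
# Sketch of the router-level shift test for a single hour-of-day time series.
# hourly_proportions: historical proportion of total network traffic carried
# between one router pair during a fixed hour, one value per day.
import statistics

def detect_shift(hourly_proportions, recent, factor=2.0, persist_days=2):
    mean = statistics.mean(hourly_proportions)
    std = statistics.pstdev(hourly_proportions)

    def outside(p):
        # Deviation above OR below the band counts as a shift candidate.
        return abs(p - mean) > factor * std

    streak = 0
    for p in recent:
        streak = streak + 1 if outside(p) else 0
        if streak >= persist_days:
            return True  # sustained, long-term shift
    return False  # no shift, or only a temporary spike

history = [0.10, 0.11, 0.09, 0.10, 0.10, 0.11, 0.09]  # 9:00-10:00 hour, 7 days
print(detect_shift(history, recent=[0.25, 0.26]))  # True: sustained shift
print(detect_shift(history, recent=[0.30, 0.10]))  # False: one-day spike
```

The persistence requirement (`persist_days`) is what filters out the event-driven spikes and one-time changes that the specification says should not trigger action.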
It can be appreciated that this level of analysis corresponds to a router-level analysis of the augmented traffic matrix and the result can include a pair of core routers that are experiencing a traffic pattern shift. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. According to various embodiments of the concepts and technologies disclosed herein, for one or more pairs of core routers that are determined as having a long-term traffic pattern shift, the traffic monitoring service can perform an entity-level analysis to identify the entities responsible for the long-term traffic pattern shift. In particular, the traffic monitoring service can be configured to identify an entity (e.g., an Internet services provider (“ISP”) autonomous system (“AS”)) that is closest to an originating point of traffic associated with the pair of core routers, or an ISP AS closest to the network being considered. In the second level (the entity-level) analysis, another time series can be constructed. In the entity-level analysis, the time series can be calculated (again for one or more or each hour over a number of days, weeks, or the like). In the entity-level analysis, the utilization metric is defined as the proportion of traffic between each major entity that is a source of traffic to the destination core router of the pair of core routers identified in the router-level analysis, relative to the total traffic between the pair of core routers. In the entity-level analysis, the determined utilization metrics over the time series are compared to the standard deviation and ARIMA confidence interval for the time series. Again, the time series in the entity-level analysis can be constructed separately for each hour of the day over a multi-day (e.g., twenty-one days, thirty days, etc.) rolling window. 
If the utilization metric defined for this level exceeds the standard deviation for the series by a certain factor (either above or below) and falls outside the ARIMA confidence interval for the time series for more than a set period of time (e.g., two days, three days, etc.), the entity identified above (e.g., the ISP AS closest to an originating point of traffic associated with the pair of core routers or an ISP AS closest to the network being considered) can be positively identified as the responsible party for the traffic pattern shift. Again, the use of the set period of time as illustrated and described herein can ensure that any detected traffic changes are not temporary and are long-term changes in traffic. As such, when a pair of routers is marked and/or when an entity is identified as illustrated and described herein, this is the result of only long term traffic changes and not due to temporary spikes in traffic. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. The traffic monitoring service also can be configured to generate and output information or data that indicates that a traffic pattern shift has occurred and the entities responsible for the traffic pattern shift. In particular, in various embodiments of the concepts and technologies disclosed herein, the traffic monitoring service can be configured to generate traffic shift data that indicates that a traffic pattern shift has occurred (e.g., between a pair of core routers), the identity of the two or more core routers affected by the traffic pattern shift, and the identity of the one or more entities responsible for the traffic pattern shift. The traffic monitoring service can be configured to provide the traffic shift data to a user associated with a management device. 
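The entity-level utilization metric described above, the share of traffic each source entity contributes to the destination core router of a flagged pair relative to the total traffic between the pair, can be sketched as a simple proportion calculation. The entity names and byte counts below are hypothetical, used only to illustrate the metric's definition.

```python
# Illustrative computation of the entity-level utilization metric for one
# hour: each entity's share of the total traffic between a flagged pair of
# core routers. traffic_by_entity maps source entity -> bytes sent toward
# the destination core router during that hour.

def entity_utilizations(traffic_by_entity):
    total = sum(traffic_by_entity.values())
    return {entity: bytes_sent / total
            for entity, bytes_sent in traffic_by_entity.items()}

hour_traffic = {"AS-100": 600, "AS-200": 300, "AS-300": 100}  # hypothetical bytes
shares = entity_utilizations(hour_traffic)
print(shares["AS-100"])  # 0.6
```

Tracking these shares as per-hour time series over the rolling window, and applying the same standard-deviation and confidence-interval tests used at the router level, is what lets the service attribute a long-term shift to a specific entity.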
The management device or operating group can take one or more actions in response to identifying the traffic pattern shift such as, for example, increasing capacity of one or more core routers, instantiating a new core router, load balancing and/or rerouting of traffic associated with the entity identified as being responsible for the traffic pattern shift, combinations thereof, or the like. While the subject matter described herein is presented in the general context of program modules that execute in conjunction with the execution of an operating system and application programs on a computer system, those skilled in the art will recognize that other implementations may be performed in combination with other types of program modules. Generally, program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the subject matter described herein may be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like. Referring now toFIG.1, aspects of an operating environment100for various embodiments of the concepts and technologies disclosed herein for using a traffic monitoring service to detect traffic pattern shifts will be described, according to an illustrative embodiment. As will be appreciated with reference to the illustrated and described embodiments disclosed herein, the concepts and technologies disclosed herein are used to ensure that detected traffic pattern shifts are long-term changes to traffic, and short-term traffic changes and/or traffic spikes (e.g., caused by an event or the like) are not acted on, thereby avoiding wasting resources or the like. 
It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. The operating environment100shown inFIG.1includes multiple core routers102A-N (hereinafter collectively and/or generically referred to as “core routers102”), which can operate in communication with and/or as part of a communications network (“network”)104, though this is not necessarily the case in all embodiments. According to various embodiments of the concepts and technologies disclosed herein, the operating environment100includes at least two core routers102, though other numbers of core routers102are included in various embodiments of the concepts and technologies disclosed herein. According to various embodiments of the concepts and technologies disclosed herein, the core routers102can be configured to route traffic to, from, and/or across the network104, with the traffic being generated by and/or being routed to one or more entities106A-N(hereinafter collectively and/or generically referred to as “entities106”). According to various embodiments of the concepts and technologies disclosed herein, the entities106therefore can correspond to ingress entities originating traffic routed to one or more core routers102, egress entities receiving traffic routed from one or more core routers102, and/or other entities that are responsible for traffic associated with the network104. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. According to various embodiments of the concepts and technologies disclosed herein, the core routers102and/or one or more devices in communication with the network104and/or the core routers102(e.g., a traffic monitoring device, not illustrated inFIG.1) can be configured to generate netflow data108. 
The netflow data108can indicate, for example, devices (e.g., the entities106) that are generating and/or sending data associated with traffic flowing to and/or through the core routers102, packet attributes associated with the traffic flowing to and/or through the core routers102, and/or other aspects of the traffic. Because the netflow data108can include other information as generally is understood, it should be understood that these examples are illustrative, and therefore should not be construed as being limiting in any way. The operating environment100also can include a server computer110, which can operate in communication with and/or as part of the network104, though this is not necessarily the case in all embodiments. According to various embodiments of the concepts and technologies disclosed herein, the functionality of the server computer110may be provided by one or more server computers, other computing systems, or the like. It should be understood that the functionality of the server computer110may be provided by a single device, by two or more similar devices, and/or by two or more dissimilar devices. For purposes of describing the concepts and technologies disclosed herein, the server computer110is described herein as a server computer. It should be understood that this embodiment is illustrative, and should not be construed as being limiting in any way. The server computer110can execute an operating system (not illustrated inFIG.1) and one or more application programs such as, for example, a netflow data collector112, a traffic matrix generator114, and/or a traffic monitoring service116. The operating system can include a computer program that can control the operation of the server computer110. 
The netflow data collector112, the traffic matrix generator114, and the traffic monitoring service116can include executable programs that can be configured to execute on top of the operating system to provide various functions as illustrated and described herein. Although the netflow data collector112, the traffic matrix generator114, and the traffic monitoring service116are illustrated as separate applications or modules executed by the server computer110, it should be understood that functionality of the netflow data collector112and the traffic matrix generator114can be incorporated into the traffic monitoring service116and/or embodied as or in stand-alone devices or components thereof operating as part of or in communication with the network104and/or the server computer110. As such, the illustrated embodiment should be understood as being illustrative of only some contemplated embodiments and should not be construed as being limiting in any way. The netflow data collector112can be configured to obtain and analyze netflow data108. In particular, as shown inFIG.1, the netflow data collector112can be configured to obtain the netflow data108from one or more core routers102and/or one or more traffic monitoring devices associated with and/or operating on the network104and/or other networks (e.g., a carrier network). The netflow data108can be associated with devices (e.g., the entities106) connecting to the network104through and/or via the core routers102. The netflow data collector112also can be configured to extract data features from the netflow data108. 
These features can capture various netflow characteristics associated with the entities106such as what entities106are connecting to the core routers102and/or the network104via the core routers102; traffic bandwidth associated with these and/or other connections to the core routers102and/or the network104; and/or other aspects of the connections between the entities106and the core routers102and/or the network104via the core routers102. It should be understood that these examples are illustrative, and therefore should not be construed as being limiting in any way. The traffic matrix generator114can be configured to generate, based on the netflow data108and routing data118obtained from one or more data sources120, an augmented traffic matrix122. According to various embodiments, the data sources120can include an interface inventory120A, one or more routing table120B, and/or other data sources120. In particular, the traffic matrix generator114can be configured to compile the augmented traffic matrix122between pairs of core routers102over a time period such as a number of days, a number of weeks, a number of months, or the like. The traffic matrix generator114can be configured to augment information from the five tuple flows in the netflow data108by adding information obtained in the routing data118. Specifically, the routing data118illustrated and described herein can identify neighboring ingress and egress autonomous systems (e.g., a neighboring ingress and egress entity106), which can be obtained from an interface inventory120A; and the originating and terminating autonomous systems (e.g., originating and terminating entities106), which can be determined based on information in a routing table120B. Because other data can be used to augment the netflow data108when generating the augmented traffic matrix, it should be understood that these examples are illustrative, and therefore should not be construed as being limiting in any way. 
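The augmentation step described above, enriching a five-tuple netflow record with the neighboring autonomous systems from an interface inventory and the originating and terminating autonomous systems from a routing-table lookup, can be sketched as follows. All field names, AS labels, and lookup structures here are assumptions for illustration; in particular, the toy prefix match stands in for a real longest-prefix-match routing lookup.

```python
# Rough sketch of augmenting a five-tuple netflow record with routing data.
# The interface inventory supplies the neighboring ingress/egress autonomous
# systems; the routing table supplies the originating/terminating ones.

interface_inventory = {"if-7": "AS-NEIGHBOR-IN", "if-9": "AS-NEIGHBOR-OUT"}
routing_table = {"10.0.0.0/8": "AS-ORIGIN", "192.168.0.0/16": "AS-TERM"}

def prefix_owner(ip, table):
    # Toy stand-in for a real longest-prefix-match lookup: matches only on
    # the first octet of the prefix.
    for prefix, asn in table.items():
        if ip.startswith(prefix.split(".")[0] + "."):
            return asn
    return None

def augment_flow(flow):
    return {
        **flow,  # original five-tuple fields are retained
        "ingress_as": interface_inventory[flow["in_if"]],
        "egress_as": interface_inventory[flow["out_if"]],
        "origin_as": prefix_owner(flow["src_ip"], routing_table),
        "term_as": prefix_owner(flow["dst_ip"], routing_table),
    }

flow = {"src_ip": "10.1.2.3", "dst_ip": "192.168.4.5", "src_port": 443,
        "dst_port": 51000, "proto": "TCP", "in_if": "if-7", "out_if": "if-9"}
print(augment_flow(flow)["origin_as"])  # AS-ORIGIN
```

Aggregating many such augmented records per pair of core routers over the time period yields the rows of the augmented traffic matrix that the traffic monitoring service consumes.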
The traffic monitoring service116can be configured to obtain the augmented traffic matrix122and perform several levels of analysis on the augmented traffic matrix122to identify traffic pattern shifts and entities (e.g., the entities106) responsible for the traffic pattern shifts. In particular, the traffic monitoring service116can be configured to select a pair of core routers102and determine, for the pair of core routers102, if traffic between the core routers102has shifted in a significant manner (e.g., that the traffic change is a long-term change and not a temporary spike or the like). In particular, the traffic monitoring service116can be configured to create, for the pair of core routers102(any two core routers102that are exchanging traffic) multiple time series that can represent a “router-level utilization,” which can be defined herein as a proportion of network traffic that is occurring between the pair of core routers102relative to the overall network traffic, for a given time period over a number of days, weeks, or the like. The time series can be determined for one or more, or even each hour of the day over a number of days for which a rolling history is maintained (based on the netflow data108). Thus, for example, a first time series may represent the proportion of traffic occurring between a particular pair of core routers relative to all network traffic during the 7:00 AM to 8:00 AM hour over two or more days; and a second and/or nth time series may represent the proportion of traffic occurring between a particular pair of core routers relative to all network traffic during the 8:00 AM to 9:00 AM hour over two or more days; or the like. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. 
After creating multiple time series (a separate time series for each hour of the day) for the pair of core routers102, the traffic monitoring service116can calculate for the pair of core routers102a standard deviation for utilization of the pair of core routers102and the ARIMA regression confidence interval for each hour of the day over several days, weeks, or the like. The traffic monitoring service116can then determine if any of the determined hourly router-level utilizations for the pair of core routers102exceeds the determined standard deviation (either above or below the standard deviation). The traffic monitoring service116can also determine if any of the determined hourly utilizations for the pair of core routers102is outside of the determined ARIMA confidence interval. If any of the hourly utilizations for the pair of core routers102exceeds the standard deviation by a certain factor (either above or below the standard deviation) and falls outside of the ARIMA confidence interval for a defined period of time (e.g., over two or more days, ten days, twenty-one days, or the like), the pair of core routers102can be identified as having a long-term traffic pattern shift. It can be appreciated that this level of analysis can correspond to a router-level analysis of the augmented traffic matrix122, and the result of the router-level analysis can include a pair of core routers102that are experiencing a long-term traffic pattern shift. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. According to various embodiments of the concepts and technologies disclosed herein, for one or more pairs of core routers102that are determined as having a long-term traffic pattern shift, the traffic monitoring service116can perform an entity-level analysis to identify the entities106responsible for the long-term traffic pattern shift. 
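The two statistical tests applied to each hourly series above (deviating from the mean by more than a factor of the standard deviation, and falling outside the ARIMA confidence interval, for enough days) can be approximated in a short sketch. Note the hedge: a plain mean ± z·stdev band stands in here for the ARIMA regression confidence interval, and the factor, z, and day thresholds are assumed values, not parameters from the disclosure.

```python
# Simplified stand-in for the router-level shift test: NOT the actual ARIMA
# computation; a mean +/- z*stdev band approximates the confidence interval.
import statistics

def is_long_term_shift(history, recent, factor=2.0, z=1.96, min_days=2):
    """Flag a long-term shift if at least `min_days` recent utilizations
    deviate from the historical mean by more than factor*stdev AND fall
    outside the mean +/- z*stdev band (the stand-in confidence interval)."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    lo, hi = mean - z * sd, mean + z * sd
    flagged = sum(
        1 for u in recent
        if abs(u - mean) > factor * sd and not (lo <= u <= hi)
    )
    return flagged >= min_days

history = [0.10, 0.11, 0.09, 0.10, 0.10, 0.11]      # stable ~10% utilization
shifted = is_long_term_shift(history, [0.30, 0.29, 0.31])  # sustained jump
stable = is_long_term_shift(history, [0.10, 0.11, 0.12])   # ordinary noise
```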
In particular, the traffic monitoring service116can be configured to identify an entity (e.g., an Internet services provider (“ISP”) autonomous system (“AS”)) that is closest to an originating point of traffic associated with the pair of core routers102, or an ISP AS closest to the network104being considered. In the second level analysis (referred to herein as an entity-level analysis), another time series can be constructed. In the entity-level analysis, the time series can be calculated for an entity-level utilization metric, which can be defined as a proportion of traffic between each major entity106that is a source of traffic to the destination core router102of the pair of core routers102identified in the router-level analysis, relative to the total traffic between the pair of core routers102. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. The determined entity-level utilizations for the time series can be compared to the standard deviation and ARIMA confidence interval for the time series. Again, the time series in the entity-level analysis can be constructed separately for each hour of the day over a multi-day (e.g., twenty-one days, thirty days, etc.) rolling window. If the entity-level utilization exceeds the standard deviation and the ARIMA confidence interval for more than a set period of time (e.g., two days, three days, ten days, etc.), the entity106identified above (e.g., the ISP AS closest to an originating point of traffic associated with the pair of core routers102or an ISP AS closest to the network104being considered) is positively identified as the responsible party for the traffic pattern shift. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. 
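The entity-level utilization metric defined above is a simple proportion: a given source entity's share of the traffic flowing between the flagged router pair. A minimal sketch, with AS labels and byte counts that are hypothetical:

```python
# Entity-level utilization: the share of a router pair's traffic contributed
# by one source entity (e.g., a neighboring ISP autonomous system).
def entity_level_utilization(pair_flows, entity):
    """pair_flows: (source_entity, bytes_) records already filtered to one
    (source router, destination router) pair."""
    total = sum(b for _, b in pair_flows)
    entity_bytes = sum(b for e, b in pair_flows if e == entity)
    return entity_bytes / total if total else 0.0

pair_flows = [("AS100", 600.0), ("AS200", 300.0), ("AS300", 100.0)]
share = entity_level_utilization(pair_flows, "AS100")  # AS100's share
```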
The traffic monitoring service116also can be configured to generate and output information or data that can indicate that a traffic pattern shift has occurred and the entities106responsible for the traffic pattern shift. In particular, in various embodiments of the concepts and technologies disclosed herein, the traffic monitoring service116can be configured to generate traffic shift data124that indicates that a traffic pattern shift has occurred (e.g., between a pair of core routers102), the identity of the two or more core routers102affected by the traffic pattern shift, and the identity of the one or more entities106responsible for the traffic pattern shift. The traffic monitoring service116can be configured to provide the traffic shift data124to a user associated with a management device126. The management device126can take one or more actions in response to identifying the traffic pattern shift such as, for example, increasing capacity of one or more core routers102, instantiating a new core router102, load balancing and/or rerouting of traffic associated with the entity106identified as being responsible for the traffic pattern shift, combinations thereof, or the like. FIG.1illustrates three core routers102, three entities106, one server computer110, one network104, and one management device126. It should be understood, however, that various implementations of the operating environment100can include two or more than two core routers102; one or more than one entity106; zero, one, or more than one network104; one or more than one server computer110; and/or zero, one, or more than one management device126. As such, the illustrated embodiment should be understood as being illustrative, and should not be construed as being limiting in any way. Turning now toFIG.2, aspects of a method200for identifying long-term traffic pattern shifts and entities responsible for the traffic pattern shifts will be described in detail, according to an illustrative embodiment. 
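The disclosure does not specify a format for the traffic shift data124, so the record shape below is purely an assumed illustration of the three pieces of information it is described as carrying: that a shift occurred between a router pair, which routers were affected, and which entities were responsible.

```python
# Hypothetical shape for a traffic shift report; all field names are
# illustrative assumptions, not a format from the disclosure.
from dataclasses import dataclass, field

@dataclass
class TrafficShiftData:
    router_pair: tuple            # the pair of core routers affected
    responsible_entities: list    # entities identified as responsible
    flagged_hours: list = field(default_factory=list)  # hours of day flagged

report = TrafficShiftData(("R1", "R2"), ["AS100"], [7, 8])
```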
It should be understood that the operations of the methods disclosed herein are not necessarily presented in any particular order and that performance of some or all of the operations in an alternative order(s) is possible and is contemplated. The operations have been presented in the demonstrated order for ease of description and illustration. Operations may be added, omitted, and/or performed simultaneously, without departing from the scope of the concepts and technologies disclosed herein. It also should be understood that the methods disclosed herein can be ended at any time and need not be performed in their entirety. Some or all operations of the methods, and/or substantially equivalent operations, can be performed by execution of computer-readable instructions included on a computer storage medium, as defined herein. The term “computer-readable instructions,” and variants thereof, as used herein, is used expansively to include routines, applications, application modules, program modules, programs, components, data structures, algorithms, and the like. Computer-readable instructions can be implemented on various system configurations including single-processor or multiprocessor systems, minicomputers, mainframe computers, personal computers, hand-held computing devices, microprocessor-based, programmable consumer electronics, combinations thereof, and the like. Thus, it should be appreciated that the logical operations described herein are implemented (1) as a sequence of computer implemented acts or program modules running on a computing system and/or (2) as interconnected machine logic circuits or circuit modules within the computing system. The implementation is a matter of choice dependent on the performance and other requirements of the computing system. Accordingly, the logical operations described herein are referred to variously as states, operations, structural devices, acts, or modules. 
These states, operations, structural devices, acts, and modules may be implemented in software, in firmware, in special purpose digital logic, and any combination thereof. As used herein, the phrase “cause a processor to perform operations” and variants thereof is used to refer to causing a processor of a computing system or device, such as the server computer110, to perform one or more operations and/or causing the processor to direct other components of the computing system or device to perform one or more of the operations. For purposes of illustrating and describing the concepts of the present disclosure, the method200is described herein as being performed by the server computer110via execution of one or more software modules such as, for example, the traffic monitoring service116. It should be understood that additional and/or alternative devices and/or network nodes can provide the functionality described herein via execution of one or more modules, applications, and/or other software including, but not limited to, the traffic monitoring service116. Thus, the illustrated embodiments are illustrative, and should not be viewed as being limiting in any way. The method200begins at operation202. At operation202, the server computer110can obtain netflow data108and routing data118. As explained above, the server computer110can be configured to collect or otherwise obtain the netflow data108associated with one or more core routers102using a netflow data collector112, in some embodiments. The netflow data108can be obtained by the server computer110directly from the core routers102, in some embodiments, while in some other embodiments, a network monitoring device can be configured to generate the netflow data108and to provide the netflow data108to the server computer110. 
Because the netflow data108can be generated by other devices, and because the server computer110can be configured to obtain the netflow data108in additional and/or alternative manners, it should be understood that the above examples are illustrative, and therefore should not be construed as being limiting in any way. Similarly, the server computer110can be configured to collect or otherwise obtain the routing data118from one or more data sources120such as, for example, an interface inventory120A, a routing table120B, and/or other entities. Because the routing data118can be obtained from other entities, it should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. From operation202, the method200can proceed to operation204. At operation204, the server computer110can generate, using the data obtained in operation202, an augmented traffic matrix such as the augmented traffic matrix122illustrated and described above with reference toFIG.1. In particular, the server computer110can be configured, e.g., via execution of the traffic matrix generator114, to generate, based on the netflow data108and routing data118obtained from the one or more data sources120, the augmented traffic matrix122. According to various embodiments, the server computer110(e.g., via execution of the traffic matrix generator114) can be configured to compile the augmented traffic matrix122between pairs of core routers102for a specified time period. The server computer110can be configured to augment information from the five tuple flows in the netflow data108by adding information obtained in the routing data118. Specifically, the routing data118can identify neighboring ingress and egress autonomous systems (e.g., a neighboring ingress and egress entity106) for traffic. This information can be obtained from an interface inventory120A in some embodiments. 
The routing data118also can include originating and terminating autonomous systems (e.g., originating and terminating entities106), which can be determined based on information in a routing table120B. Because other data can be used to augment the netflow data108when generating the augmented traffic matrix122, it should be understood that these examples are illustrative, and therefore should not be construed as being limiting in any way. From operation204, the method200can proceed to operation206. At operation206, the server computer110can perform a router-level analysis of the augmented traffic matrix122. In particular, in the router-level analysis, the traffic monitoring service116can be configured to create, for a selected pair of core routers102(any two core routers102that are exchanging traffic) a number of time series that can represent a proportion of network traffic that is occurring between the pair of core routers102relative to the overall network traffic. The number of time series can be determined for each hour of the day over a number of days for which a rolling history is maintained (based on the netflow data108). Thus, it can be appreciated that in some embodiments twenty-four time series may be created, with these time series representing the router-level utilizations (defined herein as the proportion of network traffic occurring between the pair of core routers102relative to all network traffic) for the pair of core routers102over each hour of the day over two or more days, weeks, and/or other defined time periods. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. 
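The augmentation step described above can be sketched as a pair of lookups applied to each five-tuple flow record. The interface inventory contents, routing-table prefixes, and flow fields below are illustrative assumptions, and a simple longest-prefix match stands in for the routing-table lookup of originating and terminating autonomous systems.

```python
# Hypothetical sketch of augmenting a five-tuple netflow record: the
# interface inventory yields neighboring ingress/egress AS, and a
# longest-prefix routing-table match yields originating/terminating AS.
import ipaddress

INTERFACE_INVENTORY = {("R1", "if0"): "AS100", ("R2", "if3"): "AS200"}
ROUTING_TABLE = [("10.0.0.0/8", "AS64500"), ("192.168.0.0/16", "AS64501")]

def lookup_as(ip):
    """Longest-prefix match of ip against the (toy) routing table."""
    addr = ipaddress.ip_address(ip)
    best_asn, best_len = None, -1
    for prefix, asn in ROUTING_TABLE:
        net = ipaddress.ip_network(prefix)
        if addr in net and net.prefixlen > best_len:
            best_asn, best_len = asn, net.prefixlen
    return best_asn

def augment(flow):
    """flow: five-tuple plus ingress/egress (router, interface) metadata."""
    out = dict(flow)
    out["ingress_as"] = INTERFACE_INVENTORY.get(flow["ingress_if"])
    out["egress_as"] = INTERFACE_INVENTORY.get(flow["egress_if"])
    out["originating_as"] = lookup_as(flow["src_ip"])
    out["terminating_as"] = lookup_as(flow["dst_ip"])
    return out

flow = {"src_ip": "10.1.2.3", "dst_ip": "192.168.9.9", "src_port": 443,
        "dst_port": 55000, "protocol": "tcp",
        "ingress_if": ("R1", "if0"), "egress_if": ("R2", "if3")}
augmented = augment(flow)
```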
The server computer110can analyze the time series for the pair of core routers102and calculate for the pair of core routers102a standard deviation for router-level utilization of the pair of core routers102and the ARIMA regression confidence interval for the router-level utilizations, i.e., the proportions of traffic between the pair of routers relative to all network traffic. The server computer110can determine if any of the determined router-level utilizations for the pair of core routers102for the one or more time series exceeds the determined standard deviation (either above or below the standard deviation). The server computer110can also determine if any of the determined router-level utilizations for the pair of core routers102for the one or more time series is outside of the determined ARIMA confidence interval. If the server computer110determines, for the pair of core routers102, that the router-level utilizations for the pair of core routers102exceed the standard deviation (either above or below the standard deviation) and fall outside of the ARIMA confidence interval, the pair of core routers102can be identified as having a long-term traffic pattern shift. Additional details of the router-level analysis will be illustrated and described in more detail hereinbelow with reference toFIG.3. From operation206, the method200can proceed to operation208. At operation208, the server computer110can perform an entity-level analysis of the augmented traffic matrix122. In particular, the server computer110can perform, for one or more pairs of core routers102that are determined during the router-level analysis of operation206as having a long-term traffic pattern shift, an entity-level analysis to identify the entities106responsible for the long-term traffic pattern shift. 
In particular, the server computer110(e.g., via execution of the traffic monitoring service116) can be configured to identify an entity106(e.g., an ISP AS) that is closest to an originating point of traffic associated with the pair of core routers102, or an ISP AS closest to the network104being considered. The server computer110can consider the proportion of traffic being routed to the pair of core routers102by the selected entity106relative to all traffic associated with the pair of core routers102. This proportion is referred to herein as the “entity-level utilization” and this entity-level utilization can be modeled in another time series for each hour of the day. These entity-level utilizations again can be compared to the standard deviation and ARIMA confidence interval. If the utilization exceeds the standard deviation and the ARIMA confidence interval for more than a set period of time (e.g., two days, three days, etc.), the entity106identified above (e.g., the ISP AS closest to an originating point of traffic associated with the pair of core routers102or an ISP AS closest to the network104being considered) can be positively identified as the responsible party for the traffic pattern shift. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. Additional details of the entity-level analysis will be illustrated and described in more detail hereinbelow with reference toFIG.4. From operation208, the method200can proceed to operation210. At operation210, the server computer110can output traffic shift data124. In particular, the server computer110can generate and output information or data that indicates that a traffic pattern shift has occurred and the entities106responsible for the traffic pattern shift as the traffic shift data124. 
The traffic shift data124can indicate that a traffic pattern shift has occurred (e.g., between a pair of core routers102), the identity of the two or more core routers102affected by the traffic pattern shift, and the identity of the one or more entities106responsible for the traffic pattern shift. The server computer110can be configured to provide the traffic shift data124to a user associated with a management device126. The management device126can take one or more actions in response to identifying the traffic pattern shift such as, for example, increasing capacity of one or more core routers102, instantiating a new core router102, load balancing and/or rerouting of traffic associated with the entity106identified as being responsible for the traffic pattern shift, combinations thereof, or the like. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. From operation210, the method200can proceed to operation212. The method200can end at operation212. Turning now toFIG.3, aspects of a method300for performing router-level analysis to identify long-term traffic pattern shifts will be described in detail, according to an illustrative embodiment. For purposes of illustrating and describing the concepts of the present disclosure, the method300is described herein as being performed by the server computer110via execution of one or more software modules such as, for example, the traffic monitoring service116. It should be understood that additional and/or alternative devices and/or network nodes can provide the functionality described herein via execution of one or more modules, applications, and/or other software including, but not limited to, the traffic monitoring service116. Thus, the illustrated embodiments are illustrative, and should not be viewed as being limiting in any way. The method300begins at operation302. 
At operation302, the server computer110can select a pair of core routers102in the augmented traffic matrix122. According to various embodiments of the concepts and technologies disclosed herein, the server computer110can select any pair of core routers102between which traffic is exchanged in operation202. It can be appreciated that the router-level analysis illustrated and described inFIG.3can be iterated and/or looped for each pair of core routers102in some embodiments, i.e., until all core routers102are considered, if desired. As such, various considerations can guide which pair of core routers102is being considered in a particular iteration of operation302such as a most utilized pair of core routers102, or the like. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. From operation302, the method300can proceed to operation304. At operation304, the server computer110can generate a number of time series that can represent router-level utilizations for the pair of core routers102selected in operation302over a time period. As noted above, the router-level utilization can be defined as a proportion of network traffic that is occurring between the pair of core routers102relative to the overall network traffic. The time period can be defined by preferences, settings, or the like. In some embodiments, the netflow data108and the routing data118can be obtained for a rolling time period (e.g., three days, five days, ten days, or the like) and can be maintained on a rolling basis (e.g., applying a first-in-first-out (“FIFO”) data retention model, or the like). Thus, the multiple time series can be generated, in some embodiments, with data points for router-level utilizations for the pair of core routers102on an hourly basis (e.g., for hours of the day) over the specified time period such as a number of days, weeks, or the like. 
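The rolling FIFO retention described above maps naturally onto a fixed-length queue per hour of day, where appending a new day's data point automatically discards the oldest. The three-day window below is an assumed value for illustration.

```python
# FIFO rolling history of hourly utilization data points: one fixed-length
# deque per hour of day; the oldest day's value drops as a new one arrives.
from collections import deque

ROLLING_DAYS = 3  # assumed window length
histories = {hour: deque(maxlen=ROLLING_DAYS) for hour in range(24)}

def record(hour, utilization):
    histories[hour].append(utilization)  # deque evicts the oldest entry

for day_value in [0.10, 0.11, 0.12, 0.30]:  # four days of 7 AM samples
    record(7, day_value)
# Only the last ROLLING_DAYS values survive for the 7 AM hour.
```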
Thus, in some embodiments the server computer110can generate twenty-four time series for the router-level utilizations (one time series for each hour of the day over several days or weeks or the like). It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. From operation304, the method300can proceed to operation306. At operation306, the server computer110can calculate, for the time series generated in operation304, a standard deviation for router-level utilizations of the pair of core routers102. The server computer110also can generate, in operation306, an ARIMA confidence interval for the utilization of the pair of core routers102. It can be appreciated that the server computer110also can be configured to calculate, as part of the calculation of standard deviation, an average or mean utilization for the pair of core routers102. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. From operation306, the method300can proceed to operation308. At operation308, the server computer110can determine, for each router-level utilization of the pair of core routers102in the time series, if the router-level utilization exceeds the standard deviation calculated in operation306. It can be appreciated that a router-level utilization that falls under the average utilization by more than some factor times the standard deviation can be determined to exceed the standard deviation. The factor can be defined by settings, configurations, or the like. Similarly, it can be appreciated that an hourly utilization that exceeds the average utilization by more than some factor times the standard deviation can be determined to exceed the standard deviation. 
If the server computer110determines, in operation308, that one or more hourly utilizations of the pair of core routers102exceeds some factor times the standard deviation calculated in operation306, the method300can proceed to operation310. At operation310, the server computer110can determine, for each hourly utilization of the pair of core routers102, if the router-level utilization falls outside of the ARIMA confidence interval calculated in operation306. In some embodiments, the server computer110can determine in operation310if the router-level utilization exceeds some factor times the ARIMA confidence interval. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. If the server computer110determines, in operation310, that one or more router-level utilizations of the pair of core routers102for a particular hour represented by the time series falls outside of the ARIMA confidence interval calculated in operation306, the method300can proceed to operation312. At operation312, the server computer110can determine if the change in router-level utilization for the pair of core routers102satisfies a time constraint. In various embodiments, the time constraint can be defined as a number of days, a number of weeks, or the like. Thus, in operation312, the server computer110can determine if any change in the router-level utilization for the pair of core routers102at a particular hour represented by the time series that exceeds the standard deviation and falls outside of the ARIMA confidence interval has occurred a specified number of times over a specified time period (e.g., every day in hour five over three days; over a majority of days for a week or weeks; or the like). If the server computer110determines, in operation312, that the change in the router-level utilization for the pair of core routers102satisfies the defined time constraint, the method300can proceed to operation314. 
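The time-constraint test of operation 312 can be sketched as counting how many days a given hour of day was flagged as deviating. The flag-log structure and the day threshold below are illustrative assumptions.

```python
# Sketch of the time-constraint check: a per-hour deviation only counts as
# a long-term shift if it recurs on at least min_days distinct days.
def satisfies_time_constraint(flag_log, hour, min_days):
    """flag_log: {day: set of hours flagged as deviating on that day}."""
    days_flagged = sum(1 for hours in flag_log.values() if hour in hours)
    return days_flagged >= min_days

flag_log = {0: {5, 9}, 1: {5}, 2: {5, 14}}
persistent = satisfies_time_constraint(flag_log, 5, min_days=3)  # all 3 days
transient = satisfies_time_constraint(flag_log, 9, min_days=3)   # 1 day only
```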
At operation314, the server computer110can mark the pair of core routers102selected in operation302as being responsible for and/or involved in a statistically significant and/or long-term traffic pattern shift. From operation314, the method can proceed to the functionality illustrated and described below with reference toFIG.4and an entity-level analysis can be performed on the marked pair of core routers102. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. If the server computer110determines, in operation308, that no router-level utilizations of a time series for the pair of core routers102exceed some factor times the standard deviation calculated in operation306, the method300can proceed to operation316. The method300also can proceed to operation316from operation310if the server computer110determines, in operation310, that no router-level utilizations of the time series for the pair of core routers102fall outside of the ARIMA confidence interval (or some factor times the ARIMA confidence interval) calculated in operation306. The method300also can proceed to operation316from operation312if the server computer110determines, in operation312, that the change in utilization for the pair of core routers102does not satisfy a time constraint. The method300can end at operation316. In some embodiments, as noted above, the method300can, instead of ending at operation316, return to operation302to select a next pair of core routers102and the method300can be iterated until all pairs of core routers102have been considered. Thus, it should be understood that the illustrated embodiment is illustrative and should not be construed as being limiting in any way. Turning now toFIG.4, aspects of a method400for performing entity-level analysis to identify entities responsible for long-term traffic pattern shifts will be described in detail, according to an illustrative embodiment. 
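Iterating the router-level analysis over every ordered pair of core routers, covering both traffic directions, might look like the following. The `analyze_pair` callable is a hypothetical stand-in for operations 304 through 314; the router names and the toy predicate are illustrative.

```python
# Sketch of looping the per-pair router-level analysis over all ordered
# pairs (both directions between any two core routers), collecting the
# pairs marked as having a long-term traffic pattern shift.
from itertools import permutations

def find_shifted_pairs(routers, analyze_pair):
    """analyze_pair: stand-in predicate for the full per-pair analysis."""
    return [pair for pair in permutations(routers, 2) if analyze_pair(pair)]

# Toy predicate: pretend only traffic from R1 to R2 shifted.
shifted = find_shifted_pairs(["R1", "R2", "R3"],
                             lambda p: p == ("R1", "R2"))
```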
For purposes of illustrating and describing the concepts of the present disclosure, the method400is described herein as being performed by the server computer110via execution of one or more software modules such as, for example, the traffic monitoring service116. It should be understood that additional and/or alternative devices and/or network nodes can provide the functionality described herein via execution of one or more modules, applications, and/or other software including, but not limited to, the traffic monitoring service116. Thus, the illustrated embodiments are illustrative, and should not be viewed as being limiting in any way. The method400begins at operation402. At operation402, the server computer110can select a marked pair of core routers102in the augmented traffic matrix122, e.g., a pair of core routers102that have been identified in any iteration of the method300illustrated and described above with reference toFIG.3as being associated with and/or responsible for a statistically significant shift in traffic (e.g., a long-term shift in traffic and not a temporary, short-term, event-driven, or other short-lived traffic shift). According to various embodiments of the concepts and technologies disclosed herein, the server computer110can select any pair of core routers102that has been marked as being associated with and/or responsible for a statistically significant shift in traffic. It can therefore be appreciated that the entity-level analysis illustrated and described inFIG.4can be iterated and/or looped for each marked pair of core routers102that is associated with a traffic shift, if desired. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. From operation402, the method400can proceed to operation404. 
At operation404, the server computer110can generate another time series (for one or more or each hour over a number of days, weeks, or the like) for the marked pair of core routers102selected in operation402and one or more entities106. The time period for the time series generated in operation404can be defined by preferences, settings, or the like, as explained above, and the time series can be generated for entity-level utilizations associated with a particular entity106. As used herein, an “entity-level utilization” can be defined herein as referring to the proportion of traffic between a particular entity106that is a source of traffic to the destination core router102of the pair of core routers102identified in the router-level analysis, relative to the total traffic between the pair of core routers102. The time series can be generated, in some embodiments, with data points representing the entity-level utilizations on an hourly basis over the specified time period such as a number of days, a number of weeks, or the like. As part of generating the time series in operation404, the server computer110can identify an entity106(e.g., an ISP AS) that is closest (among all ISP AS's associated with the network104) to an originating point of traffic associated with the pair of core routers102, in some embodiments. In some other embodiments, the server computer110can identify, as part of operation404, an ISP AS that is closest to the network104being considered (closest of multiple ISP AS's). Operation404also can include the server computer110modeling another time series for the pair of core routers102. From operation404, the method400can proceed to operation406. At operation406, the server computer110can calculate, for the time series generated in operation404, a standard deviation for entity-level utilizations for the pair of core routers102on an hourly basis over the number of days for traffic relating to the entity106identified as part of operation404. 
The server computer110also can generate, in operation406, an ARIMA confidence interval for the entity-level utilizations for the time series for the pair of core routers102and the entity106. It can be appreciated that the server computer110also can be configured to calculate, as part of the calculation of standard deviation, an average or mean utilization for the pair of core routers102for traffic relating to the entity106identified in operation404. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. From operation406, the method400can proceed to operation408. At operation408, the server computer110can determine, for each entity-level utilization represented in the time series for the pair of core routers102for traffic associated with the entity106identified in operation404, if the entity-level utilization exceeds some factor times the standard deviation calculated in operation406. It can be appreciated that an entity-level utilization that falls under the average utilization by more than some factor times the standard deviation can be determined to exceed the standard deviation. Similarly, it can be appreciated that an entity-level utilization that exceeds the average utilization by more than some factor times the standard deviation can be determined to exceed the standard deviation. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. If the server computer110determines, in operation408, that one or more entity-level utilizations of the time series for the pair of core routers102for traffic associated with the entity106identified in operation404exceeds some factor times the standard deviation calculated in operation406, the method400can proceed to operation410. 
At operation410, the server computer110can determine, for each entity-level utilization of the pair of core routers102, if the entity-level utilization of the pair of core routers102for traffic associated with the entity106identified in operation404falls outside of the ARIMA confidence interval calculated in operation406. If the server computer110determines, in operation410, that one or more entity-level utilizations of the time series for the pair of core routers102for traffic associated with the entity106identified in operation404falls outside of the ARIMA confidence interval (or some factor times the ARIMA confidence interval such as ninety-five percent or the like) calculated in operation406, the method400can proceed to operation412. At operation412, the server computer110can determine if the change in the entity-level utilization for the pair of core routers102for traffic associated with the entity106identified in operation404satisfies a time constraint. It can be appreciated that the use of the time constraint can be included to ensure that any detected traffic changes are associated only with long-term traffic shifts and not with temporary, event-driven, and/or other short-term traffic spikes. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. In various embodiments, the time constraint can be defined as a number of days, a number of weeks, or the like. Thus, in operation412, the server computer110can determine if any change in entity-level utilization for the pair of core routers102for traffic associated with the entity identified in operation404that exceeds some factor times the standard deviation and falls outside of the ARIMA confidence interval (or some factor times the ARIMA confidence interval) has occurred a specified number of times over a specified time period (e.g., in a particular hour over two or more days, five or more days, ten or more days, or the like). 
If the server computer110determines, in operation412, that the change in entity-level utilization of the time series for the pair of core routers102for traffic associated with the entity106determined in operation404satisfies a time constraint, the method400can proceed to operation414. At operation414, the server computer110can mark the entity106identified in operation404as being responsible for and/or involved in a statistically significant and/or long-term traffic pattern shift. From operation414, the method can proceed to the functionality illustrated and described above with reference to operation210of the method200illustrated and described above with reference toFIG.2. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. If the server computer110determines, in operation408, that no entity-level utilization of the time series for the pair of core routers102for traffic associated with the entity106identified in operation404exceeds some factor times the standard deviation calculated in operation406, the method400can proceed to operation416. The method400also can proceed to operation416from operation410if the server computer110determines, in operation410, that no entity-level utilization of the time series for the pair of core routers102for traffic associated with the entity106identified in operation404falls outside of the ARIMA confidence interval (or some factor times the ARIMA confidence interval) calculated in operation406. The method400also can proceed to operation416from operation412if the server computer110determines, in operation412, that the change in utilization for the pair of core routers102for traffic associated with the entity106identified in operation404does not satisfy a time constraint. The method400can end at operation416. 
In some embodiments, as noted above, the method400can, instead of ending at operation416, return to operation402to select a next pair of marked core routers102and the method400can be iterated until all marked pairs of core routers102have been considered. Thus, it should be understood that the illustrated embodiment is illustrative and should not be construed as being limiting in any way. In some embodiments of the concepts and technologies disclosed herein, the router-level analysis and/or the entity-level analysis can be performed in additional and/or alternative manners. For example, although the above description has referred to creating time series for router-level utilizations and/or entity-level utilizations for a particular hour over two or more days, it should be understood that a particular hour on one day may correspond to a different hour on a second day (e.g., the 9:00-10:00 AM hour on Monday may correspond more to the 10:00-11:00 AM hour on Tuesday than the 9:00-10:00 AM hour on Tuesday). As such, the concepts and technologies disclosed herein can be used to evaluate different hours of the day for separate daily time series constructed for each particular hour of the day in some embodiments. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. Additionally, it should be understood that for a given pair of core routers102being analyzed at a particular time, the pair of core routers102can be analyzed in both directions (e.g., from a first core router102A to a second core router102B and from the second core router102B to the first core router102A). It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. Turning now toFIG.5, additional details of the network104are illustrated, according to an illustrative embodiment. 
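The per-hour, per-direction bookkeeping described above can be sketched by bucketing utilization samples into a separate daily series for each (direction, hour) pair, so that, e.g., the 9:00 hour on one day is compared with the 9:00 hour on other days, and both directions of a router pair are tracked independently. The sample tuple layout is an assumption rather than the disclosure's data model.

```python
# Sketch of per-hour, bidirectional time-series construction: each sample
# for a router pair is bucketed by (direction, hour-of-day), yielding a
# separate daily series per bucket (A->B and B->A kept apart).
from collections import defaultdict

def build_hourly_series(samples):
    """samples: iterable of (direction, day, hour, utilization) tuples."""
    series = defaultdict(list)  # (direction, hour) -> [(day, utilization), ...]
    for direction, day, hour, util in samples:
        series[(direction, hour)].append((day, util))
    return dict(series)

samples = [
    ("A->B", "Mon", 9, 40), ("A->B", "Tue", 9, 42),
    ("B->A", "Mon", 9, 55), ("A->B", "Mon", 10, 70),
]
print(build_hourly_series(samples)[("A->B", 9)])  # [('Mon', 40), ('Tue', 42)]
```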
The network104includes a cellular network502, a packet data network504, for example, the Internet, and a circuit switched network506, for example, a publicly switched telephone network (“PSTN”). The cellular network502includes various components such as, but not limited to, base transceiver stations (“BTSs”), Node-B's or e-Node-B's, base station controllers (“BSCs”), radio network controllers (“RNCs”), mobile switching centers (“MSCs”), mobile management entities (“MMEs”), short message service centers (“SMSCs”), multimedia messaging service centers (“MMSCs”), home location registers (“HLRs”), home subscriber servers (“HSSs”), visitor location registers (“VLRs”), charging platforms, billing platforms, voicemail platforms, GPRS core network components, location service nodes, an IP Multimedia Subsystem (“IMS”), and the like. The cellular network502also includes radios and nodes for receiving and transmitting voice, data, and combinations thereof to and from radio transceivers, networks, the packet data network504, and the circuit switched network506. A mobile communications device508, such as, for example, a cellular telephone, a user equipment, a mobile terminal, a PDA, a laptop computer, a handheld computer, and combinations thereof, can be operatively connected to the cellular network502. The cellular network502can be configured as a 2G GSM network and can provide data communications via GPRS and/or EDGE. Additionally, or alternatively, the cellular network502can be configured as a 3G UMTS network and can provide data communications via the HSPA protocol family, for example, HSDPA, EUL (also referred to as HSUPA), and HSPA+. The cellular network502also is compatible with 4G mobile communications standards, 5G mobile communications standards, other mobile communications standards, and evolved and future mobile communications standards. 
The packet data network504includes various devices, for example, servers, computers, databases, and other devices in communication with one another, as is generally known. The packet data network504devices are accessible via one or more network links. The servers often store various files that are provided to a requesting device such as, for example, a computer, a terminal, a smartphone, or the like. Typically, the requesting device includes software (a “browser”) for executing a web page in a format readable by the browser or other software. Other files and/or data may be accessible via “links” in the retrieved files, as is generally known. In some embodiments, the packet data network504includes or is in communication with the Internet. The circuit switched network506includes various hardware and software for providing circuit switched communications. The circuit switched network506may include, or may be, what is often referred to as a plain old telephone system (POTS). The functionality of a circuit switched network506or other circuit-switched network is generally known and will not be described herein in detail. The illustrated cellular network502is shown in communication with the packet data network504and a circuit switched network506, though it should be appreciated that this is not necessarily the case. One or more Internet-capable devices510, for example, a PC, a laptop, a portable device, or another suitable device, can communicate with one or more cellular networks502, and devices connected thereto, through the packet data network504. It also should be appreciated that the Internet-capable device510can communicate with the packet data network504through the circuit switched network506, the cellular network502, and/or via other networks (not illustrated). 
As illustrated, a communications device512, for example, a telephone, facsimile machine, modem, computer, or the like, can be in communication with the circuit switched network506, and therethrough to the packet data network504and/or the cellular network502. It should be appreciated that the communications device512can be an Internet-capable device, and can be substantially similar to the Internet-capable device510. In the specification, the network104is used to refer broadly to any combination of the networks502,504,506. It should be appreciated that substantially all of the functionality described with reference to the network104can be performed by the cellular network502, the packet data network504, and/or the circuit switched network506, alone or in combination with other networks, network elements, and the like. FIG.6is a block diagram illustrating a computer system600configured to provide the functionality described herein for using a traffic monitoring service to detect traffic pattern shifts, in accordance with various embodiments of the concepts and technologies disclosed herein. The computer system600includes a processing unit602, a memory604, one or more user interface devices606, one or more input/output (“I/O”) devices608, and one or more network devices610, each of which is operatively connected to a system bus612. The bus612enables bi-directional communication between the processing unit602, the memory604, the user interface devices606, the I/O devices608, and the network devices610. The processing unit602may be a standard central processor that performs arithmetic and logical operations, a more specific purpose programmable logic controller (“PLC”), a programmable gate array, or other type of processor known to those skilled in the art and suitable for controlling the operation of the server computer. 
As used herein, the word “processor” and/or the phrase “processing unit” when used with regard to any architecture or system can include multiple processors or processing units distributed across and/or operating in parallel in a single machine or in multiple machines. Furthermore, processors and/or processing units can be used to support virtual processing environments. Processors and processing units also can include state machines, application-specific integrated circuits (“ASICs”), combinations thereof, or the like. Because processors and/or processing units are generally known, the processors and processing units disclosed herein will not be described in further detail herein. The memory604communicates with the processing unit602via the system bus612. In some embodiments, the memory604is operatively connected to a memory controller (not shown) that enables communication with the processing unit602via the system bus612. The memory604includes an operating system614and one or more program modules616. The operating system614can include, but is not limited to, members of the WINDOWS, WINDOWS CE, and/or WINDOWS MOBILE families of operating systems from MICROSOFT CORPORATION, the LINUX family of operating systems, the SYMBIAN family of operating systems from SYMBIAN LIMITED, the BREW family of operating systems from QUALCOMM CORPORATION, the MAC OS, iOS, and/or LEOPARD families of operating systems from APPLE CORPORATION, the FREEBSD family of operating systems, the SOLARIS family of operating systems from ORACLE CORPORATION, other operating systems, and the like. The program modules616may include various software and/or program modules described herein. In some embodiments, for example, the program modules616include the netflow data collector112, the traffic matrix generator114, and/or the traffic monitoring service116. 
These and/or other programs can be embodied in computer-readable media containing instructions that, when executed by the processing unit602, perform one or more of the methods200,300, and400described in detail above with respect toFIGS.2-4and/or other functionality as illustrated and described herein. It can be appreciated that, at least by virtue of the instructions embodying the methods200,300, and400, and/or other functionality illustrated and described herein being stored in the memory604and/or accessed and/or executed by the processing unit602, the computer system600is a special-purpose computing system that can facilitate providing the functionality illustrated and described herein. According to embodiments, the program modules616may be embodied in hardware, software, firmware, or any combination thereof. Although not shown inFIG.6, it should be understood that the memory604also can be configured to store the netflow data108, the routing data118, the interface inventory120A, the routing table120B, the augmented traffic matrix122, the traffic shift data124, and/or other data, if desired. By way of example, and not limitation, computer-readable media may include any available computer storage media or communication media that can be accessed by the computer system600. Communication media includes computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics changed or set in a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media. 
Computer storage media includes only non-transitory embodiments of computer readable media as illustrated and described herein. Thus, computer storage media can include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, Erasable Programmable ROM (“EPROM”), Electrically Erasable Programmable ROM (“EEPROM”), flash memory or other solid state memory technology, CD-ROM, digital versatile disks (“DVD”), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer system600. In the claims, the phrase “computer storage medium” and variations thereof does not include waves or signals per se and/or communication media. The user interface devices606may include one or more devices with which a user accesses the computer system600. The user interface devices606may include, but are not limited to, computers, servers, personal digital assistants, cellular phones, or any suitable computing devices. The I/O devices608enable a user to interface with the program modules616. In one embodiment, the I/O devices608are operatively connected to an I/O controller (not shown) that enables communication with the processing unit602via the system bus612. The I/O devices608may include one or more input devices, such as, but not limited to, a keyboard, a mouse, or an electronic stylus. Further, the I/O devices608may include one or more output devices, such as, but not limited to, a display screen or a printer. The network devices610enable the computer system600to communicate with other networks or remote systems via a network, such as the network104. 
Examples of the network devices610include, but are not limited to, a modem, a radio frequency (“RF”) or infrared (“IR”) transceiver, a telephonic interface, a bridge, a router, or a network card. The network104may include a wireless network such as, but not limited to, a Wireless Local Area Network (“WLAN”) such as a WI-FI network, a Wireless Wide Area Network (“WWAN”), a Wireless Personal Area Network (“WPAN”) such as BLUETOOTH, a Wireless Metropolitan Area Network (“WMAN”) such as a WiMAX network, or a cellular network. Alternatively, the network104may be a wired network such as, but not limited to, a Wide Area Network (“WAN”) such as the Internet, a Local Area Network (“LAN”) such as the Ethernet, a wired Personal Area Network (“PAN”), or a wired Metropolitan Area Network (“MAN”). FIG.7illustrates an illustrative architecture for a cloud computing platform700that can be capable of executing the software components described herein for using a traffic monitoring service to detect traffic pattern shifts and/or for interacting with the netflow data collector112, the traffic matrix generator114, and/or the traffic monitoring service116. Thus, it can be appreciated that in some embodiments of the concepts and technologies disclosed herein, the cloud computing platform700illustrated inFIG.7can be used to provide the functionality described herein with respect to the core routers102, the entities106, the server computer110, the data sources120, and/or the management device128. The cloud computing platform700thus may be utilized to execute any aspects of the software components presented herein. Thus, according to various embodiments of the concepts and technologies disclosed herein, the netflow data collector112, the traffic matrix generator114, and/or the traffic monitoring service116can be implemented, at least in part, on or by elements included in the cloud computing platform700illustrated and described herein. 
Those skilled in the art will appreciate that the illustrated cloud computing platform700is a simplification of but only one possible implementation of an illustrative cloud computing platform, and as such, the illustrated cloud computing platform700should not be construed as being limiting in any way. In the illustrated embodiment, the cloud computing platform700can include a hardware resource layer702, a virtualization/control layer704, and a virtual resource layer706. These layers and/or other layers can be configured to cooperate with each other and/or other elements of a cloud computing platform700to perform operations as will be described in detail herein. While connections are shown between some of the components illustrated inFIG.7, it should be understood that some, none, or all of the components illustrated inFIG.7can be configured to interact with one another to carry out various functions described herein. In some embodiments, the components are arranged so as to communicate via one or more networks such as, for example, the network104illustrated and described hereinabove (not shown inFIG.7). Thus, it should be understood thatFIG.7and the following description are intended to provide a general understanding of a suitable environment in which various aspects of embodiments can be implemented, and should not be construed as being limiting in any way. The hardware resource layer702can provide hardware resources. In the illustrated embodiment, the hardware resources can include one or more compute resources708, one or more memory resources710, and one or more other resources712. 
The compute resource(s)708can include one or more hardware components that can perform computations to process data, and/or to execute computer-executable instructions of one or more application programs, operating systems, services, and/or other software including, but not limited to, the netflow data collector112, the traffic matrix generator114, and/or the traffic monitoring service116illustrated and described herein. According to various embodiments, the compute resources708can include one or more central processing units (“CPUs”). The CPUs can be configured with one or more processing cores. In some embodiments, the compute resources708can include one or more graphics processing units (“GPUs”). The GPUs can be configured to accelerate operations performed by one or more CPUs, and/or to perform computations to process data, and/or to execute computer-executable instructions of one or more application programs, operating systems, and/or other software that may or may not include instructions that are specifically graphics computations and/or related to graphics computations. In some embodiments, the compute resources708can include one or more discrete GPUs. In some other embodiments, the compute resources708can include one or more CPU and/or GPU components that can be configured in accordance with a co-processing CPU/GPU computing model. Thus, it can be appreciated that in some embodiments of the compute resources708, a sequential part of an application can execute on a CPU and a computationally-intensive part of the application can be accelerated by the GPU. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. In some embodiments, the compute resources708also can include one or more system on a chip (“SoC”) components. 
It should be understood that an SoC component can operate in association with one or more other components as illustrated and described herein, for example, one or more of the memory resources710and/or one or more of the other resources712. In some embodiments in which an SoC component is included, the compute resources708can be or can include one or more embodiments of the SNAPDRAGON brand family of SoCs, available from QUALCOMM of San Diego, California; one or more embodiments of the TEGRA brand family of SoCs, available from NVIDIA of Santa Clara, California; one or more embodiments of the HUMMINGBIRD brand family of SoCs, available from SAMSUNG of Seoul, South Korea; one or more embodiments of the Open Multimedia Application Platform (“OMAP”) family of SoCs, available from TEXAS INSTRUMENTS of Dallas, Texas; one or more customized versions of any of the above SoCs; and/or one or more other brand and/or one or more proprietary SoCs. The compute resources708can be or can include one or more hardware components arranged in accordance with an ARM architecture, available for license from ARM HOLDINGS of Cambridge, United Kingdom. Alternatively, the compute resources708can be or can include one or more hardware components arranged in accordance with an x86 architecture, such as an architecture available from INTEL CORPORATION of Mountain View, California, and others. Those skilled in the art will appreciate that the implementation of the compute resources708can utilize various computation architectures and/or processing architectures. As such, the various example embodiments of the compute resources708as mentioned hereinabove should not be construed as being limiting in any way. Rather, implementations of embodiments of the concepts and technologies disclosed herein can be implemented using compute resources708having any of the particular computation architectures and/or combinations of computation architectures mentioned herein as well as other architectures. 
Although not separately illustrated inFIG.7, it should be understood that the compute resources708illustrated and described herein can host and/or execute various services, applications, portals, and/or other functionality illustrated and described herein. Thus, the compute resources708can host and/or can execute the netflow data collector112, the traffic matrix generator114, the traffic monitoring service116, and/or other applications or services illustrated and described herein. The memory resource(s)710can include one or more hardware components that can perform or provide storage operations, including temporary and/or permanent storage operations. In some embodiments, the memory resource(s)710can include volatile and/or non-volatile memory implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data disclosed herein. Computer storage media is defined hereinabove and therefore should be understood as including, in various embodiments, random access memory (“RAM”), read-only memory (“ROM”), Erasable Programmable ROM (“EPROM”), Electrically Erasable Programmable ROM (“EEPROM”), flash memory or other solid state memory technology, CD-ROM, digital versatile disks (“DVD”), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store data and that can be accessed by the compute resources708, subject to the definition of “computer storage media” provided above (e.g., as excluding waves and signals per se and/or communication media as defined in this application). 
Although not illustrated inFIG.7, it should be understood that the memory resources710can host or store the various data illustrated and described herein including, but not limited to, the netflow data108, the routing data118, the interface inventory120A, the routing table120B, the augmented traffic matrix122, the traffic shift data124, and/or other data, if desired. It should be understood that this example is illustrative, and therefore should not be construed as being limiting in any way. The other resource(s)712can include any other hardware resources that can be utilized by the compute resource(s)708and/or the memory resource(s)710to perform operations. The other resource(s)712can include one or more input and/or output processors (e.g., a network interface controller and/or a wireless radio), one or more modems, one or more codec chipsets, one or more pipeline processors, one or more fast Fourier transform (“FFT”) processors, one or more digital signal processors (“DSPs”), one or more speech synthesizers, combinations thereof, or the like. The hardware resources operating within the hardware resource layer702can be virtualized by one or more virtual machine monitors (“VMMs”)714A-714N (also known as “hypervisors”; hereinafter “VMMs714”). The VMMs714can operate within the virtualization/control layer704to manage one or more virtual resources that can reside in the virtual resource layer706. The VMMs714can be or can include software, firmware, and/or hardware that alone or in combination with other software, firmware, and/or hardware, can manage one or more virtual resources operating within the virtual resource layer706. The virtual resources operating within the virtual resource layer706can include abstractions of at least a portion of the compute resources708, the memory resources710, the other resources712, or any combination thereof. These abstractions are referred to herein as virtual machines (“VMs”). 
In the illustrated embodiment, the virtual resource layer706includes VMs716A-716N (hereinafter “VMs716”). Based on the foregoing, it should be appreciated that systems and methods for using a traffic monitoring service to detect traffic pattern shifts have been disclosed herein. Although the subject matter presented herein has been described in language specific to computer structural features, methodological and transformative acts, specific computing machinery, and computer-readable media, it is to be understood that the concepts and technologies disclosed herein are not necessarily limited to the specific features, acts, or media described herein. Rather, the specific features, acts and mediums are disclosed as example forms of implementing the concepts and technologies disclosed herein. The subject matter described above is provided by way of illustration only and should not be construed as limiting. Various modifications and changes may be made to the subject matter described herein without following the example embodiments and applications illustrated and described, and without departing from the true spirit and scope of the embodiments of the concepts and technologies disclosed herein.
11943127
DETAILED DESCRIPTION In order to make the objective, technical schemes and advantages of the present disclosure clearer, embodiments of the present disclosure will be further described in detail below with reference to the accompanying drawings by way of embodiments. It should be understood that the embodiments described herein are only for explaining the present disclosure, and are not intended to limit the present disclosure. Embodiment One In related technologies, there are two main methods for controlling background data of an application. The first is to regularly clean up the application after the application runs in the background. The second is to configure a firewall after the application runs in the background, so as to disable the application from sending and receiving packets. Background cleanup cannot ensure the normal data sending and receiving of the application, thus affecting functions of the application, while the firewall only blocks the delivery of packets sent and received by the application. However, the application will still actively wake up a terminal device and send handshake messages, so that there is still a problem of reduced battery duration. To solve the above problem, a network-based control method for power consumption of an application is provided in the embodiment of the present disclosure. Because the network environment of a system is dynamic and ever-changing, the method includes first determining whether an application freezing function is enabled; if so, the network characteristics of all applications in the system are recorded and monitored to identify a target application that frequently detects heartbeats of servers, and the network access of the target application, that is, the peer server to which the target application is connected, is monitored. If the peer server becomes inaccessible due to lack of a data access function in the current network environment or for other reasons, the target application needs to be frozen. 
By limiting physical resources (including CPU, memory, I/O, and network) used by process groups through a system kernel, the target application will not frequently wake up the system or send network data or handshake messages, thus saving power consumption. FIG.1is a flow chart of a network-based control method for power consumption of an application provided in the embodiment of the present disclosure. As shown inFIG.1, the network-based control method for power consumption of an application includes the following steps S101to S103. In a step of S101, a target application having a frequent wakeup or heartbeat detection behavior is identified when a freezing function is enabled. It should be noted that the freezing function is responsible for receiving and implementing freezing and unfreezing operations of the target application, with a process freezing technology as its core technology. The process freezing technology is mainly a terminal-based method for restricting the occupation of system resources by applications, including but not limited to CPU, sensor, GPS (global positioning system), memory and other peripheral resources. In the embodiment of the present disclosure, the terminal device is provided with a freezing function, and before the target application is identified, it is necessary to determine whether the freezing function is enabled either by detecting whether a user sends an operating command to enable the freezing function or by detecting whether there is an application to be frozen in all applications of the terminal device. For example, when the user taps “Freezing Function Enable”, the terminal device enables the freezing function. In an embodiment of the present disclosure, the target application having a frequent wakeup or heartbeat detection behavior is identified in all applications of the terminal device, and the frequent wakeup or heartbeat detection behavior will cause power consumption to the terminal device. 
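On Linux, the kernel-level resource limiting described above is commonly realized with the cgroup freezer controller; the following is a minimal sketch under that assumption. The mount point and group name are hypothetical, and production systems may instead use cgroup v2 (`cgroup.freeze`) or a vendor freezing framework.

```python
# Minimal sketch of freezing a process group via the Linux cgroup (v1)
# freezer controller. Writing "FROZEN" stops scheduling of every task in
# the group, so a frozen application cannot wake the device or send
# handshake messages; "THAWED" resumes it. Root privileges are normally
# required, and the cgroup for the target app must already exist.
import os

def set_freeze_state(group, frozen, base="/sys/fs/cgroup/freezer"):
    state_file = os.path.join(base, group, "freezer.state")
    with open(state_file, "w") as f:
        f.write("FROZEN" if frozen else "THAWED")

# Usage (assumes a freezer cgroup named "target_app" exists):
# set_freeze_state("target_app", frozen=True)   # freeze the target application
# set_freeze_state("target_app", frozen=False)  # unfreeze it
```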
In the embodiment of the present disclosure, there are the following two methods for identifying the target application in all applications. The first method is to detect network behaviors of all applications to identify the target application having the frequent wakeup or heartbeat detection behavior. In order to communicate with the peer server, the application will wake up the terminal device regularly and send heartbeat detection messages. When it is detected that a number of times that an application wakes up the terminal device or detects heartbeats exceeds a default number of times within a period of time, it is determined that the application has the frequent wakeup or heartbeat detection behavior, and the default number of times may be set by the terminal device or defined by a user. Specifically, a cyclic and regular detection method may be adopted. For example, the terminal device detects that an application A detects heartbeats x times at an interval of T, and the application A is identified as the target application when x is greater than the number of times set by the terminal device. For another example, the terminal device acquires the number of times that the application A detects heartbeats at the interval of T, and the application A is identified as the target application when the number of times that the application A detects heartbeats in each of at least two cycles T is greater than a normal number of times of heartbeat detection. The second method is to identify the target application having the frequent wakeup or heartbeat detection behavior based on a default information base. The default information base includes a default application library that is restricted from connecting to the peer server, e.g., in Chinese mainland, Google is under regulation and thus its relevant servers are inaccessible, and some school website applications are inaccessible using extranet. 
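The first identification method can be sketched as counting an application's wakeup or heartbeat events per cycle of length T and flagging the application when the per-cycle count exceeds a threshold in at least two consecutive cycles. The event representation (timestamps in seconds) and the default thresholds are illustrative assumptions.

```python
# Sketch of the cyclic detection method: bucket wakeup/heartbeat timestamps
# into cycles of length `cycle_len` and identify the application as a
# target when the count exceeds `threshold` in `cycles` consecutive cycles.
def is_target_app(event_times, cycle_len, threshold, cycles=2):
    if not event_times:
        return False
    start = min(event_times)
    n_cycles = int((max(event_times) - start) // cycle_len) + 1
    counts = [0] * n_cycles
    for t in event_times:
        counts[int((t - start) // cycle_len)] += 1
    run = 0
    for c in counts:
        run = run + 1 if c > threshold else 0
        if run >= cycles:
            return True
    return False

# Eight heartbeats in each of two 60-second cycles, threshold of 5 per cycle:
beats = [i * 7.5 for i in range(16)]  # 0.0, 7.5, ..., 112.5 seconds
print(is_target_app(beats, cycle_len=60, threshold=5))  # True
```

A single busy cycle is not enough under this sketch; the excess must persist across cycles, which filters out one-off bursts of legitimate traffic.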
In such cases, even without any connection to a designated server, the application will frequently wake up the terminal device and send handshake messages, resulting in reduced battery duration of the terminal device. Therefore, the applications in the default application library may be taken as the target application. Of course, the applications in the default application library may be flexibly adjusted as the case may be. For example, when a school website is accessed via the intranet, its application is not included in the default application library. In the embodiment of the present disclosure, the default information base further includes current network states of the terminal device. For example, when the terminal device is in arrears, no application can be connected to the peer server. For another example, in some cases or in some areas (such as in a signal blocking area with a signal blocker), there are certain restrictions on network access or the network is inaccessible, and no application can be connected to the peer server. Therefore, when there are network restrictions on the current network state of the terminal device, an application will still actively wake up the terminal device or detect heartbeats, and the applications (such as “WeChat”, “WEIBO” and other third-party applications) that need to communicate with the server are the target application. In the embodiment of the present disclosure, the target application may be identified by the first method alone, by the second method alone, or by a combination of the first method and the second method. For example, when it is determined that an application B has a frequent wakeup behavior through detection of the network behaviors of the applications, whether the application B is included in the default application library of the information base is determined; if so, the application B is identified as the target application. 
In a step of S102, whether a peer server to which the target application is connected is accessible is monitored. In an embodiment of the present disclosure, whether the target application is normally connected to the server is determined by detecting whether the peer server is accessible. It will be understood that it is very likely that the peer server to which the target application is connected is inaccessible if the current network environment of the terminal device does not have the access capability. Therefore, before monitoring whether the peer server to which the target application is connected is accessible, it should be determined that the current network environment is an Internet-connected environment, i.e., the data access function is available in the current network environment. The terminal device may determine whether a SIM (Subscriber Identity Module) card is inserted therein; if so, the current Internet connection of the terminal device is obtained either by further determining whether a data service is enabled or by directly determining whether the terminal device is connected to an available wireless network (Wi-Fi). When the terminal device is connected to the Internet, the peer server to which the target application is connected is monitored, and whether the peer server is accessible is detected cyclically by network commands. The network commands include, without limitation, DNS, PING, and Socket. For example, if the terminal device sends a PING command to the peer server at t1 without any response message from the peer server, and then sends a PING command to the peer server again at t2, still without any response message from the peer server, that is, there is no response message from the peer server at either t1 or t2, which are both in one cycle, it is determined that the peer server is inaccessible. 
For another example, if t1 and t2 are in two adjacent cycles and there is no response message from the peer server, it is determined that the peer server is inaccessible. In some embodiments, it is also possible to determine whether the server is accessible by determining whether a response message is received within a preset time after the target application sends a heartbeat packet to the peer server for heartbeat detection. In a step of S103, the target application is frozen after the peer server becomes inaccessible. If the peer server is inaccessible for other reasons in the current Internet-connected environment, the target application is frozen. In this case, the target application will not frequently wake up the system or send network data or handshake messages, thus saving power consumption. It should be understood that the peer server to which the target application is connected is inevitably inaccessible when there is no connection between the terminal device and the Internet, so the target application may be directly frozen. It should be noted that the freezing function will send a freezing command when the target application is frozen by the terminal device. In this case, the terminal device will freeze a process of the target application based on other states of the target application. For example, when the peer server is inaccessible and it is determined that the current target application is running in the background, the process of the target application is frozen. For another example, when the peer server is inaccessible and the current target application runs in the foreground but the terminal device is currently in a screen-off state, the process of the target application is frozen. In some embodiments, it is also possible to determine whether the target application is a default core or key application, and if so, the process of the target application is frozen. 
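The cyclic accessibility check of S102 and the state-dependent freezing decision of S103 can be sketched as follows. This is a minimal illustrative Python sketch: the function names and the boolean state flags are assumptions, and `probe` stands in for whatever PING/DNS/Socket command an implementation would actually issue.

```python
def server_accessible(probe, attempts=2):
    # Cyclic check: the peer server is deemed inaccessible only when
    # consecutive probes (e.g. PING commands at t1 and t2) all go
    # unanswered. `probe` is a hypothetical callable returning True
    # when a response message arrives from the peer server.
    return any(probe() for _ in range(attempts))

def should_freeze(accessible, in_background, screen_off=False, is_core=False):
    # S103 sketch: the freezing command is honored based on the target
    # application's other states (all flags here are assumed inputs).
    if accessible:
        return False
    if in_background:
        return True
    if screen_off:   # running in the foreground but the screen is off
        return True
    return is_core   # some embodiments also check default core/key applications

print(should_freeze(server_accessible(lambda: False), in_background=True))  # True
print(should_freeze(server_accessible(lambda: True), in_background=True))   # False
```

In a real terminal device, the probe would carry a timeout and the two attempts would be spaced one detection cycle apart rather than issued back to back.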
In some embodiments, the freezing function may be automatically enabled to freeze the target application after the terminal device identifies a target application having the frequent wakeup or heartbeat detection behavior and detects that the peer server accessed by the target application is inaccessible. After the target application is frozen, the terminal device may further unfreeze the target application actively or passively. For example, when the target application runs in the foreground or the peer server becomes accessible, the target application is unfrozen. After the Internet connection of the terminal device resumes and/or the server becomes accessible, the freezing function will send an unfreezing command to unfreeze a kernel process of the target application. If the user taps the target application to run the target application in the foreground, the terminal device will immediately unfreeze the target application, cancel all controls on the target application, and lift all restrictions until the target application runs in the background again. With the network-based control method for power consumption of an application provided in the embodiment of the present disclosure, when the freezing function is already enabled normally, it is detected and monitored whether there is a target application frequently detecting heartbeats or waking up the terminal device, the server status of the target application in the current network environment is monitored, and if the server is inaccessible and the target application runs in the background, the target application will be frozen, so as to avoid power consumption caused by frequent wakeup and heartbeat detection. Embodiment Two In the embodiment of the present disclosure, a network-based control method for power consumption of an application is provided, as shown in FIG. 2, including the following steps S201 to S206. In a step of S201, it is determined that a freezing function is enabled. 
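The freeze/unfreeze life cycle described above can be sketched as a small controller. This is an illustrative Python sketch only; the class name `FreezeController` and its method names are assumptions, and the real freezing function would act on kernel processes rather than on a Python set.

```python
class FreezeController:
    def __init__(self):
        self.frozen = set()  # names of currently frozen target applications

    def freeze(self, app):
        # The freezing function sends a freezing command for the app's process.
        self.frozen.add(app)

    def on_state_change(self, app, in_foreground, server_accessible):
        # Unfreeze when the user taps the app to run it in the foreground,
        # or when the peer server becomes accessible; all controls and
        # restrictions on the app are then lifted.
        if app in self.frozen and (in_foreground or server_accessible):
            self.frozen.discard(app)

fc = FreezeController()
fc.freeze("appA")
fc.on_state_change("appA", in_foreground=False, server_accessible=False)
print("appA" in fc.frozen)  # True: neither unfreeze condition holds yet
fc.on_state_change("appA", in_foreground=True, server_accessible=False)
print("appA" in fc.frozen)  # False: brought to the foreground, so unfrozen
```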
In the embodiment of the present disclosure, the terminal device detects whether an enabling command is received by the freezing function, and the process ends if the freezing function is not enabled. In a step of S202, internal applications of the terminal device are detected to identify a target application having a frequent wakeup or heartbeat detection behavior. First, a cyclic and regular detection method may be adopted to detect the network behaviors of an application A. If the number of times of heartbeat detection obtained in each of two cycles is greater than a normal number of times of heartbeat detection, the application A is preliminarily determined as the target application. In this case, whether the application A is an application in an available default application library is further determined according to the default application library that is restricted from connecting to the peer server; if so, it is determined that the application A is the final target application. In some embodiments, when the application A is preliminarily determined as the target application but is not included in the default application library, the application A may still be taken as the final target application, or whether to take the application A as the final target application may be determined based on a current battery level of the terminal device. In a step of S203, the current network environment of the terminal device is monitored to determine whether the terminal device can be connected to the Internet. If so, the step S204 is performed. Otherwise, the step S205 is performed. The peer server to which the target application is connected is inevitably inaccessible when there is no connection between the terminal device and the Internet, so the target application can be directly frozen. In a step of S204, whether the peer server to which the target application is connected is accessible is monitored. If the peer server is inaccessible, the step S205 is performed. Otherwise, the process ends. 
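The two-stage confirmation in step S202, including the battery-level fallback, can be sketched as follows. This is an illustrative Python sketch; the battery threshold `LOW_BATTERY_PERCENT` is an assumed value, as the disclosure does not specify one.

```python
LOW_BATTERY_PERCENT = 20  # illustrative threshold, not specified in the text

def confirm_target(prelim_target, in_default_library, battery_percent=100):
    # S202 sketch: an application preliminarily flagged by network-behavior
    # detection is confirmed as the final target application when it appears
    # in the default application library; otherwise the decision may fall
    # back on the terminal device's current battery level.
    if not prelim_target:
        return False
    if in_default_library:
        return True
    return battery_percent <= LOW_BATTERY_PERCENT

print(confirm_target(True, True))                       # True
print(confirm_target(True, False, battery_percent=15))  # True
print(confirm_target(True, False, battery_percent=80))  # False
```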
The peer server to which the target application is connected is acquired, and whether the peer server is accessible is monitored, which may be detected by common network commands such as PING commands and Socket. In a step of S205, the target application is frozen. Upon receipt of a freezing command, the freezing function determines whether the target application is an application running in the background. If so, the target application is frozen. By freezing a process group, such an application will not frequently wake up the system or send network data or handshake messages, thus saving power consumption. In a step of S206, the target application is unfrozen when the target application switches to run in the foreground or the peer server becomes accessible. When the Internet connection resumes or the server becomes accessible, or if the user taps the application to run the application in the foreground, the terminal device will immediately unfreeze the application, cancel all control strategies on the application, and lift all network limitations until the application runs in the background again. In the embodiment of the present disclosure, by identifying the current network environment of the terminal device and the accessibility of the server, the target application is actively frozen using the freezing technology when it is detected that the server accessed by the target application is inaccessible, so as to avoid unnecessary wakeup and data receiving and sending operations. When the server being accessed becomes accessible, the target application is unfrozen and its services are resumed, so as to improve the battery duration of the terminal device. Embodiment Three For ease of understanding, the network-based control method for power consumption of an application will be described using a more specific example in this embodiment of the present disclosure. FIG. 3 is a flowchart of a terminal device regulating power consumption of applications. 
As shown in FIG. 3, after a freezing function of the terminal device is enabled, the terminal device detects the applications therein to identify a target application, performs network detection on the target application, identifies the applications to be frozen based on the network detection results, and freezes the applications to be frozen by having the freezing function freeze a kernel process. Specifically, as shown in FIG. 4, which is a flowchart of the network-based control method for power consumption of an application, the method includes the following steps S401 to S408. In a step of S401, it is determined that a freezing function is enabled. In a step of S402, network characteristics of the internal applications of the terminal device are detected to identify a target application having a frequent wakeup or heartbeat detection behavior. In the embodiment of the present disclosure, an application having the frequent wakeup or heartbeat detection behavior is identified based on an available knowledge base. Alternatively, a cyclic and regular detection method may be adopted to detect the network behaviors of the applications to identify the application having the frequent wakeup or heartbeat detection behavior. In a step of S403, the current Internet connection of the terminal device is obtained, and the next cyclic detection process is performed directly when there is no SIM card inserted, no data service enabled, or no Wi-Fi or other data network environment available. In a step of S404, all currently running applications are traversed. In a step of S405, the next background application is selected, and if there is no such application, the cyclic detection process ends. In a step of S406, whether the background application is the target application is determined. If so, the step S407 is performed. Otherwise, the process ends. In a step of S407, the server accessed by the application is obtained, and whether the server is accessible is determined by DNS/PING/Socket and other general methods. 
If the server is inaccessible, the step S408 is performed. Otherwise, the next cyclic detection process is performed directly. In a step of S408, the target application is frozen based on the network detection result of the target application. Embodiment Four In the embodiment of the present disclosure, a terminal device which may be implemented in various forms is provided. For example, the terminal devices described in the embodiment of the present disclosure may include mobile terminals such as mobile phones, tablet computers, notebook computers, palm computers, Personal Digital Assistants (PDA), Portable Media Players (PMP), navigation devices, wearable devices, smart bracelets and pedometers, as well as fixed terminal devices such as digital TVs and desktop computers. As shown in FIG. 5, the terminal device includes a processor 501, a memory 502, and a communication bus 503. The communication bus 503 is configured to perform connection and communication between the processor 501 and the memory 502. The processor 501 is configured to execute one or more computer programs stored in the memory 502 to perform the following steps in the above embodiments:
determining that there is a target application having a frequent wakeup or heartbeat detection behavior when a freezing function is enabled;
monitoring whether a peer server to which the target application is connected is accessible; and
freezing the target application by the freezing function after the peer server becomes inaccessible.
The target application having the frequent wakeup or heartbeat detection behavior is identified by the following two methods. The first method is to detect the network behaviors of all applications to identify the target application having the frequent wakeup or heartbeat detection behavior. The second method is to identify the target application having the frequent wakeup or heartbeat detection behavior based on a default information base. 
In an embodiment of the present disclosure, whether the peer server to which the target application is connected is accessible is determined only after determining that the current network environment is an Internet-connected environment. Specifically, whether the peer server is accessible may be cyclically detected through network commands. The target application can be frozen directly when it is determined that the current network environment is an Internet-disconnected environment. It should be noted that, as an essential part of the embodiments of the present disclosure, the freezing function is responsible for receiving and implementing freezing and unfreezing operations of the target application. Before freezing the target application, whether the target application is running in the background may be determined; if so, a process of the target application is frozen. The target application is unfrozen when the target application runs in the foreground or the peer server becomes accessible. In an embodiment of the present disclosure, a non-transitory computer-readable storage medium storing one or more programs is further provided, where the one or more programs, when executed by one or more processors, cause the one or more processors to perform the steps of the network-based control method for power consumption of an application in the above embodiments. The non-transitory computer-readable storage medium includes volatile or nonvolatile, removable or non-removable media implemented in any method or technology for storing information, such as computer-readable commands, data structures, computer program modules or other data. 
A computer-readable storage medium includes, without limitation, RAM (Random Access Memory), ROM (Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory or other memory technologies, CD-ROM (Compact Disc Read-Only Memory), digital versatile disc (DVD) or other optical disc memory, cartridges, tapes, disk memory or other magnetic memory devices, or any other media configured to store desired information and accessible by a computer. According to the network-based control method for power consumption of an application, the terminal device and the non-transitory computer-readable storage medium provided in the embodiments of the present disclosure, a target application having a frequent wakeup or heartbeat detection behavior is identified when a freezing function is enabled; whether a peer server to which the target application is connected is accessible is monitored; and the target application is frozen by the freezing function when the peer server is inaccessible. In some implementation processes, those applications that frequently perform heartbeat detection with the server can be controlled timely and effectively using the freezing technology after determining, by detecting their network access, that the peer server to which each application is connected is inaccessible, thereby reducing the power consumption caused by frequent wakeup and heartbeat detection. Therefore, those having ordinary skill in the art should understand that all or some of the steps, systems and functional modules/units in the devices disclosed above can be implemented as software (which may be implemented by computer program codes executable by a computing device), firmware, hardware and appropriate combinations thereof. In implementations by hardware, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components. 
For example, a physical component may have a plurality of functions, or a function or step may be performed cooperatively by several physical components. Some or all of the physical components may be implemented as software executed by a processor, such as a CPU, a digital signal processor or a microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Furthermore, it is well known to those of ordinary skill in the art that communication media typically contain computer-readable commands, data structures, computer program modules, or other data in modulated data signals such as carrier transmission or other transmission mechanisms, and may include any information delivery media. Therefore, the present disclosure is not limited to any particular combination of hardware and software. The above descriptions are further detailed descriptions of the embodiments of the present disclosure with reference to specific implementation ways, and it cannot be assumed that the specific implementation ways of the present disclosure are limited to these descriptions. For those of ordinary skill in the art to which the present disclosure belongs, a number of simple derivations or substitutions may also be made without departing from the concept of the present disclosure, all of which should be regarded as falling into the protection scope of the present disclosure.
11943128
DETAILED DESCRIPTION In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present inventive subject matter. It will be apparent, however, that the present inventive subject matter may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present inventive subject matter. Embodiments are described herein according to the following outline:
1.0. General Overview
2.0. System Overview
2.1. Network Nodes
2.2. Data Units
2.3. Paths
2.4. Telemetry Collection
2.5. Example Device Telemetry Components
2.6. Example Telemetry Data
2.7. Miscellaneous
3.0. Functional Overview
4.0. Example Embodiments
5.0. Example Network Device
5.1. Ports
5.2. Packet Processors
5.3. Buffers
5.4. Queues
5.5. Traffic Management
5.6. Forwarding Logic
5.7. Multi-Pipeline Architecture
5.8. Miscellaneous
6.0. Implementation Mechanism—Hardware Overview
7.0. Extensions and Alternatives
1.0. General Overview Approaches, techniques, and mechanisms are disclosed for collecting telemetry data within a network. According to an embodiment, a first node on a network is configured as an ingress telemetry node. The first node may receive a data unit for forwarding purposes, and determine to clone it for telemetry purposes, thereby producing a telemetry data unit. The first node may further inject telemetry and/or other data into the cloned data unit. The telemetry data unit continues along the same path as the original data unit until it reaches a second node on the network, referred to as an egress edge telemetry node. The second node extracts the telemetry data from the telemetry data unit and sends telemetry information based thereon to a telemetry collector, while the original data unit continues to its final destination, if that destination is not the second node. 
In an embodiment, an ingress edge telemetry node is configured to select, based on one or more telemetry triggers, the data units for which telemetry data units are generated. A telemetry trigger includes one or more criteria against which data unit metadata or even payload is compared, and/or against which device state information is compared. A telemetry data unit may be generated for any data unit that matches such criteria, or for any data unit that is being processed when the device state matches the criteria. Or, in an embodiment, a telemetry data unit is generated for only a sample of such data units. A node may have any number of defined telemetry triggers. In an embodiment, any node along the path between the first node and the second node may be configured to inject further telemetry data into the telemetry data unit, and/or modify the telemetry data already in the telemetry data unit. Any such node is referred to as a telemetry transit node. The telemetry information sent by the egress edge telemetry node, which may also be referred to as telemetry report data, may include, or be derived from, this additional telemetry data as well. The egress edge telemetry node may likewise be configured to add to or update the telemetry data. In an embodiment, the data that the ingress edge telemetry node inserts may include telemetry marking data, such as a telemetry marker in or after a packet header, that indicates to subsequent nodes that a data unit is a telemetry data unit. Based on the marking data, a subsequent node may avoid cloning the clone and/or recognize that it needs to insert or update telemetry data. 
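A telemetry trigger of the kind described above can be sketched as follows. This is an illustrative Python sketch only; the class name `TelemetryTrigger`, the metadata field names, and the exact-match criteria semantics are assumptions, not the patent's definition.

```python
import random

class TelemetryTrigger:
    # Criteria are compared against data-unit metadata; an optional sampling
    # rate means only a sample of matching data units is cloned.
    def __init__(self, criteria, sample_rate=1.0):
        self.criteria = criteria        # e.g. {"dst_port": 443}
        self.sample_rate = sample_rate  # 1.0 = clone every match

    def matches(self, metadata):
        # All criteria must match the data unit's metadata fields.
        return all(metadata.get(k) == v for k, v in self.criteria.items())

    def fires(self, metadata):
        # Generate a telemetry data unit only for a sample of matches.
        return self.matches(metadata) and random.random() < self.sample_rate

trigger = TelemetryTrigger({"dst_port": 443})
print(trigger.fires({"dst_port": 443, "src_addr": "10.0.0.1"}))  # True
print(trigger.fires({"dst_port": 80}))                           # False
```

A real trigger might also compare ranges, masks, or device state (e.g. queue depth) rather than exact metadata values.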
According to an embodiment, the inserted telemetry data may include or be based on, without limitation, a node identifier, an identifier for an ingress port over which the data unit arrived, an identifier for an egress port over which the data unit is to be sent, an ingress timestamp, an egress timestamp, an identifier of a queue to which the data unit was assigned, a length of that queue when the data unit was enqueued or dequeued, a measure of delay experienced by the data unit, and so forth. In an embodiment, the telemetry data unit may be only a partial clone of the original data unit. This may both reduce the bandwidth utilized by telemetry data units and avoid or reduce the likelihood of the telemetry data unit's size growing greater than a maximum transmission unit (“MTU”) size as it is annotated with telemetry data. For example, the telemetry data unit may be stripped of all payload data when cloned. Or, the telemetry data unit may be truncated to a specific number of bytes, or to the size of a Transport Data Unit within the node. In an embodiment, the telemetry data unit may be truncated by a telemetry node upon detecting that the telemetry data unit exceeds a certain size. According to an embodiment, the telemetry information that the egress edge telemetry node sends to the telemetry collector includes all of the telemetry data it receives in the telemetry data unit, plus, optionally, additional telemetry data generated by the egress edge telemetry node. In another embodiment, the telemetry information is a summary report generated by a central processing unit or other logic within the egress edge telemetry node, based on the telemetry data received in the telemetry data unit. In an embodiment, the egress edge telemetry node may determine whether or not to send the telemetry information based on filtering and/or sampling criteria. 
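The partial-clone and truncation behavior described above can be sketched in code. This is a minimal illustrative Python sketch under stated assumptions: data units are modeled as dicts, the marker bytes, the `HOP_RECORD_SIZE` constant, and the strip-whole-payload policy are all illustrative choices, not the disclosure's wire format.

```python
MTU = 1500  # illustrative maximum transmission unit size, in bytes

def clone_for_telemetry(header: bytes, payload: bytes, keep: int = 128) -> dict:
    # Partial clone: keep the header, truncate the payload to `keep` bytes,
    # and add a marker recognizable by subsequent nodes plus an empty
    # per-hop telemetry list.
    return {"marker": b"TLM", "header": header,
            "payload": payload[:keep], "hops": []}

HOP_RECORD_SIZE = 64  # assumed fixed size of one hop's telemetry record

def annotate(unit: dict, hop_record: dict) -> None:
    # A node appends its hop record (node id, ingress/egress ports,
    # timestamps, queue id and length, delay, etc.). If the annotated unit
    # would exceed the MTU, the remaining payload is stripped.
    unit["hops"].append(hop_record)
    size = (len(unit["header"]) + len(unit["payload"])
            + HOP_RECORD_SIZE * len(unit["hops"]))
    if size > MTU:
        unit["payload"] = b""

unit = clone_for_telemetry(b"H" * 40, b"P" * 2000)
annotate(unit, {"node_id": 1, "queue_delay_ns": 120})
print(len(unit["payload"]))  # 128: still under the MTU after one hop
```

Dropping payload rather than hop records reflects the idea that the clone exists only to carry telemetry; the original data unit still delivers the full payload.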
For instance, the egress edge telemetry node may send the telemetry information only if the extracted telemetry data indicates that the telemetry data unit has experienced an aggregate amount of queue delay over a certain threshold over its entire path. Or, the egress edge telemetry node may send the telemetry information a certain percentage of the time, which percentage may be fixed or may vary based on the observed telemetry data. According to an embodiment, different network nodes may have different telemetry triggers defined, such that they may generate telemetry data units for different flows or types of data units. There may also be different telemetry triggers for different roles—for instance, a node might have one set of telemetry triggers that indicate data units for which the node may function as an ingress edge telemetry node, another set of telemetry triggers that indicate data units for which the node may function as a telemetry transit node, and/or another set of telemetry triggers that indicate data units for which the node may function as an egress edge telemetry node. In an embodiment, a transit node may include separate telemetry triggers indicating when it should send a telemetry data unit or a telemetry report based thereon to a telemetry collector (e.g. if the telemetry data unit grows too large, if the telemetry data unit contains data for a maximum number of hops, if an event occurs, etc.), instead of, or in addition to, forwarding the telemetry data unit on. Optionally, the transit node may generate a new telemetry data unit at such times so as to continue collecting telemetry data. Depending on the embodiment, a telemetry node may inject telemetry data into any telemetry data unit, or only into those that match telemetry triggers defined for that specific node. 
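The egress-edge filtering and sampling decision described at the start of this passage can be sketched as follows. This is an illustrative Python sketch; the delay threshold and the hop-record field name `queue_delay_ns` are assumed for illustration.

```python
DELAY_THRESHOLD_NS = 500_000  # illustrative aggregate queue-delay threshold

def should_report(hop_records, sample=lambda: True):
    # Egress-edge sketch: report to the telemetry collector only when the
    # aggregate queue delay over the entire path crosses a threshold, and
    # optionally only for a sample of such telemetry data units.
    total_delay = sum(h.get("queue_delay_ns", 0) for h in hop_records)
    return total_delay > DELAY_THRESHOLD_NS and sample()

print(should_report([{"queue_delay_ns": 300_000},
                     {"queue_delay_ns": 400_000}]))  # True: 700 us aggregate
print(should_report([{"queue_delay_ns": 100_000}]))  # False: under threshold
```

The `sample` callable stands in for the fixed or telemetry-dependent reporting percentage mentioned above.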
Similarly, an egress telemetry node may extract data from any telemetry data unit that it receives, or only from those that match telemetry triggers defined for the egress telemetry node, depending on the embodiment. In an embodiment, a node that is an ingress edge telemetry node for network traffic flowing in one direction may be—but not necessarily is—an egress edge telemetry node for traffic flowing the other direction. In an embodiment, a group of interconnected network nodes that includes one or more ingress edge telemetry nodes, one or more egress edge telemetry nodes, zero or more telemetry transit nodes, and a telemetry collector may be referred to as a telemetry domain. In some embodiments, the telemetry collector or a management node may push telemetry triggers out to all nodes within a telemetry domain, or telemetry triggers may be defined for each node individually. In some embodiments, among other advantages, by transmitting telemetry data in a cloned data unit instead of injecting the telemetry data into the original data unit, a system may collect telemetry data about the path traveled by a data unit without exceeding the maximum transmission unit size. Further, in some embodiments, using a cloned telemetry data unit instead of injecting the telemetry data into the original data unit avoids the requirement of specialized logic at the end point to parse and interpret the telemetry data, which might otherwise delay application logic in consuming and acting upon the original data unit. Also, in some embodiments, forwarding logic may handle the cloned data unit separately from the original data unit, giving downstream forwarding logic flexibility to drop or reprioritize telemetry data units when extra bandwidth is needed. 
Or, in an embodiment, forwarding logic may prioritize a telemetry data unit more highly when the telemetry data it carries indicates a high delay measure, so as to more quickly bring such delays to the attention of an administrator and/or network monitoring logic. In some embodiments, a node is configured to clone a data unit and inject the resulting telemetry data unit into the data unit processing pipeline as quickly as possible, so as to minimize changes to the network state between the time that the original data unit passes through a node and the time that the cloned data unit passes through that node. This may thereby increase the relevance of the collected telemetry data to the original data unit. In an embodiment, for instance, an egress packet processor may determine that telemetry triggers have been met for a newly dequeued data unit and instruct a downstream component, such as the transmit buffer, to forward the original data unit and then send a clone of the data unit to an internal port that recirculates the clone to a traffic manager or ingress arbiter for immediate enqueuing in an egress queue. Such an embodiment may, for instance, result in the telemetry data unit being transmitted within a matter of a few hundred nanoseconds of the original data unit. In another embodiment, the transmit buffer may include specialized hardware configured to generate and transmit the cloned data unit immediately before or after the original data unit. 2.0. System Overview FIG. 1 is an illustrative view of various aspects of an example networking system 100, also referred to as a network, in which the techniques described herein may be practiced, according to an embodiment. 2.1. Network Nodes Networking system 100 comprises a plurality of interconnected nodes 110a-110n (collectively nodes 110), each implemented by a different computing device. 
For example, a node 110 may be a single networking computing device, such as a switch or router, in which some or all of the processing components described herein are implemented in application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other integrated circuit(s). As another example, a node 110 may include one or more memories storing instructions for implementing various components described herein, one or more hardware processors configured to execute the instructions stored in the one or more memories, and various data repositories in the one or more memories for storing data structures utilized and manipulated by the various components. Each node 110 is connected to one or more other nodes 110 in network 100 by one or more communication links 120. Each node 110 may have any number of inbound (ingress) and outbound (egress) network ports, which allow the node 110 to interface with other nodes for communication purposes. The communication links 120 may include any suitable transmission media, such as wired cabling (e.g. copper, optical, etc.) coupled to these ports, and/or wireless communication mechanisms. Note that system 100 illustrates only one of many possible arrangements of nodes 110 within a network. Other networks may include fewer or additional nodes 110 having any number of links 120 between them. In an embodiment, nodes 110 may generally be classified as either terminal nodes, corresponding to nodes 110a-h, or intermediary nodes, corresponding to nodes 110i-n. Terminal nodes 110a-h typically have a small number of communication links 120 within network 100—often a single link 120—and are generally only senders or consumers of network traffic within network 100. By contrast, intermediary nodes 110i-n may have a significant number of communication links 120 to a larger number of other nodes 110, and generally function as switches that relay network traffic from one node 110 to another. 
Hence, intermediary nodes110i-ntypically comprise various components for implementing forwarding logic, such as described elsewhere herein, whereas terminal nodes110a-hmay not necessarily have such logic, depending on the embodiment. In an embodiment, intermediary nodes110i-nmay be divided into leaf nodes110i-kand spine nodes110l-n. Leaf nodes110i-kprovide terminal nodes110a-gwith access to other nodes110a-gthat may be connected to other leaf nodes110i-k. Leaf nodes110i-kmay be, for instance, top-of-rack or end-of-rack switch devices. Spine nodes110l-ninterconnect leaf nodes110i-kand/or other spine nodes110l-n. Spine nodes110l-nmay be, for instance, spine switch devices. Optionally, certain terminal nodes, such as node110h, may connect directly to one or more spine nodes110l-nfor management or other purposes. In yet other embodiments, nodes110i-nmay be divided in other manners, such as into core, aggregation, and access nodes, depending on the network architecture. In an embodiment, network100may be a data center, cloud, etc. In an embodiment, some or all of nodes110a-gmay be “host” nodes that host various servers or other applications. In an embodiment, some or all of nodes110a-gmay be “edge” nodes, such as gateway, firewall, and/or load-balancing devices, that provide outside devices and/or networks with access to network100. Such devices may have a number of external facing ports to connect to these devices and/or networks. In yet other embodiments, some or all of nodes110a-gmay be client devices, such as personal computers, smartphones, and so forth. Although only nodes110a-nare depicted, in other embodiments, a network100may be significantly larger, comprising many more terminal nodes110a-hand intermediary nodes110i-n, with topologies of various complexities. The path between two nodes in a network100may be significantly longer depending on the topology and/or network conditions. Moreover, in some embodiments, network100may be smaller than depicted. 2.2. 
Data Units While each node110may or may not have a variety of other functions, in an embodiment, each node110is configured to send, receive, and/or relay data to one or more other nodes110via the links120. In general, data is communicated as a series of discrete units or structures represented by signals transmitted over the communication links120. When a node110receives a unit of data, it may examine addressing information and/or other information within the unit of data to determine how to process the unit (e.g. whether the node110is the intended recipient, or the data unit should be forwarded on to another node110). The addressing information may be, for instance, a Media Access Control (“MAC”) address, Internet Protocol (IP) address, MPLS label, and/or any other suitable information. Addressing information, flags, labels, and other metadata used for determining how to handle a data unit are typically embedded within a portion of the data unit known as the header. The header is typically at the beginning of the data unit, and is followed by the payload of the data unit, which is the information actually being sent in the data unit. A header is typically comprised of fields of different types, such as a destination address field, source address field, destination port field, source port field, and so forth. In some protocols, the number and the arrangement of fields may be fixed. Other protocols allow for arbitrary numbers of fields, with some or all of the fields being preceded by type information that explains to a node110the meaning of the field. Different nodes110within a network100may send, receive, and/or forward data units at different communication levels, or layers. For instance, a first node110may send a unit of data at the transport layer (e.g. a TCP segment) to a second node110over a path that includes an intermediate node110. 
This unit of data, which may sometimes first be broken into smaller units, may be encapsulated in units of data at various sublevels before it is transmitted from the first node110. For example, a TCP segment may be encapsulated into IP packets, those IP packets may be encapsulated into Ethernet frames, and those Ethernet frames may eventually be sent out as a collection of signal-encoded bits to the intermediate device. Depending on the network layer at which the device operates, the intermediate node110may examine, and in some cases reconstruct, the original frames, packets, and/or segment before forwarding that data to the second node110. Or, the intermediate node110may simply examine the frames, or in some cases the packets as well, and forward the data to the second node110without ever examining the original segment (or, in some cases, the packet). Data units of the network layer at which a node110operates are considered protocol data units (“PDUs”), such as, without limitation, IP packets for the network layer (layer 3), frames for the Ethernet layer (layer 2), etc. A PDU may, for data unit handling purposes both within and without the device, be broken up into smaller subunits, or portions. The node110need not necessarily ever assemble the data in a PDU together, but rather may in an embodiment act upon constituent portions of the PDU, which may be linked together by identifiers, linked lists, or other suitable constructs. Within a node itself, these portions are referred to herein as transport data units (“TDUs”). For instance, if the PDU is a packet, the TDU might be one or more cells. For convenience, many of the techniques described in this disclosure are described with respect to embodiments where the PDUs are IP packets in an L3 (level 3) network, and the TDUs are constituent cells and/or frames in an L2 (level 2) network, in which contexts the described techniques have particular advantages. 
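As a simple illustration of the PDU/TDU relationship described above, the following Python sketch splits a PDU payload into fixed-size cells linked by a shared identifier. The cell size and field names are assumptions for illustration only, not details of the embodiment.

```python
# Hypothetical sketch: breaking a PDU into fixed-size TDUs (e.g. cells)
# linked by a shared PDU identifier and a sequence number.

CELL_SIZE = 256  # bytes per TDU; an illustrative assumption


def split_into_tdus(pdu_id, payload, cell_size=CELL_SIZE):
    """Split a PDU payload into ordered TDUs tagged with the PDU id."""
    return [{"pdu_id": pdu_id, "seq": i // cell_size,
             "data": payload[i:i + cell_size]}
            for i in range(0, len(payload), cell_size)]


def reassemble(tdus):
    """A consumer can rebuild the PDU payload from its ordered TDUs."""
    return b"".join(t["data"] for t in sorted(tdus, key=lambda t: t["seq"]))


payload = bytes(1000)
tdus = split_into_tdus("pdu-1", payload)
assert len(tdus) == 4              # 256 + 256 + 256 + 232 bytes
assert reassemble(tdus) == payload
```

As the text notes, a node need never reassemble the full PDU; the shared identifier is what lets the portions be acted upon independently.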
It will be recognized, however, that these techniques may also be applied to realize advantages in routing other types of data units conforming to other protocols and/or at other communication layers within a network. Thus, unless otherwise stated or apparent, the techniques described herein should also be understood to apply to contexts in which the PDUs and TDUs are of any other types of data structures, such as segments, InfiniBand Messages, or datagrams. That is, in these contexts, other types of data structures may be used in place of packets, cells, frames, and so forth. 2.3. Paths Any node in the depicted network100may communicate with any other node in the network100by sending data units through a series of nodes110and links, referred to as a path. For example, Node B (110b) may send data units to Node D (110d) via a path from Node B to Node I to Node L to Node J to Node D. Depending on the size of network100, there may be a large number of valid paths between two nodes. For example, another path from Node B to Node D is from Node B to Node I to Node M to Node J to Node D. In an embodiment, a node110does not actually need to specify a full path for a data unit that it sends. Rather, the node110may simply be configured to calculate the best path for the data unit out of the device (e.g. which egress port it should send the data unit out on). When a node110receives a data unit (e.g. via an ingress port) that is not addressed directly to the node110, the node110relays the data unit along to either the destination node110, or a “next hop” node110that the node110calculates is in a better position to relay the data unit to the destination node110. In this manner, the actual path of a data unit is the product of each node110along the path making routing decisions about how best to move the data unit along to the destination node110identified by the data unit. 
For example, certain nodes110, such as switches and/or routers, may include forwarding logic that analyzes destination address(es) and/or other header information within a data unit to identify a specific one of its egress ports that should send the data unit. The forwarding logic may, for instance, look up or compare a destination address to forwarding instructions within a forwarding table, and resolve the egress port to send the data unit to. The forwarding instructions may indicate, for instance, an egress port over which to send the unit of data, a label to attach to the unit of data, etc. The node110may then send the data unit out the resolved egress port, which results in the data unit proceeding along a corresponding communication link to the next hop in its path. Where multiple ports provide valid paths to the destination, the forwarding logic may further utilize other header information and/or load balancing logic to resolve the data unit to a specific one of these ports. As a result, it is possible for a node to send two data units to the same destination via different ports/paths. In an embodiment, a node may utilize various data within a data unit to classify the data unit as belonging to a specific traffic flow. While different traffic flows may be assigned to different ports/paths, the forwarding logic of a node110is such that, within a specific time window, all data units that belong to a specific traffic flow are assigned to the same port/path. 2.4. Telemetry Collection FIG.2illustrates the collection of telemetry data within a network200using a telemetry domain250, according to an embodiment. Network200may be, for example, similar to network100, in that it comprises a plurality of nodes interconnected by communication links. In the case of network200, these nodes may include host nodes210, edge nodes220/240, and transit nodes230, all of which are network devices. 
For instance, host nodes210may be examples of terminal nodes110a-g, while edge nodes220/240and transit nodes230may be examples of intermediary nodes110i-n(e.g. leaf nodes and/or spine nodes). While only a limited number of nodes are depicted for illustrative purposes, it will be understood that network200may comprise many such nodes. FIG.2illustrates the path of a single PDU205through the network200. Host210agenerates PDU205, with an address of host210blisted as its intended destination. PDU205may be a data unit of any type, such as a TCP/IP packet, UDP/IP packet, VxLAN Transit packet, and so forth. The host node210aforwards the PDU205to the edge node220. Edge node220is at the edge of the path telemetry domain250. The path telemetry domain is a subset of nodes within the network200, each of which may have one or more telemetry features enabled. Edge node220is specifically configured to function as an ingress edge telemetry node for network traffic, including PDU205, that is entering the path telemetry domain250. As such, edge node220includes logic for determining when certain telemetry triggering conditions are met. These conditions, also referred to as telemetry triggers, may include receiving one or more data units with certain attributes (e.g. being of a certain type, belonging to a specific traffic flow, belonging to a particular traffic flow class, having attributes with specified values, etc.), conditions related to the device state (e.g. buffer fill levels, queue sizes, delays, etc.), and/or random or probability-based sampling conditions. For instance, there may be a rule that indicates that every hundredth data unit from host210ato host210bwill trigger telemetry collection. As another example, there may be a rule that whenever a buffer fill level exceeds a certain amount, telemetry collection should be triggered. The triggers may be fixed globally for path telemetry domain250, or vary depending on the node. 
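The trigger evaluation just described, combining a per-flow sampling rule with a state-based threshold rule, might be sketched as follows. The class name, sampling interval, and fill threshold are hypothetical choices for illustration, not details of the embodiment.

```python
# Hypothetical sketch of telemetry trigger evaluation: a trigger fires
# for every Nth data unit of a flow, or when a buffer fill level
# crosses a threshold. All names and thresholds are illustrative.

class TelemetryTriggers:
    def __init__(self, sample_every=100, buffer_fill_threshold=0.8):
        self.sample_every = sample_every                # e.g. every 100th data unit
        self.buffer_fill_threshold = buffer_fill_threshold
        self.flow_counters = {}                         # per-flow data unit counts

    def should_collect(self, flow_id, buffer_fill):
        """Return True if telemetry collection should be triggered."""
        count = self.flow_counters.get(flow_id, 0) + 1
        self.flow_counters[flow_id] = count
        if count % self.sample_every == 0:              # sampling-based trigger
            return True
        if buffer_fill >= self.buffer_fill_threshold:   # state-based trigger
            return True
        return False


triggers = TelemetryTriggers(sample_every=100, buffer_fill_threshold=0.8)
# 99 data units below the fill threshold do not trigger; the 100th does.
fired = [triggers.should_collect("hostA->hostB", 0.2) for _ in range(100)]
assert fired[:99] == [False] * 99 and fired[99] is True
# A high buffer fill level triggers regardless of the counter.
assert triggers.should_collect("hostA->hostB", 0.95) is True
```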
In an embodiment, the triggers may be set programmatically by administrative logic or a user. When the telemetry triggering conditions are met, the edge node220may clone a PDU that was being processed when the triggering conditions were met (in this case PDU205). The resulting cloned data unit, which may also be referred to as a telemetry data unit, may be a full copy of the original PDU, or a truncated version. For example, the clone225of PDU205may include only a certain number of bytes from PDU205, only a first one or more cells from the PDU205, or only the header of PDU205, depending on the embodiment. The edge node further inserts telemetry data, such as telemetry data222, into the clone225, such as in the payload of clone225, or at the beginning or end thereof. Telemetry data222may include a variety of state information and metrics, depending on the embodiment. Examples of such data are described elsewhere herein. The edge node220may further insert other data into a clone, such as a marker in a special header field or at the beginning of the payload that indicates to other nodes in path telemetry domain250that the clone225is a telemetry data unit. The edge node220utilizes its forwarding logic to forward both PDU205and clone225to a “next hop” on their path to host210b, which as depicted is transit node230b. Although there may be many other valid paths to host210b(e.g. through transit node230a), in an embodiment, clone225should be sent along the same path as PDU205. Because clone225is a clone of PDU205, this should occur automatically in most cases, as a result of clone225having certain header information that is the same as found in PDU205. For instance, in an embodiment, the forwarding logic of edge node220uses a “five-tuple” combination of source address, source port, destination address, destination port, and PDU type to map a data unit to a flow (and corresponding egress port). 
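The five-tuple flow mapping just described can be sketched as a deterministic hash over the five header fields, so that the original data unit and its clone necessarily resolve to the same egress port. The specific hash function (CRC-32) and the field names are illustrative assumptions.

```python
# Illustrative sketch of five-tuple flow classification: the original
# data unit and its clone share the same five-tuple, so a deterministic
# hash maps both to the same egress port. Field names are assumptions.

import zlib


def five_tuple(header):
    return (header["src_addr"], header["src_port"],
            header["dst_addr"], header["dst_port"], header["protocol"])


def select_egress_port(header, num_ports):
    """Deterministically map a data unit's flow to one of num_ports."""
    key = "|".join(str(f) for f in five_tuple(header)).encode()
    return zlib.crc32(key) % num_ports


pdu = {"src_addr": "10.0.0.1", "src_port": 4321,
       "dst_addr": "10.0.0.2", "dst_port": 80, "protocol": "TCP"}
clone = dict(pdu)  # a telemetry clone copies the relevant header fields
assert select_egress_port(pdu, 8) == select_egress_port(clone, 8)
```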
The cloning process ensures that both PDU205and clone225have the same five-tuple flow identifier, and consequently are sent down the same path, as long as the network state remains the same between sending. Transit node230breceives both PDU205and its clone225. Transit node230bis configured to function as a transit telemetry node for the domain250, in that it may update the telemetry data portion (e.g. modify existing or insert additional telemetry data) in telemetry data units that pass through it. For instance, the transit node230bmay insert its own state information and metrics into clone225. The transit node230bmay also or instead calculate aggregate metrics based on the metrics it collects and the metrics already found in telemetry data222, and replace those metrics with the calculated aggregate metrics. Transit node230bmay always update telemetry data222for any data unit that it recognizes as a telemetry data unit (e.g. through an inserted marker), or transit node230bmay update telemetry data222only when certain transit telemetry triggers are met. Transit node230butilizes its forwarding logic to forward both PDU205and clone225to a “next hop” on their path to host210b, which as depicted is edge node240b. Edge node240bis configured to function as an egress edge telemetry node within the path telemetry domain250. As such, egress node240bmonitors for telemetry data units (e.g. by searching for inserted telemetry markers). When it receives a telemetry data unit, such as clone225, the egress node240bextracts the telemetry data therein and sends telemetry information245based thereon to a telemetry collector260. Optionally, the telemetry information245may further be based on telemetry data generated by edge node240b. In an embodiment, egress node240bmay function as an egress edge telemetry node for all telemetry data units that it detects, or just for those meeting certain triggering conditions or having a certain marker. 
The telemetry information245may include some or all of the telemetry data222, including updates made by any intervening transit nodes230. In an embodiment, telemetry information245may be or include a report that summarizes such telemetry data222. For instance, edge node240bmay send the telemetry data222to an internal processor configured to process the telemetry data222and generate a report based thereon. In an embodiment, edge node240bmay only send the telemetry information245when certain conditions are met, either within the telemetry data222itself, or related to the state of edge node240bitself. The telemetry collector260may be a separate device on the network200, or logic implemented by a processor or other component on each edge node240. In an embodiment, the telemetry collector260is configured to receive telemetry information from each edge node220/240in the path telemetry domain250. In an embodiment, the telemetry collector260is internal to egress edge240b, such as a processor coupled to a special internal port. However implemented, the telemetry collector260may perform any of a variety of functions with the telemetry information it collects, such as generating reports and analyses for an end user, generating notifications in response to observing certain conditions in the network200, reconfiguring different devices on the network200when certain conditions occur, and so forth. In an embodiment, the telemetry collector260, or telemetry report generation logic at the egress node240b, may correlate telemetry data from a telemetry data unit to the original data unit from which it was cloned. It may do so, for example, by matching identifying information from the header (e.g. a combination of addresses, a fragment identifier, and offset) of the telemetry data unit to the original data unit. The telemetry data may then be used for forensics or other analyses involving the original data unit. 
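The correlation step described above, matching on addresses plus a fragment identifier and offset, might be sketched as follows. The function and field names are hypothetical, chosen only to illustrate how a collector could index original data units by header identity.

```python
# Hypothetical sketch of correlating a telemetry report back to the
# original data unit by matching identifying header fields, as a
# telemetry collector might. Key fields are illustrative assumptions.

def correlation_key(header):
    return (header["src_addr"], header["dst_addr"],
            header["frag_id"], header["frag_offset"])


def correlate(telemetry_reports, original_units):
    """Pair each telemetry report with its original data unit, if seen."""
    index = {correlation_key(u["header"]): u for u in original_units}
    return [(r, index.get(correlation_key(r["header"])))
            for r in telemetry_reports]


orig = {"header": {"src_addr": "10.0.0.1", "dst_addr": "10.0.0.2",
                   "frag_id": 7, "frag_offset": 0}, "payload": "data"}
report = {"header": dict(orig["header"]),
          "telemetry": [{"node": "230b", "delay_ns": 450}]}
pairs = correlate([report], [orig])
assert pairs[0][1] is orig   # the report is matched to its original unit
```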
In an embodiment, egress node240bis configured to discard (drop) a telemetry data unit after extracting the telemetry data from it. Hence clone225is not forwarded on to host210b. However, forwarding logic within egress node240bstill resolves PDU205to host210b, and sends PDU205out an egress port connected thereto. FIG.2illustrates but one example of telemetry collection within a network. Other embodiments may include fewer, additional, and/or different components in varying arrangements. For example, the path telemetry domain250may include additional edge nodes220/240and transit nodes230, each configured to function in similar manner to that described above. Moreover, there may be other nodes in between the transit nodes230and/or edge nodes220/240that do not function in the path telemetry domain. The path traversed by PDU205may further include some of these additional transit or other nodes. Likewise, there may be additional host nodes210. There may also be additional nodes between some host nodes210and the edge nodes220/240, which do not participate in the path telemetry domain250. While PDU205is specifically illustrated as being sent between host210aand host210b, similar techniques may be practiced for data units sent by any two nodes in a network, including both terminal nodes and intermediary nodes. In an embodiment, the data in PDU205may actually originate from outside of network200, and/or be destined for a device that is outside of network200. Host210aand/or210bmay be a gateway, firewall, or other device that functions as a source or destination for the purpose of the traversal of PDU205through network200. The roles of the edge nodes may be reversed depending on the direction of a data unit. For instance, for a PDU205traversing the path telemetry domain250in the opposite direction, edge node240bmay function as an ingress edge telemetry node, while edge node220may function as an egress edge telemetry node. 
In an embodiment, the path telemetry domain250may have different edge nodes and/or transit nodes depending on the type or flow of traffic, or even the current state of the various nodes. For instance, edge node220might not flag another PDU from host210afor cloning at a different time, if the PDU is of a different type, or for a different traffic flow, or if the node220is in a different state. On the other hand, transit node230amight receive this other PDU, and determine that telemetry triggering conditions have been met, and thus generate a telemetry data unit. Transit node230awould thus be the ingress edge telemetry node for this PDU. Similarly, different nodes may function as transit nodes and/or edge nodes depending on any of the foregoing factors. In an embodiment, a domain250may have more than one telemetry collector260. Some or all of the edge nodes240may be configured to send telemetry information to different telemetry collectors260. Each node240may be configured to send telemetry information to a specific telemetry collector260. Or, a node240may be configured to determine the telemetry collector260to send telemetry information to, based on factors such as the data unit type of the original data unit, the destination of the original data unit, the specific telemetry data observed, and so forth. In an embodiment, each edge node240may send the same telemetry information to all telemetry collectors260. In an embodiment, a node240may send different telemetry information derived from a same telemetry data unit to different telemetry collectors260. In an embodiment, a destination node, such as host210b, may receive certain telemetry data units and may therefore also include logic for handling telemetry data units. The destination node may, for example, include logic for separating telemetry data units from regular data units so that they do not interfere with processing of the data units at an application layer. 
A destination node may further include egress edge telemetry logic for extracting and/or reporting telemetry data from a telemetry data unit. The destination node may, for example, function as an egress edge telemetry node240within the telemetry domain250. As another example, the telemetry data unit may bypass egress edge telemetry nodes240, or egress edge telemetry nodes240may allow telemetry data units to pass through them to the destination. For instance, in the latter case, an egress edge telemetry node240may not send telemetry report data to the telemetry collector260under certain conditions, but instead opt to let the telemetry data unit pass through to the destination. In yet other embodiments, egress edge telemetry nodes240are simply omitted, and the destination node must be prepared to handle telemetry data units. 2.5. Example Device Telemetry Components FIG.3is an illustrative view of various components of an example telemetry subsystem300for a network device, according to an embodiment. Subsystem300may be a subsystem within a switch or other network device, as described elsewhere herein, or any other computing device. In an embodiment, the various components of subsystem300described below are hardware-based logic units within one or more application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other integrated circuit(s). In other embodiments, some or all of the components may be software-based logic implemented by one or more processors executing instructions stored in one or more computer-readable media. Subsystem300may be utilized in any device that generates, updates, or extracts data from telemetry data units. For example, some or all of edge nodes220/240and transit nodes230may include components from subsystem300. In other embodiments, however, a path telemetry domain250may be practiced without the specific details of subsystem300, while subsystem300may be deployed in devices other than those depicted inFIG.2. 
Subsystem300receives a data unit305, such as PDU205or any other data unit. In an embodiment, the data unit305is accompanied by control information306, either within the data unit305itself, or as sideband information. The control information306may include various metadata and/or instructions for handling the data unit305. Depending on the implementation, the data unit305may be received from a variety of source components within the implementing device. For example, the data unit305may have been recently dequeued from an ingress or egress queue. Or, the data unit305may be received from a packet processor. Subsystem300has a telemetry data unit detection component315configured to determine whether the data unit305is a normal data unit, or a telemetry data unit generated by a node upstream in the path of data unit305. For instance, the telemetry data unit detection component315may scan the header or first bytes of the payload of data unit305for a certain marker. The marker may be fixed for all data units, or specific to the data unit type. If data unit305is determined not to be a telemetry data unit, it is forwarded to ingress edge telemetry detection component320. Otherwise, the ingress edge telemetry detection component320should be bypassed, so as to avoid unnecessarily cloning the telemetry data unit, and the data unit305is sent to telemetry data insertion component330. Ingress edge telemetry detection logic320evaluates the data unit and/or device state information against one or more telemetry triggers328, such as described in other sections. If those telemetry triggers are met, a cloner322generates a clone of the data unit305, such as clone225. Telemetry tag insertion component324inserts the telemetry marker that the telemetry data unit detection component315was searching for into the clone. For instance, the telemetry marker may take the form of unused fields of a standard data unit header or select modifications to the standard data unit header. 
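The clone, tag, and detection steps described above can be sketched as follows. The marker value, field names, and truncation length are illustrative assumptions rather than details of the embodiment.

```python
# Minimal sketch of producing a telemetry data unit from an original
# data unit: clone it, insert a telemetry marker, and truncate the
# payload. The marker value and truncation length are assumptions.

TELEMETRY_MARKER = 0x1  # hypothetical flag value carried in the header


def make_telemetry_data_unit(data_unit, keep_payload_bytes=16):
    clone = {"header": dict(data_unit["header"]),
             "payload": data_unit["payload"]}
    clone["header"]["telemetry_marker"] = TELEMETRY_MARKER    # tag insertion
    clone["payload"] = clone["payload"][:keep_payload_bytes]  # truncation
    clone["telemetry"] = []   # room for per-node telemetry records
    return clone


def is_telemetry_data_unit(data_unit):
    """The check a detection component performs: is the marker present?"""
    return data_unit["header"].get("telemetry_marker") == TELEMETRY_MARKER


pdu = {"header": {"dst_addr": "10.0.0.2"}, "payload": "x" * 1000}
tdu = make_telemetry_data_unit(pdu)
assert is_telemetry_data_unit(tdu) and not is_telemetry_data_unit(pdu)
assert len(tdu["payload"]) == 16
```

Note that, as in the text, detection on the marker is what keeps a telemetry data unit from being cloned again at downstream nodes.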
Truncator326truncates the clone. The clone may be truncated to a certain length (e.g. a number of bytes or cells), or everything but the header may be removed. In an embodiment, the header is retained, as well as a certain number of bytes from the payload. The order of truncator326and tag insertion component324may be reversed in some embodiments. Moreover, rather than featuring a dedicated truncator326, cloner322may simply only clone the desired part of the data unit305. In an embodiment, ingress edge telemetry detection component320determines whether there is already a telemetry data unit that exists for a data unit305(e.g. generated by a previous node) before cloning the data unit305. For example, ingress processing logic may add control information306to data unit305when it detects a telemetry data unit that was cloned from the data unit305within a certain timespan of receiving the data unit305. A data unit305may include a pseudo-unique identifier (e.g. a fragment identifier and offset in its header), and the fact that two data units305with the same identifier are received in that time span may be interpreted as an indication that the data unit305has already been cloned. The ingress edge telemetry detection component320may therefore avoid generating another clone. The result of components322-326is that a telemetry data unit325is generated based on the data unit305. The telemetry data unit325is either sent to the telemetry data insertion component330directly, or by reintroducing it into the data unit processing pipeline, so that it will be received at telemetry data unit detection component315in the same manner as data unit305. The data unit305, meanwhile, may continue to the egress port(s)390to which it should be forwarded, either directly or via one or more intermediate downstream logic components, depending on the embodiment. Telemetry data insertion component330is configured to, if it receives a telemetry data unit (e.g. 
data unit305if it is a telemetry data unit, or a newly generated telemetry data unit325), insert and/or modify telemetry data within the telemetry data unit. Subsystem300may include a telemetry data generator335that generates telemetry data based upon the state of the implementing device and/or metrics related to the implementing device's processing of data unit305or telemetry data unit325. For instance, control information306may include identifiers of ports or queues that data unit305has traversed or will traverse. Control information306might further include statistics related to the processing of data unit305or telemetry data unit325in such queues. Or, such metrics may be obtained from other sources within the implementing device. Further examples of such telemetry data are described in other sections. In an embodiment, telemetry data insertion component330may be configured to only insert telemetry data when certain conditions are met. For example, telemetry data insertion component330may evaluate telemetry triggers similar to telemetry triggers328to determine whether to insert telemetry data. Telemetry data insertion component330sends the telemetry data unit to egress edge telemetry detection component340, which is configured to detect whether the telemetry data unit has reached an extraction point. For those telemetry data units that have reached an extraction point, the telemetry data unit is diverted to a telemetry extraction component342. All other telemetry data units may be forwarded on to their destination port(s)390, as with data units305. Depending on the embodiment, and on which node is implementing subsystem300, egress edge telemetry detection component340may be configured to send all, some, or none of the telemetry data units that it receives to telemetry extractor342. 
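The per-node insertion performed by a telemetry data insertion component such as component330may be sketched as follows, with each node appending its own state record to the telemetry data unit as it passes through. The metric names are illustrative assumptions.

```python
# Sketch of per-hop telemetry data insertion: each node appends its own
# state record to the telemetry data unit. Metric names are assumptions.

def insert_telemetry(telemetry_data_unit, node_id, queue_delay_ns, buffer_fill):
    """Append this node's state to the clone's telemetry section."""
    telemetry_data_unit.setdefault("telemetry", []).append({
        "node": node_id,
        "queue_delay_ns": queue_delay_ns,
        "buffer_fill": buffer_fill,
    })
    return telemetry_data_unit


tdu = {"header": {"telemetry_marker": 0x1}, "telemetry": []}
insert_telemetry(tdu, "edge-220", 120, 0.10)      # ingress edge node
insert_telemetry(tdu, "transit-230b", 450, 0.35)  # transit node
assert [r["node"] for r in tdu["telemetry"]] == ["edge-220", "transit-230b"]
```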
For instance, if implemented by a node that has been configured only as a transit telemetry node, telemetry detection component340may be essentially inactive, in that all telemetry data units pass through it and on to port(s)390. In an embodiment, egress edge telemetry detection component340may be configured to divert all telemetry data units, or all telemetry data units of a certain type, to the telemetry extractor342. In an embodiment, egress edge telemetry detection component340may be configured to divert telemetry data units to telemetry data extractor342only if certain telemetry conditions are met, similar to telemetry triggers328. In an embodiment, egress edge telemetry detection component340may divert telemetry data units to telemetry extractor342if the telemetry data units are greater than a certain size, or contain telemetry data from more than a certain number of nodes. For example, it may be desirable to avoid a telemetry data unit growing too large. Optionally, in such embodiments, if the implementing device is configured as a transit node, egress edge telemetry detection component340may trigger re-cloning of the telemetry data unit without the extracted telemetry data. The re-cloned data unit may be sent out port(s)390, thereby allowing an empty telemetry data unit to continue along the path for telemetry data collection at subsequent nodes. The telemetry extractor342extracts telemetry data from the telemetry data unit. In some embodiments, it may send the telemetry data directly to a telemetry collector. In other embodiments, telemetry report generator345may generate telemetry report data from the raw telemetry data that was extracted from the telemetry data unit. For instance, the telemetry report data may include aggregate metrics such as a total or average delay or queue size, a total number of nodes in the path of the telemetry data unit, categorizations of metrics (e.g. high delay, low delay, etc.), and so forth. 
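The report generation described above might be sketched as follows, computing aggregate metrics and a delay categorization over the extracted per-node records, and filtering down to outlier nodes. The thresholds and field names are assumptions for illustration.

```python
# Hedged sketch of telemetry report generation at the egress edge:
# aggregate the per-node records, categorize the path delay, and keep
# only outlier nodes. Thresholds and field names are assumptions.

def generate_report(telemetry_records, high_delay_ns=1000):
    delays = [r["queue_delay_ns"] for r in telemetry_records]
    total = sum(delays)
    return {
        "hop_count": len(telemetry_records),
        "total_delay_ns": total,
        "avg_delay_ns": total / len(telemetry_records),
        "category": "high delay" if total >= high_delay_ns else "low delay",
        # Preserve only outlier nodes, as an example of report filtering.
        "outliers": [r for r in telemetry_records
                     if r["queue_delay_ns"] >= high_delay_ns / 2],
    }


records = [{"node": "220", "queue_delay_ns": 120},
           {"node": "230b", "queue_delay_ns": 900}]
report = generate_report(records)
assert report["hop_count"] == 2
assert report["total_delay_ns"] == 1020
assert report["category"] == "high delay"
assert [r["node"] for r in report["outliers"]] == ["230b"]
```

A generator like this is also where the "only send when conditions are met" filtering described below would hook in, e.g. by suppressing reports whose category is not "high delay".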
In an embodiment, the telemetry report data may filter the telemetry data to include only data from nodes with unexpected or outlier values. For instance, the telemetry report data may preserve state data from any nodes where the telemetry data unit experienced a delay above a certain value, but omit state data from other nodes. In an embodiment, telemetry report generator345may analyze the extracted telemetry data, and only send telemetry report data if certain conditions are met. For example, the telemetry report generator345may send the telemetry report data only if a certain amount of delay was encountered. In essence, the telemetry report generator345may provide filtering for the telemetry collector, thereby reducing the bandwidth of telemetry traffic on the network. Depending on the embodiment, the telemetry collector may be reachable via one of the port(s)390(e.g. when it is a separate node on the network), via a separate link, or via an internal port (e.g. for consumption by a local processor). Components of subsystem300may be deployed in varying locations within a device. For instance, with respect to the network devices depicted at the end of this disclosure, subsystem300may be deployed partially or entirely within or after an egress packet processor. Or, detection components315,320, and/or340may be deployed in an ingress arbiter or packet processor, which then mark the data units for telemetry generation and collection (e.g. using control information306) by a downstream traffic manager or egress packet processor. In an embodiment, cloner322is implemented at a transmit buffer for transmitting data units that have already been processed by an egress packet processor. The cloner322may, for instance, cause multiple copies of the data unit305to be transmitted one after another, or close to one after another, with one of the copies including telemetry marking data and/or telemetry data inserted by the telemetry data insertion component330. 
Or, the cloner322may cause the data unit305to be copied from the transmit buffer to an internal port that recirculates back to a traffic manager, which then causes the clone (that is, the telemetry data unit325) to be enqueued in the same egress queue as the data unit305had been. Telemetry data insertion component330may then cause telemetry data to be inserted in the telemetry data unit325when the telemetry data unit325is dequeued from that egress queue, and the telemetry data unit325may eventually be sent out the same port390as the original data unit305. FIG.3illustrates but one arrangement of components for implementing telemetry collection in a network or other computing device. Other embodiments may include fewer or additional components in varying arrangements, and the division of work between components may likewise vary. For instance, in an embodiment, egress edge telemetry detection component340may be deployed before telemetry data insertion component330and/or ingress edge telemetry detection component320. In an embodiment, not all nodes in a network need include all of the components of subsystem300. For instance, a transit node may omit logic for edge detection components320or340. As another example, a node may have an ingress edge telemetry detection component320, but no egress edge telemetry detection component340, or vice versa. In yet another embodiment, components of subsystem300may be activated or deactivated depending on the function of a node. For instance, in a transit node, ingress edge telemetry detection component320may be disabled. In an embodiment, the destination of a data unit305may include egress edge telemetry detection logic340, telemetry data extractor342, and/or telemetry report generator345. In such a device, other components of subsystem300may be omitted, and of course the data unit305would not continue on to port390for further forwarding, since the destination will have been reached. 
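The report filtering described earlier, which preserves state data only from nodes with outlier values and suppresses uninteresting reports entirely, might look like the following sketch (the field name and delay threshold are illustrative assumptions):

```python
# Sketch: filtering extracted telemetry so the report keeps only outlier
# hops, and suppressing the report when no hop is noteworthy, thereby
# reducing telemetry bandwidth toward the collector.

def filter_report(entries, delay_threshold=500):
    """Return report data for outlier hops only, or None to send nothing."""
    outliers = [e for e in entries if e["queue_delay"] > delay_threshold]
    if not outliers:
        return None  # conditions not met; no report is sent
    return {"outlier_hops": outliers}
```

Returning `None` here models the case where the telemetry report generator elects not to send any report to the collector.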
In an embodiment, the telemetry data insertion component330may be configured to replace a set of raw telemetry data from a set of two or more nodes with a single entry of telemetry data that comprises aggregate statistics for all of the nodes in the set. Whenever a telemetry data unit has telemetry data from more than a certain number of nodes, the data from a most recent set of nodes may be analyzed, summarized, and inserted in place of the original data from those nodes. 2.6. Example Telemetry Data In an embodiment, the telemetry data may include an identifier of the edge node220. In an embodiment, the telemetry data may include various state data for the node. This may include individual statistics for individual ports and/or port groups within the node, such as port loading data, per-port buffer fill levels (e.g. an amount of buffer space occupied by data units assigned to the port, a number of data units assigned to the port, etc.), per-port queue lengths (e.g. number of data units currently enqueued in a queue, average number of data units enqueued for all queues assigned to the port, etc.), per-port queue delays (e.g. approximate measure of time or clock cycles that a data unit sits in the queue before being dequeued, on average), and so forth. This may further include aggregate statistics for the entire node, such as an overall buffer fill level, an overall average queue length, an overall average queue delay, an overall average delay between a time when a data unit arrives at a node and a time when the data unit leaves, and so forth. In an embodiment, the telemetry data may also or instead include data that is more specific to the telemetry data unit and/or the original data unit from which the telemetry data unit was cloned. 
For instance, rather than including metrics for each port or port group in the node, the telemetry data may include only metrics that are related to a port or port group associated with the data unit, such as an ingress port over which the data unit arrived and an egress port to which the data unit is to be forwarded. The telemetry data specific to the data unit may further include, for instance, an identifier of an ingress port over which the node received the data unit and/or an egress port to which the node resolved the data unit for forwarding. This data may further include a timestamp associated with an ingress event for the data unit (such as when it was received at an ingress arbiter or packet processor, when it was enqueued in an ingress or egress queue, etc.), and/or a timestamp associated with an egress event for the data unit (such as dequeuing the data unit from an egress queue, transferring to a transmit buffer, etc.). In an embodiment, rather than, or in addition to, including such timestamps, the telemetry data may include a total delay, which is a difference between two such timestamps. The telemetry data may further include data and/or statistics related to a queue in which the data unit was enqueued. For instance, the telemetry data may include an identifier of an egress queue in which the data was enqueued. The telemetry data may further include statistics specific to the identified queue, such as a length of the queue at a time when the data unit was enqueued and/or dequeued (or the most recently determined length as of such a time), an actual or approximate delay experienced by the data unit in the queue, an actual or approximate amount of buffer space utilized by data units within that queue, and so forth. In an embodiment, the telemetry data may also or instead include metrics based on the foregoing, such as running averages, deviations, or functions of such measures over time. 
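A per-node telemetry record of the kind described above might be modeled as follows (Python for illustration; the field names, types, and units are assumptions, not the disclosure's format):

```python
# Sketch: one node's telemetry entry, including port identifiers,
# ingress/egress timestamps, and queue statistics. Units (nanoseconds)
# are an illustrative assumption.
from dataclasses import dataclass

@dataclass
class TelemetryEntry:
    node_id: int
    ingress_port: int    # port over which the node received the data unit
    egress_port: int     # port to which the node resolved the data unit
    ingress_ts: int      # timestamp of an ingress event, e.g. in nanoseconds
    egress_ts: int       # timestamp of an egress event, e.g. dequeue time
    queue_id: int        # egress queue in which the data unit was enqueued
    queue_len: int       # queue length when the data unit was enqueued

    @property
    def delay(self) -> int:
        # Total delay as the difference between the two timestamps.
        return self.egress_ts - self.ingress_ts
```

A telemetry data unit would then carry a list of such entries, one appended per telemetry-enabled hop.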
In an embodiment, instead of including specific measures, the measures may be “quantized,” in that they are categorized according to a value range in which they fall. For instance, instead of specifying an exact number of seconds that a data unit was delayed, the telemetry data may simply specify that the delay was “high” or “low,” depending on which range the delay falls in. Likewise, instead of specifying an actual amount of buffer space consumed, the telemetry data may simply indicate whether the calculated amount of space is considered to fall in a “congested” or “uncongested” range. The foregoing statistics data may be determined in a variety of manners, depending on the embodiment. For example, different components of a node, such as an ingress arbiter or traffic manager, may include logic for calculating the statistics, which they then embed in control information or other metadata that accompanies the data unit through the data unit processing pipeline on to the telemetry data insertion component330. In an embodiment, a separate subsystem may exist for tracking and maintaining some or all of the foregoing data, and the telemetry data insertion component330may look up the data using that subsystem when needed. In an embodiment, any data unit specific statistics in the telemetry data may be relative to the telemetry data unit. In another embodiment, the telemetry data insertion component may be capable of locating data unit specific statistics for the original data unit using the telemetry data unit, and hence the data unit specific statistics may be specific to the original data unit. In yet other embodiments, both types of statistics may be included. In an embodiment, the telemetry report data sent to a telemetry collector may include some or all of the foregoing telemetry data for each node in the telemetry data unit's path. 
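The quantization of measures described above can be sketched as follows (the bucket boundaries and labels are illustrative assumptions):

```python
# Sketch: "quantizing" a raw measure into a range label instead of
# reporting the exact value.

def quantize(value, buckets):
    """buckets: list of (upper_bound, label) pairs, sorted ascending;
    the final bucket should use an infinite bound to catch all values."""
    for bound, label in buckets:
        if value <= bound:
            return label
    return buckets[-1][1]

# Example bucket definitions; thresholds are arbitrary for illustration.
DELAY_BUCKETS = [(500, "low"), (float("inf"), "high")]
FILL_BUCKETS = [(0.8, "uncongested"), (float("inf"), "congested")]
```

Quantized labels of this kind can be encoded in far fewer bits than the raw measures they summarize, which is one motivation for the approach.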
Or, the telemetry report data may include some or all of the foregoing telemetry data only for one or more nodes of interest in the path, such as for any nodes at which a delay measure (queue delay, total delay, etc.) or other measure falls in a “high” or other undesirable range. In an embodiment, telemetry report data may additionally or instead include any type of data derived from the foregoing, including averages, sums, and/or other functions across multiple nodes, as well as quantized versions of the foregoing. As non-limiting examples, the telemetry report data might include a sum queue delay experienced by the telemetry data unit across all nodes in the path, a sum total delay, an average queue size, a general characterization of the path as being congested or normal based on the observed metrics, and so forth. The telemetry data described in this section are non-limiting examples of telemetry data that may be inserted in a telemetry data unit. A variety of other types of statistics may be tracked and recorded in a telemetry data unit. 2.7. Miscellaneous In an embodiment, a node may elect to drop a telemetry data unit under certain conditions, while allowing the original data unit to carry on its path. For instance, a node may be configured to drop all telemetry traffic (that is, all telemetry data units), once it reaches a certain level of congestion, as indicated by one or more metrics such as queue size, queue length, or buffer fill level. As another example, a node may be configured to not enqueue telemetry data units in a queue when the queue is over a certain length, has a certain measure of delay, or is in any other undesirable state. In an embodiment, if the volume of telemetry traffic exceeds a threshold (e.g. 
above a certain number of telemetry data units enqueued or transmitted in a period of time, more than a certain amount of buffer space occupied by telemetry data units, etc.), a node may stop generating telemetry data units and/or inserting telemetry data in existing telemetry data units. Optionally, the node may further drop new telemetry data units it receives and/or certain existing telemetry data units that it enqueues or dequeues. In an embodiment, a network may have more than one path telemetry domain, which may be overlapping or non-overlapping. Each domain may have its own edge nodes, transit nodes, telemetry collector(s), telemetry triggers, telemetry marker, and so forth. In an embodiment, ingress edge telemetry functionality may be enabled or disabled in a node on a per-port basis. Likewise, egress edge telemetry functionality may be enabled or disabled in a node on a per-port basis. Similarly, the telemetry triggers for a node may vary depending on the port. In such embodiments, transit telemetry functionality may be enabled or disabled globally, or also on a per-port basis. In an embodiment, when a telemetry data unit carries more than a maximum threshold amount of data, a transit telemetry node may condense that data into summary data before passing the telemetry data unit on, so as to make room for additional telemetry data. Or, summary data may be generated for each of one or more “legs” of the path traversed by the telemetry data unit. As already explained, in an embodiment, telemetry information may be sent to multiple telemetry collectors. In an embodiment, one of these collectors may be the destination endpoint for the original data unit. The telemetry information may be sent to the endpoint, or the telemetry data unit itself may simply be forwarded on to the endpoint. In an embodiment, a collector may be the original source node. 
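The congestion-based suppression of telemetry traffic described above might be sketched as a simple decision function (all thresholds, parameter names, and units are assumptions):

```python
# Sketch: deciding whether a node should drop or stop generating telemetry
# data units, based on its own congestion state and on the volume of
# telemetry traffic observed in a recent window.

def should_drop_telemetry(queue_len, buffer_fill, telemetry_units_in_window,
                          max_queue_len=1000, max_fill=0.9, max_rate=100):
    if queue_len > max_queue_len or buffer_fill > max_fill:
        return True   # node too congested to carry telemetry traffic
    if telemetry_units_in_window > max_rate:
        return True   # telemetry volume itself exceeds the threshold
    return False
```

Note that dropping the telemetry data unit under these conditions does not affect the original data unit, which continues along its path normally.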
The telemetry information and/or telemetry data unit may be forwarded (“reflected”) along a reverse path back to the source node. In an embodiment, rather than waiting for a data unit to arrive at an ingress edge telemetry node, a source node (e.g. host210a) may include logic for cloning a data unit before sending that data unit. For instance, a source node may include logic similar to ingress edge telemetry detection component320for evaluating when telemetry triggers are met. When the triggers are met, logic similar to that of components322-326may generate a telemetry data unit and inject the telemetry data unit into the data unit processing pipeline. The source node may or may not insert telemetry data into the telemetry data unit before forwarding the telemetry data unit on. In an embodiment, using this technique, a source node may be capable of transmitting a telemetry data unit immediately before or after the original data unit, ensuring that the metrics collected for the telemetry data unit remain highly relevant to the original data unit. 3.0. Functional Overview FIG.4illustrates a flow400for generating and collecting telemetry information for a data unit as it traverses a network, according to an embodiment. Flow400may be performed within a network of switches and/or other devices described in other sections. In an embodiment, flow400may be performed by nodes such as nodes220,230, and240within network200, using components such as illustrated with respect to subsystem300. However, flow400may similarly be performed in other networks of devices using other arrangements of components. Moreover, telemetry domain250and subsystem300need not be limited specifically to performance of the example flow described below. 
In an embodiment, each of the processes described in connection with the functional blocks described below may be implemented using one or more integrated circuits, logic components, computer programs, other software elements, and/or digital logic in any of a general-purpose computer or a special-purpose computer, while performing data retrieval, transformation, and storage operations that involve interacting with and transforming the physical state of memory of the computer. Block410comprises receiving a data unit at a node that has been configured as an ingress edge telemetry node. The node may have been configured as an ingress edge telemetry node for all network traffic it receives, or only for certain types of traffic, such as data units assigned to a flow or having other attributes similar to the data unit. In an embodiment, the node compares the data unit, or data unit metadata, to telemetry triggers to determine whether the node is an ingress edge telemetry node for the data unit. Block415comprises the ingress edge telemetry node cloning the data unit. In an embodiment, the node performs block415for all data units that the node receives. In an embodiment, the node performs block415only for a subset of data units it receives. For instance, the node may perform block415only a certain percentage of the time, or at a certain probability level. Of course, if the node determines not to perform block415for a data unit, flow400does not continue for that data unit. Cloning a data unit may involve such steps as, without limitation, copying the data unit from a location in a buffer to a location in another buffer, retransmitting the data unit from a transmit buffer, and/or re-enqueuing a buffered data unit in a queue so that it is processed multiple times without there necessarily being multiple physical copies of the data unit. 
The cloned data unit, which is known as a telemetry data unit, may be a full clone of the original data unit, or may comprise only a header or other portion of the original data unit, depending on the embodiment. Block420comprises the ingress edge telemetry node inserting telemetry data into the telemetry data unit. The telemetry data may be any suitable set of metadata, state data, statistics, metrics, etc. Example telemetry data is described in other sections. In an embodiment, the telemetry data is inserted in or as the payload of the telemetry data unit, thereby replacing payload data from the original data unit. In an embodiment, the telemetry data may be preceded by a telemetry instructions header, including a telemetry marker identifying the data unit as carrying telemetry data, an identifier of the original data unit, instructions for downstream components or nodes as to how and what telemetry data should be inserted, and/or other suitable data. Block425comprises the ingress edge telemetry node forwarding the original data unit and its clone (the telemetry data unit) to a next hop in their path to their intended destination. For instance, the forwarding logic in the ingress edge telemetry node may compare a destination address or other information in the headers of the data unit and the telemetry data unit to entries in a forwarding table. Based thereon, the node may determine an egress port by which the data unit and telemetry data unit should be forwarded to reach the next hop in their path. The node may then transmit the original data unit and the telemetry data unit out this port. Since the original data unit and the telemetry data unit include the same header, both will typically be forwarded to the same next hop. 
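The cloning and telemetry insertion of blocks415and420can be sketched as follows (the marker value and payload layout are illustrative assumptions, not taken from the disclosure):

```python
# Sketch: generating a telemetry data unit from an original data unit by
# cloning its header and replacing the payload with a telemetry
# instructions header (marker, original-unit identifier) and a first
# telemetry entry. "TLM" and the dict layout are illustrative assumptions.

def make_telemetry_unit(data_unit, node_telemetry):
    """data_unit: dict with a 'header' dict and a 'payload'."""
    return {
        # Same header data, so the clone resolves to the same forwarding path.
        "header": dict(data_unit["header"]),
        "payload": {
            "marker": "TLM",                  # identifies a telemetry data unit
            "orig_id": data_unit["header"].get("id"),
            "entries": [node_telemetry],      # grows at each telemetry hop
        },
    }
```

Because the clone copies the header rather than referencing it, the original data unit is left untouched and both units can be forwarded independently to the same next hop.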
Note that, in some embodiments, the data unit and the telemetry data unit may pass separately through the forwarding logic of the node, and hence these steps may be repeated separately for the data unit and its clone, though typically in close temporal proximity. Moreover, certain steps of this process may, in some embodiments, be performed prior to the telemetry data being inserted in the telemetry data unit in block420, as the telemetry data unit might need to pass through an egress queue associated with the identified egress port before some of the telemetry data can be collected. Block430comprises receiving the data unit and the telemetry data unit at a node that has been configured as a transit telemetry node. In an embodiment, similar to the ingress edge telemetry node, the node may be configured as a telemetry transit node for some or all telemetry data units that it receives. In an embodiment, the data unit and telemetry data unit may pass through one or more non-telemetry nodes on their way to the transit node. In another embodiment, the data unit and telemetry data unit arrive directly from the ingress edge telemetry node (or another transit telemetry node, in the case that flow400has looped back to block430). Block435comprises the transit node updating the telemetry data unit, by inserting its own telemetry data into the telemetry data unit, and/or by modifying aggregate telemetry data to reflect statistics at the transit telemetry node. The transit telemetry node may perform block435in response to recognizing the telemetry data unit as a telemetry data unit by, for instance, a telemetry marker or other information in the telemetry data unit. The telemetry data may include a variety of telemetry data generated by the transit telemetry node, as described in other sections. 
Block440comprises forwarding the original data unit and the telemetry data unit to their next hop on their path to the destination address, using forwarding logic as described with respect to block425. From here, flow400may return to block430, in which case the original data unit and the telemetry data unit are received by another transit telemetry node. The original data unit and the telemetry data unit may be received and processed by any number of transit telemetry nodes over the course of flow400. Eventually, however, flow400proceeds to block450. Block450comprises receiving the original data unit and the telemetry data unit at a node that has been configured as an egress edge telemetry node. Again, the node may be configured as an egress edge telemetry node for some or all telemetry data units that it receives. For instance, the node may be an egress edge telemetry node for telemetry data units destined to one egress port, but not another. Or, the node may be an egress edge telemetry node for telemetry data units that meet certain telemetry triggers. Block455comprises the egress edge telemetry node forwarding the original data unit towards its intended destination, as it normally would. The destination may, in some cases, be a next hop. In other cases, there may be one or more additional hops before the original data unit will reach its intended destination. Block455may be performed with forwarding logic in similar manner to block425. However, instead of also forwarding the telemetry data unit to the destination specified in the telemetry data unit header, in block460, the egress edge telemetry node detects telemetry data in the telemetry data unit. The node may scan for actual telemetry data, or the existence of such data may be inferred from a telemetry marker or other identifying information in the telemetry data unit. Responsive to detecting the telemetry data, the egress edge telemetry node extracts the telemetry data in block470. 
In an embodiment, this may comprise, for instance, removing the payload from the telemetry data unit, and dropping the telemetry data unit for forwarding purposes. In yet other embodiments, the telemetry data unit need not necessarily be dropped. Block475comprises generating telemetry report data at the egress edge telemetry node, based on the extracted telemetry data. This may involve various calculations, processing, and analysis steps by a processor or other component of the egress edge telemetry node, such as described in other sections. In an embodiment, this may be a trivial step, in that the telemetry report data is the extracted telemetry data. In an embodiment, the telemetry report data may optionally include or be based on telemetry data generated by the egress edge telemetry node. In another embodiment, flow400may include a step of the egress edge node inserting its own telemetry data into the telemetry data unit before the telemetry data is extracted from the telemetry data unit in block470. Block480comprises sending the telemetry report data to a telemetry collector. The telemetry report data may be sent, for instance, out over the network as one or more packets to a centralized telemetry collector. Or, the node may have its own internal telemetry collector to which the telemetry report data may be sent. Flow400is but one example method flow for generating and collecting telemetry data. Other flows may include fewer, additional, or different steps in varying arrangements. For instance, in an embodiment, the original data unit and telemetry data unit may not necessarily pass through any transit telemetry node, and thus blocks430-440may be skipped. As another example, the telemetry data unit may be generated at the original source that sent the original data unit, and hence flow400may begin at block430. 
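Taken together, the flow of blocks410through480can be sketched as a minimal end-to-end simulation (the roles, names, and data structures are illustrative assumptions):

```python
# Sketch: an ingress edge node clones and seeds the telemetry data unit,
# each transit hop appends its own telemetry entry, and the egress edge
# node extracts the entries and builds report data for the collector.

def run_path(data_unit, hops):
    """hops: list of (role, telemetry_entry) pairs along the path,
    where role is 'ingress', 'transit', or 'egress'."""
    telemetry_unit = None
    report = None
    for role, entry in hops:
        if role == "ingress":
            telemetry_unit = {"header": dict(data_unit["header"]),
                              "entries": [entry]}        # blocks 415-420
        elif role == "transit" and telemetry_unit is not None:
            telemetry_unit["entries"].append(entry)       # block 435
        elif role == "egress" and telemetry_unit is not None:
            report = {"hop_count": len(telemetry_unit["entries"]),
                      "entries": telemetry_unit["entries"]}  # blocks 470-475
            telemetry_unit = None  # extracted; clone is not forwarded on
    return report
```

The original data unit is not modeled beyond its header here; in the flow it continues to its destination at every hop regardless of what happens to the telemetry data unit.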
In an embodiment, block420may be omitted, in that an ingress edge telemetry node may be merely responsible for creating a clone, and telemetry data is not inserted until the clone reaches a transit node. In yet another embodiment, insertion of telemetry data (e.g. block435) at any hop in the path of the data unit may be optional, depending on the configuration of the hop and/or telemetry triggers at the hop. In an embodiment, flow400may include a step of deciding whether to send the telemetry report data to the telemetry collector, based on the telemetry data and/or other factors, as described elsewhere herein. Flow400may be performed for multiple data units concurrently as they pass through a path telemetry domain. Some data units may be following different paths, including a reverse path. Consequently, different nodes may act in different capacities for different data units, depending on the paths and/or other attributes of those data units. For instance, the ingress edge telemetry node of blocks410-425may be (but is not necessarily) an egress edge telemetry node for another data unit that is being returned from the destination of the original data unit back to the sender of the original data unit. In an embodiment, the egress edge telemetry node is not at the edge of the telemetry domain, meaning that the original data unit may pass through additional telemetry transit nodes. The egress edge telemetry node may have decided to act as an egress edge telemetry node instead of a transit node, however, based on the size or other characteristics of the telemetry data already in the original data unit. In such an embodiment, the original data unit may again be cloned, returning flow400back to block415, and additional telemetry data may be collected for the original data unit. In an embodiment, the destination of the data unit may function as the egress edge telemetry node, in which case block455may be omitted. 4.0. 
Example Embodiments Examples of some embodiments are represented, without limitation, as follows: A system comprising a plurality of network devices, the plurality of devices comprising: a plurality of communication interfaces configured to receive data units; packet-switching logic configured to identify destinations for the data units and forward the data units out specific communication interfaces, of the communication interfaces, that are mapped to those destinations; wherein the network devices include: a first network device configured to: when one or more telemetry trigger conditions occur, generate a telemetry data unit by inserting telemetry data in an at least partial clone of a data unit received by the first network device; forward the data unit and the telemetry data unit along a same path to a destination based on same header data in both the data unit and the telemetry data unit; a second network device configured to: receive the data unit and the telemetry data unit; extract the telemetry data from the telemetry data unit; send telemetry report data derived from the telemetry data unit to a telemetry collector. In an embodiment, the network devices further include: one or more transit network devices, each device of the one or more transit devices configured to: receive the data unit and the telemetry data unit; update the telemetry data in the telemetry data unit based on additional telemetry data collected at the device; continue forwarding the data unit and the telemetry data unit along the path to the destination. In an embodiment, the plurality of network devices form a telemetry domain within a data center, the telemetry domain including a plurality of edge devices that are configured in the same manner as the first device and the second device, wherein the one or more transit network devices include a plurality of transit devices that interconnect the edge devices. 
In an embodiment, the system further comprises the telemetry collector, the telemetry collector being a separate device in the telemetry domain to which a plurality of the edge devices send telemetry information, including the telemetry report data, derived from telemetry data units. In an embodiment, the telemetry data includes one or more of: an identifier of the first device, an ingress port identifier, an egress port identifier, an ingress timestamp, an egress timestamp, a queue identifier, a queue length, a congestion measure, a buffer fill measure, and/or a delay measure. In an embodiment, the telemetry report data includes an aggregate delay measured for the telemetry data unit in the path from the first device to the second device. In an embodiment, the second network device is further configured to continue forwarding the data unit along the path to the destination. In an embodiment, the second network device is the destination. According to an embodiment, a method comprises: receiving a data unit at a network device via a first communication link; receiving a telemetry data unit at the network device via the first communication link, the telemetry data unit being an at least partial clone of the data unit, including same header data as the data unit; at the network device, extracting, from the telemetry data unit, telemetry data generated by one or more other network devices in a path traversed by both the data unit and the telemetry data unit; sending, by the network device, telemetry report data derived from the telemetry data to a telemetry collector. In an embodiment, the network device is a switch, the one or more other network devices in the path include one or more additional switches, and the data unit is an Internet Protocol packet. In an embodiment, the one or more other network devices include at least two network devices. 
In an embodiment, the same header data includes at least a same source address field, a same destination address field, a same source port field, a same destination port field, and a same type field in the data unit headers of the data unit and the telemetry data unit. In an embodiment, the same header data is the entire header of the data unit and of the telemetry data unit. In an embodiment, the network device is an egress edge telemetry node, wherein the one or more other network devices include an ingress edge telemetry node, the method further comprising: receiving the data unit at the ingress edge telemetry node; generating the telemetry data unit at the ingress edge telemetry node by at least partially cloning the data unit; at the ingress edge telemetry node, inserting first telemetry data of the telemetry data in the telemetry data unit; and based on same destination address information of the header data in the data unit and the telemetry data unit, forwarding the data unit and the telemetry data unit from the ingress edge telemetry node to a next network device in the path. In an embodiment, generating the telemetry data unit at the ingress edge telemetry node is responsive to determining that one or more telemetry triggering conditions have been met at a time that the ingress edge telemetry node is processing the data unit. In an embodiment, the one or more other network devices include a transit telemetry node, and the method further comprises: receiving the data unit and the telemetry data unit at the transit telemetry node; at the transit telemetry node, inserting second telemetry data of the telemetry data in the telemetry data unit; based on same destination address information of the header data in the data unit and the telemetry data unit, forwarding the data unit and the telemetry data unit from the transit telemetry node to a further hop in the path. 
In an embodiment, the method further comprises the ingress edge telemetry node inserting a telemetry marker in the telemetry data unit, and the egress edge telemetry node determining that the telemetry data unit comprises the telemetry data based on detecting the telemetry marker in the telemetry data unit. In an embodiment, the method further comprises: at least partially cloning the data unit from a buffer of a data unit processing pipeline of the ingress edge telemetry node, before forwarding the data unit to the next network device; injecting the telemetry data unit into the data unit processing pipeline of the ingress edge telemetry node for forwarding to the next network device. In an embodiment, injecting the telemetry data unit into the data unit processing pipeline occurs prior to the data unit being forwarded to the next network device. In an embodiment, the method further comprises: enqueuing the data unit in an egress queue of the data unit processing pipeline; wherein injecting the telemetry data unit into the data unit processing pipeline comprises forwarding the telemetry data unit to a traffic manager of the data unit processing pipeline via an internal port for enqueuing within the same egress queue as the data unit. In an embodiment, for at least a first device of the one or more network devices in the path, the telemetry data includes one or more of: an identifier of the first device, an ingress port identifier, an egress port identifier, an ingress timestamp, an egress timestamp, a queue identifier, a queue length, a congestion measure, a buffer fill measure, and/or a delay measure. In an embodiment, the method further comprises forwarding the data unit out a second communication link of the network device, the second communication link mapped to a destination indicated by the header data in the data unit; the network device discarding the telemetry data unit without forwarding the telemetry data unit out the second communication link. 
In an embodiment, the network device is a destination device indicated by the header data in the data unit. In an embodiment, the telemetry report data includes an aggregate delay measured for the telemetry data unit along the path. In an embodiment, one or more non-transitory computer-readable media store instructions that, when executed by one or more computing devices, cause performance of the foregoing. In an embodiment, a network apparatus comprises: a plurality of communication interfaces; ingress edge telemetry logic configured to generate a first telemetry data unit by: at least partially cloning a first data unit received via a first interface of the communication interfaces, and inserting telemetry marking data therein; packet-switching logic configured to forward the first data unit and the first telemetry data unit out a second interface of the communication interfaces, based on first header data found in both the data unit and the telemetry data unit; egress edge telemetry logic configured to: detect a second telemetry data unit that was received via a third interface of the communication interfaces, the second telemetry data unit also including the telemetry marking data, the second telemetry data unit being an at least partial clone of a second data unit received via the third interface; extract, from the second telemetry data unit, telemetry data generated by one or more other network devices in a path traversed by both the second data unit and the second telemetry data unit; send telemetry report data derived from the telemetry data to a telemetry collector. 
In an embodiment, the packet-switching logic is further configured to forward the second data unit out a fourth interface of the communication interfaces, based on second header data found in both the second data unit and the second telemetry data unit; wherein the egress edge telemetry logic is configured to discard the second telemetry data unit before the packet-switching logic can forward the second telemetry data unit. In an embodiment, the second interface and the third interface are a same interface, wherein the first interface and the fourth interface are another same interface. In an embodiment, the ingress edge telemetry logic is configured to generate the first telemetry data unit responsive to determining that one or more telemetry triggering conditions have been met at a time that the packet-switching logic is processing the first data unit. In an embodiment, the apparatus further comprises telemetry insertion logic configured to: detect when a data unit being processed by the packet-switching logic is a telemetry data unit, based on telemetry marking data being found therein; update a telemetry data portion of the telemetry data unit based on telemetry data generated by the network apparatus. In an embodiment, the packet-switching logic includes a data unit processing pipeline, the data unit processing pipeline including one or more buffers that store data units prior to the data units being forwarded out interfaces coupled to the data unit processing pipeline, and the ingress edge telemetry logic is configured to: perform said cloning from the one or more buffers before the first data unit is forwarded out the second interface; inject the telemetry data unit into the data unit processing pipeline for forwarding. 
In an embodiment, for at least a first device of the one or more network devices in the path, the telemetry data includes one or more of: an identifier of the first device, an ingress port identifier, an egress port identifier, an ingress timestamp, an egress timestamp, a queue identifier, a queue length, a congestion measure, a buffer fill measure, and/or a delay measure. Yet other example embodiments are described in other sections herein. 5.0. Example Network Device This section describes, in greater detail, example network devices in which the components and techniques described herein may be implemented. However, the techniques described herein are also useful in switches and contexts other than those described in this section. FIG.5is an illustrative view of various aspects of an example network device500in which techniques described herein may be practiced, according to an embodiment. Network device500is a computing device comprising any combination of hardware and software configured to implement the various logical components described herein, including components510-590. For example, the apparatus may be a single networking computing device, such as a router or switch, in which some or all of the components510-590described herein are implemented using application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). As another example, an implementing apparatus may include one or more memories storing instructions for implementing various components described herein, one or more hardware processors configured to execute the instructions stored in the one or more memories, and various data repositories in the one or more memories for storing data structures utilized and manipulated by various components510-590. Device500is generally configured to receive and forward data units505to other devices in a network, such as network100, by means of a series of operations performed at various components within the device500. 
Note that certain nodes110in a system such as network100may each be or include a separate network device500. In an embodiment, a node110may include more than one device500. In an embodiment, device500may itself be one of a number of components within a node110. For instance, network device500may be an integrated circuit, or “chip,” dedicated to performing switching and/or routing functions within a network switch or router. The network switch or router may further comprise one or more central processor units, storage units, memories, physical interfaces, LED displays, or other components external to the chip, some or all of which may communicate with the chip. A non-limiting example flow of a data unit505through various subcomponents of the switching logic of device500is as follows. After being received via a port510, a data unit505may be buffered by an arbiter (not depicted) until the data unit505can be processed by an ingress packet processor550, and then delivered to an interconnect (not depicted). From the interconnect, the data unit505may be forwarded to a traffic manager540. The traffic manager540may store the data unit505in a buffer544and assign the data unit505to a queue545. The traffic manager540manages the flow of the data unit505through the queue545until the data unit505is released to an egress packet processor550. Depending on the processing, the traffic manager540may then assign the data unit505to another queue so that it may be processed by yet another egress processor550, or the egress packet processor550may send the data unit505to a transmit buffer (not depicted) from which the data unit505is finally forwarded out another port590. Of course, depending on the embodiment, the switching logic may omit some of these subcomponents and/or include other subcomponents in varying arrangements. Example components of a device500are now described in further detail. 5.1. 
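The example flow above can be condensed into a toy trace, with each stage reduced to a placeholder (purely illustrative; the real stages perform the processing described in the following sections):

```python
from collections import deque

def run_switch(data_unit):
    """Walk a data unit through the stages of the example flow; each stage
    only records itself in a trace, standing in for the real processing."""
    trace = []
    ingress_buffer = deque([data_unit])      # arbiter buffers the arriving unit
    unit = ingress_buffer.popleft()
    trace.append("ingress-arbiter")
    trace.append("ingress-packet-processor") # destination resolution, etc.
    trace.append("interconnect")
    egress_queue = deque([unit])             # traffic manager: buffer + queue
    unit = egress_queue.popleft()
    trace.append("traffic-manager")
    trace.append("egress-packet-processor")
    trace.append("transmit-buffer")          # finally forwarded out a port
    return trace
```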
Ports Network device500includes ports510/590, by which the packet-handling logic is coupled to various network interfaces that receive and transmit data units505. Ports510, including ports510a-n, are inbound (“ingress”) ports by which data units505are received over a network, such as network100. Ports590, including ports590a-n, are outbound (“egress”) ports by which at least some of the data units505are sent out to other destinations within the network, after having been processed by the network device500. Data units505may be of any suitable PDU type, such as packets, cells, frames, etc. In an embodiment, data units505are packets. However, the individual atomic data units upon which the depicted components operate may actually be subunits of the data units505. For example, at the port level, data units505may be received, acted upon, and even transmitted as cells or other TDUs. These TDUs may be logically linked together as the data units505(e.g. PDUs) to which they respectively belong for purposes of determining how to handle the cells or frames. However, the subunits may not actually be assembled into data units505within device500, particularly if the subunits are being forwarded to another destination through device500. Ports510/590are depicted as separate ports for illustrative purposes, but may actually correspond to the same physical interface hardware on the network device500. That is, a network device500may both receive data units505and send data units505over a single physical interface, and the single physical interface may thus be logically coupled to both an ingress port510and egress port590. Nonetheless, for various functional purposes, certain logic of the network device500may view a single physical interface as a separate ingress port510and egress port590. 
Moreover, for various functional purposes, certain logic of the network device500may subdivide a single physical interface into multiple ingress ports510or egress ports590, or aggregate multiple physical interfaces into a single ingress port510or egress port590. Hence, in various embodiments, ports510and590should be understood as distinct logical constructs that can be mapped to physical interfaces rather than simply as distinct physical constructs. In some embodiments, the physical interfaces to which the ports510/590of a device500are coupled may include various interface components from a MAC layer and/or Physical layer, including one or more transceivers, Serializer/Deserializer (“SerDes”) blocks, interface controllers, and/or other suitable components, by which device500receives, decodes, descrambles, deserializes, encodes, scrambles, serializes, sends, and/or otherwise processes data units on their way in or out of ports510/590. 5.2. Packet Processors A device500comprises one or more packet processing components550. These packet processors550may be any suitable combination of fixed circuitry and/or software-based logic, such as specific logic components implemented by one or more Field Programmable Gate Arrays (FPGAs) or Application-Specific Integrated Circuits (ASICs), or a general-purpose processor executing software instructions. Different packet processors550may be configured to perform different packet processing tasks. These tasks may include, for example, identifying paths along which to forward data units505, forwarding data units505to egress ports590, implementing flow control and/or other policies, manipulating packets, performing statistical or debugging operations, and so forth. A device500may comprise any number of packet processors550configured to perform any number of processing tasks. 
In an embodiment, the packet processors550within a device500may be arranged such that the output of one packet processor550may, eventually, be inputted into another packet processor550, in such a manner as to pass data units505from certain packet processor(s)550to other packet processor(s)550in a sequence of stages, until finally disposing of the data units505(e.g. by sending the data units505out an egress port590, “dropping” the data units505, etc.). The exact set and/or sequence of packet processors550that process a given data unit505may vary, in some embodiments, depending on attributes of the data unit505and/or the state of the device500. Any number of packet processors550may be chained together in this manner. Based on decisions made while processing a data unit505, a packet processor550may, in some embodiments, and/or for certain processing tasks, manipulate a data unit505directly. For instance, the packet processor550may add, delete, or modify information in a data unit header or payload. In other embodiments, and/or for other processing tasks, a packet processor550may generate control information that accompanies the data unit505, or is merged with the data unit505, as the data unit505continues through the device500. This control information may then be utilized by other components of the device500to implement decisions made by the packet processor550. In an embodiment, a packet processor550need not necessarily process an entire data unit505, but may rather only receive and process a subunit of a data unit505, such as a TDU comprising header information for the data unit. For instance, if the data unit505is a packet comprising multiple cells, the first cell, or a first subset of cells, might be forwarded to a packet processor550, while the remaining cells of the packet (and potentially the first cell(s) as well) are forwarded in parallel to a merger component where they await results of the processing. 
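The pattern described above, in which one processor generates control information that a downstream processor later implements, might look like this in miniature (the "set-out-port" action and the port number are invented for illustration):

```python
def ingress_processor(unit):
    """Resolve a destination and attach the decision as control information
    accompanying the data unit, rather than editing the unit directly."""
    control = {"action": "set-out-port", "port": 7}  # illustrative decision
    return unit, control

def egress_processor(unit, control):
    """Implement the decision recorded by the upstream processor by
    manipulating the data unit itself."""
    if control["action"] == "set-out-port":
        unit["header"]["out_port"] = control["port"]
    return unit

unit = {"header": {"dst": "h9"}}
unit = egress_processor(*ingress_processor(unit))
```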
Ingress and Egress Processors In an embodiment, a packet processor may be generally classified as an ingress packet processor550or an egress packet processor550. Generally, an ingress processor550resolves destinations for a traffic manager540to determine which ports590and/or queues545a data unit505should depart from. There may be any number of ingress processors550, including just a single ingress processor550. In an embodiment, an ingress processor550performs certain intake tasks on data units505as they arrive. These intake tasks may include, for instance, and without limitation, parsing data units505, performing routing related lookup operations, categorically blocking data units505with certain attributes and/or when the device500is in a certain state, duplicating certain types of data units505, making initial categorizations of data units505, and so forth. Once the appropriate intake task(s) have been performed, the data units505are forwarded to an appropriate traffic manager540, to which the ingress processor550may be coupled directly or via various other components, such as an interconnect component. The egress packet processor(s)550of a device500, by contrast, may be configured to perform non-intake tasks necessary to implement the switching logic of the device500. These tasks may include, for example, tasks such as identifying paths along which to forward the data units505, implementing flow control and/or other policies, manipulating data units, performing statistical or debugging operations, and so forth. In an embodiment, there may be different egress packet processor(s)550assigned to different flows or other categories of traffic, such that not all data units505will be processed by the same egress packet processor550. In an embodiment, each egress processor550is coupled to a different group of egress ports590to which it may send data units505processed by the egress processor550. 
In an embodiment, access to a group of ports590may be regulated via an egress arbiter coupled to the egress packet processor550. In some embodiments, an egress processor550may also or instead be coupled to other potential destinations, such as an internal central processing unit, a storage subsystem, or a traffic manager540. 5.3. Buffers Since not all data units505received by the device500can be processed by the packet processor(s)550at the same time, various components of device500may temporarily store data units505in one or more buffers544while the data units505are waiting to be processed. For example, a certain packet processor550may only be capable of processing a certain number of data units505, or portions of data units505, in a given clock cycle, meaning that other data units505, or portions of data units505, destined for the packet processor550must either be ignored (e.g., dropped) or stored. At any given time, a large number of data units505may be stored in the buffers544of the device500, depending on network traffic conditions. A device500may include a variety of buffers544, each utilized for varying purposes and/or components. Generally, a data unit505awaiting processing by a component is held in a buffer544associated with that component until the data unit505is “released” to the component for processing. For example, a traffic manager540will typically have a relatively large buffer544, referred to as an egress buffer, in which it buffers data units505prior to releasing those data units505to an egress packet processor550. A buffer544may be implemented using a single physical memory unit (e.g. SRAM, DRAM, etc.), a designated portion of a memory unit, or a collection of memory units, depending on an embodiment. The buffer544is divided into addressable units, or entries, that store SDUs, one or more of which form a TDU, as explained elsewhere. Each TDU stored in a buffer544belongs to a PDU. 
However, the data for the SDUs and/or TDUs that belong to a PDU may not necessarily be stored adjacent to each other. If one wishes to reconstruct a PDU based on the buffered SDUs, one might be unable to do so using the buffer memory alone. Therefore, in an embodiment, a buffer544may further store or be associated with linking data that indicates which SDUs belong to a given TDU and/or which TDUs belong to a given PDU, also referred to as intra-packet link data. For each PDU, buffer544may further store or be associated with various PDU metadata. The PDU metadata may include any suitable information about a PDU, such as a PDU identifier, location(s) of linking data for the PDU (e.g. the address(es) of intra-packet entr(ies) at which the linked list(s) for the PDU start), a count of TDUs in the PDU, source information, destination information, control information, timestamps, statistics, an assigned queue, flow control information, and so forth. 5.4. Queues In an embodiment, to manage the order in which data units505are processed from buffers544, various components of a device500may implement queueing logic. For example, the flow of data units505through the egress buffers544of traffic manager540may be managed using egress queues while the flow of data units505through the buffers of an ingress arbiter might be managed using ingress queues. A queue545is a set of nodes arranged in some order by metadata describing the queue545. The queue545includes a head node, or head, which is typically the next node to be processed, and a tail node, or tail, which is typically the node most recently added to the queue. A node will typically progress from the tail to the head over time as other nodes are processed and removed from the queue. In the case of queue545, the nodes are data units505, or the buffer location(s) at which the data unit505begins. A data unit505that has been added to a queue545is said to be “linked” to that queue545. 
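The role of intra-packet link data can be illustrated with a toy buffer whose entries are deliberately non-adjacent (the addresses, PDU identifier, and metadata fields below are illustrative assumptions):

```python
# Buffer entries are addressable units; the TDUs of one PDU need not be
# stored adjacently, so per-PDU linking data records their order.
buffer_memory = {0: b"TDU2", 3: b"TDU0", 5: b"TDU1"}  # scattered entries
intra_packet_links = {"pdu-7": [3, 5, 0]}             # addresses, in TDU order

# Illustrative PDU metadata of the kind described above.
pdu_metadata = {"pdu-7": {"tdu_count": 3, "assigned_queue": 2}}

def reconstruct_pdu(pdu_id):
    """Reassemble a PDU by following its intra-packet link data, which the
    buffer memory alone could not support."""
    return b"".join(buffer_memory[a] for a in intra_packet_links[pdu_id])
```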
A data unit505may belong to one or more queues545. In many embodiments, the sequence in which the queue545arranges its constituent data units505generally corresponds to the order in which the data units505or data unit portions in the queue545will be released and processed. Such queues545are known as first-in-first-out (“FIFO”) queues, though in other embodiments other types of queues may be utilized. In some embodiments, the number of data units505or data unit portions assigned to a given queue545at a given time may be limited, either globally or on a per-queue basis, and this limit may change over time. In an embodiment, queues545are implemented using a linking memory referred to as an “inter-packet” link memory, which is separate from the associated buffer memory544. Each entry in the link memory is said to be a node in the queue. Each link entry comprises a data pointer, which, when the link entry is occupied, points to a memory location in the buffer memory544at which a corresponding data unit (or at least the start of the data unit) is found (e.g. a buffer entry, a first entry for the data unit in an intra-packet link memory, etc.). Each entry in the link memory may further comprise a link pointer to another link entry, which corresponds to the next node in the queue. Of course, in other embodiments, other types of linking memories and/or other structures may instead be utilized to represent the queue. 5.5. Traffic Management According to an embodiment, a device500further includes one or more traffic managers540configured to control the flow of data units505to one or more packet processor(s)550. A traffic manager540may receive data units505directly from a port510, from an ingress processor550, and/or other suitable components of device500. In an embodiment, the traffic manager540is configured to receive up to one TDU from each possible source (e.g. each port510) each clock cycle of the traffic manager540. 
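A queue built over such a link memory, with a data pointer and a link pointer per node and head/tail bookkeeping, can be sketched as follows (a simplified software model of what would be a hardware structure):

```python
class LinkMemoryQueue:
    """FIFO modeled on an inter-packet link memory: each node holds a data
    pointer into buffer memory and a link pointer to the next node."""

    def __init__(self):
        self.link = {}       # node id -> [data_ptr, next_node_or_None]
        self.head = None     # next node to be processed
        self.tail = None     # node most recently added
        self.next_id = 0

    def enqueue(self, data_ptr):
        node = self.next_id
        self.next_id += 1
        self.link[node] = [data_ptr, None]
        if self.tail is not None:
            self.link[self.tail][1] = node  # old tail links to the new node
        else:
            self.head = node                # queue was empty
        self.tail = node

    def dequeue(self):
        if self.head is None:
            return None
        data_ptr, nxt = self.link.pop(self.head)
        self.head = nxt
        if self.head is None:
            self.tail = None
        return data_ptr
```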
Traffic manager540may include or be coupled to buffers544for buffering data units505prior to sending those data units505to their respective processor(s)550. A buffer manager within the traffic manager540may temporarily store data units505in buffers544as they await processing by processor(s)550. A data unit505or data unit portion in a buffer544may eventually be “released” to one or more processor(s)550for processing, by reading the data unit505from the buffer544and sending the data unit505to the processor(s)550. In an embodiment, traffic manager540may release up to a certain number of data units505from buffers544to processors550each clock cycle. Beyond managing the use of buffers544to store data units505(or copies thereof), a traffic manager540may include queue management logic configured to assign data units505to queues545and manage the flow of data units505through queues545. The traffic manager540may, for instance, “enqueue” a PDU that has been fully buffered by identifying a specific queue545to assign the PDU to, and then linking a PDU identifier or other PDU metadata to the assigned queue. The traffic manager540may further determine when to release—also referred to as dequeuing—data units505from queues545by sending instructions to the buffer manager544to read/release the data units505and then providing the data read from the buffer544to specific packet processor(s)550. In an embodiment, different queues545may exist for different sources or destinations. For example, each port510and/or port590may have its own set of queues545. The queue545to which an incoming data unit505is assigned and linked may, for instance, be selected based on forwarding information indicating which port590the data unit505should depart from. In an embodiment, a different egress processor550may be associated with each different set of one or more queues545. 
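The per-destination queue assignment described above might be modeled as follows (a sketch assuming one queue per egress port, selected from forwarding information resolved earlier in the pipeline; real devices support many queues per port):

```python
from collections import deque

class TrafficManager:
    """Toy traffic manager: one egress queue per port; enqueue selects the
    queue from the data unit's resolved egress port, dequeue releases the
    head of a port's queue for egress processing."""

    def __init__(self, num_ports):
        self.queues = {port: deque() for port in range(num_ports)}

    def enqueue(self, pdu):
        self.queues[pdu["egress_port"]].append(pdu)

    def dequeue(self, port):
        q = self.queues[port]
        return q.popleft() if q else None
```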
In an embodiment, the current processing context of the data unit505may be used to select which queue545a data unit505should be assigned to. In an embodiment, there may also or instead be different queues545for different flows or sets of flows. That is, each identifiable traffic flow or group of traffic flows is assigned its own set of queues545to which its data units505are respectively assigned. In an embodiment, different queues545may correspond to different classes of traffic or quality-of-service (QoS) levels. Different queues545may also or instead exist for any other suitable distinguishing properties of the data units505, such as source address, destination address, packet type, and so forth. Though only one packet processor550and traffic manager540are depicted, a device500may comprise any number of packet processors550and traffic managers540. For instance, different sets of ports510and/or ports590may have their own traffic manager540and packet processors550. As another example, in an embodiment, the traffic manager540may be duplicated for some or all of the stages of processing a data unit. For example, system500may include a traffic manager540and egress packet processor550for an egress stage performed upon the data unit505exiting the system500, and/or a traffic manager540and packet processor550for any number of intermediate stages. The data unit505may thus pass through any number of traffic managers540and/or packet processors550prior to exiting the system500. In other embodiments, only a single traffic manager540is needed. If intermediate processing is needed, flow of a data unit505may loop back to the traffic manager540for buffering and/or queuing after each stage of intermediate processing. In an embodiment, a traffic manager540is coupled to the output of an ingress packet processor(s)550, such that data units505(or portions thereof) are assigned to buffers544only upon being initially processed by an ingress packet processor550. 
Once in an egress buffer544, a data unit505(or portion thereof) may be “released” to one or more egress packet processor(s)550for processing. In the course of processing a data unit505, a device500may replicate a data unit505one or more times for purposes such as, without limitation, multicasting, mirroring, debugging, and so forth. For example, a single data unit505may be replicated to multiple egress queues545. For instance, a data unit505may be linked to separate queues545for each of ports1,3, and5. As another example, a data unit505may be replicated a number of times after it reaches the head of a queue545(e.g. for different egress processors550). Hence, though certain techniques described herein may refer to the original data unit505that was received by the device500, it will be understood that those techniques will equally apply to copies of the data unit505that have been generated for various purposes. A copy of a data unit505may be partial or complete. Moreover, there may be an actual physical copy of the data unit505in buffers544, or a single copy of the data unit505may be linked from a single buffer location to multiple queues545at the same time. 5.6. Forwarding Logic The logic by which a device500determines how to handle a data unit505—such as where and whether to send a data unit505, whether to perform additional processing on a data unit505, etc.—is referred to as the forwarding logic of the device500. This forwarding logic is collectively implemented by a variety of the components of the device500, such as described above. For example, an ingress packet processor550may be responsible for resolving the destination of a data unit505and determining the set of actions/edits to perform on the data unit505, and an egress packet processor550may perform the edits. Also, there may be embodiments where the ingress packet processor550performs edits as well. The forwarding logic may be hard-coded and/or configurable, depending on the embodiment. 
For example, the forwarding logic of a device500, or portions thereof, may, in some instances, be at least partially hard-coded into one or more ingress processors550and/or egress processors550. As another example, the forwarding logic, or elements thereof, may also be configurable, in that the logic changes over time in response to analyses of state information collected from, or instructions received from, the various components of the device500and/or other nodes in the network in which the device500is located. In an embodiment, a device500will typically store in its memories one or more forwarding tables (or equivalent structures) that map certain data unit attributes or characteristics to actions to be taken with respect to data units505having those attributes or characteristics, such as sending a data unit505to a selected path, or processing the data unit505using a specified internal component. For instance, such attributes or characteristics may include a Quality-of-Service level specified by the data unit505or associated with another characteristic of the data unit505, a flow control group, an ingress port510through which the data unit505was received, a tag or label in a packet's header, a source address, a destination address, a packet type, or any other suitable distinguishing property. The forwarding logic may, for example, read such a table, determine one or more ports590to send a data unit505to based on the table, and add or associate the data unit505with information that indicates to downstream logic to send the data unit505along a path that includes a specific traffic manager540and/or egress processor550that is coupled to the one or more ports590. According to an embodiment, the forwarding tables describe groups of one or more addresses, such as subnets of IPv4 or IPv6 addresses. Each address is an address of a network device on a network, though a network device may have more than one address. 
Each group is associated with a potentially different set of one or more actions to execute with respect to data units that resolve to (e.g. are directed to) an address within the group. Any suitable set of one or more actions may be associated with a group of addresses, including without limitation, forwarding a message to a specified “next hop,” duplicating the message, changing the destination of the message, dropping the message, performing debugging or statistical operations, applying a quality of service policy or flow control policy, and so forth. For illustrative purposes, these tables are described as “forwarding tables,” though it will be recognized that the extent of the action(s) described by the tables may be much greater than simply where to forward the message. For example, in an embodiment, a table may be a basic forwarding table that simply specifies a next hop for each group. In other embodiments, a table may describe one or more complex policies for each group. Moreover, there may be different types of tables for different purposes. For instance, one table may be a basic forwarding table that is compared to the destination address of each packet, while another table may specify policies to apply to packets upon ingress based on their destination (or source) group, and so forth. In an embodiment, forwarding logic may read port state data for ports510/590. Port state data may include, for instance, flow control state information describing various traffic flows and associated traffic flow control rules or policies, link status information indicating links that are up or down, port utilization information indicating how ports are being utilized (e.g. utilization percentages, utilization states, etc.). Forwarding logic may be configured to implement the associated rules or policies associated with the flow(s) to which a given packet belongs. 
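A basic forwarding table of the kind described, mapping groups of addresses (subnets) to next-hop actions and resolving overlaps by longest-prefix match, can be sketched as follows (the subnets and next-hop names are invented for illustration):

```python
import ipaddress

# Hypothetical forwarding table: each subnet (address group) maps to an
# action; here the action is simply a next-hop name.
table = {
    ipaddress.ip_network("10.0.0.0/8"): "next-hop-A",
    ipaddress.ip_network("10.1.0.0/16"): "next-hop-B",
    ipaddress.ip_network("0.0.0.0/0"): "default",
}

def lookup(dst):
    """Return the action for the most specific group containing dst."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in table if addr in net]
    return table[max(matches, key=lambda net: net.prefixlen)]
```

A destination such as 10.1.2.3 falls in both 10.0.0.0/8 and 10.1.0.0/16; the longer prefix wins, as in typical IP forwarding.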
As data units505are routed through different nodes in a network, the nodes may, on occasion, discard, fail to send, or fail to receive certain data units505, thus resulting in the data units505failing to reach their intended destination. The act of discarding of a data unit505, or failing to deliver a data unit505, is typically referred to as “dropping” the data unit. Instances of dropping a data unit505, referred to herein as “drops” or “packet loss,” may occur for a variety of reasons, such as resource limitations, errors, or deliberate policies. Different components of a device500may make the decision to drop a data unit505for various reasons. For instance, a traffic manager540may determine to drop a data unit505because, among other reasons, buffers544are overutilized, a queue545is over a certain size, and/or a data unit505has a certain characteristic. 5.7. Multi-Pipeline Architecture In an embodiment, a network device may include multiple pipelines of data unit processing components such as those described above.FIG.6illustrates an example of one such network device with multiple packet processing pipelines, according to an embodiment. Network device600includes a plurality of ingress ports610and egress ports690, similar to the ingress ports510and egress ports590of device500. The ingress ports610are divided into port groups610a-n, and each group of ports610feeds data units to a different pipeline602of processing components. There may be any number of groups of ports610, and hence any number of corresponding pipelines602. Each pipeline includes an ingress arbiter620. Each ingress arbiter620is coupled to a corresponding group of ingress ports610, and receives data units from those ports610. In some respects, each ingress arbiter620may be viewed as an ingress version of traffic manager540. 
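A traffic manager's drop decision based on buffer and queue state, as described above, might reduce to a check like the following (the thresholds are illustrative, not from the source):

```python
def admit(queue_length, buffer_fill, max_queue=64, max_fill=0.95):
    """Return True if a data unit may be enqueued, False if it is dropped.

    Mirrors the reasons named above: buffers overutilized, or a queue
    over a certain size. Thresholds are invented for illustration."""
    if buffer_fill >= max_fill:
        return False   # buffers overutilized -> drop
    if queue_length >= max_queue:
        return False   # queue over its size limit -> drop
    return True
```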
An ingress arbiter620is responsible for determining when data units are sent to downstream components, and in particular to an ingress packet processor630that is coupled to the ingress arbiter620within a corresponding pipeline602. An ingress arbiter620may or may not include its own buffer memory in which it buffers data units that await processing, depending on the embodiment. In an embodiment, the data units sent by the ingress arbiter620are actually subunits, such as cells, frames, segments, or other TDUs, of larger parent data units, such as packets or other PDUs. The ingress arbiter620may, in an embodiment, divide a larger data unit (or ensure that the larger data unit is divided) into these subunits prior to transmitting the data units to the corresponding ingress packet processor630. In an embodiment, a packet or other PDU may arrive at the ingress arbiter620as a set of TDUs. For convenience, examples are given herein where the TDU is a cell, and the PDU is a packet, but it will be appreciated that the cell may in fact be any type of subunit, and the packet may in fact be any larger data unit that comprises those subunits. Each pipeline602further includes an ingress packet processor630to which its ingress arbiter620eventually sends these data units. Each ingress packet processor630, meanwhile, functions in similar manner as an ingress packet processor550described above. In particular, an ingress packet processor630performs forwarding tasks such as resolving the data unit destination, adding or removing headers, and so forth. For instance, the ingress packet processor630may be responsible for generating control information that instructs downstream components of the pipelines602on how to handle the data unit, and this control information may either be inserted into the data unit, or be conveyed along with the data unit as sideband information. 
Each pipeline602further includes an egress traffic manager640, which functions in similar manner to the traffic manager540. A common interconnect638is coupled to each ingress packet processor630on one end and each egress traffic manager640on the other. The interconnect638conveys data units to traffic manager(s)640indicated by the ingress packet processors630(e.g. based on the control information), thereby allowing data units to “switch” pipelines602should their destination(s) include a port690that is not on the same pipeline602as the ingress port610through which the data unit was received. Ingress packet processors630may be coupled to the interconnect638directly, or indirectly via other components such as a merger unit (e.g. that merges a control portion of the data unit processed by the packet processor630with a data portion of the data unit that bypasses the packet processor630). A pipeline's egress traffic manager640then regulates the flow of data units to the pipeline's egress packet processor650, in similar manner as described with respect to traffic manager540. The egress packet processor650processes data units in similar manner as described with respect to egress packet processors550. The egress packet processor650then forwards the processed data units to a transmit buffer for the pipeline's egress port transmit unit660, which is responsible for transmission of data units out a set of one or more egress ports690belonging to the pipeline602. The set of egress ports690for a pipeline corresponds to the pipeline's ingress ports610. In yet other embodiments, an egress traffic manager640and/or other components may be shared between such pipelines. 5.8. Miscellaneous Devices500and600illustrate only several of many possible arrangements of components in a network device configured to provide the functionality described herein. 
Other arrangements may include fewer, additional, or different components, and the division of work between the components may vary depending on the arrangement. For simplification, the traffic managers, packet processors, and other components are on occasion described herein as acting upon or with respect to a data unit, when in fact only a portion of the data unit may be sent to or otherwise available to the component. For instance, a packet processor may be said to process a data unit, when in fact only the data unit control portion is available to the packet processor. In such contexts, it will be understood that the information about the data unit that is actually available to the component is a representation of the data unit to that component. Any actions described herein as having been taken by the component on or with respect to the data unit may occur with respect to the representation of the data unit, and not necessarily the complete data unit. It will be appreciated that the actual physical representation of a data unit may change as a result of the processes described herein. For instance, a data unit may be converted from a physical representation at a particular location in one memory to a signal-based representation, and back to a physical representation at a different location in a potentially different memory, as it is moved from one component to another within a network device or even between network devices. Such movement may technically involve deleting, converting, and/or copying some or all of the data unit any number of times. For simplification, however, the data unit is logically said to remain the same data unit as it moves through the device, even if the physical representation of the data unit changes. Similarly, the contents and/or structure of a data unit may change as it is processed, such as by adding or deleting header information, adjusting cell boundaries, or even modifying payload data. 
A modified data unit is nonetheless still said to be the same data unit, even after altering its contents and/or structure. 6.0. Implementation Mechanism—Hardware Overview According to one embodiment, the techniques described herein are implemented by one or more special-purpose computing devices. The special-purpose computing devices may be desktop computer systems, portable computer systems, handheld devices, networking devices, or any other device that incorporates hard-wired and/or program logic to implement the techniques. The special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, FPGAs, or other circuitry with custom programming to accomplish the techniques. Though certain foregoing techniques are described with respect to a hardware implementation, which provides a number of advantages in certain embodiments, it will also be recognized that, in other embodiments, the foregoing techniques may still provide certain advantages when performed partially or wholly in software. Accordingly, in such an embodiment, a suitable implementing apparatus comprises a general-purpose hardware processor and is configured to perform any of the foregoing methods by executing program instructions in firmware, memory, other storage, or a combination thereof. FIG.7is a block diagram that illustrates an example computer system700that may be utilized in implementing the above-described techniques, according to an embodiment. Computer system700may be, for example, a desktop computing device, laptop computing device, tablet, smartphone, server appliance, computing mainframe, multimedia device, handheld device, networking apparatus, or any other suitable device. 
In an embodiment,FIG.7constitutes a different view of the devices and systems described in previous sections. Computer system700may include one or more ASICs, FPGAs, or other specialized circuitry703for implementing program logic as described herein. For example, circuitry703may include fixed and/or configurable hardware logic blocks for implementing some or all of the described techniques, input/output (I/O) blocks, hardware registers or other embedded memory resources such as random-access memory (RAM) for storing various data, and so forth. The logic blocks may include, for example, arrangements of logic gates, flip-flops, multiplexers, and so forth, configured to generate output signals based on logic operations performed on input signals. Additionally, and/or instead, computer system700may include one or more hardware processors704configured to execute software-based instructions. Computer system700may also include one or more busses702or other communication mechanism for communicating information. Busses702may include various internal and/or external components, including, without limitation, internal processor or memory busses, a Serial ATA bus, a PCI Express bus, a Universal Serial Bus, a HyperTransport bus, an InfiniBand bus, and/or any other suitable wired or wireless communication channel. Computer system700also includes one or more memories706, such as a RAM, hardware registers, or other dynamic or volatile storage device for storing data units to be processed by the one or more ASICs, FPGAs, or other specialized circuitry703. Memory706may also or instead be used for storing information and instructions to be executed by processor704. Memory706may be directly connected or embedded within circuitry703or a processor704. Or, memory706may be coupled to and accessed via bus702. 
Memory706also may be used for storing temporary variables, data units describing rules or policies, or other intermediate information during execution of program logic or instructions. Computer system700further includes one or more read only memories (ROM)708or other static storage devices coupled to bus702for storing static information and instructions for processor704. One or more storage devices710, such as a solid-state drive (SSD), magnetic disk, optical disk, or other suitable non-volatile storage device, may optionally be provided and coupled to bus702for storing information and instructions. A computer system700may also include, in an embodiment, one or more communication interfaces718coupled to bus702. A communication interface718provides a data communication coupling, typically two-way, to a network link720that is connected to a local network722. For example, a communication interface718may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the one or more communication interfaces718may include a local area network (LAN) card to provide a data communication connection to a compatible LAN. As yet another example, the one or more communication interfaces718may include a wireless network interface controller, such as an 802.11-based controller, Bluetooth controller, Long Term Evolution (LTE) modem, and/or other types of wireless interfaces. In any such implementation, communication interface718sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information. Network link720typically provides data communication through one or more networks to other data devices. For example, network link720may provide a connection through local network722to a host computer724or to data equipment operated by a Service Provider726. 
Service Provider726, which may for example be an Internet Service Provider (ISP), in turn provides data communication services through a wide area network, such as the world-wide packet data communication network now commonly referred to as the “Internet”728. Local network722and Internet728both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link720and through communication interface718, which carry the digital data to and from computer system700, are example forms of transmission media. In an embodiment, computer system700may send and receive data units through the network(s), network link720, and communication interface718. In some embodiments, this data may be data units that the computer system700has been asked to process and, if necessary, redirect to other computer systems via a suitable network link720. In other embodiments, this data may be instructions for implementing various processes related to the described techniques. For instance, in the Internet example, a server730might transmit a requested code for an application program through Internet728, ISP726, local network722and communication interface718. The received code may be executed by processor704as it is received, and/or stored in storage device710, or other non-volatile storage for later execution. As another example, information received via a network link720may be interpreted and/or processed by a software component of the computer system700, such as a web browser, application, or server, which in turn issues instructions based thereon to a processor704, possibly via an operating system and/or other intermediate layers of software components. Computer system700may optionally be coupled via bus702to one or more displays712for presenting information to a computer user. 
For instance, computer system700may be connected via a High-Definition Multimedia Interface (HDMI) cable or other suitable cabling to a Liquid Crystal Display (LCD) monitor, and/or via a wireless connection such as peer-to-peer Wi-Fi Direct connection to a Light-Emitting Diode (LED) television. Other examples of suitable types of displays712may include, without limitation, plasma display devices, projectors, cathode ray tube (CRT) monitors, electronic paper, virtual reality headsets, braille terminal, and/or any other suitable device for outputting information to a computer user. In an embodiment, any suitable type of output device, such as, for instance, an audio speaker or printer, may be utilized instead of a display712. One or more input devices714are optionally coupled to bus702for communicating information and command selections to processor704. One example of an input device714is a keyboard, including alphanumeric and other keys. Another type of user input device714is cursor control716, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor704and for controlling cursor movement on display712. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. Yet other examples of suitable input devices714include a touch-screen panel affixed to a display712, cameras, microphones, accelerometers, motion detectors, and/or other sensors. In an embodiment, a network-based input device714may be utilized. In such an embodiment, user input and/or other information or commands may be relayed via routers and/or switches on a Local Area Network (LAN) or other suitable shared network, or via a peer-to-peer network, from the input device714to a network link720on the computer system700. 
As discussed, computer system700may implement techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs703, firmware and/or program logic, which in combination with the computer system causes or programs computer system700to be a special-purpose machine. According to one embodiment, however, the techniques herein are performed by computer system700in response to processor704executing one or more sequences of one or more instructions contained in main memory706. Such instructions may be read into main memory706from another storage medium, such as storage device710. Execution of the sequences of instructions contained in main memory706causes processor704to perform the process steps described herein. The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device710. Volatile media includes dynamic memory, such as main memory706. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge. Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus702. Transmission media may also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. 
Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor704for execution. For example, the instructions may initially be carried on a magnetic disk or solid-state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and use a modem to send the instructions over a network, such as a cable network or cellular network, as modulated signals. A modem local to computer system700may receive the data on the network and demodulate the signal to decode the transmitted instructions. Appropriate circuitry may then place the data on bus702. Bus702carries the data to main memory706, from which processor704retrieves and executes the instructions. The instructions received by main memory706may optionally be stored on storage device710either before or after execution by processor704. 7.0. Extensions and Alternatives As used herein, the terms “first,” “second,” “certain,” and “particular” are used as naming conventions to distinguish queries, plans, representations, steps, objects, devices, or other items from each other, so that these items may be referenced after they have been introduced. Unless otherwise specified herein, the use of these terms does not imply an ordering, timing, or any other characteristic of the referenced items. In the drawings, the various components are depicted as being communicatively coupled to various other components by arrows. These arrows illustrate only certain examples of information flows between the components. Neither the direction of the arrows nor the lack of arrow lines between certain components should be interpreted as indicating the existence or absence of communication between the certain components themselves. Indeed, each component may feature a suitable communication interface by which the component may become communicatively coupled to other components as needed to accomplish any of the functions described herein. 
In the foregoing specification, embodiments of the inventive subject matter have been described with reference to numerous specific details that may vary from implementation to implementation. Thus, the sole and exclusive indicator of what is the inventive subject matter, and is intended to be the inventive subject matter, is the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. In this regard, although specific claim dependencies are set out in the claims of this application, it is to be noted that the features of the dependent claims of this application may be combined as appropriate with the features of other dependent claims and with the features of the independent claims of this application, and not merely according to the specific dependencies recited in the set of claims. Moreover, although separate embodiments are discussed herein, any combination of embodiments and/or partial embodiments discussed herein may be combined to form further embodiments. Any definitions expressly set forth herein for terms contained in such claims shall govern the meaning of such terms as used in the claims. Hence, no limitation, element, property, feature, advantage or attribute that is not expressly recited in a claim should limit the scope of such claim in any way. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
The figures are not exhaustive and do not limit the present disclosure to the precise form disclosed. DETAILED DESCRIPTION Wide area networks (WANs) may exist across multiple domains, whether they be geographic domains, administrative domains, or organizational domains. An administrator of the WAN may not have administrative access to all of the domains that the WAN spans. One example of such a topology is a software defined WAN (SD-WAN), wherein a branch network (e.g. a campus of an enterprise) is connected to a core network (e.g. a headquarters of an enterprise) via one or more secure tunnels through the Internet. While a network administrator for such an SD-WAN may have administrative control over network infrastructure devices (e.g. controllers, switches, routers, gateways, etc.) in the branch network and in the core network, the administrator does not have administrative control over the intervening network infrastructure devices throughout the Internet. This lack of full administrative control over the intervening Internet links of the WAN makes it impossible to directly measure the performance of network paths. When there is more than one path to transceive data between two devices in a network, performance data for each of those paths can be used to select a preferred path for forwarding such data. This may reduce latency, increase throughput across all network paths, and reduce the likelihood of dropped packets due to overloaded network infrastructure devices. Referring again to the SD-WAN example, it may be impossible to directly measure the performance of the one or more secure tunnels connecting devices (e.g. user devices, servers, etc.) in the branch network with devices in the core network. However, determining which secure tunnel to use may improve network efficiency by reducing the likelihood of saturating one of the tunnels (i.e. sending more data than the tunnel can handle without degraded performance). 
When direct measurements are not possible, bandwidth estimation may be performed from two devices that can be controlled and used for measurements. This is usually performed by probing the network path with specially crafted probe packets sent from a device on one end of the path (sender) to a device on the other end of the path (receiver). The receiver end device measures the receive time of the packets and the changes to the packet one-way delay/time pattern to estimate the path characteristics, such as the path capacity, the available bandwidth and/or the bulk transfer capacity. Path capacity is the maximum amount of traffic volume per unit time that can be sent if the network path is idle, i.e. without any competing traffic. Available bandwidth (ABW) is the remaining/residual path capacity, i.e. the capacity not currently used by other traffic. Bulk Transfer Capacity (BTC) is the bandwidth a TCP connection would get if placed on this network path. Latency is the one-way delay (OWD) from the sender to the receiver; and round-trip time (RTT) is the two-way delay between the sender and the receiver. The primary approach for end-to-end network path estimation is active probing. A device at one end of the Internet path sends special probe packets to a device at the other end of the Internet path. Those packets are used only to estimate bandwidth, and do not carry actual data beyond the data payload required by network path estimation itself. With active probing, the sender sends a series of specially crafted probe packet patterns. The packet pattern is defined by the estimation technique and designed to trigger specific behaviors from the network elements on the network path. For example, in many cases the packet pattern is a probe train. The packets and interval between packet transmissions are chosen to probe various bandwidths across the packet pattern. 
The receiver measures the received time of the packets, computes a one-way delay of each packet (e.g., the time taken by a packet to travel from the sender device to the receiver device), and examines the changes to the packet pattern. The estimation technique uses a simplified network model to convert those measurements into estimates of various network path characteristics. There are two main classes of bandwidth estimation techniques, those using a probe gap model and those using the probe rate model. For the probe gap model, the assumption is that two closely sent packets will see the gap between them increase in proportion to the load on the most loaded queue along the path, due to queuing delay on that queue. For the probe rate model, the assumption is that when packets are sent at a rate lower than the bottleneck bandwidth, the traffic pattern will be mostly unchanged, whereas when packets are sent at a rate greater than the bandwidth available on the path bottleneck, those packets will suffer extra queuing delay due to congestion. One technique for available bandwidth estimation is named PathCos++. The PathCos++ technique is described in the document “A Novel Hybrid Probing Technique for End-to-End Available Bandwidth Estimation” by Lin et al., 35th Annual IEEE Conference on Local Computer Networks, 2010. According to PathCos++, a train of probes is sent at decreasing rates over the path to be measured. A receiver measures the receive probe rate between two points of equal congestion.FIG.3is an example of a graph of relative OWDs for a probe train of 18 probes sent over a network path at decreasing rates, that is, at increasing IDTs. As can be seen inFIG.3, the graph depicts a “Bump.” Points of equal congestion are points on opposite sides of the bump having similar OWD measurements. PathCos++ and some other bandwidth estimation techniques use a Bump Detection Algorithm for selecting those two points of equal congestion. 
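The probe gap model's core arithmetic described above can be sketched as follows, in the spirit of gap-model tools generally. The sketch assumes the bottleneck capacity is known, an assumption made here purely for illustration:

```python
# Sketch of the probe gap model's arithmetic. Assumes the bottleneck
# capacity is known; names and formula framing are illustrative.

def gap_model_abw(capacity, gap_in, gap_out):
    """Estimate available bandwidth from a packet pair's input gap
    (IDT) and output gap (IAT), both in seconds, with capacity in
    bytes/second. The gap expansion (gap_out - gap_in) is attributed
    to cross traffic queued at the bottleneck during gap_in."""
    cross_traffic_rate = capacity * (gap_out - gap_in) / gap_in
    return max(0.0, capacity - cross_traffic_rate)
```

If the gap is unchanged, the path is reported idle at full capacity; if the gap grows by 50%, half the capacity is attributed to cross traffic.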
However, the Bump Detection Algorithm is quite sensitive to noise in packet receive timing. In many cases this noise prevents the algorithm from finding the right pair of points, which may lead to large errors in the bandwidth estimates. FIG.1is a block diagram of an example system100for estimating available bandwidth using a probe train. The system100in this example includes a sender102connected by a network path104in a network106to a receiver108. The example system100also includes a network controller110. Each of the sender102and the receiver108may be implemented as a network device, for example such as a switch, a router, a gateway, a server, a client, or the like. In operation, sender102generates a probe train comprising a plurality of packets, at112, and sends the packets over the network path, at114. The receiver108receives the packets, at116, and estimates the available bandwidth based on the packets, at118. The receiver108provides the available bandwidth estimate to the network controller110. The network controller110may utilize the available bandwidth estimate to perform network traffic engineering and the like, at120. FIG.2shows timing characteristics of a probe train according to one embodiment. Three packets are shown: P.1, P.2, and P.3. These three packets are shown both in the upper part ofFIG.2, when departing the sender, and in the lower part of the diagram, when arriving at the receiver.FIG.2illustrates Inter Departure Time (IDT), Inter Arrival Time (IAT), and One Way Delay (OWD). The IDT of a packet is the amount of time between sending the packet and sending the previous packet. The IDT can be computed from the desired probing rate as IDT=Packet-Size/Probed-Rate. The sender is rarely totally accurate in sending packets, so the actual IDT of a packet may differ from the desired/theoretical IDT. 
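The relation IDT=Packet-Size/Probed-Rate above implies that a train probing decreasing rates is simply a train of increasing IDTs. A minimal sketch, in which the linear rate steps and all parameters are illustrative choices:

```python
# Sketch of a probe train schedule at decreasing rates (increasing
# IDTs), using IDT = packet_size / probed_rate as described above.
# The linear rate stepping is an illustrative choice.

def probe_schedule(packet_size, max_rate, min_rate, num_probes):
    """Return the per-probe IDTs (seconds) for rates stepped
    linearly from max_rate down to min_rate (bytes/second)."""
    step = (max_rate - min_rate) / (num_probes - 1)
    rates = [max_rate - i * step for i in range(num_probes)]
    return [packet_size / r for r in rates]
```

For example, with 1500-byte packets and rates stepped from 125 MB/s down to 12.5 MB/s over 18 probes, the IDTs rise monotonically from 12 μs to 120 μs.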
The disclosed technology in various embodiments works using both the desired IDT and the actual IDT, but generally provides better results with the actual IDT. The Inter Arrival Time (IAT) of a packet is the amount of time between receiving the packet and receiving the previous packet. The OWD of a packet is governed mostly by the propagation delay of the network path, the transmission time of the slowest link of the path, and the accumulated queuing delay in all the network elements in the path. For each packet i, the OWD can be computed as:

OWD(i) = pd + st(size) + sum(qd(e, i))

with:
pd → total propagation delay;
st(size) → slowest transmission time for this packet size; and
qd(e, i) → queuing delay at element e for packet i.

These network path estimation techniques may assume a fairly simple queuing model where qd(e,i) is a function of the congestion at element e when packet i arrives. To measure packet departure time and arrival time the sender and receiver use different clocks, which may not be perfectly synchronized. Therefore, it may be difficult to accurately measure the OWD of packets. Fortunately, in most cases it is not the absolute OWD of a packet that matters, but rather the difference between the OWDs of different packets. Over the short time of a probe train, clock drift is negligible, and therefore the difference between clocks can be within an acceptable range of accuracy. The disclosed technology may use this relative OWD for packet measurement. For each packet, the relative OWD of a packet may be the difference between the OWD of that packet and a per probe train fixed constant. For example, it may be the difference between the OWD of that packet and the OWD of the first packet of the probe train. This relative OWD estimates the extra queuing delay of a packet relative to another packet, and may be a positive or a negative value. If there are no packet losses and no packet reordering, packets are sent and received in the same order. 
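The relative OWD computation described above can be sketched as follows. Because the same per-train constant (here the first packet's measured OWD, clock offset included) is subtracted from every packet, the sender/receiver clock offset cancels out:

```python
# Sketch of relative OWD computation at the receiver. Sender and
# receiver clocks need not be synchronized: the constant clock
# offset is present in every measured OWD and cancels when the
# first packet's OWD is subtracted.

def relative_owds(send_times, recv_times):
    """Return each packet's OWD minus the first packet's OWD.
    Resulting values may be positive or negative."""
    owds = [r - s for s, r in zip(send_times, recv_times)]
    return [d - owds[0] for d in owds]
```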
In this case, the IAT, IDT and OWD of each packet are directly related. If OWDp is the One Way Delay of the previous packet, and OWDc the One Way Delay of the current packet, the IAT can be given by:

IAT = OWDc − OWDp + IDT

FIG.3is an example of a graph of relative OWDs for a probe train of 18 probes sent over a network path at decreasing rates, that is, at increasing IDTs. In some embodiments, each probe includes only one probe packet. In other embodiments, as discussed below, each probe includes multiple probe packets. InFIG.3it can be seen that the graph forms a “bump” and Bump Detection Algorithms (BDA) can be used to locate the bump. The top of the bump represents the maximum relative OWD and indicates the bottleneck bandwidth. To compute the bottleneck bandwidth a pair of packets is required. The goal of the BDA is to select a representative pair of packets with similar OWD on both sides of a big bump in OWD. Because the pair of packets is used to estimate the available bandwidth, the quality of the available bandwidth estimate is only as good as the selection of those packets. The probe train is sent at a decreasing rate. Therefore the probe train first creates congestion of the network path (being sent at a rate higher than the bottleneck), and then decongests that network path (being sent at a rate lower than the bottleneck). This means that across the probe train, OWD of packets increases initially due to congestion, and then decreases due to decongestion. The peak of the bump in OWD represents the time of maximum congestion. Packets with similar OWD should have experienced similar congestion (similar amount of queuing). The BDA preferably selects packets that are part of the same congestion period. If they are not part of the same congestion period, there may be some idle time between those two packets, and in this case not much can be said apart from the fact that the rate of the probe packets is lower than the available bandwidth. 
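The relation IAT = OWDc − OWDp + IDT given above can be checked numerically; the timestamps below are illustrative:

```python
# Numeric check of IAT = OWDc - OWDp + IDT for in-order delivery
# without loss. All timestamps are illustrative (seconds).

send_prev, send_cur = 10.000, 10.002   # IDT = 2 ms
recv_prev, recv_cur = 10.050, 10.055   # IAT = 5 ms

idt = send_cur - send_prev
iat = recv_cur - recv_prev
owd_prev = recv_prev - send_prev       # 50 ms
owd_cur = recv_cur - send_cur          # 53 ms

# The extra 3 ms of queuing delay on the current packet widens the
# 2 ms departure gap into a 5 ms arrival gap.
assert abs(iat - (owd_cur - owd_prev + idt)) < 1e-9
```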
If two packets are part of the same congestion period and have similar congestion, the overall rate of probes between these two packets has a fairly neutral contribution to congestion, which means the overall rate of probes is close to the available bandwidth. To avoid issues with the granularity of OWD, and to have a better averaging over time, the BDA may be implemented to locate a pair of packets as far apart as possible. Accordingly, it may look for packets with the maximum time between them in the probe train. However, the network path must remain congested between those two packets to satisfy the model requirement, which means that the two packets must be part of the OWD bump. In practice, the BDA tries to find two packets around the base of the OWD bump. An increase in OWD in the probe train in theory indicates increased congestion; in reality, however, OWD measurements are very noisy. This noise is due to the imperfection and non-ideal behavior of the various network elements and other packet scheduling constraints. Such noise in the OWD directly impacts the BDA algorithm. Noisy OWD values may lead the BDA to select two packets with similar OWD that in reality did not experience the same level of congestion, or packets that are not part of the same congested period. This leads to improper available bandwidth estimates. The BDA of PathCos++ may be challenged by typical OWDs measured on a common network path. For example, on an idle Ethernet link, which is the best-case scenario, the BDA has a fairly high error rate. Also, PathCos++ often underestimates the available bandwidth, and in some instances the pair of packets selected are not part of the same congested period. A large source of measurement noise is packet batching. Packet batching happens when, in a queue, packets are processed as a group instead of individually. This packet batching may drastically impact the OWD of the packets. 
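A Bump Detection Algorithm of the kind described above can be sketched as follows. This is a deliberately simplified illustration, not the actual PathCos++ algorithm: it picks the widest pair of packets with similar OWD on opposite sides of the peak, then computes the receive rate between them.

```python
# Simplified sketch of a Bump Detection Algorithm (illustrative;
# not the actual PathCos++ algorithm). Selects the widest pair of
# packets with similar OWD straddling the OWD peak, then estimates
# available bandwidth as the receive rate between them.

def select_bump_pair(owds, tol):
    """Return (i, j), the widest pair around the OWD peak whose
    OWDs differ by at most tol, or None if no such pair exists."""
    peak = owds.index(max(owds))
    best = None
    for i in range(peak + 1):                         # rising side
        for j in range(len(owds) - 1, peak - 1, -1):  # falling side
            if abs(owds[i] - owds[j]) <= tol:
                if best is None or (j - i) > (best[1] - best[0]):
                    best = (i, j)
    return best

def abw_between(pair, recv_times, packet_size):
    """Receive rate (bytes/second) over the packets between the pair."""
    i, j = pair
    return (j - i) * packet_size / (recv_times[j] - recv_times[i])
```

As the text notes, a real BDA must also cope with OWD noise; with a tolerance that is too small or too large, this naive selection picks pairs that did not experience equal congestion.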
The earlier packets of the group have to wait for the group to be processed and see their OWD artificially inflated, whereas the last packet of the group waits less time and so has a lower OWD. The authors of PathChirp, a very popular bandwidth estimation technique, identified a common source of packet batching: interrupt coalescence in the receiver Network Interface Card (NIC). Their solution employs multiple packets per probe. A probe train is composed of a number of probes, each probe testing a specific rate on the network path. When batching mitigation is enabled, the sender sends multiple packets for each probe, instead of a single packet per probe. All packets of a probe are sent at the same rate, and therefore have the same IDT. The receiver filters the packets, and for each probe selects only a single packet, the packet that is estimated to have suffered the least from packet batching. The BDA is run only on the selected packets, not on all received packets. FIG. 4 is a block diagram of an example computing component or device 400 for estimating available bandwidth on a network path in accordance with one embodiment. Computing component 400 may be, for example, a server computer, a controller, or any other similar computing component capable of processing data. In the example implementation of FIG. 4, the computing component 400 includes a hardware processor 402 and a machine-readable storage medium 404. In some embodiments, computing component 400 may be an embodiment of the sender 102, the receiver 108, or the network controller 110 of FIG. 1, or some combination thereof. Hardware processor 402 may be one or more central processing units (CPUs), semiconductor-based microprocessors, and/or other hardware devices suitable for retrieval and execution of instructions stored in machine-readable storage medium 404. 
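The receiver-side batching mitigation described above can be sketched as follows. The selection rule (keep the lowest-OWD packet of each probe, on the assumption that it waited least behind a batch) is an illustrative heuristic, and the data layout is assumed:

```python
from collections import defaultdict

def filter_batched_packets(packets):
    """Batching-mitigation sketch: `packets` is a list of (probe_id, owd)
    tuples, several packets per probe. Keep one OWD per probe, here the
    lowest, assumed least inflated by batching (illustrative rule)."""
    by_probe = defaultdict(list)
    for probe_id, owd in packets:
        by_probe[probe_id].append(owd)
    # One surviving OWD per probe; the BDA then runs only on these.
    return {probe_id: min(owds) for probe_id, owds in by_probe.items()}

pkts = [(0, 5.0), (0, 3.2), (1, 7.1), (1, 6.4), (1, 6.9)]
print(filter_batched_packets(pkts))   # {0: 3.2, 1: 6.4}
```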
Hardware processor 402 may fetch, decode, and execute instructions, such as instructions 406-414, to control processes or operations for estimating available bandwidth on the network path. As an alternative or in addition to retrieving and executing instructions, hardware processor 402 may include one or more electronic circuits that include electronic components for performing the functionality of one or more instructions, such as a field programmable gate array (FPGA), application specific integrated circuit (ASIC), or other electronic circuits. A machine-readable storage medium, such as machine-readable storage medium 404, may be any electronic, magnetic, optical, or other physical storage device that contains or stores executable instructions. Thus, machine-readable storage medium 404 may be, for example, Random Access Memory (RAM), non-volatile RAM (NVRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a storage device, an optical disc, and the like. In some embodiments, machine-readable storage medium 404 may be a non-transitory storage medium, where the term “non-transitory” does not encompass transitory propagating signals. As described in detail below, machine-readable storage medium 404 may be encoded with executable instructions, for example, instructions 406-414. Hardware processor 402 may execute instruction 406 to receive one-way delay measurements for each probe packet in a probe train sent over the telecommunications path. The OWD measurements may be calculated by any method. For example, the OWD of each packet in a probe train may be measured relative to the OWD of the first packet in the probe train. This approach may reduce the effect of synchronization errors between the sending and receiving clocks, and may also reduce the bit lengths of the OWD measurements. The disclosed technology may include a novel technique to mitigate OWD noise in the Bump Detection Algorithm for available bandwidth estimation. 
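The relative-OWD measurement described above can be sketched directly; a constant offset between the sender and receiver clocks cancels out of the computation (names and timestamp units are illustrative):

```python
def relative_owds(send_ts, recv_ts):
    """OWD of each packet measured relative to the first packet of the
    train; a constant sender/receiver clock offset cancels."""
    base = recv_ts[0] - send_ts[0]
    return [(r - s) - base for s, r in zip(send_ts, recv_ts)]

# Receiver clock runs 1000 units ahead of the sender clock; the offset
# disappears from the relative measurements.
print(relative_owds([0, 10, 20], [1005, 1018, 1030]))   # [0, 3, 5]
```

Because only differences of OWDs enter the result, the relative values are also smaller numbers than the absolute delays, which is what allows the shorter bit lengths mentioned above.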
The improved BDA prefers a pair of packets that has an OWD larger than the OWD noise after the bump. OWD noise is typically not the same across the probe train. OWD noise tends to be greater when the network path is not congested, and less when the network path is congested. When the network path is congested, scheduling is more predictable and there is less jitter. Moreover, with larger OWD, this noise decreases even more in proportion to the OWD. Previous techniques also tend to pick packets too close to the base of the bump, i.e. pick the widest pair with similar OWD, and run the risk that those packets might not be in the same congested period, as shown in FIG. 5. In other words, there is an idle gap on the bottleneck between these two packets. The disclosed technology prefers pairs of packets higher than the base of the bump to avoid OWD noise. We use a statistical measure of the uncongested OWD after the bump as an OWD threshold, and we prefer pairs of packets having an OWD greater than that threshold. Hardware processor 402 may execute instruction 408 to group the probe packets into a plurality of pairs based on the one-way delay measurements. The disclosed technology selects a number of candidate pairs of packets after packet batching filtering and other validity filtering, i.e. packets that are eligible to perform bandwidth estimation. Each packet pair has a first packet and a second packet, mostly selected based on their OWD, with the first packet transmitted before the second packet. For example, two of the probe packets may be grouped into a pair when the two of the probe packets have similar one-way delay measurements, and a probe packet having a maximum one-way delay measurement for the probe train was transmitted between the two of the probe packets. 
Hardware processor 402 may execute instruction 410 to compute, for each pair, a respective noise threshold based on the one-way delay measurements of all the probe packets transmitted after the later-transmitted probe packet of the pair. For each pair of packets, the system computes a statistic using the OWD of all packets of the probe train after the end packet of the pair (the second packet, in decongestion). These packets follow the congestion, and so are representative of the OWD noise. In some embodiments, the system computes both the average (mean) and the mean absolute deviation (MAD) of the OWD of those trailing packets, and combines both to define an OWD threshold. Our current OWD threshold is the average plus 1.5 times the deviation, which is approximately the 85th percentile of the OWD noise. Other OWD thresholds may be used. In some embodiments, computing a respective noise threshold for a pair includes computing a mean absolute deviation of the one-way delay measurements of all the probe packets transmitted after the later-transmitted probe packet of the pair. In some embodiments, computing a respective noise threshold for a pair includes computing a standard deviation of the one-way delay measurements of all the probe packets transmitted after the later-transmitted probe packet of the pair. In some embodiments, computing a respective noise threshold for a pair includes computing a cumulative distribution function of the one-way delay measurements of all the probe packets transmitted after the later-transmitted probe packet of the pair. FIG. 5 shows the noise thresholds 502 for the probe train of FIG. 3. The OWD averages 504 are also shown, for reference. At this point, the original BDA may filter out candidate packet pairs that don’t meet the bump filter (the bump filter looks at the bump’s highest OWD, and the bump height and width), and if there are any remaining pairs, picks the widest one. 
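The mean-plus-MAD threshold described above can be sketched in a few lines (function name is illustrative; k = 1.5 per the text):

```python
def noise_threshold(owds, end_index, k=1.5):
    """Per-pair OWD noise threshold: mean plus k times the mean absolute
    deviation (MAD) of the OWDs of the packets after the pair's end
    packet, roughly the 85th percentile of the noise for k = 1.5."""
    tail = owds[end_index + 1:]        # trailing, uncongested packets
    mean = sum(tail) / len(tail)
    mad = sum(abs(x - mean) for x in tail) / len(tail)
    return mean + k * mad

# End packet at index 3; trailing OWDs are [2.0, 4.0]: mean 3, MAD 1 -> 4.5.
print(noise_threshold([0.0, 5.0, 9.0, 5.0, 2.0, 4.0], end_index=3))   # 4.5
```

Because the tail shrinks as the candidate end packet moves later in the train, each pair gets its own threshold, computed only from packets that follow it.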
Hardware processor 402 may execute instruction 412 to select one of the pairs according to the noise thresholds 502 and the one-way delay measurements for the probe packets of the pairs. In some embodiments, the pair having the lowest one-way delay measurement that exceeds the respective noise threshold 502 is selected. If there are pairs of packets that meet the bump filter and for which the OWD of the end packet is higher than the OWD threshold, the widest packet pair is selected. Otherwise, if there are pairs of packets that meet the bump filter, but none have the OWD of the end packet higher than the OWD threshold, the packet pair with the highest OWD is selected. In the example of FIG. 5, pair 506, 508 is selected. For comparison, previous techniques would choose pair 510, 512. Note that probe 512 is not part of the bump, so probes 510 and 512 are not part of the same congestion period, and would therefore produce an inaccurate bandwidth estimate. Hardware processor 402 may execute instruction 414 to estimate the available bandwidth on the telecommunications path based on transmission times of the probe packets in the selected pair 506, 508. Many such bandwidth estimation techniques exist, such as, for example, the original PathCos++ method. One example is to measure the receive rate of probe packets between the two packets of the selected pair. The bandwidth estimate for the telecommunications path may be used to perform network traffic engineering for the network comprising the telecommunications path. The network traffic engineering may include routing packets through the network, performing load balancing for the network, and the like. The bandwidth estimate for the telecommunications path may be used to select a transmission rate for Transmission Control Protocol (TCP) packets to be transmitted over the telecommunications path. 
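The selection rule and the receive-rate estimator described above can be sketched together. This is a simplified illustration (the bump filter is assumed to have run already, and all names are hypothetical):

```python
def select_pair(pairs, owds, thresholds):
    """Among candidate (start, end) pairs that already passed the bump
    filter, prefer the widest pair whose end-packet OWD exceeds its noise
    threshold; otherwise fall back to the pair with the highest end OWD."""
    above = [p for p in pairs if owds[p[1]] > thresholds[p]]
    if above:
        return max(above, key=lambda p: p[1] - p[0])
    return max(pairs, key=lambda p: owds[p[1]])

def receive_rate(recv_ts, pkt_bytes, pair):
    """One possible estimator: receive rate of the probe packets between
    the two packets of the selected pair (bytes per time unit)."""
    i, j = pair
    return sum(pkt_bytes[i + 1:j + 1]) / (recv_ts[j] - recv_ts[i])

owds = [0, 3, 9, 7, 4, 2]
pairs = [(0, 5), (1, 4)]
chosen = select_pair(pairs, owds, {(0, 5): 1.0, (1, 4): 5.0})
print(chosen)                                                # (0, 5)
print(receive_rate([0, 1, 2, 3, 4, 5], [100] * 6, chosen))   # 100.0
```

With stricter thresholds (none exceeded), the fallback branch instead returns the pair whose end packet has the highest OWD, matching the "otherwise" case in the text.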
The bandwidth estimate for the telecommunications path may be used to select a video encoding rate for video to be transmitted over the telecommunications path. The bandwidth estimate for the telecommunications path may be reported to a user of the telecommunications path. The main benefit of the disclosed technology is its performance. It improves the selection of packet pairs and leads to much better available bandwidth estimates. Another benefit is that it is a simple modification of the Bump Detection Algorithm (BDA) used by many of the best techniques to measure available bandwidth. FIG. 6 depicts a block diagram of an example computer system 600 in which various of the embodiments described herein may be implemented. The computer system 600 includes a bus 602 or other communication mechanism for communicating information, and one or more hardware processors 604 coupled with bus 602 for processing information. Hardware processor(s) 604 may be, for example, one or more general purpose microprocessors. The computer system 600 also includes a main memory 606, such as a random access memory (RAM), cache and/or other dynamic storage devices, coupled to bus 602 for storing information and instructions to be executed by processor 604. Main memory 606 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 604. Such instructions, when stored in storage media accessible to processor 604, render computer system 600 into a special-purpose machine that is customized to perform the operations specified in the instructions. The computer system 600 further includes a read only memory (ROM) 608 or other static storage device coupled to bus 602 for storing static information and instructions for processor 604. A storage device 610, such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., is provided and coupled to bus 602 for storing information and instructions. 
The computer system 600 may be coupled via bus 602 to a display 612, such as a liquid crystal display (LCD) (or touch screen), for displaying information to a computer user. An input device 614, including alphanumeric and other keys, is coupled to bus 602 for communicating information and command selections to processor 604. Another type of user input device is cursor control 616, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 604 and for controlling cursor movement on display 612. In some embodiments, the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen without a cursor. The computing system 600 may include a user interface module to implement a GUI that may be stored in a mass storage device as executable software codes that are executed by the computing device(s). This and other modules may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. In general, the words “component,” “engine,” “system,” “database,” “data store,” and the like, as used herein, can refer to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java, C or C++. A software component may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software components may be callable from other components or from themselves, and/or may be invoked in response to detected events or interrupts. 
Software components configured for execution on computing devices may be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression or decryption prior to execution). Such software code may be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware components may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors. The computer system 600 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 600 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 600 in response to processor(s) 604 executing one or more sequences of one or more instructions contained in main memory 606. Such instructions may be read into main memory 606 from another storage medium, such as storage device 610. Execution of the sequences of instructions contained in main memory 606 causes processor(s) 604 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. The term “non-transitory media,” and similar terms, as used herein refers to any media that store data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media may comprise non-volatile media and/or volatile media. 
Non-volatile media includes, for example, optical or magnetic disks, such as storage device 610. Volatile media includes dynamic memory, such as main memory 606. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same. Non-transitory media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between non-transitory media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 602. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. The computer system 600 also includes a communication interface 618 coupled to bus 602. Network interface 618 provides a two-way data communication coupling to one or more network links that are connected to one or more local networks. For example, communication interface 618 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, network interface 618 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or a WAN component to communicate with a WAN). Wireless links may also be implemented. In any such implementation, network interface 618 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. A network link typically provides data communication through one or more networks to other data devices. 
For example, a network link may provide a connection through a local network to a host computer or to data equipment operated by an Internet Service Provider (ISP). The ISP in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet.” Local network and Internet both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on the network link and through communication interface 618, which carry the digital data to and from computer system 600, are example forms of transmission media. The computer system 600 can send messages and receive data, including program code, through the network(s), network link and communication interface 618. In the Internet example, a server might transmit a requested code for an application program through the Internet, the ISP, the local network and the communication interface 618. The received code may be executed by processor 604 as it is received, and/or stored in storage device 610, or other non-volatile storage for later execution. Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code components executed by one or more computer systems or computer processors comprising computer hardware. The one or more computer systems or computer processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). The processes and algorithms may be implemented partially or wholly in application-specific circuitry. The various features and processes described above may be used independently of one another, or may be combined in various ways. Different combinations and sub-combinations are intended to fall within the scope of this disclosure, and certain method or process blocks may be omitted in some implementations. 
The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate, or may be performed in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments. The performance of certain of the operations or processes may be distributed among computer systems or computer processors, not only residing within a single machine, but deployed across a number of machines. As used herein, a circuit might be implemented utilizing any form of hardware, software, or a combination thereof. For example, one or more processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines or other mechanisms might be implemented to make up a circuit. In implementation, the various circuits described herein might be implemented as discrete circuits or the functions and features described can be shared in part or in total among one or more circuits. Even though various features or elements of functionality may be individually described or claimed as separate circuits, these features and functionality can be shared among one or more common circuits, and such description shall not require or imply that separate circuits are required to implement such features or functionality. Where a circuit is implemented in whole or in part using software, such software can be implemented to operate with a computing or processing system capable of carrying out the functionality described with respect thereto, such as computer system 600. As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, the description of resources, operations, or structures in the singular shall not be read to exclude the plural. 
Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. Adjectives such as “conventional,” “traditional,” “normal,” “standard,” “known,” and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. The presence of broadening words and phrases such as “one or more,” “at least,” “but not limited to” or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent. Software Defined Networking (SDN) is an approach for managing networks. SDN defines APIs which allow decoupling the datapath (packet forwarding) and the control plane (protocol intelligence) of network elements. In other words, a network controller, an entity outside the network element, can have fine grained control and visibility over that network element. This can be used by the network controller to change the policy of network elements dynamically, or to centralize the control plane and decision making of the network. The SDN approach is also very well suited for network traffic engineering. The SDN APIs usually define both measurement and control, and thus enables the network controller to measure the network and dictate a distribution of traffic. 
One of the limitations of SDN is that it assumes a tight coupling between the network controller and the network elements. This can work well at small to medium scale, but usually won't scale to larger networks. Its efficiency is also diminished if the network between the network controller and network elements has limited performance (e.g., low bandwidth or high latency). Moreover, the SDN approach usually does not allow crossing administrative domain boundaries, because different entities can only trust controlled and limited interactions between each other. Software Defined WAN (SD-WAN) technology proposes to use SDN principles to manage WAN connectivity. This can offer centralized visibility and control over the WAN connectivity of the organization. Another goal of SD-WAN is to reduce the cost of WAN connectivity. The way SD-WAN reduces cost is to replace those dedicated WAN links by tunnels over the Internet. In this case, each branch and location has multiple WAN links connected to the Internet, usually using inexpensive consumer WAN technologies such as DSL, CableModem or Wireless 3G. A special SD-WAN gateway in each branch and location creates private tunnels (VPN tunnels) to connect securely to other branches and locations over the WAN links and the Internet. Replacing the dedicated WAN links enables reduction in the cost of WAN connectivity. However, consumer WAN technology and the Internet do not provide the same high availability and quality of service guarantees as legacy dedicated WAN links. The common way SD-WAN solutions deal with this issue is for the SD-WAN gateway in each branch and location to have multiple WAN links to the Internet. The SD-WAN gateway creates parallel tunnels over the Internet using each WAN link, implements tunnel handoff, and uses network traffic engineering to direct traffic to the most appropriate network tunnel with the goal of optimally using the available network capacity. 
For example, if the SD-WAN gateway detects that a WAN link is down, it will direct traffic away from that WAN link to a tunnel not using that particular WAN link. The SD-WAN gateway can perform more advanced traffic engineering. For example, it can monitor the performance of each tunnel, in terms of latency, available bandwidth and throughput, and then load balance traffic, or map each traffic type to the most appropriate tunnel for that traffic. Network traffic engineering typically looks at the properties of the network traffic and the network elements and their connectivity to both help design the network and direct traffic onto different paths in that network. Proper traffic engineering helps to increase the network performance and lower its cost. One of the simplest forms of network traffic engineering is to add a backup link to the network to be used in case of failure of the primary link. Multi-rooted trees, such as fat trees, are topologies that offer many such parallel paths. They have been extensively studied, and many approaches have been proposed to load balance packets on such topologies. The Internet is a more complex network that is split across many entities and administrative domains, and therefore traffic engineering for the Internet is much more complex. On the Internet, each entity usually does fairly advanced traffic engineering within its own domain, and cross domain interactions are typically managed through Border Gateway Protocol (BGP) routing, which allows only for fairly coarse traffic engineering. Most techniques for traffic engineering are composed of three parts. The first part is measurement, where some attributes of the traffic and/or network are measured. The second part is optimization, where an optimal distribution of traffic is computed. The third part is control, where the network is reconfigured to implement the desired distribution of traffic. 
A key component of such traffic engineering is a way to measure the performance of each network tunnel. Each tunnel defines a network path across the Internet. The tunneled packets are processed by a number of network elements along the network path. The network path used by a tunnel (outside the tunnel) and the network path inside a tunnel are logically different (they have different end point addresses). However, those two network paths go through the same network elements, and in most cases have almost the same performance, and strongly correlated performance characteristics. Therefore, measuring the performance of a network tunnel can be done by measuring the network path outside or inside the tunnel. For purposes of the disclosed technology, they are considered the same network path.
DETAILED DESCRIPTION For simplicity and illustrative purposes, the present disclosure is described by referring mainly to examples. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be readily apparent, however, that the present disclosure may be practiced without limitation to these specific details. In other instances, some methods and structures have not been described in detail so as not to unnecessarily obscure the present disclosure. Throughout the present disclosure, the terms “a” and “an” are intended to denote at least one of a particular element. As used herein, the term “includes” means includes but not limited to, and the term “including” means including but not limited to. The term “based on” means based at least in part on. Aggregation-based speed testing apparatuses, methods for aggregation-based speed testing, and non-transitory computer readable media for aggregation-based speed testing are disclosed herein. The apparatuses, methods, and non-transitory computer readable media disclosed herein provide for implementation of two or more simultaneous speed tests over different physical interfaces to overcome the technical challenge of a maximum port speed associated with a port of a device that is operatively connected to a data transmission link, or a maximum processing speed associated with a test processor of a test instrument, while delivering a speed test that includes a higher overall speed compared to any port of the device, or any test processor of the test instrument. 
For example, assuming that a test device includes ports that are rated at 1 gigabit per second (Gbps) maximum, whereas the test device is connected to a data transmission link that includes a maximum specified data transfer rate of 2 Gbps, the apparatuses, methods, and non-transitory computer readable media disclosed herein provide for implementation of two or more simultaneous speed tests (e.g., 1 Gbps each, or another test range as disclosed herein) to deliver a 2 Gbps speed test. Examples of speed tests that may be performed may include Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and other types of speed tests. For the apparatuses, methods, and non-transitory computer readable media disclosed herein, generally, a specified test speed n that corresponds to a maximum specified data transfer rate for a data transmission link that is to be tested may be divided into x test ranges as disclosed herein based, for example, on factors such as a maximum specified port speed for ports that are to be utilized, a maximum specified processing speed for test processors that are to be utilized, and other such speed limitations for components that are to be utilized. For example, a speed test for a test speed of n may be divided equally into x test ranges (e.g., x=n/2, x=n/3, . . . , x=n/y, where y is an integer greater than 1). Alternatively, a speed test for a test speed of n may be divided into x unequal test ranges. Alternatively, a speed test for a test speed of n may be divided into x equal and y unequal test ranges. Generally, a speed test for a test speed of n may be divided into any combination of equal or unequal test ranges. For the apparatuses, methods, and non-transitory computer readable media disclosed herein, a speed test may be divided based on the capabilities of the test instrument (e.g., test processors of the test instrument). 
For example, if a test instrument is capable of 1 Gbps and 0.5 Gbps speed tests, and a 1.2 Gbps speed test is needed, the test instrument may be utilized at 1 Gbps and 0.2 Gbps to obtain the final results of the 1.2 Gbps speed test. For the apparatuses, methods, and non-transitory computer readable media disclosed herein, the speed test may be specified to exceed the test speed to thus assure that the test instrument as disclosed herein measures at least the test speed that is to be measured. Thus, resources of the apparatus 100 may be controlled to exceed the test speed that is to be measured. For the apparatuses, methods, and non-transitory computer readable media disclosed herein, the test instrument as disclosed herein may be utilized to certify a true speed of a device. For example, assuming that a test speed to be measured is specified at 1 Gbps, but is actually at 1.5 Gbps, then the test instrument may be operated at a relatively higher speed (e.g., 1.2 Gbps) to determine the actual speed. If the actual speed is determined to be 1.2 Gbps, it can be assumed that the actual speed may be higher. In this regard, the test instrument may be operated at a further higher speed (e.g., 2.0 Gbps) to determine the actual speed. If the actual speed is determined to be 1.5 Gbps, it can be assumed that the actual speed is actually 1.5 Gbps. For the apparatuses, methods, and non-transitory computer readable media disclosed herein, the elements of the apparatuses, methods, and non-transitory computer readable media disclosed herein may be any combination of hardware and programming to implement the functionalities of the respective elements. In some examples described herein, the combinations of hardware and programming may be implemented in a number of different ways. 
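The division of a required test speed across capped ports, as in the 1.2 Gbps example above, can be sketched as a simple greedy allocation. The function name and Mbps units are assumptions for illustration, not the apparatus's actual divider:

```python
def divide_test_speed(test_speed, port_caps):
    """Hypothetical greedy divider: split a required test speed (Mbps)
    across ports, each capped at its maximum specified port speed."""
    ranges, remaining = [], test_speed
    for cap in port_caps:
        if remaining <= 0:
            break
        share = min(cap, remaining)   # use the port up to its cap
        ranges.append(share)
        remaining -= share
    if remaining > 0:
        raise ValueError("available ports cannot cover the requested test speed")
    return ranges

# The 1.2 Gbps example: 1 Gbps and 0.5 Gbps capable tests, used at
# 1 Gbps and 0.2 Gbps respectively.
print(divide_test_speed(1200, [1000, 500]))   # [1000, 200]
```

An equal split (e.g. x = n/2) is the special case where every port gets the same share; unequal and mixed splits, as described above, fall out of the same scheme with different caps.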
For example, the programming for the elements may be processor executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the elements may include a processing resource to execute those instructions. In these examples, a computing device implementing such elements may include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separately stored and accessible by the computing device and the processing resource. In some examples, some elements may be implemented in circuitry. FIG.1illustrates an architectural layout of an aggregation-based speed testing apparatus (hereinafter also referred to as “apparatus100”) in accordance with an example of the present disclosure. Referring toFIG.1, the apparatus100may include a test speed analyzer102that is executed by at least one hardware processor (e.g., the hardware processor802ofFIG.8, and/or the hardware processor1004ofFIG.10), to ascertain a test speed104that corresponds to a maximum specified data transfer rate for a data transmission link106that is to be tested. According to examples disclosed herein, the test speed analyzer102may ascertain the test speed104that corresponds to the maximum specified data transfer rate for an Internet connection. A test range generator108that is executed by at least one hardware processor (e.g., the hardware processor802ofFIG.8, and/or the hardware processor1004ofFIG.10) may determine a maximum specified port speed110of at least one test port112(illustrated inFIG.1as test port (1), . . . , test port (n)) of a device114connected to the data transmission link106that is to be tested. Alternatively, the maximum specified port speed110may be for a test instrument132itself as disclosed herein. The test range generator108may determine whether the maximum specified port speed110is less than the test speed104. 
Based on a determination that the maximum specified port speed110is less than the test speed104, the test range generator108may divide the test speed104into a plurality of test ranges116. A total value of the plurality of test ranges116may be at least equal to the test speed104. According to examples disclosed herein, the test range generator108may divide the test speed104into the plurality of test ranges116by dividing the test speed104into the plurality of test ranges116that include equal test ranges. According to examples disclosed herein, the test range generator108may divide the test speed104into the plurality of test ranges116by dividing the test speed104into the plurality of test ranges116that include unequal test ranges. According to examples disclosed herein, the test range generator108may divide the test speed104into the plurality of test ranges116by dividing the test speed104into the plurality of test ranges116that include random test ranges. According to examples disclosed herein, the total value of the plurality of test ranges116may be exactly equal to the test speed104. According to examples disclosed herein, the total value of the plurality of test ranges116may be greater than the test speed104. A test performer118that is executed by at least one hardware processor (e.g., the hardware processor802ofFIG.8, and/or the hardware processor1004ofFIG.10) may perform a test120corresponding to each test range of the plurality of test ranges116. The test performer118may obtain an intermediate test result122for each test range of the plurality of test ranges116. According to examples disclosed herein, the test performer118may perform the test120corresponding to each test range of the plurality of test ranges116by initiating the test120corresponding to each test range of the plurality of test ranges116at a same time (or within a specified time difference, such as <5 seconds apart). 
Further, the test performer118may perform the test120corresponding to each test range of the plurality of test ranges116for a same time duration. A test result generator124that is executed by at least one hardware processor (e.g., the hardware processor802ofFIG.8, and/or the hardware processor1004ofFIG.10) may generate, based on aggregation of each intermediate test result122for each test range of the plurality of test ranges116, an aggregated test result126that represents an actual speed associated with the data transmission link106. According to examples disclosed herein, the test range generator108may determine a maximum specified processing speed128of at least one test processor130of the test instrument132. This aspect may provide for testing of the data transmission link106based on the maximum specified processing speed128of the at least one test processor130of the test instrument132. In this regard, the test range generator108may determine whether the maximum specified processing speed128of the at least one test processor130of the test instrument132is less than the test speed104. Further, based on a determination that the maximum specified processing speed128of the at least one test processor130of the test instrument132is less than the test speed104, the test range generator108may divide the test speed104into a plurality of test ranges116corresponding to the at least one test processor130. In this regard, a total value of the plurality of test ranges116may be at least equal to the test speed104. According to examples disclosed herein, the test performer118may perform, for the at least one test processor130, a test120corresponding to each test range of the plurality of test ranges116. The test performer118may obtain an intermediate test result122for each test range of the plurality of test ranges116. 
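The behavior of the test performer118and test result generator124described above (one test per range, initiated at the same time and run for the same duration, with the intermediate results aggregated) can be sketched as follows; `run_range_test` is a placeholder for an actual TCP/UDP throughput test, and the 99% attainment figure is an assumption for illustration:

```python
import concurrent.futures

def run_range_test(range_gbps):
    # Placeholder for a real per-port or per-processor throughput test;
    # here we assume ~99% of the configured range is achieved.
    return range_gbps * 0.99

def aggregated_speed(test_ranges):
    # Launch one test per range simultaneously and sum the intermediate results.
    with concurrent.futures.ThreadPoolExecutor(len(test_ranges)) as pool:
        intermediate = list(pool.map(run_range_test, test_ranges))
    return sum(intermediate)

print(round(aggregated_speed([1.0, 1.0]), 3))  # 1.98
```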
According to examples disclosed herein, the test result generator124may generate, based on aggregation of each intermediate test result122for each test range of the plurality of test ranges116, an aggregated test result126that represents an actual speed associated with the data transmission link106. In order to test operation of the device114(e.g., determine whether the device114is operating correctly or incorrectly), once the data transmission link106is tested based on the maximum specified processing speed128of the at least one test processor130of the test instrument132, the test speed analyzer102may determine the maximum specified port speed110of at least one test port112of the device114connected to the data transmission link106. In this regard, the test range generator108may determine whether the maximum specified port speed110is less than the test speed104. Based on a determination that the maximum specified port speed110is less than the test speed104, the test range generator108may divide the test speed104into a plurality of further test ranges (e.g., similar or different from the test ranges116). A total further value of the plurality of further test ranges may be at least equal to the test speed104. The test performer118may perform a further test corresponding to each further test range of the plurality of further test ranges. The test performer118may obtain an intermediate further test result for each further test range of the plurality of further test ranges. The test result generator124may generate, based on aggregation of each intermediate further test result for each further test range of the plurality of further test ranges, an aggregated further test result that represents an actual further speed associated with the data transmission link106. 
A device operation analyzer134that is executed by at least one hardware processor (e.g., the hardware processor802ofFIG.8, and/or the hardware processor1004ofFIG.10) may determine whether the actual further speed is less than the actual speed. Based on a determination that the actual further speed is less than the actual speed, the device operation analyzer134may generate an indication that the device114is operating incorrectly. According to examples disclosed herein, the device operation analyzer134may determine whether the actual further speed is greater than or equal to the actual speed. Based on a determination that the actual further speed is greater than or equal to the actual speed, the device operation analyzer134may generate an indication that the device114is operating correctly. Operation of the apparatus100is described in further detail with reference toFIGS.1-7. FIGS.2-5illustrate performance of speed tests including a device connected to a data transmission link to illustrate operation of the aggregation-based speed testing apparatus ofFIG.1in accordance with an example of the present disclosure. Referring toFIGS.1-5, and particularlyFIG.2, an example of implementation of the apparatus100for a network and service companion (NSC)200is disclosed. The NSC200may include a 1 Gbps port202(e.g., an RJ-45, or another type of port), and additionally support a 1 Gbps optical/electrical Ethernet small form-factor pluggable (SFP)204in the SFP cage. Both of these interfaces may include a 1 Gbps path to one processor (e.g., an i.mx 8, or another type of 2 Gbps processor) at206. Thus, the processor may run at least a 2 Gbps speed test. In this regard, as disclosed herein, the test speed analyzer102may ascertain a test speed104of 2 Gbps that corresponds to a maximum specified data transfer rate of 2 Gbps for the data transmission link106that is to be tested. 
It is noted that the 2 Gbps maximum specified data transfer rate is provided as an example, and the maximum specified data transfer rate may be higher or lower than 2 Gbps. For example, the maximum specified data transfer rate may be specified at 10 Gbps (or more) for an optical Ethernet link. The test range generator108may determine a maximum specified port speed110of 1 Gbps of at least one test port112(e.g., ports208and210) of device114connected to the data transmission link106that is to be tested. In this regard, the device114may be a modem, a residential gateway (RG), an optical network terminal (ONT), or another such device. Alternatively, the maximum specified port speed110may be for the test instrument132itself as disclosed herein (e.g., for ports202and204). The test range generator108may determine whether the maximum specified port speed110is less than the test speed104(e.g., 1 Gbps<2 Gbps). Based on a determination that the maximum specified port speed110is less than the test speed104, the test range generator108may divide the test speed104into a plurality of test ranges116(e.g., test ranges of 1 Gbps each, or another test range as disclosed herein). A total value of the plurality of test ranges116may be at least equal to the test speed104(e.g., 1 Gbps+1 Gbps=2 Gbps). According to examples disclosed herein, the test range generator108may divide the test speed104into the plurality of test ranges116by dividing the test speed104into the plurality of test ranges116that include equal test ranges (e.g., two test ranges of 1 Gbps each). According to examples disclosed herein, the test range generator108may divide the test speed104into the plurality of test ranges116by dividing the test speed104into the plurality of test ranges116that include unequal test ranges (e.g., 3 test ranges of 1 Gbps, 0.6 Gbps, and 0.4 Gbps, assuming that there are further ports). 
According to examples disclosed herein, the test range generator108may divide the test speed104into the plurality of test ranges116by dividing the test speed104into the plurality of test ranges116that include random test ranges (e.g., 3 test ranges of 0.75 Gbps, 0.55 Gbps, and 0.7 Gbps, assuming that there are further ports). According to examples disclosed herein, the total value of the plurality of test ranges116may be exactly equal to the test speed104(e.g., total value is exactly 2 Gbps). According to examples disclosed herein, the total value of the plurality of test ranges116may be greater than the test speed104(e.g., total value is 2.1 Gbps). The test performer118may perform a test120corresponding to each test range of the plurality of test ranges116. The test120may be performed, for example, in accordance with an Internet or scientific standard, such as an RFC. The test performer118may obtain an intermediate test result122(e.g., for test ranges of 1 Gbps, a first intermediate test result may be 0.995 Gbps, and a second intermediate test result may be 0.990 Gbps) for each test range of the plurality of test ranges116. According to examples disclosed herein, the test performer118may perform the test120corresponding to each test range of the plurality of test ranges116by initiating the test120corresponding to each test range of the plurality of test ranges116at a same time (e.g., at time 4:00 PM). However, for multiple instances of tests, the tests may be performed at different times as needed. Further, the test performer118may perform the test120corresponding to each test range of the plurality of test ranges116for a same time duration (e.g., 15 seconds). 
The test result generator124may generate, based on aggregation of each intermediate test result122(e.g., for test ranges of 1 Gbps, a first intermediate test result is 0.995 Gbps, and a second intermediate test result is 0.990 Gbps) for each test range of the plurality of test ranges116, an aggregated test result126that represents an actual speed (e.g., 0.995 Gbps+0.990 Gbps=1.985 Gbps) associated with the data transmission link106. The example of the actual speed of 1.985 Gbps described above may be an upstream actual speed, and a downstream actual speed may be similarly determined. The actual speed may thus include upstream and downstream components based on the first intermediate test result and the second intermediate test result both including upstream and downstream components. Referring still toFIG.2, with respect to the test120performed by the test performer118, the test performer118may segregate the traffic belonging to different instances of the speed test from one another, and ensure that each traffic flow passes through its correct respective interface. For example, for the two ports202and204(e.g., RJ-45 and SFP), the test performer118may run two speed tests on the same processor. The first test may send and receive traffic over port202, and the second test may send and receive traffic over port204. Referring toFIGS.1-5, and particularlyFIG.3, another example of implementation of the apparatus100for an OneExpert DSL (ONX)300is disclosed. The ONX300may include a 212 MHz DSL port302(or another type of port) rated at least at 2 Gbps. The DSL port302may include a 1 Gbps path to two imx6 processors at304and306, each including a 1 Gbps processing capability. In this regard, as disclosed herein, the test speed analyzer102may ascertain the test speed104of 2 Gbps that corresponds to a maximum specified data transfer rate of 2 Gbps for the data transmission link106that is to be tested. 
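The aggregation of intermediate results into upstream and downstream totals, described above with reference toFIG.2, might be expressed as follows; the dictionary field names are illustrative assumptions, not from the disclosure:

```python
def aggregate(intermediate_results):
    """Sum the upstream and downstream components of each intermediate
    test result into an aggregated test result."""
    return {
        "up_gbps": sum(r["up_gbps"] for r in intermediate_results),
        "down_gbps": sum(r["down_gbps"] for r in intermediate_results),
    }

# Two 1 Gbps test ranges, each reporting upstream and downstream results.
result = aggregate([
    {"up_gbps": 0.995, "down_gbps": 0.990},
    {"up_gbps": 0.990, "down_gbps": 0.985},
])
print(round(result["up_gbps"], 3), round(result["down_gbps"], 3))
```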
The test range generator108may determine a maximum specified processing speed128of 1 Gbps of at least one test processor130(e.g., processors304and306) of the test instrument132(e.g., the ONX300). The test range generator108may determine whether the maximum specified processing speed128is less than the test speed104(e.g., 1 Gbps<2 Gbps). Based on a determination that the maximum specified processing speed128is less than the test speed104, the test range generator108may divide the test speed104into a plurality of test ranges116(e.g., test ranges of 1 Gbps each, or another test range as disclosed herein) corresponding to the at least one test processor130(e.g., processors304and306). A total value of the plurality of test ranges116may be at least equal to the test speed104(e.g., 1 Gbps+1 Gbps=2 Gbps). According to examples disclosed herein, as also disclosed herein with reference toFIG.2, the test range generator108may divide the test speed104into the plurality of test ranges116by dividing the test speed104into the plurality of test ranges116that include equal test ranges (e.g., two test ranges of 1 Gbps each). According to examples disclosed herein, the test range generator108may divide the test speed104into the plurality of test ranges116by dividing the test speed104into the plurality of test ranges116that include unequal test ranges (e.g., 3 test ranges of 1 Gbps, 0.6 Gbps, and 0.4 Gbps, assuming that there are further test processors). According to examples disclosed herein, the test range generator108may divide the test speed104into the plurality of test ranges116by dividing the test speed104into the plurality of test ranges116that include random test ranges (e.g., 3 test ranges of 0.75 Gbps, 0.55 Gbps, and 0.7 Gbps, assuming that there are further test processors). According to examples disclosed herein, the total value of the plurality of test ranges116may be exactly equal to the test speed104(e.g., total value is exactly 2 Gbps). 
According to examples disclosed herein, the total value of the plurality of test ranges116may be greater than the test speed104(e.g., total value is 2.2 Gbps). The test performer118may perform a test120corresponding to each test range of the plurality of test ranges116. The test performer118may obtain an intermediate test result122(e.g., for test ranges of 1 Gbps, a first intermediate test result may be 0.985 Gbps, and a second intermediate test result may be 0.990 Gbps) for each test range of the plurality of test ranges116. According to examples disclosed herein, the test performer118may perform the test120corresponding to each test range of the plurality of test ranges116by initiating the test120corresponding to each test range of the plurality of test ranges116at a same time (e.g., at time 3:00 PM). Further, the test performer118may perform the test120corresponding to each test range of the plurality of test ranges116for a same time duration (e.g., 20 seconds). The test result generator124may generate, based on aggregation of each intermediate test result122(e.g., for test ranges of 1 Gbps, a first intermediate test result is 0.985 Gbps, and a second intermediate test result is 0.990 Gbps) for each test range of the plurality of test ranges116, an aggregated test result126that represents an actual speed (e.g., 0.985 Gbps+0.990 Gbps=1.975 Gbps) associated with the data transmission link106. Referring still toFIG.3, with respect to the test120performed by the test performer118, since the instances of speed tests are executed on different processors, the test performer118may synchronize the start times of the speed tests, and monitor their lock-step progress. ForFIG.3, the processor306may be housed inside the same semiconductor chip as the 212 MHz modem. 
The test performer118may establish speed test1on Processor1at304and direct traffic to/from Processor1and the 212 MHz modem, and rely on the chipset to multiplex/de-multiplex the traffic belonging to the two speed tests. Referring toFIGS.1-5, and particularlyFIG.4, another example of implementation of the apparatus100for an ONX400is disclosed. The ONX400may include two 1 Gbps ports402and404(e.g., RJ-45, or other types of ports). Each of these interfaces may include a 1 Gbps path to a respective processor at406and408(e.g., an i.mx 8, or another type of processor). In this regard, as disclosed herein, the test speed analyzer102may ascertain a test speed104of 2 Gbps that corresponds to a maximum specified data transfer rate of 2 Gbps for the data transmission link106that is to be tested. The test range generator108may determine a maximum specified port speed110of 1 Gbps of at least one test port112(e.g., ports410and412) of device114connected to the data transmission link106that is to be tested. Alternatively, the maximum specified port speed110may be for the test instrument132itself as disclosed herein (e.g., for ports402and404). The test range generator108may determine whether the maximum specified port speed110is less than the test speed104(e.g., 1 Gbps<2 Gbps). Based on a determination that the maximum specified port speed110is less than the test speed104, the test range generator108may divide the test speed104into a plurality of test ranges116(e.g., test ranges of 1 Gbps each, or another test range as disclosed herein), in a similar manner as disclosed herein with respect toFIG.2. Alternatively, assuming that the processors406and408are also capable of 1 Gbps maximum performance, the analysis described above by the test range generator108may be based on the maximum specified processing speed128as disclosed above with respect toFIG.3. 
The test performer118may perform a test120corresponding to each test range of the plurality of test ranges116, in a similar manner as disclosed herein with respect toFIG.2. In a similar manner as disclosed herein with respect toFIG.2, the test result generator124may generate, based on aggregation of each intermediate test result122(e.g., for test ranges of 1 Gbps, a first intermediate test result is 0.995 Gbps, and a second intermediate test result is 0.990 Gbps) for each test range of the plurality of test ranges116, an aggregated test result126that represents an actual speed (e.g., 0.995 Gbps+0.990 Gbps=1.985 Gbps) associated with the data transmission link106. Referring still toFIG.4, with respect to the test120performed by the test performer118, since the instances of speed tests are executed on different processors, the test performer118may synchronize the start times of the speed tests, and monitor their lock-step progress. The test performer118may utilize an internal Ethernet switch in the ONX to carry speed test1traffic from processor1at406to/from RJ45 port1at402, and speed test2traffic from processor2at408to/from RJ45 port2at404, to segregate the speed tests. Referring toFIGS.1-5, and particularlyFIG.5, another example of implementation of the apparatus100for an NSC500is disclosed. Compared to the example ofFIG.2that implements two 1 Gbps speed tests, the example ofFIG.5includes a single 2 Gbps speed test. However, forFIG.5, which utilizes a link aggregation protocol at502, traffic is divided between interface1at504and interface2at506. Both of these interfaces may include a 1 Gbps path to a 2 Gbps processor at508. In this regard, as disclosed herein, the test speed analyzer102may ascertain a test speed104of 2 Gbps that corresponds to a maximum specified data transfer rate of 2 Gbps for the data transmission link106that is to be tested. 
The test range generator108may determine a maximum specified port speed110of 1 Gbps of at least one test port112(e.g., ports510and512) of device114connected to the data transmission link106that is to be tested. Alternatively, the maximum specified port speed110may be for the test instrument132itself as disclosed herein (e.g., for interfaces504and506). The test range generator108may determine whether the maximum specified port speed110is less than the test speed104(e.g., 1 Gbps<2 Gbps). Based on a determination that the maximum specified port speed110is less than the test speed104, the test range generator108may divide the test speed104into a plurality of test ranges116(e.g., test ranges of 1 Gbps each, or another test range as disclosed herein), in a similar manner as disclosed herein with respect toFIG.2. The test performer118may perform a test120corresponding to each test range of the plurality of test ranges116, in a similar manner as disclosed herein with respect toFIG.2. In a similar manner as disclosed herein with respect toFIG.2, the test result generator124may generate, based on aggregation of each intermediate test result122(e.g., for test ranges of 1 Gbps, a first intermediate test result may be 0.995 Gbps, and second intermediate test result may be 0.990 Gbps) for each test range of the plurality of test ranges116, an aggregated test result126that represents an actual speed (e.g., 0.995 Gbps+0.990 Gbps=1.985 Gbps) associated with the data transmission link106. FIGS.6and7illustrate performance of speed tests based on a direct connection to the data transmission link to illustrate operation of the apparatus100in accordance with an example of the present disclosure. Referring toFIGS.1and6, and particularlyFIG.6, an example of implementation of the apparatus100for an ONX base600is disclosed. 
Compared to the configurations ofFIGS.2-5, which include the device114disposed between the test instrument132(e.g., the NSC) and the data transmission link106, the configuration ofFIG.6may include no device114disposed between the ONX base600(or another type of base) and the data transmission link106that is to be tested (e.g., the data transmission link106is connected directly to the test instrument132that includes the ONX base and the modem module). In this manner, the device operation analyzer134may determine whether the device114(e.g., as shown inFIG.2) is operating correctly or incorrectly as disclosed herein. Specifically, referring toFIG.6, the ONX base600may include a processor602(e.g., an i.mx 6, or another type of processor) capable of a 1 Gbps speed test, and the modem module may include a processor604(e.g., a Broadcom63158, or another type of processor) similarly capable of a 1 Gbps speed test. In this regard, similar to the example ofFIG.4, the capabilities of the processors602and604may be aggregated as disclosed herein to perform, for example, a 2 Gbps speed test. For example, the test speed analyzer102may ascertain the test speed104of 2 Gbps that corresponds to a maximum specified data transfer rate of 2 Gbps for the data transmission link106that is to be tested. The test range generator108may determine a maximum specified processing speed128of 1 Gbps of at least one test processor130(e.g., processors602and604) of the test instrument132(e.g., the ONX base and the modem module). The test range generator108may determine whether the maximum specified processing speed128is less than the test speed104(e.g., 1 Gbps<2 Gbps). 
Based on a determination that the maximum specified processing speed128is less than the test speed104, the test range generator108may divide the test speed104into a plurality of test ranges116(e.g., test ranges of 1 Gbps each, or another test range as disclosed herein) corresponding to the at least one test processor130(e.g., processors602and604), in a similar manner as disclosed herein with respect toFIG.3. The test result generator124may generate, based on aggregation of each intermediate test result122(e.g., for test ranges of 1 Gbps, a first intermediate test result may be 0.985 Gbps, and second intermediate test result may be 0.990 Gbps) for each test range of the plurality of test ranges116, an aggregated test result126that represents an actual speed (e.g., 0.985 Gbps+0.990 Gbps=1.975 Gbps) associated with the data transmission link106. Comparing the results ofFIGS.6and2(assuming that the actual speed determined forFIG.2was 1.815 Gbps), the device operation analyzer134may determine whether the actual speed determined from the analysis associated withFIG.2is less than the actual speed determined from the analysis associated withFIG.6. Based on a determination that theFIG.2actual speed is less than theFIG.6actual speed, the device operation analyzer134may generate an indication that the device114is operating incorrectly (e.g., due to the drop from 1.975 Gbps to 1.815 Gbps). However, if theFIG.2actual speed is greater than or equal to theFIG.6actual speed, the device operation analyzer134may generate an indication that the device114is operating correctly. Referring next toFIG.7, in a similar manner asFIG.6, the capabilities of the processors602and604may be aggregated as disclosed herein to perform, for example, a 2 Gbps speed test. However, compared toFIG.6, forFIG.7, the data transmission link106may be connected to a device114. 
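The correctness determination made by the device operation analyzer134, comparing the speed measured directly on the link (as inFIG.6) against the speed measured through the device114(as inFIG.7), reduces to a simple comparison; the function name and return strings below are illustrative:

```python
def device_status(direct_gbps, through_device_gbps):
    """Flag the device as faulty when the speed measured through it drops
    below the speed measured with the test instrument connected directly."""
    if through_device_gbps < direct_gbps:
        return "device operating incorrectly"
    return "device operating correctly"

# The drop from 1.975 Gbps (direct) to 1.815 Gbps (through the device)
# indicates incorrect operation.
print(device_status(1.975, 1.815))  # device operating incorrectly
```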
The test speed analyzer102may ascertain the test speed104of 2 Gbps that corresponds to a maximum specified data transfer rate of 2 Gbps for the data transmission link106that is to be tested. The test range generator108may determine a maximum specified processing speed128of 1 Gbps of at least one test processor130(e.g., processors602and604) of the test instrument132(e.g., the ONX base and the modem module). The test range generator108may determine whether the maximum specified processing speed128is less than the test speed104(e.g., 1 Gbps<2 Gbps). Based on a determination that the maximum specified processing speed128is less than the test speed104, the test range generator108may divide the test speed104into a plurality of test ranges116(e.g., test ranges of 1 Gbps each, or another test range as disclosed herein) corresponding to the at least one test processor130(e.g., processors602and604), in a similar manner as disclosed herein with respect toFIG.3. The test result generator124may generate, based on aggregation of each intermediate test result122(e.g., for test ranges of 1 Gbps, a first intermediate test result may be 0.905 Gbps, and a second intermediate test result may be 0.910 Gbps) for each test range of the plurality of test ranges116, an aggregated test result126that represents an actual speed (e.g., 0.905 Gbps+0.910 Gbps=1.815 Gbps) associated with the data transmission link106. In this regard, the device operation analyzer134may generate an indication that the device114is operating incorrectly (e.g., due to the drop from 1.975 Gbps ofFIG.6to 1.815 Gbps ofFIG.7). FIGS.8-10respectively illustrate an example block diagram800, a flowchart of an example method900, and a further example block diagram1000for aggregation-based speed testing, according to examples. The block diagram800, the method900, and the block diagram1000may be implemented on the apparatus100described above with reference toFIG.1by way of example and not of limitation. 
The block diagram800, the method900, and the block diagram1000may be practiced in other apparatuses. In addition to showing the block diagram800,FIG.8shows hardware of the apparatus100that may execute the instructions of the block diagram800. The hardware may include a processor802, and a memory804storing machine readable instructions that when executed by the processor cause the processor to perform the instructions of the block diagram800. The memory804may represent a non-transitory computer readable medium.FIG.9may represent an example method for aggregation-based speed testing, and the steps of the method.FIG.10may represent a non-transitory computer readable medium1002having stored thereon machine readable instructions to provide aggregation-based speed testing according to an example. The machine readable instructions, when executed, cause a processor1004to perform the instructions of the block diagram1000also shown inFIG.10. The processor802ofFIG.8and/or the processor1004ofFIG.10may include a single or multiple processors or other hardware processing circuit, to execute the methods, functions and other processes described herein. These methods, functions and other processes may be embodied as machine readable instructions stored on a computer readable medium, which may be non-transitory (e.g., the non-transitory computer readable medium1002ofFIG.10), such as hardware storage devices (e.g., RAM (random access memory), ROM (read only memory), EPROM (erasable, programmable ROM), EEPROM (electrically erasable, programmable ROM), hard drives, and flash memory). The memory804may include a RAM, where the machine readable instructions and data for a processor may reside during runtime. Referring toFIGS.1-8, and particularly to the block diagram800shown inFIG.8, the memory804may include instructions806to ascertain a test speed104that corresponds to a maximum specified data transfer rate for a data transmission link106that is to be tested. 
The processor802may fetch, decode, and execute the instructions808to determine a maximum specified port speed110of at least one test port112of a device114connected to the data transmission link106that is to be tested. The processor802may fetch, decode, and execute the instructions810to determine whether the maximum specified port speed110is less than the test speed104. Based on a determination that the maximum specified port speed110is less than the test speed104, the processor802may fetch, decode, and execute the instructions812to divide the test speed104into a plurality of test ranges116. The processor802may fetch, decode, and execute the instructions814to perform a test120corresponding to each test range of the plurality of test ranges116. The processor802may fetch, decode, and execute the instructions816to obtain an intermediate test result122for each test range of the plurality of test ranges116. The processor802may fetch, decode, and execute the instructions818to generate, based on aggregation of each intermediate test result122for each test range of the plurality of test ranges116, an aggregated test result126that represents an actual speed associated with the data transmission link106. Referring toFIGS.1-7and9, and particularlyFIG.9, for the method900, at block902, the method may include ascertaining, by at least one hardware processor, a test speed104that corresponds to a maximum specified data transfer rate for a data transmission link106that is to be tested. At block904, the method may include determining, by the at least one hardware processor, a maximum specified processing speed128of at least one test processor130of a test instrument132. At block906, the method may include determining, by the at least one hardware processor, whether the maximum specified processing speed128of the at least one test processor130of the test instrument132is less than the test speed104. 
Based on a determination that the maximum specified processing speed 128 of the at least one test processor 130 of the test instrument 132 is less than the test speed, at block 908, the method may include dividing, by the at least one hardware processor, the test speed 104 into a plurality of test ranges 116 corresponding to the at least one test processor 130. At block 910, the method may include performing, by the at least one hardware processor, for the at least one test processor 130, a test 120 corresponding to each test range of the plurality of test ranges 116. At block 912, the method may include obtaining, by the at least one hardware processor, an intermediate test result 122 for each test range of the plurality of test ranges 116. At block 914, the method may include generating, by the at least one hardware processor, based on aggregation of each intermediate test result 122 for each test range of the plurality of test ranges 116, an aggregated test result 126 that represents an actual speed associated with the data transmission link 106. Referring to FIGS. 1-7 and 10, and particularly FIG. 10, for the block diagram 1000, the non-transitory computer readable medium 1002 may include instructions 1006 to ascertain a test speed 104 that corresponds to a maximum specified data transfer rate for a data transmission link 106 that is to be tested. The processor 1004 may fetch, decode, and execute the instructions 1008 to determine a maximum specified processing speed 128 of at least one test processor 130 of a test instrument 132. The processor 1004 may fetch, decode, and execute the instructions 1010 to determine whether the maximum specified processing speed 128 of the at least one test processor 130 of the test instrument 132 is less than the test speed 104. 
Based on a determination that the maximum specified processing speed 128 of the at least one test processor 130 of the test instrument 132 is less than the test speed 104, the processor 1004 may fetch, decode, and execute the instructions 1012 to divide the test speed 104 into a plurality of test ranges 116 corresponding to the at least one test processor 130. The processor 1004 may fetch, decode, and execute the instructions 1014 to perform, for the at least one test processor 130, a test 120 corresponding to each test range of the plurality of test ranges 116. The processor 1004 may fetch, decode, and execute the instructions 1016 to obtain an intermediate test result 122 for each test range of the plurality of test ranges 116. The processor 1004 may fetch, decode, and execute the instructions 1018 to generate, based on aggregation of each intermediate test result 122 for each test range of the plurality of test ranges 116, an aggregated test result 126 that represents an actual speed associated with the data transmission link 106. What has been described and illustrated herein is an example along with some of its variations. The terms, descriptions and figures used herein are set forth by way of illustration only and are not meant as limitations. Many variations are possible within the spirit and scope of the subject matter, which is intended to be defined by the following claims—and their equivalents—in which all terms are meant in their broadest reasonable sense unless otherwise indicated.
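The divide-test-aggregate flow recited above can be illustrated with a short sketch. All names below are illustrative only and not part of the disclosed apparatus; summation is shown as one plausible aggregation of the intermediate results. When the maximum port speed is below the test speed, the test speed is split into ranges no wider than the port speed, a test is performed per range, and the intermediate results are aggregated into the actual speed of the link:

```python
def aggregated_speed_test(test_speed, max_port_speed, run_range_test):
    """Divide test_speed into ranges no wider than max_port_speed,
    test each range, and aggregate the intermediate results."""
    if max_port_speed >= test_speed:
        # A single port covers the full test speed; one test suffices.
        return run_range_test(0, test_speed)

    # Divide the test speed into a plurality of test ranges.
    ranges = []
    start = 0
    while start < test_speed:
        end = min(start + max_port_speed, test_speed)
        ranges.append((start, end))
        start = end

    # Perform a test corresponding to each test range and obtain
    # an intermediate test result per range.
    intermediate = [run_range_test(lo, hi) for lo, hi in ranges]

    # Aggregate the intermediate results into the aggregated result
    # representing the actual speed of the link.
    return sum(intermediate)

# Hypothetical example: a 100 Gbps link tested through 25 Gbps ports,
# with each range test measuring 98% of its nominal range.
measured = aggregated_speed_test(
    100, 25, run_range_test=lambda lo, hi: (hi - lo) * 0.98)
```

The same sketch applies to the test-processor variant of the method 900, with the maximum specified processing speed in place of the port speed.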
11943131
DETAILED DESCRIPTION 

Overview 

In an embodiment, a method comprises: by a controller configured to communicate with devices of a network, storing classifiers assigned to groups of the devices to identify device commonality that is distinct for each group; associating, to the classifiers, historical probabilities of success with which an automated response executed by one or more of the devices remediates a device alarm event; when a device of the devices reports the device alarm event, identifying each classifier to which the device belongs, each historical probability for each classifier, and a rule with classifier thresholds for evaluating whether to execute the automated response on the device; determining to execute the automated response on the device by evaluating the rule using each historical probability and the classifier thresholds; after the automated response is executed on the device, performing a service test to test a service supported across the devices, and monitoring test results from the service test; and after performing the service test, updating each historical probability using the test results. 

Example Embodiments 

Embodiments presented herein provide a closed-loop system configured to measure the success of auto-remediation and self-healing tasks primarily referred to as “automated responses” after they have been executed on an equipment infrastructure, such as a network, to remediate alarm events. The embodiments condition deployment of the automated responses to devices that report alarm events based on evaluations of response rules against computed complex historical confidence scores (also referred to simply as “confidences”) that represent “historical probabilities of success” that the automated responses remediate the alarm events, as measured by improved service performance. 
The embodiments compute the historical confidence scores within multiple localized/specific classifiers (also referred to as “classifications”) of groups of devices or “scopes” (e.g., devices within the same city, devices that are the same model, devices that match a specific business, and so on) and a global scope that spans the specific classifiers (e.g., across all cities, across all models, and across all businesses). The embodiments reinforce the historical confidence scores/historical probabilities of success of the automated responses, based on historical success rates of the automated responses compared within the specific classifiers (e.g., success rates of the automated responses performed on devices specifically in Tokyo, or for Catalyst 9004 devices by Cisco, but not on ISR 4400 routers by Cisco). The embodiments also provide automatic closed-loop measurements of success using “synthetic” service tests, and using a human reinforcement layer of observation and influence. The embodiments employ a complex rules engine that allows for granular control over the response rules to permit the automated responses to run (i.e., execute) or not run with specificity for the specific classifiers associated with a given device. This permits administrators to control an amount of risk that can be tolerated with specificity for the given device, its location, or other parameters, as not all devices in all environments tolerate the same level of risk. The embodiments further permit role-based human reinforcement of the historical confidence scores. FIG. 1A is a block diagram of an example system 100 in which confidence reinforcement of automated remediation decision making through full-stack observations and service health measurements may be implemented. 
System 100 includes an equipment infrastructure 102 (which may be configured as a network) that supports various network-based services, a controller 104 accessible by an administrator through a portal 105, a datastore 106, and a network 108 connected to, and over which, the equipment infrastructure, the controller, and the datastore communicate with each other. Controller 104 implements a complex rules engine 109 to define and evaluate logic for response rules, as described below. Controller 104 is depicted as a single entity in FIG. 1A by way of example only. It is understood that controller 104 may include multiple network management and control entities/functions configured to perform operations presented herein. Through portal 105, the administrator may provision and control aspects of system 100. Although shown separately, equipment infrastructure 102 may form part of network 108. Network 108 may include one or more wide area networks (WANs), such as the Internet, and one or more local area networks (LANs), that convey traffic (e.g., data packets) between equipment infrastructure 102, controller 104, and datastore 106 using any known or hereafter developed communication protocols, such as the transmission control protocol (TCP)/Internet Protocol (IP), and the like. Equipment infrastructure 102 includes a collection of interconnected equipment or devices 110, such as equipment provided in a data center, network, and so on. The equipment may include devices 110 (such as hardware devices, applications hosted on the hardware devices, and virtual devices) to provide compute, storage, and network resources in the data center, and/or the network, for example. Equipment infrastructure 102 may include servers, network devices, such as routers and switches, and the like, for example. The devices 110 may be co-located at a geographic location (“geolocation”) or may be distributed across multiple spaced-apart geolocations. Devices 110 can communicate with controller 104 over network 108. 
Controller 104 has access to datastore 106, which may be stored locally to the controller or offsite. Datastore 106 includes a device inventory 120 of devices 110, classifiers 121 (also referred to as “device classifiers” and “classifications”) that may form part of the device inventory, historical confidence scores 122 associated with (i.e., mapped to) corresponding ones of the classifiers, a list of alarm events 124, a list of (and executable components of) automated responses 126 mapped to corresponding ones of the alarm events, and response rules 128 for/mapped to corresponding ones of the automated responses. Device inventory 120 includes an inventory of devices 110 as discovered by controller 104. Controller 104 may discover devices 110 using any known or hereafter developed device discovery technique. Device inventory 120 includes identifiers of, and other information related to, the devices, such as IP addresses, device names, and so on. Data objects described herein may be mapped to each other using any known or hereafter developed mapping constructs such as address pointers, shared data object names, common memory spaces, database mapping constructs, and the like. Classifiers 121 are assigned to devices 110 as listed in device inventory 120. Example classifiers are described below in connection with FIGS. 2-4. The administrator may assign classifiers 121 to devices 110 during provisioning of system 100 and thereafter. In other arrangements, classifiers 121 may be configured on devices 110 and discoverable by controller 104. Classifiers 121 include descriptive labels or tags that identify commonality among the devices to which the classifiers are assigned. That is, the classifiers are assigned to groups (i.e., multiple ones) of the devices to identify device commonality that is distinct for each group. Classifiers may include/define logical attributes of devices 110. 
For example, classifiers 121 include, but are not limited to, device location (e.g., Tokyo, Bangalore, etc.), device type (e.g., router, switch, etc.), device model (e.g., XP400, SRS50), business identity (e.g., enterprise A, enterprise B), and so on. As such, a classifier (e.g., location) may define a set of classifier values or sub-classifiers (e.g., Tokyo, Bangalore, and so on). Classifiers, classifier values, and sub-classifiers are all generally referred to as “classifiers.” Historical confidence scores 122 (also referred to simply as “confidences”) are mapped to corresponding ones of classifiers 121 (e.g., one historical confidence score per classifier). Historical confidence scores 122 represent computed historical probabilities that corresponding automated responses (defined below), when executed on devices 110, will remediate (e.g., clear, obviate, or otherwise overcome) corresponding alarm events (also defined below) successfully to improve the performance of the services supported by the devices. In other words, historical confidence scores 122 represent historical probabilities of success of the automated responses in remediating the alarm events to improve the services. Historical confidence scores 122 are provisioned on system 100 with initial values (e.g., 50%) set by the administrator through portal 105. Once provisioned, system 100 recomputes or updates historical confidence scores 122 over time to indicate the historical success of the automated responses as remediators, as described below. In various examples, alarm events 124 represent specifically identifiable degraded conditions of devices 110 that can negatively impact or degrade a service that runs on the devices. Such degraded conditions may include low or degraded device resources, such as device component failures, low memory, low central processing unit (CPU) availability, and saturated or overloaded interfaces/ports, for example. 
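The relationship just described between devices, classifiers, and per-classifier historical confidence scores can be sketched as simple in-memory structures. This is a minimal sketch with hypothetical names and data; the disclosure does not prescribe any particular datastore layout:

```python
# Default initial value for newly provisioned confidence scores (e.g., 50%).
DEFAULT_CONFIDENCE = 0.50

# Hypothetical mirror of device inventory 120 with classifiers 121.
device_inventory = {
    "192.0.2.100": {"name": "rtr-tokyo-1",
                    "classifiers": {"location": "Tokyo",
                                    "type": "router",
                                    "model": "XP400"}},
}

# One historical confidence score per (response, classifier, value).
confidence_scores = {}

def confidence_for(response, classifier, value):
    """Look up the historical confidence score, seeding new entries
    at the administrator-provisioned initial value."""
    return confidence_scores.setdefault(
        (response, classifier, value), DEFAULT_CONFIDENCE)

def device_confidences(response, device_ip):
    """Return the confidence score for each classifier to which the
    device belongs, keyed by classifier name."""
    cls = device_inventory[device_ip]["classifiers"]
    return {c: confidence_for(response, c, v) for c, v in cls.items()}
```

A freshly provisioned system would thus report the initial 50% confidence for every classifier scope of a device until service-test feedback updates the scores.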
Each device can detect alarm events locally and report the same to controller 104. Automated responses 126 are mapped to corresponding ones of alarm events 124. An automated response is an automated remediation task configured to be executed by a device to remediate a corresponding alarm event reported by the device to controller 104 (or other remediating entity). An automated response for a device includes, but is not limited to, executable code (e.g., a code snippet) compatible with the device (i.e., that the device executes to perform a task), a configuration command for the device, and so on. Controller 104 can command the device to perform/execute the automated response. That is, controller 104 deploys the automated response to the device, which then executes the automated response. Automated responses may also be referred to as any of automated tasks, automated actions, automated remediations, automated response actions, action responses, and response actions. Response rules 128 are mapped to corresponding ones of automated responses 126. Example response rules are described below in connection with FIGS. 5-7. Controller 104 determines whether to deploy automated responses 126 to devices 110 to remediate alarm events 124 reported by the devices based on historical confidence scores 122, response rules 128, and confidence thresholds associated with the response rules. More specifically, controller 104 compares the historical confidence scores 122 to the confidence thresholds in accordance with the response rules, to produce decisions as to whether to deploy the automated responses. The administrator may provision the confidence thresholds on complex rules engine 109 through portal 105. Complex rules engine 109 assists controller 104 with the aforementioned evaluation of response rules 128, as described below. 
Controller 104 interacts with equipment infrastructure 102, datastore 106, complex rules engine 109, and portal 105 to implement the confidence reinforcement of automated remediation decisions through full-stack observations and service health measurements. High-level operations for implementing the foregoing are now described. At 150, controller 104 repeatedly initiates a service test 151 (or multiple different types of service tests) that runs on devices 110, to produce test results indicative of service health. An example service test is described below in connection with FIG. 8. At 152, controller 104 receives and monitors the test results. Operations 150 and 152 collectively represent “service health measurement” performed in system 100. Asynchronously to 150 and 152, at 154, a device (e.g., device 110a) among devices 110 detects an alarm event and reports the same to controller 104. Upon receiving the alarm event, at 156, controller 104 retrieves from datastore 106 (i) historical confidence scores for classifiers to which the device (e.g., device 110a) belongs based on an identity of the device, (ii) an automated response to remediate the alarm event, and (iii) response rules and their confidence thresholds as mapped to the automated response. At 158, controller 104 determines whether to command the device to execute the automated response (i.e., to deploy the automated response to the device for execution by the device) based on the response rules, the confidence thresholds, and the historical confidence scores. For example, controller 104 compares the historical confidence scores against the confidence thresholds in accordance with logic defined by the response rules, to produce an outcome indicative of whether to deploy/execute the automated response. In some examples, “role-based human reinforcement” may also be leveraged to influence the outcome. 
When the outcome indicates to permit/deploy/execute the automated response, at 160, controller 104 commands the device to execute the automated response, which the device executes at 162. On the other hand, when the outcome indicates not to execute the automated response, controller 104 does not command the device to execute the automated response. In summary, operations 150-162 detect/observe the alarm event, match the alarm event to the automated response, and potentially automatically apply the automated response to remediate the alarm event based on the response rules. The foregoing is referred to as an “automation workflow.” Assume that the outcome indicates to deploy the automated response, which the device executes. After the device executes the automated response (i.e., after the automation workflow has executed the automated response), controller 104 continues to run service test 151 and monitor its test results, specifically to determine whether execution of the automated response by the device resulted in improved service health. At 166, controller 104 performs a “complex confidence calculation” to update/recompute the historical confidence scores corresponding to the automated response based/depending on the test results, i.e., to reflect the test results. When the test results indicate that the service health improved (e.g., the test results indicate a pass or a quantitative service improvement), controller 104 increases the historical confidence scores by a predetermined amount, e.g., 5%, 10%, or some other amount. In other words, the historical probability that the automated response is successful increases for the next round. Therefore, the next time that the alarm event is reported, system 100 will be more likely to deploy the automated response to remediate the alarm event. 
On the other hand, when the test results indicate that the service health degraded (e.g., a failure or a quantitative degradation), controller 104 decreases the historical confidence scores by a predetermined amount, e.g., 5%, 10%, or some other amount. In other words, the historical probability that the automated response is successful decreases for the next round. Therefore, the next time that the alarm event is reported, system 100 will be less likely to deploy the automated response to remediate the alarm event. Several main functions introduced above are now described in further detail, including service health measurement, complex confidence calculation, complex rules engine 109, and role-based human reinforcement. First, the service health measurement leverages known or hereafter developed network full-stack serviceability tools (e.g., ThousandEyes or AppDynamics applications by Cisco) to baseline service health and measure the success of an automated response. The service health measurement represents an “always-on” measurement system that employs service tests repeatedly to produce test results indicative of service health, correlates the service tests/test results to devices that have performed automated responses to remediate alarm events, and historically grades the confidence with which the automated responses remediated the alarm events based on prior impacts to service health caused by the automated responses. The service health measurement allows for unsupervised and closed-loop learning of automated remediation (i.e., automatically deploying the automated responses to the devices to remediate the alarm events), as well as measuring the true impact of the automated responses on supported services. 
Failed results produced by an automated response (i.e., a failure to remediate an alarm event) executed by a device may not be observable in key performance indicators (KPIs) generated locally by the device, and may best be observable at a level of a service that runs on the device. Thus, monitoring test results of service tests may be an effective way to catch the failed results. Second, the complex confidence calculation calculates historical confidence scores for automated responses based on correlated service tests. The complex confidence calculation computes historical confidence scores (i.e., confidences of success) of an automated response at various scopes (i.e., across different classifiers and levels of the classifiers) to convey affinities for the historical confidence scores for the automated response in a specific environment. For example, different scopes/classifier levels may reveal the confidence of success of rebooting a device to resolve an alarm event=high CPU on all devices with a device model classifier=Aggregated Service Router (ASR) 1004, and the confidence of success of shutting-down a primary interface on a device type classifier=router in a device location classifier=Tokyo (compared to historically shutting-down interfaces on devices with the device location classifier=Tokyo). Historical confidence is self-learning and can be unsupervised in a closed-loop system. Automated promotion/demotion of automated responses can be performed, based on calculating dynamic historical confidence scores. Thus, the embodiments provide the ability to adjust a learning rate for the historical confidence scores, to fine tune velocity of trusting an automated response if it is new or produces changing results. 
Third, complex rules engine 109 allows the administrator to define confidence thresholds for confidence scores corresponding to classifiers at both a localized scope (e.g., classifier=device location, classifier=device model, and classifier=device geolocation or region) and/or at a global level, which represents a combined or aggregate scope. This provides flexibility and protection of automated response/automation influence. For example, a confidence threshold may permit first-time introduction of an automated response/automation at a geolocation, provided that a historical confidence score for the automated response at other geolocations exceeds a certain threshold (i.e., a certain success rate). Conversely, an administrator may not want to run an immature automated response in a specific local environment (e.g., rebooting a core switch on a Tokyo trading floor), and may therefore set the confidence threshold for the automated response to run only when the confidence of running the automated response in Tokyo is 95% or above. Fourth, role-based human reinforcement allows human/administrator control over the processes presented herein through portal 105. In a situation in which a historical confidence score for an automated response is not sufficiently high to permit execution of the automated response, the administrator may choose to supervise deployment (i.e., execution) of the automated response in order to increase the historical confidence score, or to lower a historical confidence score of an automated response that a service test failed to auto-detect. To this end, the administrator may install user profiles to control which users can run which automated responses. The user profiles carry attributes of historical confidence scores for automated responses the users have approved to run manually. 
This approval system can leverage the user profiles to allow users with a specified track record to run automated responses (e.g., to permit those users with an approval rating >90%; where 90% would be measured by a user approving 10 manual automated responses to run with 1 service impact detected). Human reinforcement can also be used to mark an automated response that has run in the past as being service impacting, for situations where the synthetic service level test did not detect the degradation of the service. FIG. 1B shows an example sequence of operations 180 performed by system 100 to implement confidence reinforcement of automated remediation decision making through full-stack observations and service health measurements. Sequence of operations 180 expands on the operations described above. FIG. 1B is described with continued reference to FIG. 1A. At 181, system 100 employs the service health measurement as described above to provide a baseline measurement of an impact of an automated response to a service test performed across equipment infrastructure 102, and also to correlate the impact to devices that support the service test. The system 100 repeatedly runs the service test using the full-stack observability tools mentioned above to measure the success of the automated response targeted to a device to remediate an alarm event. To this end, the service test produces a test result (i.e., a service health measurement) that is used to historically grade a confidence of success (i.e., a historical confidence score) that the automated response improves service health. The service test may include a TCP/IP-based service test (which may employ protocols such as the hypertext transfer protocol secure (HTTPS) protocol, and TCP/user datagram protocol (UDP)/Internet control message protocol (ICMP)) directed toward defined business targets. 
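The approval-rating arithmetic in the example above (10 manual approvals with 1 service impact yielding a 90% rating) can be sketched as follows; the function names and the 90% gate are illustrative of the described approval system, not a prescribed implementation:

```python
def approver_rating(approved_runs, service_impacts):
    """Fraction of a user's manually approved automated responses
    that caused no detected service impact."""
    if approved_runs == 0:
        return 0.0
    return (approved_runs - service_impacts) / approved_runs

def may_approve(approved_runs, service_impacts, threshold=0.90):
    """Gate manual approval rights on the user's track record,
    e.g., 10 approvals with 1 service impact -> 0.90 rating."""
    return approver_rating(approved_runs, service_impacts) >= threshold
```

Under this reading, a user whose approvals led to a second service impact would fall below the 90% gate and lose the ability to approve runs manually.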
The service test is performed at a business service interface layer, which measures a true reachability and performance of a business service running over the top of the devices of equipment infrastructure 102. In addition, the system 100 dynamically builds/maintains a dependency mapping between the service test, test results produced by the service test (i.e., a service health measurement), and the devices that support the service test. The dependency mapping attributes a service health impact uncovered by the service test to one or more devices that contribute to the service health impact. Such attribution is leveraged by other functions, described below. Various known applications, such as ThousandEyes and ApplicationDynamics by Cisco, offer the ability, through tomography, to automatically map-out network hops and/or service dependency paths. This constructs a service-to-device dependency map of underlying components/devices/IP addresses upon which a business service overlay depends. The service-to-device dependency map reinforces the historical confidence scores that measure the historical success of automated responses targeted to devices to remediate alarm events generated at the devices. The service-to-device dependency map attributes a target device (e.g., a router with IP address 192.0.2.100) as a dependency for a service tested by the service test. The attribution indicates to the system that, with respect to executing an automated response on the target device, the system should monitor the test results of running the service test both before and after the target device executes the automated response, to gain an understanding of what impact the automated response had on the service health. At 182, the system 100 detects an alarm event and then matches the alarm event to an automated response to remediate the alarm event. The alarm event and the automated response are listed in and mapped to each other through alarm events 124 and automated responses 126. 
A device detects the alarm event through a device health monitoring platform. The alarm event may take the form of any of a simple network management protocol (SNMP) trap, a syslog message log signature match, and a key performance indicator (KPI), such as a CPU exceeding a threshold, and the like. The device reports the alarm event to the system (e.g., to controller 104), which learns an identity of the device from the report. The system (e.g., controller 104) accesses the automated response mapped to the alert in datastore 106. In an example, the system 100 detects the alarm event, reports the alarm event, and then matches the alarm event to the automated response using a network management system (NMS), which performs event management and fault detection in a manner that is distributed across controller 104 and equipment infrastructure 102. That is, the NMS is incorporated into controller 104 and equipment infrastructure 102. In another embodiment described below in connection with FIG. 1C, the NMS is separate from controller 104. In an example, the NMS may detect, as the alarm event, that an interface on a router with IP address 192.0.2.102 has a 7% packet loss. The alarm event is matched to the automated response, which is configured to troubleshoot the router and perform an automated action, such as to command a reset of the interface. Detecting the alarm event, reporting the alarm event, and matching the same to the automated response serves as a stimulus to subsequent operations that form a closed loop, with feedback provided from the service test. From 182, the system learns the identity of the device that reported the alarm event, the alarm event, and the automated response for the alarm event. Armed with the aforementioned information, at 183, the system identifies response rules mapped to the alarm event and performs a lookup of historical confidence scores against the response rules, as is now described. 
The system (i) retrieves the classifiers to which the device belongs from device inventory 120/classifiers 121 using the identity of the device as an index, (ii) retrieves the historical confidence scores corresponding to the classifiers from historical confidence scores 122, and (iii) retrieves the response rules corresponding to the alarm event from response rules 128. The response rules represent a complex granular rule-set (e.g., complex Boolean logic) to evaluate the automated response based on confidence thresholds (as defined in complex rules engine 109) for the historical confidence scores of the classifiers to which the device belongs. Together, the response rules and the confidence thresholds govern whether the automated response should be deployed to the device in order to remediate the alarm event. More specifically, using complex rules engine 109, the system determines whether to deploy the automated response based on a comparison of the historical confidence scores against their corresponding confidence thresholds in accordance with the response rules. Generally, when the historical confidence scores are equal to or exceed their corresponding confidence thresholds, the system is approved to deploy the automated response, whereas when the historical confidence scores are less than their corresponding confidence thresholds, the system is not approved to deploy the automated response. Complex rules engine 109 advantageously allows for multiple criteria to be defined, adding precision to deciding when auto-remediation is permissible for a given environment, based on unique parameters and the response rules. Complex rules engine 109 allows for the logical combination of multiple rules to produce complex response rules as desired by the administrator. Such complex response rules create flexibility to ensure the administrator can control a desired level of match to any given situation for any possible response action. 
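The rule evaluation just described (classifier-scoped confidence scores compared against per-classifier thresholds under Boolean logic) can be sketched minimally. The rule shape, names, and threshold values below are illustrative assumptions, not the rules engine's actual representation:

```python
def evaluate_rule(rule, confidences):
    """Approve the automated response when the device's historical
    confidence scores satisfy the rule's classifier thresholds,
    combined with the rule's Boolean logic (all/any)."""
    checks = [confidences.get(classifier, 0.0) >= threshold
              for classifier, threshold in rule["thresholds"].items()]
    return all(checks) if rule.get("logic", "and") == "and" else any(checks)

# Hypothetical rule for an immature response in a sensitive location:
# demand 95% confidence at the location scope AND 80% globally.
rule = {"logic": "and",
        "thresholds": {"location": 0.95, "global": 0.80}}
approved = evaluate_rule(rule, {"location": 0.97, "global": 0.85})
```

A real rules engine would additionally support nesting and the conditional/manual-approval outcomes described below; this sketch covers only the threshold comparison.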
Allowing the setting of response rules to permit an automated response based solely on historical confidence scores within specific classifiers (such as classifier=device type, device geolocation, time-of-day, and so on), rather than being tied to a specific device, provides a granularity in control that extends beyond conventional automation system control. System 100 may generate any of the following possible responses or outcomes based on the evaluation of the response rules using the confidence thresholds and the historical confidence scores:

a. Approved.
b. Approve conditionally.
c. Manual approval.
d. Denied.

At 184, system 100 takes an action based on the above outcomes. In other words, based on the confidence thresholds and the response rules as matched at 183, system 100 potentially takes the automated response in accordance with the following states:

a. If Approved: Risk tolerance for the device is accepted. Proceed with the automated response, i.e., deploy the automated response to the device. Success of the automated response will be measured subsequently at 185, described below.
b. If Approve conditionally: Execute/deploy an additional response action for additional checks, and expect a "pass" result before applying the automated response. This is a transient state, which moves to Approved upon a successful result from the additional response action when deployed/executed, or Denied if the expected result is not observed. This state allows chaining or recursion of response actions.
c. If Manual approval: Present the task to a defined approval group for manual supervised approval by the administrator before executing the automated response. This state is for response actions with lower confidence scores than can be trusted to run in a closed-loop or unsupervised approach. 
If the automated response is approved by the administrator, and the automated response is found to have a service impact, the approver's rating may be negatively impacted in the approver's user profile.
d. If Denied: Do not proceed with the automated response. No further action is taken.
At 185, after the automated response has been performed by the device, the system performs the service test to produce test results, and correlates the test results to the device. The system may use any of multiple methods to correlate the service test to the device targeted by the automated response. A first method includes a time-based correlation that employs a look-forward window of a given time duration. The test results of all service tests performed during the look-forward window are used to score the success of the (previously executed) automated response. For example, if the automated response ran at time=01:00, and the look-forward window for that response action is a duration=00:30, any service tests that transition to an alarm state within the time range=01:00-01:30 are associated with the automated response and (e.g., negatively or positively) impact the historical confidence score. The administrator may review the correlations at a later time and remove associations that the administrator deems coincidental rather than causal with respect to the response action. A second method includes service-dependency correlation. A service test may be deemed relevant to scoring the success of the previous automated response when an IP address of the device targeted by the automated response is in a dependency table for a test path that was exercised or used by the service test. This condition may also be combined with the look-forward window. The following sequence is an example of using the service-dependency correlation:
a. Query: What service tests have seen the device with IP address=192.0.1.150 in the network path in the last 24 hours?
b.
Answer: IP address=192.0.1.150 has been observed in the following service tests:
i. api.example.com.
ii. mail.example.com.
c. Action: Correlate the service tests for api.example.com and mail.example.com to any changes/action responses applied to the device with IP address=192.0.1.150, since there is a known device dependency for those service tests.
At 186, the system adjusts the confidence scores based on the test result of the service test (as correlated at 185). The following outcomes are possible based on the test result:
a. No service impact found (e.g., no alerts from the service tests): Increase the historical confidence score of the automated response by 10% (or other amount) from the current value of the historical confidence score.
b. Service is hard down (e.g., no response, timeout, or result does not match the expected result): Reduce the historical confidence score by 50% (or other amount) from the current value.
c. Service is degraded (e.g., a proper response is observed, but the response time is outside of specification): Reduce the historical confidence score by 25% (or other amount) of the current value.
Historical confidence scores may be instantiated/initialized at 50% for new response actions. The historical confidence score influence listed above is computed for and applied to all localized classifiers of the device (i.e., to which the device belongs), e.g., device type, device geolocation, device model, and so on. In turn, the changes to the localized historical confidence scores influence changes to the global historical confidence score. In other words, when an automated response runs on a device, system 100 updates/computes the historical confidence score for the automated response for each classifier to which the device belongs, and also updates/computes the global historical confidence score for the automated response.
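The adjustment at 186 can be sketched as follows. This is a minimal illustration that assumes the 10%/50%/25% amounts are applied multiplicatively to the current score value; the function name, outcome labels, and classifier names are hypothetical, and an administrator-tunable "learning rate" would simply change the multipliers:

```python
# Minimal sketch of the per-classifier score update at 186, assuming
# multiplicative adjustments relative to the current score value.

INITIAL_SCORE = 0.50  # new response actions may start at 50%

def adjust_score(current, outcome):
    """Return the updated historical confidence score for one classifier."""
    if outcome == "no_impact":   # service tests raised no alerts
        return min(1.0, current * 1.10)  # increase by 10% of current value
    if outcome == "hard_down":   # no response, timeout, or wrong result
        return current * 0.50            # reduce by 50% of current value
    if outcome == "degraded":    # correct response, but outside spec
        return current * 0.75            # reduce by 25% of current value
    raise ValueError(f"unknown outcome: {outcome}")

# The same adjustment is applied to every classifier the device belongs to.
scores = {"location.tokyo": 0.44, "model.catalyst9000": 0.68}
scores = {c: adjust_score(s, "no_impact") for c, s in scores.items()}
```

Applying one update across all of the device's classifiers is what lets a single remediation on one device influence confidence for every other device in the same groups.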
The administrator may change a "learning rate" for a historical confidence score by changing the 10%, 25%, and 50% adjustments listed above by way of example. FIG. 1C is a block diagram of system 100 configured according to an alternative embodiment. The alternative embodiment shown in FIG. 1C includes an NMS 190 (also referred to as a "network event manager") separate from controller 104 and configured to communicate/interact with the controller, datastore 106, and equipment infrastructure 102 to implement the embodiments presented herein. Network 108 is omitted from FIG. 1C for clarity. NMS 190 may store content from datastore 106 locally. NMS 190 performs NMS functions as described above in connection with FIGS. 1A and 1B (e.g., see FIG. 1B, 182). Specifically, NMS 190 includes a detect function 192 to detect alarm events on devices 110 (e.g., on device 110a), a match function 194 to match the alarm events to automated responses, and an apply function 196 to apply automated responses to the devices (e.g., to device 110a). NMS 190 also registers the alarm events with controller 104 to enable the controller to perform functions not provided by the NMS. An advantage of the alternative embodiment is to reduce a computation burden on controller 104. Example classifiers are now described in connection with FIGS. 2-4. FIG. 2 shows an example hierarchy of classifiers 200 of classifiers 121 that are assigned to device 110a of devices 110. Device 110a is identified as "My Device" in device inventory 120. Classifiers 200 include Model, Location.City, Business Function, and IP network subnet, with respective classifier values=Catalyst 9000, Tokyo, Retail, and 19.0.2.0/24. FIG. 3 shows example classifiers 300 assigned to devices A, B, and C such that the classifiers group the devices into distinct but overlapping or coextensive groups based on commonality shared between the devices.
Devices A and C belong to classifier Business Function=Retail/Sales (i.e., Retail/Sales is assigned to devices A and C, only), which establishes devices A and C as a distinct group of (multiple) devices for classifier Retail/Sales. All devices A, B, and C belong to classifier Device Type=Router, devices A and B belong to classifier Location=Japan, device C belongs to classifier Location=USA, and device B belongs to classifier Business Color=Blue. Historical confidence scores (not shown in FIG. 3) for the above-listed classifiers span the same groups of devices defined by the classifiers and are independent of one another. Thus, a change to a historical confidence score for classifier Retail/Sales does not affect a historical confidence score associated with classifier USA. On the other hand, a change to a given historical confidence score for a given classifier applies to all devices that belong to the given classifier. FIG. 4 shows example hierarchical historical confidence scores 400 (each referred to in FIG. 4 as a "confidence of task") that are mapped to corresponding classifiers for given response actions (i.e., tasks). As shown, lower-level model classifiers 402 Catalyst 9000 and ISR 4000 are mapped to respective lower-level historical confidence scores 404 of 68% and 93%, and optionally feed into a higher-level classifier 406 "Model." Similarly, lower-level location classifiers 410 Tokyo, Bangalore, and so on, are mapped to respective lower-level historical confidence scores 412 of 44%, 62%, and so on, and optionally feed into a higher-level classifier 414 "Location." The lower-level confidence scores and the higher-level confidence scores all feed into a global historical confidence score 420 that is computed based on all underlying historical confidence scores (e.g., as an average).
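Using the example scores of FIG. 4, the global score computation can be illustrated as follows. The text gives an average as one example aggregation; the equal weighting shown here is therefore an assumption, and other weightings are possible:

```python
# Illustrative global historical confidence score for one response action,
# assuming a plain (unweighted) average over all underlying per-classifier
# scores, per the "e.g., as an average" example in the description.

model_scores = {"Catalyst 9000": 0.68, "ISR 4000": 0.93}       # classifiers 402/404
location_scores = {"Tokyo": 0.44, "Bangalore": 0.62}            # classifiers 410/412

underlying = list(model_scores.values()) + list(location_scores.values())
global_score = sum(underlying) / len(underlying)  # (0.68 + 0.93 + 0.44 + 0.62) / 4

print(f"global historical confidence score: {global_score:.2%}")  # 66.75%
```

Because the global score aggregates scores from classifiers spanning many devices, it can back a coarse "global confidence" threshold even when a device's own localized classifiers have sparse history.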
With respect to a given device, the global historical confidence score may be computed based on the historical confidence scores associated with all underlying historical classifiers feeding into the global historical classifier, including particular classifiers for particular groups of devices to which the given device does not belong. Example response rules that compare historical confidence scores against confidence thresholds to determine whether an automated response is permitted or denied are now described in connection with FIGS. 5-7. Complex rules engine 109 evaluates the response rules on behalf of controller 104. FIG. 5 shows example "automatic approval" response rules 500 used to evaluate whether an automated response, matched to an alarm event reported by a device in Tokyo, is permitted and should be executed by the device to remediate the alarm event. According to response rules 500, the automated response is permitted when (i) according to a first logic leg 502, the historical confidence score (labeled "location confidence") for classifier Location.Tokyo is greater than or equal to a confidence threshold=70%, or (ii) according to a second logic leg 504, the global historical confidence score (labeled "global confidence") is greater than or equal to a confidence threshold=90%. When either condition is met, the automated response is automatically approved. When neither condition is met, the response is not permitted. FIG. 6 shows example "conditional approval" response rules 600 used to evaluate whether the automated response is permitted and should be executed by the device.
The conditional approval includes first logic leg 502 as described in connection with FIG. 5 and a second logic leg 602 that relies on a result of executing a second automated task "MyCheckTask2." According to response rules 600, the automated response is permitted when the condition of first logic leg 502 is met (as described above) and the condition set by second logic leg 602 is met (i.e., the result of MyCheckTask2=Pass). FIG. 7 shows example denial response rule 700. According to denial response rule 700, the automated response is denied and not executed when the global confidence is less than or equal to a global confidence threshold=50%. Otherwise, the automated response is approved. In some examples, the response rules may prompt the administrator for real-time authorization before the automated response can be run. FIG. 8 shows an example business service 802 supported by equipment infrastructure 102 and a service test 804 (labeled "synthetic test" in FIG. 8) employed to test the health of the business service. Business service 802 includes an email service to send email from a user 806 to an email server 810 over devices 812 of equipment infrastructure 102. Devices 812 include a device 812a (e.g., a switch), a device 812b (e.g., a router), and a device 812c (e.g., a switch). As shown, datastore 106 includes classifiers 816 (location.city=Tokyo, model=Catalyst 9000) assigned to device 812b and historical confidence scores 818 (44%, 68%) corresponding to the classifiers. "Over-the-top" service test 804 periodically (e.g., every 5 minutes) attempts to send a test email originated at [email protected] through devices 812, to produce periodic test results. The test results build historical confidence scores (e.g., historical confidence scores 818) within classifiers (e.g., classifiers 816) to which multiple devices may belong. FIG. 9 is an illustration of an example operational scenario 900 for system 100 that expands on the example of FIG. 8, in which business service 802 and service test 804 operate on devices 812.
At 904, an administrator supplies response rules and confidence thresholds to complex rules engine 109 through portal 105. In addition, the administrator may apply human reinforcement via manual approval to run response actions. At 906, device 812b detects and reports an alarm event, e.g., a "low disk alert" via an SNMP trap for a disk nearly full. System 100 matches an automated response 908 to the alarm event, such as "delete files from tmp folder." Before applying the automated response, system 100, using complex rules engine 109, assesses the historical confidence scores for the automated response. To do this, at 910, complex rules engine 109 accesses the classifiers to which device 812b belongs, the confidence scores corresponding to the classifiers, and the relevant response rules for/corresponding to the automated response. Complex rules engine 109 evaluates the response rules by comparing the confidence scores to their corresponding confidence thresholds in accordance with the logic of the response rules, to produce an up/down outcome on whether to permit the automated action to proceed. Assuming that results of the comparing indicate that the historical confidence scores exceed or are equal to their corresponding confidence thresholds, at 914, system 100 proceeds with the automated response. After the automated response is applied, at 916, system 100 runs service test 804 and monitors the test results as a measure of service health. If the test results indicate that the service test failed or that service health is degraded, system 100 reduces the historical confidence scores. On the other hand, if service health is not degraded or improves, system 100 increases the historical confidence scores. System 100 computes historical confidence scores for all classifiers to which the device belongs. In the example of FIG. 9, device 812b has location classifier=Tokyo and model classifier=Catalyst 9000.
System 100 computes a first historical confidence score for automated response="deleting file in tmp folder" within all devices with location classifier=Tokyo and a second confidence score for the automated response within all devices with model classifier=Catalyst 9000. Thus, each historical confidence score spans all devices that belong to the same classifier(s). Accordingly, the administrator can set multiple response rules across multiple classifiers (e.g., rules such as: apply the automated response to Catalyst 9000 devices in Tokyo if both the historical confidence score for running the automated response in location=Tokyo is 90% and the confidence score for running the automated response on model=Catalyst 9000 is 99%). FIG. 10 is a flowchart of an example method 1000 of confidence reinforcement of automated remediation decision making through full-stack observations and service health measurements. Operations of method 1000 are described above. Method 1000 may be performed primarily by a controller (e.g., controller 104) configured to communicate with devices (e.g., devices 110) of a network (e.g., equipment infrastructure 102). In the ensuing description of FIG. 10, a historical confidence score is referred to as a historical probability of success (or simply a historical probability). At 1002, the controller stores classifiers assigned to groups of the devices to identify device commonality that is distinct for each group. The classifiers may include a device geolocation and one or more of a device type and a device model. Each classifier may span multiple ones of the devices. The controller also associates, to the classifiers, historical probabilities of success with which an automated response executed previously by one or more of the devices remediates a device alarm event.
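By way of a non-limiting illustration, the state stored at 1002 and the classifier lookup used at 1004 may be sketched as follows, reusing the devices A, B, and C of FIG. 3. The data structures, names, and values here are hypothetical, not the disclosed storage format:

```python
# Hedged sketch of the stored state at 1002: classifiers group devices by
# shared traits, and each (classifier, response) pair carries a historical
# probability of success. Structure and values are illustrative only.
from collections import defaultdict

classifiers = {                      # classifier -> group of devices it spans
    "device_type.router": {"A", "B", "C"},
    "location.japan": {"A", "B"},
    "business.retail_sales": {"A", "C"},
}

# Historical probability of success, keyed by (classifier, automated response).
historical_probability = defaultdict(lambda: 0.50)  # new actions may start at 50%
historical_probability[("location.japan", "delete_tmp_files")] = 0.72

def classifiers_of(device):
    """Identify every classifier a device belongs to (the lookup at 1004)."""
    return [c for c, group in classifiers.items() if device in group]

# Device A belongs to three independent, overlapping groups.
print(classifiers_of("A"))
```

Keying probabilities by classifier rather than by device is what lets one device's remediation history inform decisions for every other device in the same groups.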
At 1004, when a device of the devices reports the device alarm event to the controller, the controller identifies (i) each classifier to which the device belongs, (ii) each historical probability for each classifier, and (iii) a response rule (also referred to below as simply a "rule") with classifier thresholds to be used for evaluating whether to execute the automated response on the device. At 1006, the controller determines to execute the automated response on the device by evaluating the rule using each historical probability and the classifier thresholds. The controller determines to execute the automated response when evaluating the rule indicates that each historical probability exceeds or is equal to a corresponding one of the classifier thresholds. The controller deploys the automated response to the device, which executes the automated response. At 1008, after the automated response is executed on the device, the controller performs a service test to test a service supported across the devices, and monitors test results from the service test. The controller may perform the service test repeatedly over a period of time after the automated response is executed on the device, and monitor the test results over the period of time. After performing the service test, the controller correlates the service test to the device. The service test may be deemed relevant to scoring the success of the automated response (see 1010 below) when a network address (e.g., an IP address) of the device (targeted by the automated response) is in a dependency database for a test path (e.g., a list of next-hop addresses for the test path) of the network that was exercised by the service test. Thus, the controller determines whether the device is in the test path exercised by the service test (i.e., whether the service test is correlated to the device) based on the network address of the device and the service dependency database.
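A simplified version of this dependency check can be sketched as follows. The dependency-table contents are invented for illustration (only the IP address 192.0.1.150 and the api/mail test names come from the earlier example), and the function name is hypothetical:

```python
# Sketch of the service-dependency correlation: a service test counts toward
# scoring the automated response only if the targeted device's IP address
# appears in the dependency table for the test path that the test exercised.

dependency_table = {   # service test -> next-hop IP addresses in its test path
    "api.example.com":  ["192.0.1.1", "192.0.1.150", "192.0.1.200"],
    "mail.example.com": ["192.0.1.1", "192.0.1.150"],
    "web.example.com":  ["192.0.1.1", "192.0.1.99"],
}

def correlated_tests(device_ip):
    """Return the service tests whose exercised path includes the device."""
    return [test for test, path in dependency_table.items() if device_ip in path]

# Only tests that traversed 192.0.1.150 are correlated to that device.
print(correlated_tests("192.0.1.150"))  # ['api.example.com', 'mail.example.com']
```

In practice this path check could also be intersected with the look-forward window described earlier, so that only recent, path-relevant tests influence the score.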
When the device is in the test path, the service test is correlated to the device, and flow proceeds to 1010, where the controller performs an update as described below. When the device is not in the test path, the service test is not correlated, and the controller does not perform the update. At 1010, after the service test is performed and deemed correlated to the device, the controller updates each historical probability using the test results. Specifically, when the test results indicate success, the controller increases each historical probability, and when the test results indicate failure, the controller decreases each historical probability. The controller performs the update across each group of devices corresponding to each classifier. In addition, when the rule includes a global compare of a global historical probability for the automated response against a global confidence threshold, the controller retrieves the global historical probability for the global classifier to which the device belongs (along with the underlying historical probabilities mentioned above) and determines to permit the automated response based at least in part on results of the global compare (e.g., when the global historical probability exceeds the global confidence threshold). After performing the service test, the controller updates the global historical probability based on each of the updated (underlying) historical probabilities. In another embodiment, confidence scores may be associated with a service catalog of predefined change types or predefined configurations for devices. The confidence scores may be computed from service health measurements, but are not tied to closed-loop auto-remediation. In an example, consider a company that has a list of changes to be implemented on devices manually, such as upgrading software on a router.
After a given manual service task is performed on a device, service tests may be performed, followed by scoring the success/confidence of the service task and updating the confidence attributed to the service task. When the service task is to be performed again manually (e.g., a manual change to upgrade a router in Tokyo), the historical confidence score for applying the service task to the device in that environment may be observed, and used as input to judge the risk of the change and whether it should be performed. Referring to FIG. 11, FIG. 11 illustrates a hardware block diagram of a computing device 1100 that may perform functions associated with operations discussed herein in connection with the techniques depicted in FIGS. 1-10. In various embodiments, a computing device or apparatus, such as computing device 1100 or any combination of computing devices 1100, may be configured as any entity/entities as discussed for the techniques depicted in connection with FIGS. 1-10 in order to perform operations of the various techniques discussed herein. For example, computing device 1100 may represent controller 104, complex rules engine 109, portal 105, and devices 110 of equipment infrastructure 102. In at least one embodiment, the computing device 1100 may be any apparatus that may include one or more processor(s) 1102, one or more memory element(s) 1104, storage 1106, a bus 1108, one or more network processor unit(s) 1110 interconnected with one or more network input/output (I/O) interface(s) 1112, one or more I/O interface(s) 1114, and control logic 1120. In various embodiments, instructions associated with logic for computing device 1100 can overlap in any manner and are not limited to the specific allocation of instructions and/or operations described herein.
In at least one embodiment, processor(s) 1102 is/are at least one hardware processor configured to execute various tasks, operations and/or functions for computing device 1100 as described herein according to software and/or instructions configured for computing device 1100. Processor(s) 1102 (e.g., a hardware processor) can execute any type of instructions associated with data to achieve the operations detailed herein. In one example, processor(s) 1102 can transform an element or an article (e.g., data, information) from one state or thing to another state or thing. Any of potential processing elements, microprocessors, digital signal processors, baseband signal processors, modems, PHYs, controllers, systems, managers, logic, and/or machines described herein can be construed as being encompassed within the broad term 'processor'. In at least one embodiment, memory element(s) 1104 and/or storage 1106 is/are configured to store data, information, software, and/or instructions associated with computing device 1100, and/or logic configured for memory element(s) 1104 and/or storage 1106. For example, any logic described herein (e.g., control logic 1120) can, in various embodiments, be stored for computing device 1100 using any combination of memory element(s) 1104 and/or storage 1106. Note that in some embodiments, storage 1106 can be consolidated with memory element(s) 1104 (or vice versa), or can overlap/exist in any other suitable manner. In at least one embodiment, bus 1108 can be configured as an interface that enables one or more elements of computing device 1100 to communicate in order to exchange information and/or data. Bus 1108 can be implemented with any architecture designed for passing control, data and/or information between processors, memory elements/storage, peripheral devices, and/or any other hardware and/or software components that may be configured for computing device 1100.
In at least one embodiment, bus 1108 may be implemented as a fast kernel-hosted interconnect, potentially using shared memory between processes (e.g., logic), which can enable efficient communication paths between the processes. In various embodiments, network processor unit(s) 1110 may enable communication between computing device 1100 and other systems, entities, etc., via network I/O interface(s) 1112 (wired and/or wireless) to facilitate operations discussed for various embodiments described herein. In various embodiments, network processor unit(s) 1110 can be configured as a combination of hardware and/or software, such as one or more Ethernet driver(s) and/or controller(s) or interface cards, Fibre Channel (e.g., optical) driver(s) and/or controller(s), wireless receivers/transmitters/transceivers, baseband processor(s)/modem(s), and/or other similar network interface driver(s) and/or controller(s) now known or hereafter developed to enable communications between computing device 1100 and other systems, entities, etc. to facilitate operations for various embodiments described herein. In various embodiments, network I/O interface(s) 1112 can be configured as one or more Ethernet port(s), Fibre Channel ports, any other I/O port(s), and/or antenna(s)/antenna array(s) now known or hereafter developed. Thus, the network processor unit(s) 1110 and/or network I/O interface(s) 1112 may include suitable interfaces for receiving, transmitting, and/or otherwise communicating data and/or information in a network environment. I/O interface(s) 1114 allow for input and output of data and/or information with other entities that may be connected to computing device 1100. For example, I/O interface(s) 1114 may provide a connection to external devices such as a keyboard, keypad, a touch screen, and/or any other suitable input and/or output device now known or hereafter developed.
In some instances, external devices can also include portable computer readable (non-transitory) storage media such as database systems, thumb drives, portable optical or magnetic disks, and memory cards. In still some instances, external devices can be a mechanism to display data to a user, such as, for example, a computer monitor, a display screen, or the like. In various embodiments, control logic 1120 can include instructions that, when executed, cause processor(s) 1102 to perform operations, which can include, but not be limited to, providing overall control operations of computing device; interacting with other entities, systems, etc. described herein; maintaining and/or interacting with stored data, information, parameters, etc. (e.g., memory element(s), storage, data structures, databases, tables, etc.); combinations thereof; and/or the like to facilitate various operations for embodiments described herein. The programs described herein (e.g., control logic 1120) may be identified based upon application(s) for which they are implemented in a specific embodiment. However, it should be appreciated that any particular program nomenclature herein is used merely for convenience; thus, embodiments herein should not be limited to use(s) solely described in any specific application(s) identified and/or implied by such nomenclature. In various embodiments, any entity or apparatus as described herein may store data/information in any suitable volatile and/or non-volatile memory item (e.g., magnetic hard disk drive, solid state hard drive, semiconductor storage device, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM), application specific integrated circuit (ASIC), etc.), software, logic (fixed logic, hardware logic, programmable logic, analog logic, digital logic), hardware, and/or in any other suitable component, device, element, and/or object as may be appropriate.
Any of the memory items discussed herein should be construed as being encompassed within the broad term 'memory element'. Data/information being tracked and/or sent to one or more entities as discussed herein could be provided in any database, table, register, list, cache, storage, and/or storage structure: all of which can be referenced at any suitable timeframe. Any such storage options may also be included within the broad term 'memory element' as used herein. Note that in certain example implementations, operations as set forth herein may be implemented by logic encoded in one or more tangible media that is capable of storing instructions and/or digital information and may be inclusive of non-transitory tangible media and/or non-transitory computer readable storage media (e.g., embedded logic provided in: an ASIC, digital signal processing (DSP) instructions, software [potentially inclusive of object code and source code], etc.) for execution by one or more processor(s), and/or other similar machine, etc. Generally, memory element(s) 1104 and/or storage 1106 can store data, software, code, instructions (e.g., processor instructions), logic, parameters, combinations thereof, and/or the like used for operations described herein. This includes memory element(s) 1104 and/or storage 1106 being able to store data, software, code, instructions (e.g., processor instructions), logic, parameters, combinations thereof, or the like that are executed to carry out operations in accordance with teachings of the present disclosure. In some instances, software of the present embodiments may be available via a non-transitory computer useable medium (e.g., magnetic or optical mediums, magneto-optic mediums, CD-ROM, DVD, memory devices, etc.) of a stationary or portable program product apparatus, downloadable file(s), file wrapper(s), object(s), package(s), container(s), and/or the like. In some instances, non-transitory computer readable storage media may also be removable.
For example, a removable hard drive may be used for memory/storage in some implementations. Other examples may include optical and magnetic disks, thumb drives, and smart cards that can be inserted and/or otherwise connected to a computing device for transfer onto another computer readable storage medium. VARIATIONS AND IMPLEMENTATIONS Embodiments described herein may include one or more networks, which can represent a series of points and/or network elements of interconnected communication paths for receiving and/or transmitting messages (e.g., packets of information) that propagate through the one or more networks. These network elements offer communicative interfaces that facilitate communications between the network elements. A network can include any number of hardware and/or software elements coupled to (and in communication with) each other through a communication medium. Such networks can include, but are not limited to, any local area network (LAN), virtual LAN (VLAN), wide area network (WAN) (e.g., the Internet), software defined WAN (SD-WAN), wireless local area (WLA) access network, wireless wide area (WWA) access network, metropolitan area network (MAN), Intranet, Extranet, virtual private network (VPN), Low Power Network (LPN), Low Power Wide Area Network (LPWAN), Machine to Machine (M2M) network, Internet of Things (IoT) network, Ethernet network/switching system, any other appropriate architecture and/or system that facilitates communications in a network environment, and/or any suitable combination thereof. 
Networks through which communications propagate can use any suitable technologies for communications including wireless communications (e.g., 4G/5G/nG, IEEE 802.11 (e.g., Wi-Fi®/Wi-Fi6®), IEEE 802.16 (e.g., Worldwide Interoperability for Microwave Access (WiMAX)), Radio-Frequency Identification (RFID), Near Field Communication (NFC), Bluetooth™, mm.wave, Ultra-Wideband (UWB), etc.), and/or wired communications (e.g., T1 lines, T3 lines, digital subscriber lines (DSL), Ethernet, Fibre Channel, etc.). Generally, any suitable means of communications may be used such as electric, sound, light, infrared, and/or radio to facilitate communications through one or more networks in accordance with embodiments herein. Communications, interactions, operations, etc. as discussed for various embodiments described herein may be performed among entities that may be directly or indirectly connected utilizing any algorithms, communication protocols, interfaces, etc. (proprietary and/or non-proprietary) that allow for the exchange of data and/or information. In various example implementations, any entity or apparatus for various embodiments described herein can encompass network elements (which can include virtualized network elements, functions, etc.) such as, for example, network appliances, forwarders, routers, servers, switches, gateways, bridges, loadbalancers, firewalls, processors, modules, radio receivers/transmitters, or any other suitable device, component, element, or object operable to exchange information that facilitates or otherwise helps to facilitate various operations in a network environment as described for various embodiments herein. Note that with the examples provided herein, interaction may be described in terms of one, two, three, or four entities. However, this has been done for purposes of clarity, simplicity and example only. The examples provided should not limit the scope or inhibit the broad teachings of systems, networks, etc.
described herein as potentially applied to a myriad of other architectures. Communications in a network environment can be referred to herein as ‘messages’, ‘messaging’, ‘signaling’, ‘data’, ‘content’, ‘objects’, ‘requests’, ‘queries’, ‘responses’, ‘replies’, etc. which may be inclusive of packets. As referred to herein and in the claims, the term ‘packet’ may be used in a generic sense to include packets, frames, segments, datagrams, and/or any other generic units that may be used to transmit communications in a network environment. Generally, a packet is a formatted unit of data that can contain control or routing information (e.g., source and destination address, source and destination port, etc.) and data, which is also sometimes referred to as a ‘payload’, ‘data payload’, and variations thereof. In some embodiments, control or routing information, management information, or the like can be included in packet fields, such as within header(s) and/or trailer(s) of packets. Internet Protocol (IP) addresses discussed herein and in the claims can include any IP version 4 (IPv4) and/or IP version 6 (IPv6) addresses. To the extent that embodiments presented herein relate to the storage of data, the embodiments may employ any number of any conventional or other databases, data stores or storage structures (e.g., files, databases, data structures, data or other repositories, etc.) to store information. Note that in this Specification, references to various features (e.g., elements, structures, nodes, modules, components, engines, logic, steps, operations, functions, characteristics, etc.) 
included in ‘one embodiment’, ‘example embodiment’, ‘an embodiment’, ‘another embodiment’, ‘certain embodiments’, ‘some embodiments’, ‘various embodiments’, ‘other embodiments’, ‘alternative embodiment’, and the like are intended to mean that any such features are included in one or more embodiments of the present disclosure, but may or may not necessarily be combined in the same embodiments. Note also that a module, engine, client, controller, function, logic or the like as used herein in this Specification, can be inclusive of an executable file comprising instructions that can be understood and processed on a server, computer, processor, machine, compute node, combinations thereof, or the like and may further include library modules loaded during execution, object files, system files, hardware logic, software logic, or any other executable modules. It is also noted that the operations and steps described with reference to the preceding figures illustrate only some of the possible scenarios that may be executed by one or more entities discussed herein. Some of these operations may be deleted or removed where appropriate, or these steps may be modified or changed considerably without departing from the scope of the presented concepts. In addition, the timing and sequence of these operations may be altered considerably and still achieve the results taught in this disclosure. The preceding operational flows have been offered for purposes of example and discussion. Substantial flexibility is provided by the embodiments in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings of the discussed concepts. 
As used herein, unless expressly stated to the contrary, the phrases ‘at least one of’, ‘one or more of’, ‘and/or’, variations thereof, or the like are open-ended expressions that are both conjunctive and disjunctive in operation for any and all possible combinations of the associated listed items. For example, each of the expressions ‘at least one of X, Y and Z’, ‘at least one of X, Y or Z’, ‘one or more of X, Y and Z’, ‘one or more of X, Y or Z’ and ‘X, Y and/or Z’ can mean any of the following: 1) X, but not Y and not Z; 2) Y, but not X and not Z; 3) Z, but not X and not Y; 4) X and Y, but not Z; 5) X and Z, but not Y; 6) Y and Z, but not X; or 7) X, Y, and Z. Each example embodiment disclosed herein has been included to present one or more different features. However, all disclosed example embodiments are designed to work together as part of a single larger system or method. This disclosure explicitly envisions compound embodiments that combine multiple previously-discussed features in different example embodiments into a single system or method. Additionally, unless expressly stated to the contrary, the terms ‘first’, ‘second’, ‘third’, etc., are intended to distinguish the particular nouns they modify (e.g., element, condition, node, module, activity, operation, etc.). Unless expressly stated to the contrary, the use of these terms is not intended to indicate any type of order, rank, importance, temporal sequence, or hierarchy of the modified noun. For example, ‘first X’ and ‘second X’ are intended to designate two ‘X’ elements that are not necessarily limited by any order, rank, importance, temporal sequence, or hierarchy of the two elements. Further, as referred to herein, ‘at least one of’ and ‘one or more of’ can be represented using the ‘(s)’ nomenclature (e.g., one or more element(s)). 
In some aspects, the techniques described herein relate to a method including: by a controller configured to communicate with devices of a network, storing classifiers assigned to groups of the devices to identify device commonality that is distinct for each group; associating, to the classifiers, historical probabilities of success with which an automated response executed by one or more of the devices remediates a device alarm event; when a device of the devices reports the device alarm event, identifying each classifier to which the device belongs, each historical probability for each classifier, and classifier thresholds; determining to execute the automated response on the device using each historical probability and the classifier thresholds; after the automated response is executed on the device, performing a service test to test a service supported across the devices, and monitoring test results from the service test; and after the service test, updating each historical probability using the test results. In some aspects, the techniques described herein relate to a method, wherein updating each historical probability includes: when the test results indicate success, increasing each historical probability; and when the test results indicate failure, decreasing each historical probability. In some aspects, the techniques described herein relate to a method, wherein: identifying further includes identifying a rule with the classifier thresholds for evaluating whether to execute the automated response on the device; and determining includes determining to execute the automated response on the device by evaluating the rule using each historical probability and the classifier thresholds. 
In some aspects, the techniques described herein relate to a method, wherein: determining further includes determining to execute the automated response when evaluating the rule indicates that each historical probability exceeds or is equal to a corresponding one of the classifier thresholds. In some aspects, the techniques described herein relate to a method, further including: computing a global historical probability based on the historical probabilities of all of the classifiers, wherein determining further includes determining to execute the automated response when evaluating indicates that the global historical probability exceeds a global threshold for the global historical probability. In some aspects, the techniques described herein relate to a method, wherein: computing includes computing the global historical probability based on the historical probabilities associated with all of the classifiers including particular classifiers of the classifiers for particular groups of the groups to which the device does not belong. In some aspects, the techniques described herein relate to a method, wherein: the classifiers define logical attributes of the device; and the historical probabilities are associated with the logical attributes. In some aspects, the techniques described herein relate to a method, wherein: identifying includes identifying each classifier assigned to each group to which the device belongs; and updating includes updating each historical probability across each group. In some aspects, the techniques described herein relate to a method, further including: monitoring includes monitoring the test results of the service test over a period of time during which the service test is performed repeatedly. 
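The decision rule and probability update recited in the method aspects above can be sketched as follows. This is a minimal illustration only: the class and attribute names, the use of a mean over all classifiers as the global historical probability, and the fixed update step are assumptions for demonstration, not details taken from the claims.

```python
# Hypothetical sketch of the classifier-gated automated response described
# above; all names and the update step size are illustrative assumptions.

class ResponseController:
    def __init__(self, classifier_probs, classifier_thresholds, global_threshold):
        # classifier -> historical probability that the automated response
        # remediates a device alarm event
        self.probs = dict(classifier_probs)
        self.thresholds = dict(classifier_thresholds)
        self.global_threshold = global_threshold

    def should_execute(self, device_classifiers):
        # Rule: every per-classifier probability must meet its threshold...
        per_classifier_ok = all(
            self.probs[c] >= self.thresholds[c] for c in device_classifiers
        )
        # ...and the global probability (here assumed to be a mean over ALL
        # classifiers, including groups the device does not belong to) must
        # exceed the global threshold.
        global_prob = sum(self.probs.values()) / len(self.probs)
        return per_classifier_ok and global_prob > self.global_threshold

    def update(self, device_classifiers, test_passed, step=0.05):
        # Service-test success increases each probability; failure decreases it.
        for c in device_classifiers:
            delta = step if test_passed else -step
            self.probs[c] = min(1.0, max(0.0, self.probs[c] + delta))

ctrl = ResponseController(
    {"edge-router": 0.9, "us-west": 0.8, "ios-xe": 0.6},
    {"edge-router": 0.7, "us-west": 0.7, "ios-xe": 0.5},
    global_threshold=0.7,
)
print(ctrl.should_execute(["edge-router", "us-west"]))  # True
ctrl.update(["edge-router", "us-west"], test_passed=False)
print(round(ctrl.probs["edge-router"], 2))  # 0.85
```

In this sketch a failed service test lowers the probabilities only for the classifiers to which the alarming device belongs, so the update propagates across every group the device is a member of, as recited above.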
In some aspects, the techniques described herein relate to a method, further including: determining whether the device is in a test path of the network that was exercised by the service test based on a network address of the device; when the device is in the test path, updating each historical probability; and when the device is not in the test path, not updating each historical probability. In some aspects, the techniques described herein relate to a method, wherein: the classifiers include descriptive labels that define commonality between the devices. In some aspects, the techniques described herein relate to a method, wherein: each classifier spans multiple ones of the devices. In some aspects, the techniques described herein relate to an apparatus including: one or more network processor units to communicate with devices in a network; and a processor coupled to the one or more network processor units and configured to perform: storing classifiers assigned to groups of the devices to identify device commonality that is distinct for each group; associating, to the classifiers, historical probabilities of success with which an automated response executed by one or more of the devices remediates a device alarm event; when a device of the devices reports the device alarm event, identifying each classifier to which the device belongs, each historical probability for each classifier, and classifier thresholds; determining to execute the automated response on the device using each historical probability and the classifier thresholds; after the automated response is executed on the device, performing a service test to test a service supported across the devices and monitoring test results from the service test; and after the service test, updating each historical probability using the test results. 
In some aspects, the techniques described herein relate to an apparatus, wherein the processor is configured to perform updating each historical probability by: when the test results indicate success, increasing each historical probability; and when the test results indicate failure, decreasing each historical probability. In some aspects, the techniques described herein relate to an apparatus, wherein: the processor is configured to perform identifying by identifying a rule with the classifier thresholds for evaluating whether to execute the automated response on the device; and the processor is configured to perform determining by determining to execute the automated response on the device by evaluating the rule using each historical probability and the classifier thresholds. In some aspects, the techniques described herein relate to an apparatus, wherein: the processor is further configured to perform determining by determining to execute the automated response when evaluating indicates that each historical probability exceeds or is equal to a corresponding one of the classifier thresholds. In some aspects, the techniques described herein relate to an apparatus, wherein the processor is further configured to perform: computing a global historical probability based on the historical probabilities of all of the classifiers, wherein the processor is configured to perform determining by determining to execute the automated response when evaluating indicates that the global historical probability exceeds a global threshold for the global historical probability. 
In some aspects, the techniques described herein relate to a non-transitory computer readable medium encoded with instructions that, when executed by a processor configured to communicate with devices over a network, cause the processor to perform: storing classifiers assigned to groups of the devices to identify device commonality that is distinct for each group; associating, to the classifiers, historical probabilities of success with which an automated response executed by one or more of the devices remediates a device alarm event; when a device of the devices reports the device alarm event, identifying each classifier to which the device belongs, each historical probability for each classifier, and classifier thresholds; determining to execute the automated response on the device using each historical probability and the classifier thresholds; after the automated response is executed on the device, performing a service test to test a service supported across the devices, and monitoring test results from the service test; and after the service test, updating each historical probability using the test results. In some aspects, the techniques described herein relate to a non-transitory computer readable medium, wherein the instructions to cause the processor to perform updating each historical probability include instructions to cause the processor to perform: when the test results indicate success, increasing each historical probability; and when the test results indicate failure, decreasing each historical probability. 
In some aspects, the techniques described herein relate to a non-transitory computer readable medium, wherein: the instructions to cause the processor to perform identifying include instructions to cause the processor to perform identifying a rule with the classifier thresholds for evaluating whether to execute the automated response on the device; and the instructions to cause the processor to perform determining include instructions to cause the processor to perform determining to execute the automated response on the device by evaluating the rule using each historical probability and the classifier thresholds. One or more advantages described herein are not meant to suggest that any one of the embodiments described herein necessarily provides all of the described advantages or that all the embodiments of the present disclosure necessarily provide any one of the described advantages. Numerous other changes, substitutions, variations, alterations, and/or modifications may be ascertained by one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and/or modifications as falling within the scope of the appended claims. The descriptions of the various embodiments have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been depicted to scale.

DETAILED DESCRIPTION

The following definitions are used herein in the specification and the appended claims. “A cloud” or “a computing cloud” is interpreted in the sense of cloud computing or, synonymously, distributed computing over a network unless otherwise specified. “A server” is interpreted in the sense of computing. That is, a server is a computing server. A server is usually equipped with one or more computing processors for executing program instructions, and one or more storages for storing data. A server may be a standalone computing server (i.e. a physical server), a distributed server in the cloud, or a virtual server for handling clients' instructions. The one or more storages may be, for example, hard disks or solid-state disk drives. As used herein in the specification and appended claims, “a computing device” is a device having computing power, communication capability and networking capability. The computing power is realized by implementing the computing device with one or more computing processors. Usually, the computing device is further installed with one or more data storages, one or more input/output devices, etc., similar to implementing a computer. The one or more computing processors are appropriately programmed to enable the computing device to function as desired. The communication capability is usually realized by installing one or more communication transceivers at the computing device, where each communication transceiver is configured to be compliant with a public communication standard, such as an Ethernet standard, Bluetooth, NFC, WiFi, 5G or any other terrestrial mobile communication standard. A communication transceiver may support wireline communication over copper cables, fiber-optic communication, or wireless communication. 
The networking capability is usually realized by implementing one or more networking protocols at the one or more computing processors, where an individual networking protocol configures the one or more computing processors to network the computing device with other computing devices such that all the networked computing devices are mutually communicable under the individual networking protocol. A networking protocol may be an IP-based protocol such as the TCP/IP protocol, whereby two computing devices implemented with the TCP/IP protocol may be connected through the Internet. The networking protocol may be a non-IP protocol, such as the iOS AirDrop protocol. Hence, computing devices implemented with the same OS containing the non-IP protocol can be networked to form a private network but another computing device using a different OS is not joinable to the private network. One advantage of using the non-IP protocol is that more efficient operation may be achieved over using an IP-based protocol, resulting in improved performance. A computing device may be a desktop computer, a workstation, a notebook computer, a tablet computer, a smartphone, a smartwatch, etc. As used herein, “a LAN” is a communication network for connecting plural computing devices where the network is interfaced with the Internet by a gateway such that the gateway acts as an Internet service provider for providing Internet services to the computing devices that are attached to the network. As a first example, the LAN is a wired network or WLAN managed by a commercial Internet service provider. As a second example, the LAN is a 5G network for connecting smartphones where the 5G network has one or more gateways for providing Internet services to the smartphones. As a third example, the LAN is a WiFi hotspot connected to smart devices one of which is a smartphone connectable to a public 5G network, and the gateway connecting the Internet and the hotspot is located in the public 5G network. 
As a fourth example, the LAN is a multihop ad hoc network having a gateway connected to the Internet. As used herein, “cross-LAN unicast communication” means unicast communication between a first computing device and a second computing device where the first and second computing devices are attached to different LANs. As used herein, “cross-LAN multicast communication” means multicast communication from a first computing device to plural second computing devices where each second computing device is attached to a certain LAN different from the LAN that the first computing device is attached to. As used herein, “intra-LAN unicast communication” means unicast communication between a first computing device and a second computing device where the first and second computing devices are attached to the same LAN. Similarly, “intra-LAN multicast communication” means multicast communication from a first computing device to plural second computing devices where all the second computing devices and the first computing device are attached to the same LAN. Occasionally, the aforementioned LAN is denoted herein as a common LAN for convenience. Disclosed herein is a method for communicating data among computing devices, where the computing devices are distributed over plural LANs interconnected by the Internet. Advantageously, the disclosed method enables the data to be transmitted across different LANs without a need for the computing devices to acquire static IP addresses in accomplishing cross-LAN unicast communication. Additionally, the method supports multicasting of data to a group of computing devices at different LANs with efficiency in multicast data transmission. Furthermore, the disclosed method provides a mechanism for group member contact synchronization. 
The disclosed method is illustrated with the aid ofFIGS.1and3.FIG.1depicts a typical situation of cross-LAN unicast communication and intra-LAN unicast communication for illustrating the disclosed method, where computing devices511-513,521-522,531are distributed over three LANs510,520,530and data from a computing device in a LAN are transmitted to remaining computing devices over the three LANs510,520,530. (As an example for illustrating the meaning of “remaining computing devices”, if the computing device511in the LAN510transmits data to “remaining computing devices over the three LANs510,520,530,” these remaining computing devices are the computing devices512-513,521-522,531.)FIG.3depicts exemplary steps of the disclosed method, where the two cases of cross-LAN unicast communication and intra-LAN unicast communication are considered. Refer toFIG.1. The computing devices511-513are attached to a first LAN510. The computing devices521,522are attached to a second LAN520. The computing device531is attached to a third LAN530. The three LANs510,520,530are different domains that are separately managed. Each LAN assigns dynamic IP addresses to computing devices attached to this LAN. However, static IP addresses, which are centrally managed on a global scale by international authorities and are therefore not easy to obtain, are not available for allocation to the computing devices511-513,521-522,531. Refer toFIG.3. In step310of the disclosed method, a dedicated router550accessible by the computing devices511-513,521-522,531is set up in the Internet500for routing a data message sent from one computing device to another computing device in cross-LAN unicast communication. Each of the computing devices511-513,521-522,531may be provided with an IP address of the dedicated router550such that the dedicated router550is accessible to the computing devices511-513,521-522,531. 
Furthermore, an InS address is assigned to an individual computing device among the computing devices511-513,521-522,531in the step310. Advantageously, the InS address is recognizable by the dedicated router550and enables the dedicated router550to communicate with the individual computing device. It is noticed that the individual computing device attached to a certain LAN is accessible from the Internet500if an entry point of this LAN is known and additional accessibility information for accessing the individual computing device from the entry point is also known. In certain embodiments, the InS address of the individual computing device includes two items. The first item is a static IP address of a gateway of a first LAN, where the individual computing device is attached to the first LAN, and the gateway connects the first LAN and the Internet500. Thus, the gateway forms an entry point of the first LAN from the Internet500. By using the first item, the dedicated router550is communicable with the gateway. The second item is a plurality of access-specific data for enabling the gateway to communicate with the individual computing device. One example of access-specific data is a UDP port number used as an identifier of the individual computing device in communicating with the gateway. If a smartphone or a tablet computer is used as a computing device, the computing device may download an application from AppStore or Google Play. The application is executable in the computing device such that the IP address of the dedicated router550and the assigned InS address of the computing device are available to the computing device. Without loss of generality, consider the computing device511for illustrating cross-LAN unicast communication and intra-LAN unicast communication as enabled by the disclosed method. Exemplarily, the method includes steps321-323for accomplishing cross-LAN unicast communication. 
Consider a representative case that the computing device511attached to the first LAN510has a first data message to be sent to the computing device531attached to the third LAN530. Denote the computing device511as a source computing device511, and the computing device531as a destination computing device531. In the step321, the source computing device511embeds a first InS address into the first data message to form an extended first data message. The first InS address is the InS address assigned to the destination computing device531. After the extended first data message is formed, the source computing device511forwards the extended first data message to the dedicated router550(denoted by a path121) in the step322. Thereafter, the dedicated router550forwards the extended first data message to the destination computing device531(denoted as a path131) according to the first InS address extracted from the extended first data message in the step323. Advantageously, it allows the first data message to be sent across different LANs (from the first LAN510to the third LAN530) without a need for the destination computing device531to acquire a static IP address in accomplishing cross-LAN unicast communication. Similarly, all the computing devices511-513,521-522,531are not required to acquire respective static IP addresses to accomplish cross-LAN unicast communication. Another advantage of using the dedicated router550as an intermediate in delivering the extended first data message from the source computing device511to the destination computing device531is that the extended first data message is not required to be decoded and then handled by a centralized application server during the delivery. It enhances privacy and security of transmitted data in that no third-party server stores the data and owns the data, and there is no data leakage as no data is saved at the third party server. 
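The cross-LAN unicast flow of steps321-323can be sketched as follows. The message layout, field names, and the in-memory delivery callback standing in for real network transport are all hypothetical; an InS address is modeled here, per the embodiment above, as a gateway's static IP address plus a UDP port identifying the device behind that gateway.

```python
# Illustrative sketch of steps 321-323 (embed InS address, forward to the
# dedicated router, router forwards to the destination). All structures
# are invented for demonstration.

def make_ins_address(gateway_ip, udp_port):
    # An InS address: the static IP of the destination LAN's gateway plus
    # access-specific data (a UDP port identifying the device).
    return {"gateway_ip": gateway_ip, "udp_port": udp_port}

def extend_message(data_message, destination_ins):
    # Step 321: the source embeds the destination's InS address into the
    # data message to form an extended data message.
    return {"ins": destination_ins, "payload": data_message}

def dedicated_router_forward(extended_message, deliver):
    # Step 323: the router extracts the InS address and forwards the
    # message; it never decodes or stores the payload itself.
    ins = extended_message["ins"]
    deliver(ins["gateway_ip"], ins["udp_port"], extended_message)

delivered = []
ins_531 = make_ins_address("203.0.113.7", 40001)   # hypothetical address of device 531
msg = extend_message({"text": "hello"}, ins_531)   # step 321
# Step 322 (forward to router) is elided; the router then delivers:
dedicated_router_forward(msg, lambda ip, port, m: delivered.append((ip, port, m)))
print(delivered[0][0], delivered[0][1])  # 203.0.113.7 40001
```

Note that the router needs only the InS address to route the message, which is why the destination device itself never needs a static IP address.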
Note that the third LAN530has a gateway553serving as an entry point of the third LAN530from the Internet500. The first InS address may include a static IP address of the gateway553. The disclosed method further includes step331for accomplishing intra-LAN unicast communication. Consider a case that the source computing device511has a second data message to be sent to the computing device512. For convenience, denote the computing device512as a destination computing device512. Note that both the source computing device511and the destination computing device512are attached to the first LAN510, which is denoted as a common LAN510for simplicity. In the step331, the source computing device511forwards the second data message to the destination computing device512(via a path111) based on a data transmission mechanism employed by the common LAN510. In one embodiment of the step331, the second data message is forwarded to the destination computing device512according to a dynamic IP address assigned by the common LAN510to the destination computing device512. In another embodiment of the step331, the second data message is forwarded to the destination computing device512according to a non-IP protocol supported by the common LAN510. As mentioned above, using the non-IP protocol may achieve a higher operational efficiency in transmitting the second data message over using the IP-based protocol to thereby improve the data-transmission performance. Certain embodiments of the disclosed method provide multicast communication across different LANs.FIG.2depicts a typical situation of multicast communication within the same LAN and across different LANs. For illustration of the disclosed method, consider a representative case that the computing device511multicasts a third data message to the computing devices512-513,521-522,531. 
For convenience, denote the computing device511as a source computing device, and each of the computing devices512-513,521-522,531as a destination computing device.FIG.4depicts exemplary steps of the disclosed method for providing cross-LAN multicast communication and intra-LAN multicast communication. A communication group of members selected from the computing devices511-513,521-522,531is first established, where the members are eligible to receive data messages of any member of the group in multicast communication. In the representative case considered here, the members are the computing devices511-513,521-522,531. For convenience, denote the computing devices511-513,521-522,531as members511-513,521-522,531, respectively. Further consider that the member511has a third data message to be multicast to remaining members in the group, i.e. the members512-513,521-522,531. Denote the member511as a source member511, and the members512-513,521-522,531as destination members512-513,521-522,531, respectively. In step410, the destination members512-513,521-522,531are divided into a first set of members and a second set of members such that the source member511and each member in the first set are attached to different LANs and such that the source member511and each member in the second set are attached to a same LAN, namely, the first LAN510or the common LAN510. Hence, the first set of members consists of the destination members521-522,531, and the second set of members consists of the destination members512,513. Steps421-425are used for accomplishing cross-LAN multicast communication. In the step421, the source member511creates or obtains a multicast InS address for representing a first list of respective InS addresses of members in the first set. It follows that the multicast InS address is associated with the first list of respective InS addresses. 
The multicast InS address is recognizable by the dedicated router550and informs the dedicated router550to perform multicast communication. The multicast InS address may be obtained by the source member511from the dedicated router550, where the dedicated router550generates the multicast InS address. Alternatively, the multicast InS address may be created by the source member511based on, e.g., an application downloaded from the operator of the dedicated router550and executable in the source member511. In the step422, the source member511informs the dedicated router550that the multicast InS address is associated with the first list of respective InS addresses. It follows that when the dedicated router550receives the third data message with the multicast InS address, the dedicated router550is aware that the third data message is multicast to different destination members according to the first list of respective InS addresses. In the step423, the source member511embeds the multicast InS address into the third data message to form an extended third data message. After the extended third data message is formed in the step423, the source member511forwards the extended third data message to the dedicated router550(via a path221) in the step424. After the dedicated router550receives the extended third data message from the source member511, the dedicated router550multicasts the extended third data message to all members in the first set according to the first list of respective InS addresses (via paths231,232) in the step425. Generally, when the dedicated router550receives the extended third data message, the multicast InS address is first extracted from the extended third data message. The first list of respective InS addresses is then identified. Existing mechanisms in the Internet500for multicasting over the Internet500are used to forward the extended third data message to the destination members521-522,531in the first set. 
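The partition of step410and the cross-LAN multicast path of steps421-425can be sketched as follows. The member records, the multicast InS address format, and the router's registry are invented for illustration and are not part of the disclosure.

```python
# Hypothetical sketch of steps 410 and 421-425; data structures assumed.

def partition_members(source_lan, members):
    # Step 410: split destination members by whether they share the
    # source member's LAN.
    cross = [m for m in members if m["lan"] != source_lan]   # first set
    intra = [m for m in members if m["lan"] == source_lan]   # second set
    return cross, intra

class DedicatedRouter:
    def __init__(self):
        self.multicast_groups = {}   # multicast InS address -> InS address list

    def register(self, mcast_addr, ins_list):
        # Step 422: associate the multicast InS address with the first list
        # of respective InS addresses.
        self.multicast_groups[mcast_addr] = list(ins_list)

    def multicast(self, extended_message):
        # Step 425: fan the message out to every InS address in the list.
        return [(ins, extended_message["payload"])
                for ins in self.multicast_groups[extended_message["mcast"]]]

members = [
    {"name": "512", "lan": "LAN510", "ins": None},           # same LAN as source
    {"name": "521", "lan": "LAN520", "ins": "gw551:40002"},  # hypothetical InS
    {"name": "531", "lan": "LAN530", "ins": "gw553:40001"},
]
cross, intra = partition_members("LAN510", members)          # step 410
router = DedicatedRouter()
router.register("mcast-1", [m["ins"] for m in cross])        # steps 421-422
out = router.multicast({"mcast": "mcast-1", "payload": "hi"})  # steps 423-425
print(len(out), len(intra))  # 2 cross-LAN deliveries; 1 member left for intra-LAN delivery
```

Members in the second set (here, member512) are not routed through the dedicated router at all; they receive the message directly over the common LAN.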
By the arrangement set forth in the steps421-425, it advantageously allows the third data message to be multicast across different LANs without a need for all destination members521-522,531in the first set to acquire respective static IP addresses in accomplishing cross-LAN multicast communication. Similar to the advantage enjoyed in cross-LAN unicast communication as mentioned above, using the dedicated router550instead of a centralized application server as an intermediate in cross-LAN multicast communication enhances privacy and security of transmitted data. Step431is used for accomplishing intra-LAN multicast communication. In the step431, the source member511forwards the third data message to each member in the second set, i.e. destination members512,513, based on a data transmission mechanism employed by the common LAN510. Advantageously, it allows all members in the second set to receive the third data message directly through the common LAN510without routing through and additionally burdening the dedicated router550. In one embodiment of the step431, the third data message is forwarded to all members in the second set according to IP multicast as standardized by IETF. In another embodiment of the step431, a non-IP protocol supported by the common LAN510is used in forwarding the third data message to all members in the second set. The non-IP protocol may be, for instance, a multicasting protocol of WiFi hotspot. In the steps321,423, it is necessary to embed an InS address into a data message.FIG.5depicts a layer structure of an IP datagram581usable for embedding an InS address into a data message in cross-LAN multicast/unicast communication. The IP datagram581is composed of an IP header591and an IP payload596. On top of the IP layer is the transport layer. TLPs that are used in the transport layer include TCP, UDP, and any other TLP as standardized by IETF. The IP payload596is a TLP datagram582. 
The TLP datagram582is composed of a TLP header592and a TLP payload597. If TCP is used as a TLP in transmitting the IP payload596, the TLP datagram582is a TCP datagram so that the TLP header592and the TLP payload597are a TCP header and a TCP payload, respectively. Similarly, if UDP is used as the TLP, the TLP datagram582is a UDP datagram so that the TLP header592and the TLP payload597are a UDP header and a UDP payload, respectively. In the disclosed method, advantageously, on top of the transport layer is an InSP layer, which is introduced to carry the InS address. In the InSP layer, an InSP datagram583is contained in the TLP payload597. The InSP datagram583is composed of an InSP header593and an InSP payload598. The InSP header593contains the InS address. On top of the InSP layer is an application layer. The InSP payload598is used to store application data599. The application data599are data taken from the data message. In certain embodiments of cross-LAN unicast communication exemplified by the steps321-323, the extended first data message is transmitted from the source computing device511to the destination computing device531based on a TLP such that the extended first data message is encoded to be a plurality of TLP datagrams, where a payload of an individual TLP datagram includes the first InS address, which is assigned to the destination computing device531. The TLP is usually selected from TCP and UDP. In certain embodiments of cross-LAN multicast communication exemplified by the steps421-425, the extended third data message is transmitted from the source member511to all the members in the first set based on a TLP such that the extended third data message is encoded to be a plurality of TLP datagrams, where a payload of an individual TLP datagram includes the multicast InS address. Similarly, the TLP is usually selected from TCP and UDP. 
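The InSP layer ofFIG.5can be sketched as follows; the fixed 16-byte InS-address header is an assumed format for illustration only, since the specification does not fix the header layout:

```python
# Illustrative sketch: InSP datagram = InSP header (InS address) || InSP payload
# (application data); the whole datagram rides inside a TLP payload (e.g. UDP).
INS_ADDR_LEN = 16  # assumed fixed header size for the InS address

def build_insp_datagram(ins_address: bytes, app_data: bytes) -> bytes:
    # Pad the InS address to the assumed header width, then append the payload.
    header = ins_address.ljust(INS_ADDR_LEN, b"\0")[:INS_ADDR_LEN]
    return header + app_data

def parse_insp_datagram(datagram: bytes):
    # Split the datagram back into the InS address and the application data.
    ins_address = datagram[:INS_ADDR_LEN].rstrip(b"\0")
    app_data = datagram[INS_ADDR_LEN:]
    return ins_address, app_data

dgram = build_insp_datagram(b"ins-531", b"hello")
addr, data = parse_insp_datagram(dgram)
```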
As mentioned above, the communication group of members selected from the computing devices511-513,521-522,531is first established for facilitating multicast communication across different LANs. In establishing the communication group, the group is updated from time to time in response to admission of new members and leave of existing members. Although the communication group is primarily used for facilitating multicast communication, information about membership of the group may also be used for other purposes as deemed appropriate by those skilled in the art, such as for initiating unicast communication with a certain member in the group.FIG.6depicts a flowchart showing exemplary steps in a process of establishing the communication group. In step610, a host for managing admission of the new members into the group and removal of the existing members from the group is identified among the members. The host may be selected or appointed among the members as a manager for managing new-member admission and existing-member removal. Usually, the host is a particular computing device that initiates creation of the group. That is, this particular computing device is the earliest member of the group. Without loss of generality, consider an illustrative example that: the computing device511is the host; the computing devices511-513and521are existing members of the group; the existing member521is about to leave the group; and the computing device522is about to join the group as a new member. In step620, an individual member in the group creates and keeps a local list of group members at the individual member. The local list of group members records all the members in the group, and includes a timestamp of last update for indicating recency of the local list of group members. In the illustrative example, each of the existing members511-513and521maintains a corresponding local list of group members. 
In certain embodiments, the timestamp of last update is a version number of the list of group members. In step630, the host511updates the corresponding local list of group members of the host in response to occurrence of new-member admission or existing-member removal. It follows that the host's corresponding local list of group members is an authentic list of group members for the communication group. Usually, an existing member is removed from the group on request from a human user of this existing member, and a new member contacts the host511to request approval of joining the group. In the illustrative example, the existing member521informs the host511that the existing member521decides to leave the group. The host511updates the authentic list of group members by removing a name or an identifier of the existing member521therefrom, preferably with deleting any accessibility information of the existing member521from the authentic list of group members. In the illustrative example, the new member522sends a request to the host511for approving the new member522to join the group. Since the host511and the new member522are attached to different LANs, the request may be sent to the host511through the Internet500. If the host511and the new member522are in close proximity, it is possible that the request is sent from the new member522to the host511via some short-range communication means, such as WiFi or sending of a QR code. Upon joining the group, the new member522creates a corresponding local list of group members and stores this list in the new member522according to the step620. In the step630, when the host511admits the new member522to join the group, preferably the host511acquires a copy of InS address of the new member522, and sends a copy of InS address of the host511to the new member522. 
The copy of InS address of the new member522is further incorporated into the authentic list of group members in the step630such that the authentic list of group members includes respective InS addresses of all the members in the group. In step640, a non-host member in the group regularly updates a corresponding local list of group members of the non-host member with the authentic list of group members. The non-host member is a member of the group except the host511. Preferably and practically, when the authentic list of group members is updated or created, the host511stores the authentic list of group members at a cloud server552in the Internet500(in the step630) for allowing the non-host member to update the non-host member's corresponding list of group members (in the step640) via obtaining the authentic list of group members from the cloud server552. As an advantage, the host511is free from a burden of being frequently contacted to provide the authentic list of group members to the non-host members. Although it is convenient for the non-host members to update their corresponding lists of group members through contacting the cloud server552, this updating approach is disrupted if some members attached to the same common LAN lose connectivity to the Internet500. It happens when, for instance, the common LAN is an ad hoc network and a computing device attached to the ad hoc network and responsible for providing Internet connectivity is offline. Advantageously, the disclosed method enables group member contact synchronization among different members that are connected even in an absence of Internet connectivity.FIG.7depicts a flowchart showing exemplary steps of group member contact synchronization without a need for Internet connectivity. In the illustrative example to be used for illustrating the group member contact synchronization, assume that the first LAN510is disconnected from the Internet500. 
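The host's maintenance of the authentic list of group members (steps620,630) can be sketched as follows, assuming a version number serves as the timestamp of last update; the class and field names are illustrative:

```python
# Illustrative sketch of the authentic list kept by the host 511.
class GroupList:
    def __init__(self):
        self.members = {}   # member name -> InS address
        self.version = 0    # timestamp of last update (here: a version number)

    def admit(self, name, ins_address):
        # New-member admission: record the member's InS address.
        self.members[name] = ins_address
        self.version += 1

    def remove(self, name):
        # Existing-member removal, deleting its accessibility information.
        self.members.pop(name, None)
        self.version += 1

authentic = GroupList()
for name, addr in [("511", "ins-511"), ("512", "ins-512"),
                   ("513", "ins-513"), ("521", "ins-521")]:
    authentic.admit(name, addr)
authentic.remove("521")            # existing member 521 leaves the group
authentic.admit("522", "ins-522")  # new member 522 joins the group
```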
Consider that all the members511-513attached to the first LAN510are aware of a request for synchronizing respective lists of group members among the members511-513,522. Of course, the new member522is not aware of the request due to disconnection of the first LAN510from the Internet500. In addition, some of the members attached to the first LAN510may not be aware that the former member521has left the group. The members511-513are aware of the request due to either being an initiator of the request or being informed by the initiator. Usually, a first member among the members511-513in the first LAN510is the initiator of the request and multicasts the request to all the members in the group (not only in the first LAN510) except the first member (step710). In the illustrative example, assume that the member512is the initiator, and is not aware of the leave of the former member521and the addition of the new member522. The member512multicasts the request to the members511,513,521. Only the members511,513receive the request. In response to initiating or receiving the request, each member aware of the request in the group performs steps720,730,740,750,760. Hence, these steps are individually performed by the members511-513. For illustration hereinafter, consider that these steps are performed by the member512. Executing these steps by any of the other members511,513is similar. In the step720, the member512multicasts the local list of group members stored at the member512to all members listed in the list of group members except the member512itself, namely, to the members511,513,521. Only the members511,513receive copies of the member512's local list of group members. In the step730, the member512receives one or more local lists of group members from any member in the group other than the member512. In the illustrative example, assume that the member512successfully receives two local lists of group members from the members511,513. 
In the step740, the member512augments the member512's local list of group members with the two received local lists of group members to form a plurality of collected local lists of group members. In the step750, the member512determines a most-recent local list of group members having a latest timestamp of last update in the plurality of collected local lists of group members. As the member511is also the host511, the member512determines that the member511's local list of group members is the most-recent one. In the step760, the member512updates its local list of group members with the most-recent one. Hence, the member512is allowed to update its local list of group members without a need to access the Internet500. Since the steps720,730,740,750,760are individually executed by the members511-513, their corresponding local lists of group members are updated with the most-recent local list of group members. Group member contact synchronization is achieved. The present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiment is therefore to be considered in all respects as illustrative and not restrictive. The scope of the invention is indicated by the appended claims rather than by the foregoing description, and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
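The list-merge logic of steps720-760described above can be sketched as follows, modeling each local list of group members as a (timestamp, members) pair; the names and sample values are illustrative assumptions:

```python
# Illustrative sketch of steps 740-760: augment the local list with the
# received lists, then adopt the one with the latest timestamp of last update.
def synchronize(local_list, received_lists):
    collected = [local_list] + list(received_lists)  # step 740
    return max(collected, key=lambda lst: lst[0])    # steps 750-760

list_512 = (3, ["511", "512", "513", "521"])  # stale: 521 still listed
list_513 = (3, ["511", "512", "513", "521"])
list_511 = (5, ["511", "512", "513", "522"])  # the host's authentic list

updated = synchronize(list_512, [list_511, list_513])
```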
11943133
DETAILED DESCRIPTION OF THE EMBODIMENTS To more specifically describe the present invention, the present invention is described in detail below with reference to the accompanying drawings. FIG.1shows a schematic diagram of a storage model for a service network, wherein the storage model is mainly used for describing various resources stored in the service network, the resources including a service entity and service call information. The service entity is expressed in the form of a sextuple, i.e., S={ID,Address,Info,Dinfo,Entity,Keys} wherein "ID" refers to a unique identifier of a service in the cross-boundary service network; "Address" refers to an access address of a service resource in the network, the access address being represented with a domain name in the cross-boundary service network; "Info" refers to key information for describing the service, including a service name and service resource value; "Dinfo" refers to detailed information for describing the service, including all information of the service, such as parameters, pictures, and a software development kit (SDK); "Entity" refers to description of the service entity, which is information of describing the service with a Web service description language; and "Keys" refers to a keyword set in the service, which facilitates the retrieval and use of the service. The service call information is expressed in the form of a sextuple, i.e., C={From,To,Param,Out,Start,End} wherein "From" refers to an initiator identifier of service call; "To" refers to a called service identifier; "Param" refers to a parameter passed in during the service call; "Out" refers to a service call result; "Start" refers to a start time of the service call; and "End" refers to an end time of the service call. FIG.2shows a schematic diagram of a reference service. 
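The two sextuples above can be sketched as plain records; the field types and the sample values are illustrative assumptions, not part of the specification:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceEntity:
    ID: str       # unique identifier in the cross-boundary service network
    Address: str  # access address, represented with a domain name
    Info: str     # key information: service name and resource value
    Dinfo: str    # detailed information: parameters, pictures, SDK
    Entity: str   # Web-service-description-language description
    Keys: list = field(default_factory=list)  # keyword set for retrieval

@dataclass
class ServiceCall:
    From: str     # initiator identifier of the service call
    To: str       # called service identifier
    Param: dict   # parameters passed in during the call
    Out: object   # service call result
    Start: float  # start time of the call
    End: float    # end time of the call

svc = ServiceEntity(ID="svc-42", Address="weather.example",
                    Info="weather lookup", Dinfo="parameters, pictures, SDK",
                    Entity="<wsdl/>", Keys=["weather"])
call = ServiceCall(From="user-1", To="svc-42", Param={"q": "x"},
                   Out=None, Start=0.0, End=1.5)
```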
Proposing a reference service mechanism for services with same or similar functions in a service network, with differences in format and usage mode, and establishing a unified and standardized usage view for a user, comprises the following steps:(1): initialization of reference services: performing function clustering on existing services in the service network by means of data mining, and generating an initial reference service blueprint containing all call parameters for each type of service set;(2): revision by a domain expert: on the basis of the initialization, modifying, merging, and deleting, by the domain expert, the existing call parameters according to existing related service discussions, performing revision to form the final reference services, and publishing the final reference services into the service network;(3): mapping of the reference services: associating, by a user provider, a service of the user provider with a reference service published in the network according to the following rules; and(4): call of the reference services: when the user calls the reference services, selecting and returning, by a platform, an optimal result according to a routing strategy. FIG.3shows a schematic diagram of mapping of reference services. 
Establishing a method for mapping between the reference services and ordinary services to map the multiple ordinary services with similar functions into the unified reference services, and generating a configuration file, comprises the following steps:(1): recommendation by a system: mapping, by a service provider, services of the service provider to the reference services published in the network, and automatically recommending, by the system, a reference service according to a function;(2): semi-automatic mapping: for each parameter (an input parameter and an output parameter) p_i of a selected service, calculating semantic similarities between the parameter and all parameters q_j of the reference service, and then taking a maximum value to determine a relationship between the parameters: max_j Tsim(p_i, q_j), j = 1, 2, . . . , n wherein the maximum value is cyclically calculated for "m" times in total, "Tsim" is a semantic similarity between two parameters calculated according to parameter names, parameter descriptions, and the like, m is the number of parameters for the selected service, and n is the number of parameters for the reference service;(3): manual modification by the user: after the semi-automatic mapping, automatically generating, by the system, an initial mapping relationship diagram, and manually adjusting, by the user, an association between the parameters and the relationship therebetween; and(4): audit by the system: after the user confirms the modification, automatically generating, by the system, a mapping configuration file and uploading the mapping configuration file to a service router, and after audit is passed by an administrator, allowing the service to enter the candidate reference service list. FIG.4shows a schematic diagram of reference service call. 
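The semi-automatic mapping step can be sketched as follows; the toy Tsim (character overlap of lowercased names) is a stand-in assumption, since the actual similarity is computed from parameter names, descriptions, and the like:

```python
# Illustrative sketch of the per-parameter argmax in semi-automatic mapping.
def tsim(p, q):
    # Assumed toy similarity: Jaccard overlap of characters in lowercased names.
    a, b = set(p.lower()), set(q.lower())
    return len(a & b) / max(len(a | b), 1)

def map_parameters(service_params, reference_params):
    # For each p_i (m in total), take the q_j with the maximum Tsim (j = 1..n).
    return {p: max(reference_params, key=lambda q: tsim(p, q))
            for p in service_params}

mapping = map_parameters(["city_name", "date"],
                         ["cityName", "queryDate", "lang"])
```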
When the user calls the reference services, a service switch first forwards a request of the user to the service router that the reference services belong to, and the service router calls the corresponding services according to the routing strategy. The routing strategy comprises:(1) an optimal performance strategy, wherein the strategy focuses on an overall performance load of a reference network; after the service router receives the request of the user for the reference services, all candidate services associated with the reference services are searched for; and for each of the candidate services, a historical call record of the candidate service is queried, the performance of the candidate reference service is calculated and evaluated, and then the optimal ordinary service is selected: service quality = F(count_call)/(F(avg_time)*F(access_time)) wherein count_call represents the number of times of calling the reference services within a period of time, avg_time represents an average response time, and access_time represents a time of latest call; and(2) a fastest strategy, wherein the strategy focuses on a return speed of the services called by the user, without considering the overall performance load of the network; and after the service router receives the request of the user for the reference services, all candidate services associated with the reference services and the mapping configuration file of the reference services are searched for, a parameter format is converted according to the configuration file, call requests for all the candidate services are simultaneously initiated, and a fastest result is returned according to service response times. As shown inFIG.4(b), the service router initiates call requests for both a service a and a service b, and the service a responds faster than the service b, so the service router preferentially returns a call result of the service a.
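The optimal performance strategy can be sketched as follows; the choice of F (identity-like, with recency measured as seconds since the latest call) is an illustrative assumption, since F is not specified above:

```python
# Illustrative sketch: score each candidate service from its historical call
# record and select the best one.
def service_quality(count_call, avg_time, seconds_since_last_call):
    # Higher call volume raises the score; slow responses and staleness lower it.
    return count_call / (avg_time * (1.0 + seconds_since_last_call))

def pick_candidate(candidates):
    # candidates: {name: (count_call, avg_time, seconds_since_last_call)}
    return max(candidates, key=lambda name: service_quality(*candidates[name]))

best = pick_candidate({
    "service_a": (120, 0.2, 10.0),   # busy, fast, recently called
    "service_b": (40, 0.8, 300.0),   # slower and stale
})
```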
11943134
DESCRIPTION OF EMBODIMENTS Hereinafter, an embodiment of the present invention will be described with reference to the drawings. Here, components with corresponding functions are denoted by the same reference signs throughout all drawings in the specification, and description thereof will be omitted as appropriate. Configuration of Embodiment FIG.1is a block diagram illustrating a configuration of a transfer system using transfer apparatuses according to an embodiment of the present invention. A transfer system40illustrated inFIG.1is configured to include an external server42in which a management CTL (also referred to as a CTL)42athat is a control function is mounted and a group of transfer apparatuses44ato44dthat are internal nodes controlled by the CTL42avia a management SW43. External communication apparatuses50aand50bsuch as a router outside the internal nodes are connected to the transfer apparatuses44cand44d. The transfer apparatuses44aand44bto which the external communication apparatuses50aand50bare not connected are spine SWs controlled by the management CTL42aand will also be referred to as spines44aand44bin this example. The transfer apparatuses44cand44dto which the external communication apparatuses50aand50bare connected are leaf SWs controlled by the CTL42aand will also be referred to as leaves44cand44din this example. Each of the transfer apparatuses44ato44dincludes an RIB construction unit45for internal routing and an FIB construction unit46for internal routing, which are NOSs that realize a routing function and a packet transfer function as illustrated in the internal routing layer L1. The RIB construction unit45for internal routing will also be referred to as an internal RIB construction unit45or an internal RIB45, and the FIB construction unit46for internal routing will also be referred to as an internal FIB construction unit46or an internal FIB46. 
The internal RIBs45are logically connected to each other on software as represented by the dashed lines. Note that the internal routing layer L1configures the internal control unit according to the aspects. Also, any one of the transfer apparatuses44ato44dincludes, as NOSs, an RIB construction unit48for external routing and an FIB construction unit49for external routing as illustrated in an external routing layer L2. The RIB construction unit48for external routing will also be referred to as an external RIB construction unit48or an external RIB48, and the FIB construction unit49for external routing will also be referred to as an external FIB construction unit49or an external FIB49. In addition, the external RIB construction unit48and the external FIB construction unit49are provided as a backup in any one of the transfer apparatuses44ato44din which the external routing layer L2is not disposed. Note that the external routing layer L2configures the external control unit described in the aspects. The internal RIB45, the internal FIB46, the external RIB48, and the external FIB49described above are implemented in each of the transfer apparatuses44ato44d. However, inFIG.1, in order to easily identify the units for the internal routing and the units for the external routing, the internal RIB construction unit45and the internal FIB construction unit46are illustrated in the internal routing layer L1separately illustrated above the transfer apparatuses44ato44d, and the external RIB construction unit48and the external FIB construction unit49are similarly separately illustrated in the external routing layer L2. The group of transfer apparatuses44ato44dwith such a configuration will also be referred to as logical nodes. In addition, each of the external communication apparatuses50aand50balso includes an RIB construction unit55and an FIB construction unit56. 
The RIB construction unit55and the FIB construction unit56are connected, as an internal routing layer for each of the external communication apparatuses50aand50b, to the external RIB48. The internal RIB45resolves routing to determine a destination (destination address) to which routing information is transferred via communication routing in the logical nodes and stores, in the internal RIB DB45tc(seeFIG.2), internal routing information with which the destination address obtained through the resolution is associated. In other words, the internal RIB45resolves the communication routing in the logical nodes and performs the routing. The internal FIB46reads a routing resolved by the routing, that is, the internal routing information, which is stored in the RIB DB45tc, with which the destination address is associated, and reflects the internal routing information in the packet transfer function (the transfer units52aand52cinFIG.2) of each of the transfer apparatuses44ato44d. In other words, the internal FIB46stores the internal routing information, with which the destination address is associated, in the FIB DBs46taand46tc(the third DB or the third storage section) of the transfer units52aand52cillustrated inFIG.2. Also, the external RIB48aand the external FIB49a(seeFIG.2) are implemented in any one of the transfer apparatuses44ato44dillustrated inFIG.1as described above. The external RIB48aperforms routing for resolving all communication routings related to the transfer destination of the external routing information when the external routing information is notified from the outside of the logical nodes. The external FIB49areflects a routing outside the logical nodes that are resolved by the routing in the packet transfer functions (the transfer units52aand52cinFIG.2) of all the transfer apparatuses44ato44din the logical nodes. The external routing information is information of the routing of the external communication apparatus50aitself. 
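The division of labor between the internal RIB45and the internal FIB46described above can be sketched as follows, with dict-based stand-ins for the RIB DB45tcand the FIB DB46tc; the names are illustrative assumptions:

```python
# Illustrative sketch: the internal RIB resolves routing and stores it in the
# RIB DB; the internal FIB reads the resolved entries and reflects them into
# the transfer unit's FIB DB for packet transfer.
internal_rib_db = {}  # stand-in for RIB DB 45tc: prefix -> destination address
fib_db = {}           # stand-in for FIB DB 46tc in the transfer unit 52c

def rib_resolve(prefix, destination_address):
    # Internal RIB 45: resolve a destination and record the association.
    internal_rib_db[prefix] = destination_address

def fib_reflect():
    # Internal FIB 46: read resolved routing and reflect it in the transfer unit.
    fib_db.update(internal_rib_db)

rib_resolve("10.0.0.0/24", "port-P1")
fib_reflect()
```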
Also, the external routing information is routing information of routing to an external communication apparatus that is connected to the external communication apparatus (an external router, for example)50aand is not illustrated in the drawing, that is, routing information detected by the external router50a. As described above, the transfer system40includes the plurality of transfer apparatuses44ato44din the logical nodes, includes the internal routing layer L1in each of the transfer apparatuses44ato44d, and includes the external routing layer L2in any one of the transfer apparatuses44ato44d. The management CTL42aof the external server42centrally performs configuration control for all the RIB construction units (including internal and external RIB construction units or not including the RIB construction units of the external communication apparatuses) in the transfer system40via the management SW43(also referred to as an SW43). Note that the routing control is not performed by the management CTL and is performed by the various RIB construction units in all the apparatuses, and services are thus not affected even if a failure occurs in the management CTL. FIG.2is a block diagram illustrating internal configurations of the leaf44cand the spine44a. However, because both the leaves44cand44dillustrated inFIG.1have the same configuration, and the spines44aand44balso have the same configuration, only the leaf44cand the spine44aare illustrated inFIG.2. The spine44ais configured to include a management port51aconnected to the SW43, a transfer unit52athat realizes high-speed and large-capacity hardware transfer using an application specific integrated circuit (ASIC), and a hardware OS53athat serves as a calculation processing function with which software processing is performed by a central processing unit (CPU). An FIB DB46tais implemented in the transfer unit52a. 
The hardware OS53aincludes an external RIB construction unit48aon the external routing layer L2, an external RIB DB48ta, an external FIB construction unit49a, and a communication control bridge (also referred to as a bridge)61a. The external RIB48aand the external FIB49aare connected to each other via the bridge61ausing a virtual SW. Note that the external RIB DB48tamay be implemented outside the external routing layer L2and configures the second DB or the second storage section described in the aspects. Further, the hardware OS53ais configured to include an internal RIB construction unit45aon the internal routing layer L1, an internal RIB DB45ta, a communication control bridge62a, an internal FIB construction unit46a, and a transfer driver63a. The internal RIB construction unit45aand the internal FIB construction unit46aare connected to each other via the bridge62ausing a virtual SW. In other words, the external RIB48a, the external FIB49a, the bridge61a, the internal RIB45a, the bridge62a, the internal FIB46a, and the transfer driver63aoperate on the hardware OS53a. The leaf44cillustrated inFIG.2is configured to include a management port51cconnected to the management SW43, a transfer unit52c, and a hardware OS53c. An FIB DB46tcis implemented in the transfer unit52c. The hardware OS53cis configured to include an external route agent (also referred to as an agent)65c, an internal RIB construction unit45con the internal routing layer L1, an internal RIB DB45tc, a communication control bridge62c, an internal FIB construction unit46c, and a transfer driver63c. The internal RIB construction unit45cand the internal FIB construction unit46care connected to each other via the bridge62cusing a virtual SW. The bridge62cconfigures the separation unit that is described in the aspects and separates the internal routing information and the external routing information. The external route agent65cconfigures the agent unit described in the aspects. 
The internal RIB DBs45taand45tcmay be implemented outside the internal routing layer L1and configure the first DBs or the first storage sections described in the aspects. In other words, the agent65c, the internal RIB45c, the communication control bridge62c, the internal FIB46c, and the transfer driver63coperate on the hardware OS53c. However, the external RIB48aand the internal RIB45aof the spine44aare connected to the management port51a. The internal RIB45cof the leaf44cis connected to the management port51c. Also, although each of the internal RIB45a, the bridge62a, the internal FIB46a, and the transfer driver63aon the hardware OS53aof the spine44aand each of the internal RIB45c, the communication control bridge62c, the internal FIB46c, and the transfer driver63con the hardware OS53cof the leaf44care distinguished by applying the reference signs a and c thereto, only one of them will be described below because both have the same function. Note that the transfer units52aand52cand the transfer drivers63aand63cconfigure the transfer function units described in the aspects. Operations in Embodiment It is assumed that in the transfer system40with such a configuration, each of the transfer apparatuses44ato44dthat are controlled as logical nodes illustrated inFIG.1are connected to each other as follows as a condition assumed when an operation of addressing a failure in a routing is performed. In other words, a case in which all the transfer apparatuses44ato44dillustrated inFIG.1are connected with a Clos type that is similar to the connection topology of the fabric cards13aand13band the line cards14ato14cmounted in the transfer apparatus10(FIG.7) illustrated inFIG.8is assumed. The connection topology may be another topology. Next, the spines44aand44band the leaves44cand44dare defined in the aforementioned connection topology as described above. 
The leaves44cand44devenly establish connection to all the spines44aand44b, and the spines44aand44bevenly establish connection to all the leaves44cand44d. At this time, ports that establish connection between each of the leaves44cand44dand each of the spines44aand44bare internal ports of the logical nodes and are internal ports P1and P2of the transfer units52aand52cinFIG.2. The internal port P1of the leaf44cout of the internal ports P1and P2will be described as one of the internal ports. The internal port P1is connected to the internal RIB45cvia the transfer driver63cconnected to the transfer unit52c, the internal FIB46c, and the communication control bridge62c. For this connection, internal routing information using packets is exchanged between the internal RIBs45cand45aof the leaf44cand the spine44avia the internal ports P1and P2. The exchange is executed by all the transfer apparatuses44ato44ddetecting (executed by all the transfer apparatuses44ato44dsolving) a destination (destination addresses) of an apparatus with which internal routing information is exchanged. The internal FIBs46cand46awrite and store the exchanged internal routing information in the internal RIB DBs45tcand45ta. The stored internal routing information is read by the internal FIBs46cand46avia a netlink and is written and stored in the FIB DBs46tcand46taof the transfer units52cand52a. Through the series of processes, storing of the internal routing information of which routing has been resolved in the FIB DBs46tcand46tais completed. On the other hand, the external RIB48aand the external FIB49aare implemented in any one of the transfer apparatuses44ato44d(the spine44ain this example) in the logical nodes. The external RIB48aand the external FIB49aare in charge of processing of external routing information input from an external port P3. 
This processing is triggered by the external routing information input from the external port P3being notified to the external RIB48aand the external FIB49avia an internal routing connected to the internal ports P1and P2of the leaf44cand the spine44a. An external router50aconnected to the external port P3of the leaf44cand the external RIB48aof the spine44aare connected to each other on software. Also, the external RIB48aand the leaf44care connected to each other on hardware via internal ports p1and p2. If the external routing information from the external router50ais input from the external port P3and is further input to the agent65cvia the bridge62c, then the agent65caccesses the internal RIB DB45tcand acquires a destination address of the external routing information (in this case, the address of the external RIB48a) as represented by the bidirectional dashed-line arrow. The agent65ctransfers the external routing information to the external RIB48ain accordance with the acquired destination address. The transfer is performed by the external routing information from the agent65cbeing transferred to the external FIB49avia the internal ports P1and P2and being further transferred to the external RIB48avia the bridge61ain accordance with the destination address. The external RIB48astores the transferred external routing information in the external RIB DB48tavia a netlink. The stored external routing information is read by the external FIB49aand is stored in the FIB DB46taof the transfer unit52aas represented by the dashed-line arrow Y1. Along with the storage, the external FIB49aoutputs the external routing information to the internal port P2via the bridge62aas represented by the dashed-line arrow Y2and transfers the external routing information to the other transfer apparatuses44bto44d(which are representatives of the leaf44c) from the internal port P2. 
The transferred external routing information is stored in the FIB DB 46tc of the transfer unit 52c via the internal port P1 of the leaf 44c as follows. In other words, the transfer driver 63c outputs the external routing information input from the internal port P1 to the bridge 62c as represented by the dashed-line arrow Y2, the bridge 62c outputs the external routing information to the transfer driver 63c again, and the transfer driver 63c stores the external routing information in the FIB DB 46tc. The aforementioned transferring from the agent 65c to the external RIB 48a will be further described. The agent 65c performs encapsulation once with the destination address to which the external routing information from the external port P3 is transferred via an internal routing. This encapsulation is performed by adding internal address information to the external routing information because the external routing information passes through the internal routing. Specifically, the encapsulation is performed by adding, to the head of the external routing information, header information with the destination address of the internal routing. The encapsulation is performed by one of the following two methods. The first is a method of transferring the external routing information directly to the external FIB 49a by the transfer unit 52c of the leaf 44c and the transfer driver 63c that is a driver thereof. In the case of this method, the transfer unit 52c and the transfer driver 63c establish a session directly with the external FIB 49a and then transfer the external routing information to the encapsulated destination. The external FIB 49a that has received the external routing information through the transferring decapsulates the encapsulated information. After the decapsulation, the external FIB 49a transfers the external routing information to the external RIB 48a via the bridge 61a.
In the second method, the external routing information input from the external port P3 is once input from the transfer unit 52c of the leaf 44c to the agent 65c via the transfer driver 63c and the internal FIB 46c through the bridge 62c. The agent 65c encapsulates the external routing information with its destination address to the external FIB 49a and transfers the encapsulated external routing information to the external FIB 49a of the spine 44a as follows. In other words, the agent 65c performs the encapsulation by adding an internal address AD1 configured on the input side of the external FIB 49a of the spine 44a, the internal address AD1 being detectable from the internal RIB 45c of the leaf 44c. The capsule obtained by the encapsulation is notified from the internal FIB 46a to the bridge 62a via the internal ports P1 and P2. The notified capsule is input from the bridge 62a to the external FIB 49a as represented by the dashed-line arrow Y3, and the encapsulated information is then decapsulated by the external FIB 49a. The external routing information after the decapsulation is transferred to the external RIB 48a via the bridge 61a. In the embodiment, the second method is employed. The external routing information transferred to the external FIB 49a in this method is stored in the external RIB DB 48ta via a netlink. The stored external routing information is read by the external FIB 49a and is distributed (transferred) to the FIB DB 46ta of the spine 44a and the FIB DB 46tc of the leaf 44c, which is another transfer apparatus, via the dashed-line arrows Y1 and Y2 as described above. In the case in which the spine 44a includes the external routing layer L2 as illustrated in FIG. 2, the external routing layer L2 for backup is implemented in a transfer apparatus other than the spine 44a, for example, the spine 44b illustrated in FIG. 1. This backup is performed to avoid a single point of failure at an aggregated connection portion of the group of transfer apparatuses 44a to 44d.
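The encapsulation step described above, in which the agent prepends header information carrying an internal destination address (such as AD1) so the external routing information can traverse the internal routing and then be stripped at the external FIB, can be sketched as follows. The function names and the length-prefixed header layout are illustrative assumptions, not the patent's actual wire format.

```python
# Minimal sketch of the capsule: an internal destination header prepended
# to the external routing information payload (assumed layout: a 2-byte
# length prefix followed by the internal address, then the payload).

def encapsulate(internal_dst: str, routing_info: bytes) -> bytes:
    """Prepend header information with the internal destination address (e.g. AD1)."""
    header = internal_dst.encode("ascii")
    return len(header).to_bytes(2, "big") + header + routing_info

def decapsulate(capsule: bytes) -> tuple[str, bytes]:
    """Strip the internal header; in the patent this is done at the external FIB 49a."""
    hlen = int.from_bytes(capsule[:2], "big")
    internal_dst = capsule[2:2 + hlen].decode("ascii")
    return internal_dst, capsule[2 + hlen:]

# Agent 65c side: wrap the external routing information with internal address AD1.
capsule = encapsulate("AD1", b"route 0.0.0.1 via external-router-50a")
# External FIB 49a side: unwrap and hand the payload on toward the external RIB.
dst, payload = decapsulate(capsule)
```

The point of the indirection is that the capsule travels across the internal ports P1 and P2 like any internal traffic; only the external FIB needs to understand the header.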
In other words, a single point of failure is prevented by using the external FIB 49 or the like of the backup external routing layer L2 prepared in another spine 44b when a failure occurs in a routing related to the spine 44a. The processing performed when the external routing information input from the external router 50a to the external port P3 as described above is distributed to each of the transfer apparatuses 44a to 44d will be described with reference to FIGS. 3 to 5. FIG. 3 is a block diagram that schematically illustrates the four transfer apparatuses 44a to 44d in the logical nodes as four switches 1, 2, 3, and 4 and illustrates pieces of external routing information, “0.0.0.1”, “0.0.0.2”, . . . “0.0.0.10”, which are input from the external router 50a to the external port (see the external port P3 in FIG. 2) of the switch 1. In the example, it is assumed that the external routing information input from the switch 1 is transferred to the FIB DB (see the FIB DB 46ta in FIG. 2) in the transfer unit (see the transfer unit 52a in FIG. 2) of the switch 2. The transfer is performed through the routing that reaches the switch 2 from the switch 1 via the switch 3 or 4. An IP table T1 and an output table T2 are stored in the FIB DB of the switch 2 that is a destination of the external routing information as illustrated in FIG. 4. The IP table T1 includes a <destination> from the switch 2 and an <output group> as parameters associated with each destination. The output table T2 includes an <output group> as parameters and an <output destination> that is a unique number of the transfer apparatus related to the internal routing information associated with the parameters. Note that the IP table T1 constitutes the first information described in the aspects. The output table T2 constitutes the second information described in the aspects. The “switch 1” that is a destination, related to the internal routing information, from the switch 2 and “0.0.0.1”, “0.0.0.2”, . . .
“0.0.0.10” that are destinations outside the group of transfer apparatuses related to the external routing information are stored in the <destination> in the IP table T1. These external destinations “0.0.0.1”, “0.0.0.2”, . . . “0.0.0.10” are destination addresses on the side of the external router 50a. “Group: Switch1”, for example, is stored as a parameter in the <output group>. “Group: Switch1”, the same parameter as that in the IP table T1, is stored in the <output group> in the output table T2. The destination addresses “3 and 4” of the switches associated with “Group: Switch1” are stored in the <output destination>. Referring to each of such tables T1 and T2, in a case in which the internal routing information is transferred from the switch 2 to the destination switch 1, it is only necessary to output the internal routing information to the switch 1 in the <destination> in the IP table T1 and to the switch 3 or 4 in the <output destination> via Group: Switch1 in the <output group>. Similarly, in a case in which the external routing information is transferred from the switch 2 to the destination “0.0.0.1”, it is only necessary to output the external routing information to “0.0.0.1” in the <destination> in the IP table T1 and to the switch 3 or 4 in the <output destination> via Group: Switch1 in the <output group>. The same applies to the following addresses up to the destination “0.0.0.10”. Here, it is assumed that an internal routing between the switch 1 and the switch 4 has been disconnected. Because the switch 4 is not available in this case, “4” in the <output destination> in the output table T2 is deleted, and the output table T2 is rewritten such that only “3” remains. The location to be rewritten is only one location. On the other hand, in the related art, a routing information table is stored in the FIB DB of the switch 2 as illustrated in FIG. 5.
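The two-table indirection just described can be modeled with two small mappings: every destination in the IP table T1 points at an output group, and the output table T2 maps that group to the surviving output switches, so a link failure is reflected by rewriting a single entry. The table contents mirror FIG. 4; the class and method names are illustrative assumptions.

```python
# Sketch of the two-table FIB layout of switch 2 (FIG. 4).
# T1 maps each <destination> to an <output group>; T2 maps the group
# to the <output destination> switches. Names are illustrative.

class GroupFib:
    def __init__(self):
        # IP table T1: the internal destination "switch1" and the external
        # destinations 0.0.0.1 .. 0.0.0.10 all share one output group.
        self.ip_table = {"switch1": "Group:Switch1"}
        for i in range(1, 11):
            self.ip_table[f"0.0.0.{i}"] = "Group:Switch1"
        # Output table T2: the group resolves to switches 3 and 4.
        self.output_table = {"Group:Switch1": [3, 4]}

    def lookup(self, destination: str) -> list:
        """Resolve <destination> -> <output group> -> <output destination>."""
        return self.output_table[self.ip_table[destination]]

    def on_link_failure(self, dead_switch: int) -> int:
        """Delete the failed switch from T2 only; T1 is untouched.
        Returns how many table entries were rewritten."""
        rewrites = 0
        for group, outputs in self.output_table.items():
            if dead_switch in outputs:
                outputs.remove(dead_switch)
                rewrites += 1
        return rewrites

fib = GroupFib()
# The internal routing between switch 1 and switch 4 is disconnected:
rewritten = fib.on_link_failure(4)   # a single rewrite in T2
remaining = fib.lookup("0.0.0.7")    # every destination now resolves to [3]
```

In the related-art flat table of FIG. 5, the same failure would require rewriting all eleven <destination> rows; here only the one output-table entry changes.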
The “switch 1” that is a destination, related to the internal routing information, from the switch 2 and “0.0.0.1”, “0.0.0.2”, . . . “0.0.0.10” that are external destinations related to the external routing information are stored in the <destination> in the table. The switches “3, 4” of each destination are stored in the <output destination>. In other words, the output destination 3 or 4 of the internal routing information or the external routing information is associated with each destination in the related art. Specifically, the output destination 3 or 4 is associated with the “switch 1” and each of the ten pieces of external routing information “0.0.0.1”, “0.0.0.2”, . . . “0.0.0.10”. Thus, in a case in which the routing between the switch 1 and the switch 4 is disconnected, it is necessary to rewrite all of the “switch 1” in the <destination> and the ten pieces of external routing information “0.0.0.1”, “0.0.0.2”, . . . “0.0.0.10” to delete “4” in the <output destination> and leave only “3”. Effects of Embodiment Effects of the transfer apparatus 44 according to the embodiment will be described. Note that the description will be given with the reference signs a and c illustrated in FIGS. 1 and 2 removed. The transfer apparatuses 44a to 44d will be referred to as a transfer apparatus 44. The transfer apparatus 44 has the external routing layer L2 that serves as the external control unit that centrally controls the group of transfer apparatuses (the group of transfer apparatuses 44a to 44d) connected to each other with communication routings in accordance with the external routing information notified from the external communication apparatus 50 outside the group of transfer apparatuses.
Further, the transfer apparatus 44 has the internal routing layer L1 that serves as the internal control unit that performs autonomous decentralized control for autonomously constructing a routing for each of the transfer apparatuses in accordance with the internal routing information exchanged only between the transfer apparatuses in the group of transfer apparatuses. Features of the transfer apparatus 44 will be described. (1) The transfer apparatus 44 has an internal RIB DB 45t that serves as the first DB that stores a destination address obtained by the internal routing layer L1 resolving the internal routing information through autonomous decentralized control and an external RIB DB 48t that serves as the second DB that stores the external routing information. Further, the transfer apparatus 44 has the transfer driver 63 and a transfer unit 52 that serve as transfer function units. The transfer unit 52 has the external port P3 to which the external routing information is input and the internal ports P1 and P2 to and from which the internal routing information is input and output, and transfers the external routing information input from the external port P3 and the internal routing information to other transfer apparatuses via the internal ports P1 and P2. Also, the transfer unit 52 has a FIB DB 46t that serves as the third DB that stores the external routing information or the internal routing information transferred from other transfer apparatuses. Further, the transfer apparatus 44 includes the agent 65 that acquires, from the internal RIB DB 45t, the destination address of the external routing information input from the external port P3 and transfers the external routing information from the internal ports P1 and P2 to the transfer apparatus of the acquired destination address via the bridge 62.
The internal routing layer L1 performs control of detecting the destination address of the internal routing information from the internal RIB DB 45t, transferring the internal routing information to the transfer apparatus of the detected destination address, and storing the internal routing information or the external routing information transferred from other transfer apparatuses via the internal ports P1 and P2 in the FIB DB 46t. The external routing layer L2 is configured to perform control of storing the external routing information transferred from the agent 65 in the external RIB DB 48t, storing the stored external routing information in the FIB DB 46t of the transfer apparatus to which the external routing layer L2 itself belongs, and transferring the external routing information to other transfer apparatuses via the internal ports P1 and P2. According to this configuration, it is possible to individually process the internal routing information exchanged only between the transfer apparatuses and the external routing information input from the external port P3. During the processing, the agent 65 uses the destination address obtained by the internal routing layer L1 resolving the internal routing information in an autonomously decentralized manner (detecting the destination address) and stored in the internal RIB DB 45t. The destination address used here enables the transfer of the external routing information to the external routing layer L2 and storage thereof in the external RIB DB 48t. Further, the external routing layer L2 can store the external routing information, which is stored in the external RIB DB 48t, in the FIB DB 46t of the transfer apparatus to which the external routing layer L2 belongs, and can also transfer the external routing information such that the external routing information is stored in the FIB DBs 46t of other transfer apparatuses.
Thus, it is possible to transfer the external routing information input from the external port P3 to the external routing layer L2 that centrally controls the group of transfer apparatuses using the destination address resolved by the internal routing layer L1 in an autonomously decentralized manner. It is therefore possible for the internal routing layer L1 to resolve the internal routing in an autonomously decentralized manner at the time of a failure in the internal routing or in the transfer apparatus related to the internal routing information of the group of transfer apparatuses, and the transferring of the external routing information to the destination address is thus not affected. Further, according to the external routing layer L2, the internal routing layer L1 can resolve the information related to the output destination of the transferred external routing information in an autonomously decentralized manner and transfer the information to the transfer apparatus to which the external routing layer L2 itself belongs and to other transfer apparatuses, and the transfer processing is thus not affected by the failure. Thus, it is possible to reduce the amount of calculation load and the rerouting time for resolving a failure at the time of a failure in the connection routings between the transfer apparatuses or in the transfer apparatuses. (2) The FIB DB 46t individually stores, as destinations, the destination addresses of the transfer apparatuses related to the internal routing information and the destination addresses of the external communication apparatuses 50 and 50b related to the external routing information, and stores the IP table T1 as first information in which parameters are stored in an associated manner with each of the destinations and the output table T2 as second information in which parameters are stored in an associated manner with the destination addresses of the transfer apparatuses that are output destinations related to the internal routing information.
According to this configuration, because transferring cannot be performed via an internal routing or a transfer apparatus in which a failure has occurred in a case in which the failure has occurred in the internal routing between the transfer apparatuses or in the transfer apparatus, it is only necessary to delete the destination address of the transfer apparatus that is the output destination via which the transferring cannot be performed from the output table T2. This simple rewriting processing enables the deletion of the internal routing or the transfer apparatus in which the failure has occurred to be reflected in the output table T2. In other words, it is possible to simply reflect a failure in the data tables at the time of a failure in an internal routing or in a transfer apparatus. (3) The internal routing layer L1 includes the internal RIB DB 45t, the internal RIB 45, and the internal FIB 46. The internal RIB 45 performs resolution for detecting the destination address to which the internal routing information to be exchanged between the transfer apparatuses is to be transferred and stores, in the internal RIB DB 45t, the internal routing information with which the destination address obtained through the resolution is associated. The internal FIB 46 reads the internal routing information, which is stored in the internal RIB DB 45t, with which the destination address is associated, and stores the internal routing information in the FIB DB 46t. The external routing layer L2 includes the external RIB DB 48t, the external RIB 48, and the external FIB 49.
When the external routing information is notified from the agent 65, the external RIB 48 performs resolution for detecting the destination address outside the group of transfer apparatuses related to the external routing information and the destination address inside the group of transfer apparatuses related to the internal routing information and stores the external routing information, with which all the destination addresses obtained through the resolution are associated, in the external RIB DB 48t. The external FIB 49 reads the external routing information, which is stored in the external RIB DB 48t, with which the destination addresses are associated, stores the read external routing information in the FIB DB 46t, and transfers the external routing information such that the external routing information is stored in the FIB DBs 46t of other transfer apparatuses. According to this configuration, the internal routing layer L1 includes the internal RIB DB 45t, the internal RIB construction unit 45, and the internal FIB construction unit 46. Also, the external routing layer L2 includes the external RIB DB 48t, the external RIB construction unit 48, and the external FIB construction unit 49. Thus, it is possible to integrate each of the internal routing layer L1 and the external routing layer L2 in one unit. (4) The external routing layer L2 is configured to be implemented in any one of the transfer apparatuses in the group of transfer apparatuses and to be also implemented as a backup in any one of the transfer apparatuses other than the transfer apparatus in which the external routing layer L2 is implemented. According to this configuration, it is possible to operate the external routing layer L2 as a backup when a failure occurs in the transfer apparatus in which the external routing layer L2 is operating or in an internal routing related to that transfer apparatus.
It is thus possible to prevent a single point of failure at an integrated connection portion of the group of transfer apparatuses. (5) The transfer system 40 is configured to include a group of transfer apparatuses, in which the transfer apparatuses 44 corresponding to any one of (1) to (4) above are connected to each other, and the external server 42 that is connected to the external routing layer L2 disposed in any one of the group of transfer apparatuses and performs centralized control on the group of transfer apparatuses via the external routing layer L2. According to this configuration, it is possible to centrally perform control such as routing construction on the group of transfer apparatuses through the centralized control performed by the external server 42 and thereby to resolve the routing information and perform routing without taking into consideration the number of transfer apparatuses at the time of a failure in an internal routing or in a transfer apparatus. Modification Example of Embodiment FIG. 6 is a block diagram illustrating a configuration of a transfer system using transfer apparatuses according to a modification example of the embodiment of the present invention. A transfer system 40A illustrated in FIG. 6 is different from the transfer system 40 (FIG. 2) according to the aforementioned embodiment in that the transfer system 40A is configured such that the external server 42A is provided with the external routing layer L2 instead of the spine 44a including the external routing layer L2. In the case of this configuration, the external FIB 49 of the external server 42A is connected to the SW 43, and the management port 51a of the spine 44a is connected to the FIB DB 46ta, the internal RIB 45a, and the communication control bridge 62a. Also, the destination address to the external RIB 48 in the external routing layer L2 of the external server 42A is stored in the internal RIB DB 45tc of the leaf 44c.
In the transfer system 40A with this configuration, the external routing layer L2 of the external server 42A performs the processing of resolving the external routing that is equivalent to that in the aforementioned embodiment. In other words, if the external routing information input from the external port P3 of the leaf 44c is input to the agent 65c, the agent 65c accesses the internal RIB DB 45tc and acquires the destination address of the external routing information. The agent 65c encapsulates the external routing information and transfers the encapsulated external routing information to the acquired destination address. The transferred capsule is notified from the internal FIB 46 of the spine 44a to the bridge 62a via the internal ports P1 and P2, is input to the port 51a as represented by the dashed-line arrow Y3, and is then input to the external FIB 49 of the external server 42A via the SW 43. The encapsulated information is decapsulated by the external FIB 49, and the external routing information after the decapsulation is transferred to the external RIB 48 via the bridge 61. The external RIB 48 stores the transferred external routing information in the external RIB DB 48t. The stored external routing information is read by the external FIB 49 and is stored in the FIB DB 46ta of the transfer unit 52a via the SW 43 and the port 51a as represented by the dashed-line arrow Y1. With this storage, the external routing information from the external FIB 49 is output from the port 51a to the internal port P2 via the bridge 62a as represented by the dashed-line arrow Y2, is transferred from the internal port P2 to the other transfer apparatuses 44b to 44d (a representative of which is the leaf 44c), and is stored in the FIB DB 46tc of the transfer unit 52c. Also, a program executed by a computer according to the embodiment will be described.
The computer includes the transfer apparatus 44 that has the external routing layer L2 that centrally controls a group of transfer apparatuses connected to each other with communication routings in accordance with external routing information notified from the external communication apparatuses 50 and 50b outside the group of transfer apparatuses, and the internal routing layer L1 that performs autonomous decentralized control for autonomously constructing a routing for each of the transfer apparatuses in accordance with internal routing information exchanged only between the transfer apparatuses in the group of transfer apparatuses. The program causes the computer to function as follows. In other words, the program causes the computer to function as a unit that transfers the external routing information input from the external port P3 of the transfer apparatus 44 and the internal routing information to other transfer apparatuses 44 via the internal ports P1 and P2 of the transfer apparatus 44 and stores the external routing information or the internal routing information transferred from other transfer apparatuses 44 in the FIB DB 46t. Also, the program causes the computer to function as a unit that acquires the destination address of the external routing information, which has been input from the external port P3, from the internal RIB DB 45t in which the destination address obtained by the internal routing layer L1 resolving the internal routing information through autonomous decentralized control is stored, and transfers the external routing information from the internal ports P1 and P2 to the transfer apparatus 44 of the acquired destination address.
Also, the program causes the computer to function as a unit that detects the destination address of the internal routing information that is a target of the transferring from the internal RIB DB 45t, transfers the internal routing information to the transfer apparatus 44 of the detected destination address, and stores the internal routing information or the external routing information, which has been transferred from other transfer apparatuses 44 via the internal ports P1 and P2, in the FIB DB 46t. Further, the program causes the computer to function as a unit that stores the external routing information transferred from other transfer apparatuses 44 in the external RIB DB 48t, stores the stored external routing information in the FIB DB 46t, and transfers the external routing information to other transfer apparatuses 44 via the internal ports P1 and P2. According to this program, it is possible to obtain effects similar to those of the aforementioned transfer apparatus 44. In addition, a specific configuration can be appropriately changed without departing from the gist of the present invention.
REFERENCE SIGNS LIST
40 Transfer system
42 External server
43 Management SW
44a to 44d Transfer apparatus
44a, 44b Spine
44c, 44d Leaf
45 Internal RIB construction unit
45ta, 45tc Internal RIB DB (first DB)
46 Internal FIB construction unit
46ta, 46tc FIB DB (third DB)
48, 48a External RIB construction unit
48ta External RIB DB (second DB)
49, 49a External FIB construction unit
50a, 50b External communication apparatus (external router)
51 Management port
52a, 52c Transfer unit
53a, 53c Hardware OS
61a, 62a, 62c Communication control bridge
63a, 63c Transfer driver
65c External route agent
11943135
DETAILED DESCRIPTION As will be appreciated by one skilled in the art, aspects of the embodiments may be embodied as a system, apparatus, method, or program product. Accordingly, embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects. For example, the disclosed embodiments may be implemented as a hardware circuit comprising custom very-large-scale integration (“VLSI”) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. The disclosed embodiments may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like. As another example, the disclosed embodiments may include one or more physical or logical blocks of executable code which may, for instance, be organized as an object, procedure, or function. Furthermore, embodiments may take the form of a program product embodied in one or more computer readable storage devices storing machine readable code, computer readable code, and/or program code, referred hereafter as code. The storage devices may be tangible, non-transitory, and/or non-transmission. The storage devices may not embody signals. In a certain embodiment, the storage devices only employ signals for accessing code. Any combination of one or more computer readable medium may be utilized. The computer readable medium may be a computer readable storage medium. The computer readable storage medium may be a storage device storing the code. The storage device may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, holographic, micromechanical, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. 
More specific examples (a non-exhaustive list) of the storage device would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random-access memory (“RAM”), a read-only memory (“ROM”), an erasable programmable read-only memory (“EPROM” or Flash memory), a portable compact disc read-only memory (“CD-ROM”), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store, a program for use by or in connection with an instruction execution system, apparatus, or device. Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment, but mean “one or more but not all embodiments” unless expressly specified otherwise. The terms “including,” “comprising,” “having,” and variations thereof mean “including but not limited to,” unless expressly specified otherwise. An enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise. The terms “a,” “an,” and “the” also refer to “one or more” unless expressly specified otherwise. As used herein, a list with a conjunction of “and/or” includes any single item in the list or a combination of items in the list. For example, a list of A, B and/or C includes only A, only B, only C, a combination of A and B, a combination of B and C, a combination of A and C or a combination of A, B and C. 
As used herein, a list using the terminology “one or more of” includes any single item in the list or a combination of items in the list. For example, one or more of A, B and C includes only A, only B, only C, a combination of A and B, a combination of B and C, a combination of A and C or a combination of A, B and C. As used herein, a list using the terminology “one of” includes one and only one of any single item in the list. For example, “one of A, B and C” includes only A, only B or only C and excludes combinations of A, B and C. As used herein, “a member selected from the group consisting of A, B, and C,” includes one and only one of A, B, or C, and excludes combinations of A, B, and C. As used herein, “a member selected from the group consisting of A, B, and C and combinations thereof” includes only A, only B, only C, a combination of A and B, a combination of B and C, a combination of A and C or a combination of A, B and C. Furthermore, the described features, structures, or characteristics of the embodiments may be combined in any suitable manner. In the following description, numerous specific details are provided, such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of embodiments. One skilled in the relevant art will recognize, however, that embodiments may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of an embodiment. Aspects of the embodiments are described below with reference to schematic flowchart diagrams and/or schematic block diagrams of methods, apparatuses, systems, and program products according to embodiments.
It will be understood that each block of the schematic flowchart diagrams and/or schematic block diagrams, and combinations of blocks in the schematic flowchart diagrams and/or schematic block diagrams, can be implemented by code. This code may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the schematic flowchart diagrams and/or schematic block diagrams. The code may also be stored in a storage device that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the storage device produce an article of manufacture including instructions which implement the function/act specified in the schematic flowchart diagrams and/or schematic block diagrams. The code may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer implemented process such that the code which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the schematic flowchart diagrams and/or schematic block diagram. The schematic flowchart diagrams and/or schematic block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, systems, methods, and program products according to various embodiments. 
In this regard, each block in the schematic flowchart diagrams and/or schematic block diagrams may represent a module, segment, or portion of code, which includes one or more executable instructions of the code for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more blocks, or portions thereof, of the illustrated Figures. The description of elements in each figure may refer to elements of preceding figures. Like numbers refer to like elements in all figures, including alternate embodiments of like elements. Methods, apparatuses, and systems are disclosed for modifying a data connection to support QoS requirements. UEs operating “behind” a 5G-capable Residential Gateway (5G-RG) should be able to access the 5G core network and establish PDU Sessions capable of supporting strict QoS, e.g., guaranteed bitrate flows. A problem with the current 5G-RGs and gateway UEs (e.g., operating according to current 3GPP Rel-16) is that the 5G-RG (or gateway UE) maps the data traffic (i.e., the IPsec SA traffic) of a UE operating behind the gateway (referred to as “UE-1”) into one of its QoS flows without knowing the QoS requirements of this data traffic. This occurs because the PDU Session of UE-1 and the associated IPsec SA for UE-1 are established completely transparently to the 5G-RG (or gateway UE). In other words, the 5G-RG (or gateway UE) does not know that UE-1 has created its own PDU Session that requires specific QoS handling.
Therefore, when the data traffic of UE-1 goes through the PDU Session of the 5G-RG, it is very likely that this traffic does not receive the appropriate QoS handling. This means that it is not possible for the PDU Session of UE-1 to guarantee its assigned level of QoS, because the traffic of this PDU Session goes through the PDU Session of the 5G-RG and can be mapped to a QoS flow of the 5G-RG that does not support the appropriate QoS. To resolve this problem, this disclosure proposes a solution that enables the 5G-RG (a) to map the IPsec traffic of UE-1 into a QoS flow that can meet the QoS requirements of this traffic, or (b) to create a new QoS flow that will carry the IPsec traffic of UE-1 and will be capable of offering the QoS for this traffic. FIG.1depicts a wireless communication system100for modifying a data connection to support QoS requirements, according to embodiments of the disclosure. In one embodiment, the wireless communication system100includes at least one remote unit105, at least one gateway UE107, a 5G-RAN115, a 5G-RG137and a mobile core network140. The 5G-RAN115and the mobile core network140form a mobile communication network. The 5G-RAN115may be composed of a 3GPP access network120containing at least one cellular base unit121and/or a non-3GPP access network130containing at least one access point131. The gateway UE107may communicate with the 3GPP access network120using 3GPP communication links123and with the non-3GPP access network130using non-3GPP communication links133. In various embodiments, the remote unit105may communicate with the 3GPP access network120using 3GPP communication links123, may communicate with the non-3GPP access network130using non-3GPP communication links133, and/or may communicate with the 5G-RG137or the gateway UE107, e.g., using non-3GPP communication links133.
Even though a specific number of remote units105, gateway UEs107, 3GPP access networks120, cellular base units121, 3GPP communication links123, non-3GPP access networks130, access points131, non-3GPP communication links133, 5G-RG137, and mobile core networks140are depicted inFIG.1, one of skill in the art will recognize that any number of remote units105, gateway UEs107, 3GPP access networks120, cellular base units121, 3GPP communication links123, non-3GPP access networks130, access points131, non-3GPP communication links133, 5G-RGs137, and mobile core networks140may be included in the wireless communication system100. In one implementation, the wireless communication system100is compliant with the 5G system specified in the 3GPP specifications. More generally, however, the wireless communication system100may implement some other open or proprietary communication network, for example, LTE or WiMAX, among other networks. The present disclosure is not intended to be limited to the implementation of any particular wireless communication system architecture or protocol. In one embodiment, the remote units105may include computing devices, such as desktop computers, laptop computers, personal digital assistants (“PDAs”), tablet computers, smart phones, smart televisions (e.g., televisions connected to the Internet), smart appliances (e.g., appliances connected to the Internet), set-top boxes, game consoles, security systems (including security cameras), vehicle on-board computers, network devices (e.g., routers, switches, modems), or the like. In some embodiments, the remote units105include wearable devices, such as smart watches, fitness bands, optical head-mounted displays, or the like. Moreover, the remote units105may be referred to as UEs, subscriber units, mobiles, mobile stations, users, terminals, mobile terminals, fixed terminals, subscriber stations, user terminals, wireless transmit/receive units (“WTRUs”), devices, or by other terminology used in the art.
The remote units105may communicate directly with one or more of the cellular base units121in the 3GPP access network120via uplink (“UL”) and downlink (“DL”) communication signals. Furthermore, the UL and DL communication signals may be carried over the 3GPP communication links123. Similarly, the remote units105may communicate with one or more access points131in the non-3GPP access network(s)130via UL and DL communication signals carried over the non-3GPP communication links133. Additionally, the remote units105may communicate with the gateway UE107and/or the 5G-RG137via UL and DL communication signals carried over non-3GPP communication links133. Here, the access networks120and130are intermediate networks that provide the remote units105, the gateway UE107, and the 5G-RG137with access to the mobile core network140. In some embodiments, the remote units105communicate with a remote host via a network connection with the mobile core network140. For example, an application in a remote unit105(e.g., web browser, media client, telephone/VoIP application) may trigger the remote unit105to establish a PDU session (or other data connection) with the mobile core network140using the 5G-RAN115(e.g., a 3GPP access network120and/or a non-3GPP access network130). The mobile core network140then relays traffic between the remote unit105and the data network150using the PDU session. Note that the remote unit105may establish one or more PDU sessions (or other data connections) with the mobile core network140. As such, the remote unit105may have at least one PDU session for communicating with the data network150. The remote unit105may establish additional PDU sessions for communicating with other data networks and/or other remote hosts. The cellular base units121may be distributed over a geographic region.
In certain embodiments, a cellular base unit121may also be referred to as an access terminal, a base, a base station, a Node-B, an eNB, a gNB, a Home Node-B, a relay node, a device, or by any other terminology used in the art. The cellular base units121are generally part of a radio access network (“RAN”), such as the 3GPP access network120, that may include one or more controllers communicably coupled to one or more corresponding cellular base units121. These and other elements of the radio access network are not illustrated but are well known generally by those having ordinary skill in the art. The cellular base units121connect to the mobile core network140via the 3GPP access network120. The cellular base units121may serve a number of remote units105within a serving area, for example, a cell or a cell sector, via a 3GPP communication link123. The cellular base units121may communicate directly with one or more of the remote units105via communication signals. Generally, the cellular base units121transmit DL communication signals to serve the remote units105in the time, frequency, and/or spatial domain. Furthermore, the DL communication signals may be carried over the 3GPP communication links123. The 3GPP communication links123may be any suitable carrier in licensed or unlicensed radio spectrum. The 3GPP communication links123facilitate communication between one or more of the remote units105and/or one or more of the cellular base units121. The non-3GPP access networks130may be distributed over a geographic region. In various embodiments, the non-3GPP access network130may comprise one or more wireless networks, for example WLAN(s). In certain embodiments, a non-3GPP access network130may serve a number of remote units105within a serving area of an access point131.
An access point131in a non-3GPP access network130may communicate directly with one or more remote units105by receiving wireless UL communication signals and transmitting wireless DL communication signals to serve the remote units105in the time, frequency, and/or spatial domain. Both DL and UL communication signals are carried over the non-3GPP communication links133. The 3GPP communication links123and non-3GPP communication links133may employ different frequencies and/or different communication protocols. In various embodiments, an access point131may communicate using unlicensed radio spectrum. The mobile core network140may provide services to a remote unit105via the non-3GPP access networks130, as described in greater detail herein. In some embodiments, the non-3GPP access network130may comprise one or more fixed wireline networks. Here, a fixed wireline non-3GPP access network130may connect to a 5G-RG137. A 5G-RG137may connect to the mobile core network140via the fixed wireline network. Here, the 5G-RG137may register as a UE to the mobile core network140and thus establish data connections (e.g., PDU sessions) with the mobile core network140. As such, the 5G-RG137supports 5G-NAS signaling and may establish a NAS layer connection with the AMF145. Additionally, the 5G-RG137may connect to the remote unit105and serve as a gateway function by providing the remote unit105with access to the mobile core network140. While depicted as connecting to the mobile core network140via a non-3GPP access network130, in certain embodiments the 5G-RG137may connect to the mobile core network140using a fixed wireless connection via the 3GPP access network120. The gateway UE107may connect wirelessly to the mobile core network140via the 3GPP access network120and/or non-3GPP access network130. Here, the gateway UE107may register as a UE to the mobile core network140and thus establish data connections (e.g., PDU sessions) with the mobile core network140. 
Here, the gateway UE107establishes a NAS layer connection with the AMF145. Additionally, the gateway UE107may connect (wirelessly) to the remote unit105and serve as a gateway function by providing the remote unit105with access to the mobile core network140. In some embodiments, a non-3GPP access network130connects to the mobile core network140via an interworking function135. The interworking function135provides interworking between the remote unit105and the mobile core network140. In some embodiments, the interworking function135is a Non-3GPP Interworking Function (“N3IWF”) and, in other embodiments, it is a Trusted Non-3GPP Gateway Function (“TNGF”). The N3IWF supports the connection of “untrusted” non-3GPP access networks to the mobile core network (e.g., 5GC), whereas the TNGF supports the connection of “trusted” non-3GPP access networks to the mobile core network. The interworking function135supports connectivity to the mobile core network140via the “N2” and “N3” interfaces, and it relays “N1” signaling between the remote unit105and the AMF145. As depicted, both the 3GPP access network120and the interworking function135communicate with the AMF145using an “N2” interface. The interworking function135also communicates with the UPF using an “N3” interface. In certain embodiments, a non-3GPP access network130may be controlled by an operator of the mobile core network140and may have direct access to the mobile core network140. Such a non-3GPP AN deployment is referred to as a “trusted non-3GPP access network.” A non-3GPP access network130is considered “trusted” when it is operated by the 3GPP operator, or a trusted partner, and supports certain security features, such as strong air-interface encryption.
In contrast, a non-3GPP AN deployment that is not controlled by an operator (or trusted partner) of the mobile core network140, does not have direct access to the mobile core network140, or does not support certain security features is referred to as a “non-trusted” non-3GPP access network. In one embodiment, the mobile core network140is a 5G core (“5GC”) or the evolved packet core (“EPC”), which may be coupled to a data network (e.g., the data network150, such as the Internet and private data networks, among other data networks). A remote unit105may have a subscription or other account with the mobile core network140. Each mobile core network140belongs to a single public land mobile network (“PLMN”). The present disclosure is not intended to be limited to the implementation of any particular wireless communication system architecture or protocol. The mobile core network140includes several network functions (“NFs”). As depicted, the mobile core network140includes multiple user plane functions (“UPFs”). Here, the mobile core network140includes at least a UPF-0141that serves the 5G-RG137(and/or gateway UE107) and a UPF-1143that serves a remote unit105. Note that in certain embodiments, the mobile core network may contain one or more intermediate UPFs, for example a first intermediate UPF that serves the non-3GPP access network130and a second intermediate UPF that serves the 3GPP access network120. In such embodiments, there is an anchor UPF receiving UP traffic of the intermediate UPFs. The mobile core network140also includes multiple control plane functions including, but not limited to, an Access and Mobility Management Function (“AMF”)145that serves both the 3GPP access network120and the non-3GPP access network130, a Session Management Function (“SMF”)146, a Policy Control Function (“PCF”)148, and a Unified Data Management function (“UDM”)149.
In certain embodiments, the mobile core network140may also include an Authentication Server Function (“AUSF”), a Network Repository Function (“NRF”) (used by the various NFs to discover and communicate with each other over APIs), or other NFs defined for the 5GC. In various embodiments, the mobile core network140supports different types of mobile data connections and different types of network slices, wherein each mobile data connection utilizes a specific network slice. The different network slices are not shown inFIG.1for ease of illustration, but their support is assumed. Although specific numbers and types of network functions are depicted inFIG.1, one of skill in the art will recognize that any number and type of network functions may be included in the mobile core network140. Moreover, where the mobile core network140is an EPC, the depicted network functions may be replaced with appropriate EPC entities, such as an MME, S-GW, P-GW, HSS, and the like. As depicted, a remote unit105(e.g., a UE) may connect to the mobile core network140(e.g., to a 5G mobile communication network) via the 5G-RG137. Such a remote unit105is said to be operating “behind” the 5G-RG137. Similarly, a remote unit105may connect to the mobile core network140via the gateway UE107. Such a remote unit105is said to be operating “behind” the gateway UE107. In some embodiments, the 5G-RG137is able to modify its PDU Session with the 5G core network when a remote unit105operating behind the 5G-RG137requests specific QoS resources (e.g., an IEEE 802.11 Traffic Stream) over non-3GPP access. The modified PDU Session of the 5G-RG137supports a new QoS flow that is capable of transferring the traffic of the remote unit105to the 5G core network with the necessary QoS handling.
Similarly, a gateway UE107may be configured to modify its PDU Session with the 5G core network when a remote unit105operating behind the gateway UE107requests specific QoS resources, such that the modified PDU session supports a new QoS flow that is capable of transferring the traffic of the remote unit105to the 5G core network with the necessary QoS handling. FIG.2depicts a network architecture200, according to embodiments of the disclosure. The network architecture200includes a UE205which is operating behind a gateway (here, the 5G-RG210), a UPF-0215, an IWF-1220and a UPF-1225. The UE205may be one implementation of the remote unit105, while the 5G-RG210may be one implementation of the 5G-RG137. While the network architecture200depicts the UE205operating behind the 5G-RG210, in other embodiments of the network architecture200, the 5G-RG210is replaced with a gateway UE107(i.e., the UE205operates behind a gateway UE107). The 5G-RG210is registered with a 5G core network and has established a PDU Session230for communicating with Data Network-0235, e.g., the Internet or an IPTV network offering TV channel streaming. The PDU Session230of the 5G-RG is anchored at the UPF-0215. Initially, the PDU Session230of the 5G-RG is composed of two QoS flows (QoS Flow-1245and QoS Flow-2250), each one offering different QoS characteristics. The 5G-RG210is configured with QoS rules that map the uplink data traffic of the 5G-RG210to one of these QoS flows. Similarly, the UPF-0215is configured with N4 rules that map the downlink data traffic of the 5G-RG210to one of these QoS flows. In addition, the UE205is registered with the 5G core network via the 5G-RG210and has established its own PDU Session260for communicating with Data Network-1240, e.g., an enterprise network. Here, the PDU Session260of the UE205is anchored at UPF-1225.
In contrast to the 5G-RG210, the data traffic of the UE205needs to go through a first Interworking function (“IWF-1”)220(e.g., a TNGF or a N3IWF) before reaching the 5G core network. In various embodiments, this requirement is due to the UE205(as well as every UE operating “behind” a 5G-RG210) accessing the 5G core by utilizing the solution for non-3GPP access, which requires a TNGF or N3IWF. Note that all data traffic of the UE205, i.e., all traffic sent via the PDU Session260of the UE205, is carried over an IPsec Security Association (SA)265between the UE205and the IWF-1220, which is established during the setup of the PDU Session260of the UE205. This IPsec SA265carries all QoS flows of the UE205(which are different from the QoS flows of the 5G-RG210) and, hence, it should support specific QoS characteristics. If no established QoS flow for the 5G-RG210supports the appropriate QoS, then the 5G-RG210creates a new QoS flow255that will carry the IPsec traffic of the UE205. Here, the 5G-RG210establishes the new QoS flow255based on the QoS requirements associated with the IPsec traffic of the UE205and then maps the IPsec traffic of the UE205onto this new QoS flow255. In this way, the IPsec traffic of the UE205receives the appropriate QoS handling when going through the PDU Session230of the 5G-RG210. Creating a new QoS flow is discussed in greater detail below with reference toFIGS.3A-3B. In the general case, where the PDU Session260of the UE205is composed of multiple IPsec SAs, the 5G-RG210may establish a new QoS flow for every IPsec SA. Alternatively, the 5G-RG210may establish a new QoS flow for some IPsec SAs and map the other IPsec SAs into existing QoS flows. For ease of illustration, only a single IPsec SA265is shown inFIG.2. FIGS.3A-3Bdepict a network procedure300for modifying a data connection to support QoS requirements of an access network, according to embodiments of the disclosure.
The network procedure300involves the UE205, the 5G-RG210, the UPF-0215, the IWF-1220, the UPF-1225, an AMF-0301, an SMF-0302, and a PCF-0303. The UE205, the 5G-RG210, the UPF-0215, the IWF-1220, and the UPF-1225are substantially as described above with reference toFIG.2. The AMF-0301is an AMF serving the 5G-RG210and may be an implementation of the AMF145. The SMF-0302is an SMF serving the 5G-RG210and may be an implementation of the SMF146. The PCF-0303is a PCF serving the 5G-RG210and may be an implementation of the PCF148. In the network procedure300, the 5G-RG210ensures that the traffic over the child IPsec SA between the UE205and the IWF-1220is transferred via its PDU Session (first data connection) with the appropriate QoS handling. In turn, this enables the UE205to establish a PDU Session (composed of one or more child IPsec SAs) via the 5G-RG210that can receive the expected QoS handling. AtFIG.3A, the network procedure300begins at step 1 with the 5G-RG210having registered with a 5G core network, either via fixed wireless access or via fixed wireline access (e.g., cable or xDSL), and having established a PDU Session305(first data connection) to the Data Network-0235, e.g., the Internet or an IPTV network offering TV channel streaming. Note that the 5G network functions AMF-0301, SMF-0302, PCF-0303, and UPF-0215have been allocated to support the 5G-RG210and its PDU Session305. The PDU Session305supports one or more QoS flows, each one supporting certain QoS characteristics. The data traffic310of the 5G-RG210is carried via the PDU Session305. In addition, the UE205has connected to the 5G-RG210via non-3GPP access (e.g., Bluetooth, Wi-Fi, etc.) and has registered to the 5G core network via the 5G-RG210. Here, such registration may be performed according to 3GPP specifications. The 5G network functions IWF-1220(e.g., a TNGF or N3IWF) and AMF-1 (not shown in the figure) are allocated to serve the UE205.
The UE205decides to establish a PDU Session in order to communicate with an external Data Network-1240(e.g., the Internet or a corporate data network). For this purpose, the UE205sends a PDU Session Establishment Request to the IWF-1220via the PDU Session305of the 5G-RG210. The 5G network allocates an SMF-1 and a UPF-1 to serve this PDU Session of the UE205. At step 2, as part of the UE205PDU Session Establishment procedure, the IWF-1220sends to the UE205a request to establish a child IPsec SA, which will carry one or more QoS flows of the PDU Session of the UE205(see messaging315). Here, the request includes “Additional QoS Information” that indicates what QoS characteristics (e.g., max delay, mean and peak bitrates, etc.) are required for the traffic over this child IPsec SA. At step 3, the UE205requests to reserve QoS resources over the non-3GPP access (between the UE205and the 5G-RG210) in order to support the QoS requirements of the child IPsec SA (see block320). For this purpose, the UE205requests that the 5G-RG210establish a new Traffic Stream (“TS”), e.g., by sending an Add Traffic Stream (“ADDTS”) Request as specified in the IEEE 802.11 specification (see messaging325). The ADDTS Request includes the parameters TSPEC and TCLAS, which form a first set of parameters. It is assumed here that the non-3GPP access between the UE205and the 5G-RG210complies with IEEE 802.11. The TCLAS (Traffic Classification) element specifies the traffic that will be carried over the TS, e.g., by containing source and destination IP addresses and the Security Parameter Indexes (SPIs) of the child IPsec SA. The TSPEC (Traffic Specification) element specifies the QoS requirements of the TS, e.g., by containing the delay bound, min/mean/peak data rates, etc.
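The roles of the two ADDTS elements described above can be sketched in Python. This is a minimal illustration only: the field names (`delay_bound_ms`, `spi_uplink`, etc.) and the dictionary keys of the inputs are simplified assumptions, not the actual IEEE 802.11 element encodings.

```python
from dataclasses import dataclass

@dataclass
class TCLAS:
    """Traffic Classification: identifies the child IPsec SA traffic."""
    src_ip: str
    dst_ip: str
    spi_uplink: int     # SPI of the child IPsec SA, uplink direction
    spi_downlink: int   # SPI of the child IPsec SA, downlink direction

@dataclass
class TSPEC:
    """Traffic Specification: QoS requirements of the Traffic Stream."""
    delay_bound_ms: int
    min_rate_kbps: int
    mean_rate_kbps: int
    peak_rate_kbps: int

def build_addts_request(additional_qos_info: dict, sa_ids: dict):
    """Populate TSPEC from the 'Additional QoS Information' received from
    the IWF, and TCLAS from the child IPsec SA identifiers."""
    tspec = TSPEC(
        delay_bound_ms=additional_qos_info["max_delay_ms"],
        min_rate_kbps=additional_qos_info["min_bitrate_kbps"],
        mean_rate_kbps=additional_qos_info["mean_bitrate_kbps"],
        peak_rate_kbps=additional_qos_info["peak_bitrate_kbps"],
    )
    tclas = TCLAS(
        src_ip=sa_ids["ue_ip"],
        dst_ip=sa_ids["iwf_ip"],
        spi_uplink=sa_ids["spi_ul"],
        spi_downlink=sa_ids["spi_dl"],
    )
    return tclas, tspec
```

The point of the sketch is the division of labor: TCLAS carries only classification identifiers (addresses, SPIs), while TSPEC carries only QoS requirements derived from the “Additional QoS Information.”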
The TSPEC element is populated based on the QoS requirements in the received “Additional QoS Information.” At step 4, after receiving the ADDTS Request containing TCLAS and TSPEC (the first set of parameters), the 5G-RG210determines what QoS resources are required for the TS (e.g., based on TSPEC) and what traffic should be carried on the TS (e.g., based on TCLAS). If the 5G-RG210can satisfy the requested QoS reservation, it responds with an ADDTS Response message and creates the associated Traffic Stream (TS) over non-3GPP access (see messaging330). This TS will carry the traffic of the child IPsec SA between the UE205and the 5G-RG210. At step 5, after successfully establishing the TS with the 5G-RG210(e.g., reserving the QoS resources), the UE205accepts the child IPsec SA requested by the IWF-1220(see messaging335). Note that signaling between the UE205and the IWF-1220, including the PDU Session Establishment request/response and the IKE_Create_Child_SA request/response, is sent over a “signaling IPsec” tunnel, which is set up during the UE registration. This “signaling IPsec” tunnel goes through the PDU Session305of the 5G-RG and thus through one of the existing QoS flows of the PDU Session305. At step 6 (refer to block340), after successfully establishing the TS with the UE205, the 5G-RG210determines if the traffic of this TS (second data connection) can be mapped into one of the existing QoS flows over its PDU Session305(first data connection). In various embodiments, this is determined by comparing the QoS characteristics of the TS (as defined by TSPEC) and the QoS characteristics of each existing QoS flow.
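The step-6 comparison could look like the following sketch. The flow attributes (`delay_budget_ms`, `gbr_kbps`) and the two matching criteria are simplified assumptions for illustration; a real 5G-RG would compare against the full 5QI-derived characteristics of each flow.

```python
def find_matching_qos_flow(tspec, existing_flows):
    """Return the first existing QoS flow whose characteristics satisfy the
    Traffic Stream requirements, or None if a new QoS flow must be
    requested via PDU Session Modification (step 7)."""
    for flow in existing_flows:
        # A flow matches if it is at least as fast and at least as
        # low-latency as the TS requires (simplified criteria).
        if (flow["delay_budget_ms"] <= tspec["delay_bound_ms"]
                and flow["gbr_kbps"] >= tspec["mean_rate_kbps"]):
            return flow
    return None

flows = [
    {"qfi": 1, "delay_budget_ms": 300, "gbr_kbps": 0},     # best-effort flow
    {"qfi": 2, "delay_budget_ms": 100, "gbr_kbps": 2000},  # GBR flow
]
ts_req = {"delay_bound_ms": 150, "mean_rate_kbps": 1500}
match = find_matching_qos_flow(ts_req, flows)  # the GBR flow (QFI 2) matches
```

A `None` result corresponds to the "cannot be mapped" branch that triggers step 7 below.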
If the traffic of the TS can be mapped into an existing QoS flow (called the matched QoS flow), then the 5G-RG210is configured to (a) forward the traffic arriving via the TS to the UPF-0215by using the matched QoS flow and (b) forward the traffic arriving from the UPF-0215that matches the TCLAS (e.g., the downlink traffic of the child IPsec SA) to the TS by using the matched QoS flow. Continuing atFIG.3B, at step 7, if the traffic of the TS cannot be mapped into an existing QoS flow, then the 5G-RG210decides to modify its PDU Session305and request a new QoS flow. For this purpose, the 5G-RG210starts the UE-initiated PDU Session Modification procedure345by sending a PDU Session Modification Request including the Requested QoS Rules and the Requested QoS flow descriptions (see messaging350). The Requested QoS Rules specify the traffic that will be carried on the new QoS flow and are derived by using the TCLAS element received in step 3. In one example, the Requested QoS Rules will include one QoS rule with two packet filters: one that is used to detect the uplink traffic carried over the child IPsec SA (e.g., by means of an SPI and IP addresses) and another that is used to detect the downlink traffic carried over the child IPsec SA (e.g., again by means of an SPI and IP addresses). The Requested QoS flow descriptions specify the QoS characteristics of the requested QoS flow (e.g., guaranteed bitrates for uplink and downlink) and are derived by using the TSPEC element received in step 3. In one example, the Requested QoS flow descriptions will include two parameters: one for the guaranteed bitrate in the downlink direction and another for the guaranteed bitrate in the uplink direction. Note that the UE-initiated PDU Session Modification procedure345may be performed according to 3GPP specifications. As such, there may be additional steps (see block355) prior to the network functions sending a PDU Session Modification Command (see messaging360).
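The step-7 derivation, one QoS rule with two packet filters plus per-direction guaranteed bitrates, can be sketched as follows. The dictionary keys are illustrative placeholders, not the 3GPP information-element encodings, and using the TSPEC mean rate as the guaranteed bitrate in both directions is an assumption made for this example.

```python
def derive_modification_params(tclas: dict, tspec: dict):
    """Derive the PDU Session Modification parameters (second set of
    parameters) from the TCLAS and TSPEC received in step 3."""
    requested_qos_rules = [{
        "packet_filters": [
            {   # detects the uplink traffic of the child IPsec SA
                "direction": "uplink",
                "src_ip": tclas["ue_ip"], "dst_ip": tclas["iwf_ip"],
                "spi": tclas["spi_ul"],
            },
            {   # detects the downlink traffic of the child IPsec SA
                "direction": "downlink",
                "src_ip": tclas["iwf_ip"], "dst_ip": tclas["ue_ip"],
                "spi": tclas["spi_dl"],
            },
        ],
    }]
    requested_flow_descriptions = [
        {"direction": "uplink", "gbr_kbps": tspec["mean_rate_kbps"]},
        {"direction": "downlink", "gbr_kbps": tspec["mean_rate_kbps"]},
    ]
    return requested_qos_rules, requested_flow_descriptions
```

Both outputs would then be carried in the PDU Session Modification Request of step 7.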
After the PDU Session modification is successfully completed (see messaging365), the PDU Session of the 5G-RG210is modified to support a new QoS flow380, which can meet the QoS requirements for the child IPsec SA385between the UE205and the IWF-1220. In this way, the IPsec traffic of the UE's PDU Session390receives the appropriate QoS handling when going through the 5G-RG's modified PDU Session370. In the general case, where the UE's PDU Session390is composed of multiple child IPsec SAs, the PDU Session Modification procedure at step 7 may create multiple new QoS flows in the PDU Session of the 5G-RG210. Each of these new QoS flows may be used to carry the traffic of a separate child IPsec SA. The 5G-RG210is configured to (a) forward the traffic arriving via the TS375(second data connection) to the UPF-0215by using the new QoS flow380and (b) forward the traffic395arriving from the UPF-0215that matches the TCLAS (e.g., the downlink traffic of the child IPsec SA385) to the TS375by using the new QoS flow380. FIG.4depicts a second network architecture400, according to embodiments of the disclosure. Depicted is the UE205, which is operating behind the 5G-RG210. The 5G-RG210is registered with a 5G core network and has established a PDU Session405for communicating with Data Network-0235. The PDU Session405of the 5G-RG210is anchored at the UPF-0215and is composed of two QoS flows (QoS Flow-1245and QoS Flow-2410), each one offering different QoS characteristics. The UE205is registered with the 5G core network via the 5G-RG210and has established its own PDU Session260for communicating with Data Network-1240. Here, the PDU Session260of the UE205is anchored at UPF-1225and goes through the IWF-1220before reaching the 5G core network. Data traffic of the UE205mapped to the QoS Flow-2410is carried over an IPsec Security Association (SA)265between the UE205and the IWF-1220, which is established during the setup of the PDU Session260of the UE205.
Note that if no established QoS flow for the 5G-RG210supports the appropriate QoS, then the 5G-RG210creates a new QoS flow to carry IPsec traffic of the UE205. In various embodiments, the 5G-RG210may establish a new QoS flow for some IPsec SAs and map the other IPsec SAs into existing QoS flows. For ease of illustration, only a single IPsec SA is shown inFIG.4. As discussed above with reference toFIG.3A, the UE205sends an ADDTS request to the 5G-RG210allowing the 5G-RG210to map the data traffic of the UE205(i.e., IPsec SA traffic) into an appropriate QoS flow of the PDU Session405. Without knowing the QoS requirements of the data traffic of the UE205, the 5G-RG210would be unable to perform the appropriate mapping. Rather, the 5G-RG210would blindly map the data traffic of the UE205into one of its QoS flows without knowing the QoS requirements of this data traffic. Therefore, it is very likely that the blindly mapped data traffic of the UE205would not receive the appropriate QoS handling as it goes through the PDU Session405of the 5G-RG210and thus it would not be possible for the PDU Session of the UE205to guarantee its assigned level of QoS. FIG.5depicts one embodiment of a gateway apparatus500that may be used for modifying a data connection to support QoS requirements, according to embodiments of the disclosure. The gateway apparatus500may be one embodiment of the 5G-RG137and/or the 5G-RG210. Furthermore, the gateway apparatus500may include a processor505, a memory510, an input device515, an output device520, and a transceiver525. In some embodiments, the input device515and the output device520are combined into a single device, such as a touch screen. In certain embodiments, the gateway apparatus500does not include any input device515and/or output device520. As depicted, the transceiver525includes at least one transmitter530and at least one receiver535.
Here, the transceiver525communicates with one or more remote units105and with one or more interworking functions135that provide access to one or more PLMNs. Additionally, the transceiver525may support at least one network interface540. For example, the transceiver525may support a first interface that supports a first data connection with a 5G core network over a first access network, the first data connection supporting a plurality of QoS flows, and a second interface that communicates with a remote unit over a second access network. The processor505, in one embodiment, may include any known controller capable of executing computer-readable instructions and/or capable of performing logical operations. For example, the processor505may be a microcontroller, a microprocessor, a central processing unit (“CPU”), a graphics processing unit (“GPU”), an auxiliary processing unit, a field programmable gate array (“FPGA”), or similar programmable controller. In some embodiments, the processor505executes instructions stored in the memory510to perform the methods and routines described herein. The processor505is communicatively coupled to the memory510, the input device515, the output device520, and the transceiver525. In various embodiments, the processor505receives a first request over the second access network and determines whether a requested second data connection can be mapped into one of the plurality of QoS flows over the first data connection. Here, the first request contains a first set of parameters for establishing a second data connection with the remote unit over the second access network. In some embodiments, the first access network and the second access network utilize different access technologies, wherein the processor505further converts the first set of parameters into a second set of parameters.
In some embodiments, the data traffic relayed between the second data connection and the new QoS flow over the first data connection is the data traffic of a child IPsec security association established between the remote unit and an Interworking function in the 5G core network. For example, in certain embodiments the first request may be an ADDTS request and the first set of parameters may contain a TCLAS parameter and a TSPEC parameter. In such embodiments, the processor505may determine whether the second data connection can be mapped into one of the plurality of QoS flows over the first data connection by comparing the TSPEC parameter with QoS parameters associated with each of the plurality of QoS flows over the first data connection. In one embodiment, the processor505further converts the TCLAS parameter into a Requested QoS Rules parameter and the TSPEC parameter into a Requested QoS Flow Description parameter, wherein the Requested QoS Rules parameter and the Requested QoS Flow Description parameter are contained in the second set of parameters. The processor505sends a second request to establish a new QoS flow over the first data connection in response to determining that the second data connection cannot be mapped into one of the plurality of QoS flows over the first data connection, the second request containing a second set of parameters derived from the first set of parameters and relays the data traffic between the second data connection and the new QoS flow over the first data connection. Additionally, the processor505may relay the data traffic between the second data connection and an existing one of the plurality of QoS flows over the first data connection in response to determining that the second data connection can be mapped to the existing one of the plurality of QoS flows over the first data connection. 
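The mapping determination described above (comparing the TSPEC of an incoming ADDTS request against the QoS parameters of each established QoS flow) can be sketched as follows. This is a minimal illustration: the QosFlow and Tspec field names and the two-parameter comparison rule are assumptions made here for clarity, not structures defined in the disclosure or in the 3GPP/IEEE specifications.

```python
# Illustrative sketch of the QoS-flow mapping decision; field names and the
# comparison rule are assumptions for clarity, not specification-defined.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class QosFlow:
    qfi: int            # QoS Flow Identifier
    max_bit_rate: int   # bit rate this flow can guarantee (bit/s)
    max_delay_ms: int   # packet delay budget of this flow (ms)

@dataclass
class Tspec:
    mean_data_rate: int  # bit rate requested in the ADDTS TSPEC (bit/s)
    delay_bound_ms: int  # delay bound requested in the ADDTS TSPEC (ms)

def find_matching_flow(tspec: Tspec, flows: List[QosFlow]) -> Optional[QosFlow]:
    """Return an existing QoS flow that satisfies the requested TSPEC,
    or None if a new QoS flow must be requested instead."""
    for flow in flows:
        if (flow.max_bit_rate >= tspec.mean_data_rate
                and flow.max_delay_ms <= tspec.delay_bound_ms):
            return flow
    return None
```

If find_matching_flow returns None, the gateway would proceed as described above and request a new QoS flow; otherwise it relays the data traffic of the second data connection onto the returned flow.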
In some embodiments, the second request may contain a request to modify the first data connection by creating a new QoS flow that supports the second set of parameters. In certain embodiments, the first data connection may be a PDU Session, wherein the second request contains a PDU Session Modification Request. In some embodiments, the first request indicates QoS resources to reserve over the second access network. In such embodiments, determining whether the second data connection can be mapped into one of the plurality of QoS flows over the first data connection includes comparing the QoS resources to reserve over the second access network with QoS parameters associated with each of the plurality of QoS flows over the first data connection. The memory510, in one embodiment, is a computer readable storage medium. In some embodiments, the memory510includes volatile computer storage media. For example, the memory510may include a RAM, including dynamic RAM (“DRAM”), synchronous dynamic RAM (“SDRAM”), and/or static RAM (“SRAM”). In some embodiments, the memory510includes non-volatile computer storage media. For example, the memory510may include a hard disk drive, a flash memory, or any other suitable non-volatile computer storage device. In some embodiments, the memory510includes both volatile and non-volatile computer storage media. In some embodiments, the memory510stores data relating to modifying a data connection to support QoS requirements, for example storing TCLAS parameters, TSPEC parameters, parameter conversion tables, IPsec security associations, and the like. In certain embodiments, the memory510also stores program code and related data, such as an operating system (“OS”) or other controller algorithms operating on the gateway apparatus500and one or more software applications. The input device515, in one embodiment, may include any known computer input device including a touch panel, a button, a keyboard, a stylus, a microphone, or the like. 
In some embodiments, the input device515may be integrated with the output device520, for example, as a touchscreen or similar touch-sensitive display. In some embodiments, the input device515includes a touchscreen such that text may be input using a virtual keyboard displayed on the touchscreen and/or by handwriting on the touchscreen. In some embodiments, the input device515includes two or more different devices, such as a keyboard and a touch panel. The output device520, in one embodiment, may include any known electronically controllable display or display device. The output device520may be designed to output visual, audible, and/or haptic signals. In some embodiments, the output device520includes an electronic display capable of outputting visual data to a user. For example, the output device520may include, but is not limited to, an LCD display, an LED display, an OLED display, a projector, or similar display device capable of outputting images, text, or the like to a user. As another, non-limiting, example, the output device520may include a wearable display such as a smart watch, smart glasses, a heads-up display, or the like. Further, the output device520may be a component of a smart phone, a personal digital assistant, a television, a tablet computer, a notebook (laptop) computer, a personal computer, a vehicle dashboard, or the like. In certain embodiments, the output device520includes one or more speakers for producing sound. For example, the output device520may produce an audible alert or notification (e.g., a beep or chime). In some embodiments, the output device520includes one or more haptic devices for producing vibrations, motion, or other haptic feedback. In some embodiments, all or portions of the output device520may be integrated with the input device515. For example, the input device515and output device520may form a touchscreen or similar touch-sensitive display. 
In other embodiments, all or portions of the output device520may be located near the input device515. As discussed above, the transceiver525may communicate with one or more remote units and/or with one or more interworking functions that provide access to one or more PLMNs. The transceiver525may also communicate with one or more network functions (e.g., in the mobile core network140). The transceiver525operates under the control of the processor505to transmit messages, data, and other signals and also to receive messages, data, and other signals. For example, the processor505may selectively activate the transceiver (or portions thereof) at particular times in order to send and receive messages. The transceiver525may include one or more transmitters530and one or more receivers535. In certain embodiments, the one or more transmitters530and/or the one or more receivers535may share transceiver hardware and/or circuitry. For example, the one or more transmitters530and/or the one or more receivers535may share antenna(s), antenna tuner(s), amplifier(s), filter(s), oscillator(s), mixer(s), modulator/demodulator(s), power supply, and the like. In one embodiment, the transceiver525implements multiple logical transceivers using different communication protocols or protocol stacks, while using common physical hardware. FIG.6depicts one embodiment of a user equipment apparatus600that may be used for modifying a data connection to support QoS requirements, according to embodiments of the disclosure. The user equipment apparatus600may be one embodiment of the remote unit105and/or the gateway UE107. Furthermore, the user equipment apparatus600may include a processor605, a memory610, an input device615, an output device620, a transceiver625. In some embodiments, the input device615and the output device620are combined into a single device, such as a touch screen. In certain embodiments, the user equipment apparatus600does not include any input device615and/or output device620. 
As depicted, the transceiver625includes at least one transmitter630and at least one receiver635. Here, the transceiver625communicates with a mobile core network (e.g., a 5GC) via an interworking function (e.g., TNGF or N3IWF) and over a non-3GPP access network. Additionally, the transceiver625may support at least one network interface640. For example, when functioning as a gateway UE, the transceiver625may support a first interface that supports a first data connection with a 5G core network over a first access network, the first data connection supporting a plurality of QoS flows, and a second interface that communicates with a remote unit over a second access network. The processor605, in one embodiment, may include any known controller capable of executing computer-readable instructions and/or capable of performing logical operations. For example, the processor605may be a microcontroller, a microprocessor, a central processing unit (“CPU”), a graphics processing unit (“GPU”), an auxiliary processing unit, a field programmable gate array (“FPGA”), or similar programmable controller. In some embodiments, the processor605executes instructions stored in the memory610to perform the methods and routines described herein. The processor605is communicatively coupled to the memory610, the input device615, the output device620, and the transceiver625. In various embodiments, when functioning as a gateway UE, the processor605receives a first request over the second access network and determines whether a requested second data connection can be mapped into one of the plurality of QoS flows over the first data connection. Here, the first request contains a first set of parameters for establishing a second data connection with the remote unit over the second access network. 
In some embodiments, the first access network and the second access network utilize different access technologies, wherein the processor605further converts the first set of parameters into the second set of parameters. In some embodiments, the data traffic relayed between the second data connection and the new QoS flow over the first data connection is the data traffic of a child IPsec security association established between the remote unit and an Interworking function in the 5G core network. For example, in certain embodiments the first request may be an ADDTS request and the first set of parameters may contain a TCLAS parameter and a TSPEC parameter. In such embodiments, the processor605may determine whether the second data connection can be mapped into one of the plurality of QoS flows over the first data connection by comparing the TSPEC parameter with QoS parameters associated with each of the plurality of QoS flows over the first data connection. In one embodiment, the processor605further converts the TCLAS parameter into a Requested QoS Rules parameter and the TSPEC parameter into a Requested QoS Flow Description parameter, wherein the Requested QoS Rules parameter and the Requested QoS Flow Description parameter are contained in the second set of parameters. The processor605sends a second request to establish a new QoS flow over the first data connection in response to determining that the second data connection cannot be mapped into one of the plurality of QoS flows over the first data connection, the second request containing a second set of parameters derived from the first set of parameters and relays the data traffic between the second data connection and the new QoS flow over the first data connection. 
Additionally, the processor605may relay the data traffic between the second data connection and an existing one of the plurality of QoS flows over the first data connection in response to determining that the second data connection can be mapped to the existing one of the plurality of QoS flows over the first data connection. In some embodiments, the second request may contain a request to modify the first data connection by creating a new QoS flow that supports the second set of parameters. In certain embodiments, the first data connection may be a PDU Session, wherein the second request contains a PDU Session Modification Request. In some embodiments, the first request indicates QoS resources to reserve over the second access network. In such embodiments, determining whether the second data connection can be mapped into one of the plurality of QoS flows over the first data connection includes comparing the QoS resources to reserve over the second access network with QoS parameters associated with each of the plurality of QoS flows over the first data connection. The memory610, in one embodiment, is a computer readable storage medium. In some embodiments, the memory610includes volatile computer storage media. For example, the memory610may include a RAM, including dynamic RAM (“DRAM”), synchronous dynamic RAM (“SDRAM”), and/or static RAM (“SRAM”). In some embodiments, the memory610includes non-volatile computer storage media. For example, the memory610may include a hard disk drive, a flash memory, or any other suitable non-volatile computer storage device. In some embodiments, the memory610includes both volatile and non-volatile computer storage media. In some embodiments, the memory610stores data relating to modifying a data connection to support QoS requirements, for example storing TCLAS parameters, TSPEC parameters, parameter conversion tables, IPsec security associations, and the like. 
In certain embodiments, the memory610also stores program code and related data, such as an operating system (“OS”) or other controller algorithms operating on the user equipment apparatus600and one or more software applications. The input device615, in one embodiment, may include any known computer input device including a touch panel, a button, a keyboard, a stylus, a microphone, or the like. In some embodiments, the input device615may be integrated with the output device620, for example, as a touchscreen or similar touch-sensitive display. In some embodiments, the input device615includes a touchscreen such that text may be input using a virtual keyboard displayed on the touchscreen and/or by handwriting on the touchscreen. In some embodiments, the input device615includes two or more different devices, such as a keyboard and a touch panel. The output device620, in one embodiment, may include any known electronically controllable display or display device. The output device620may be designed to output visual, audible, and/or haptic signals. In some embodiments, the output device620includes an electronic display capable of outputting visual data to a user. For example, the output device620may include, but is not limited to, an LCD display, an LED display, an OLED display, a projector, or similar display device capable of outputting images, text, or the like to a user. As another, non-limiting, example, the output device620may include a wearable display such as a smart watch, smart glasses, a heads-up display, or the like. Further, the output device620may be a component of a smart phone, a personal digital assistant, a television, a tablet computer, a notebook (laptop) computer, a personal computer, a vehicle dashboard, or the like. In certain embodiments, the output device620includes one or more speakers for producing sound. For example, the output device620may produce an audible alert or notification (e.g., a beep or chime). 
In some embodiments, the output device620includes one or more haptic devices for producing vibrations, motion, or other haptic feedback. In some embodiments, all or portions of the output device620may be integrated with the input device615. For example, the input device615and output device620may form a touchscreen or similar touch-sensitive display. In other embodiments, all or portions of the output device620may be located near the input device615. As discussed above, the transceiver625communicates with one or more network functions of a mobile communication network via one or more access networks. The transceiver625operates under the control of the processor605to transmit messages, data, and other signals and also to receive messages, data, and other signals. For example, the processor605may selectively activate the transceiver (or portions thereof) at particular times in order to send and receive messages. The transceiver625may include one or more transmitters630and one or more receivers635. Although only one transmitter630and one receiver635are illustrated, the user equipment apparatus600may have any suitable number of transmitters630and receivers635. Further, the transmitter(s)630and the receiver(s)635may be any suitable type of transmitters and receivers. In one embodiment, the transceiver625includes a first transmitter/receiver pair used to communicate with a mobile communication network over licensed radio spectrum and a second transmitter/receiver pair used to communicate with a mobile communication network over unlicensed radio spectrum. In certain embodiments, the first transmitter/receiver pair used to communicate with a mobile communication network over licensed radio spectrum and the second transmitter/receiver pair used to communicate with a mobile communication network over unlicensed radio spectrum may be combined into a single transceiver unit, for example a single chip performing functions for use with both licensed and unlicensed radio spectrum. 
In some embodiments, the first transmitter/receiver pair and the second transmitter/receiver pair may share one or more hardware components. For example, certain transceivers625, transmitters630, and receivers635may be implemented as physically separate components that access a shared hardware resource and/or software resource, such as for example, the network interface640. In various embodiments, one or more transmitters630and/or one or more receivers635may be implemented and/or integrated into a single hardware component, such as a multi-transceiver chip, a system-on-a-chip, an ASIC, or other type of hardware component. In certain embodiments, one or more transmitters630and/or one or more receivers635may be implemented and/or integrated into a multi-chip module. In some embodiments, other components such as the network interface640or other hardware components/circuits may be integrated with any number of transmitters630and/or receivers635into a single chip. In such embodiments, the transmitters630and receivers635may be logically configured as a transceiver625that uses one or more common control signals or as modular transmitters630and receivers635implemented in the same hardware chip or in a multi-chip module. FIG.7depicts a method700for modifying a data connection to support QoS requirements, according to embodiments of the disclosure. In some embodiments, the method700is performed by a gateway device, such as the gateway UE107, the 5G-RG137, the 5G-RG210, the gateway apparatus500, and/or the user equipment apparatus600. In certain embodiments, the method700may be performed by a processor executing program code, for example, a microcontroller, a microprocessor, a CPU, a GPU, an auxiliary processing unit, an FPGA, or the like. The method700begins and supports705a first data connection with a 5G core network over a first access network. Here, the first data connection supports a plurality of QoS flows. 
The method700includes receiving710a first request over a second access network. Here, the first request contains a first set of parameters for establishing a second data connection with a remote unit over the second access network. In some embodiments, the first request indicates QoS resources to reserve over the second access network. In some embodiments, the first access network and the second access network utilize different access technologies, thus the first set of parameters may require conversion into the second set of parameters. The method700includes determining715whether the second data connection can be mapped into one of the plurality of QoS flows over the first data connection. In certain embodiments, determining whether the second data connection can be mapped into one of the plurality of QoS flows over the first data connection includes comparing QoS resources to reserve over the second access network with QoS parameters associated with each of the plurality of QoS flows over the first data connection. In certain embodiments, the first request is an ADDTS request and the first set of parameters includes a TCLAS parameter and a TSPEC parameter. In such embodiments, determining whether the second data connection can be mapped into one of the plurality of QoS flows over the first data connection includes comparing the TSPEC parameter with QoS parameters associated with each of the plurality of QoS flows over the first data connection. The method700includes sending720a second request to establish a new QoS flow over the first data connection in response to determining that the second data connection cannot be mapped into one of the plurality of QoS flows over the first data connection. Here, the second request contains a second set of parameters derived from the first set of parameters. In some embodiments, the second request contains a request to modify the first data connection by creating a new QoS flow that supports the second set of parameters. 
In such embodiments, the first data connection may be a PDU Session, wherein the second request contains a PDU Session Modification Request. The method700includes relaying725the data traffic between the second data connection and the new QoS flow over the first data connection. In some embodiments, the data traffic that is relayed between the second data connection and the new QoS flow over the first data connection is the data traffic of a child IPsec security association established between the remote unit and an Interworking function in the 5G core network. The method700ends. Disclosed herein is a first apparatus for modifying a data connection to support QoS requirements, according to embodiments of the disclosure. The first apparatus may be implemented by a gateway device, such as the gateway UE107, the 5G-RG137, the 5G-RG210, the gateway apparatus500, and/or the user equipment apparatus600. The first apparatus includes a processor, a first interface that supports a first data connection with a 5G core network over a first access network, the first data connection supporting a plurality of QoS flows, and a second interface that communicates with a remote unit over a second access network. The processor receives a first request over the second access network, the first request containing a first set of parameters for establishing a second data connection with the remote unit over the second access network. The processor determines whether the second data connection can be mapped into one of the plurality of QoS flows over the first data connection. 
The processor sends a second request to establish a new QoS flow over the first data connection in response to determining that the second data connection cannot be mapped into one of the plurality of QoS flows over the first data connection, the second request containing a second set of parameters derived from the first set of parameters and relays the data traffic between the second data connection and the new QoS flow over the first data connection. In some embodiments, the processor relays the data traffic between the second data connection and an existing one of the plurality of QoS flows over the first data connection in response to determining that the second data connection can be mapped to the existing one of the plurality of QoS flows over the first data connection. In some embodiments, the second request contains a request to modify the first data connection by creating a new QoS flow that supports the second set of parameters. In such embodiments, the first data connection may be a PDU Session, wherein the second request contains a PDU Session Modification Request. In some embodiments, the first request indicates QoS resources to reserve over the second access network. In such embodiments, determining whether the second data connection can be mapped into one of the plurality of QoS flows over the first data connection includes comparing the QoS resources to reserve over the second access network with QoS parameters associated with each of the plurality of QoS flows over the first data connection. In certain embodiments, the first request is an ADDTS request and the first set of parameters contains a TCLAS parameter and a TSPEC parameter. In such embodiments, determining whether the second data connection can be mapped into one of the plurality of QoS flows over the first data connection may include comparing the TSPEC parameter with QoS parameters associated with each of the plurality of QoS flows over the first data connection. 
In one embodiment, the processor further converts the TCLAS parameter into a Requested QoS Rules parameter and the TSPEC parameter into a Requested QoS Flow Description parameter, wherein the Requested QoS Rules parameter and the Requested QoS Flow Description parameter are contained in the second set of parameters. In some embodiments, the first access network and the second access network utilize different access technologies, wherein the processor further converts the first set of parameters into the second set of parameters. In some embodiments, the data traffic relayed between the second data connection and the new QoS flow over the first data connection is the data traffic of a child IPsec security association established between the remote unit and an Interworking function in the 5G core network. Disclosed herein is a first method for modifying a data connection to support QoS requirements, according to embodiments of the disclosure. The first method may be performed by a gateway device, such as the gateway UE107, the 5G-RG137, the 5G-RG210, the gateway apparatus500, and/or the user equipment apparatus600. The first method includes supporting a first data connection with a 5G core network over a first access network, the first data connection supporting a plurality of QoS flows. The first method includes receiving a first request over a second access network, the first request containing a first set of parameters for establishing a second data connection with a remote unit over the second access network. The first method includes determining whether the second data connection can be mapped into one of the plurality of QoS flows over the first data connection. 
The first method includes sending a second request to establish a new QoS flow over the first data connection in response to determining that the second data connection cannot be mapped into one of the plurality of QoS flows over the first data connection, the second request containing a second set of parameters derived from the first set of parameters. The first method includes relaying the data traffic between the second data connection and the new QoS flow over the first data connection. In some embodiments, the first method further includes relaying the data traffic between the second data connection and an existing one of the plurality of QoS flows over the first data connection in response to determining that the second data connection can be mapped to the existing one of the plurality of QoS flows over the first data connection. In some embodiments, the second request contains a request to modify the first data connection by creating a new QoS flow that supports the second set of parameters. In such embodiments, the first data connection may be a PDU Session, wherein the second request contains a PDU Session Modification Request. In some embodiments, the first request includes QoS resources to reserve over the second access network. In certain embodiments, determining whether the second data connection can be mapped into one of the plurality of QoS flows over the first data connection includes comparing the QoS resources to reserve over the second access network with QoS parameters associated with each of the plurality of QoS flows over the first data connection. In certain embodiments, the first request is an ADDTS request and the first set of parameters contains a TCLAS parameter and a TSPEC parameter. 
In such embodiments, determining whether the second data connection can be mapped into one of the plurality of QoS flows over the first data connection includes comparing the TSPEC parameter with QoS parameters associated with each of the plurality of QoS flows over the first data connection. In some embodiments, the first method may further include converting the TCLAS parameter into a Requested QoS Rules parameter and the TSPEC parameter into a Requested QoS Flow Description parameter, wherein the Requested QoS Rules parameter and the Requested QoS Flow Description parameter are contained in the second set of parameters. In some embodiments, the first access network and the second access network utilize different access technologies. In such embodiments, the first method may further include converting the first set of parameters into the second set of parameters. In some embodiments, the data traffic relayed between the second data connection and the new QoS flow over the first data connection is the data traffic of a child IPsec security association established between the remote unit and an Interworking function in the 5G core network. Embodiments may be practiced in other specific forms. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
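The parameter conversion described in the first method (a TCLAS classifier becoming a Requested QoS Rules packet filter and a TSPEC becoming a Requested QoS Flow Description) can be sketched as follows. The field names below are hypothetical stand-ins chosen for illustration; the actual encodings of these parameters are defined by the IEEE 802.11 and 3GPP specifications, not by this sketch.

```python
# Hedged sketch of converting 802.11 ADDTS parameters into the second set of
# parameters carried in a PDU Session Modification Request. All dictionary
# keys are illustrative assumptions, not specification-defined field names.

def tclas_to_qos_rule(tclas: dict) -> dict:
    """Map a TCLAS frame classifier (here modeled as an IP tuple) to a
    Requested QoS Rules packet filter."""
    return {
        "packet_filter": {
            "src_addr": tclas["source_ip"],
            "dst_addr": tclas["dest_ip"],
            "protocol": tclas["protocol"],
            "dst_port": tclas["dest_port"],
        },
        "direction": "bidirectional",
    }

def tspec_to_flow_description(tspec: dict) -> dict:
    """Map TSPEC traffic parameters to a Requested QoS Flow Description."""
    return {
        "gfbr": tspec["mean_data_rate"],         # guaranteed flow bit rate
        "mfbr": tspec["peak_data_rate"],         # maximum flow bit rate
        "packet_delay_budget_ms": tspec["delay_bound_ms"],
    }

def build_second_request(tclas: dict, tspec: dict) -> dict:
    """Assemble the second set of parameters derived from the first set,
    for the request that asks the 5G core to create a new QoS flow."""
    return {
        "requested_qos_rules": tclas_to_qos_rule(tclas),
        "requested_qos_flow_description": tspec_to_flow_description(tspec),
    }
```

A gateway applying this conversion would place the resulting structure in the second request only after determining that no existing QoS flow can carry the traffic.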
DETAILED DESCRIPTION It should be understood at the outset that although an illustrative implementation of one or more embodiments is provided below, the disclosed systems and/or methods may be implemented using any number of techniques, whether currently known or in existence. The disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary designs and implementations illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents. FIG.1Ais a diagram illustrating a network100(also referred to herein as a “domain” or “subnetwork”) configured to implement preferred path routing. The network100comprises a central entity165(also referred to herein as a “controller”) and multiple network entities (NEs)101-121. The NEs101-121are interconnected via links122-154. The central entity165is connected to one of the NEs104via a central entity-to-domain link166. In an embodiment, the central entity165may be a network or domain controller that maintains a topology of the network100to craft paths (shortest paths, PPRs, and PPR graphs) between edge NEs101-121in the network100, as will be further described below. In an embodiment, the central entity165is substantially similar to a Path Computation Element (PCE), which is further described in Internet Engineering Task Force (IETF) Request for Comments (RFC) 8281, entitled “Path Computation Element Communication Protocol (PCEP) Extensions for PCE-Initiated LSP Setup in a Stateful PCE Model,” by E. Crabbe, dated December 2017, and which is hereby incorporated by reference in its entirety. In an embodiment, the central entity165may be substantially similar to a Software Defined Network Controller (SDNC), which is further described in the IETF RFC 8402 entitled “Segment Routing Architecture,” by C. 
Filsfils, dated July 2018, and which is hereby incorporated by reference in its entirety. In an embodiment, the central entity165may be substantially similar to an Application Layer Traffic Optimization (ALTO) server, which is further described in the IETF RFC 7285, entitled “Application Layer Traffic Optimization (ALTO) Protocol,” by R. Alimi, dated September 2014, and which is hereby incorporated by reference in its entirety. In an embodiment, NEs101-121(also referred to herein as “nodes”) may be topological devices (or physical devices) such as a router, a bridge, a network switch, or a logical device configured to perform switching and routing using the preferred path routing mechanisms disclosed herein. In an embodiment, one or more of the NEs101-121may be non-topological NEs such as, for example, a function, context, service, or a virtual machine. A non-topological NE may be implemented by the NEs101-121or implemented by another device attached to the NEs101-121. In an embodiment, NEs101-121may be headend nodes or edge nodes positioned at an edge of the network100. While NEs101-121are shown inFIG.1Aas headend nodes, it should be appreciated that NEs101-121may otherwise be intermediate nodes or any other type of NE. Although only twenty-one NEs101-121are shown inFIG.1A, it should be appreciated that the network100shown inFIG.1Amay include any number of NEs. In an embodiment, the central entity165and NEs101-121are configured to implement various packet forwarding protocols, such as, but not limited to, Multi-Protocol Label Switching (MPLS), Segment Routing-MPLS (SR-MPLS), Internet Protocol (IP) Version 4 (IPv4), IP Version 6 (IPv6), Next Generation Explicit Routing (NGER), or any future packet forwarding protocol. The links122-154may be wired links, wireless links, or interfaces interconnecting the NEs101-121together. 
Similarly, the central entity-to-domain link166is a wired link, wireless link, or interfaces interconnecting at least one of the NEs101-121to the central entity165. In operation, the central entity165is configured to determine one or more shortest paths between two edge NEs101-121in the network100and one or more PPRs160A-D between different edge NEs101-121in the network100. A shortest path refers to a path between two NEs101-121that is determined based on a metric, such as, for example, a cost or weight associated with each link on the path, a number of NEs on the path, a number of links on the path, etc. In an embodiment, a shortest path may be computed for a destination using a Dijkstra's Shortest Path First (SPF) algorithm. A PPR160A-D (also referred to herein as a “Non-Shortest Path” (NSP)) refers to a custom path or any other path that is determined based on an application or server request for a path between an ingress NE101-121and an egress NE101-121(or between a source and destination). In an embodiment, the PPR160A-D deviates from the shortest path. However, the PPR160A-D may also be the same as the shortest path in some circumstances. The PPR160A-D includes a sequential ordering of elements121-154(e.g., NEs101-121and/or links122-154) along a path in the network100. In an embodiment, the central entity165determines the PPRs160A-D based on a network topology of network100, which is maintained at the central entity165. In this embodiment, the central entity165generates PPR information170, describing each of the PPRs160A-D, and sends the PPR information170to an NE104via the central entity-to-domain link166. As will be further described below, the PPR information170may include details regarding each of the PPRs160A-D, such as, for example, a PPR-identifier (PPR-ID) of each PPR160A-D, attributes associated with resources to be reserved on each PPR160A-D, and multiple PPR-path description elements (PPR-PDEs) describing one or more elements on each PPR160A-D. 
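By way of illustration only, the Shortest Path First computation referenced above (Dijkstra's algorithm run over per-link metrics such as cost or weight) can be sketched as follows. The topology fragment, node names, and unit link costs below are hypothetical and are not part of the disclosed network100.

```python
import heapq

def shortest_path(adj, src, dst):
    """Dijkstra's SPF: adj maps node -> list of (neighbor, link cost)."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # reconstruct the path from the predecessor map
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# hypothetical topology fragment with unit link costs
adj = {
    "NE101": [("NE102", 1), ("NE121", 1)],
    "NE102": [("NE103", 1)],
    "NE103": [("NE104", 1)],
    "NE104": [],
    "NE121": [("NE104", 1)],
}
path, cost = shortest_path(adj, "NE101", "NE104")
# path -> ["NE101", "NE121", "NE104"], cost -> 2
```

A PPR, by contrast, is provisioned explicitly via PPR-PDEs and may deliberately deviate from the path this computation would select.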
In this embodiment, NE104floods the PPR information170to the remaining NEs101-103and105-121in the network100using the underlying Interior Gateway Protocol (IGP) of the network100. For example, NE104transmits the PPR information170to neighboring NEs103and105. NE103forwards the PPR information170to neighboring NE102, and NE105forwards the PPR information170to neighboring NE106. In this way, the remaining NEs101-103and105-121continue to forward the PPR information170to all the remaining NEs101-103and105-121in the network100. The IGP implemented by the network100may be Open Shortest Path First (OSPF) Version 2 (OSPFv2), OSPF Version 3 (OSPFv3), Intermediate System—Intermediate System (IS-IS), or direct SDN. In another embodiment, an operator or administrator of the network100may determine the PPRs160A-D and send the PPR information170describing the PPRs160A-D to one of the NEs101-121in the network100. The PPR information170may then be flooded to all the remaining NEs101-121in the network100. After receiving the PPR information170, each of the NEs101-121is configured to determine whether the respective NE101-121is identified in the PPR information170describing one or more of the PPRs160A-D. If so, then the respective NE101-121updates the local forwarding database by adding a forwarding entry defining a next hop for a particular destination or egress NE identified by the PPR information170. If not, then the respective NE101-121ignores the PPR information170. In a network100implementing preferred path routing, a separate PPR160A-D is determined and provisioned for each ingress and egress NE101-121pair. As shown byFIG.1A, PPR160A is a path provisioned between ingress NE101and egress NE107. PPR160A includes the following elements: ingress NE101, link141, NE121, link150, NE120, link149, NE119, link148, NE118, link145, NE115, link138, NE108, link128, and egress NE107. PPR160B is a path provisioned between ingress NE101and egress NE118. 
PPR160B includes the following elements: ingress NE101, link141, NE121, link150, NE120, link149, NE119, link148, and NE118. PPR160C is a path provisioned between ingress NE121and egress NE118. PPR160C includes the following elements: ingress NE121, link150, NE120, link149, NE119, link148, and NE118. PPR160D is a path provisioned between ingress NE121and egress NE107. PPR160D includes the following elements: ingress NE121, link150, NE120, link149, NE119, link148, NE118, link145, NE115, link138, NE108, link128, and egress NE107. In this case, the central entity165or an administrator of the network100generates the PPR information170to include details regarding each of PPRs160A-D and each of the elements on each of the PPRs160A-D. For example, a separate PPR-ID is determined for each of the PPRs160A-D, and a separate PPR-PDE is created for one or more of the elements on each of the PPRs160A-D. This PPR information170is flooded through the network100using the underlying IGP of the network such that one or more of the NEs101-121in network100store the PPR information170. Therefore, to provision PPRs160A-D in network100, PPR information170for each of the PPRs160A-D has to be created, flooded through the network100, and stored at one or more of the NEs101-121. This creates a scalability issue across the network100, in that the amount of PPR information170that has to be created, forwarded, and stored is extensive when there are a large number of PPRs160A-D to be provisioned in the network100. In various embodiments, PPR graphs represent a plurality of PPRs160A-D between one or more ingress NEs101-121and one or more egress NEs101-121in the network100. Instead of creating PPR information170for each PPR160A-D in a network100, the PPR information170describes PPR graphs, as disclosed herein. The PPR graphs include flags or bits to indicate whether an NE101-121is an ingress NE101-121or an egress NE101-121.
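The flooding-and-update behavior described above, in which each NE that receives the PPR information170checks whether it is identified in the path description and, if so, installs a forwarding entry defining the next hop toward the egress, might be sketched as follows. The function name, the flat list of element identifiers, and the dictionary-based forwarding database are all hypothetical simplifications.

```python
def process_ppr_info(my_id, ppr_id, pde_ids, forwarding_db):
    """If this NE appears in the PPR's ordered path description elements,
    install a forwarding entry mapping the PPR-ID to the next element
    toward the egress (the element after this NE); otherwise ignore."""
    if my_id not in pde_ids:
        return False  # this NE is not on the PPR: ignore the advertisement
    idx = pde_ids.index(my_id)
    if idx + 1 < len(pde_ids):          # not the egress NE itself
        forwarding_db[ppr_id] = pde_ids[idx + 1]
    return True

# PPR 160B from FIG. 1A (NEs only, links omitted for brevity):
pdes = ["NE101", "NE121", "NE120", "NE119", "NE118"]
fdb = {}
process_ppr_info("NE120", "PPR-ID-160B", pdes, fdb)
# fdb now maps "PPR-ID-160B" -> "NE119"
```

An NE not named in the description (e.g., NE106 for PPR160B) simply leaves its forwarding database unchanged, which is the "ignore" branch described above.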
In this way, the amount of PPR information170that has to be created, forwarded, and stored across the network100is dramatically decreased. By using a single PPR graph instead of multiple PPRs160A-D, paths may be provisioned in a more resource-efficient manner that saves computing resources and network resources. FIG.1Bis a diagram illustrating a network175configured to implement preferred path routing and PPR graphs180according to various embodiments of the disclosure. Network175is similar to network100, except that network175is configured to provision a single PPR graph180, instead of multiple PPRs160A-D. In an embodiment, a PPR graph180represents multiple PPRs160A-D between one or more ingress NEs101-121(also referred to herein as “sources”) and one or more egress NEs101-121(also referred to herein as “destinations”). As shown byFIG.1B, the PPR graph180includes the following elements: NE101, link141, NE121, link150, NE120, link149, NE119, link148, NE118, link145, NE115, link138, NE108, link128, and NE107. PPR graph180includes two ingress NEs101and121and two egress NEs118and107. The ingress NEs101and121are represented inFIG.1Bwith rectangles around NEs101and121, and the egress NEs118and107are represented inFIG.1Bwith circles around NEs118and107. In an embodiment, the PPR graph180represents multiple possible paths between the ingress NEs101and121and the egress NEs118and107. For example, as shown inFIG.1B, PPR graph180includes all of the elements from PPR160A (FIG.1A): ingress NE101, link141, NE121, link150, NE120, link149, NE119, link148, NE118, link145, NE115, link138, NE108, link128, and egress NE107. PPR graph180also includes all of the elements from PPR160B (FIG.1A): ingress NE101, link141, NE121, link150, NE120, link149, NE119, link148, and NE118. Similarly, PPR graph180includes all of the elements from PPR160C (FIG.1A): ingress NE121, link150, NE120, link149, NE119, link148, and NE118.
Finally, PPR graph180includes all of the elements from PPR160D (FIG.1A): ingress NE121, link150, NE120, link149, NE119, link148, NE118, link145, NE115, link138, NE108, link128, and egress NE107. In this embodiment, the PPR graph180includes a PPR160A between ingress NE101and egress NE107, a PPR160B between ingress NE101and egress NE118, a PPR160C between ingress NE121and egress NE118, and a PPR160D between ingress NE121and egress NE107. Therefore, in various embodiments, a PPR graph180includes a plurality of different PPRs160A-D, having different ingress NEs101and121and different egress NEs118and107. In an embodiment, the central entity165or a network administrator determines the PPR graph180based on a network topology of network175maintained at the central entity165and based on a network resource to be reserved for the PPR graph180. In an embodiment, the central entity165or the network administrator generates PPR information170describing the PPR graph180and sends the PPR information170to a headend NE104in network175via central entity-to-domain link166. As shown byFIG.1B, the PPR information170includes a PPR type183, one or more PPR-IDs186, and one or more PPR-PDEs190. The PPR type183indicates a type of PPR graph180. The different types of PPR graphs180will be further described below with reference toFIGS.4-10. The one or more PPR-IDs186includes information identifying various paths within the PPR graph180. In an embodiment, each of the PPR-IDs186includes an address, label, or identifier of each egress NE118and107included in the PPR graph180. The PPR-PDEs190include information identifying one or more elements (e.g., NE101, link141, NE121, link150, NE120, link149, NE119, link148, NE118, link145, NE115, link138, NE108, link128, and NE107) on the PPR graph180. In an embodiment, each of PPR-PDEs190includes a label, address, or identifier of one or more of the elements101-154(e.g., NEs101-121and links122-154) on the PPR graph180. 
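A minimal sketch of the PPR information170structure described above (the PPR type183, one PPR-ID186per egress NE, and the PPR-PDEs190), assuming hypothetical field and class names. The source and destination booleans stand in for the flags or bits the disclosure uses to mark ingress and egress NEs on a PPR graph.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PPRPDE:
    element_id: str                 # label/address/identifier of an NE or link
    source_flag: bool = False       # set: this element is an ingress NE
    destination_flag: bool = False  # set: this element is an egress NE

@dataclass
class PPRInformation:
    ppr_type: str                   # type of PPR graph
    ppr_ids: List[str]              # identifiers of paths/egresses in the graph
    pdes: List[PPRPDE] = field(default_factory=list)

# PPR graph 180 of FIG. 1B: ingresses NE101 and NE121; egresses NE118 and NE107
# (NEs only; link PDEs omitted for brevity)
info = PPRInformation(
    ppr_type="graph",
    ppr_ids=["PPR-ID-NE118", "PPR-ID-NE107"],
    pdes=[
        PPRPDE("NE101", source_flag=True),
        PPRPDE("NE121", source_flag=True),
        PPRPDE("NE120"),
        PPRPDE("NE119"),
        PPRPDE("NE118", destination_flag=True),
        PPRPDE("NE115"),
        PPRPDE("NE108"),
        PPRPDE("NE107", destination_flag=True),
    ],
)
sources = [p.element_id for p in info.pdes if p.source_flag]
egresses = [p.element_id for p in info.pdes if p.destination_flag]
```

A receiving NE can recover every ingress and egress of the graph from this single structure, which is what removes the need to advertise each PPR160A-D separately.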
In an embodiment, each of the PPR-PDEs190includes several flags, such as a source flag191and a destination flag193. The source flag191may be a bit that, when set, indicates that the element101-154identified by the PPR-PDE190is an ingress NE101or121on the PPR graph180. The destination flag193is also a bit that, when set, indicates that the element101-154identified by the PPR-PDE190is an egress NE118or107. According to various embodiments, transmitting the PPR information170describing a single PPR graph180, which includes all four PPRs160A-D, instead of separately transmitting PPR information170describing multiple PPRs160A-D, is a more efficient and effective manner of communicating data regarding multiple different PPRs160A-D in a network175. In addition, the amount of data that each of the NEs101-121processes and stores is reduced due to the consolidated nature of the PPR information170describing multiple PPRs160A-D. Therefore, the use of PPR graphs180enables NEs101-121in a network175to be more efficiently programmed to forward traffic according to the various PPR graphs180. FIG.2is a diagram200illustrating the PPR-PDEs190A-H describing elements on the PPR graph180ofFIG.1B, which is included in the PPR information170, according to various embodiments of the disclosure. In an embodiment, PPR-PDEs190A-H describe one or more of the elements (e.g., NE101, link141, NE121, link150, NE120, link149, NE119, link148, NE118, link145, NE115, link138, NE108, link128, and NE107) on the PPR graph180. In the example shown inFIG.2, PPR-PDE190A includes details regarding NE101, such as a label, address, or identifier of NE101. PPR-PDE190A also includes the source flag191set to indicate that NE101is an ingress NE101of the PPR graph180. An ingress NE101is configured to be a source of traffic, or receive traffic from sources external to the network175, and forward the traffic through the network175using the PPR graph180.
PPR-PDE190B includes details regarding NE121, such as a label, address, or identifier of NE121. PPR-PDE190B also includes the source flag191set to indicate that NE121is also an ingress NE121of the PPR graph180. PPR-PDE190C includes details regarding NE120, and PPR-PDE190D includes details regarding NE119. PPR-PDE190E includes details regarding NE118, and includes the destination flag193, which is set to indicate that NE118is an egress NE118of the PPR graph180. This means that NE118is a destination on the PPR graph180and is configured to forward traffic outside of the network175, to another network, or to another entity. PPR-PDE190F includes details regarding NE115, and PPR-PDE190G includes details regarding NE108. PPR-PDE190H includes details regarding NE107, and includes the destination flag193, which again means that the egress NE107is a destination on the PPR graph. Information250shown to the right of diagram200shows the PPR-PDEs190A-H for the four different PPRs160A-D (also referred to herein as “branches253A-D”) shown inFIG.1A, each of which would have had to be forwarded through the network100, and some of which would have had to be locally stored at one or more NEs101-121. However, the PPR-PDEs190A-H including the source flags191and destination flags193essentially include all of the information250shown to the right of diagram200, in a much more compact and efficient data structure. Therefore, data that is transmitted through the network175to provision PPRs is reduced and the amount of computing resources needed to process the data is also reduced. FIG.3is a diagram of an embodiment of an NE300in a network such as networks100and175. NE300may be implemented as the central entity165or the NEs101-121. The NE300may be configured to implement and/or support the routing and PPR graph180provisioning mechanisms described herein. The NE300may be implemented in a single node or the functionality of NE300may be implemented in a plurality of nodes. 
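The compaction illustrated byFIG.2, in which a single ordered list of PPR-PDEs190A-H carrying source flags191and destination flags193stands in for the four branches253A-D of information250, can be checked with a short sketch that expands the flagged list back into its individual branches (one branch per ingress NE and each egress NE downstream of it). The function name and tuple encoding are hypothetical.

```python
def expand_branches(pdes):
    """Enumerate the individual PPRs (branches) encoded by an ordered list
    of (element_id, is_source, is_destination) PDEs: one branch per
    (ingress NE, downstream egress NE) pair along the chain."""
    branches = []
    for i, (_, is_src, _) in enumerate(pdes):
        if not is_src:
            continue
        for j in range(i + 1, len(pdes)):
            _, _, is_dst = pdes[j]
            if is_dst:
                # the branch is the sub-path from the ingress to this egress
                branches.append([e for e, _, _ in pdes[i:j + 1]])
    return branches

# PPR-PDEs 190A-H for PPR graph 180 (NEs only, links omitted for brevity)
pdes = [
    ("NE101", True, False), ("NE121", True, False),
    ("NE120", False, False), ("NE119", False, False),
    ("NE118", False, True), ("NE115", False, False),
    ("NE108", False, False), ("NE107", False, True),
]
branches = expand_branches(pdes)
# four branches, matching PPRs 160A-D of FIG. 1A
```

Recovering all four branches from one eight-entry list is the space saving the paragraph above describes: nothing in information250 is lost, yet only one structure is flooded and stored.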
One skilled in the art will recognize that the term NE encompasses a broad range of devices of which NE300is merely an example. While NE300is described as a physical device, such as a router or gateway, the NE300may also be a virtual device implemented as a router or gateway running on a server or a generic routing hardware (whitebox). The NE300is included for purposes of clarity of discussion, but is in no way meant to limit the application of the present disclosure to a particular NE embodiment or class of NE embodiments. At least some of the features and/or methods described in the disclosure may be implemented in a network apparatus or module such as a NE300. For instance, the features and/or methods in the disclosure may be implemented using hardware, firmware, and/or software installed to run on hardware. As shown inFIG.3, the NE300comprises one or more ingress ports310and a receiver unit (Rx)320for receiving data, at least one processor, logic unit, or central processing unit (CPU)330to process the data, transmitter unit (Tx)340and one or more egress ports350for transmitting the data, and a memory360for storing the data. The processor330may comprise one or more multi-core processors and be coupled to a memory360, which may function as data stores, buffers, etc. The processor330may be implemented as a general processor or may be part of one or more application specific integrated circuits (ASICs) and/or digital signal processors (DSPs). The processor330may comprise a network configuration module335, which may perform processing functions of the central entity165or the NEs101-121. The network configuration module335may also be configured to perform the steps of methods900,1100, and1300, and/or any other method discussed herein. As such, the inclusion of the network configuration module335and associated methods and systems provide improvements to the functionality of the NE300. 
Further, the network configuration module335effects a transformation of a particular article (e.g., the network) to a different state. In an alternative embodiment, network configuration module335may be implemented as instructions stored in the memory360, which may be executed by the processor330. The memory360may comprise a cache for temporarily storing content, e.g., a random-access memory (RAM). Additionally, the memory360may comprise a long-term storage for storing content relatively longer, e.g., a read-only memory (ROM). For instance, the cache and the long-term storage may include dynamic RAMs (DRAMs), solid-state drives (SSDs), hard disks, or combinations thereof. The memory360may be configured to store the PPR information170, which includes the PPR type183, PPR-IDs186, and PPR-PDEs190A-H (hereinafter referred to as “PPR-PDE190”), a PPR graph identifier (PPG-ID)399, and/or backup PPR information379. The PPG-ID399may be a label, address, or identifier uniquely identifying the PPR graph180. Each PPR-PDE190may include, amongst other information, a source flag191, a destination flag193, an anycast PPR-ID364, an anycast group PPR-ID367, a QoS attribute370, a maximum QoS attribute373, and/or a backup PPR flag376, and details of each of these will be further described below. The backup PPR information379includes one or more backup PPR graph PDEs381A-N and backup PPR-IDs382. The backup PPR-IDs382are similar to the PPR-IDs186, except that the backup PPR-IDs382identify one or more backup PPR graphs. The backup PPR graph PDEs381A-N are similar to the PPR-PDEs190, except that the backup PPR graph PDEs381A-N describe elements on one or more backup PPR graphs. The anycast PPR-ID364, anycast group PPR-ID367, QoS attribute370, maximum QoS attribute373, backup PPR flag376, backup PPR-ID382, and backup PPR graph PDEs381A-N will be further described below. In addition, the memory360is configured to store a forwarding database365and a link state database361. 
In an embodiment, the forwarding database365stores forwarding entries359describing forwarding rules for how a particular NE300(e.g., NE101-121ofFIGS.1-2) should forward a data packet that includes a PPR-ID186and/or a destination address. The link state database361stores entries describing the reservation of resources along links within the network. It is understood that by programming and/or loading executable instructions onto the NE300, at least one of the processor330and/or memory360are changed, transforming the NE300in part into a particular machine or apparatus, e.g., a multi-core forwarding architecture, having the novel functionality taught by the present disclosure. It is fundamental to the electrical engineering and software engineering arts that functionality that can be implemented by loading executable software into a computer can be converted to a hardware implementation by well-known design rules. Decisions between implementing a concept in software versus hardware typically hinge on considerations of stability of the design and numbers of units to be produced rather than any issues involved in translating from the software domain to the hardware domain. Generally, a design that is still subject to frequent change may be preferred to be implemented in software, because re-spinning a hardware implementation is more expensive than re-spinning a software design. Generally, a design that is stable that will be produced in large volume may be preferred to be implemented in hardware, for example in an ASIC, because for large production runs the hardware implementation may be less expensive than the software implementation. Often a design may be developed and tested in a software form and later transformed, by well-known design rules, to an equivalent hardware implementation in an ASIC that hardwires the instructions of the software. 
In the same manner as a machine controlled by a new ASIC is a particular machine or apparatus, likewise a computer that has been programmed and/or loaded with executable instructions may be viewed as a particular machine or apparatus. In some embodiments, the NE300may be configured to implement OSPFv2, OSPFv3, IS-IS, or direct SDN controller based on network implementations. Disclosed herein are embodiments directed to advanced PPR graph features, which refer to enhanced networking features that can be provided using the PPR graphs180described above. In a first embodiment, the PPR graphs180are used to implement anycast addressing and routing methodologies (also referred to herein as “anycasting”). In a second embodiment, the PPR graphs180are used to enforce QoS attributes370at each of the ingress NEs in a PPR graph180. In a third embodiment, at least two backup PPR graphs may be set up for a PPR graph180, and each ingress NE in the PPR graph180may be assigned to one of the at least two backup PPR graphs using a backup PPR flag376. In the first embodiment in which PPR graphs180implement anycasting (also referred to herein as an “anycast PPR graph”), an anycast PPR-ID364, which may be a label, address, or identifier, identifies two or more endpoint destinations or egress NEs in a network. In this embodiment, the anycast PPR graph includes multiple paths (e.g., PPRs160) from each of the sources (e.g., ingress NEs) in the PPR graph180to each of the destinations (e.g., egress NEs) in the PPR graph180. In this embodiment, the PPR information170for the anycast PPR graph includes a PPR graph type183that indicates that the PPR information170includes anycast addresses.
In this embodiment, the PPR information170for the anycast PPR graph also includes the anycast PPR-ID364representing multiple destinations included within an anycast group, an anycast group PPR-ID367uniquely representing each of the destinations within the anycast group, and the PPR-ID186of the anycast PPR graph.FIGS.4-9provide additional details and examples regarding this embodiment of the disclosure directed to anycast PPR graphs. In the second embodiment in which QoS attributes370are enforced in PPR graphs180, each ingress NE in a PPR graph180is associated with a QoS attribute370. In this embodiment, a PPR-PDE190for each ingress NE on a PPR graph180includes a QoS attribute370for the respective ingress NE. In an embodiment, the PPR information170for a PPR graph180also includes a maximum QoS attribute373for the PPR graph180. Each ingress NE on a PPR graph180calculates a sum of the QoS attributes370for each downstream ingress NE on the PPR graph180and compares the sum of the QoS attributes370to the maximum QoS attribute373for the PPR graph180. When the sum of the QoS attributes370for each downstream ingress NE on the PPR graph180is less than the maximum QoS attribute373for the PPR graph180, the ingress NE allocates the QoS attribute370along the PPR graph180. For example, the ingress NE reserves a resource along an outgoing element (e.g., interface) of the ingress NE toward the egress NE based on the QoS attribute370. When the sum of QoS attributes370for each downstream ingress NE on the PPR graph180is greater than the maximum QoS attribute373for the PPR graph180, the ingress NE allocates the maximum QoS attribute373along the PPR graph180. For example, the ingress NE reserves a resource along an outgoing element of the ingress NE toward the egress NE based on the maximum QoS attribute373.FIGS.10-11provide additional details and examples regarding this embodiment of the disclosure directed to QoS enforcement. 
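The comparison performed at each ingress NE, as described above, reduces to simple arithmetic: sum the QoS attributes370of the downstream ingress NEs, then reserve either that sum or the maximum QoS attribute373, whichever governs. A minimal sketch follows; the bandwidth figures (in Mbps) are hypothetical.

```python
def qos_to_allocate(downstream_ingress_qos, max_qos):
    """An ingress NE sums the QoS attributes of the downstream ingress NEs
    on the PPR graph; it reserves that sum along its outgoing element when
    the sum is less than the graph's maximum QoS attribute, and reserves
    the maximum QoS attribute otherwise."""
    total = sum(downstream_ingress_qos)
    return total if total < max_qos else max_qos

# sum below the maximum: the sum itself is reserved
light_load = qos_to_allocate([200, 300], max_qos=1000)   # -> 500
# sum above the maximum: the reservation is capped at the maximum
heavy_load = qos_to_allocate([600, 700], max_qos=1000)   # -> 1000
```

Capping at the maximum QoS attribute373keeps an upstream ingress NE from over-reserving link resources that the PPR graph180as a whole is not permitted to consume.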
In the third embodiment in which backup PPR graphs are set up for the PPR graph180, NEs in the PPR graph180may implement fast rerouting mechanisms by encoding backup PPR flags376in the PPR information170. A backup PPR flag376indicates a particular backup PPR graph for each ingress NE in the PPR graph180. In an embodiment, within the PPR information170, each PPR-PDE190for an ingress NE includes a backup PPR flag376. In an embodiment, the PPR information170includes backup PPR information379, which includes backup PPR-IDs382and backup PPR graph PDEs381A-N for each backup PPR graph that can be used when an element adjacent to an ingress NE fails or is no longer available. In this embodiment, the backup PPR flag376indicates the particular backup PPR graph to use when an element adjacent to a particular ingress NE fails or is no longer available to transmit traffic.FIGS.12-13provide additional details and examples regarding this embodiment of the disclosure directed to backup PPR graphs. FIG.4is a diagram illustrating an anycast PPR graph400configured to implement anycast addressing and routing methodologies according to various embodiments of the disclosure. In an embodiment, the anycast PPR graph400is similar to PPR graph180, in that the anycast PPR graph400includes at least one path between at least one ingress NE and an egress NE. However, in the embodiment of the anycast PPR graph400shown inFIG.4, the anycast PPR graph400includes at least two disjoint trees with non-overlapping NEs, in which each disjoint tree has a single separate egress NE (e.g., destination) and multiple ingress NEs (e.g., sources). The anycast PPR graph400includes two PPR trees403and406that are separate and disjoint (e.g., PPR tree403and PPR tree406do not share any elements (e.g., NEs or links)). InFIG.4, the PPR tree403includes NEs410-417, and PPR tree406includes NEs418-422, each of which is similar to NEs101-121ofFIGS.1A-B.
NEs410-417of PPR tree403are interconnected by links430-436, which are similar to links122-154ofFIGS.1A-B. Similarly, NEs418-422of PPR tree406are each similar to NEs101-121ofFIGS.1A-Band are interconnected by links437-440, which are also similar to links122-154ofFIGS.1A-B. PPR trees403and406each include multiple ingress NEs (also referred to as source NEs or sources) and a single egress NE (also referred to as a destination NE or destination). PPR trees403and406represent one or more paths from each of the ingress NEs to the single egress NE. As shown byFIG.4, PPR tree403includes multiple ingress NEs410,411,414, and416, but only a single egress NE417. The ingress NEs410,411,414, and416are represented inFIG.4with rectangles, while the single egress NE417is represented inFIG.4with a circle. Similarly, PPR tree406includes multiple ingress NEs418and420, but only a single egress NE422. The ingress NEs418and420are represented inFIG.4with rectangles, while the single egress NE422is represented inFIG.4with a circle. The anycast PPR graph400is identified by the PPR-ID186, which represents the entire anycast PPR graph400, including both PPR trees403and406. In an embodiment, the PPR-ID186may be a label, address, or identifier of one of the egress NEs417or422. The egress NEs417and422within the anycast PPR graph400are assigned to an anycast group, which refers to a group of one or more egress NEs having the same anycast PPR-ID364. In this case, each of egress NEs417and422is associated with an anycast PPR-ID364representing the anycast group. The anycast PPR-ID364is a unicast address, label, or identifier, which may be encoded according to any type of transmission protocol (e.g., IPv4, IPv6, MPLS, etc.). In the control plane, the central entity165determines or obtains the anycast PPR-ID364for each of the egress NEs417and422within the anycast PPR graph400.
The central entity165then generates the PPR information170describing the anycast PPR graph400, including the PPR-ID186representing the anycast PPR graph400and the anycast PPR-ID364for each of the egress NEs417and422within the anycast PPR graph400. In an embodiment, the central entity165may send the PPR information170to at least one of the NEs410-422in a network, such that the PPR information170is flooded through the entire network, as described above with reference toFIGS.1A-B. After receiving the PPR information170, each of the NEs410-422identified in the PPR-PDEs190of the PPR information170updates the forwarding database365to include a forwarding entry359. The forwarding entry359may include the PPR-ID186of the anycast PPR graph400, the anycast PPR-ID364for both the egress NEs417and422, and/or a next element (e.g., next hop) on the anycast PPR graph400toward one of the egress NEs417and422. In the data plane, one of the ingress NEs410,411,414, or416on the anycast PPR graph400receives a data packet including an anycast PPR-ID364as the destination of a data packet. For example, when the ingress NE414receives a data packet including the anycast PPR-ID364, then the ingress NE414searches the forwarding database365for the forwarding entry359corresponding to the anycast PPR-ID364and the ingress NE414. The forwarding entry359indicates that the nearest destination represented by the anycast PPR-ID364is the egress NE417. The ingress NE414then identifies the next element (e.g., link434or NE415) by which to forward the data packet to reach the egress NE417and forwards the data packet to the identified next element. FIG.5is a diagram illustrating another embodiment of an anycast PPR graph500configured to implement anycast addressing and routing methodologies according to various embodiments of the disclosure. The anycast PPR graph500is similar to the anycast PPR graph400, except that the anycast PPR graph500includes two bidirectional forests503and506. 
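The data-plane behavior described above forFIG.4, in which ingress NE414resolves an anycast PPR-ID364to the next element toward the nearest member of the anycast group (egress NE417, reached via NE415), might be sketched as follows. The table layout, key structure, and all names are hypothetical simplifications of the forwarding entry359lookup.

```python
# forwarding database of an NE: for each (node, anycast PPR-ID) pair, the
# precomputed nearest group member and next element toward it
forwarding_db = {
    ("NE414", "ANYCAST-ID-1"): {"nearest_egress": "NE417", "next_hop": "NE415"},
}

def forward(node_id, packet, fdb):
    """Look up the anycast PPR-ID carried as the packet's destination and
    return the next element on which the packet is forwarded."""
    entry = fdb.get((node_id, packet["dst"]))
    if entry is None:
        raise KeyError("no forwarding entry for this anycast PPR-ID")
    return entry["next_hop"]

next_hop = forward("NE414", {"dst": "ANYCAST-ID-1"}, forwarding_db)
# next_hop -> "NE415", the element toward the nearest egress NE417
```

Because every member of the anycast group shares the anycast PPR-ID364, the sender need not know which egress NE will receive the packet; proximity is decided entirely by the precomputed forwarding entries.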
Each of the bidirectional forests503and506includes a bidirectional path between multiple ingress NEs and multiple egress NEs. In the example shown inFIG.5, the bidirectional forest503only includes a single egress NE417, and the bidirectional forest506only includes a single egress NE422. However, it should be appreciated that each of the bidirectional forests503and506may include multiple egress NEs. Similar toFIG.4, inFIG.5, the egress NEs417and422are represented with circles, and the ingress NEs410,411,414,415,416,418, and420are represented with rectangles. Similar to the PPR tree403, the bidirectional forest503includes NEs410-417interconnected by links430-436. Similar to the PPR tree406, the bidirectional forest506includes NEs418-422interconnected by links437-440. The control plane mechanisms for advertising the PPR information170describing the anycast PPR graph500from the central entity165to the NEs410-422are the same as the control plane mechanisms for advertising the PPR information170for the anycast PPR graph400. In the data plane, the NEs410-422may include forwarding entries359for egress NEs417and422in both directions, instead of just one direction. FIG.6is a diagram illustrating yet another embodiment of an anycast PPR graph600configured to implement anycast addressing and routing methodologies according to various embodiments of the disclosure. The anycast PPR graph600is similar to the anycast PPR graph500, except that the anycast PPR graph600includes two bidirectional forests603and606that each have multiple egress NEs associated with different anycast groups. Similar to the bidirectional forest503, the bidirectional forest603includes NEs410-417interconnected by links430-436. Similar to the bidirectional forest506, the bidirectional forest606includes NEs418-422interconnected by links437-440. Unlike the bidirectional forest503, the bidirectional forest603includes ingress NEs410,414,415, and416represented with rectangles, and egress NEs411and417represented with circles.
Egress NE411is associated with a first anycast group represented by a first anycast PPR-ID364A. Egress NE417is associated with a second anycast group represented by a second anycast PPR-ID364B. The bidirectional forest606includes a single ingress NE420represented with a rectangle and egress NEs418and422represented with circles. The egress NE418is associated with the first anycast group represented by the first anycast PPR-ID364A. The egress NE422is associated with the second anycast group represented by the second anycast PPR-ID364B. In the control plane, the central entity165determines or obtains the anycast PPR-ID364A for egress NEs411and418and the anycast PPR-ID364B for egress NEs417and422. The central entity165then generates the PPR information170describing the anycast PPR graph600, including the PPR-ID186representing the anycast PPR graph600, the anycast PPR-ID364A for egress NEs411and418, and the anycast PPR-ID364B for egress NEs417and422. In an embodiment, the anycast PPR-ID364A may be included in the PPR-PDEs190describing egress NEs411and418. In this embodiment, the anycast PPR-ID364B may be included in the PPR-PDEs describing egress NEs417and422. In an embodiment, the central entity165may send the PPR information170to at least one of the NEs410-422in a network, such that the PPR information170is flooded through the entire network, as described above with reference toFIGS.1A-B. After receiving the PPR information170, each of the NEs410-422identified in the PPR-PDEs190of the PPR information170updates the forwarding database365to include a forwarding entry359. The forwarding entry359may include the PPR-ID186of the anycast PPR graph600, the anycast PPR-ID364A for egress NEs411and418, the anycast PPR-ID364B for egress NEs417and422, and/or a next element (e.g., next hop) on the anycast PPR graph600toward one of the egress NEs411,418,417, or422. 
In some cases, two different anycast PPR graphs may share NEs, some of which are egress NEs included in the same anycast group, and thus have the same anycast PPR-ID364. In this case, the shared NE that is part of the two different anycast PPR graphs may not be able to determine how to transmit a data packet with the anycast PPR-ID364as the destination. In an embodiment, an anycast group PPR-ID367may be included in the PPR information170such that the shared NE may use the anycast group PPR-ID367to identify the egress NE within the anycast group and determine how to transmit the data packet to the identified egress NE. FIGS.7A-Care diagrams illustrating the use of an anycast group PPR-ID367to implement anycast addressing and routing methodologies according to various embodiments of the disclosure. In particular,FIG.7Ais a diagram illustrating a PPR tree700,FIG.7Bis a diagram illustrating a PPR tree715, andFIG.7Cis a diagram illustrating a PPR graph720including the PPR tree700and the PPR tree715. Referring now toFIG.7A, shown is a PPR tree700including NEs703-705, each of which are similar to NEs101-121ofFIGS.1A-B. NEs703-705are interconnected by links710-711, which are similar to links122-154ofFIGS.1A-B. In PPR tree700, ingress NE703is represented with a rectangle, and egress NE705is represented with a circle. The NE704positioned between the ingress NE703and egress NE705is an intermediate NE704. In an embodiment, the egress NE705is a member of an anycast group having the anycast group PPR-ID367as the shared address of all members within the anycast group. In this embodiment, the PPR information170representing the PPR tree700may include a PPR graph type183indicating that the PPR tree700implements anycast addressing and routing mechanisms. The PPR information170further includes a PPR-ID186, which identifies the PPR tree700, and may include a label, address, or identifier of the egress NE705. 
The PPR information170further includes the anycast PPR-ID364, representing the anycast group including the egress NE705. However, since all members of an anycast group are assigned the same anycast PPR-ID364, intermediate NEs704may not be able to distinguish between different egress NEs705of the same anycast group. In various embodiments, each member of the anycast group may be assigned a different anycast group PPR-ID367such that intermediate NE704may distinguish between different egress NEs705in a common anycast group. For example, the central entity165may determine or obtain the anycast group PPR-ID367for each egress NE705in an anycast group and transmit the anycast group PPR-ID367in the PPR information170to one or more NEs in the network. In this embodiment, the PPR information170further includes the anycast group PPR-ID367uniquely representing each member within an anycast group. In an embodiment, the anycast group PPR-ID367may be a unicast address, label, or identifier representing the egress NE705, which may be encoded according to a transmission protocol implemented by the network. In an embodiment, the PPR-PDE190representing egress NE705carries the anycast group PPR-ID367for egress NE705. In this embodiment, the egress NE705is addressed by both the anycast PPR-ID364and the anycast group PPR-ID367. In an embodiment, each of the NEs703-705in the PPR tree700stores the PPR information170in a forwarding entry359of a local forwarding database365. Referring now toFIG.7B, shown is a PPR tree715including NEs704,706, and707, each of which are similar to NEs101-121ofFIGS.1A-B. NEs704,706, and707are interconnected by links712-713, which are similar to links122-154ofFIGS.1A-B. As shown byFIGS.7A and7B, PPR tree700and PPR tree715share NE704. However, unlike PPR tree700, the NE704in PPR tree715is also an ingress NE704. In PPR tree715, ingress NEs704and706are represented with rectangles, and egress NE707is represented with a circle. 
In this example, the egress NE707is associated with the same anycast group as the egress NE705of PPR tree700. That is, the egress NE707is addressed by the same anycast PPR-ID364as the egress NE705of PPR tree700. In an embodiment, each of the members of an anycast group is assigned a different anycast group PPR-ID367such that intermediate NEs704can distinguish between egress NEs705and707of the same anycast group but different PPR trees700and715. In this case, the egress NE707is also addressed by an anycast group PPR-ID367, which is different from the anycast group PPR-ID367of egress NE705. In an embodiment, the PPR information170representing the PPR tree715may include a PPR graph type183indicating that the PPR tree715implements anycast addressing and routing mechanisms. The PPR information170further includes a PPR-ID186, which identifies the PPR tree715, and may include a label, address, or identifier of the egress NE707. The PPR information170further includes the anycast PPR-ID364assigned to the egress NE707. In an embodiment, the PPR information170further includes the anycast group PPR-ID367uniquely representing the egress NE707, within the anycast group identified by the anycast PPR-ID364. In an embodiment, each of the NEs706,704, and707in the PPR tree715stores the PPR information170in a forwarding entry359in a local forwarding database365. FIG.7Cis a diagram illustrating a PPR graph720including the PPR tree700and the PPR tree715. In particular,FIG.7Cis a diagram illustrating how data packets730are transmitted through the PPR graph720using the anycast group PPR-ID367. As described above with reference toFIGS.7A-B, data packets730that are sourced at the ingress NE703with a destination of the anycast PPR-ID364should be transmitted towards egress NE705via intermediate NE704. Data packets730that are sourced at the ingress NE706or ingress NE704with a destination of the anycast PPR-ID364should be transmitted towards egress NE707via intermediate NE704. 
In some embodiments, NEs703-707are configured to transmit data packets730with the same anycast PPR-ID364to different egress NEs705and707using the anycast group PPR-ID367. For example, ingress NE703receives an anycast data packet730, which includes a destination address and user data. The destination address may include the anycast PPR-ID364of egress NE705. In this case, the ingress NE703determines the anycast group PPR-ID367for the egress NE705toward which to forward the anycast data packet730based on the ingress NE703being identified as the source or ingress NE703of the PPR tree700and the anycast PPR-ID364. For example, ingress NE703performs a lookup at the local forwarding database365to identify the forwarding entry359indicating the ingress NE703as the source and an anycast group PPR-ID367of the egress NE705based on the anycast PPR-ID364. In an embodiment, the ingress NE703inserts the anycast group PPR-ID367of egress NE705into the anycast data packet730, for example, via encapsulation or encoding. Ingress NE703then forwards the anycast data packet730to the next element (e.g., NE704) on the PPR graph720via link710. NE704determines a next element (e.g., egress NE705) by which to forward the anycast data packet730based on the forwarding entry359and the anycast group PPR-ID367of egress NE705. NE704forwards the data packet over link711to egress NE705. The egress NE705receives the anycast data packet730with the anycast group PPR-ID367of egress NE705, and then determines that the egress NE705is the destination of the anycast data packet730, which was initially addressed to the egress NE705using the anycast PPR-ID364. In an embodiment, the egress NE705removes the anycast group PPR-ID367from the anycast data packet730and inserts the anycast PPR-ID364back into the anycast data packet730before forwarding the anycast data packet730to the final destination or application. 
For example, the egress NE705may encode the anycast data packet730to include the anycast PPR-ID364instead of the anycast group PPR-ID367, or decapsulate the anycast data packet730to remove the anycast group PPR-ID367. As another example, ingress NE704receives an anycast data packet730, which includes the anycast PPR-ID364as the destination of the anycast data packet730. The ingress NE704determines the anycast group PPR-ID367for the egress NE707toward which to forward the anycast data packet730based on the ingress NE704being identified as the source or ingress NE704of the PPR tree715and the anycast PPR-ID364. Similar to that described above, the ingress NE704inserts the anycast group PPR-ID367of the egress NE707into the anycast data packet730and forwards the anycast data packet730to the next element (e.g., egress NE707) via link713. The egress NE707may determine that the egress NE707is the destination of the anycast data packet730, which was originally addressed to the egress NE707using the anycast PPR-ID364. The egress NE707replaces the anycast group PPR-ID367for the egress NE707in the anycast data packet730with the anycast PPR-ID364before forwarding the anycast data packet730to the final destination or application. While not described herein, ingress NE706in the PPR tree715may perform similar steps when forwarding an anycast data packet730to the egress NE707. FIG.8is a diagram illustrating the use of an anycast group PPR-ID367in an anycast PPR graph800to implement anycast addressing and routing methodologies for a single egress NE according to various embodiments of the disclosure. The anycast PPR graph800includes NEs801-805, each of which are similar to NEs101-121ofFIGS.1A-B. The NEs801-805are interconnected by links806-810, which are similar to links122-154ofFIGS.1A-B. The anycast PPR graph800includes ingress NEs801and802and a single egress NE804. In an embodiment, the egress NE804is associated with an anycast PPR-ID364and an anycast group PPR-ID367. 
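The two-step handling described for the PPR graph720above — the ingress swapping the shared anycast address for a per-member anycast group PPR-ID, and the egress restoring the anycast address before delivery — can be sketched as follows. This is a minimal Python sketch; the lookup table, packet fields, and identifier strings are illustrative assumptions, not part of the disclosure.

```python
# Illustrative forwarding entries keyed by (ingress NE, anycast PPR-ID),
# resolving to the per-member anycast group PPR-ID. All names are assumptions.
GROUP_ID_BY_INGRESS = {
    ("NE-703", "anycast-364"): "group-705",  # PPR tree 700 -> egress NE 705
    ("NE-704", "anycast-364"): "group-707",  # PPR tree 715 -> egress NE 707
}

def ingress_forward(ingress, packet):
    """At the ingress, replace the shared anycast address with the anycast
    group PPR-ID selected by (ingress NE, anycast PPR-ID)."""
    packet["dest"] = GROUP_ID_BY_INGRESS[(ingress, packet["dest"])]
    return packet

def egress_deliver(my_group_id, anycast_id, packet):
    """At the egress, remove the anycast group PPR-ID and restore the anycast
    address before handing the packet to the final destination or application."""
    if packet["dest"] == my_group_id:
        packet["dest"] = anycast_id
    return packet

pkt = {"dest": "anycast-364", "payload": b"data"}
pkt = ingress_forward("NE-704", pkt)  # tree 715 pins the packet to egress 707
assert pkt["dest"] == "group-707"
pkt = egress_deliver("group-707", "anycast-364", pkt)
assert pkt["dest"] == "anycast-364"
```

Because the same anycast destination resolves differently at NE-703 and NE-704, the shared intermediate NE can forward each packet unambiguously using the group PPR-ID alone.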
In this embodiment, there may be only one egress NE804currently assigned to the anycast group corresponding to the anycast PPR-ID364. In this embodiment, the anycast group PPR-ID367is assigned to the egress NE804to better allocate paths and resources toward egress NE804. For example, there may be a case in which one of the links806-810on the anycast PPR graph800has insufficient resources to transmit data to the egress NE804. There also may be a case in which one of these links806-810fails. For example, link808and/or link810may no longer be configured to carry traffic from ingress NEs801and802at the speed that is required for the traffic to be transmitted. In this case, two different anycast group PPR-IDs367may be associated with two different branches815and817within the anycast PPR graph800and assigned to the egress NE804such that traffic from ingress NE801travels through one branch815and traffic from ingress NE802passes through the other branch817. For example, within the anycast PPR graph800, there may be branches815and817, similar to PPRs160, from an ingress NE801or802to the egress NE804. A first branch815may include ingress NE801, link806, NE803, link808, and egress NE804. A second branch817may include ingress NE802, link807, NE803, link809, NE805, link810, and egress NE804. A first anycast group PPR-ID367corresponding to the first branch815and the ingress NE801may be assigned to the egress NE804, for example, by the central entity165. In this case, when the ingress NE801receives an anycast data packet730destined for egress NE804(either by including the PPR-ID186or anycast PPR-ID364of the egress NE804), the ingress NE801inserts the first anycast group PPR-ID367into the anycast data packet730. Then, ingress NE801forwards the anycast data packet730along the first branch815to egress NE804. A second anycast group PPR-ID367corresponding to the second branch817and the ingress NE802may also be assigned to the egress NE804, for example, by the central entity165. 
In this case, when the ingress NE802receives an anycast data packet730destined for egress NE804(either by including the PPR-ID186or anycast PPR-ID364of the egress NE804), the ingress NE802inserts the second anycast group PPR-ID367into the anycast data packet730. Then, ingress NE802forwards the anycast data packet730along the second branch817to egress NE804. In this way, the anycast group PPR-ID367can be used not only to distinguish between members within an anycast group, but also to better allocate network resources within a network implementing anycast PPR graphs800. By having multiple anycast group PPR-IDs367corresponding to a single egress NE804, NEs801-803and805can forward traffic more efficiently and effectively within the network. In this way, traffic reaches the egress NE804faster, while better balancing the load of network utilization within the network. FIG.9is a flowchart of a method900of implementing anycast addressing and routing methodologies according to various embodiments of the disclosure. Method900may be implemented by NE300, which may be implemented as one of the NEs410-422,703-707, or801-805. Method900may be implemented after PPR information170describing an anycast PPR graph180,400,500,600,720, or800is received from a central entity165or another one of the NEs in the network100or175. At step903, PPR information170describing an anycast PPR graph180,400,500,600,720, or800is received. The anycast PPR graph180,400,500,600,720, or800represents a plurality of PPRs160between an ingress NE and an egress NE in the network. The PPR information170includes a PPR-ID186identifying the anycast PPR graph180,400,500,600,720, or800and a plurality of PPR-PDEs190describing one or more elements included in the anycast PPR graph180,400,500,600,720, or800. A PPR-PDE190describing an ingress NE includes a source flag191. A PPR-PDE190describing an egress NE includes a destination flag193, an anycast PPR-ID364, and an anycast group PPR-ID367associated with the egress NE. 
For example, the Rx320receives the PPR information170from the central entity165or another NE in the network100or175. At step906, a forwarding database365is updated to include a forwarding entry359for the egress NE. The forwarding entry359includes the PPR-ID186, the anycast PPR-ID364, and the anycast group PPR-ID367. The forwarding entry359also indicates a next element on the anycast PPR graph180,400,500,600,720, or800by which to forward an anycast data packet730comprising the anycast PPR-ID364. For example, the network configuration module335is executed by the processor330to update a forwarding database365to include a forwarding entry359for the egress NE in response to identifying the NE in the PPR-PDEs190received in the PPR information170. At step909, the anycast data packet730is forwarded to the next element of the anycast PPR graph180,400,500,600,720, or800based on the forwarding entry359indicating the PPR-PDEs190. For example, Tx340forwards the anycast data packet730. FIG.10is a diagram illustrating a PPR graph1000configured to enforce QoS attributes370according to various embodiments of the disclosure. The PPR graph1000is similar to the anycast PPR graphs180,400,500,600,720, or800, in that the PPR graph1000includes at least one path between an ingress NE and an egress NE. However, the NEs in PPR graph1000are additionally configured to enforce QoS attributes370at each ingress NE of the PPR graph1000. As shown byFIG.10, PPR graph1000includes NEs1001-1008, each of which are similar to NEs101-121ofFIGS.1A-B. NEs1001-1008are interconnected by links1110-1116, which are similar to links122-154ofFIGS.1A-B. The PPR graph1000includes ingress NEs1001,1002,1003,1005,1006, and1007, each of which are represented with rectangles. The PPR graph1000also includes one egress NE1008, which is represented with a circle. The NEs1001-1008within PPR graph1000are configured to enforce QoS attributes370at a per-ingress NE level instead of at a broader PPR graph level. 
That is, instead of enforcing a single QoS attribute370for all of the resources along the PPR graph1000, the embodiments disclosed herein enable each ingress NE1001,1002,1003,1005,1006, and1007within a PPR graph1000to reserve resources differently as required along the PPR graph1000. In an embodiment, each ingress NE1001,1002,1003,1005,1006, and1007is associated with a QoS attribute370, which refers to a network attribute associated with a resource that is permitted to be enforced or required to be enforced by the ingress NE1001,1002,1003,1005,1006, and1007of the PPR graph1000. The QoS attribute370refers to any type of network resource that may be reserved at an NE1001-1008or link1110-1116of a PPR graph1000. For example, the QoS attribute370may be at least one of a bandwidth required to transmit a data packet along the PPR graph1000, a buffer size of a buffer at an NE1001-1008, a burst size permitted to be transmitted along the outgoing element of an NE1001-1008, a bounded latency permitted to occur at an NE1001-1008, or a lifetime indicating a time period during which the resource is to be reserved at an NE1001-1008or link1110-1116of a PPR graph1000. In the control plane, the PPR information170for the PPR graph1000includes the QoS attribute370for each ingress NE1001,1002,1003,1005,1006, and1007on the PPR graph1000. In an embodiment, the PPR-PDEs190for each of the ingress NEs1001,1002,1003,1005,1006, and1007includes the respective QoS attribute370for the ingress NE1001,1002,1003,1005,1006, and1007. In an embodiment, the PPR information170also includes a maximum QoS attribute373for the PPR graph1000. The maximum QoS attribute373refers to a maximum amount of the particular resource that is permitted to be reserved at an NE1001-1008or link1110-1116of a PPR graph1000. 
Continuing with the control plane, when each ingress NE1001,1002,1003,1005,1006, and1007receives the PPR information170including the QoS attribute370for each of the ingress NEs1001,1002,1003,1005,1006, and1007within the PPR graph1000, each ingress NE1001,1002,1003,1005,1006, and1007computes an aggregated QoS attribute1035based on the QoS attribute370for each of the previous ingress NEs1001,1002,1003,1005,1006, and1007. In an embodiment, the aggregate QoS attribute1035refers to a sum of each of the QoS attributes370for each of a plurality of previous ingress NEs1001,1002,1003,1005,1006, and1007that are downstream (e.g., opposite direction from the egress NE1008) on the PPR graph1000. For example, the QoS attribute370for the ingress NE1001is 2 megabits per second (Mbps), the QoS attribute370for the ingress NE1002is 1 Mbps, the QoS attribute370for the ingress NE1003is 1 Mbps, the QoS attribute370for the ingress NE1005is 1 Mbps, the QoS attribute370for the ingress NE1006is 2 Mbps, and the QoS attribute370for the ingress NE1007is 1 Mbps. In this example, after the ingress NE1005receives the PPR information170for PPR graph1000, including the QoS attributes370for each of the ingress NEs1001,1002,1003,1005,1006, and1007, the ingress NE1005determines an aggregate QoS attribute1035for the ingress NE1005. The ingress NE1005determines the aggregate QoS attribute1035by computing a sum of all of the QoS attributes370for the previous ingress NE1003downstream of ingress NE1005and the QoS attribute370for the ingress NE1005. In this case, the ingress NE1005computes a sum of 1 Mbps (e.g., the QoS attribute370for ingress NE1003) and 1 Mbps (e.g., the QoS attribute370for the ingress NE1005), which is 2 Mbps. In this case, the aggregate QoS attribute1035at ingress NE1005is 2 Mbps. In an embodiment, the ingress NE1005compares the aggregate QoS attribute1035with the maximum QoS attribute373. 
When the aggregate QoS attribute1035is less than the maximum QoS attribute373, the ingress NE1005reserves a resource along an outgoing element of the ingress NE1005(e.g., at link1114) based on the aggregate QoS attribute1035. For example, when the maximum QoS attribute373is 5 Mbps, the aggregate QoS attribute1035at the ingress NE1005of 2 Mbps is less than the maximum QoS attribute373of 5 Mbps. In this case, the ingress NE1005may reserve 2 Mbps (e.g., the aggregate QoS attribute1035) of bandwidth along the outgoing element (e.g., link1114) for transmitting traffic from the ingress NE1005to the egress NE1008along PPR graph1000. For example, the link-state database361and/or the forwarding database365may be updated to reflect the resource reservation. Continuing with this example, each of the ingress NEs1001,1002,1003,1005,1006, and1007on the PPR graph1000computes the aggregate QoS attribute1035at the respective ingress NE1001,1002,1003,1005,1006, and1007and then compares the aggregate QoS attribute1035with the maximum QoS attribute373. As another illustrative example, after the ingress NE1006receives the PPR information170for PPR graph1000, including the QoS attributes370for each of the ingress NEs1001,1002,1003,1005,1006, and1007, the ingress NE1006determines an aggregate QoS attribute1035for the ingress NE1006. The ingress NE1006determines the aggregate QoS attribute1035by computing a sum of all of the QoS attributes370for the previous ingress NEs1001,1002,1003, and1005downstream of ingress NE1006and the QoS attribute370for the ingress NE1006. In this case, the ingress NE1006computes a sum of 2 Mbps (e.g., the QoS attribute370for ingress NE1001), 1 Mbps (e.g., the QoS attribute370for ingress NE1002), 1 Mbps (e.g., the QoS attribute370for ingress NE1003), 1 Mbps (e.g., the QoS attribute370for the ingress NE1005), and 2 Mbps (e.g., the QoS attribute370for the ingress NE1006), which is 7 Mbps. In this case, the aggregate QoS attribute1035at ingress NE1006is 7 Mbps. 
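The per-ingress aggregation worked through above can be sketched numerically. This is an illustrative Python sketch using the bandwidth figures from the examples; the function and variable names are assumptions, not anything defined in the disclosure.

```python
def aggregate_qos(downstream_qos, own_qos):
    """Sum the QoS attributes of all previous (downstream) ingress NEs on the
    PPR graph plus this ingress NE's own QoS attribute (values in Mbps)."""
    return sum(downstream_qos) + own_qos

def reserved_bandwidth(aggregate, max_qos):
    """Reserve the aggregate when it is within the maximum QoS attribute;
    otherwise the reservation is capped at the maximum."""
    return min(aggregate, max_qos)

# Ingress NE 1005: downstream ingress NE 1003 contributes 1 Mbps, own is 1 Mbps.
assert aggregate_qos([1], 1) == 2
# With a maximum QoS attribute of 5 Mbps, NE 1005 reserves its aggregate.
assert reserved_bandwidth(2, 5) == 2
# Ingress NE 1006: downstream 1001, 1002, 1003, 1005 contribute 2+1+1+1 Mbps,
# and its own QoS attribute is 2 Mbps.
assert aggregate_qos([2, 1, 1, 1], 2) == 7
```

The 7 Mbps aggregate at NE 1006 exceeds the 5 Mbps maximum, which is the case handled next.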
In an embodiment, the ingress NE1006compares the aggregate QoS attribute1035with the maximum QoS attribute373. When the aggregate QoS attribute1035is greater than the maximum QoS attribute373, the ingress NE1006reserves a resource along an outgoing element of the ingress NE1006(e.g., at link1115) based on the maximum QoS attribute373. For example, when the maximum QoS attribute373is 5 Mbps, the aggregate QoS attribute1035at ingress NE1006of 7 Mbps is greater than the maximum QoS attribute373of 5 Mbps. In this case, the ingress NE1006may reserve 5 Mbps (e.g., the maximum QoS attribute373) of bandwidth along the outgoing element (e.g., link1115) for transmitting traffic from the ingress NE1006to the egress NE1008along PPR graph1000. For example, the link-state database361and/or the forwarding database365may be updated to reflect the resource reservation. These embodiments take direct advantage of the compact and scalable forms of PPR graphs1000to implement QoS attributes370, which may be particularly applicable to large scale platforms with a large number of users, such as videoconferencing. In particular, the methods of encoding QoS attributes370for each ingress NE1001,1002,1003,1005,1006, and1007enable a more fine-tuned and accurate mechanism for guaranteeing QoS for a client or a user. These embodiments also allow bandwidth to be shared between different ingress NEs1001,1002,1003,1005,1006, and1007without having to allocate non-shared bandwidth. WhileFIG.10shows QoS attribute370enforcement in a PPR graph1000with a single egress NE1008, PPR graphs implemented as a forest or bidirectional forest may also perform QoS attribute370enforcement. In this case, bandwidth may be shared not only between different ingress NEs1001,1002,1003,1005,1006, and1007, but also between different egress NEs1008. FIG.11is a flowchart of a method1100of enforcing QoS attributes370within PPR graphs1000according to various embodiments of the disclosure. 
Method1100may be implemented by one of the ingress NEs1001,1002,1003,1005,1006, and1007of the PPR graph1000. Method1100may be implemented after PPR information170including QoS attributes370for each of the ingress NEs1001,1002,1003,1005,1006, and1007and/or the maximum QoS attribute373is received by each of the ingress NEs1001,1002,1003,1005,1006, and1007in network100or175from a central entity165or another NE. At step1103, PPR information170describing a PPR graph1000is received. The PPR graph1000represents a plurality of PPRs160between an ingress NE and an egress NE in the network. The PPR information170includes a PPR-ID186and multiple PPR-PDEs190, each describing an element on the PPR graph1000. A PPR-PDE190describing an egress NE1008includes a destination flag193. A PPR-PDE190describing an ingress NE1001,1002,1003,1005,1006, and1007includes a source flag191and a QoS attribute370associated with a resource to be reserved on an outgoing element of the NE. In an embodiment, the PPR information170additionally includes the maximum QoS attribute373. For example, the Rx320receives the PPR information170from another NE in the network or from the central entity165. At step1106, the forwarding database365is updated to include a forwarding entry359for the egress NE1008. The forwarding entry359includes the PPR-ID186and the QoS attribute370. The forwarding entry359also indicates a next element on the PPR graph by which to forward a data packet comprising the PPR-ID186. For example, the network configuration module335is executed by the processor330to update the forwarding database365to include the forwarding entry359for the egress NE1008. At step1109, the resource along the outgoing element of the NE is reserved based on the PPR-PDEs190and the QoS attribute370. In an embodiment, an aggregate QoS attribute1035may be determined by the NE based on the QoS attributes370for one or more previous ingress NEs1001,1002,1003,1005,1006, and1007positioned downstream on the PPR graph1000. 
In an embodiment, the aggregate QoS attribute1035may be compared to the maximum QoS attribute373. When the aggregate QoS attribute1035is less than the maximum QoS attribute373, the resource is reserved along the outgoing element of the NE based on the aggregate QoS attribute1035. When the aggregate QoS attribute1035is greater than the maximum QoS attribute373, the resource is reserved along the outgoing element of the NE based on the maximum QoS attribute373. In an embodiment, the network configuration module335is executed by the processor330to determine the aggregate QoS attribute1035, compare the aggregate QoS attribute1035to the maximum QoS attribute373, and reserve the resource along the outgoing element of the NE based on the PPR-PDEs190, aggregate QoS attribute1035, and/or the maximum QoS attribute373. FIGS.12A-Care diagrams illustrating the implementation of a fast reroute mechanism for PPR graphs upon failure of an NE or a link in a PPR graph according to various embodiments of the disclosure. In particular,FIG.12Ashows a PPR graph1200and PPR information170describing the PPR graph1200, in which the PPR information170includes a backup PPR flag376for each ingress NE in the PPR graph1200.FIG.12Bshows a first backup PPR graph1240for the PPR graph1200, andFIG.12Cshows a second backup PPR graph1270for the PPR graph1200. In an embodiment, the backup PPR flag376is included in a PPR-PDE190for each ingress NE of the PPR graph1200and indicates whether the ingress NE should use the first backup PPR graph1240or the second backup PPR graph1270upon failure of an adjacent NE or link on the PPR graph1200. Referring now toFIG.12A, shown is a diagram illustrating a PPR graph1200configured to perform fast rerouting mechanisms using backup PPR flags376according to various embodiments of the disclosure. The PPR graph1200is similar to the PPR graphs180,400,500,600,720,800, and1000, in that the PPR graph1200includes at least one path between an ingress NE and an egress NE. 
However, the NEs in PPR graph1200are additionally configured to perform fast rerouting mechanisms when an element on the PPR graph1200fails or is no longer available to transmit traffic. As shown byFIG.12A, PPR graph1200includes NEs1201-1213, each of which are similar to NEs101-121ofFIGS.1A-B. NEs1201-1213are interconnected by links1215-1226, which are similar to links122-154ofFIGS.1A-B. The PPR graph1200includes ingress NEs1201-1204and1206-1213, each of which is represented with rectangles. The PPR graph1200also includes one egress NE1205, which is represented with a circle. In some embodiments, instead of having the same backup PPR graph for each of the ingress NEs1201-1204and1206-1213in the PPR graph1200, the embodiments disclosed herein enable each ingress NE1201-1204and1206-1213within a PPR graph1200to have a particular backup PPR graph assigned specifically for the ingress NE1201-1204and1206-1213. In the control plane, the PPR information170is flooded to each of the NEs1201-1213in a network or PPR graph1200and locally saved in a forwarding entry359of the forwarding database365. As shown byFIG.12A, the PPR information170describing the PPR graph1200includes a backup PPR flag376A-B and backup PPR information379. In an embodiment, the backup PPR flag376A-B is included in a PPR-PDE190for each ingress NE1201-1204and1206-1213. The backup PPR information379includes backup PPR-IDs382A-B and backup PPR graph PDEs381A-B. In an embodiment, the backup PPR-IDs382A-B include a label, address, or identifier identifying the backup PPR graph1240and the backup PPR graph1270, respectively. In an embodiment, the backup PPR graph PDEs381A-B include one or more PDEs, similar to PPR-PDEs190, describing one or more elements on backup PPR graphs1240and1270, respectively. In this embodiment, the backup PPR flag376A-B is a flag or bit indicating one of the backup PPR graphs1240or1270as being assigned to the ingress NE1201-1204and1206-1213. 
For example, as shown byFIG.12A, the ingress NEs1201-1204and1206-1208are each associated with a PPR-PDE190including a backup PPR flag376A, indicating that the backup PPR graph1240should be used as a backup path for ingress NEs1201-1204and1206-1208. The backup PPR graph1240is shown and described below with reference toFIG.12B. Meanwhile, ingress NEs1209-1213are each described by a PPR-PDE190including a backup PPR flag376B, indicating that the backup PPR graph1270should be used as a backup path for ingress NEs1209-1213. The backup PPR graph1270is shown and described below with reference toFIG.12C. In an embodiment, in the data plane, when an ingress NE1201-1204or1206-1213or interface/link adjacent to the ingress NE1201-1204or1206-1213fails or is no longer available to transmit traffic, the ingress NE1201-1204or1206-1213searches the forwarding entry359for the PPR-PDE190corresponding to the ingress NE1201-1204or1206-1213. The forwarding entry359may indicate the backup PPR flag376A-B for the ingress NE1201-1204or1206-1213. For example, when link1224adjacent to ingress NE1211and on the path to the egress NE1205fails, ingress NE1211searches the forwarding database365for the forwarding entry359corresponding to the egress NE1205and the ingress NE1211to determine the backup PPR flag376B. The backup PPR flag376B indicates that when a failure occurs adjacent to or at ingress NE1211on the path to egress NE1205, then the ingress NE1211reroutes data packets to the backup PPR graph1270shown and described below with reference toFIG.12C. The path by which the data packets are rerouted through the backup PPR graph1270is indicated by backup PPR-ID382B and the backup PPR graph PDE381B. In the data plane, when ingress NE1211receives a data packet destined for egress NE1205and when link1224fails, the PPR-ID186included in the data packet is replaced with the backup PPR-ID382B identifying the backup PPR graph1270. Then, the data packet is forwarded along the backup PPR graph1270. 
Similarly, when link1219adjacent to or at ingress NE1206on the path to egress NE1205fails, ingress NE1206searches the forwarding database365for the forwarding entry359corresponding to the egress NE1205and the ingress NE1206to determine the backup PPR flag376A. The backup PPR flag376A indicates that when a failure occurs adjacent to or at ingress NE1206on the path to egress NE1205, then the ingress NE1206reroutes data packets to the backup PPR graph1240shown and described below with reference toFIG.12B. The path by which the data packets are rerouted through the backup PPR graph1240is indicated by backup PPR-ID382A and the backup PPR graph PDE381A. In the data plane, when ingress NE1206receives a data packet destined for egress NE1205and when link1219fails, the PPR-ID186included in the data packet is replaced with the backup PPR-ID382A identifying the backup PPR graph1240. Then, the data packet is forwarded along the backup PPR graph1240. Referring now toFIG.12B, shown is a diagram illustrating the backup PPR graph1240, which may be used to forward data packets upon a failure occurring adjacent to or at ingress NEs1201-1204and1206-1208of PPR graph1200. The backup PPR graph1240is similar to the PPR graph1200in that the backup PPR graph1240includes NEs1201-1213. Backup PPR graph1240includes links1215-1216and1219-1228, which again are similar to links122-154ofFIGS.1A-B. Similar to PPR graph1200ofFIG.12A, the backup PPR graph1240includes a single egress NE1205, represented with a circle, while the remaining NEs1201-1204and1206-1213are ingress NEs, represented with rectangles. As discussed above, the backup PPR graph1240may be used when ingress NEs1201-1204and1206-1208or a link/interface adjacent to ingress NEs1201-1204and1206-1208on PPR graph1200fails. For example, when the link1219of PPR graph1200fails, ingress NE1206searches the forwarding database365for the forwarding entry359corresponding to the egress NE1205and the ingress NE1206to determine the backup PPR flag376A. 
The backup PPR flag 376A indicates that when a failure occurs adjacent to or at ingress NE 1206 on the path to egress NE 1205, the ingress NE 1206 reroutes data packets to the backup PPR graph 1240. In particular, data packets destined for egress NE 1205 are rerouted at the ingress NE 1206 from link 1219 to link 1220, as would be indicated in the backup PPR graph PDE 381A. The data packets will be forwarded along backup PPR graph 1240 (through NE 1207, link 1221, NE 1208, link 1227, NE 1209, link 1222, NE 1210, link 1223, NE 1211, and link 1224) to finally reach egress NE 1205. In this way, ingress NE 1206 is configured to detect a failure at or adjacent to the ingress NE 1206 and reroute a packet based on the backup PPR graph 1240 designated particularly for the ingress NE 1206. Referring now to FIG. 12C, shown is a diagram illustrating the backup PPR graph 1270, which may be used to forward data packets upon a failure occurring adjacent to ingress NEs 1209-1213 of PPR graph 1200. The backup PPR graph 1270 is similar to the PPR graph 1200 in that the backup PPR graph 1270 includes NEs 1201-1213. Backup PPR graph 1270 includes links 1215-1222 and 1225-1228, which again are similar to links 122-154 of FIGS. 1A-B. Similar to PPR graph 1200 of FIG. 12A and the backup PPR graph 1240, the backup PPR graph 1270 includes a single egress NE 1205, represented with a circle, while the remaining NEs 1201-1204 and 1206-1213 are ingress NEs, represented with rectangles. As discussed above, the backup PPR graph 1270 may be used when ingress NEs 1209-1213, or a link/interface adjacent to ingress NEs 1209-1213 on PPR graph 1200, fails. For example, when the link 1224 of PPR graph 1200 fails, ingress NE 1211 searches the forwarding database 365 for the forwarding entry 359 corresponding to the egress NE 1205 and the ingress NE 1211 to determine the backup PPR flag 376B. The backup PPR flag 376B indicates that when a failure occurs adjacent to or at ingress NE 1211 on the path to egress NE 1205, the ingress NE 1211 reroutes data packets to the backup PPR graph 1270. 
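The data-plane behavior described above — look up the forwarding entry for the (egress, ingress) pair, read the backup PPR flag, and swap the packet's PPR-ID for the backup PPR-ID — can be sketched as follows. This is a minimal illustration, not the disclosure's implementation; the class, function, and string identifiers are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class ForwardingEntry:
    ppr_id: str            # primary PPR-ID (e.g., for PPR graph 1200)
    backup_ppr_flag: str   # selects which backup PPR graph applies to this ingress NE
    backup_ppr_ids: dict   # backup PPR flag -> backup PPR-ID

def reroute(packet: dict, entry: ForwardingEntry, link_failed: bool) -> dict:
    """On a failure at or adjacent to the ingress NE, replace the packet's
    PPR-ID with the backup PPR-ID so the packet follows the backup PPR graph."""
    if link_failed:
        packet["ppr_id"] = entry.backup_ppr_ids[entry.backup_ppr_flag]
    return packet

# Hypothetical entry for ingress NE 1211: flag 376B selects backup graph 1270.
entry = ForwardingEntry(
    ppr_id="PPR-1200",
    backup_ppr_flag="376B",
    backup_ppr_ids={"376A": "PPR-1240", "376B": "PPR-1270"},
)
pkt = reroute({"ppr_id": "PPR-1200", "dst": "NE-1205"}, entry, link_failed=True)
```

With `link_failed=True`, the packet leaves the function carrying the backup PPR-ID and is forwarded along the backup graph instead of the failed primary path.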
In particular, data packets destined for egress NE 1205 are rerouted at the ingress NE 1211 from link 1224 to link 1225, as would be indicated in the backup PPR graph PDE 381B. The data packets will be forwarded along backup PPR graph 1270 (through link 1225, NE 1212, link 1226, NE 1213, link 1228, NE 1201, link 1215, NE 1202, link 1216, NE 1203, link 1217, and NE 1204) to finally reach egress NE 1205. In this way, ingress NE 1211 is configured to detect a failure adjacent to or at the ingress NE 1211 and reroute the packet based on the backup PPR graph 1270 designated particularly for the ingress NE 1211. The embodiments disclosed herein are advantageous for several reasons. First, the use of the backup PPR flags 376 within the PPR-PDEs 190 enables a much more compact mechanism of signaling backup routes for nodes in a network. In addition, the central entity 165 computes the backup PPR information 379 for each PPR graph 1200 and sends the backup PPR information 379 in the PPR information 170 for each PPR graph 1200 to an NE 1201-1213 in the network, which is then flooded through all the NEs in the network. Therefore, by using the backup PPR flag 376 and the backup PPR information 379, the NEs 1201-1213 on the PPR graph 1200 are configured to perform fast rerouting at an ingress NE level while maintaining efficient and effective use of networking resources. FIG. 13 is a flowchart of a method 1300 for performing a fast rerouting mechanism within a PPR graph according to various embodiments of the disclosure. Method 1300 may be implemented by one of the ingress NEs 1201-1204 and 1206-1213 in PPR graph 1200. Method 1300 may be implemented after PPR information 170 including backup PPR information 379 and backup PPR flags 376 is received by one of the ingress NEs 1201-1204 and 1206-1213 in PPR graph 1200. At step 1303, PPR information 170 describing the PPR graph 1200 between at least one ingress NE 1201-1204 and 1206-1213 and at least one egress NE 1205 in a network is received. 
The PPR information 170 also includes a PPR-ID 186 and multiple PPR-PDEs 190, each describing an element on the PPR graph 1200. At step 1304, backup PPR information 379 is received. The backup PPR information 379 describes at least two backup PPR graphs 1240 and 1270 between at least one ingress NE 1201-1204 and 1206-1213 and at least one egress NE 1205 in the network. A PPR-PDE 190 describing an ingress NE 1201-1204 or 1206-1213 includes a backup PPR flag 376 indicating a backup PPR graph 1240 or 1270 of the at least two backup PPR graphs 1240 and 1270 along which to forward a data packet in response to a failure occurring adjacent to the ingress NE 1201-1204 or 1206-1213. For example, the Rx 320 receives the PPR information 170 and backup PPR information 379 from another NE in the network or from the central entity 165. At step 1306, the forwarding database 365 is updated to include a forwarding entry 359 for the egress NE 1205 in response to identifying the NE in the PPR-PDEs 190. The forwarding entry 359 includes the PPR information 170 and the backup PPR flag 376. For example, the network configuration module 335 is executed by the processor 330 to update the forwarding database 365 to include the forwarding entry 359 for the egress NE 1205. At step 1309, the data packet is forwarded to a next element based on the backup PPR information and the backup PPR flag instead of the PPR information in response to the failure occurring at the ingress NE. For example, the Tx 340 transmits the data packet to the next element based on the backup PPR information and the backup PPR flag instead of the PPR information in response to the failure occurring at the ingress NE. FIGS. 14-15 are diagrams of apparatuses 1400 and 1500 configured to implement the advanced PPR graph features disclosed herein according to various embodiments of the disclosure. The apparatus 1400 of FIG. 14 is configured to implement method 900 and/or method 1300. The apparatus 1500 of FIG. 15 is configured to implement method 1100. 
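The control-plane side of the method — receive the PPR and backup PPR information, and install a forwarding entry for the egress NE when this NE appears in the PPR-PDEs — can be sketched as below. The dictionary layout and every name here are invented for illustration only; the disclosure does not specify an encoding.

```python
def install_forwarding_entry(forwarding_db, ppr_info, backup_ppr_info, my_ne):
    """Sketch of the forwarding-database update step: when this NE appears
    in the received PPR-PDEs, record an entry for the egress NE containing
    the PPR-ID, the backup PPR flag, and the backup PPR information."""
    for pde in ppr_info["pdes"]:
        if pde["ne"] == my_ne:
            forwarding_db[ppr_info["egress"]] = {
                "ppr_id": ppr_info["ppr_id"],
                "backup_ppr_flag": pde.get("backup_ppr_flag"),
                "backup_ppr_info": backup_ppr_info,
            }
    return forwarding_db

# Hypothetical PPR information as seen by ingress NE 1206.
db = install_forwarding_entry(
    {},  # empty forwarding database
    {"ppr_id": "PPR-1200", "egress": "NE-1205",
     "pdes": [{"ne": "NE-1206", "backup_ppr_flag": "376A"}]},
    {"376A": "PPR-1240", "376B": "PPR-1270"},  # backup PPR-IDs per flag
    "NE-1206",
)
```

After this step, a data-plane lookup keyed on the egress NE yields both the primary PPR-ID and the backup flag needed for fast reroute.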
In an embodiment, the apparatus 1400 comprises a means for receiving 1403 PPR information 170 describing an anycast PPR graph 180, 400, 500, 600, 720, or 800. The anycast PPR graph 180, 400, 500, 600, 720, or 800 represents a plurality of PPRs 160 between an ingress NE and an egress NE in the network. The PPR information 170 includes a PPR-ID 186 identifying the anycast PPR graph 180, 400, 500, 600, 720, or 800 and a plurality of PPR-PDEs 190 describing one or more elements included in the anycast PPR graph 180, 400, 500, 600, 720, or 800. A PPR-PDE 190 describing an ingress NE includes a source flag 191. A PPR-PDE 190 describing an egress NE includes a destination flag 193, an anycast PPR-ID 364, and an anycast group PPR-ID 367 associated with the egress NE. The apparatus 1400 comprises a means for updating 1406 a forwarding database 365 to include a forwarding entry 359 for the egress NE in response to identifying the NE in the PPR-PDEs 190 received in the PPR information 170. The forwarding entry 359 indicates a next element on the anycast PPR graph 180, 400, 500, 600, 720, or 800 by which to forward an anycast data packet 730 comprising the anycast PPR-ID 364. The apparatus 1400 includes a means for forwarding 1409 the anycast data packet 730 to the next element of the anycast PPR graph 180, 400, 500, 600, 720, or 800 based on the PPR-PDEs 190. In another embodiment, apparatus 1400 comprises a means for receiving 1403 PPR information 170 and backup PPR information 379. The PPR information 170 describes the PPR graph 1200 between at least one ingress NE 1201-1204 and 1206-1213 and at least one egress NE 1205 in a network. The backup PPR information 379 describes at least two backup PPR graphs 1240 and 1270 between at least one ingress NE 1201-1204 and 1206-1213 and at least one egress NE 1205 in the network. The PPR information 170 also includes a PPR-ID 186 and multiple PPR-PDEs 190, each describing an element on the PPR graph 1200. 
A PPR-PDE 190 describing an ingress NE 1201-1204 or 1206-1213 includes a backup PPR flag 376 indicating a backup PPR graph 1240 or 1270 of the at least two backup PPR graphs 1240 and 1270 along which to forward a data packet in response to a failure occurring adjacent to the ingress NE 1201-1204 or 1206-1213. For example, the Rx 320 receives the PPR information 170 and backup PPR information 379 from another NE in the network or from the central entity 165. In this embodiment, apparatus 1400 comprises a means for updating 1406 the forwarding database 365 to include a forwarding entry 359 for the egress NE 1205 in response to identifying the NE in the PPR-PDEs 190. The forwarding entry 359 includes the PPR information 170 and the backup PPR flag 376. In this embodiment, the apparatus 1400 comprises a means for forwarding 1409 the data packet to a next element based on the backup PPR information and the backup PPR flag instead of the PPR information in response to the failure occurring at the ingress NE. Apparatus 1500 comprises a means for receiving 1503 PPR information 170 describing a PPR graph 1000. The PPR graph 1000 represents a plurality of PPRs 160 between an ingress NE and an egress NE in the network. The PPR information 170 includes a PPR-ID 186 and multiple PPR-PDEs 190, each describing an element on the PPR graph 1000. A PPR-PDE 190 describing an egress NE 1008 includes a destination flag 193. A PPR-PDE 190 describing an ingress NE 1001, 1002, 1003, 1005, 1006, and 1007 includes a source flag 191 and a QoS attribute 370 associated with a resource to be reserved on an outgoing element of the NE. In an embodiment, the PPR information 170 additionally includes the maximum QoS attribute 373. In this embodiment, the apparatus 1500 comprises a means for updating 1506 a forwarding database 365 to include a forwarding entry 359 for the egress NE 1008 in response to identifying the NE in the PPR-PDEs 190. The forwarding entry 359 indicates a next element on the PPR graph by which to forward a data packet comprising the PPR-ID 186. 
In this embodiment, apparatus 1500 comprises a means for reserving 1508 a resource along the outgoing element of the NE based on the PPR-PDEs 190 and the QoS attribute 370. In an embodiment, an aggregate QoS attribute 1035 may be determined by the NE based on the QoS attributes 370 for one or more previous ingress NEs 1001, 1002, 1003, 1005, 1006, and 1007 positioned downstream on the PPR graph 1000. In an embodiment, the aggregate QoS attribute 1035 may be compared to the maximum QoS attribute 373. When the aggregate QoS attribute 1035 is less than the maximum QoS attribute 373, the resource is reserved along the outgoing element of the NE based on the aggregate QoS attribute 1035. When the aggregate QoS attribute 1035 is greater than the maximum QoS attribute 373, the resource is reserved along the outgoing element of the NE based on the maximum QoS attribute 373. In an embodiment, the network configuration module 335 is executed by the processor 330 to determine the aggregate QoS attribute 1035, compare the aggregate QoS attribute 1035 to the maximum QoS attribute 373, and reserve the resource along the outgoing element of the NE based on the PPR-PDEs 190, the aggregate QoS attribute 1035, and/or the maximum QoS attribute 373. While several embodiments have been provided in the present disclosure, it should be understood that the disclosed systems and methods might be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system, or certain features may be omitted or not implemented. 
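The aggregate/maximum QoS comparison above amounts to clamping the summed downstream reservations at the maximum QoS attribute. A minimal sketch, assuming numeric QoS attributes (e.g., bandwidth units); the function name is invented:

```python
def reservation_amount(downstream_qos_attrs, max_qos_attr):
    """Sum the QoS attributes 370 of the downstream ingress NEs to form the
    aggregate QoS attribute, then clamp the reservation at the maximum QoS
    attribute 373, as described above."""
    aggregate = sum(downstream_qos_attrs)
    return min(aggregate, max_qos_attr)

# Three downstream ingress NEs each requesting 10 units with a maximum of 25:
# the aggregate (30) exceeds the maximum, so only 25 is reserved.
```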
In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and could be made without departing from the spirit and scope disclosed herein.
11943137
DETAILED DESCRIPTION Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure. Thus, the following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of the disclosure. However, in certain instances, well-known or conventional details are not described in order to avoid obscuring the description. References to "one embodiment" or "an embodiment" in the present disclosure can be references to the same embodiment or any embodiment, and such references mean at least one of the embodiments. Reference to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used. Alternative language and synonyms may be used for any one or more of the terms discussed herein, and no special significance should be placed upon whether or not a term is elaborated or discussed herein. In some cases, synonyms for certain terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms. 
The use of examples anywhere in this specification including examples of any terms discussed herein is illustrative only and is not intended to further limit the scope and meaning of the disclosure or of any example term. Likewise, the disclosure is not limited to various embodiments given in this specification. Without intent to limit the scope of the disclosure, examples of instruments, apparatus, methods and their related results according to the embodiments of the present disclosure are given below. Note that titles or subtitles may be used in the examples for convenience of a reader, which in no way should limit the scope of the disclosure. Unless otherwise defined, technical and scientific terms used herein have the meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of conflict, the present document, including definitions will control. Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or can be learned by practice of the herein disclosed principles. The features and advantages of the disclosure can be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims, or can be learned by the practice of the principles set forth herein. OVERVIEW Disclosed herein are systems, methods, and computer-readable media for proactively and adaptively rerouting data to a healthier path through a network, as part of flow provisioning, based on environmental variables associated with devices. In one aspect, a method of proactively and adaptively rerouting data includes identifying a routing path for forwarding traffic flows in a network and receiving diagnostic data of a routing device on the routing path. 
The diagnostic data can include one or more environmental parameters associated with the internal state and surroundings of the routing device. Further, the method includes comparing the diagnostic data of the routing device with a predetermined threshold and modifying, prior to a failure of the routing device, the routing path to bypass the routing device for at least a portion of the traffic flows based on the comparison between the diagnostic data of the routing device and the predetermined threshold. In another aspect, the method further includes facilitating an exchange of the diagnostic data between a first routing device on the routing path and a second routing device on the routing path based on a link layer discovery protocol (LLDP) extension. In another aspect, the method includes delivering the diagnostic data of the routing device to other devices in the network based on a Multi-Protocol Border Gateway Protocol (MP-BGP) extension. In another aspect, the method includes comparing the diagnostic data of the routing device with diagnostic data of other devices in the network and updating a routing protocol in the network based on the comparison between the diagnostic data of the routing device and the diagnostic data of the other devices. The routing protocol can include links between the routing device and the other devices. In another aspect, the method includes prioritizing subsets of the traffic flows based on a network policy and modifying the routing path for each of the subsets of the traffic flows based on the diagnostic data of the routing device. In another aspect, the one or more environmental parameters are physical characteristics of the surroundings of the routing device including at least one of a temperature, humidity, an atmospheric pressure, an elevation, an altitude, a wind speed, an airborne particle concentration, a gas concentration, and a combination thereof. 
In another aspect, the one or more environmental parameters are electrical characteristics of the routing device including at least one of a power level, a power usage, a voltage, a signal-to-noise ratio, a temperature of components of the routing device, fan speeds, and a combination thereof. In another aspect, the one or more environmental parameters are performance characteristics of the surroundings of the routing device including at least one of link errors, packet drops, a duplex mismatch, mismatching optical power levels, and a combination thereof. In another aspect, the diagnostic data is obtained from one or more sensors distributed on the routing device. In another aspect, the portion of the traffic flows is determined based on a network policy. In one aspect, a system for proactively and adaptively rerouting data to a healthier path through a network, as part of flow provisioning, based on environmental variables associated with devices includes one or more computer-readable media comprising computer-readable instructions and one or more processors. The one or more processors are configured to execute the computer-readable instructions to identify a routing path for forwarding traffic flows in a network, receive diagnostic data of a routing device on the routing path, compare the diagnostic data of the routing device with a predetermined threshold, and modify, prior to a failure of the routing device, the routing path to bypass the routing device for at least a portion of the traffic flows based on the comparison between the diagnostic data of the routing device and the predetermined threshold. The diagnostic data can include one or more environmental parameters associated with the internal state and surroundings of the routing device. 
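The core comparison step — checking each monitored environmental parameter against a predetermined threshold to decide whether to bypass the device before it fails — can be sketched as follows. The parameter names and threshold values below are invented for illustration; they are not specified by the disclosure.

```python
def should_bypass(diagnostics, thresholds):
    """Return True when any monitored environmental parameter of a routing
    device exceeds its predetermined threshold, so the routing path can be
    modified to bypass the device prior to an actual failure."""
    return any(diagnostics.get(name, 0) > limit
               for name, limit in thresholds.items())

# Example: an inlet temperature past its threshold triggers a bypass decision.
hot_device = {"temp_c": 78, "packet_drops": 3}
limits = {"temp_c": 70, "packet_drops": 100}
```

In practice, the decision could also be taken per subset of traffic flows, so that only lower-priority flows are steered away first, consistent with the network-policy prioritization described above.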
In one aspect, one or more non-transitory computer-readable media include computer-readable instructions, which when executed by one or more processors, cause the processors to identify a routing path for forwarding traffic flows in a network, receive diagnostic data of a routing device on the routing path, compare the diagnostic data of the routing device with a predetermined threshold, and modify, prior to a failure of the routing device, the routing path to bypass the routing device for at least a portion of the traffic flows based on the comparison between the diagnostic data of the routing device and the predetermined threshold. The diagnostic data can include one or more environmental parameters associated with the internal state and surroundings of the routing device. DESCRIPTION OF EXAMPLE EMBODIMENTS Troubleshooting a device in a network can be time-consuming and can result in network downtime. Often, the device's connections to neighboring devices are compared to identify configuration issues and potential software/hardware failures. While routing protocols may identify a connection-level failure, there are many failure scenarios where routing protocols prove to be inadequate. Therefore, there exists a need for a solution that can not only aid in efficient and quick diagnostics but also re-route the traffic away from the faulty device in order to preserve network service. Furthermore, commonly and widely used forwarding based on standard switching/routing protocols does not include local or peer environmental health metrics. This may result in unfavorable flow failovers and convergence after critical thresholds are hit. Accordingly, with growing high-speed and high-density networking gear, there exists a need for environmental-health-aware forwarding or flow provisioning. A proposed solution consists of a network in which devices share diagnostic information of other devices, especially the environmental parameters of the devices. 
More specifically, a proposed solution proactively discovers and monitors end-to-end environmental health metrics and adaptively integrates the metrics as part of flow provisioning and forwarding logic. As a result, it can help proactively steer flows through healthier paths and distribute them away from hot spots or segments. FIG. 1A illustrates a diagram of an example cloud computing architecture 100. The architecture can include a cloud 102. The cloud 102 can include one or more private clouds, public clouds, and/or hybrid clouds. Moreover, the cloud 102 can include cloud elements 104-114. The cloud elements 104-114 can include, for example, servers 104, virtual machines (VMs) 106, one or more software platforms 108, applications or services 110, software containers 112, and infrastructure nodes 114. The infrastructure nodes 114 can include various types of nodes, such as compute nodes, storage nodes, network nodes, management systems, etc. The cloud 102 can provide various cloud computing services via the cloud elements 104-114, such as software as a service (SaaS) (e.g., collaboration services, email services, enterprise resource planning services, content services, communication services, etc.), infrastructure as a service (IaaS) (e.g., security services, networking services, systems management services, etc.), platform as a service (PaaS) (e.g., web services, streaming services, application development services, etc.), and other types of services such as desktop as a service (DaaS), information technology management as a service (ITaaS), managed software as a service (MSaaS), mobile backend as a service (MBaaS), etc. The client endpoints 116 can connect with the cloud 102 to obtain one or more specific services from the cloud 102. The client endpoints 116 can communicate with elements 104-114 via one or more public networks (e.g., Internet), private networks, and/or hybrid networks (e.g., virtual private network). 
The client endpoints116can include any device with networking capabilities, such as a laptop computer, a tablet computer, a server, a desktop computer, a smartphone, a network device (e.g., an access point, a router, a switch, etc.), a smart television, a smart car, a sensor, a GPS device, a game system, a smart wearable object (e.g., smartwatch, etc.), a consumer object (e.g., Internet refrigerator, smart lighting system, etc.), a city or transportation system (e.g., traffic control, toll collection system, etc.), an internet of things (IoT) device, a camera, a network printer, a transportation system (e.g., airplane, train, motorcycle, boat, etc.), or any smart or connected object (e.g., smart home, smart building, smart retail, smart glasses, etc.), and so forth. The client endpoints116can communicate with the elements104-114as part of accessing network services through infrastructure intermediation messaging. Specifically, communications between the elements104-114and the client endpoints116can be managed and otherwise controlled through a network infrastructure between the client endpoints116and the cloud102. For example, any of a 5G infrastructure, an LTE infrastructure and a Wi-Fi infrastructure can communicate a physical location of a client endpoint to a cloud service. In turn, the cloud service can cause the infrastructure to send specific signaling to the client endpoint for accessing network services through the cloud service. For example, the cloud service can use the LTE infrastructure, e.g. through an LTE S14 interface, to alert the client endpoint of Wi-Fi availability through the Wi-Fi infrastructure. In another example, the cloud service can use the Wi-Fi infrastructure, e.g. through MBO Wi-Fi messaging, to alert the client endpoint of LTE availability through the LTE infrastructure. FIG.1Billustrates a diagram of an example fog computing architecture150. 
The fog computing architecture150can include the cloud layer154, which includes the cloud102and any other cloud system or environment, and the fog layer156, which includes fog nodes162. The client endpoints116can communicate with the cloud layer154and/or the fog layer156. The architecture150can include one or more communication links152between the cloud layer154, the fog layer156, and the client endpoints116. Communications can flow up to the cloud layer154and/or down to the client endpoints116. The fog layer156or “the fog” provides the computation, storage and networking capabilities of traditional cloud networks, but closer to the endpoints. The fog can thus extend the cloud102to be closer to the client endpoints116. The fog nodes162can be the physical implementation of fog networks. Moreover, the fog nodes162can provide local or regional services and/or connectivity to the client endpoints116. As a result, traffic and/or data can be offloaded from the cloud102to the fog layer156(e.g., via fog nodes162). The fog layer156can thus provide faster services and/or connectivity to the client endpoints116, with lower latency, as well as other advantages such as security benefits from keeping the data inside the local or regional network(s). The fog nodes162can include any networked computing devices, such as servers, switches, routers, controllers, cameras, access points, gateways, etc. Moreover, the fog nodes162can be deployed anywhere with a network connection, such as a factory floor, a power pole, alongside a railway track, in a vehicle, on an oil rig, in an airport, on an aircraft, in a shopping center, in a hospital, in a park, in a parking garage, in a library, etc. In some configurations, one or more fog nodes162can be deployed within fog instances158,160. The fog instances158,160can be local or regional clouds or networks. For example, the fog instances158,160can be a regional cloud or data center, a local area network, a network of fog nodes162, etc. 
In some configurations, one or more fog nodes162can be deployed within a network, or as standalone or individual nodes, for example. Moreover, one or more of the fog nodes162can be interconnected with each other via links164in various topologies, including star, ring, mesh or hierarchical arrangements, for example. In some cases, one or more fog nodes162can be mobile fog nodes. The mobile fog nodes can move to different geographic locations, logical locations or networks, and/or fog instances while maintaining connectivity with the cloud layer154and/or the endpoints116. For example, a particular fog node can be placed in a vehicle, such as an aircraft or train, which can travel from one geographic location and/or logical location to a different geographic location and/or logical location. In this example, the particular fog node may connect to a particular physical and/or logical connection point with the cloud154while located at the starting location and switch to a different physical and/or logical connection point with the cloud154while located at the destination location. The particular fog node can thus move within particular clouds and/or fog instances and, therefore, serve endpoints from different locations at different times. FIG.2illustrates a schematic block diagram of an example network architecture200. In some cases, the architecture200can include a data center, which can support and/or host the cloud102as illustrated inFIGS.1A and1B. Moreover, the architecture200includes a network fabric212with spine switches202A,202B, . . . ,202N (collectively “202”) connected to leaf switches204A,204B,204C, . . . ,204N (collectively “204”) in the network fabric212. Spine switches202can be Layer 3 (“L3”) switches in the fabric212. However, in some cases, the spine switches202can also, or otherwise, perform Layer 2 (“L2”) functionalities. Spine switches202connect to leaf switches204in the fabric212. 
Leaf switches204can include access ports (or non-fabric ports) and fabric ports. Fabric ports can provide uplinks to the spine switches202, while access ports can provide connectivity for devices, hosts, endpoints, VMs, or external networks to the fabric212. Leaf switches204can reside at the boundary between the fabric212and the tenant or customer space. In some cases, the leaf switches204can be top-of-rack (“ToR”) switches, aggregation switches, end-of-row (EoR), middle-of-row (MoR) switches, etc. The leaf switches204can be responsible for routing and/or bridging the tenant packets and applying network policies. In some cases, a leaf switch can perform one or more additional functions, such as implementing a mapping cache, sending packets to the proxy function when there is a miss in the cache, encapsulate packets, enforce ingress or egress policies, etc. Moreover, the leaf switches204can contain virtual switching and/or tunneling functionalities, such as a virtual tunnel endpoint (VTEP) function. Thus, leaf switches204can connect the fabric212to an overlay (e.g., VXLAN network). Network connectivity in the fabric212can flow through the leaf switches204. The leaf switches204can provide servers, resources, endpoints, external networks, or VMs access to the fabric212, and can connect the leaf switches204to each other. In some cases, the leaf switches204can connect endpoint groups (“EPGs”) to the fabric212and/or any external networks. Each EPG can connect to the fabric212via one or more of the leaf switches204, for example. Endpoints210A-E (collectively “210”) can connect to the fabric212via leaf switches204. For example, endpoints210A and210B can connect directly to leaf switch204A, which can connect endpoints210A and210B to the fabric212and/or any other of the leaf switches204. Similarly, endpoint210E can connect directly to leaf switch204C, which can connect endpoint210E to the fabric212and/or any other of the leaf switches204. 
On the other hand, endpoints210C and210D can connect to leaf switch204A and204B via network206. Moreover, the wide area network (WAN)208can connect to the leaf switches204N. Endpoints210can include any communication device, such as a computer, a server, a switch, etc. In some cases, the endpoints210can include a server or switch configured with a virtual tunnel endpoint functionality that connects an overlay network with the fabric212. For example, in some cases, the endpoints210can represent hosts (e.g., servers) with virtual tunnel endpoint capabilities, and running virtual environments (e.g., hypervisor, virtual machine(s), containers, etc.). An overlay network associated with the endpoints210can host physical devices, such as servers; applications; EPGs; virtual segments; virtual workloads; etc. Likewise, endpoints210can also host virtual workloads and applications, which can connect with the fabric212or any other device or network, including an external network. Controller214can provide centralized access to fabric information, application configuration, resource configuration, application-level configuration modeling for a software-defined network (SDN) infrastructure, integration with management systems or servers, etc. Controller214can define and manage application-level model(s) for configurations in network architecture200. In some cases, application or device configurations can also be managed and/or defined by other components in the network. FIG.3illustrates an example network environment300in which packets are routed over a network path according to some examples of the present disclosure. Network environment300comprises Cores302-1, . . .302-n(collectively,302), Spines304-1, . . .304-nin Site-1 (collectively,304) and314-1, . . .314-nin Site-n (collectively,314), Leafs306-1,306-2, . . .306-nin Site 1 (collectively,306) and316-1, . . .316-nin Site-n (collectively,316), and Hosts308-1, . . 
.308-2in Site-1 (collectively,308) and318-nin Site-n (collectively,318). Cores302, Spines304and314, Leafs306and316, and Hosts308and318can include core routers or switches, spine routers or switches, leaf routers or switches, and host routers or switches, respectively, which can be interconnected for routing or switching traffic in network environment300. In network environment300, network connectivity can flow from Cores302to Spines304and314to Leafs306and316to Hosts308and318, and vice versa. The interconnections between Cores302, Spines304and314, Leafs306and316, and Hosts308and318can include multiple links to avoid a single point of failure in routing. According to some examples, each network path can be defined by a set of nodes, interconnected by a set of links. For example, Core-1302-1and Spine-1304-1are interconnected by a link, which then is also connected to Leaf-1306-1by a link and then to Host-1308-1by another link, thereby forming a network path from Core-1302-1to Host-1308-1. Also, as shown inFIG.3, a network path flows from Core-1302-1to Spine-n304-nto Leaf-n306-n. Network paths (i.e., data flows) can be hashed to any one of the links in the multiple paths based on routing protocol and forwarding logic. According to some aspects, diagnostic data of all the devices in the network (e.g., Cores, Spines, Leafs, and Hosts) can be proactively monitored. In some instances, the diagnostic data can be collected by sensors distributed on the devices. The collected diagnostic data include dynamic environmental variables that change depending on the internal state or surroundings of the device. The diagnostic data can depend on various factors relating to the device and the surrounding environment. Also, the diagnostic data can be relative parameters instead of absolute values.
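To make the hashing step concrete, the following is a minimal, hedged sketch of hashing a flow to one of several candidate links; the 5-tuple fields and spine names are illustrative assumptions, not details from the disclosure:

```python
import hashlib

def pick_next_hop(flow_tuple, next_hops):
    """Hash a flow's identifying fields to one of the candidate next-hop links.

    flow_tuple: e.g. (src_ip, dst_ip, src_port, dst_port, proto) -- illustrative.
    next_hops: ordered list of candidate next-hop device names.
    """
    key = "|".join(str(field) for field in flow_tuple).encode()
    digest = hashlib.sha256(key).digest()
    # Reduce the digest to an index into the next-hop list.
    return next_hops[int.from_bytes(digest[:4], "big") % len(next_hops)]

spines = ["Spine-1", "Spine-2", "Spine-n"]
flow = ("10.0.0.1", "10.0.1.9", 49152, 443, "tcp")
# The same flow always hashes to the same link, keeping its packets in order.
assert pick_next_hop(flow, spines) == pick_next_hop(flow, spines)
```

Replacing the equal-weight modulo with buckets weighted by device health is one way the diagnostic data discussed below could be folded into this same forwarding logic.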
In some examples, the diagnostic data of each device in the network can pertain to environmental parameters such as physical characteristics of the surroundings of the devices. For example, the physical characteristics can include temperature, humidity, atmospheric pressure, elevation, altitude, wind speed, airborne particle concentration, gas concentration, and a combination thereof. In some instances, the diagnostic data can be environmental parameters associated with electrical characteristics of the device (e.g., the internal state of the device) such as a power level, a power usage, a voltage, a signal-to-noise ratio, a temperature of components of the routing device, fan speeds, and a combination thereof. In some examples, the diagnostic data can be environmental parameters associated with performance characteristics of the surroundings of the device including link errors, packet drops, a duplex mismatch, mismatching optical power levels, and a combination thereof. In some instances, examples of the diagnostic data can include, but are not limited to, customized user-defined policies, a network design and role of the device in the network, traffic design and load conditions, system design, architecture, and implementations of hardware, application-specific integrated circuit (ASIC), and software components of the device, bandwidth/speed, a model, and capacity of the device. According to some examples, such diagnostic data can be included in forwarding decisions along with existing algorithms or hashing logic to derive and steer the traffic (i.e., packets) through healthier end-to-end network paths in the network. Various implementations can be applied for the proactive and adaptive flow management based on the diagnostic data of the network devices (e.g., management and integration of end-to-end environmental health metrics in flow provisioning) as described further in detail below.
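The parameter classes above can be grouped into a single per-device record; this is a minimal sketch with a few illustrative fields, not an exhaustive schema from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class DiagnosticData:
    """One sensor snapshot for a device; field names are illustrative."""
    # Physical characteristics of the surroundings
    temperature_c: float
    humidity_pct: float
    # Electrical characteristics (internal state of the device)
    voltage_v: float
    fan_speed_rpm: int
    # Performance characteristics
    link_errors: int
    packet_drops: int

reading = DiagnosticData(temperature_c=47.5, humidity_pct=30.0,
                         voltage_v=11.8, fan_speed_rpm=9200,
                         link_errors=3, packet_drops=0)
assert reading.link_errors == 3
```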
In some implementations, a forwarding decision can be made local to a given device and its next-hops through a link-level discovery protocol. Specifically, the diagnostic data including environmental metrics of a device can be exchanged between neighboring devices based on a Link Layer Discovery Protocol (LLDP) extension (e.g., optional type-length-value or tag-length-value extension). The diagnostic data can be locally integrated into link metrics and used by forwarding, hashing, and next-hop resolutions. Forwarding next-hop resolution based on the resultant link metric can dynamically eliminate next hop or neighboring devices with unhealthy metrics. In other implementations, a control plane can be extended and included in flow provisioning and a forwarding decision. Specifically, the diagnostic data including environmental metrics can be carried along with next hops for routes in an extension of a Multi-Protocol Border Gateway Protocol (MP-BGP) (i.e., through an MP-BGP Multicast Virtual Private Network (MVPN) control plane). As follows, alternative paths (i.e., next-hop) on receiving nodes can be derived based on the diagnostic data (e.g., by routing engines and forwarding tables). In other implementations, flow provisioning based on the environmental health metrics can be managed by a centralized network controller (e.g., centralized telemetry-driven controller). A network controller can collect diagnostic data along with other network insights and proactively manage new flow provisioning. Specifically, a network controller (e.g., controller214illustrated inFIG.2) can collect, monitor, and manage the diagnostic data including environmental metrics through software or hardware telemetry streamed from network elements integrated with network insights. As follows, the network controller can regulate and provision policy-based prioritized flows in an end-to-end coordinated fashion.
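As a hedged sketch of the LLDP option mentioned above, diagnostic metrics could ride in an organizationally specific TLV (type 127); the OUI, subtype, and packed fields below are placeholders, not assigned values from the disclosure:

```python
import struct

def encode_lldp_org_tlv(oui, subtype, payload):
    """Pack diagnostics into an LLDP organizationally specific TLV (type 127).

    LLDP TLV layout: 7-bit type + 9-bit length header, then a 3-byte OUI,
    a 1-byte subtype, and the payload.
    """
    value = oui + bytes([subtype]) + payload
    header = (127 << 9) | len(value)  # 7-bit type, 9-bit length
    return struct.pack("!H", header) + value

def decode_lldp_org_tlv(tlv):
    header, = struct.unpack("!H", tlv[:2])
    tlv_type, length = header >> 9, header & 0x1FF
    value = tlv[2:2 + length]
    return tlv_type, value[:3], value[3], value[4:]

payload = struct.pack("!fH", 48.5, 9200)  # e.g. temperature and fan speed
tlv = encode_lldp_org_tlv(b"\x00\x00\x0c", 0x01, payload)
tlv_type, oui, subtype, body = decode_lldp_org_tlv(tlv)
assert tlv_type == 127 and struct.unpack("!fH", body) == (48.5, 9200)
```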
In some examples, forwarding and steering flows can be differentiated or classified based on priorities or policies. For example, based on network policies or rules, subsets of traffic flows (i.e., a portion of traffic or packets such as select critical application traffic or flows) can be prioritized. Based on the priorities, each of the subsets of the traffic flows can be routed over the network paths. Referring toFIG.3, based on the diagnostic data, the health condition of the devices (e.g., Cores, Spines, Leafs, and Hosts) can be determined. In some instances, the diagnostic data of the device can be compared with a predetermined threshold. Based on the various implementations, the health condition of each device on the network paths can be determined, more specifically, can be compared against the predetermined threshold. In some instances, the base healthy stable state and thresholds, which may be specific to a given device, can be defined in a data sheet by vendors based on the bandwidth/speed, capacity, model, system design, and other various factors that impact those values. The data sheet can be used as a reference for a healthy baseline or thresholds to compare against the diagnostic data of a given device. For example, the diagnostic data of Core-1302-1or Spine-1304-1can indicate, when compared with a predetermined threshold, an unhealthy status of the device (e.g., a potential hard failure of the device). In the local device scope, for example, LLDP notifications to the neighboring devices can be triggered based on the comparison between the diagnostic data of the device and the predetermined threshold. Furthermore, different threshold ranges can be defined and mapped to different levels. Instead of waiting until a critical threshold is reached, proactive and adaptive actions can be taken at the receiving devices (i.e., nodes) based on the threshold ranges and metric mapping.
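A minimal sketch of mapping one metric onto graduated threshold ranges rather than a single critical limit; the temperature bounds and level names are illustrative assumptions, where a real deployment would take them from the vendor data sheet:

```python
# Graduated ranges (upper bound in degrees C, level name) -- illustrative.
LEVELS = [
    (55.0, "healthy"),
    (70.0, "minor"),
    (85.0, "major"),
    (float("inf"), "critical"),
]

def health_level(temperature_c):
    """Map a reading to the first range it falls under."""
    for upper_bound, level in LEVELS:
        if temperature_c < upper_bound:
            return level
    return "critical"

assert health_level(40.0) == "healthy"
assert health_level(72.0) == "major"
```

Receiving devices can then act on the level (e.g., start shifting traffic at "minor") instead of waiting for "critical".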
Further, based on the comparison between the diagnostic data of the device and the predetermined threshold, prior to a failure of the device, the routing path can be proactively and adaptively modified to bypass the device for a portion of or entire traffic flows. For example, once Core-1302-1is determined unhealthy or failing, it can be pruned by its neighboring devices (e.g., Spine-1304-1, Spine-n304-n, Spine-1314-1, or Spine-n314-n) so that the network flow is modified to reroute the traffic to a healthier device such as Core-n302-n. Similarly, referring toFIG.3, based on the comparison between the diagnostic data of Spine-1304-1and the predetermined threshold, Spine-1304-1can be determined unhealthy. A network path can be updated to prune Spine-1304-1or steer a portion of or entire incoming traffic flows to another device such as Spine-n304-n. As follows, Leaf-1306-1, Leaf-2306-2, and Leaf-n306-nforward their traffic to Spine-n304-ninstead of Spine-1304-1.

FIG.4illustrates an example diagram400of data exchange between devices according to some examples of the present disclosure. InFIG.4, Cases410,420,430,440, and450show various decisions made by neighboring devices (e.g., Devices A, B, C, and D) depending on the diagnostic data of Device X. In Case410, Device X is operational. Device X can send status information to other nodes. Device A, once it receives the status information, can compare the information with Devices B-D. As Device X is in perfect operating condition, Devices A-D can record Device X's health condition and continue to send packets to Device X. In Case420, Device X is operational with small anomalies. Then, Device X can signal other nodes about the minor error details. Device A can compare the information with Devices B-D. As Device X has some anomalies, Devices A-D can record Device X's health condition and consider Device Y to route a portion of packets (e.g., a small percentage of packets).
In Case430, Device X is operational with minor issues. Device X can signal other nodes about minor error details. Then, Device A can compare the information with Devices B-D. As Device X has minor issues, Devices A-D can record Device X's health condition and split incoming traffic to route it to Devices X and Y. In Case440, Device X is operational with major issues (e.g., ASIC/PHY/memory failure). Device X can signal other nodes about major error details. Device A can compare such information with Devices B-D. As Device X has severe issues that can cause packet drops, prior to a failure of Device X, Devices A-D can record Device X's health condition and prioritize Device Y for traffic and use Device X sparingly. In Case450, Device X is non-operational and unable to send any information to other nodes. Absent any health condition information from Device X, Device A can compare connection loss information with Devices B-D. Devices A-D can record Device X's health condition and only use Device Y for traffic until Device X is back up. In some embodiments, as shown in Cases410-450, devices (e.g., Devices X and A-D) can be configured to share diagnostic data of other devices. Correlating such data between network devices can provide insights into the location or propagation of a potential failure, help identify failing devices, and trigger alarms to repair the device. Furthermore, the other devices can bypass the failing device until the issue is resolved. In some instances, the data transfer can occur automatically between nodes (e.g., devices) by using encryption to prevent unauthorized users from accessing it. In some examples, devices (e.g., Devices X and A-D as shown inFIG.4) in the network can communicate their diagnostic data to each other.
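The neighbor-side decisions in Cases410-450can be sketched as a table of traffic splits between Device X and the alternate Device Y; the percentages below are illustrative assumptions, since the disclosure does not fix exact ratios:

```python
# Share of traffic a neighbor keeps sending to Device X for each reported
# condition, with Device Y absorbing the remainder. Percentages are
# illustrative, not from the disclosure.
SPLIT_TO_X = {
    "operational": 1.0,        # Case 410: continue sending packets to X
    "small_anomalies": 0.9,    # Case 420: shift a small percentage to Y
    "minor_issues": 0.5,       # Case 430: split traffic between X and Y
    "major_issues": 0.1,       # Case 440: prioritize Y, use X sparingly
    "non_operational": 0.0,    # Case 450: only use Y until X is back up
}

def route_share(x_condition):
    """Return (share to Device X, share to Device Y) for a reported condition."""
    to_x = SPLIT_TO_X[x_condition]
    return to_x, 1.0 - to_x

assert route_share("operational") == (1.0, 0.0)
assert route_share("non_operational") == (0.0, 1.0)
```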
The diagnostic data can include device information (e.g., persistent identifier (PID), serial number, hardware/software/firmware versions), environmental data (e.g., component temperatures, power, voltages, or fan speeds), platform data (e.g., boot Power-On-Self-Test (POST) codes, the status of programmable devices, the status of line cards/assemblies within the device if applicable), and diagnostic data (e.g., any error messages arising from internal components like forwarding Application Specific Integrated Circuit (ASIC), physical layer (PHY), Solid State Drive (SSD), Central Processing Unit (CPU), Clocks or Field Programmable Gate Arrays (FPGAs), which can cause packet loss during internal forwarding). In some implementations, a network discovery protocol can allow a device to view information of its neighbors. The information can include Device ID, IP, vLAN data, hardware information (e.g., part number and/or model number), and software information (e.g., software version). In some examples, the network discovery protocol can convey information regarding network topology, inventory management, emergency services, VLAN configuration, and power negotiation to its neighboring devices. The advantages of having diagnostic data shared between devices can include but are not limited to a reduced downtime in case of a failure of a device, improved failure management in the network, and prevention of hard failures by finding trends in collated data. Prior to any failure of any devices, incoming traffic can be split between routes based on the health of the devices in the network. FIG.5is a flowchart of an example method500of proactively and adaptively rerouting data to healthier paths through a network, as part of flow provisioning, based on environmental variables associated with devices in the network. Although example method500depicts a particular sequence of operations, the sequence may be altered without departing from the scope of the present disclosure. 
For example, some of the operations depicted may be performed in parallel or in a different sequence that does not materially affect the function of method500. In other examples, different components of an example device or system that implements the method500may perform functions at substantially the same time or in a specific sequence. According to some examples, at step510, method500includes identifying a routing path for forwarding traffic flows in a network. For example, a network controller (e.g., controller214illustrated inFIG.2) can identify a routing path for forwarding traffic flow in a network (e.g., from Core-1302-1to Host-1308-1or from Core-1302-1to Host-2308-2as illustrated inFIG.3). According to some examples, at step520, method500includes receiving diagnostic data of a routing device on the routing path. The diagnostic data include one or more environmental parameters associated with the internal state and surroundings of the routing device. For example, a network controller (e.g., controller214illustrated inFIG.2) can receive diagnostic data of a routing device on the routing path where the diagnostic data include environmental parameters associated with the internal state and surroundings of the routing device. In some instances, the diagnostic data is obtained from one or more sensors distributed on the routing device. In some examples, the one or more environmental parameters are physical characteristics of the surroundings of the routing device including at least one of a temperature, humidity, an atmospheric pressure, an elevation, an altitude, a wind speed, an airborne particle concentration, a gas concentration, and a combination thereof. In some instances, the one or more environmental parameters are electrical characteristics of the routing device including at least one of a power level, a power usage, a voltage, a signal-to-noise ratio, a temperature of components of the routing device, fan speeds, and a combination thereof.
In some examples, the one or more environmental parameters are performance characteristics of the surroundings of the routing device including at least one of link errors, packet drops, a duplex mismatch, mismatching optical power levels, and a combination thereof. According to some examples, at step530, method500includes comparing the diagnostic data of the routing device with a predetermined threshold. For example, a network controller (e.g., controller214illustrated inFIG.2) can compare the diagnostic data of the routing device with a predetermined threshold. According to some examples, at step540, method500includes modifying, prior to a failure of the routing device, the routing path to bypass the routing device for at least a portion of the traffic flows based on the comparison between the diagnostic data of the routing device and the predetermined threshold. For example, a network controller (e.g., controller214illustrated inFIG.2) can modify, prior to a failure of the routing device, the routing path to bypass the routing device for at least a portion of the traffic flows based on the comparison between the diagnostic data of the routing device and the predetermined threshold. In some examples, the portion of the traffic flows is determined based on a network policy. Routing devices can be configured to apply network policies or rules. Network policies and rules can be driven by a network controller (e.g., controller214illustrated inFIG.2) and/or implemented or enforced by one or more devices such as Leafs204as illustrated inFIG.2. According to some aspects, method500further includes facilitating an exchange of the diagnostic data between a first routing device on the routing path and a second routing device on the routing path based on a link layer discovery protocol (LLDP) extension.
For example, a network controller (e.g., controller214illustrated inFIG.2) can facilitate an exchange of the diagnostic data between devices (e.g., Cores, Spines, Leafs, and Hosts as shown inFIG.3) on the routing path based on an LLDP extension. For example, the diagnostic data can be included in the LLDP extension as an optional TLV extension to be exchanged across neighboring devices. According to some instances, method500includes delivering the diagnostic data of the routing device to other devices in the network based on a Multi-Protocol Border Gateway Protocol (MP-BGP) extension. For example, a network controller (e.g., controller214illustrated inFIG.2) can facilitate the delivery of the diagnostic data of the routing device to other devices via an MP-BGP extension. According to some examples, method500further includes comparing the diagnostic data of the routing device with diagnostic data of other devices in the network. Further, method500includes updating a routing protocol in the network based on the comparison between the diagnostic data of the routing device and the diagnostic data of the other devices, the routing protocol including links between the routing device and the other devices. For example, a network controller (e.g., controller214illustrated inFIG.2) can compare the diagnostic data of each of the routing devices in the network and update a routing protocol based on the comparison. For example, a network controller (e.g., controller214illustrated inFIG.2) can prioritize or rank the health condition of each of the devices based on the diagnostic data and modify the network paths. According to some aspects, method500includes prioritizing subsets of the traffic flows based on a network policy. Also, method500includes modifying the routing path for each of the subsets of the traffic flows based on the diagnostic data of the routing device.
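Taken together, steps510-540can be sketched as a small controller loop; the device names and the single temperature threshold are illustrative assumptions, and a real controller would combine many metrics per device:

```python
# Minimal sketch of method 500: identify candidate paths, pull diagnostics
# for each hop, compare against a threshold, and steer traffic around an
# unhealthy device before it fails. The 70.0 C limit is illustrative.
THRESHOLD_C = 70.0

def choose_path(paths, diagnostics):
    """Return the first path whose devices are all under the threshold."""
    for path in paths:
        if all(diagnostics[dev] < THRESHOLD_C for dev in path):
            return path
    return paths[0]  # fall back to the primary path if none is fully healthy

paths = [
    ["Core-1", "Spine-1", "Leaf-1", "Host-1"],   # primary path
    ["Core-1", "Spine-n", "Leaf-1", "Host-1"],   # alternate around Spine-1
]
diagnostics = {"Core-1": 42.0, "Spine-1": 81.0, "Spine-n": 45.0,
               "Leaf-1": 39.0, "Host-1": 40.0}

# Spine-1 is over threshold, so traffic is steered through Spine-n instead.
assert choose_path(paths, diagnostics) == paths[1]
```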
For example, a network controller (e.g., controller214illustrated inFIG.2) can prioritize subsets of the traffic flows based on a network policy and modify the routing path for each of the subsets of the traffic flows based on the diagnostic data. FIG.6illustrates an example computing system600including components in electrical communication with each other using a connection605upon which one or more aspects of the present disclosure can be implemented. Connection605can be a physical connection via a bus, or a direct connection into processor610, such as in a chipset architecture. Connection605can also be a virtual connection, networked connection, or logical connection. In some examples, computing system600is a distributed system in which the functions described in this disclosure can be distributed within a datacenter, multiple datacenters, a peer network, etc. In some instances, one or more of the described system components represents many such components each performing some or all of the function for which the component is described. In some examples, the components can be physical or virtual devices. Example system600includes at least one processing unit (CPU or processor)610and connection605that couples various system components including system memory615, such as read only memory (ROM)620and random access memory (RAM)625to processor610. Computing system600can include a cache of high-speed memory612connected directly with, in close proximity to, or integrated as part of processor610. Processor610can include any general purpose processor and a hardware service or software service, such as services632,634, and636stored in storage device630, configured to control processor610as well as a special-purpose processor where software instructions are incorporated into the actual processor design. Processor610may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. 
A multi-core processor may be symmetric or asymmetric. To enable user interaction, computing system600includes an input device645, which can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech, etc. Computing system600can also include output device635, which can be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems can enable a user to provide multiple types of input/output to communicate with computing system600. Computing system600can include communications interface640, which can generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed. Storage device630can be a non-volatile memory device and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs), read only memory (ROM), and/or some combination of these devices. The storage device630can include software services, servers, services, etc., that when the code that defines such software is executed by the processor610, it causes the system to perform a function. In some examples, a hardware service that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as processor610, connection605, output device635, etc., to carry out the function. FIG.7illustrates an example network device700suitable for performing switching, routing, load balancing, and other networking operations. 
Network device700includes a central processing unit (CPU)704, interfaces702, and a bus710(e.g., a PCI bus). When acting under the control of appropriate software or firmware, the CPU704is responsible for executing packet management, error detection, and/or routing functions. The CPU704preferably accomplishes all these functions under the control of software including an operating system and any appropriate applications software. CPU704may include one or more processors708, such as a processor from the INTEL X86 family of microprocessors. In some cases, processor708can be specially designed hardware for controlling the operations of network device700. In some cases, a memory706(e.g., non-volatile RAM, ROM, etc.) also forms part of CPU704. However, there are many different ways in which memory could be coupled to the system. The interfaces702are typically provided as modular interface cards (sometimes referred to as “line cards”). Generally, they control the sending and receiving of data packets over the network and sometimes support other peripherals used with the network device700. Among the interfaces that may be provided are Ethernet interfaces, frame relay interfaces, cable interfaces, DSL interfaces, token ring interfaces, and the like. In addition, various very high-speed interfaces may be provided such as fast token ring interfaces, wireless interfaces, Ethernet interfaces, Gigabit Ethernet interfaces, ATM interfaces, HSSI interfaces, POS interfaces, FDDI interfaces, WIFI interfaces, 3G/4G/5G cellular interfaces, CAN BUS, LoRA, and the like. Generally, these interfaces may include ports appropriate for communication with the appropriate media. In some cases, they may also include an independent processor and, in some instances, volatile RAM. The independent processors may control such communications intensive tasks as packet switching, media control, signal processing, crypto processing, and management. 
By providing separate processors for the communications intensive tasks, these interfaces allow the master CPU704to efficiently perform routing computations, network diagnostics, security functions, etc. Although the system shown inFIG.7is one specific network device of the present technology, it is by no means the only network device architecture on which the present technology can be implemented. For example, an architecture having a single processor that handles communications as well as routing computations, etc., is often used. Further, other types of interfaces and media could also be used with the network device700. Regardless of the network device's configuration, it may employ one or more memories or memory modules (including memory706) configured to store program instructions for the general-purpose network operations and mechanisms for roaming, route optimization and routing functions described herein. The program instructions may control the operation of an operating system and/or one or more applications, for example. The memory or memories may also be configured to store tables such as mobility binding, registration, and association tables, etc. Memory706could also hold various software containers and virtualized execution environments and data. The network device700can also include an application-specific integrated circuit (ASIC), which can be configured to perform routing and/or switching operations. The ASIC can communicate with other components in the network device700via the bus710, to exchange data and signals and coordinate various types of operations by the network device700, such as routing, switching, and/or data storage operations, for example. For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software. 
Any of the steps, operations, functions, or processes described herein may be performed or implemented by a combination of hardware and software services, alone or in combination with other devices. In some embodiments, a service can be software that resides in memory of a client device and/or one or more servers of a content management system and performs one or more functions when a processor executes the software associated with the service. In some embodiments, a service is a program, or a collection of programs that carry out a specific function. In some embodiments, a service can be considered a server. The memory can be a non-transitory computer-readable medium. In some embodiments, the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se. Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, solid state memory devices, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on.
Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include servers, laptops, smart phones, small form factor personal computers, personal digital assistants, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example. The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures. Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further and although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims. Claim language or other language reciting “at least one of” a set and/or “one or more” of a set indicates that one member of the set or multiple members of the set (in any combination) satisfy the claim. For example, claim language reciting “at least one of A and B” or “at least one of A or B” means A, B, or A and B. 
In another example, claim language reciting “at least one of A, B, and C” or “at least one of A, B, or C” means A, B, C, or A and B, or A and C, or B and C, or A and B and C. The language “at least one of” a set and/or “one or more” of a set does not limit the set to the items listed in the set. For example, claim language reciting “at least one of A and B” or “at least one of A or B” can mean A, B, or A and B, and can additionally include items not listed in the set of A and B.
DETAILED DESCRIPTION In the following description, for purposes of explanation, numerous examples and specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be evident, however, to one skilled in the art that various embodiments of the present disclosure as defined by the claims may include some or all of the features in these examples alone or in combination with other features described below, and may further include modifications and equivalents of the features and concepts described herein. Described herein are techniques for detecting duplicate network device identifiers for routing protocols. In some embodiments, a network includes several network devices belonging to the same area that are each configured to exchange network topology information using a routing protocol. Each of the network devices is configured to detect whether it is configured with the same system identifier (ID), which is used to exchange network topology information via the routing protocol, as another network device in the network. Each network device communicates its topology information to other network devices in the network by flooding network devices with link state packets. To detect whether another network device is configured with the same system ID, a network device monitors the link state packets that it receives.
If (1) the link state packet identifier in a received link state packet, which is the system ID of the network device that sent the link state packet, is the same as the system ID of the network device and the link state packet has a higher sequence number than the sequence number in the last link state packet transmitted by the network device, or (2) the link state packet identifier in the received link state packet is the same as the system ID of the network device and the checksum of the link state packet is different than the checksum of the last link state packet transmitted by the network device, the network device starts a timer having a defined amount of time. While the timer has not elapsed, the network device keeps track of the number of received link state packets that either (1) have a link state packet identifier that is the same as the system ID of the network device and a higher sequence number than the sequence number in the last link state packet transmitted by the network device, or (2) have a link state packet identifier that is the same as the system ID of the network device and a checksum that is different than the checksum of the last link state packet transmitted by the network device. After the timer has elapsed, the network device determines if the number of such link state packets that the network device received while the timer was running is greater than a defined threshold value. If so, the network device determines that another network device in the same area of the network is configured with the same system ID as itself. FIG.1illustrates a network100according to some embodiments. As shown, network100includes area105and area125. Area105includes network devices110-120while area125includes network devices130-140. For this example, each of the network devices110-120is configured with an area identifier (ID) associated with area105. 
Similarly, each of the network devices130-140is configured with an area ID associated with area125. In area105, network devices110-120are communicatively coupled to each other. Specifically, network device110is connected to network device115and network device115is connected to network device120. In this example, network device115is adjacent to network device110. Network device115is also adjacent to network device120. Network devices110and120are not adjacent to each other since network device115is positioned between network devices110and120. Network devices110and120may be referred to as non-adjacent network devices. In general, network devices110-120route and/or forward network traffic (e.g., packets) through network100so that the network traffic reaches its intended destination. In order to do so, each of the network devices110-120routes and/or forwards network traffic based on a network topology that it determines from network topology information exchanged with other network devices in area105via a routing protocol. Here, network devices110-120utilize a link-state routing protocol to exchange such information. Examples of link-state routing protocols include an open shortest path first (OSPF) routing protocol, an intermediate system to intermediate system (IS-IS) routing protocol, etc. Area125and its network devices are configured to operate in a similar manner. As depicted inFIG.1, area125includes network devices130-140. Network device130is connected to network device135and network device135is connected to network device140. For this example, network devices130and135are adjacent network devices. Network devices135and140are also adjacent network devices. Network devices130and140are non-adjacent network devices. Network devices130-140route and/or forward network traffic (e.g., packets) through network100so that the network traffic reaches its intended destination. 
To do so, each of the network devices130-140routes and/or forwards network traffic based on a network topology that it determines from network topology information exchanged with other network devices in area125via a routing protocol. In this example, network devices130-140use a link-state routing protocol to exchange network topology information. As mentioned above, examples of link-state routing protocols include an OSPF routing protocol, an IS-IS routing protocol, etc. FIGS.2A-2Dillustrate an example of detecting duplicate network device identifiers for a routing protocol in network100according to some embodiments. This example will be explained from the perspective of network device110. However, one of ordinary skill in the art will understand that each of the network devices in network100is configured to employ this technique. As such, for this example, network device120would also detect that another network device is configured with the same system ID. FIG.2Aillustrates the configuration of the network devices in network100for this example. As shown, network devices110and120are each configured with a system ID of 4, network device115is configured with a system ID of 2, network device130is configured with a system ID of 10, network device135is configured with a system ID of 6, and network device140is configured with a system ID of 8. The example begins by network device120flooding area105with link state packet200. Here, link state packet200includes a list of network devices directly connected to network device120(e.g., neighboring or adjacent network devices), a link state packet identifier (LSP ID) having the same value as the system ID of network device120(4 in this example), and a sequence identifier of 10. In some embodiments, each of the network devices110-120and130-140uses a sequence identifier to keep track of the link state packets with which the network device has flooded a network area. 
Each time a network device sends out a link state packet to other network devices in the same area, the network device adjusts its current sequence identifier (e.g., by incrementing the sequence identifier, by decrementing the sequence identifier, etc.) and includes the newly adjusted current sequence identifier in the link state packet. When network device120floods area105with link state packet200, network device115receives it. In response, network device115forwards a copy of link state packet200, which is depicted as link state packet205inFIG.2A, to network device110. Whenever network device110receives a link state packet, network device110determines whether the link state packet satisfies a set of conditions. If so, network device110generates a timer having a defined amount of time. In some embodiments, network device110determines the amount of time for the timer based on the interval at which network device110sends out link state packets (also referred to as a link state packet generation interval) and a defined value (e.g., 5, 10, 25, 50, etc.). The link state packet generation interval of network device110can be configured with a value from a range of possible values (e.g., 1 to 500 milliseconds, 1 to 1000 milliseconds, 1 to 3000 milliseconds, etc.). In some instances, network device110determines the amount of time for the timer by multiplying the defined value and the maximum possible value that can be set for the link state packet generation interval. For example, if the defined value is 15 and the range of possible values that can be used for the link state packet generation interval is 1 to 2500 milliseconds, then network device110determines the amount of time for the timer to be 37,500 milliseconds (15×2500 milliseconds). In this example, the set of conditions specifies a link state packet with an LSP ID that is the same as the system ID of network device110and a sequence identifier that is higher than network device110's current sequence identifier. 
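The timer-duration computation described above (the defined value multiplied by the maximum possible link state packet generation interval) can be sketched as follows; this is a minimal sketch, and the function and argument names are illustrative rather than taken from the disclosure.

```python
# Sketch of the timer-duration computation: the duplicate-system-ID
# detection timer runs for (defined value) x (maximum possible link
# state packet generation interval). Names are illustrative assumptions.

def detection_timer_duration_ms(defined_value: int, max_generation_interval_ms: int) -> int:
    """Amount of time the detection timer runs, in milliseconds."""
    return defined_value * max_generation_interval_ms

# The example above: defined value 15, generation interval range 1-2500 ms.
print(detection_timer_duration_ms(15, 2500))  # 37500
```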
For this example, the current sequence identifier of network device110is 8. When network device110receives link state packet205, it determines that the LSP ID of link state packet205is the same as the system ID with which network device110is configured (4 in this example) and the sequence identifier of link state packet205is higher than network device110's current sequence identifier. As such, network device110generates a timer having a defined amount of time determined using the techniques described above. In addition, network device110starts tracking the number of link state packets that network device110receives while the timer is active (e.g., has not elapsed) that satisfy the set of conditions by maintaining a count of the number of such received link state packets. Here, network device110initializes the value of the count to 1 to account for link state packet205. Receiving a link state packet that satisfies the set of conditions also causes network device110to use the sequence identifier in the received link state packet as its current sequence identifier. Thus, network device110updates its current sequence identifier from 8 to 10. Next, network device110floods area105with link state packet210, as depicted inFIG.2B. Link state packet210includes a list of network devices directly connected to network device110(e.g., neighboring or adjacent network devices), an LSP ID having the same value as the system ID of network device110(4 in this example), and a sequence identifier of 11. As mentioned above, each time a network device sends out a link state packet to other network devices in the same area, the network device adjusts its current sequence identifier (e.g., by incrementing the sequence identifier, by decrementing the sequence identifier, etc.) and includes the newly adjusted current sequence identifier in the link state packet. 
To send out link state packet210, network device110first increments its current sequence identifier from 10 to 11 and then includes the sequence identifier of 11 in link state packet210. Network device115receives link state packet210upon network device110flooding area105with link state packet210. In response to receiving link state packet210, network device115forwards a copy of link state packet210, which is illustrated inFIG.2Bas link state packet215, to network device120. Receiving a link state packet with an LSP ID that is the same as the system ID of network device120and a sequence identifier that is higher than network device120's current sequence identifier causes network device120to use the sequence identifier in the received link state packet as its current sequence identifier. Once network device120receives link state packet215, network device120determines that the LSP ID of link state packet215is the same as the system ID with which network device120is configured (4 in this example) and the sequence identifier of link state packet215is higher than network device120's current sequence identifier. The current sequence identifier of network device120is 10 so network device120updates its current sequence identifier to 11. Network device120then floods area105with link state packet220, as depicted inFIG.2C. Link state packet220includes a list of network devices directly connected to network device120(e.g., neighboring or adjacent network devices), an LSP ID having the same value as the system ID of network device120(4 in this example), and a sequence identifier of 12. As described above, each time a network device sends out a link state packet to other network devices in the same area, the network device adjusts its current sequence identifier (e.g., by incrementing the sequence identifier, by decrementing the sequence identifier, etc.) and includes the newly adjusted current sequence identifier in the link state packet. 
Thus, network device120sends out link state packet220by incrementing its current sequence identifier from 11 to 12 and including the sequence identifier of 12 in link state packet220. When network device115receives link state packet220from network device120, network device115forwards a copy of link state packet220, which is depicted inFIG.2Cas link state packet225, to network device110. In response to receiving link state packet225, network device110determines that it satisfies the set of conditions. Here, the timer is still active (e.g., has not elapsed) so network device110increments the count of the number of received link state packets that satisfy the set of conditions from 1 to 2. Because link state packet225satisfies the set of conditions (i.e., it has an LSP ID that is the same as the system ID of network device110and a sequence identifier that is higher than network device110's current sequence identifier), network device110updates its current sequence identifier to the sequence identifier in link state packet225. Therefore, network device110updates its current sequence identifier from 11 to 12. Then, network device110floods area105with link state packet230, as illustrated inFIG.2D. Link state packet230includes a list of network devices directly connected to network device110(e.g., neighboring or adjacent network devices), an LSP ID having the same value as the system ID of network device110(4 in this example), and a sequence identifier of 13. As explained above, each time a network device sends out a link state packet to other network devices in the same area, the network device adjusts its current sequence identifier (e.g., by incrementing the sequence identifier, by decrementing the sequence identifier, etc.) and includes the newly adjusted current sequence identifier in the link state packet. 
As such, network device110sends out link state packet230by incrementing its current sequence identifier from 12 to 13 and then including the sequence identifier of 13 in link state packet230. Since network devices110and120are configured with the same system ID, this causes network devices110and120to repeatedly send out link state packets with continuously increasing sequence numbers because each of the network devices110and120thinks that the link state packets that it receives are its own, as demonstrated inFIGS.2A-2D. In this example, network device110continues to maintain the count of received link state packets that satisfy the set of conditions while the timer is active (e.g., has not elapsed). Once network device110detects that the timer has elapsed, network device110compares the count of the number of received link state packets that satisfy the set of conditions with a defined threshold value (e.g., 3, 5, 10, 15, etc.). If the count is greater than or equal to the defined threshold value, network device110determines that another network device in the same area of the network is configured with the same system ID as itself. In response to this determination, network device110stores a message in a storage (e.g., a system log) indicating that another network device in the network is determined to be configured with the same system ID used in the routing protocol as the system ID configured for the network device. A user of network device110can check for such messages (e.g., by sending, via a command line interface provided by network device110, a request for any such messages). When network device110receives the request, network device110accesses the storage, retrieves the message, and provides it to the user (e.g., via the command line interface provided by network device110). 
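The walkthrough above, in which a network device starts a timer on the first qualifying link state packet, counts further qualifying packets while the timer runs, and then compares the count to a threshold, can be sketched as follows. This is a simplified, hypothetical sketch: the class and field names are not from the disclosure, and the checksum-based condition and real timer scheduling are omitted for brevity.

```python
# Hypothetical sketch of the duplicate-system-ID detection walkthrough.
from dataclasses import dataclass

@dataclass
class LinkStatePacket:
    lsp_id: int   # equals the sender's system ID
    seq: int      # sequence identifier carried in the packet

class DuplicateSystemIdDetector:
    def __init__(self, system_id: int, initial_seq: int, threshold: int):
        self.system_id = system_id
        self.current_seq = initial_seq
        self.threshold = threshold
        self.timer_running = False
        self.count = 0

    def on_receive(self, lsp: LinkStatePacket) -> None:
        # Condition: the LSP ID matches our own system ID and the sequence
        # identifier is higher than our current sequence identifier.
        if lsp.lsp_id == self.system_id and lsp.seq > self.current_seq:
            if not self.timer_running:
                self.timer_running = True  # start the detection timer
                self.count = 1
            else:
                self.count += 1
            # Adopt the received sequence identifier as our current one.
            self.current_seq = lsp.seq

    def on_timer_elapsed(self) -> bool:
        """Called when the timer elapses; True means a duplicate system ID."""
        self.timer_running = False
        return self.count >= self.threshold

# Replaying the example: network device 110 (system ID 4, sequence 8)
# receives link state packets with sequence identifiers 10 and 12.
detector = DuplicateSystemIdDetector(system_id=4, initial_seq=8, threshold=2)
detector.on_receive(LinkStatePacket(lsp_id=4, seq=10))
detector.on_receive(LinkStatePacket(lsp_id=4, seq=12))
print(detector.on_timer_elapsed())  # True
```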
The example described above by reference toFIGS.2A-2Dshows how a particular type of link state packet that a network device receives (i.e., a link state packet with an LSP ID that is the same as the system ID of the network device and a sequence identifier that is higher than the network device's current sequence identifier) can be used to determine whether other network devices in the same area of a network are configured with the same system ID as the network device. One of ordinary skill in the art will appreciate that additional and/or different types of link state packets may be used to determine whether other network devices in the same area of a network are configured with the same system ID as the network device. For example, in some embodiments, a link state packet with an LSP ID that is the same as the system ID of a network device, a sequence identifier that is the same as the network device's current sequence identifier, but a different checksum than the checksum of the previous link state packet that the network device sent out can trigger the start of the timer and/or increase the count. This type of link state packet can be added to the set of conditions that trigger the start of the timer and/or increase the count of received link state packets that satisfy the set of conditions. In some such embodiments, the network device keeps track of the checksums of the link state packets that it sends out to the network. For example, for each link state packet that the network device sends out to the network, the network device uses a checksum function on the link state packet to generate a checksum value. When the network device receives a link state packet with an LSP ID that is the same as the system ID of the network device and a sequence identifier that is the same as the network device's current sequence identifier, the network device uses the checksum function on the received link state packet to generate another checksum value. 
Then, the network device compares the two checksum values. If the checksum values are different, the network device starts a timer or, if a timer is already running, increases the count of received link state packets that satisfy the set of conditions. FIG.3illustrates a process300for detecting duplicate network device identifiers for routing protocols according to some embodiments. In some embodiments, a network device (e.g., network device110, network device115, network device120, network device130, network device135, network device140, etc.) that is configured to use a routing protocol to exchange network topology information with other network devices performs process300. Process300starts by receiving, at310, a link state packet comprising a first network device identifier for use in a routing protocol. Referring toFIG.2Aas an example, network device110may receive link state packet205via a link state routing protocol. The LSP ID in link state packet205is used in the link state routing protocol. Next, upon determining that the link state packet satisfies a set of conditions, process300starts, at320, a timer. Referring toFIG.2Aas an example, when network device110receives link state packet205, network device110determines that link state packet205satisfies the set of conditions mentioned above. That is, link state packet205has an LSP ID that is the same as the system ID of network device110and a sequence identifier that is higher than network device110's current sequence identifier. While the timer has not elapsed, process300maintains, at330, a count value of link state packets received that satisfy the set of conditions. Referring toFIGS.2A and2Cas an example, network device110initializes the value of the count to 1 when it receives link state packet205. Then, network device110increments the count to 2 upon receiving link state packet225. 
Finally, when the timer has elapsed, process300determines, at340, based on the count value and a defined threshold value, that a second network device in the network is configured with a second network device identifier for use in the routing protocol that is the same as the first network device identifier. Referring toFIGS.2A-2Das an example, network device110determines that another network device in network100is configured with the same system ID as itself when the count of the number of received link state packets that network device110receives while the timer is active is greater than or equal to a defined threshold value. FIG.4illustrates the architecture of an example network device (e.g., a network switch or router)400that may implement the techniques of the present disclosure according to certain embodiments. For example, network device400may be used to implement network devices110-120and130-140shown inFIGS.1and2A-2D. Network device400includes a management module402, an internal fabric module404, and a number of I/O modules406(1)-(P). Management module402includes one or more management CPUs408for managing/controlling the operation of the device. Each management CPU408can be a general-purpose processor, such as an Intel/AMD x86 or ARM-based processor, that operates under the control of program code maintained in an associated volatile memory and/or stored in a non-transitory computer readable storage medium (not shown). In one set of embodiments, this program code can include code for implementing some or all of the techniques described in the foregoing sections. Internal fabric module404and I/O modules406(1)-(P) collectively represent the data, or forwarding, plane of network device400. Internal fabric module404is configured to interconnect the various other modules of network device400. Each I/O module406includes one or more input/output ports410(1)-(Q) that are used by network device400to send and receive network packets. 
Each I/O module406can also include a packet processor412, which is a hardware processing component that can make wire speed decisions on how to handle incoming or outgoing network packets. It should be appreciated that network device400is illustrative and other configurations having more or fewer components than network device400are possible. The following are some example embodiments of the present disclosure. In some embodiments, the techniques described herein relate to a method executable by a first network device in a network, the method including: receiving a link state packet including a first network device identifier for use in a routing protocol; upon determining that the link state packet satisfies a set of conditions, starting a timer; while the timer has not elapsed, maintaining a count value of link state packets received that satisfy the set of conditions; and when the timer has elapsed, determining, based on the count value and a defined threshold value, that a second network device in the network is configured with a second network device identifier for use in the routing protocol that is the same as the first network device identifier. In some embodiments, the techniques described herein relate to a method, wherein the link state packet is a first link state packet further including a first sequence identifier, the method further including, at a defined interval, flooding the network with a second link state packet, the second link state packet including the first network identifier and a second sequence identifier. In some embodiments, the techniques described herein relate to a method, wherein a condition in the set of conditions specifies that the first sequence identifier is greater than the second sequence identifier. 
In some embodiments, the techniques described herein relate to a method, wherein a condition in the set of conditions specifies that the first sequence identifier and the second sequence identifier are the same and that a first set of data in the first link state packet is different than a second set of data in the second link state packet. In some embodiments, the techniques described herein relate to a method further including determining that the first set of data in the first link state packet is different than the second set of data in the second link state packet by: performing a first checksum operation on the first link state packet to produce a first value, performing a second checksum operation on the second link state packet to produce a second value, and comparing the first value and the second value. In some embodiments, the techniques described herein relate to a method, wherein the link state packet is a first link state packet further including a first sequence identifier, the method further including: receiving a second link state packet including the first network identifier and a second sequence identifier; determining that the second link state packet satisfies the set of conditions; and incrementing the count value. In some embodiments, the techniques described herein relate to a method, wherein a condition in the set of conditions specifies that the second sequence identifier is greater than the first sequence identifier. In some embodiments, the techniques described herein relate to a method, wherein a condition in the set of conditions specifies that the first sequence identifier and the second sequence identifier are the same and that a first set of data in the first link state packet is different than a second set of data in the second link state packet. 
In some embodiments, the techniques described herein relate to a method further including determining that the first set of data in the first link state packet is different than the second set of data in the second link state packet by: performing a first checksum operation on the first link state packet to produce a first value, performing a second checksum operation on the second link state packet to produce a second value, and comparing the first value and the second value. In some embodiments, the techniques described herein relate to a method, wherein the first network device and the second network device are not adjacent network devices in the network. In some embodiments, the techniques described herein relate to a non-transitory machine-readable medium storing a program executable by at least one processing unit of a first network device, the program including sets of instructions for: receiving a packet through a routing protocol, the packet including a first identifier for identifying the first network device in the routing protocol; upon determining that the packet satisfies a set of conditions, starting a timer; while the timer has not elapsed, maintaining a count value of packets received through the routing protocol that satisfy the set of conditions; and when the timer has elapsed, determining, based on the count value and a defined threshold value, that a second network device in the network is configured with a second identifier for identifying the second network device in the routing protocol that is the same as the first identifier. In some embodiments, the techniques described herein relate to a non-transitory machine-readable medium, wherein the routing protocol is for exchanging network topology information with network devices in the network. 
In some embodiments, the techniques described herein relate to a non-transitory machine-readable medium, wherein the packet is a first packet further including a first sequence identifier, the method further including, at a defined interval, flooding the network with a second packet, the second packet including the first identifier and a second sequence identifier. In some embodiments, the techniques described herein relate to a non-transitory machine-readable medium, wherein a condition in the set of conditions specifies that the first sequence identifier is greater than the second sequence identifier. In some embodiments, the techniques described herein relate to a non-transitory machine-readable medium, wherein a condition in the set of conditions specifies that the first sequence identifier and the second sequence identifier are the same and that a first set of data in the first packet is different than a second set of data in the second packet. In some embodiments, the techniques described herein relate to a non-transitory machine-readable medium, wherein the program further includes a set of instructions for determining that the first set of data in the first packet is different than the second set of data in the second packet by: performing a first checksum operation on the first packet to produce a first value, performing a second checksum operation on the second packet to produce a second value, and comparing the first value and the second value. 
In some embodiments, the techniques described herein relate to a system including: a set of processing units; and a non-transitory machine-readable medium storing instructions that when executed by at least one processing unit in the set of processing units cause the at least one processing unit to: receive a packet including a first network device identifier for use in a routing protocol; upon determining that the packet satisfies a set of conditions, start a timer; while the timer has not elapsed, maintain a count value of packets received that satisfy the set of conditions; and when the timer has elapsed, determine, based on the count value and a defined threshold value, that a second network device in the network is configured with a second network device identifier for use in the routing protocol that is the same as the first network device identifier. In some embodiments, the techniques described herein relate to a system, wherein the packet is a first packet further including a first sequence identifier, the method further including: receiving a second packet including the first network identifier and a second sequence identifier; determining that the second packet satisfies the set of conditions; and incrementing the count value. In some embodiments, the techniques described herein relate to a system, wherein a condition in the set of conditions specifies that the second sequence identifier is greater than the first sequence identifier. In some embodiments, the techniques described herein relate to a system, wherein a condition in the set of conditions specifies that the first sequence identifier and the second sequence identifier are the same and that a first set of data in the first packet is different than a second set of data in the second packet. The above description illustrates various embodiments of the present disclosure along with examples of how aspects of the present disclosure may be implemented. 
The above examples and embodiments should not be deemed to be the only embodiments, and are presented to illustrate the flexibility and advantages of the present disclosure as defined by the following claims. Based on the above disclosure and the following claims, other arrangements, embodiments, implementations, and equivalents will be evident to those skilled in the art and may be employed without departing from the spirit and scope of the disclosure as defined by the claims.
11943139
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS In descriptions of this application, unless otherwise specified, “/” means “or”. For example, A/B may represent A or B. A term “and/or” in this specification describes only an association relationship between associated objects and represents that there may be three relationships. For example, A and/or B may represent the following three cases: Only A exists, both A and B exist, and only B exists. In addition, “at least one” means one or more, and “a plurality of” means two or more. Terms such as “first” and “second” do not limit a quantity or an execution sequence, and do not indicate a definite difference. It should be noted that, in this application, words such as “example” or “for example” are used to represent giving an example, an illustration, or a description. Any embodiment or design scheme described as an “example” or “for example” in this application should not be explained as being more preferred or having more advantages than another embodiment or design scheme. Specifically, use of the word “example”, “for example”, or the like is intended to present a related concept in a specific manner. To facilitate understanding of the technical solutions in this application, the following first briefly describes technical terms in this application. 1. Protocol Data Unit (PDU) Session A 5G core network supports a PDU connection service, and the PDU connection service is a service of exchanging a PDU data packet between a terminal and a data network (DN). The PDU connection service is implemented by initiating PDU session establishment by the terminal. The terminal establishes a PDU session, in other words, establishes a data transmission channel between the terminal and the DN. It should be noted that the terminal may initiate establishment of one or more PDU sessions to be connected to a same DN or different DNs. 
The terminal may be served by a group of core network elements (such as a session management function (SMF) network element, a policy control function (PCF) network element, and a user plane function (UPF) network element). These core network elements coordinate and manage PDU session resources of the terminal. 2. N4 Session The N4 session is created by an SMF on a UPF to manage a function of the UPF that serves a PDU session. For example, when the terminal establishes a PDU session, the SMF may indicate the UPF to create an N4 session corresponding to the PDU session. When the SMF receives a PDU session deletion request, the SMF triggers the UPF to delete the N4 session corresponding to the PDU session. In some cases, the SMF may further trigger the UPF to modify the N4 session corresponding to the PDU session. In embodiments of this application, N4 sessions may be classified into a user-level (or terminal-level) N4 session and a group-level N4 session. In a process of creating the N4 session, both the SMF and the UPF generate an N4 session context to store parameters related to the N4 session. The N4 session context may include an N4 session ID, and N4 routing rules used for the N4 session, including a packet detection rule (PDR), a forwarding action rule (FAR), a quality of service (QoS) enforcement rule (QER), and a usage reporting rule (URR). The UPF uses parameters such as the PDR, the FAR, the QER, and the URR in the N4 session context to detect and forward a packet of the PDU session. For example, after the UPF receives a packet from an ingress, the UPF determines, by using a PDR, an N4 session to which the packet belongs. Then, the UPF performs matching between a PDR (which may be one or more PDRs) in the N4 session context and characteristic information of the packet, to find a PDR that matches the packet. The PDR is associated with or specifies a FAR, a QER, and a URR corresponding to the packet. 
Therefore, the UPF may drop the packet, forward the packet, buffer the packet, notify a control plane of the packet, duplicate the packet, or the like according to the FAR. The UPF may perform a QoS operation on the packet according to the QER. The UPF network element may perform usage reporting on the packet according to the URR. 3. Tunnel In embodiments of this application, tunnels may be classified into the following two types: (1) A tunnel between an access network device and a UPF. The tunnel between the access network device and the UPF may have another name, for example, a next generation (N) 3 interface (N3 for short) tunnel. (2) A tunnel between two UPFs. The tunnel between the two UPFs may have another name, for example, an N19 tunnel. It should be noted that the tunnel may be a tunnel at a session granularity, a tunnel at a device granularity, or a tunnel at a group granularity. The tunnel at the session granularity is a tunnel used only for one PDU session, and is used to transmit only a data packet of the corresponding PDU session. The tunnel at the device granularity is a unique tunnel established between two devices (for example, a RAN node and a UPF), and is used to transmit data of all terminals served by the two devices. The tunnel at the group granularity is a tunnel used by all terminals in one group, and is used to transmit only a data packet of a terminal in the corresponding group. 4. Ethernet Frame The Ethernet frame is a data packet transmitted on an Ethernet link. Ethernet frames are classified into a tagged frame and an untagged frame. The tagged frame carries a 4-byte VLAN tag, and the untagged frame does not carry a VLAN tag. The VLAN tag usually includes a 2-byte tag protocol identifier (TPID) and 2-byte tag control information (TCI). The TPID is a type value defined by the IEEE, and indicates that a frame carrying the TPID is a frame to which a tag is added.
The TCI includes the following parameters: a priority, a canonical format indicator (CFI), and a VLAN identifier (VID). In embodiments of this application, handling a VID of the Ethernet frame may be replaced with handling a VLAN tag of the Ethernet frame. For example, adding a VID to the Ethernet frame may be replaced with adding a VLAN tag to the Ethernet frame. For another example, removing a VID carried in the Ethernet frame may be replaced with removing a VLAN tag carried in the Ethernet frame. In embodiments of this application, the Ethernet frame may be replaced with a packet or a data packet. 5. Port of a Fixed Network To adapt to different connection manners and networking manners, the fixed network introduces an access port, a trunk port, and a hybrid port. The access port is usually used to connect to a user terminal that cannot identify a tag, for example, a user host or a server. The access port can belong to only one VLAN, and a default VID is configured for the access port. The trunk port is usually used to connect to a device that can identify a tag, for example, a switch, a router, an access point (AP), or a voice terminal that may send and receive a tagged frame and an untagged frame. The trunk port can belong to a plurality of VLANs, and a default VID and a tag list are configured for the trunk port. The hybrid port may be used to connect to both a user terminal that cannot identify a tag and a device that can identify a tag. The hybrid port can belong to a plurality of VLANs, and a default VID, a tag list, and an untagged list are configured for the hybrid port. For example, for manners in which the foregoing three ports handle a packet, refer to Table 1.
TABLE 1

Processing of a received packet that does not carry a VID:
Access port: The packet is accepted, and the default VID is added to the packet.
Trunk port: The default VID is added to the packet. In addition, when the default VID is in the tag list, the packet is accepted; when the default VID is not in the tag list, the packet is discarded.
Hybrid port: The default VID is added to the packet. In addition, when the default VID is in the tag list, the packet is accepted; when the default VID is not in the tag list, the packet is discarded.

Processing of a received packet that carries the VID:
Access port: When the VID carried in the packet is the same as the default VID, the packet is accepted; when the VID carried in the packet is different from the default VID, the packet is discarded.
Trunk port: When the VID carried in the packet is in the tag list, the packet is accepted; when the VID carried in the packet is not in the tag list, the packet is discarded.
Hybrid port: When the VID carried in the packet is in the tag list, the packet is accepted; when the VID carried in the packet is not in the tag list, the packet is discarded.

Processing of a to-be-sent packet:
Access port: The VID carried in the packet is removed, and the packet without the VID is sent.
Trunk port: When the VID carried in the packet is the same as the default VID and the VID carried in the packet is in the tag list, the VID carried in the packet is removed and the packet without the VID is sent; when the VID carried in the packet is different from the default VID and the VID carried in the packet is in the tag list, the packet is sent; when the VID carried in the packet is not in the tag list, the packet is discarded.
Hybrid port: When the VID carried in the packet is in the tag list but not in an untagged list, the packet is sent; when the VID carried in the packet is in the tag list and an untagged list, the VID carried in the packet is removed and the packet without the VID is sent; when the VID carried in the packet is not in the tag list, the packet is discarded.

6. QinQ Technology In a VLAN tag defined in the IEEE 802.1Q, only 12 bits indicate a VID.
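As an illustrative sketch, the port behaviors summarized in Table 1 above can be modeled as follows. All names here are hypothetical; this is an explanatory model, not code defined by IEEE 802.1Q or 3GPP.

```python
# Illustrative model of the access/trunk/hybrid port rules in Table 1.
# All names are hypothetical; this is an explanatory sketch.
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class Port:
    mode: str                                        # "access", "trunk", or "hybrid"
    default_vid: int                                 # VID added to untagged ingress packets
    tag_list: set = field(default_factory=set)       # VIDs allowed through the port
    untagged_list: set = field(default_factory=set)  # VIDs stripped on egress (hybrid only)

def receive(port: Port, vid: Optional[int]) -> Optional[int]:
    """Return the VID the packet carries after ingress, or None if discarded."""
    if vid is None:                     # packet without a VID: add the default VID
        vid = port.default_vid
        if port.mode == "access":
            return vid
        return vid if vid in port.tag_list else None
    if port.mode == "access":           # tagged packet on an access port
        return vid if vid == port.default_vid else None
    return vid if vid in port.tag_list else None    # trunk and hybrid

def send(port: Port, vid: int) -> Optional[Tuple[bool, Optional[int]]]:
    """Return (sent, VID on the wire or None when sent untagged), or None if discarded."""
    if port.mode == "access":
        return (True, None)             # the VID is removed and the packet is sent
    if vid not in port.tag_list:
        return None                     # not in the tag list: discarded
    if port.mode == "trunk":
        return (True, None) if vid == port.default_vid else (True, vid)
    # hybrid: the VID is removed only when it is also in the untagged list
    return (True, None) if vid in port.untagged_list else (True, vid)
```

For example, an access port with a default VID of 100 accepts an untagged packet as VLAN 100 and discards a packet tagged with any other VID.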
Therefore, a device may support a maximum of 4094 VLANs. During actual application, especially in a metropolitan area network, a large quantity of VLANs are required to isolate users, and 4094 VLANs cannot meet this requirement. To expand the VLAN identifier space, the QinQ technology is proposed in the industry. The QinQ technology is to add another 802.1Q tag to a packet that already carries an 802.1Q tag. A packet using the QinQ technology may be referred to as a QinQ packet for short. The QinQ packet may include two VLAN tags. An inner VLAN tag is a customer tag (C-tag), and an outer VLAN tag is a service provider tag (S-tag). In embodiments of this application, the default VID may have another name, for example, a port default VID (port default VLAN ID, PVID). This is not limited herein. The foregoing describes terms used in embodiments of this application. General descriptions are provided herein, and details are not described below again. For a 5GLAN service, 3rd generation partnership project (3GPP) technical report (TR) 23.734 proposes to support one-to-one communication and one-to-many communication in a 5GLAN. Specifically, to support one-to-one communication and one-to-many communication in the 5GLAN, a 3GPP network is required to support group-based unicast, multicast, and broadcast, support duplication and distribution of multicast and broadcast packets, and support any terminal as a multicast source. To meet this requirement, it is currently defined in 3GPP technical specification (TS) 23.501 that one 5GLAN is managed by one SMF. As shown inFIG.1, the SMF manages one UPF or simultaneously manages a plurality of UPFs.FIG.1is drawn by using an example in which the SMF manages a UPF1and a UPF2. The UPF1and the UPF2maintain routing rules and may perform packet forwarding according to these routing rules. It is assumed that a terminal1, a terminal2, and a terminal3belong to a same group.
When two terminals (for example, the terminal1and the terminal2inFIG.1) served by a same UPF perform one-to-one communication, data is transmitted through local switching at the UPF. When two terminals (for example, the terminal1and the terminal3inFIG.1) served by different UPFs perform one-to-one communication, data needs to be transmitted through a tunnel between the UPF1and the UPF2. In embodiments of this application, two UPFs may exchange tunnel information through an SMF, to establish a tunnel. A tunnel (or a forwarding path) corresponding to tunnel information of a UPF network element may be constructed in a manner of a virtual local area network (VLAN), a virtual extensible local area network (VxLAN), a general packet radio service (GPRS) tunneling protocol-user plane (GTP-U), a generic routing encapsulation (GRE) protocol, or an IP tunnel. The foregoing construction manner may be dynamic, or may be pre-configured in a network. This is not specifically limited in embodiments of this application. Different construction manners correspond to different tunnel information. For example, when the VLAN is used for construction, the tunnel information may be a UPF ID, a UPF ID+a virtual local area network identifier (VLAN ID, VID), or a media access control (MAC) address+a VID. Alternatively, for example, when the VxLAN is used for construction, the tunnel information may be a UPF ID, a UPF ID+a VID, an IP address+a VID, or an IP address+a port number+a VID. Alternatively, for example, when the GTP-U is used for construction, the tunnel information may be a UPF ID, a UPF ID+a tunnel endpoint identifier (TEID), an IP address+a TEID, or an IP address+a port+a TEID. Alternatively, for example, when the GRE is used for construction, the tunnel information may be a UPF ID, a UPF ID+a key, an IP address+a key, or an IP address+a port+a key.
Alternatively, when the IP tunnel is used for construction, the tunnel information may be a UPF ID, an IP address, or an IP address+a port. If the tunnel information includes the UPF ID, the UPF ID may be a MAC address or an IP address, or an IP address+a port, or the SMF network element or the UPF network element may determine a corresponding MAC address or IP address, or an IP address+a port based on the UPF ID. General descriptions are provided herein, and details are not described below again. In embodiments of this application, one 5GLAN may provide a group communication service for one group. In other words, one 5GLAN may correspond to one group. One SMF or UPF may also provide communication services for a plurality of groups. Optionally, in terms of expression, the 5GLAN may also be referred to as a 5G virtual network (5GVN), a local area network (LAN), a 5G LAN-VN LAN group, a LAN-type service, a LAN-VN, a 5G LAN-type service, or the like. A name of the 5GLAN is not specifically limited in embodiments of this application. In an architecture shown inFIG.1, the SMF is mainly responsible for all control plane functions of terminal session management, including UPF selection and control, IP address allocation and management, session quality of service (QoS) management, a function of obtaining a policy and charging control (PCC) policy from a PCF, and the like. In the architecture shown inFIG.1, the UPF serves as an anchor of a PDU session connection, and is responsible for data packet filtering, data transmission/forwarding, rate control, charging information generation, QoS handling for a user plane, uplink transmission authentication, transmission class verification, downlink data packet buffering, downlink data notification triggering, and the like of the terminal. In the architecture shown inFIG.1, a radio access network (RAN) is a RAN node, and the RAN node may also be referred to as an access network device. 
For example, the RAN node may be a transmission reception point (TRP), a base station, or a control node in various forms (for example, a network controller or a radio controller (for example, a radio controller in a cloud radio access network (CRAN) scenario)). Specifically, the RAN node may be a macro base station, a micro base station (also referred to as a small cell), a relay station, an access point (AP), or the like in various forms, or may be an antenna panel of a base station. The control node may be connected to a plurality of base stations, and configure resources for a plurality of terminals within coverage of the plurality of base stations. In systems using different radio access technologies, names of devices having functions of a base station may be different. For example, the device may be referred to as an evolved NodeB (eNB or eNodeB) in a long term evolution (LTE) system, or may be referred to as a next generation NodeB (gNB) in a 5G system or an NR system. A specific name of the base station is not limited in this application. The RAN node may alternatively be a network device in a future evolved public land mobile network (PLMN) or the like. In the architecture shown inFIG.1, the terminal is a user-side entity configured to receive a signal and/or send a signal. The terminal is configured to provide a user with one or more of a voice service and a data connectivity service. The terminal may also be referred to as user equipment (UE), a terminal device, an access terminal, a subscriber unit, a subscriber station, a mobile station, a remote station, a remote terminal, a mobile device, a user terminal, a wireless communication device, a user agent, or a user apparatus. 
The terminal may be a vehicle-to-everything (V2X) device, for example, a smart car (also referred to as an intelligent car), a digital car, an unmanned car (also referred to as a driverless car, a pilotless car, or an automobile), a self-driving car (also referred to as an autonomous car), a pure electric vehicle (pure EV or battery EV), a hybrid electric vehicle (HEV), a range extended electric vehicle (REEV), a plug-in hybrid electric vehicle (PHEV), a new energy vehicle, or a road side unit (RSU). Alternatively, the terminal may be a device-to-device (D2D) device, for example, an electricity meter or a water meter. Alternatively, the terminal may be a mobile station (MS), a subscriber unit, an uncrewed aerial vehicle, an internet of things (IoT) device, a station (ST) in a WLAN, a cellular phone, a smartphone, a cordless telephone set, a wireless data card, a tablet computer, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA) device, a laptop computer, a machine type communication (MTC) terminal, a handheld device having a wireless communication function, a computing device, another processing device connected to a wireless modem, a vehicle-mounted device, or a wearable device (which may also be referred to as a wearable intelligent device). Alternatively, the terminal may be a terminal in a next generation communication system, for example, a terminal in a 5G system, a terminal in a future evolved PLMN, or a terminal in an NR system.
Optionally, as shown inFIG.2, a current 5G network may further include the following network elements: an access and mobility management function (AMF) network element, an authentication server function (AUSF) network element, a network slice selection function (NSSF) network element, a network exposure function (NEF) network element, a network repository function (NRF) network element, a policy control function (PCF) network element, a unified data management (UDM) network element, an application function (AF) network element, and the like. This is not specifically limited in embodiments of this application. The terminal communicates with the AMF network element through a next generation (N) 1 interface (N1 for short), the RAN device communicates with the AMF network element through an N2 interface (N2 for short), the RAN device communicates with the UPF network element through an N3 interface (N3 for short), and the UPF network element communicates with the DN through an N6 interface (N6 for short). Control plane network elements such as the AMF network element, the SMF network element, the UDM network element, the AUSF network element, or the PCF network element may alternatively interact with each other through a service-oriented interface. For example, as shown inFIG.2, a service-oriented interface externally provided by the AMF may be Namf, a service-oriented interface externally provided by the SMF network element may be Nsmf, a service-oriented interface externally provided by the UDM network element may be Nudm, a service-oriented interface externally provided by the PCF network element may be Npcf, and a service-oriented interface externally provided by the AUSF network element may be Nausf. Details are not described herein. FIG.3shows a procedure in which a terminal establishes a PDU session to access a 5GLAN service. The procedure includes the following steps. S11. 
The terminal sends a session establishment request message to an AMF, so that the AMF receives the session establishment request message sent by the terminal. The session establishment request message is used to request to establish a PDU session. The session establishment request message includes an identifier of the terminal and a group identifier. In this embodiment of this application, the identifier of the terminal in the session establishment request message may include one or more of the following: a subscription permanent identifier (SUPI), a generic public subscription identifier (GPSI), an international mobile subscriber identity (IMSI), a mobile subscriber integrated services digital network number (MSISDN), a globally unique temporary identity (GUTI), and a data network-related identifier (for example, a network access identifier (NAI), a specific user identifier of a local area network, an internet protocol version 4 (IPv4) address, an internet protocol version 6 (IPv6) address, a MAC address, or a MAC address+a VID). In this embodiment of this application, the group identifier is used to determine a corresponding 5GLAN group. For example, the group identifier may be a data network name (DNN), a group identifier (Group ID), or a DNN+single network slice selection assistance information (S-NSSAI). S12. The AMF sends the session establishment request message to an SMF, so that the SMF receives the session establishment request message sent by the AMF. S13. The SMF determines a UPF that serves the terminal, and establishes an N4 session on the UPF. In this way, the N4 session established by the UPF stores a corresponding N4 routing rule, so that the UPF may forward a group packet according to the N4 routing rule. FIG.4shows a procedure of transmitting, in a local switch manner, a packet sent by UE1to UE2in a same group in a conventional technology. The procedure specifically includes the following steps. 1.
A UPF receives, through an uplink tunnel of a session corresponding to the UE1, a data packet sent by the UE1to the UE2. 2. The UPF finds a PDR that matches the data packet in an N4 session corresponding to the UE1. A destination interface parameter of a FAR associated with the PDR found by the UPF for the first time is set to a value of an internal interface (for example, “5GLAN internal”), and a network instance is set to an identifier of a group to which the UE1belongs. 3. The UPF sends, to the internal interface according to the FAR associated with the PDR found for the first time, the data packet from which an outer GTP-U header is removed. 4. The UPF performs matching on a corresponding PDR based on a destination address of the data packet (that is, an address of the UE2) and a source interface of “5GLAN internal”, and identifies an N4 session corresponding to the UE2. A destination interface parameter of a FAR associated with the PDR found by the UPF for the second time is set to tunnel information of an access network device connected to the UE2. 5. The UPF adds an outer header to the data packet according to the FAR associated with the PDR found for the second time, and sends, through a corresponding downlink tunnel, the data packet to the access network device accessed by the UE2, and the access network device sends the data packet to the UE2. FIG.5shows a procedure in which a packet sent by UE1to UE3in a group is transmitted across UPFs in a conventional technology. The procedure specifically includes the following steps. 1. A UPF1receives, through an uplink tunnel of a session corresponding to the UE1, a data packet sent by the UE1to the UE3. 2. The UPF1finds a PDR that matches the data packet in an N4 session corresponding to the UE1. A destination interface parameter of a FAR associated with the PDR found by the UPF1for the first time is set to “5GLAN internal”, and a network instance is set to an identifier of a group to which the UE1belongs. 3.
The UPF1sends, to an internal interface according to the FAR associated with the PDR found for the first time, the data packet from which an outer header is removed. 4. The UPF1performs matching on a corresponding PDR based on a destination address of the data packet (that is, an address of the UE3) and a source interface of “5GLAN internal”, and identifies an N4 session at a group granularity. A destination interface parameter of a FAR associated with the PDR found by the UPF1for the second time is set to N19 tunnel information of a UPF2. 5. The UPF1encapsulates the N19 tunnel information of the UPF2into the data packet according to the FAR associated with the PDR found for the second time, and sends the data packet to the UPF2. 6. After receiving the data packet through an N19 tunnel, the UPF2performs matching on a corresponding PDR based on a header carrying the N19 tunnel information and/or a network instance, and identifies an N4 session at a group granularity. A destination interface parameter of a FAR associated with the PDR found by the UPF2for the first time is set to “5GLAN internal”, and the network instance is set to an identifier of a group to which the UE3belongs. 7. The UPF2removes an outer header of the data packet according to the FAR associated with the PDR found for the first time, and sends the data packet to an internal interface. 8. The UPF2performs matching on a corresponding PDR based on the destination address of the data packet (that is, the address of the UE3) and the source interface of “5GLAN internal”, and identifies an N4 session corresponding to the UE3. A destination interface parameter of a FAR associated with the PDR found by the UPF2for the second time is set to tunnel information of an access network device connected to the UE3. 9. 
The UPF2adds an outer header to the data packet according to the FAR associated with the PDR found for the second time, and sends, through a corresponding downlink tunnel, the data packet to the access network device accessed by the UE3, and the access network device sends the data packet to the UE3. A 5GLAN may support one or more VLANs. Correspondingly, a member in a 5GLAN group may be a terminal in one or more VLANs. After a terminal accesses the 5GLAN by establishing a PDU session, the PDU session between the terminal and a UPF may be considered as a link between switches, so that the 5GLAN communicates with a VLAN to which the terminal belongs. An Ethernet packet of the VLAN to which the terminal belongs may be transmitted in the 5GLAN. However, different terminals in the 5GLAN have different VLAN capabilities (for example, some terminals can support only an untagged frame while some terminals can support a tagged frame), and different terminals have different networking manners. Currently, a user plane network element in the 5GLAN does not perform corresponding handling on an Ethernet packet based on a terminal-side VLAN capability and networking manner, but directly forwards the Ethernet packet. Consequently, communication of an external LAN is affected. An example is used for description with reference toFIG.6. InFIG.6, a UPF1is connected to UE1and UE2in a data network, the UE1belongs to a VLAN200, and the UE2belongs to a VLAN100. The UPF1further serves UE3and UE4in a 5GLAN group. The UE3belongs to a VLAN300, and the UE4belongs to the VLAN100. A UPF2serves UE5, UE6, and UE7in the 5GLAN group, and the UE6belongs to the VLAN100. The UE7is further connected to UE8, UE9, and UE10. The UE8belongs to the VLAN100, the UE9belongs to the VLAN200, and the UE10belongs to the VLAN300. It is assumed that the UE7has a capability of reading a packet carrying a VLAN tag, and the UE4and the UE3do not have the capability of reading the packet carrying the VLAN tag.
The UE7sends a packet carrying a VLAN tag, and a destination MAC address of the packet carrying the VLAN100tag is a MAC address of the UE4. Based on an existing procedure, after receiving the packet carrying the VLAN100tag, the UPF2sends the packet carrying the VLAN tag to the UPF1. Then, the UPF1sends the packet carrying the VLAN100tag to the UE4. Because the UE4does not have a capability of reading the packet carrying the VLAN100tag, the UE4discards the packet carrying the VLAN100tag. Consequently, communication of the VLAN100is affected. For another example, the UE3belongs to the VLAN300, the UE3sends a packet to the UPF1, and a destination MAC address of the packet is a MAC address or a broadcast address of the UE4. Based on an existing procedure, after receiving the packet, the UPF1forwards the packet to the UE4based on the destination address. As a result, the UE4belonging to the VLAN100incorrectly receives the packet, and communication of the VLAN100or the VLAN300is affected. Therefore, how to enable a network element in a 5GLAN to adapt to a terminal-side VLAN capability and networking manner, so that the network element in the 5GLAN may normally transmit an Ethernet packet of a terminal, is an urgent technical problem to be resolved. To resolve this technical problem, an embodiment of this application provides a route configuration method. As shown inFIG.7, the method includes the following steps. S101. A session management network element obtains VLAN information of a first port. The first port is a port, which corresponds to an established PDU session of a first terminal, on a first user plane network element. The first user plane network element is a user plane network element that provides a 5GLAN service for the first terminal. Optionally, the first port may be a virtual port, for example, an N3 port. The session management network element or the first user plane network element may configure port information of the first port.
It may be understood that when the first user plane network element configures the port information of the first port, the session management network element may obtain the port information of the first port from the first user plane network element. For example, the port information of the first port may include a combination of one or more of the following: an IP address, a port number, a MAC address, or a TEID. In this embodiment of this application, for port information of another port, refer to the port information of the first port. General descriptions are provided herein, and details are not described below again. In this embodiment of this application, the session management network element may allocate the first user plane network element to the first terminal based on factors such as a location of the first terminal, a service range of each user plane network element associated with the session management network element, and a service capability of each user plane network element managed by the session management network element. For a specific implementation thereof, refer to a conventional technology. Details are not described herein. The VLAN information of the first port includes a combination of one or more of the following: a default VID of the first port, an allowed VID list of the first port, or an untagged VID list of the first port. The default VID of the first port identifies a packet that is received through the first port and that does not carry a VID. Optionally, the default VID of the first port may be a VID corresponding to a VLAN to which the first terminal belongs. The allowed VID list of the first port records one or more VIDs. A VID recorded in the allowed VID list of the first port is the same as a VID carried in a packet allowed to pass through the first port. The untagged VID list of the first port records one or more VIDs. 
A VID recorded in the untagged VID list of the first port is the same as a VID removed from a packet to be sent through the first port. The allowed VID list may have another name, for example, a tagged list. This is not limited in this embodiment of this application. The untagged VID list may have another name, for example, an untagged list. This is not limited in this embodiment of this application. Optionally, the VLAN information of the first port may further include another parameter, for example, an interface mode, a priority of each VID in the allowed VID list, a priority of each VID in the untagged VID list, indication information, and/or a first action indication. A priority of a VID is used to determine a matching sequence of the VID in the allowed VID list or the untagged VID list. A higher priority of the VID indicates a higher matching sequence of the VID in the allowed VID list or the untagged VID list. For example, it is assumed that the allowed VID list sequentially includes a VID1, a VID2, and a VID3, a priority of the VID2is higher than that of the VID1, and the priority of the VID1is higher than that of the VID3. In this way, for a packet received through the first port, it is first determined whether a VID carried in the packet matches the VID2. When the VID carried in the packet does not match the VID2, it is further determined whether the VID carried in the packet matches the VID1. Then, when the VID carried in the packet does not match the VID1, it is further determined whether the VID carried in the packet matches the VID3. The interface mode includes an access type, a trunk type, or a hybrid type. Optionally, the interface mode may have another name, for example, an interface type or a port type. This is not limited herein. 
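The priority-based matching sequence described above (the VID1/VID2/VID3 example) can be sketched as follows. The helper name, the tuple return value, and the integer priority encoding (a larger value meaning a higher priority) are illustrative assumptions, not anything defined in this application.

```python
# Sketch of priority-ordered matching in the allowed VID list.
# Names and the integer priority encoding are illustrative assumptions.
def match_vid(packet_vid, allowed_vids, priorities):
    """Compare the packet VID against the allowed VIDs in descending
    priority order; return the matched VID (or None) and the VIDs checked."""
    order = sorted(allowed_vids, key=lambda v: priorities[v], reverse=True)
    checked = []
    for vid in order:
        checked.append(vid)
        if vid == packet_vid:
            return vid, checked
    return None, checked
```

With the priorities of the example above (the VID2 higher than the VID1, which is higher than the VID3), a packet carrying the VID1 is compared against the VID2 first and only then matches the VID1.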
In this embodiment of this application, if the VLAN information of the first port includes the interface mode, the session management network element may determine an interface mode of the first port based on the interface mode included in the VLAN information of the first port. In other words, the session management network element may determine that the first port is an access port, a trunk port, or a hybrid port. In this embodiment of this application, if the VLAN information of the first port does not include the interface mode, the session management network element may determine an interface mode of the first port based on a type of a VLAN parameter included in the VLAN information of the first port. For example, if the VLAN information of the first port includes the default VID of the first port and does not include the allowed VID list or the untagged VID list of the first port, the session management network element determines that the first port is an access port. If the VLAN information of the first port includes the default VID and the allowed VID list of the first port and does not include the untagged VID list of the first port, the session management network element determines that the first port is a trunk port. If the VLAN information of the first port includes the default VID of the first port, the allowed VID list of the first port, and the untagged VID list of the first port, the session management network element may determine that the first port is a hybrid port. The indication information indicates whether the first terminal supports a VLAN. When the indication information indicates that the first terminal supports the VLAN, a packet sent by the first user plane network element to the first terminal through the first port may carry or not carry a VLAN tag. 
When the indication information indicates that the first terminal does not support the VLAN, a packet sent by the first user plane network element to the first terminal through the first port cannot carry a VLAN tag.

Optionally, the first action indication indicates an operation on an uplink packet sent by the first terminal and/or a downlink packet to be sent to the first terminal. The operation indicated by the first action indication includes a combination of one or more of the following:
an operation 1-1: skipping handling a VID carried in the uplink packet;
an operation 1-2: adding the default VID of the first port to the uplink packet;
an operation 1-3: skipping handling a VID carried in the downlink packet;
an operation 1-4: removing the default VID of the first port or a VID in the untagged VID list of the first port from the downlink packet;
an operation 1-5: skipping determining, based on the allowed VID list of the first port, whether to allow transmission of the uplink packet;
an operation 1-6: skipping determining, based on the allowed VID list of the first port, whether to allow transmission of the downlink packet;
an operation 1-7: determining, based on the allowed VID list of the first port, whether to allow transmission of the uplink packet;
an operation 1-8: determining, based on the allowed VID list of the first port, whether to allow transmission of the downlink packet;
an operation 1-9: skipping adding an S-TAG to the uplink packet;
an operation 1-10: skipping removing the S-TAG from the downlink packet;
an operation 1-11: adding the S-TAG to the uplink packet; or
an operation 1-12: removing the S-TAG from the downlink packet.

The operation 1-1 to the operation 1-12 are merely examples, and the first action indication may further indicate another operation. This is not limited herein. Optionally, the first action indication may include a plurality of indications that do not conflict with each other in the operation 1-1 to the operation 1-12.
For example, the first action indication may include the operation 1-2 and the operation 1-7.

In this embodiment of this application, the VLAN information of the first port may have another name, for example, VLAN information of a PDU session established by the first terminal. This is not specifically limited. In this embodiment of this application, the allowed VID list of the first port may have another name, for example, an allowed VID list of a PDU session established by the first terminal. This is not specifically limited. In this embodiment of this application, the untagged VID list of the first port may have another name, for example, an untagged VID list of a PDU session established by the first terminal. This is not specifically limited. In this embodiment of this application, the default VID of the first port may have another name, for example, a default VID of a PDU session established by the first terminal. This is not specifically limited.

In a possible implementation, in a process in which the first terminal establishes the PDU session, the session management network element first obtains VLAN information of the first terminal. Then, the session management network element determines the VLAN information of the first port based on the VLAN information of the first terminal. Optionally, the VLAN information of the first terminal may include a combination of one or more of the following VLAN parameters: indication information, a default VID, an allowed VID list, an untagged VID list, an interface mode, a first action indication, an S-TAG, or a priority of a VID.
Optionally, that the session management network element determines the VLAN information of the first port based on the VLAN information of the first terminal includes a combination of one or more of the following implementations:

Implementation 1: When the VLAN information of the first terminal includes the allowed VID list, the session management network element may determine the allowed VID list included in the VLAN information of the first terminal as the allowed VID list of the first port. For example, the allowed VID list included in the VLAN information of the first terminal includes a VID1, a VID2, and a VID3, so that the allowed VID list of the first port also includes the VID1, the VID2, and the VID3.

Implementation 2: When the VLAN information of the first terminal includes the untagged VID list, the session management network element may determine the untagged VID list included in the VLAN information of the first terminal as the untagged VID list of the first port. For example, the untagged VID list included in the VLAN information of the first terminal includes a VID2 and a VID3, so that the untagged VID list of the first port includes the VID2 and the VID3.

Implementation 3: When the VLAN information of the first terminal includes the first action indication, the session management network element may determine the first action indication included in the VLAN information of the first terminal as the first action indication included in the VLAN information of the first port.

Implementation 4: When the VLAN information of the first terminal includes the S-TAG, the session management network element may determine the S-TAG included in the VLAN information of the first terminal as the S-TAG included in the VLAN information of the first port.
Implementation 5: When the VLAN information of the first terminal includes the default VID, the session management network element may determine the default VID included in the VLAN information of the first terminal as the default VID of the first port included in the VLAN information of the first port.

Implementation 6: When the VLAN information of the first terminal does not include the interface mode or the default VID, and the VLAN information of the first terminal includes the allowed VID list, the session management network element may determine a VID that meets a preset condition in the allowed VID list included in the VLAN information of the first terminal as the default VID of the first port. Optionally, when the VLAN information of the first terminal includes the allowed VID list, and the VLAN information of the first terminal does not include the default VID, the session management network element may determine a VID that meets a preset condition in the allowed VID list (or the untagged VID list) included in the VLAN information of the first terminal as the default VID of the first port.

Optionally, the VID that meets the preset condition in the allowed VID list included in the VLAN information of the first terminal may be:
the first VID in the allowed VID list included in the VLAN information of the first terminal;
the last VID in the allowed VID list included in the VLAN information of the first terminal;
a VID with a highest priority in the allowed VID list included in the VLAN information of the first terminal; or
a VID with a lowest priority in the allowed VID list included in the VLAN information of the first terminal.
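The preset-condition selection in implementation 6 can be sketched as follows. This is an illustrative sketch only: the function name `pick_default_vid`, the `condition` parameter values, and the `(vid, priority)` tuple representation are assumptions made for the example.

```python
# Hypothetical sketch of choosing a default VID from the allowed VID list when
# the terminal's VLAN information carries no default VID (implementation 6).
# Which preset condition applies is assumed to be a configuration choice.

def pick_default_vid(allowed_vid_list, condition='first'):
    """allowed_vid_list: non-empty list of (vid, priority) tuples in list order."""
    if condition == 'first':
        return allowed_vid_list[0][0]          # first VID in the list
    if condition == 'last':
        return allowed_vid_list[-1][0]         # last VID in the list
    if condition == 'highest_priority':
        return max(allowed_vid_list, key=lambda e: e[1])[0]
    if condition == 'lowest_priority':
        return min(allowed_vid_list, key=lambda e: e[1])[0]
    raise ValueError('unknown preset condition')
```

Each branch corresponds to one of the four candidate conditions listed above.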
Implementation 7: When the VLAN information of the first terminal does not include the default VID, the VLAN information of the first terminal includes the allowed VID list and the interface mode, and the interface mode is the access type, the session management network element determines only one VID in the allowed VID list included in the VLAN information of the first terminal as the default VID of the first port.

Implementation 8: When the VLAN information of the first terminal does not include the default VID, the VLAN information of the first terminal includes the allowed VID list and the interface mode, and the interface mode is the trunk type or the hybrid type, the session management network element may determine a VID that meets a preset condition in the allowed VID list included in the VLAN information of the first terminal as the default VID of the first port.

Implementation 9: When the VLAN information of the first terminal does not include the untagged VID list, the session management network element may use the default VID of the first port as the untagged VID list of the first port. In other words, the untagged VID list of the first port includes only the default VID of the first port.

Optionally, that the session management network element obtains the VLAN information of the first terminal includes at least one of the following manners:

Manner 1: The session management network element obtains the VLAN information of the first terminal from the first terminal. For example, the session management network element receives a session establishment request message from the first terminal, where the session establishment request message includes the VLAN information of the first terminal.

Manner 2: The session management network element obtains the VLAN information of the first terminal from a data management network element. Optionally, the data management network element may be a UDM in 5G.
For example, the session management network element obtains subscription data from the data management network element, where the subscription data includes the VLAN information of the first terminal. It may be understood that for an implementation in which the session management network element obtains the subscription data, refer to the conventional technology. Details are not described herein.

Manner 3: The session management network element obtains the VLAN information of the first terminal from a policy control function network element. Optionally, the policy control function network element may be a PCF in 5G. For example, the session management network element sends a policy control obtaining request to the policy control function network element, where the policy control obtaining request may include a session identifier, an identifier of the first terminal, and a policy index. Then, the session management network element receives a policy control obtaining response message sent by the policy control function network element, where the policy control obtaining response message carries a policy and charging control (PCC) rule, and the PCC rule includes the VLAN information of the first terminal. Optionally, the policy control function network element may directly obtain the VLAN information of the first terminal from an application function entity. Alternatively, the policy control function network element may obtain the VLAN information of the first terminal from an application function entity through an exposure function network element. The application function entity may be an AF in 5G. The exposure function network element may be an NEF in 5G.

Manner 4: The session management network element obtains the VLAN information of the first terminal from an authentication server.
Optionally, the authentication server may be an authentication, authorization, and accounting (AAA) server corresponding to a data network. For example, the session management network element sends an authentication authorization request to the authentication server. Then, the session management network element receives an authentication authorization response message sent by the authentication server. The authentication authorization response message includes the VLAN information of the first terminal. Optionally, the authentication authorization response message may further include other information, for example, a policy index.

In this embodiment of this application, the session management network element may obtain the VLAN information of the first terminal in only one of the manner 1 to the manner 4. Alternatively, the session management network element may obtain a plurality of pieces of VLAN information of the first terminal in a plurality of the manner 1 to the manner 4. Optionally, when the session management network element obtains a plurality of pieces of VLAN information of the first terminal, if the plurality of pieces of VLAN information include a target VLAN parameter of a same type, a target VLAN parameter included in VLAN information with a highest priority in the plurality of pieces of VLAN information is used as a target VLAN parameter included in the VLAN information of the first port. If first VLAN information in the plurality of pieces of VLAN information includes a target VLAN parameter, and other VLAN information than the first VLAN information in the plurality of pieces of VLAN information does not include the target VLAN parameter, the target VLAN parameter in the first VLAN information is used as a target VLAN parameter included in the VLAN information of the first port. The target VLAN parameter may be any one of the VLAN parameters.
The first VLAN information may be any one of the plurality of pieces of VLAN information. A priority of VLAN information depends on a priority of a source of the VLAN information. For example, priorities of sources of VLAN information in descending order are: the policy control function network element, the authentication server, the data management network element, and the first terminal. Correspondingly, priorities of VLAN information in descending order are: VLAN information obtained from the policy control function network element, VLAN information obtained from the authentication server, VLAN information obtained from the data management network element, and VLAN information obtained from the first terminal.

For example, it is assumed that the session management network element obtains VLAN information1 of the first terminal from the authentication server, where the VLAN information1 includes a default VID1 and an allowed VID list1. The session management network element obtains VLAN information2 of the first terminal from the policy control function network element, where the VLAN information2 includes a default VID2, an allowed VID list2, and an untagged VID list2. The session management network element obtains VLAN information3 of the first terminal from the data management network element, where the VLAN information3 includes an untagged VID list3 and an interface mode3. Priorities of the VLAN information in descending order are: the VLAN information2, the VLAN information1, and the VLAN information3. Therefore, the session management network element may determine that the VLAN information of the first port includes: the default VID2, the allowed VID list2, the untagged VID list2, and the interface mode3.

S102. The session management network element generates first routing information based on the VLAN information of the first port.
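The per-parameter merging of VLAN information from several sources can be sketched as follows. This is an illustrative sketch only: the function name `merge_vlan_info`, the source labels, and the dict representation are assumptions; only the priority order and the merge rule come from the text.

```python
# Hypothetical sketch of merging VLAN information obtained from several sources.
# For each parameter type, the value from the highest-priority source that
# carries that parameter is used in the VLAN information of the first port.

# Example source priorities from the text, in descending order.
SOURCE_PRIORITY = ['policy_control', 'auth_server', 'data_management', 'terminal']

def merge_vlan_info(info_by_source):
    """info_by_source: dict mapping source name -> dict of VLAN parameters."""
    merged = {}
    # Walk sources from lowest to highest priority so that a higher-priority
    # source overwrites any parameter also present at a lower-priority source.
    for source in reversed(SOURCE_PRIORITY):
        merged.update(info_by_source.get(source, {}))
    return merged

# Reproducing the worked example: information1 from the authentication server,
# information2 from the policy control function network element, information3
# from the data management network element.
info = {
    'auth_server': {'default_vid': 'VID1', 'allowed_vid_list': 'list1'},
    'policy_control': {'default_vid': 'VID2', 'allowed_vid_list': 'list2',
                       'untagged_vid_list': 'list2'},
    'data_management': {'untagged_vid_list': 'list3', 'interface_mode': 'mode3'},
}
```

Merging `info` yields the default VID2, the allowed VID list2, the untagged VID list2, and the interface mode3, matching the example above.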
The first routing information is used by the first user plane network element to handle a first packet, and the first packet is a packet of the first terminal received by the first user plane network element through the first port. In other words, the first packet is a packet received by the first user plane network element from the first terminal. For example, a source MAC address of the first packet may be a MAC address of the first terminal, or a MAC address of another terminal managed by the first terminal. In a possible design, a VLAN parameter included in the first routing information is unrelated to the interface mode of the first port. In other words, regardless of a type of the first port, the first routing information includes the default VID of the first port and the allowed VID list of the first port. In some cases, for example, when the first port is the access port, the VLAN information of the first port may not include the allowed VID list of the first port. Therefore, the session management network element may determine the allowed VID list of the first port based on the default VID of the first port. In this case, the allowed VID list of the first port includes only the default VID of the first port. 
Optionally, that the first routing information is used by the first user plane network element to handle a first packet includes a combination of one or more of the following:
(1-1) the first routing information indicates the first user plane network element to add the default VID of the first port to the first packet when the first packet does not carry a VID;
(1-2) the first routing information indicates the first user plane network element to accept the first packet when the allowed VID list of the first port includes a VID carried in the first packet; or
(1-3) the first routing information indicates the first user plane network element to discard the first packet when the allowed VID list of the first port does not include a VID carried in the first packet.

In another possible design, a VLAN parameter included in the first routing information is related to the interface mode of the first port. Therefore, the first routing information includes the following two cases:

Case 1: When the first port is the access port, the first routing information includes the default VID, and the first routing information does not include the allowed VID list of the first port. In this case, that the first routing information is used by the first user plane network element to handle a first packet includes a combination of one or more of the following:
(1-1) the first routing information indicates the first user plane network element to add the default VID of the first port to the first packet when the first packet does not carry a VID;
(1-4) the first routing information indicates the first user plane network element to accept the first packet when a VID carried in the first packet is the same as the default VID of the first port; or
(1-5) the first routing information indicates the first user plane network element to discard the first packet when a VID carried in the first packet is different from the default VID of the first port.
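The uplink handling in (1-1) to (1-3) can be sketched as follows. This is an illustrative sketch only: the function name `handle_uplink` and the dict-based packet model (with `'vid'` set to `None` for an untagged packet) are assumptions made for the example.

```python
# Hypothetical sketch of handling an uplink (first) packet according to the
# first routing information, items (1-1) to (1-3) in the text.

def handle_uplink(packet, default_vid, allowed_vid_list):
    """packet: dict; packet['vid'] is None when the packet carries no VLAN tag.

    Returns ('accept' | 'discard', packet)."""
    if packet.get('vid') is None:
        packet['vid'] = default_vid      # (1-1) tag an untagged packet
        return 'accept', packet
    if packet['vid'] in allowed_vid_list:
        return 'accept', packet          # (1-2) carried VID is in the allowed list
    return 'discard', packet             # (1-3) carried VID is not allowed
```

For the access-port variant ((1-4) and (1-5)), the allowed VID list would collapse to the single default VID of the first port.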
Case 2: When the first port is the trunk port or the hybrid port, the first routing information includes the default VID of the first port and the allowed VID list of the first port. In this case, that the first routing information is used by the first user plane network element to handle a first packet includes at least one of (1-1) to (1-3). Optionally, when the VLAN information of the first port includes the first action indication, the session management network element may further determine the first routing information based on the first action indication. For example, if the first action indication includes the operation 1-1, the first routing information does not include the VLAN parameter. Therefore, the first routing information indicates the first user plane network element to accept the first packet. For another example, if the first action indication includes the operation 1-2, the first routing information includes at least the default VID of the first port. Therefore, that the first routing information is used by the first user plane network element to handle a first packet includes at least (1-1). For another example, if the first action indication includes the operation 1-5, the first routing information does not include the allowed VID list of the first port. Therefore, that the first routing information is used by the first user plane network element to handle a first packet does not include (1-2) or (1-3). For another example, if the first action indication includes the operation 1-7, the first routing information includes the allowed VID list of the first port. Therefore, that the first routing information is used by the first user plane network element to handle a first packet includes (1-2) and (1-3). For another example, if the first action indication includes the operation 1-9, the first routing information does not include the S-TAG. 
Therefore, that the first routing information is used by the first user plane network element to handle a first packet does not include (1-6) in the following. For another example, if the first action indication includes the operation 1-11, the first routing information includes the S-TAG. Therefore, that the first routing information is used by the first user plane network element to handle a first packet includes (1-6) in the following:
(1-6) the first routing information indicates the first user plane network element to add the S-TAG to the first packet.

In this embodiment of this application, the first routing information may be represented by a first PDR or a first FAR. For example, when the first routing information is represented by the first PDR, a packet detection information (PDI) information element (IE), an outer header removal IE, or another extended IE in the first PDR may include the first routing information. For example, when the first routing information is represented by the first FAR, an action IE, an outer header creation IE, a forwarding policy IE, a container for header enrichment IE, or another extended IE in the first FAR may include the first routing information. The foregoing extended IE may be named as a VLAN handling indication IE, or have another name. This is not limited. For an extended IE in another PDR or FAR below, refer to the extended IE in the first PDR or the first FAR. General descriptions are provided herein, and details are not described below again.

S103. The session management network element sends the first routing information to the first user plane network element, so that the first user plane network element receives the first routing information sent by the session management network element.

S104. The first user plane network element stores the first routing information.

S105.
The session management network element generates second routing information based on the VLAN information of the first port. The second routing information is used by the first user plane network element to handle a second packet, and the second packet is a packet to be sent by the first user plane network element to the first terminal through the first port. In other words, the second packet is a packet to be sent by the first user plane network element to the first terminal. For example, a destination MAC address of the second packet may be the MAC address of the first terminal, or a MAC address of another terminal managed by the first terminal. In a possible design, a VLAN parameter included in the second routing information is unrelated to the interface mode of the first port. In other words, regardless of a type of the first port, the second routing information includes the allowed VID list of the first port and the untagged VID list of the first port. It may be understood that, when the first port is the access port, both the allowed VID list and the untagged VID list of the first port include only the default VID of the first port. When the first port is the trunk port, the untagged VID list of the first port includes only the default VID of the first port. 
Optionally, that the second routing information is used by the first user plane network element to handle a second packet includes a combination of one or more of the following:
(2-1) the second routing information indicates the first user plane network element to discard the second packet when the allowed VID list of the first port does not include a VID carried in the second packet;
(2-2) the second routing information indicates the first user plane network element to send the second packet when the allowed VID list of the first port includes a VID carried in the second packet and the untagged VID list of the first port does not include the VID carried in the second packet; or
(2-3) the second routing information indicates the first user plane network element to remove a VID carried in the second packet and send the second packet without the VID when the allowed VID list of the first port includes the VID carried in the second packet and the untagged VID list of the first port includes the VID carried in the second packet.

In another possible design, a VLAN parameter included in the second routing information is related to the interface mode of the first port. Therefore, the second routing information includes the following three cases:

Case 1: When the first port is the access port, the second routing information includes the default VID of the first port, and does not include the allowed VID list or the untagged VID list of the first port.
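The downlink handling in (2-1) to (2-3) can be sketched as follows. This is an illustrative sketch only: the function name `handle_downlink` and the dict-based packet model (with `'vid'` set to `None` after the tag is removed) are assumptions made for the example.

```python
# Hypothetical sketch of handling a downlink (second) packet according to the
# second routing information, items (2-1) to (2-3) in the text.

def handle_downlink(packet, allowed_vid_list, untagged_vid_list):
    """packet: dict; packet['vid'] is the VID carried in the packet.

    Returns ('send' | 'discard', packet)."""
    vid = packet.get('vid')
    if vid not in allowed_vid_list:
        return 'discard', packet         # (2-1) carried VID is not allowed
    if vid in untagged_vid_list:
        packet['vid'] = None             # (2-3) strip the VID, then send untagged
        return 'send', packet
    return 'send', packet                # (2-2) send with the VID kept
```

The trunk-port variant ((2-6) and (2-7)) follows the same shape, with the untagged VID list replaced by the single default VID of the first port.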
In this case, that the second routing information is used by the first user plane network element to handle a second packet includes a combination of one or more of the following:
(2-4) the second routing information indicates the first user plane network element to discard the second packet when a VID carried in the second packet is different from the default VID of the first port; or
(2-5) the second routing information indicates the first user plane network element to remove a VID carried in the second packet and send the second packet without the VID when the VID carried in the second packet is the same as the default VID of the first port.

Case 2: When the first port is the trunk port, the second routing information includes the default VID of the first port and the allowed VID list of the first port, and does not include the untagged VID list of the first port. In this case, that the second routing information is used by the first user plane network element to handle a second packet includes a combination of one or more of the following:
(2-1) the second routing information indicates the first user plane network element to discard the second packet when the allowed VID list of the first port does not include a VID carried in the second packet;
(2-6) the second routing information indicates the first user plane network element to send the second packet when the allowed VID list of the first port includes a VID carried in the second packet and the VID carried in the second packet is different from the default VID of the first port; or
(2-7) the second routing information indicates the first user plane network element to remove a VID carried in the second packet and send the second packet without the VID when the allowed VID list of the first port includes the VID carried in the second packet and the VID carried in the second packet is the same as the default VID of the first port.
Case 3: When the first port is the hybrid port, the second routing information includes the allowed VID list of the first port and the untagged VID list of the first port. In this case, that the second routing information is used by the first user plane network element to handle a second packet includes at least one of (2-1) to (2-3).

Optionally, when the VLAN information of the first port includes the first action indication, the session management network element may further determine the second routing information based on the first action indication. For example, if the first action indication includes the operation 1-3, the second routing information does not include the VLAN parameter. Therefore, the second routing information indicates the first user plane network element to send the second packet. For another example, if the first action indication includes the operation 1-4, the second routing information includes the default VID or the untagged VID list of the first port. Therefore, that the second routing information is used by the first user plane network element to handle a second packet includes at least (2-3), (2-5), or (2-7). For another example, if the first action indication includes the operation 1-6, the second routing information does not include the allowed VID list of the first port. Therefore, that the second routing information is used by the first user plane network element to handle a second packet does not include (2-2), (2-3), (2-6), or (2-7). For another example, if the first action indication includes the operation 1-8, the second routing information includes the allowed VID list of the first port. Therefore, that the second routing information is used by the first user plane network element to handle a second packet includes at least (2-2) and (2-3). Alternatively, that the second routing information is used by the first user plane network element to handle a second packet includes at least (2-6) and (2-7).
For another example, if the first action indication includes the operation 1-10, the second routing information does not include the S-TAG. Therefore, that the second routing information is used by the first user plane network element to handle a second packet does not include (2-8) in the following. For another example, if the first action indication includes the operation 1-12, the second routing information includes the S-TAG. Therefore, that the second routing information is used by the first user plane network element to handle a second packet includes (2-8) in the following:
(2-8) the second routing information indicates the first user plane network element to remove the S-TAG from the second packet.

In this embodiment of this application, the second routing information may be represented by a second PDR or a second FAR. For example, when the second routing information is represented by the second PDR, packet detection information, an outer header removal IE, or another extended IE in the second PDR may include the second routing information. For example, when the second routing information is represented by the second FAR, an action IE, an outer header creation IE, a forwarding policy IE, a container for header enrichment IE, or another extended IE in the second FAR may include the second routing information. It may be understood that a VLAN parameter included in the IE in the second PDR or the second FAR is the VLAN parameter included in the second routing information.

S106. The session management network element sends the second routing information to the first user plane network element, so that the first user plane network element receives the second routing information sent by the session management network element.

S107. The first user plane network element stores the second routing information.

An order of performing steps S102 to S104 and steps S105 to S107 is not limited in this embodiment of this application.
For example, steps S102 to S104 may be performed before steps S105 to S107; steps S105 to S107 may be performed before steps S102 to S104; or steps S102 to S104 and steps S105 to S107 are simultaneously performed. When steps S105 and S107 are simultaneously performed, the first routing information and the second routing information may be carried in different signaling; or the first routing information and the second routing information may be carried in same signaling, for example, an N4 session creation request message. The N4 session creation request message is used to create an N4 session corresponding to the PDU session established by the first terminal.

Based on the technical solution shown in FIG. 7, in a scenario in which a 5GLAN communicates with an external LAN, the VLAN information of the first port may reflect a networking manner and a VLAN configuration of a network in which the first terminal is located. The first routing information and the second routing information are generated based on the VLAN information of the first port. Therefore, the first user plane network element may handle the first packet based on the first routing information, and handle the second packet based on the second routing information. As a result, a manner in which the first user plane network element handles a packet may adapt to the networking manner and the VLAN configuration of the network in which the first terminal is located, to ensure normal communication of the external LAN.

The technical solution shown in FIG. 7 may be applied to the process in which the first terminal establishes the PDU session. For the session management network element, the process in which the first terminal establishes the PDU session is a period of time after the session management network element receives a PDU session establishment request message from the first terminal and before the session management network element sends a PDU session establishment response message.
In some scenarios, because the first terminal may not be the 1st terminal in a 5GLAN group that establishes the PDU session to access the 5GLAN service, there may be another user plane network element besides the first user plane network element in the 5GLAN. The another user plane network element may serve another terminal in the 5GLAN group. For ease of description, the another user plane network element besides the first user plane network element in the 5GLAN is referred to as a second user plane network element for short below. In a possible design, when there is further the second user plane network element in the 5GLAN, a first tunnel may be established between the first user plane network element and the second user plane network element, and the first tunnel may be configured to transparently transmit all packets. In this case, the session management network element may configure routing information related to the first tunnel on the first user plane network element and the second user plane network element according to the conventional technology. In another possible design, when there is further the second user plane network element in the 5GLAN, the session management network element may configure corresponding routing information on the first user plane network element and the second user plane network element according to the route configuration method shown in FIG. 8. FIG. 8 shows a route configuration method according to an embodiment of this application. The method includes the following steps. S201. A session management network element obtains an allowed VID list of a first user plane network element and an allowed VID list of a second user plane network element. In a possible implementation, the session management network element obtains allowed VID lists of ports corresponding to various PDU sessions on the first user plane network element and allowed VID lists of ports corresponding to various PDU sessions on the second user plane network element.
Then, the session management network element uses a union set of the allowed VID lists of the ports corresponding to the various PDU sessions on the first user plane network element as the allowed VID list of the first user plane network element. The session management network element uses a union set of the allowed VID lists of the ports corresponding to the various PDU sessions on the second user plane network element as the allowed VID list of the second user plane network element. For example, a UPF 1 provides services for a terminal 1, a terminal 2, and a terminal 3. Correspondingly, the UPF 1 has a port 1 corresponding to a PDU session established by the terminal 1, a port 2 corresponding to a PDU session established by the terminal 2, and a port 3 corresponding to a PDU session established by the terminal 3. An allowed VID list of the port 1 includes a VID 1, a VID 2, and a VID 3. An allowed VID list of the port 2 includes the VID 1, a VID 4, and a VID 5. An allowed VID list of the port 3 includes a VID 6. Therefore, an allowed VID list of the UPF 1 includes the VID 1, the VID 2, the VID 3, the VID 4, the VID 5, and the VID 6. It should be noted that, for an implementation in which the session management network element obtains an allowed VID list of a port corresponding to a PDU session on the first user plane network element or the second user plane network element, refer to an implementation in which the session management network element obtains an allowed VID list of a first port. Details are not described herein. S202. The session management network element generates third routing information based on the allowed VID list of the first user plane network element. The third routing information is used by the first user plane network element to handle a third packet. The third packet is a packet received by the first user plane network element from the second user plane network element.
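The union computation in the UPF 1 example above can be sketched as follows; the helper name and the plain integer VIDs are illustrative only and not part of any embodiment:

```python
def upf_allowed_vid_list(port_allowed_vids):
    """Union of the allowed VID lists of all ports on one user plane
    network element, as described for UPF 1 above."""
    allowed = set()
    for vids in port_allowed_vids.values():
        allowed |= set(vids)
    return allowed

# Ports of UPF 1 from the example: port 1 to port 3 with VID 1 to VID 6.
upf1_ports = {
    "port 1": [1, 2, 3],  # VID 1, VID 2, VID 3
    "port 2": [1, 4, 5],  # VID 1, VID 4, VID 5
    "port 3": [6],        # VID 6
}
print(sorted(upf_allowed_vid_list(upf1_ports)))  # [1, 2, 3, 4, 5, 6]
```

The result matches the allowed VID list of the UPF 1 stated in the example (VID 1 through VID 6).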
In other words, the third packet is a packet received by the first user plane network element from the second user plane network element through a second port. The second port is a port of a first tunnel on the first user plane network element side. The first tunnel is a tunnel between the first user plane network element and the second user plane network element. For example, the first tunnel may be an N19 tunnel. Optionally, the session management network element or the first user plane network element may configure port information of the second port. When the first user plane network element is responsible for configuring the port information of the second port, the session management network element may obtain the port information of the second port from the first user plane network element. In this embodiment of this application, the third routing information includes the allowed VID list of the first user plane network element. Optionally, that the third routing information is used by the first user plane network element to handle a third packet includes a combination of one or more of the following:
(3-1) the third routing information indicates the first user plane network element to discard the third packet when the allowed VID list of the first user plane network element does not include a VID carried in the third packet; or
(3-2) the third routing information indicates the first user plane network element to accept the third packet when the allowed VID list of the first user plane network element includes a VID carried in the third packet.
Based on the third routing information, when the VID carried in the third packet does not exist in the allowed VID list of the first user plane network element, the first user plane network element does not need to further handle the third packet, to reduce pressure of handling an Ethernet packet by the first user plane network element.
In this embodiment of this application, the third routing information may be represented by a third PDR or a third FAR. For example, when the third routing information is represented by the third PDR, packet detection information, an outer header removal IE, or another extended IE in the third PDR may include the third routing information. For example, when the third routing information is represented by the third FAR, an action IE, an outer header creation IE, a forwarding policy IE, a container for header enrichment IE, or another extended IE in the third FAR may include the third routing information. S203. The session management network element generates fourth routing information based on the allowed VID list of the second user plane network element. The fourth routing information is used by the first user plane network element to handle a fourth packet. The fourth packet is a packet to be sent by the first user plane network element to the second user plane network element. In other words, the fourth packet is a packet to be sent by the first user plane network element to the second user plane network element through the second port. In this embodiment of this application, the fourth routing information includes the allowed VID list of the second user plane network element. Optionally, that the fourth routing information is used by the first user plane network element to handle a fourth packet includes a combination of one or more of the following:
(4-1) the fourth routing information indicates the first user plane network element to discard the fourth packet when the allowed VID list corresponding to the second user plane network element does not include a VID carried in the fourth packet; or
(4-2) the fourth routing information indicates the first user plane network element to send the fourth packet when the allowed VID list corresponding to the second user plane network element includes a VID carried in the fourth packet.
Based on the fourth routing information, when the VID carried in the fourth packet does not exist in the allowed VID list of the second user plane network element, the first user plane network element does not need to send the fourth packet to the second user plane network element, to reduce unnecessary transmission overheads. In addition, the second user plane network element does not need to handle the fourth packet either, to reduce pressure of handling an Ethernet packet by the second user plane network element. In this embodiment of this application, the fourth routing information may be represented by a fourth PDR or a fourth FAR. For example, when the fourth routing information is represented by the fourth PDR, packet detection information, an outer header removal IE, or another extended IE in the fourth PDR may include the fourth routing information. For example, when the fourth routing information is represented by the fourth FAR, an action IE, an outer header creation IE, a forwarding policy IE, a container for header enrichment IE, or another extended IE in the fourth FAR may include the fourth routing information. S204. The session management network element sends the third routing information and the fourth routing information to the first user plane network element, so that the first user plane network element receives the third routing information and the fourth routing information. Optionally, the session management network element may simultaneously send the third routing information and the fourth routing information, or may separately send the third routing information and the fourth routing information at different moments. Optionally, the third routing information and the fourth routing information may be carried in same signaling, or may be carried in different signaling. The signaling that carries the third routing information and the fourth routing information may be an N4 session creation/modification request message. S205. 
The first user plane network element stores the third routing information and the fourth routing information. S206. The session management network element generates fifth routing information based on the allowed VID list of the first user plane network element. The fifth routing information is used by the second user plane network element to handle a fifth packet. The fifth packet is a packet to be sent by the second user plane network element to the first user plane network element. In other words, the fifth packet is a packet to be sent by the second user plane network element to the first user plane network element through a third port. The third port is a port of the first tunnel on the second user plane network element side. Optionally, the session management network element or the second user plane network element may configure port information of the third port. When the second user plane network element is responsible for configuring the port information of the third port, the session management network element may obtain the port information of the third port from the second user plane network element. In this embodiment of this application, the fifth routing information includes the allowed VID list of the first user plane network element. Optionally, that the fifth routing information is used by the second user plane network element to handle a fifth packet includes a combination of one or more of the following:
(5-1) the fifth routing information indicates the second user plane network element to discard the fifth packet when the allowed VID list of the first user plane network element does not include a VID carried in the fifth packet; or
(5-2) the fifth routing information indicates the second user plane network element to send the fifth packet when the allowed VID list of the first user plane network element includes a VID carried in the fifth packet.
Based on the fifth routing information, when the VID carried in the fifth packet does not exist in the allowed VID list of the first user plane network element, the second user plane network element does not need to send the fifth packet to the first user plane network element, to reduce unnecessary transmission overheads. In addition, the first user plane network element does not need to handle the fifth packet either, to reduce pressure of handling an Ethernet packet by the first user plane network element. In this embodiment of this application, the fifth routing information may be represented by a fifth PDR or a fifth FAR. For example, when the fifth routing information is represented by the fifth PDR, packet detection information, an outer header removal IE, or another extended IE in the fifth PDR may include the fifth routing information. For example, when the fifth routing information is represented by the fifth FAR, an action IE, an outer header creation IE, a forwarding policy IE, a container for header enrichment IE, or another extended IE in the fifth FAR may include the fifth routing information. S207. The session management network element generates sixth routing information based on the allowed VID list of the second user plane network element. The sixth routing information is used by the second user plane network element to handle a sixth packet. The sixth packet is a packet received by the second user plane network element from the first user plane network element. In other words, the sixth packet is a packet received by the second user plane network element from the first user plane network element through the third port. In this embodiment of this application, the sixth routing information includes the allowed VID list of the second user plane network element. 
Optionally, that the sixth routing information is used by the second user plane network element to handle a sixth packet includes a combination of one or more of the following:
(6-1) the sixth routing information indicates the second user plane network element to discard the sixth packet when the allowed VID list of the second user plane network element does not include a VID carried in the sixth packet; or
(6-2) the sixth routing information indicates the second user plane network element to accept the sixth packet when the allowed VID list of the second user plane network element includes a VID carried in the sixth packet.
Based on the sixth routing information, when the VID carried in the sixth packet does not exist in the allowed VID list of the second user plane network element, the second user plane network element does not need to further handle the sixth packet, to reduce pressure of handling an Ethernet packet by the second user plane network element. In this embodiment of this application, the sixth routing information may be represented by a sixth PDR or a sixth FAR. For example, when the sixth routing information is represented by the sixth PDR, packet detection information, an outer header removal IE, or another extended IE in the sixth PDR may include the sixth routing information. For example, when the sixth routing information is represented by the sixth FAR, an action IE, an outer header creation IE, a forwarding policy IE, a container for header enrichment IE, or another extended IE in the sixth FAR may include the sixth routing information. S208. The session management network element sends the fifth routing information and the sixth routing information to the second user plane network element, so that the second user plane network element receives the fifth routing information and the sixth routing information that are sent by the session management network element.
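The third to sixth routing information of steps S202 to S207 all reduce to the same membership test, applied on either side of the first tunnel. A minimal sketch follows; the function and variable names and the integer VIDs are illustrative assumptions, not part of any embodiment:

```python
def vid_allowed(packet_vid, allowed_vids):
    """The single membership test behind (3-1)/(3-2), (4-1)/(4-2),
    (5-1)/(5-2), and (6-1)/(6-2): a packet crosses the first (N19)
    tunnel only if its VID is in the relevant allowed VID list.
    For a received packet the list is the receiving UPF's own allowed
    VID list; for a packet to be sent it is the peer UPF's list."""
    return packet_vid in allowed_vids

upf1_allowed = {1, 2, 3}  # allowed VID list of the first UPF (illustrative)
upf2_allowed = {1, 4}     # allowed VID list of the second UPF (illustrative)

# (4-1): the first UPF drops a VID 2 packet before sending it, which
# saves the transfer and spares the second UPF the handling cost.
print(vid_allowed(2, upf2_allowed))  # False
# (6-2): the second UPF accepts a VID 1 packet received from the first UPF.
print(vid_allowed(1, upf2_allowed))  # True
```

Checking on the send side as well as the receive side is what yields both savings described above: unnecessary transmission overheads and unnecessary handling at the peer.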
Optionally, the session management network element may simultaneously send the fifth routing information and the sixth routing information, or may separately send the fifth routing information and the sixth routing information at different moments. Optionally, the fifth routing information and the sixth routing information may be carried in same signaling, or may be carried in different signaling. The signaling that carries the fifth routing information and the sixth routing information may be an N4 session creation/modification request message. S209. The second user plane network element stores the fifth routing information and the sixth routing information. An order of performing steps S202 to S205 and steps S206 to S209 is not limited in this embodiment of this application. For example, steps S202 to S205 may be performed before steps S206 to S209; steps S206 to S209 may be performed before steps S202 to S205; or steps S202 to S205 and steps S206 to S209 are simultaneously performed. Based on the technical solution shown in FIG. 8, this helps reduce pressure of handling an Ethernet packet by a user plane network element in a 5GLAN, and reduce transmission overheads of the Ethernet packet. FIG. 9 shows a route configuration method according to an embodiment of this application. The method includes the following steps. S301. A session management network element obtains VLAN information of a fourth port. The fourth port is a port used by a first user plane network element to be connected to a data network. For example, the fourth port is an N6 port on the first user plane network element. In other words, the fourth port is a port of an N6 tunnel on the first user plane network element side. The VLAN information of the fourth port includes a combination of one or more of the following: a default VID of the fourth port, an allowed VID list of the fourth port, or an untagged VID list of the fourth port.
The default VID of the fourth port identifies a packet that is received through the fourth port and that does not carry a VID. The allowed VID list of the fourth port records one or more VIDs. A VID recorded in the allowed VID list of the fourth port is the same as a VID carried in a packet allowed to pass through the fourth port. The untagged VID list of the fourth port records one or more VIDs. A VID recorded in the untagged VID list of the fourth port is the same as a VID removed from a packet to be sent through the fourth port. Optionally, the VLAN information of the fourth port may further include another VLAN parameter, for example, an interface mode, a priority of a VID, indication information, and a second action indication. Optionally, the second action indication indicates an operation on a packet sent by the data network and/or a packet to be sent to the data network. The operation indicated by the second action indication includes a combination of one or more of the following:
an operation 2-1: skipping handling a VID carried in the packet sent by the data network;
an operation 2-2: adding the default VID of the fourth port to the packet sent by the data network;
an operation 2-3: skipping handling a VID carried in the packet to be sent to the data network;
an operation 2-4: when the VID carried in the packet to be sent to the data network is the default VID of the fourth port or a VID in the untagged VID list of the fourth port, removing the VID carried in the packet to be sent to the data network;
an operation 2-5: skipping determining, based on the allowed VID list of the fourth port, whether to allow transmission of the packet sent by the data network;
an operation 2-6: skipping determining, based on the allowed VID list of the fourth port, whether to allow transmission of the packet to be sent to the data network;
an operation 2-7: determining, based on the allowed VID list of the fourth port, whether to allow transmission of the packet sent by the data network;
an operation 2-8: determining, based on the allowed VID list of the fourth port, whether to allow transmission of the packet to be sent to the data network;
an operation 2-9: skipping adding an S-TAG to the packet sent by the data network;
an operation 2-10: skipping removing the S-TAG from the packet to be sent to the data network;
an operation 2-11: adding the S-TAG to the packet sent by the data network; or
an operation 2-12: removing the S-TAG from the packet to be sent to the data network.
The operation 2-2 to the operation 2-12 are merely examples, and the second action indication may further indicate another operation. This is not limited herein. Optionally, the second action indication may include a plurality of indications that do not conflict with each other in the operation 2-2 to the operation 2-12. For example, the second action indication may include the operation 2-2 and the operation 2-7. In a possible implementation, the session management network element first obtains VLAN information of the data network and/or locally configured VLAN information. Then, the session management network element determines the VLAN information of the fourth port based on the VLAN information of the data network and/or the locally configured VLAN information. It may be understood that, for a manner of determining the VLAN information of the fourth port, refer to the manner of determining the VLAN information of the first port in the embodiment shown in FIG. 7. Details are not described herein again. The locally configured VLAN information may be preconfigured on the session management network element, or may be preconfigured on the first user plane network element. It may be understood that when the locally configured VLAN information is preconfigured on the first user plane network element, the session management network element may obtain the locally configured VLAN information from the first user plane network element.
In this embodiment of this application, for VLAN parameters that may be included in the VLAN information of the data network and the locally configured VLAN information, refer to VLAN information of a first terminal. Details are not described herein again. Optionally, that the session management network element obtains the VLAN information of the data network includes at least one of the following manners: Manner 1: The session management network element obtains the VLAN information of the data network from an authentication server. For example, the session management network element sends an authorization request to the authentication server. Then, the session management network element receives an authentication authorization response message sent by the authentication server. The authentication authorization response message includes the VLAN information of the data network. Manner 2: The session management network element obtains the VLAN information of the data network from a policy control function network element. For example, the session management network element sends a policy control obtaining request to the policy control function network element. Then, the session management network element receives a policy control obtaining response message sent by the policy control function network element, where the policy control obtaining response message carries a policy and charging control (PCC) rule, and the PCC rule includes the VLAN information of the data network. Optionally, the policy control function network element may directly obtain the VLAN information of the data network from an application function entity. Alternatively, the policy control function network element may obtain the VLAN information of the data network from an application function entity through an exposure function network element. S302. The session management network element generates seventh routing information based on the VLAN information of the fourth port.
The seventh routing information is used by the first user plane network element to handle a seventh packet. The seventh packet is a packet received by the first user plane network element from the data network through the fourth port. In other words, the seventh packet is a packet received by the first user plane network element from a device or a server in the data network through the fourth port. In a possible design, a VLAN parameter included in the seventh routing information is unrelated to the interface mode of the fourth port. In other words, regardless of a type of the fourth port, the seventh routing information includes the default VID of the fourth port and the allowed VID list of the fourth port. It may be understood that, when the fourth port is an access port, the allowed VID list of the fourth port includes only the default VID of the fourth port. Optionally, that the seventh routing information is used by the first user plane network element to handle a seventh packet includes a combination of one or more of the following:
(7-1) the seventh routing information indicates the first user plane network element to add the default VID of the fourth port to the seventh packet when the seventh packet does not carry a VID;
(7-2) the seventh routing information indicates the first user plane network element to accept the seventh packet when the allowed VID list of the fourth port includes a VID carried in the seventh packet; or
(7-3) the seventh routing information indicates the first user plane network element to discard the seventh packet when the allowed VID list of the fourth port does not include a VID carried in the seventh packet.
In another possible design, a VLAN parameter included in the seventh routing information is related to the interface mode of the fourth port.
Therefore, the seventh routing information includes the following two cases: Case 1: When the fourth port is an access port, the seventh routing information includes the default VID of the fourth port, and does not include the allowed VID list of the fourth port. In this case, that the seventh routing information is used by the first user plane network element to handle a seventh packet includes a combination of one or more of the following:
(7-1) the seventh routing information indicates the first user plane network element to add the default VID of the fourth port to the seventh packet when the seventh packet does not carry a VID;
(7-4) the seventh routing information indicates the first user plane network element to accept the seventh packet when a VID carried in the seventh packet is the same as the default VID of the fourth port; or
(7-5) the seventh routing information indicates the first user plane network element to discard the seventh packet when a VID carried in the seventh packet is different from the default VID of the fourth port.
Case 2: When the fourth port is a trunk port or a hybrid port, the seventh routing information includes the default VID of the fourth port and the allowed VID list of the fourth port. In this case, that the seventh routing information is used by the first user plane network element to handle a seventh packet includes at least one of (7-1) to (7-3). Optionally, when the VLAN information of the fourth port includes the second action indication, the session management network element may further determine the seventh routing information based on the second action indication. For example, if the second action indication includes the operation 2-1, the seventh routing information does not include the VLAN parameter. Therefore, the seventh routing information indicates the first user plane network element to accept the seventh packet.
For another example, if the second action indication includes the operation 2-2, the seventh routing information includes at least the default VID of the fourth port. Therefore, that the seventh routing information is used by the first user plane network element to handle a seventh packet includes at least (7-1). For another example, if the second action indication includes the operation 2-5, the seventh routing information does not include the allowed VID list of the fourth port. Therefore, that the seventh routing information is used by the first user plane network element to handle a seventh packet does not include (7-2) or (7-3). For another example, if the second action indication includes the operation 2-7, the seventh routing information includes the allowed VID list of the fourth port. Therefore, that the seventh routing information is used by the first user plane network element to handle a seventh packet includes (7-2) and (7-3). For another example, if the second action indication includes the operation 2-9, the seventh routing information does not include the S-TAG. Therefore, that the seventh routing information is used by the first user plane network element to handle a seventh packet does not include (7-6) in the following. For another example, if the second action indication includes the operation 2-11, the seventh routing information includes the S-TAG. Therefore, that the seventh routing information is used by the first user plane network element to handle a seventh packet includes (7-6) in the following: (7-6) the seventh routing information indicates the first user plane network element to add the S-TAG to the seventh packet. In this embodiment of this application, the seventh routing information may be a seventh PDR or a seventh FAR. 
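The two designs for the seventh routing information can be sketched as a single ingress check. The function name, the mode strings, and the (action, VID) return convention are illustrative assumptions; packet_vid set to None stands for a packet that carries no VID:

```python
def handle_dn_ingress(packet_vid, mode, default_vid, allowed_vids):
    """Ingress handling of a seventh packet per (7-1) to (7-5).
    Returns ("accept", vid) or ("discard", None)."""
    if packet_vid is None:
        # (7-1): tag an untagged packet with the default VID of the port.
        return ("accept", default_vid)
    if mode == "access":
        # Case 1: (7-4) accept the default VID, (7-5) discard anything else.
        if packet_vid == default_vid:
            return ("accept", packet_vid)
        return ("discard", None)
    # Case 2 (trunk or hybrid): (7-2) accept a VID in the allowed VID
    # list, (7-3) discard a VID outside it.
    if packet_vid in allowed_vids:
        return ("accept", packet_vid)
    return ("discard", None)
```

For an access port the allowed VID list degenerates to the default VID, which is why the first design (always checking the allowed VID list) and this mode-dependent design agree.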
For example, when the seventh routing information is the seventh PDR, packet detection information, an outer header removal IE, or another extended IE in the seventh PDR may include one or more VLAN parameters (for example, the allowed VID list of the fourth port and the default VID of the fourth port). For example, when the seventh routing information is the seventh FAR, an action IE, an outer header creation IE, a forwarding policy IE, a container for header enrichment IE, or another extended IE in the seventh FAR may include one or more VLAN parameters. S303. The session management network element sends the seventh routing information to the first user plane network element, so that the first user plane network element receives the seventh routing information sent by the session management network element. S304. The first user plane network element stores the seventh routing information. S305. The session management network element generates eighth routing information based on the VLAN information of the fourth port. The eighth routing information is used by the first user plane network element to handle an eighth packet. The eighth packet is a packet to be sent by the first user plane network element to the data network through the fourth port. In other words, the eighth packet is a packet to be sent by the first user plane network element to the device or the server in the data network through the fourth port. In a possible design, a VLAN parameter included in the eighth routing information is unrelated to the interface mode of the fourth port. In other words, regardless of a type of the fourth port, the eighth routing information includes the allowed VID list of the fourth port and the untagged VID list of the fourth port. It may be understood that, when the fourth port is an access port, both the allowed VID list and the untagged VID list of the fourth port include only the default VID of the fourth port. 
When the fourth port is a trunk port, the untagged VID list of the fourth port includes only the default VID of the fourth port. Optionally, that the eighth routing information is used by the first user plane network element to handle an eighth packet includes a combination of one or more of the following:
(8-1) the eighth routing information indicates the first user plane network element to discard the eighth packet when the allowed VID list of the fourth port does not include a VID carried in the eighth packet;
(8-2) the eighth routing information indicates the first user plane network element to send the eighth packet when the allowed VID list of the fourth port includes a VID carried in the eighth packet and the untagged VID list of the fourth port does not include the VID carried in the eighth packet; or
(8-3) the eighth routing information indicates the first user plane network element to remove a VID carried in the eighth packet, and send the eighth packet without the VID when the allowed VID list of the fourth port includes the VID carried in the eighth packet and the untagged VID list of the fourth port includes the VID carried in the eighth packet.
In another possible design, a VLAN parameter included in the eighth routing information is related to the interface mode of the fourth port. Therefore, the eighth routing information includes the following three cases: Case 1: When the fourth port is an access port, the eighth routing information includes the default VID of the fourth port, and does not include the allowed VID list or the untagged VID list of the fourth port.
In this case, that the eighth routing information is used by the first user plane network element to handle an eighth packet includes a combination of one or more of the following:

(8-4) the eighth routing information indicates the first user plane network element to discard the eighth packet when a VID carried in the eighth packet is different from the default VID of the fourth port; or

(8-5) the eighth routing information indicates the first user plane network element to remove a VID carried in the eighth packet, and send the eighth packet without the VID when the VID carried in the eighth packet is the same as the default VID of the fourth port.

Case 2: When the fourth port is a trunk port, the eighth routing information includes the default VID of the fourth port and the allowed VID list of the fourth port, and does not include the untagged VID list of the fourth port. In this case, that the eighth routing information is used by the first user plane network element to handle an eighth packet includes a combination of one or more of the following:

(8-1) the eighth routing information indicates the first user plane network element to discard the eighth packet when the allowed VID list of the fourth port does not include a VID carried in the eighth packet;

(8-6) the eighth routing information indicates the first user plane network element to send the eighth packet when the allowed VID list of the fourth port includes a VID carried in the eighth packet and the VID carried in the eighth packet is different from the default VID of the fourth port; or

(8-7) the eighth routing information indicates the first user plane network element to remove a VID carried in the eighth packet, and send the eighth packet without the VID when the allowed VID list of the fourth port includes the VID carried in the eighth packet and the VID carried in the eighth packet is the same as the default VID of the fourth port.
Case 3: When the fourth port is a hybrid port, the eighth routing information includes the allowed VID list of the fourth port and the untagged VID list of the fourth port. In this case, that the eighth routing information is used by the first user plane network element to handle an eighth packet includes at least one of (8-1) to (8-3). Optionally, when the VLAN information of the fourth port includes the second action indication, the session management network element may further determine the eighth routing information based on the second action indication. For example, if the second action indication includes the operation 2-2, the eighth routing information does not include the VLAN parameter. Therefore, the eighth routing information indicates the first user plane network element to send the eighth packet. For another example, if the second action indication includes the operation 2-4, the eighth routing information includes the default VID or the untagged VID list of the fourth port. Therefore, that the eighth routing information is used by the first user plane network element to handle an eighth packet includes at least (8-3), (8-5), or (8-7). For another example, if the second action indication includes the operation 2-6, the eighth routing information does not include the allowed VID list of the fourth port. Therefore, that the eighth routing information is used by the first user plane network element to handle an eighth packet does not include (8-2), (8-3), (8-6), or (8-7). For another example, if the second action indication includes the operation 2-8, the eighth routing information includes the allowed VID list of the fourth port. Therefore, that the eighth routing information is used by the first user plane network element to handle an eighth packet includes at least (8-2) and (8-3). Alternatively, that the eighth routing information is used by the first user plane network element to handle an eighth packet includes at least (8-6) and (8-7). 
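The access, trunk, and hybrid cases above amount to a small decision procedure over the fourth port's VLAN parameters. The following is a hedged sketch of rules (8-1) through (8-7); all names are illustrative assumptions, not taken from any 3GPP or PFCP API.

```python
# Hypothetical sketch of the eighth-packet (egress) VLAN handling rules.
# Returns one of 'discard', 'send_tagged', or 'send_untagged'.

def handle_egress(vid, port_mode, default_vid, allowed_vids, untagged_vids):
    """Decide how the first user plane network element handles a packet
    carrying VLAN id `vid` that is about to leave the fourth port."""
    if port_mode == "access":
        # (8-4)/(8-5): only the default VID is accepted; its tag is stripped.
        return "send_untagged" if vid == default_vid else "discard"
    if port_mode == "trunk":
        # (8-1)/(8-6)/(8-7): the allowed list filters; only the default VID
        # leaves the port untagged.
        if vid not in allowed_vids:
            return "discard"
        return "send_untagged" if vid == default_vid else "send_tagged"
    # hybrid: (8-1)-(8-3), driven by both configured lists.
    if vid not in allowed_vids:
        return "discard"
    return "send_untagged" if vid in untagged_vids else "send_tagged"
```

For example, on a trunk port with default VID 10 and allowed list {10, 20}, a packet tagged 20 is sent tagged, while a packet tagged 10 is sent with its tag removed.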
For another example, if the second action indication includes the operation 2-10, the eighth routing information does not include the S-TAG. Therefore, that the eighth routing information is used by the first user plane network element to handle an eighth packet does not include (8-8) in the following. For another example, if the second action indication includes the operation 2-12, the eighth routing information includes the S-TAG. Therefore, that the eighth routing information is used by the first user plane network element to handle an eighth packet includes (8-8) in the following:

(8-8) the eighth routing information indicates the first user plane network element to remove the S-TAG from the eighth packet.

In this embodiment of this application, the eighth routing information may be an eighth PDR or an eighth FAR. For example, when the eighth routing information is the eighth PDR, packet detection information, an outer header removal IE, or another extended IE in the eighth PDR may include one or more VLAN parameters (for example, the allowed VID list of the fourth port and the untagged VID list of the fourth port). For example, when the eighth routing information is the eighth FAR, an action IE, an outer header creation IE, a forwarding policy IE, a container for header enrichment IE, or another extended IE in the eighth FAR may include one or more VLAN parameters.

S306. The session management network element sends the eighth routing information to the first user plane network element, so that the first user plane network element receives the eighth routing information sent by the session management network element.

S307. The first user plane network element stores the eighth routing information.

An order of performing steps S302 to S304 and steps S305 to S307 is not limited in this embodiment of this application.
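Operation (8-8) removes the outer service tag from a double-tagged frame. A minimal byte-level sketch, assuming a standard IEEE 802.1ad frame layout (destination MAC, source MAC, then the 4-byte S-TAG with TPID 0x88A8); this is an illustration, not the network element's actual implementation.

```python
# Illustrative sketch of operation (8-8): stripping an outer 802.1ad
# service tag (S-TAG) from a raw Ethernet frame.

S_TAG_TPID = b"\x88\xa8"  # EtherType that introduces an S-TAG

def strip_s_tag(frame: bytes) -> bytes:
    """Remove the 4-byte S-TAG that follows the two MAC addresses,
    if one is present; otherwise return the frame unchanged."""
    if frame[12:14] == S_TAG_TPID:
        # Drop the TPID (2 bytes) and TCI (2 bytes) of the S-TAG; any
        # inner C-TAG (TPID 0x8100) then becomes the outermost tag.
        return frame[:12] + frame[16:]
    return frame
```

After stripping, an inner customer tag (C-TAG), if present, is what remains visible at offset 12, matching the single-tag handling in rules (8-1) to (8-7).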
For example, steps S302 to S304 may be performed before steps S305 to S307; steps S305 to S307 may be performed before steps S302 to S304; or steps S302 to S304 and steps S305 to S307 are simultaneously performed.

The technical solution shown in FIG. 9 may be applied to a process in which the first terminal or another terminal served by the first user plane network element establishes/modifies a PDU session. Based on the technical solution shown in FIG. 9, the VLAN information of the fourth port may reflect a networking manner and a VLAN configuration of the data network. The seventh routing information and the eighth routing information are generated based on the VLAN information of the fourth port. Therefore, the first user plane network element may handle the seventh packet based on the seventh routing information, and handle the eighth packet based on the eighth routing information, so that a manner in which the first user plane network element handles a packet may adapt to the networking manner and the VLAN configuration of the data network, to ensure forwarding of an Ethernet packet in a scenario in which a 5GLAN communicates with an external LAN.

In this embodiment of this application, that the packet carries the VID means that the packet carries a VLAN tag, where the VLAN tag includes the VID. Correspondingly, the VID carried in the packet is the VID in the VLAN tag carried in the packet. In this embodiment of this application, that the default VID is added to the packet means that a VLAN tag is added to the packet, where the VLAN tag includes the default VID. In this embodiment of this application, that the VID of the packet is removed means that the VLAN tag of the packet is removed.

The foregoing mainly describes the solutions in embodiments of this application from a perspective of interaction between the network elements.
It may be understood that, to implement the foregoing functions, the network elements such as the session management network element and the user plane network element each include a corresponding hardware structure and/or a software module for performing each function. A person skilled in the art should be readily aware that, in combination with the units and algorithm steps in the examples described in embodiments disclosed in this specification, this application can be implemented by hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by hardware driven by computer software depends on the particular applications and design constraints of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that such an implementation goes beyond the scope of this application.

In embodiments of this application, the session management network element and the user plane network element may be divided into functional units based on the foregoing method examples. For example, each functional unit may be obtained through division based on a corresponding function, or two or more functions may be integrated into one processing unit. The integrated unit may be implemented in a form of hardware, or may be implemented in a form of a software functional unit. It should be noted that, in embodiments of this application, division into the units is an example, and is merely logical function division. During actual implementation, another division manner may be used.

When an integrated unit is used, FIG. 10 shows a communication apparatus in the foregoing embodiments. The communication apparatus may include a processing unit 101 and a communication unit 102. In an example, the communication apparatus may be the session management network element mentioned above, or a chip used in the session management network element.
In this case, the processing unit 101 is configured to support the session management network element in performing steps S101, S102, and S105 in FIG. 7, steps S201 to S203, S206, and S207 in FIG. 8, steps S301, S302, and S305 in FIG. 9, and/or another processing operation that needs to be performed by the session management network element in embodiments of this application. The communication unit 102 is configured to support the session management network element in performing steps S103 and S106 in FIG. 7, steps S204 and S208 in FIG. 8, steps S303 and S306 in FIG. 9, and/or another communication operation that needs to be performed by the session management network element in embodiments of this application.

In another example, the communication apparatus may be the first user plane network element mentioned above, or a chip used in the first user plane network element. In this case, the processing unit 101 is configured to support the first user plane network element in performing steps S104 and S107 in FIG. 7, step S205 in FIG. 8, steps S304 and S307 in FIG. 9, and/or another processing operation that needs to be performed by the first user plane network element in embodiments of this application. The communication unit 102 is configured to support the first user plane network element in performing steps S103 and S106 in FIG. 7, step S204 in FIG. 8, steps S303 and S306 in FIG. 9, and/or another communication operation that needs to be performed by the first user plane network element in embodiments of this application.

In another example, the communication apparatus may be the second user plane network element mentioned above, or a chip used in the second user plane network element. In this case, the processing unit 101 may support the second user plane network element in performing step S209 in FIG. 8, and/or another processing operation that needs to be performed by the second user plane network element in embodiments of this application.
The communication unit 102 is configured to support the second user plane network element in performing step S208 in FIG. 8, and/or another communication operation that needs to be performed by the second user plane network element in embodiments of this application.

When the integrated unit in FIG. 10 is implemented in a form of a software functional module and sold or used as an independent product, the integrated unit may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions in embodiments of this application essentially, or the part contributing to the conventional technology, or all or a part of the technical solutions may be implemented in a form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or some of the steps of the methods in embodiments of this application. The storage medium that stores the computer software product includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

A unit in FIG. 10 may also be referred to as a module. For example, a processing unit may be referred to as a processing module, and a communication unit may be referred to as a communication module.

FIG. 11 is a schematic diagram of a hardware structure of a communication apparatus according to an embodiment of this application. The communication apparatus 200 includes one or more processors 201 and a communication interface 203. Optionally, the communication apparatus 200 further includes a memory 204. The memory 204 may include a read-only memory and a random access memory, and provides operation instructions and data for the processor 201.
A part of the memory 204 may further include a non-volatile random access memory (NVRAM). In this embodiment of this application, the communication apparatus 200 invokes the operation instructions (where the operation instructions may be stored in an operating system) stored in the memory 204, to perform a corresponding operation. The processor 201 may also be referred to as a central processing unit (CPU). The processor 201, the communication interface 203, and the memory 204 are coupled together by using a bus system 202. In addition to a data bus, the bus system 202 may further include a power bus, a control bus, a status signal bus, and the like. However, for clarity of description, the various types of buses in FIG. 11 are marked as the bus system 202.

The methods disclosed in embodiments of this application may be applied to the processor 201 or may be implemented by the processor 201. The processor 201 may be an integrated circuit chip and has a signal processing capability. In an implementation process, the steps in the foregoing methods may be implemented by using a hardware integrated logic circuit in the processor 201, or by using instructions in a form of software. The processor 201 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The processor may implement or perform the methods, steps, and logical block diagrams that are disclosed in embodiments of this application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed with reference to embodiments of this application may be directly performed and completed by a hardware decoding processor, or may be performed and completed by using a combination of hardware and software modules in the decoding processor.
The software module may be located in a mature storage medium in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 204, and the processor 201 reads information in the memory 204 and completes the steps in the foregoing methods in combination with hardware of the processor 201. For example, the processing unit 101 in FIG. 10 may be implemented by the processor 201 in FIG. 11, and the communication unit 102 in FIG. 10 may be implemented by the communication interface 203 in FIG. 11.

The foregoing communication unit or communication interface may be an interface circuit or a communication interface in the apparatus for receiving a signal from another apparatus. For example, when the apparatus is implemented as a chip, the communication unit or communication interface is an interface circuit or a communication interface in the chip for receiving a signal from or sending a signal to another chip or apparatus.

In the foregoing embodiment, the instructions that are stored in the memory and that are to be executed by the processor may be implemented in a form of a computer program product. The computer program product may be written into the memory in advance, or may be downloaded in a form of software and installed in the memory. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to embodiments of this application are all or partially generated. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or may be transmitted from a computer-readable storage medium to another computer-readable storage medium.
For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device, for example, a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid-state drive (SSD)), or the like.

Optionally, an embodiment of this application further provides a computer-readable storage medium. The computer-readable storage medium stores computer instructions. When the computer instructions are run on a computer, the computer is enabled to perform the route configuration methods provided in embodiments of this application.

An embodiment of this application further provides a computer program product that includes computer instructions. When the computer program product runs on a computer, the computer is enabled to perform the route configuration methods provided in embodiments of this application.

An embodiment of this application provides a chip. The chip includes a processor. When the processor executes instructions, the chip is enabled to perform the route configuration methods provided in embodiments of this application.

All or some of the foregoing embodiments may be implemented by using software, hardware, firmware, or any combination thereof. When a software program is used to implement embodiments, all or some of embodiments may be implemented in a form of a computer program product. The computer program product includes one or more computer instructions.
When the computer program instructions are loaded and executed on a computer, the procedures or functions according to embodiments of this application are all or partially generated. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or may be transmitted from a computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device, for example, a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid state disk (SSD)), or the like. Although this application is described with reference to embodiments, in a process of implementing this application that claims protection, a person skilled in the art may understand and implement another variation of the disclosed embodiments by viewing the accompanying drawings, disclosed content, and the appended claims. In the claims, “comprising” (comprising) does not exclude another component or another step, and “a” or “one” does not exclude a case of “a plurality of”. A single processor or another unit may implement several functions enumerated in the claims. Some measures are set forth in dependent claims that are different from each other, but this does not mean that these measures cannot be combined to produce a great effect. 
Although this application is described with reference to specific features and embodiments thereof, it is clear that various modifications and combinations may be made to them without departing from the spirit and scope of this application. Correspondingly, this specification and the accompanying drawings are merely example descriptions of this application defined by the appended claims, and are considered to cover any or all modifications, variations, combinations, or equivalents within the scope of this application. Clearly, a person skilled in the art can make various modifications and variations to this application without departing from the spirit and scope of this application. This application is intended to cover these modifications and variations of this application provided that they fall within the scope of the claims of this application and their equivalent technologies.
DETAILED DESCRIPTION Embodiments of the present disclosure will be described in connection with a vehicle system and vehicle network communication. Some automotive electronics systems may support a layered communication architecture (e.g., a software architecture such as AUTOSAR, etc.) for network communication within a vehicle. For example, a layered communication architecture may support routing (e.g., by a PDU router) PDUs among modules located in different layers of the layered communication architecture, using a communication layer. Some examples of a layered communication architecture include a communication stack (also referred to herein as a COM stack, for example, as implemented in software architectures such as AUTOSAR, etc.) including communication services, communication hardware abstraction, and communication drivers. In some cases, PDUs routed among the different modules and layers of the layered communication architecture may include PDU identifiers (also referred to herein as ‘PduIds’), which are labels used by a communication layer to identify interaction layer PDUs (also referred to herein as ‘I-PDUs’) for routing through the communication layer. PDU identifiers, for example, may be static values (e.g., fixed based on a static configuration table) that are processed by the communication layer to determine the correct destinations of the PDUs. Some communication architectures (e.g., AUTOSAR, etc.) may use a multicast transmission from a source upper layer to a lower layer, in which the same data is addressed to multiple modules (e.g., transport modules, interface modules, etc.) in the lower layer. In such cases, for example, for a PDU router to route or transmit the data to the multiple modules, the PDU router must request the data multiple times from a PDU service module (e.g., a unified diagnostic services (UDS) server module, a PDU manager, etc.) at the source upper layer in order to route or transmit the data to the modules at the lower layers. 
FIG. 1 shows a perspective view of a vehicle 100 in accordance with example aspects of the present disclosure. The vehicle 100 may include one or more interior components (e.g., components inside an interior space, or user space, of a vehicle 100, etc.), exterior components (e.g., components outside of the interior space, or user space, of a vehicle 100, etc.), drive systems, control systems, structural components, etc. Although shown in the form of a car, it should be appreciated that the vehicle 100 described herein may include any conveyance or model of a conveyance, where the conveyance was designed for the purpose of moving one or more tangible objects, such as people, animals, cargo, and the like. The term "vehicle" does not require that a conveyance moves or is capable of movement. Typical vehicles may include but are in no way limited to cars, trucks, motorcycles, busses, automobiles, trains, railed conveyances, boats, ships, marine conveyances, submarine conveyances, airplanes, spacecraft, flying machines, human-powered conveyances, and the like.

In some embodiments, the vehicle 100 may include a number of sensors, devices, and/or systems that are capable of assisting in driving operations, e.g., autonomous or semi-autonomous control. Examples of the various sensors and systems may include, but are in no way limited to, one or more of cameras (e.g., independent, stereo, combined image, etc.), infrared (IR) sensors, radio frequency (RF) sensors, ultrasonic sensors (e.g., transducers, transceivers, etc.), RADAR sensors (e.g., object-detection sensors and/or systems), LIDAR (Light Imaging, Detection, And Ranging) systems, odometry sensors and/or devices (e.g., encoders, etc.), orientation sensors (e.g., accelerometers, gyroscopes, magnetometers, etc.), navigation sensors and systems (e.g., GPS, etc.), and other ranging, imaging, and/or object-detecting sensors. The sensors may be disposed in an interior space of the vehicle 100 and/or on an outside of the vehicle 100.
In some embodiments, the sensors and systems may be disposed in one or more portions of a vehicle 100 (e.g., a frame of the vehicle 100, a body panel, a compartment, etc.).

According to example aspects of the present disclosure, a network communications system 101 (e.g., a vehicle communications system) is described herein that may support context-based (e.g., context-dependent) PDU identifier provisioning. The network communications system 101 may include a PDU router 104. The PDU router 104 may support routing of PDUs between various modules (e.g., PDU service modules, transport modules, interface modules, etc.) (also referred to herein as software components) implemented at different layers of a layered communication architecture of the network communications system 101. In some example aspects, the layered communication architecture may be an AUTOSAR layered architecture including a communication stack (e.g., ComStack) capable of facilitating network communication (e.g., vehicle network communication). In some aspects, the layered communication architecture may support communication according to various network protocols (e.g., a controller area network (CAN) communication protocol, a local interconnect network (LIN) communication protocol, Diagnostics over Internet Protocol (DoIP), etc.). For example, layers included in the layered communication architecture may include an application layer, an application abstraction layer (e.g., supportive of a runtime environment (RTE)), and a basic software (BSW) layer. The BSW layer may include a service layer, a controller (e.g., electronic control unit (ECU)) abstraction layer, a microcontroller abstraction layer (MCAL), and device drivers. Aspects of the present disclosure described herein may be applied to any communications system (e.g., other network communications systems, other vehicle communications systems, etc.).
Aspects of an example layered communication architecture and the various layers and modules implemented therein are later described with reference to FIG. 2. According to example aspects of the present disclosure, the network communications system 101 may support generating and/or assigning a PDU identifier 112 as a handle for a PDU at every layer of the layered communication architecture (e.g., every abstraction layer). In some aspects, each PDU identifier 112 may be unique or specific to each module (e.g., PDU service modules, transport modules, interface modules, etc.) of the layered communication architecture. In some examples, different layers of the layered communication architecture may have different PDU identifiers 112 for the same PDU. For example, for a PDU communicated from a PDU service module (e.g., a UDS server module, a PDU manager, etc.) at an upper layer to a transport module at a lower layer, a PDU identifier 112 for processing the PDU at the upper layer (e.g., by the PDU service module) may be different from a PDU identifier 112 for processing the same PDU at the lower layer (e.g., by the transport module). In an example, PDU identifiers 112 used by the PDU service module (e.g., a UDS server module, a PDU manager, etc.) for referencing a set of PDUs may be different from PDU identifiers 112 used by an interface module (e.g., CAN IF) for referencing the same set of PDUs. Accordingly, for example, the network communications system 101 may support processing and communication of PDUs among the various modules (e.g., PDU service modules, transport modules, interface modules, etc.) of the layered communication architecture, without implementing a global PDU identifier for each PDU. That is, for example, the network communications system 101 may support processing and communication of a PDU by the various modules, without the modules using the same PDU identifier 112 for the PDU.
In some aspects, the network communications system 101 may support processing and communication of a PDU by the various modules, in combination with lookup tables (LUTs) 108 (e.g., LUT 108-a through LUT 108-n, where n is an integer value) associated with each module. In some cases, the LUTs 108 may be associated with each layer of the layered communication architecture. In some aspects, the LUTs 108 may be translation tables for translating between PDU identifiers 112 (e.g., local values) applicable to each module and/or layer of the layered communication architecture. Example aspects of the LUTs 108 are later described with reference to the following figures.

Each PDU identifier 112 may be context-based with respect to each module (e.g., PDU service module, transport module, interface module, etc.) of the layered communication architecture. For example, each PDU identifier 112 may be context-dependent upon each module of the layered communication architecture. In an example, as a PDU traverses through (e.g., is processed by, received by, transmitted by, etc.) different modules of the layered communication architecture, the different modules may reference the PDU using different respective PDU identifiers 112. In some example aspects, a PDU may have a 1:1 relation with a lower-level interface module (e.g., CAN IF module 254, LIN IF module 278, etc., later described with reference to FIG. 2) and/or an upper-level module (e.g., a PDU service module). Therefore, example aspects of the network communications system 101 may support directly linking respective PDU identifiers 112 to lower-level modules (e.g., lower-level interface modules) and/or upper-level modules (e.g., PDU service modules). The network communications system 101 may support single cast transmission of a PDU, example aspects of which are later described with reference to the following figures. In some cases, the network communications system 101 may support multicast transmission of a PDU.
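The per-module translation described above can be sketched as a chain of lookup tables, each mapping a module's local identifier to the next module and that module's identifier for the same PDU. All table contents and identifier names below are hypothetical, invented purely to illustrate the idea of context-based identifiers without a global PduId.

```python
# Hypothetical per-module LUTs (in the spirit of LUT 108-a .. 108-n):
# each entry maps a local PDU identifier to (next module, its local id).
LUTS = {
    "pdu_service": {"PdumId_7": ("pdu_router", "RouterId_3")},
    "pdu_router":  {"RouterId_3": ("can_if", "CanIfId_12")},
    "can_if":      {"CanIfId_12": ("can_drv", "CanCtlrId_0")},
}

def route(module, local_id, path=None):
    """Follow the per-module LUTs down the stack, collecting the
    context-specific identifier the PDU carries at each layer."""
    path = (path or []) + [(module, local_id)]
    nxt = LUTS.get(module, {}).get(local_id)
    return path if nxt is None else route(*nxt, path)
```

Calling `route("pdu_service", "PdumId_7")` walks the PDU from the service layer down to the driver, and the collected path shows a different local identifier at every layer, even though all four entries refer to the same PDU.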
In some other cases, the network communications system 101 may support a combination of single cast transmission and multicast transmission of PDUs. The network communications system 101 (e.g., the PDU router 104, other modules of the layered communication architecture described herein, etc.) may be implemented by aspects of a communications system 400 (later described with reference to FIG. 4), a vehicle computing device 604 (later described with reference to FIG. 6), and/or a computer system 700 (later described with reference to FIG. 7). Aspects of the LUTs 108 and/or PDU identifiers 112 may be implemented at a database (e.g., database 618 later described with reference to FIG. 6) and/or a memory (e.g., storage device(s) 720 later described with reference to FIG. 7).

FIG. 2A illustrates an example of a system 200 that supports context-based PDU identifier provisioning in accordance with aspects of the present disclosure. The system 200 may be an example of the network communications system 101 described with reference to FIG. 1. For example, the system 200 may include a layered communication architecture described herein. Example aspects of elements and/or features of the system 200 may include aspects of like elements described with reference to FIG. 1. In an example, the system 200 may include a PDU router 204 and one or more PDU service modules (e.g., UDS 208, PDU manager 212, etc.). The PDU router 204 and the PDU service modules may be included in the service layer of the layered communication architecture. In some aspects, the system 200 may be referred to as a vehicle operating system (VOS), and the PDU router 204 may be referred to as a VOS PDU router. The system 200 may support communication using various communication protocols. For example, the system 200 may support a CAN communication protocol, a LIN communication protocol, and/or DoIP, but is not limited thereto.
With reference toFIG.2A, the system200may include a CAN TP module250, a CAN IF module254, an MCAL CAN driver (DRV)258(also referred to herein as CAN DRV258), a LIN TP module274, a LIN IF module278, a DoIP module294, and a socket adapter (SoAd) module298. In some aspects, the system200may include other transport modules, interface modules, drivers (e.g., communication drivers, bus drivers, etc.) other than those in the examples illustrated herein. The CAN TP module250provides services for segmentation, transmission with flow control, and reassembly of messages. In some aspects, the CAN TP module250may support transmitting and receiving messages that may or may not fit into a single CAN frame. The CAN TP module250may be included in the service layer of the layered communication architecture. The CAN IF module254assists in the hardware abstraction of the system200. In some cases, the CAN IF module254is responsible for services like transmit request, controller mode control, etc. In some aspects, the CAN IF module254carries out all hardware independent tasks related to the flow of data to upper layer modules of a CAN stack. The CAN IF module254may be included in the controller (ECU) abstraction layer of the layered communication architecture. The MCAL CAN DRV258is part of the MCAL layer of the layered communication architecture. The MCAL CAN DRV258provides hardware access to upper layer services and a hardware-independent interface to the upper layers. The MCAL CAN DRV258may be included in the microcontroller abstraction layer of the layered communication architecture. The LIN TP module274may support the transport of diagnostic service requests and responses. The LIN TP module274may support services for segmentation, transmission with flow control, and reassembly of messages. The LIN TP module274may be included in the service layer of the layered communication architecture. 
The LIN IF module278connects to the PDU router204and/or alternative modules above the LIN IF module278for transmission and reception of frames. In some aspects, the LIN IF module278is responsible for the copying of the data of the frames for reception and transmission. The LIN IF module278may be included in the controller (ECU) abstraction layer of the layered communication architecture. DoIP facilitates the use of automotive diagnostic services exposed through UDS over TCP/IP on an Ethernet network. The DoIP module294may be an operating system independent software module which may support transmission of communications amongst components (e.g., devices, applications, etc.) internal and/or external to the system200, using IP. For example, the system200may support communications using DoIP, including an Ethernet IF (ETH IF) module (not illustrated), a TCP/IP module (e.g., a socket based TCP/IP stack) (not illustrated), the SoAd module298, and the DoIP module294. The SoAd module298creates an interface between the PDU router204and the TCP/IP module (e.g., a socket based TCP/IP stack). The SoAd module298may map I-PDU identifiers to socket connections and vice versa. In some cases, the SoAd module298may receive UDP messages or TCP streams and convert the same into PDUs compatible with the system200(e.g., compatible with the PDU router204, other modules of the system200, etc.). Example aspects of the present disclosure are described with reference toFIG.2AthroughFIG.2C. For example,FIG.2Billustrates an example201supported by the system200of generating, routing, and transmitting a PDU (e.g., a message including the PDU).FIG.2Cillustrates an example202supported by the system200of receiving a PDU (e.g., a message including the PDU). The system200may support generating and/or assigning a PDU identifier (e.g., ‘PdumId’, ‘SignalCollectId’, ‘CanIfId’, ‘CanCtlrId’, ‘FwrdCanIfId’, ‘DoipId’, ‘LinId’, etc. 
illustrated atFIG.2BandFIG.2C) as a handle for a PDU at every layer of the layered communication architecture (e.g., every abstraction layer). In some aspects, each PDU identifier may be unique or specific to each module (e.g., PDU service modules, transport modules, interface modules, etc.) of the layered communication architecture. For example, each PDU identifier may be unique or specific to each of the UDS server module208, the PDU manager212, the PDU router204, the CAN TP module250, the CAN IF module254, etc. In some examples, different layers of the layered communication architecture may have different PDU identifiers for the same PDU. For example, for a PDU communicated from a PDU service module (e.g., UDS server module208, PDU manager212, etc.) at an upper layer to a transport module (e.g., CAN TP module250, LIN TP module274, etc.) at a lower layer, a PDU identifier for processing the PDU at the upper layer (e.g., by the PDU service module) may be different from a PDU identifier for processing the same PDU at the lower layer (e.g., by the transport module). In an example, PDU identifiers used by a PDU service module (e.g., UDS server module208, PDU manager212, etc.) for referencing a set of PDUs may be different from PDU identifiers used by a transport module (e.g., CAN TP module250, LIN TP module274, etc.) for referring to the same set of PDUs. Additionally, or alternatively, PDU identifiers used by the PDU service module and/or the transport module may be different from PDU identifiers used by an interface module (e.g., CAN IF module254, LIN IF module278, etc.) for referring to the same set of PDUs. Accordingly, for example, the system200may support processing and communication of PDUs among the various modules (e.g., PDU service modules, transport modules, interface modules, etc.) of the layered communication architecture, without implementing a global PDU identifier for each PDU. 
That is, for example, the system200may support processing and communication of a PDU by the various modules, without using the same PDU identifier for the PDU. In some aspects, the system200may support processing and communication of a PDU by the various modules, in combination with LUTs associated with each module. In some cases, the LUTs may be associated with each layer of the layered communication architecture. In some aspects, the LUTs may be translation tables for translating between PDU identifiers (e.g., local values) applicable to each module and/or layer of the layered communication architecture. For example, the system200may include a LUT205associated with the PDU router204, a LUT(s)216associated with the UDS server module208, and a LUT(s)220associated with the PDU manager212. In some aspects, the system200may include a LUT(s)230associated with CAN transport and interface modules (e.g., CAN TP module250, CAN IF module254, etc.), and a LUT(s)240associated with the MCAL CAN DRV258. In some examples, the system200may include a LUT(s)260associated with LIN transport and interface modules (e.g., LIN TP module274, LIN IF module278, etc.). In some other examples, the system200may include a LUT(s)280associated with the DoIP module294. Each PDU identifier may be context-based with respect to each module (e.g., PDU service module, transport module, interface module, etc.) of the layered communication architecture. For example, each PDU identifier may be context-dependent upon each module of the layered communication architecture. In an example, as a PDU traverses through (e.g., is processed by, received by, transmitted by, etc.) different modules of the layered communication architecture, the different modules may reference the PDU using different respective PDU identifiers. In some aspects, a PDU may have a 1:1 relation with a lower-level interface module (e.g., CAN TP module250, CAN IF module254, LIN TP module274, LIN IF module278, DoIP module294, etc.) 
and/or an upper-level module (e.g., a PDU service module such as UDS server module208or PDU manager212). Therefore, example aspects of the system200may support directly linking respective PDU identifiers to lower-level modules (e.g., lower-level interface modules) and/or upper-level modules (e.g., PDU service modules). Each module (e.g., PDU service module, transport module, interface module, etc.) of the system200may use a PDU identifier corresponding to the module as a handle for looking up or obtaining secondary information for processing or working on the PDU at the layer of abstraction at which the module is implemented. In an example, with reference toFIG.2Band ‘Example A’, the PDU manager212may use PDU identifier ‘PdumId’ (e.g., 0x2241), PDU identifier ‘SignalCollectId’ (e.g., 0x12), and a LUT220-ato determine how to multiplex and demultiplex signals into a PDU. In some aspects, the PDU manager212may use PDU identifier ‘SignalCollectId’ (e.g., 0x12) and LUT220-bto determine how to multiplex and demultiplex the signals into a PDU. The PDU router204may use the PDU identifier ‘PdumId’ (e.g., 0x2241) to determine a transmission interface (e.g., CAN IF module254, LIN IF module278, DoIP module294, etc.) to which to send a PDU. For example, based on a PDU identifier ‘PdumId’ having a value of ‘0x2241’ and the LUT205, the PDU router204may identify that the PDU identifier ‘PdumId’ (e.g., 0x2241) corresponds to a CAN interface type (e.g., ‘IfType’ equal to ‘CAN’). In another example, using the PDU identifier ‘PdumId’, the PDU router204may determine which buffer to route an incoming PDU to. Additionally, or alternatively, with reference to ‘Example B’ ofFIG.2B, based on a PDU identifier ‘PdumId’ having a value of ‘0x3111’ and the LUT205, the PDU router204may identify that the PDU identifier ‘PdumId’ (e.g., 0x3111) corresponds to a LIN interface type (e.g., ‘IfType’ equal to ‘LIN’). 
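For illustration only, the router's interface-type lookup described above may be sketched as follows, using the example values from ‘Example A’ and ‘Example B’ (the dictionary layout and function name are assumptions, not the disclosed implementation):

```python
# Sketch of a LUT205-style lookup: given a service-layer 'PdumId',
# return the interface type ('IfType') to which the PDU is sent.
# Values mirror 'Example A' and 'Example B' of the text.

PDUR_TX_IF_LUT = {
    0x2241: "CAN",   # Example A: routed toward the CAN interface module
    0x3111: "LIN",   # Example B: routed toward the LIN interface module
}

def route_pdu(pdum_id):
    """Return the transmission interface type for a 'PdumId'."""
    return PDUR_TX_IF_LUT[pdum_id]
```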
The CAN IF module254may use PDU identifier ‘CanIfId’ (e.g., 0x2303) and LUT240-ato determine a controller identifier ‘CanCtlrId’ (e.g., 0x02) for transmitting the PDU. In some aspects, the CAN IF module254may use the PDU identifier ‘CanIfId’ (e.g., 0x2303) and LUT240-bto determine a message identifier ‘MsgId’ (e.g., 0x300) for transmitting the PDU. Accordingly, for example, the PDU identifier ‘CanIfId’ (e.g., 0x2303) can be used to look up the MsgId and CAN bus for transmitting the PDU. According to example aspects of the present disclosure, the PDU identifier of a PDU may change between layers. For example, the PDU identifier of the PDU may change according to the domain of a module (e.g., PDU manager212, PDU router204, CAN TP module250, CAN IF module254, etc.) processing the PDU at a given layer, as illustrated in the examples ofFIGS.2B and2C. Accordingly, for example, when routing a PDU between different layers (e.g., different domains), the PDU router204may support features for (e.g., gains responsibility of) translating the PDU identifier corresponding to the PDU between the layers. In some aspects, for communication between layers, the PDU router204may reference a LUT (e.g., a translation table) associated with the layers. For example, the PDU router204may reference a LUT that includes PDU identifiers for the local values implemented at both layers. In an example, when routing a PDU from the PDU manager212(e.g., located at an upper layer) to the CAN IF module254(e.g., located at a controller (ECU) abstraction layer), the PDU router204may reference LUT230-a(‘CAN TX CanIfId LUT’) to translate the PDU identifier ‘PdumId’ (‘0x2241’) to a PDU identifier ‘CanIfId’ (0x2303). 
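The cross-layer identifier translation described above may be sketched, for illustration, as follows; the table contents follow the example values in the text, while the data structures and function name are assumptions:

```python
# Sketch of the router's cross-layer translation: when a PDU crosses
# from the service layer to an interface layer, the router consults a
# translation LUT (LUT230-a / LUT260-a style) to swap the local identifier.

CAN_TX_CANIF_LUT = {0x2241: 0x2303}   # 'PdumId' -> 'CanIfId' (Example A)
LIN_TX_LINID_LUT = {0x3111: 0x55}     # 'PdumId' -> 'LinId'   (Example B)

def translate(pdum_id, target_if):
    """Translate a service-layer 'PdumId' to the target interface's local id."""
    table = CAN_TX_CANIF_LUT if target_if == "CAN" else LIN_TX_LINID_LUT
    return table[pdum_id]
```

Each layer thus sees only identifiers in its own domain; the router alone carries the responsibility of translating between domains.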
Additionally, or alternatively, with reference to ‘Example B’ ofFIG.2B, when routing a PDU from the PDU manager212(e.g., located at an upper layer) to the LIN IF module278(e.g., located at a controller (ECU) abstraction layer), the PDU router204may reference LUT260-a(‘LIN TX LinId LUT’) to translate a PDU identifier ‘PdumId’ (‘0x3111’) to a PDU identifier ‘LinId’ (0x55). From an architectural perspective, aspects of the present disclosure support features in which all static PDU information (e.g., PDU data) is accessible by each module (e.g., PDU router204, UDS server module208, PDU manager212, CAN IF module254, etc.) based on a PDU identifier for the module. In some aspects, the PDU identifier may be specific to the layer in which the module is located. In an example, when receiving a PDU, a module (e.g., layer in which the module is located) receives a PDU identifier as a handle. The module may look up whatever it needs, using the PDU identifier and a LUT specific to the module. In some aspects, the module may extract data from the PDU using an internal application programming interface (API). For example, with reference to ‘Example A’ ofFIG.2B, the CAN IF module254may be provided with a PDU and a PDU identifier ‘CanIfId’ (0x2303) to transmit. Using the PDU identifier ‘CanIfId’ (0x2303) and LUT240-a(‘CAN Controller TX LUT’), the CAN IF module254may determine that the PDU identifier ‘CanIfId’ (0x2303) is mapped to a PDU identifier ‘CanCtlrId’ (0x02). Accordingly, for example, the CAN IF module254may determine that the CAN IF module254is to provide the PDU to a ‘CAN Controller 02’. The CAN IF module254may provide the PDU and the PDU identifier ‘CanIfId’ (0x2303) to ‘CAN Controller 02’. Using the PDU identifier ‘CanIfId’ (0x2303) and LUT240-c(‘CAN Controller 0x02 TX LUT’), the ‘CAN Controller 02’ may determine that the PDU identifier ‘CanIfId’ (0x2303) is mapped to a PDU identifier ‘MsgId’ (0x0300) (also referred to herein as a message identifier). 
Accordingly, for example, the ‘CAN Controller 02’ may determine that the ‘CAN Controller 02’ is to generate a message including the PDU identifier ‘MsgId’ (0x0300). The message may include a set of signals (e.g., Signal1, Signal2, Signal3, etc.) that have been multiplexed into the PDU by the PDU manager212. According to example aspects of the present disclosure, when receiving a PDU, a module (e.g., PDU router204, UDS server module208, PDU manager212, CAN IF module254, etc.) may access PDU information (e.g., PDU data) specific to the module using a PDU identifier and a LUT that are specific to the module. The module (or layer in which the module is implemented) may access the PDU information, without knowing details associated with storage at the LUT. For example, the module may access the PDU information, without knowing details associated with the efficiency of the table look-up, associated hashing algorithms, etc. In some aspects, the system200may support a system-wide PDU generation database (also referred to herein as a global PDU database). In an example, the system-wide PDU generation database may store information (e.g., data) from CAN database files (DBCs), LIN Description Files (LDFs), and input configuration files. In some aspects, the system-wide PDU generation database may include PDU identifiers referring to the same PDU for each abstraction level of the system200(e.g., each layer of the layered communication architecture described herein). The system-wide PDU generation database may be accessible to any generator (e.g., device and/or application associated with the system200) capable of generating tables (e.g., LUTs, translation tables, etc.) based on PDU identifiers. That is, for example, the system-wide PDU generation database may serve as a common reference point for generating the tables (e.g., LUTs, translation tables, etc.) 
described herein, based on which modules (e.g., PDU router204, UDS server module208, PDU manager212, CAN IF module254, etc.) of the system200can translate PDU identifiers from one level (e.g., layer) to another. Aspects of the present disclosure may support LUTs having improved efficiency with respect to data storage and data access compared to other LUT implementations in which a LUT includes a single global PDU identifier generated for each PDU. For example, aspects of the present disclosure support PDU identifiers selected at each level of the system200(e.g., each level of the layered communication architecture) such that look-ups at each level and/or at each LUT have improved lookup efficiency (e.g., reduced lookup durations) compared to other LUT implementations. In some aspects, the LUTs of the present disclosure may have improved lookup efficiency in any programming language (e.g., C, C++, etc.) compared to other LUT implementations. According to example aspects of the present disclosure, when assigning PDU identifiers at each level of a layered communication architecture, assignment strategies that translate well into each other may be implemented. In some aspects, the PDU identifiers may be any random number. In some other aspects, the PDU identifiers may be generated and/or assigned such that the PDU identifiers follow a pattern (e.g., a naming pattern, a numbering pattern, etc.). In some other aspects, the PDU identifiers may be generated and/or assigned according to a sequential order. Aspects of generating the PDU identifiers and LUTs described herein may include at least partially deriving and/or assigning PDU identifiers such that the LUTs described herein (e.g., LUT205, LUT(s)216, LUT(s)220, LUT(s)230, LUT(s)240, LUT(s)260, LUT(s)280, etc.) have a higher density and smaller size compared to other LUT implementations for PDU identifiers. 
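As an illustrative sketch of such a generator (the field names, table layouts, and row values below are hypothetical), per-level translation LUTs may be derived from a single system-wide PDU database that records each PDU's identifier at every abstraction level:

```python
# Hypothetical generator sketch: one system-wide database row per PDU,
# holding that PDU's identifier at each level; per-module translation
# LUTs are derived from this single common reference point.

GLOBAL_PDU_DB = [
    {"PdumId": 0x2241, "CanIfId": 0x2303, "MsgId": 0x300},
    {"PdumId": 0x2242, "CanIfId": 0x2304, "MsgId": 0x304},
]

def generate_lut(from_level, to_level):
    """Derive one translation table mapping identifiers between two levels."""
    return {row[from_level]: row[to_level] for row in GLOBAL_PDU_DB}

can_tx_lut = generate_lut("PdumId", "CanIfId")   # e.g., a LUT230-a-style table
```

Because every table is derived from the same database, identifiers stay mutually consistent across levels without any module holding a global identifier.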
For example, some other LUT implementations may include a single, sparsely populated, and exhaustive LUT (e.g., including a single global PDU identifier generated for each PDU), which may result in reduced lookup efficiency compared to the LUTs described with reference to the present disclosure. In some aspects, the PDU identifiers described herein may be static at the time of generation by the system200, and the LUTs described herein (e.g., LUT205, LUT(s)216, LUT(s)220, LUT(s)230, LUT(s)240, LUT(s)260, LUT(s)280, etc. described with reference toFIGS.2A through2C) that include the PDU identifiers may be unmodifiable. For example, aspects of the present disclosure may support LUTs in which PDU identifiers may not be added or removed. Accordingly, for example, aspects of the present disclosure support LUTs (e.g., any or all of LUT205, LUT(s)216, LUT(s)220, LUT(s)230, LUT(s)240, LUT(s)260, LUT(s)280, etc. described with reference toFIGS.2A through2C) that are perfect non-colliding O(1) lookup tables (e.g., expected worst case is O(1)). For example, any or all of the LUTs of the system200may be a complete and full hash that is free from gaps (e.g., data value gaps in the LUTs) and/or free from collision. For example, with respect to collisions, any or all of the LUTs of the system200may be free from instances in which two or more elements of the LUT are hashed or mapped to the same value (e.g., inserted at the exact same place in an array). That is, for example, data structures of the LUTs described herein may support perfect, non-colliding O(1) lookup tables that are free from gaps and/or collision. Example aspects of the LUTs described herein may protect against illegal PDU identifiers (e.g., instances of non-covered PDU identifiers) as well as colliding legal PDU identifiers (e.g., instances of a covered PDU identifier that collides with another covered PDU identifier). In some aspects, the system200may support reporting of illegal lookups. 
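One way such a gap-free, non-colliding O(1) table might be realized, purely as a sketch under the assumption that identifiers at a level are assigned sequentially from a base value, is a dense array indexed by offset, with out-of-range (illegal) identifiers reported:

```python
# Sketch of a dense, perfect O(1) lookup table: if identifiers are
# assigned sequentially from a base, a lookup is one subtraction and one
# array index, with no hashing, no gaps, and no possibility of collision.
# Identifier values here are illustrative.

class DenseLut:
    def __init__(self, base, values):
        self._base = base
        self._values = values  # gap-free: one entry per consecutive id

    def lookup(self, pdu_id):
        index = pdu_id - self._base
        if not 0 <= index < len(self._values):
            # protection against illegal (non-covered) PDU identifiers
            raise KeyError(f"illegal PDU identifier: {pdu_id:#x}")
        return self._values[index]

# 'CanIfId' 0x2303..0x2305 mapped to message identifiers, densely packed.
lut = DenseLut(0x2303, [0x300, 0x304, 0x305])
```

Because the table is static at generation time, its contents can be frozen (e.g., emitted as a const array in C) and never modified at runtime.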
Aspects of the system-wide PDU generation database may be implemented, for example, by aspects of a database (e.g., database618later described with reference toFIG.6) and/or a memory (e.g., storage device(s)720later described with reference toFIG.7). Aspects of the generator may be implemented by aspects of a communications system400(e.g., a computing device468later described with reference toFIG.4), vehicle computing device604(later described with reference toFIG.6), and/or a computer system700(e.g., a CPU(s)708later described with reference toFIG.7). FIG.3Aillustrates an example of a process flow300that supports context-based PDU identifier provisioning in accordance with aspects of the present disclosure. In some examples, process flow300may implement aspects of the vehicle100, the network communications system101, and the system200described with reference toFIG.1andFIGS.2A through2C. In the following description of the process flow300, the operations may be performed in a different order than the order shown, or the operations may be performed in different orders or at different times. Certain operations may also be left out of the process flow300, or other operations may be added to the process flow300. It is to be understood that while various modules (e.g., PDU router204, a PDU manager212, CAN TP module250, CAN IF module254, etc.) of a layered communication architecture of the system200are described as performing a number of the operations of process flow300, any device (e.g., a vehicle100, a vehicle computing device604, a computer system700) or module (e.g., any PDU router, PDU service module, transport module, interface module, etc.) may perform the operations shown. The process flow300is described with reference to ‘Example A’ and ‘Example B’ ofFIG.2B, but is not limited thereto. 
At305, a PDU service module (e.g., PDU manager212, UDS server module208) included in a first layer (e.g., a communication layer, a service layer) of the layered communication architecture may identify an event associated with transmitting a PDU. At310, the PDU service module may identify a first PDU identifier (e.g., ‘PdumId’ (0x2241)). In some aspects, the first PDU identifier is associated with the event. At315, the PDU service module may determine, based on the first PDU identifier and using a LUT (e.g., LUT220-a(‘PDUM TX LUT’)) associated with servicing (e.g., managing) the PDU, a signal identifier (e.g., ‘SignalCollectId’ (0x12)) associated with a set of signals (e.g., Signal1through Signal3) to include in the PDU. At320, the PDU service module may generate, based on the LUT associated with servicing (e.g., managing) the PDU, the PDU including the set of signals. In some aspects, generating the PDU may be based on the first PDU identifier and a second LUT (e.g., LUT220-b(‘SignalCollectId LUT’)) associated with generating the PDU. In some aspects, the second LUT includes the set of signals. At325, the PDU service module may transmit the PDU and the first PDU identifier to a router module (e.g., PDU router204). The router module may be included in the first layer of the layered communication architecture. In some aspects, transmitting the PDU and the first PDU identifier to the router module may be associated with a first temporal instance. At330, the router module may determine, based on the first PDU identifier and using a LUT (e.g., LUT205(‘PDUR TX IF LUT’)) associated with routing the PDU, a communication bus (e.g., a CAN bus, in ‘Example A’) and an interface module (e.g., CAN IF module254, in ‘Example A’) associated with the communication bus. 
At335, the router module may determine, based on the first PDU identifier and using a LUT (e.g., LUT230-a(‘CAN TX CanIfId LUT’)) associated with the interface module, a second PDU identifier (e.g., ‘CanIfId’ (0x2303) in ‘Example A’) associated with the interface module. At340, the router module may transmit the PDU and the second PDU identifier to the interface module. In some aspects, transmitting the PDU and the second PDU identifier to the interface module may be associated with a second temporal instance. In some aspects, transmitting the PDU and the second PDU identifier to the interface module may include transmitting the PDU and the second PDU identifier to a transport module (e.g., CAN TP module250in ‘Example A’, LIN TP module274in ‘Example B’) associated with the communication bus. In some aspects, the transport module forwards the PDU and the second PDU identifier to the interface module based on the second PDU identifier. At345, the interface module may determine, based on the second PDU identifier and using a LUT (e.g., LUT240-a(‘CAN Controller TX LUT’) in ‘Example A’) associated with the interface module, a controller identifier (e.g., ‘CanCtlrId’ (0x02) in ‘Example A’) indicating a controller module (e.g., CAN controller 0x02 in ‘Example A’) associated with the communication bus. At350, the interface module may determine, based on the second PDU identifier and using a LUT (e.g., LUT240-c(‘CAN Controller 0x02 TX LUT’)) associated with the controller module, a message identifier (e.g., ‘MsgId’ (0x300) in ‘Example A’). At355, the controller module may generate a message including the message identifier and the set of signals. At360, a device driver (e.g., MCAL CAN DRV258in ‘Example A’) associated with the communication bus may transmit the message across the communication bus. 
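For illustration, the transmit path at 305 through 360 under ‘Example A’ may be sketched end to end as follows; all tables, names, and the returned message format are assumptions used only to trace the lookups:

```python
# End-to-end sketch of the transmit path of process flow 300, 'Example A'.
# Each step consults only the LUT for its own layer; nothing here is the
# patented implementation, just the sequence of example lookups.

PDUM_TX_LUT = {0x2241: 0x12}                    # 'PdumId' -> 'SignalCollectId'
SIGNAL_COLLECT_LUT = {0x12: ["Signal1", "Signal2", "Signal3"]}
PDUR_TX_IF_LUT = {0x2241: "CAN"}                # 'PdumId' -> 'IfType'
CAN_TX_CANIF_LUT = {0x2241: 0x2303}             # 'PdumId' -> 'CanIfId'
CAN_CONTROLLER_TX_LUT = {0x2303: 0x02}          # 'CanIfId' -> 'CanCtlrId'
CAN_CONTROLLER_02_TX_LUT = {0x2303: 0x300}      # 'CanIfId' -> 'MsgId'

def transmit(pdum_id):
    signals = SIGNAL_COLLECT_LUT[PDUM_TX_LUT[pdum_id]]  # 310-320: build PDU
    assert PDUR_TX_IF_LUT[pdum_id] == "CAN"             # 330: pick interface
    can_if_id = CAN_TX_CANIF_LUT[pdum_id]               # 335: translate id
    ctlr_id = CAN_CONTROLLER_TX_LUT[can_if_id]          # 345: pick controller
    msg_id = CAN_CONTROLLER_02_TX_LUT[can_if_id]        # 350: message id
    return {"controller": ctlr_id, "MsgId": msg_id, "signals": signals}
```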
Additionally, or alternatively, in some examples not illustrated, the router module may determine, based on a third PDU identifier (e.g., ‘PdumId’ (0x3111) in ‘Example B’) and using the LUT associated with routing the PDU, a second communication bus (e.g., a LIN bus, in ‘Example B’) and a second interface module (e.g., LIN IF module278, in ‘Example B’) associated with the second communication bus. The router module may determine, based on the third PDU identifier and using a LUT (e.g., LUT260-a(‘LIN TX LinId LUT’)) associated with the second interface module, a fourth PDU identifier (e.g., ‘LinId’ (0x55) in ‘Example B’) associated with the second interface module. The router module may transmit the PDU and the fourth PDU identifier to the second interface module. In some aspects, transmitting the PDU and the second PDU identifier to the interface module and transmitting the PDU and the fourth PDU identifier to the second interface module are based on a same data request by the router module. In some aspects, the system200may include a database including a set of LUTs (e.g., LUT205, LUT(s)216, LUT(s)220, LUT(s)230, LUT(s)240, LUT(s)260, LUT(s)280, etc.). In some aspects, each LUT of the set of LUTs is associated with a respective layer of a set of layers of the layered communication architecture, a respective module of a set of modules of the layered communication architecture, or both. In some aspects, the set of layers may include at least the first layer, and the set of modules may include at least the router module and the interface module. In some aspects, at least one LUT of the set of LUTs may include a non-colliding O(1) lookup table. 
In an example, the set of LUTs may include at least one of: the LUT (e.g., LUT220-a(‘PDUM TX LUT’)) associated with servicing the PDU; the LUT (e.g., LUT205(‘PDUR TX IF LUT’)) associated with routing the PDU; the LUT (e.g., LUT240-a(‘CAN Controller TX LUT’), in ‘Example A’) associated with the interface module; and the LUT (e.g., LUT240-c(‘CAN Controller 0x02 TX LUT’), in ‘Example A’) associated with the controller module (e.g., CAN controller 0x02, in ‘Example A’) associated with the communication bus. In some aspects, the database may include multiple sets of PDU identifiers. In an example, each set of PDU identifiers is associated with a respective LUT of the set of LUTs, a respective layer of the layered communication architecture, a respective module of the layered communication architecture, or a combination thereof. In some aspects, the multiple sets of PDU identifiers may include at least one of: a first set of PDU identifiers associated with multiplexing signals into an outgoing PDU, demultiplexing signals of an incoming PDU, or both; a second set of PDU identifiers associated with routing the outgoing PDU, the incoming PDU, or both; a third set of PDU identifiers associated with determining a transmission interface to which to transmit the outgoing PDU, determining a buffer to which to route the incoming PDU, or both; and a fourth set of PDU identifiers associated with determining a message identifier corresponding to the outgoing PDU, the incoming PDU, or both. In some aspects, the fourth set of PDU identifiers is associated with determining a communication bus corresponding to the outgoing PDU, the incoming PDU, or both. FIG.3Billustrates an example of a process flow301that supports context-based PDU identifier provisioning in accordance with aspects of the present disclosure. 
In some examples, process flow301may implement aspects of the vehicle100, the network communications system101, the system200, and the process flow300described with reference toFIG.1,FIGS.2A through2C, andFIG.3A. In the following description of the process flow301, the operations may be performed in a different order than the order shown, or the operations may be performed in different orders or at different times. Certain operations may also be left out of the process flow301, or other operations may be added to the process flow301. It is to be understood that while various modules (e.g., PDU router204, a PDU manager212, CAN TP module250, CAN IF module254, etc.) of a layered communication architecture of the system200are described as performing a number of the operations of process flow301, any device (e.g., a vehicle100, a vehicle computing device604, a computer system700) or module (e.g., any PDU router, PDU service module, transport module, interface module, etc.) may perform the operations shown. The process flow301is described with reference to ‘Example C’ and ‘Example D’ ofFIG.2C, but is not limited thereto. At365, a device driver (e.g., CAN DRV258) associated with a communication bus (e.g., a CAN bus, in ‘Example C’) may receive a message over the communication bus. In some aspects, the message may include a PDU. At370, a controller module (e.g., CAN controller 0x01, in ‘Example C’) may detect a message identifier (e.g., ‘MsgId’ (0x304)) included in the message. At375, an interface module (e.g., CAN IF module254) associated with the communication bus may determine, based on the message identifier and using a LUT (e.g., LUT240-d(‘CAN Controller 0x01 RX LUT’)) associated with the controller module, a first PDU identifier (e.g., ‘CanIfId’ (0x2304), in ‘Example C’) associated with the interface module. 
At380, a router module (e.g., PDU router204) included in a first layer (e.g., communication layer, service layer) of the layered communication architecture may determine, based on the first PDU identifier and using a LUT (e.g., LUT230-b(‘CAN IF RX LUT’), in ‘Example C’) associated with the interface module, a PDU service module (e.g., PDU manager212, in ‘Example C’) included in the first layer of the layered communication architecture. At385, the router module may determine, based on the first PDU identifier and using a second LUT (e.g., LUT230-c(‘CAN RX PDUM ID LUT’), in ‘Example C’) associated with the interface module, a second PDU identifier (e.g., ‘PdumId’ (0x6000)) associated with the PDU service module. At390, the router module may transmit the PDU and the second PDU identifier to the PDU service module. At395, the PDU service module may determine, based on the second PDU identifier and using a LUT (e.g., LUT220-c(‘PDUM RX LUT’) and/or LUT220-d(‘SignalCollectId LUT’), in ‘Example C’) associated with servicing (e.g., managing) the PDU, a signal identifier (e.g., ‘SignalCollectId’ (0x1F), in ‘Example C’) associated with a set of signals (e.g., Signal A through Signal C, in ‘Example C’) to extract from the PDU. At397, the PDU service module may extract, based on the signal identifier and using the LUT associated with servicing (e.g., managing) the PDU, the set of signals. FIG.4is a block diagram illustrating an example of a communication environment of the vehicle100in accordance with aspects of the present disclosure. The communication system400may include one or more driving vehicle sensors and systems404, sensor processors430, sensor data memory434, vehicle control system438, communications subsystem450, control data464, computing devices468, display devices472, and other components474that may be associated with a vehicle100. These associated components may be electrically and/or communicatively coupled to one another via at least one bus460. 
In some embodiments, the one or more associated components may send and/or receive signals across a communication network452to at least one of a navigation source456A, a control source456B, or some other entity456N. In accordance with at least some embodiments of the present disclosure, the communication network452may comprise any type of known communication medium or collection of communication media and may use any type of protocols, such as SIP, TCP/IP, SNA, IPX, AppleTalk, and the like, to transport messages between endpoints. The communication network452may include wired and/or wireless communication technologies. The Internet is an example of the communication network452that constitutes an Internet Protocol (IP) network consisting of many computers, computing networks, and other communication devices located all over the world, which are connected through many telephone systems and other means. Other examples of the communication network452include, without limitation, a standard Plain Old Telephone System (POTS), an Integrated Services Digital Network (ISDN), the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), such as an Ethernet network, a Token-Ring network and/or the like, a Wide Area Network (WAN), a virtual network, including without limitation a virtual private network (“VPN”); the Internet, an intranet, an extranet, a cellular network, an infra-red network; a wireless network (e.g., a network operating under any of the IEEE 802.11 suite of protocols, the Bluetooth® protocol known in the art, and/or any other wireless protocol), and any other type of packet-switched or circuit-switched network known in the art and/or any combination of these and/or other networks. In addition, it can be appreciated that the communication network452need not be limited to any one network type, and instead may be comprised of a number of different networks and/or network types. 
The communication network452may comprise a number of different communication media such as coaxial cable, copper cable/wire, fiber-optic cable, antennas for transmitting/receiving wireless messages, and combinations thereof. The driving vehicle sensors and systems404may include at least one navigation408(e.g., global positioning system (GPS), etc.), orientation412, odometry416, LIDAR420, RADAR424, ultrasonic428, camera432, infrared (IR)436, and/or other sensor or system438. These driving vehicle sensors and systems404may be similar, if not identical, to the sensors and systems116A-K,112described in conjunction withFIGS.1and2. The navigation sensor408may include one or more sensors having receivers and antennas that are configured to utilize a satellite-based navigation system including a network of navigation satellites capable of providing geolocation and time information to at least one component of the vehicle100. Examples of the navigation sensor408as described herein may include, but are not limited to, at least one of Garmin® GLO™ family of GPS and GLONASS combination sensors, Garmin® GPS 15x™ family of sensors, Garmin® GPS 16x™ family of sensors with high-sensitivity receiver and antenna, Garmin® GPS 18x OEM family of high-sensitivity GPS sensors, Dewetron DEWE-VGPS series of GPS sensors, GlobalSat 1-Hz series of GPS sensors, other industry-equivalent navigation sensors and/or systems, and may perform navigational and/or geolocation functions using any known or future-developed standard and/or architecture. The orientation sensor412may include one or more sensors configured to determine an orientation of the vehicle100relative to at least one reference point. In some embodiments, the orientation sensor412may include at least one pressure transducer, stress/strain gauge, accelerometer, gyroscope, and/or geomagnetic sensor. 
Examples of the orientation sensor412as described herein may include, but are not limited to, at least one of Bosch Sensortec BMX160 series low-power absolute orientation sensors, Bosch Sensortec BMX055 9-axis sensors, Bosch Sensortec BMI055 6-axis inertial sensors, Bosch Sensortec BMI160 6-axis inertial sensors, Bosch Sensortec BMF055 9-axis inertial sensors (accelerometer, gyroscope, and magnetometer) with integrated Cortex M0+ microcontroller, Bosch Sensortec BMP280 absolute barometric pressure sensors, Infineon TLV493D-A1B6 3D magnetic sensors, Infineon TLI493D-W1B6 3D magnetic sensors, Infineon TL family of 3D magnetic sensors, Murata Electronics SCC2000 series combined gyro sensor and accelerometer, Murata Electronics SCC1300 series combined gyro sensor and accelerometer, other industry-equivalent orientation sensors and/or systems, which may perform orientation detection and/or determination functions using any known or future-developed standard and/or architecture. The odometry sensor and/or system416may include one or more components that are configured to determine a change in position of the vehicle100over time. In some embodiments, the odometry system416may utilize data from one or more other sensors and/or systems404in determining a position (e.g., distance, location, etc.) of the vehicle100relative to a previously measured position for the vehicle100. Additionally or alternatively, the odometry sensors416may include one or more encoders, Hall speed sensors, and/or other measurement sensors/devices configured to measure a wheel speed, rotation, and/or number of revolutions made over time.
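The conversion from encoder revolutions to a change in position described above can be sketched as follows. The pulse count, encoder resolution, and wheel diameter below are illustrative assumptions, not values from the disclosure.

```python
import math

def wheel_distance(pulses, pulses_per_rev, wheel_diameter_m):
    """Distance traveled by a wheel, from an encoder pulse count,
    the encoder resolution, and the wheel diameter (illustrative only)."""
    revolutions = pulses / pulses_per_rev
    # Each revolution advances the vehicle by one wheel circumference.
    return revolutions * math.pi * wheel_diameter_m

# e.g., 2048 pulses from an assumed 1024-pulse/rev encoder on a 0.65 m wheel
d = wheel_distance(2048, 1024, 0.65)
```

In practice an odometry system would fuse such per-wheel estimates with the other sensors and systems404, since wheel slip makes raw encoder distance drift over time.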
Examples of the odometry sensor/system416as described herein may include, but are not limited to, at least one of Infineon TLE4924/26/27/28C high-performance speed sensors, Infineon TLE4941plusC(B) single chip differential Hall wheel-speed sensors, Infineon TLE5041plusC Giant Magnetoresistance (GMR) effect sensors, Infineon TL family of magnetic sensors, EPC Model 25SP Accu-CoderPro™ incremental shaft encoders, EPC Model 40M compact incremental encoders with advanced magnetic sensing and signal processing technology, EPC Model 925 absolute shaft encoders, EPC Model 958 absolute shaft encoders, EPC Model MA46S/MA64S/SA46S absolute shaft encoders, Dynapar™ F18 commutating optical encoder, Dynapar™ HS45R family of phased array encoder sensors, other industry-equivalent odometry sensors and/or systems, and may perform change in position detection and/or determination functions using any known or future-developed standard and/or architecture. The LIDAR sensor/system420may include one or more components configured to measure distances to targets using laser illumination. In some embodiments, the LIDAR sensor/system420may provide 3D imaging data of an environment around the vehicle100. The imaging data may be processed to generate a full 360-degree view of the environment around the vehicle100. The LIDAR sensor/system420may include a laser light generator configured to generate a plurality of target illumination laser beams (e.g., laser light channels). In some embodiments, this plurality of laser beams may be aimed at, or directed to, a rotating reflective surface (e.g., a mirror) and guided outwardly from the LIDAR sensor/system420into a measurement environment. The rotating reflective surface may be configured to continually rotate 360 degrees about an axis, such that the plurality of laser beams is directed in a full 360-degree range around the vehicle100.
A photodiode receiver of the LIDAR sensor/system420may detect when light from the plurality of laser beams emitted into the measurement environment returns (e.g., reflected echo) to the LIDAR sensor/system420. The LIDAR sensor/system420may calculate, based on a time associated with the emission of light to the detected return of light, a distance from the vehicle100to the illuminated target. In some embodiments, the LIDAR sensor/system420may generate over 2.0 million points per second and have an effective operational range of at least 100 meters. Examples of the LIDAR sensor/system420as described herein may include, but are not limited to, at least one of Velodyne® LiDAR™ HDL-64E 64-channel LIDAR sensors, Velodyne® LiDAR™ HDL-32E 32-channel LIDAR sensors, Velodyne® LiDAR™ PUCK™ VLP-16 16-channel LIDAR sensors, Leica Geosystems Pegasus: Two mobile sensor platform, Garmin® LIDAR-Lite v3 measurement sensor, Quanergy M8 LiDAR sensors, Quanergy S3 solid state LiDAR sensor, LeddarTech® LeddarVU compact solid state fixed-beam LIDAR sensors, other industry-equivalent LIDAR sensors and/or systems, and may perform illuminated target and/or obstacle detection in an environment around the vehicle100using any known or future-developed standard and/or architecture. The RADAR sensors424may include one or more radio components that are configured to detect objects/targets in an environment of the vehicle100. In some embodiments, the RADAR sensors424may determine a distance, position, and/or movement vector (e.g., angle, speed, etc.) associated with a target over time. The RADAR sensors424may include a transmitter configured to generate and emit electromagnetic waves (e.g., radio, microwaves, etc.) and a receiver configured to detect returned electromagnetic waves. In some embodiments, the RADAR sensors424may include at least one processor configured to interpret the returned electromagnetic waves and determine locational properties of targets.
Examples of the RADAR sensors424as described herein may include, but are not limited to, at least one of Infineon RASIC™ RTN7745PL transmitter and RRN7745PL/46PL receiver sensors, Autoliv ASP Vehicle RADAR sensors, Delphi L2C0051TR 77 GHz ESR Electronically Scanning Radar sensors, Fujitsu Ten Ltd. Automotive Compact 77 GHz 3D Electronic Scan Millimeter Wave Radar sensors, other industry-equivalent RADAR sensors and/or systems, and may perform radio target and/or obstacle detection in an environment around the vehicle100using any known or future-developed standard and/or architecture. The ultrasonic sensors428may include one or more components that are configured to detect objects/targets in an environment of the vehicle100. In some embodiments, the ultrasonic sensors428may determine a distance, position, and/or movement vector (e.g., angle, speed, etc.) associated with a target over time. The ultrasonic sensors428may include an ultrasonic transmitter and receiver, or transceiver, configured to generate and emit ultrasound waves and interpret returned echoes of those waves. In some embodiments, the ultrasonic sensors428may include at least one processor configured to interpret the returned ultrasonic waves and determine locational properties of targets. Examples of the ultrasonic sensors428as described herein may include, but are not limited to, at least one of Texas Instruments TIDA-00151 automotive ultrasonic sensor interface IC sensors, MaxBotix® MB8450 ultrasonic proximity sensor, MaxBotix® ParkSonar™-EZ ultrasonic proximity sensors, Murata Electronics MA40H1S-R open-structure ultrasonic sensors, Murata Electronics MA40S4R/S open-structure ultrasonic sensors, Murata Electronics MA58MF14-7N waterproof ultrasonic sensors, other industry-equivalent ultrasonic sensors and/or systems, and may perform ultrasonic target and/or obstacle detection in an environment around the vehicle100using any known or future-developed standard and/or architecture.
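The LIDAR, RADAR, and ultrasonic sensors described above all rely on the same time-of-flight principle: the range to a target is half the round-trip echo time multiplied by the propagation speed of the wave. A minimal sketch (the propagation speeds are standard physical constants; the echo times are assumed example values):

```python
def time_of_flight_range(round_trip_s, speed_m_s):
    """Range to a target from the round-trip echo time; the wave travels
    out and back, so the one-way distance is half the product."""
    return round_trip_s * speed_m_s / 2.0

SPEED_OF_LIGHT = 299_792_458.0   # m/s, for LIDAR and RADAR
SPEED_OF_SOUND = 343.0           # m/s in air at ~20 C, for ultrasonic

lidar_range = time_of_flight_range(667e-9, SPEED_OF_LIGHT)  # ~100 m target
ultra_range = time_of_flight_range(0.01, SPEED_OF_SOUND)    # short-range target
```

The large gap between the two speeds is why ultrasonic sensors suit close-in parking distances while LIDAR and RADAR cover the 100-meter-class ranges cited above.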
The camera sensors432may include one or more components configured to detect image information associated with an environment of the vehicle100. In some embodiments, the camera sensors432may include a lens, filter, image sensor, and/or a digital image processor. It is an aspect of the present disclosure that multiple camera sensors432may be used together to generate stereo images providing depth measurements. Examples of the camera sensors432as described herein may include, but are not limited to, at least one of ON Semiconductor® MT9V024 Global Shutter VGA GS CMOS image sensors, Teledyne DALSA Falcon2 camera sensors, CMOSIS CMV50000 high-speed CMOS image sensors, other industry-equivalent camera sensors and/or systems, and may perform visual target and/or obstacle detection in an environment around the vehicle100using any known or future-developed standard and/or architecture. The infrared (IR) sensors436may include one or more components configured to detect image information associated with an environment of the vehicle100. The IR sensors436may be configured to detect targets in low-light, dark, or poorly-lit environments. The IR sensors436may include an IR light emitting element (e.g., IR light emitting diode (LED), etc.) and an IR photodiode. In some embodiments, the IR photodiode may be configured to detect returned IR light at or about the same wavelength as that emitted by the IR light emitting element. In some embodiments, the IR sensors436may include at least one processor configured to interpret the returned IR light and determine locational properties of targets. The IR sensors436may be configured to detect and/or measure a temperature associated with a target (e.g., an object, pedestrian, other vehicle, etc.).
Examples of IR sensors436as described herein may include, but are not limited to, at least one of Opto Diode lead-salt IR array sensors, Opto Diode OD-850 Near-IR LED sensors, Opto Diode SA/SHA727 steady state IR emitters and IR detectors, FLIR® LS microbolometer sensors, FLIR® TacFLIR 380-HD InSb MWIR FPA and HD MWIR thermal sensors, FLIR® VOx 640×480 pixel detector sensors, Delphi IR sensors, other industry-equivalent IR sensors and/or systems, and may perform IR visual target and/or obstacle detection in an environment around the vehicle100using any known or future-developed standard and/or architecture. The vehicle100can also include one or more interior sensors437. Interior sensors437can measure characteristics of the inside environment of the vehicle100. A navigation system402can include any hardware and/or software used to navigate the vehicle either manually or autonomously. The navigation system402may be as described in conjunction withFIG.4C. In some embodiments, the driving vehicle sensors and systems404may include other sensors438and/or combinations of the sensors406-437described above. Additionally or alternatively, one or more of the sensors406-437described above may include one or more processors configured to process and/or interpret signals detected by the one or more sensors406-437. In some embodiments, at least some sensor information provided by the vehicle sensors and systems404may be processed by at least one sensor processor430. Raw and/or processed sensor data may be stored in a sensor data memory434storage medium. In some embodiments, the sensor data memory434may store instructions used by the sensor processor430for processing sensor information provided by the sensors and systems404.
In any event, the sensor data memory434may be a disk drive, optical storage device, solid-state storage device such as a random access memory (“RAM”) and/or a read-only memory (“ROM”), which can be programmable, flash-updateable, and/or the like. The vehicle control system438may receive processed sensor information from the sensor processor430and determine to control an aspect of the vehicle100. Controlling an aspect of the vehicle100may include presenting information via one or more display devices472associated with the vehicle, sending commands to one or more computing devices468associated with the vehicle, and/or controlling a driving operation of the vehicle. In some embodiments, the vehicle control system438may correspond to one or more computing systems that control driving operations of the vehicle100in accordance with the Levels of driving autonomy described above. In one embodiment, the vehicle control system438may operate a speed of the vehicle100by controlling an output signal to the accelerator and/or braking system of the vehicle. In this example, the vehicle control system438may receive sensor data describing an environment surrounding the vehicle100and, based on the sensor data received, determine to adjust the acceleration, power output, and/or braking of the vehicle100. The vehicle control system438may additionally control steering and/or other driving functions of the vehicle100. The vehicle control system438may communicate, in real-time, with the driving sensors and systems404forming a feedback loop. In particular, upon receiving sensor information describing a condition of targets in the environment surrounding the vehicle100, the vehicle control system438may autonomously make changes to a driving operation of the vehicle100. The vehicle control system438may then receive subsequent sensor information describing any change to the condition of the targets detected in the environment as a result of the changes made to the driving operation. 
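The observe-and-act feedback loop between the vehicle control system438and the driving sensors and systems404can be sketched as a simple proportional speed controller. The gain, target speed, and one-line vehicle response model below are illustrative assumptions, not the disclosure's control law.

```python
def speed_control_step(current_speed, target_speed, gain=0.5):
    """One iteration of the sensing/actuation feedback loop: compare the
    measured speed to the target and return a throttle (positive) or
    braking (negative) command (illustrative proportional control only)."""
    error = target_speed - current_speed
    return gain * error

# Closed loop: each command changes the vehicle state, which the sensors
# then re-measure on the next iteration (assumed simple response model).
speed = 20.0
for _ in range(50):
    command = speed_control_step(speed, 25.0)
    speed += command * 0.1
```

Each pass of the loop mirrors the cycle in the text: sensor data describes the environment, the controller adjusts acceleration or braking, and subsequent sensor data reflects the change.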
This continual cycle of observation (e.g., via the sensors, etc.) and action (e.g., selected control or non-control of vehicle operations, etc.) allows the vehicle100to operate autonomously in the environment. In some embodiments, the one or more components of the vehicle100(e.g., the driving vehicle sensors404, vehicle control system438, display devices472, etc.) may communicate across the communication network452to one or more entities456A-N via a communications subsystem450of the vehicle100. Embodiments of the communications subsystem450are described in greater detail in conjunction withFIG.5. For instance, the navigation sensors408may receive global positioning, location, and/or navigational information from a navigation source456A. In some embodiments, the navigation source456A may be a global navigation satellite system (GNSS) similar, if not identical, to NAVSTAR GPS, GLONASS, EU Galileo, and/or the BeiDou Navigation Satellite System (BDS) to name a few. In some embodiments, the vehicle control system438may receive control information from one or more control sources456B. The control source456may provide vehicle control information including autonomous driving control commands, vehicle operation override control commands, and the like. The control source456may correspond to an autonomous vehicle control system, a traffic control system, an administrative control entity, and/or some other controlling server. It is an aspect of the present disclosure that the vehicle control system438and/or other components of the vehicle100may exchange communications with the control source456across the communication network452and via the communications subsystem450. Information associated with controlling driving operations of the vehicle100may be stored in a control data memory464storage medium. 
The control data memory464may store instructions used by the vehicle control system438for controlling driving operations of the vehicle100, historical control information, autonomous driving control rules, and the like. In some embodiments, the control data memory464may be a disk drive, optical storage device, solid-state storage device such as a random access memory (“RAM”) and/or a read-only memory (“ROM”), which can be programmable, flash-updateable, and/or the like. In addition to the mechanical components described herein, the vehicle100may include a number of user interface devices. The user interface devices receive and translate human input into a mechanical movement or electrical signal or stimulus. The human input may be one or more of motion (e.g., body movement, body part movement, in two-dimensional or three-dimensional space, etc.), voice, touch, and/or physical interaction with the components of the vehicle100. In some embodiments, the human input may be configured to control one or more functions of the vehicle100and/or systems of the vehicle100described herein. User interfaces may include, but are in no way limited to, at least one graphical user interface of a display device, steering wheel or mechanism, transmission lever or button (e.g., including park, neutral, reverse, and/or drive positions, etc.), throttle control pedal or mechanism, brake control pedal or mechanism, power control switch, communications equipment, etc. FIG.5illustrates a hardware diagram of communications componentry that can be optionally associated with the vehicle100in accordance with embodiments of the present disclosure. 
The communications componentry can include one or more wired or wireless devices such as a transceiver(s) and/or modem that allows communications not only between the various systems disclosed herein but also with other devices, such as devices on a network, and/or on a distributed network such as the Internet and/or in the cloud and/or with other vehicle(s). The communications subsystem450can also include inter- and intra-vehicle communications capabilities such as hotspot and/or access point connectivity for any one or more of the vehicle occupants and/or vehicle-to-vehicle communications. Additionally, and while not specifically illustrated, the communications subsystem450can include one or more communications links (that can be wired or wireless) and/or communications busses (managed by the bus manager574), including one or more of CAN bus, OBD-II, ARINC 429, Byteflight, CAN (Controller Area Network), D2B (Domestic Digital Bus), FlexRay, DC-BUS, IDB-1394, IEBus, I2C, ISO 9141-1/-2, J1708, J1587, J1850, J1939, ISO 11783, Keyword Protocol 2000, LIN (Local Interconnect Network), MOST (Media Oriented Systems Transport), Multifunction Vehicle Bus, SMARTwireX, SPI, VAN (Vehicle Area Network), and the like or in general any communications protocol and/or standard(s). The various protocols and communications can be communicated one or more of wirelessly and/or over transmission media such as single wire, twisted pair, fiber optic, IEEE 1394, MIL-STD-1553, MIL-STD-1773, power-line communication, or the like. (All of the above standards and protocols are incorporated herein by reference in their entirety). As discussed, the communications subsystem450enables communications between any of the inter-vehicle systems and subsystems as well as communications with non-collocated resources, such as those reachable over a network such as the Internet.
The communications subsystem450, in addition to well-known componentry (which has been omitted for clarity), includes interconnected elements including one or more of: one or more antennas504, an interleaver/deinterleaver508, an analog front end (AFE)512, memory/storage/cache516, controller/microprocessor520, MAC circuitry522, modulator/demodulator524, encoder/decoder528, a plurality of connectivity managers534,558,562,566, GPU540, accelerator544, a multiplexer/demultiplexer552, transmitter570, receiver572and additional wireless radio components such as a Wi-Fi PHY/Bluetooth® module580, a Wi-Fi/BT MAC module584, additional transmitter(s)588and additional receiver(s)592. The various elements in the device450are connected by one or more links/busses (not shown, again for sake of clarity). The device450can have one or more antennas504, for use in wireless communications such as multi-input multi-output (MIMO) communications, multi-user multi-input multi-output (MU-MIMO) communications, Bluetooth®, LTE, 4G, 5G, Near-Field Communication (NFC), etc., and in general for any type of wireless communications. The antenna(s)504can include, but are not limited to, one or more of directional antennas, omnidirectional antennas, monopoles, patch antennas, loop antennas, microstrip antennas, dipoles, and any other antenna(s) suitable for communication transmission/reception. In an exemplary embodiment, transmission/reception using MIMO may require particular antenna spacing. In another exemplary embodiment, MIMO transmission/reception can enable spatial diversity allowing for different channel characteristics at each of the antennas. In yet another embodiment, MIMO transmission/reception can be used to distribute resources to multiple users for example within the vehicle100and/or in another vehicle.
Antenna(s)504generally interact with the Analog Front End (AFE)512, which is needed to enable the correct processing of the received modulated signal and signal conditioning for a transmitted signal. The AFE512can be functionally located between the antenna and a digital baseband system in order to convert the analog signal into a digital signal for processing and vice-versa. The subsystem450can also include a controller/microprocessor520and a memory/storage/cache516. The subsystem450can interact with the memory/storage/cache516which may store information and operations necessary for configuring and transmitting or receiving the information described herein. The memory/storage/cache516may also be used in connection with the execution of application programming or instructions by the controller/microprocessor520, and for temporary or long term storage of program instructions and/or data. As examples, the memory/storage/cache516may comprise a computer-readable device, RAM, ROM, DRAM, SDRAM, and/or other storage device(s) and media. The controller/microprocessor520may comprise a general purpose programmable processor or controller for executing application programming or instructions related to the subsystem450. Furthermore, the controller/microprocessor520can perform operations for configuring and transmitting/receiving information as described herein. The controller/microprocessor520may include multiple processor cores, and/or implement multiple virtual processors. Optionally, the controller/microprocessor520may include multiple physical processors. By way of example, the controller/microprocessor520may comprise a specially configured Application Specific Integrated Circuit (ASIC) or other integrated circuit, a digital signal processor(s), a controller, a hardwired electronic or logic circuit, a programmable logic device or gate array, a special purpose computer, or the like.
The subsystem450can further include a transmitter(s)570,588and receiver(s)572,592which can transmit and receive signals, respectively, to and from other devices, subsystems and/or other destinations using the one or more antennas504and/or links/busses. Included in the subsystem450circuitry is the medium access control or MAC Circuitry522. MAC circuitry522provides for controlling access to the wireless medium. In an exemplary embodiment, the MAC circuitry522may be arranged to contend for the wireless medium and configure frames or packets for communicating over the wired/wireless medium. The subsystem450can also optionally contain a security module (not shown). This security module can contain information regarding, but not limited to, security parameters required to connect the device to one or more other devices or other available network(s), and can include WEP or WPA/WPA-2 (optionally+AES and/or TKIP) security access keys, network keys, etc. The WEP security access key is a security password used by Wi-Fi networks. Knowledge of this code can enable a wireless device to exchange information with an access point and/or another device. The information exchange can occur through encoded messages with the WEP access code often being chosen by the network administrator. WPA is a further security standard, also used in conjunction with network connectivity, that provides stronger encryption than WEP. In some embodiments, the communications subsystem450also includes a GPU540, an accelerator544, a Wi-Fi/BT/BLE (Bluetooth® Low-Energy) PHY module580and a Wi-Fi/BT/BLE MAC module584and optional wireless transmitter588and optional wireless receiver592. In some embodiments, the GPU540may be a graphics processing unit, or visual processing unit, comprising at least one circuit and/or chip that manipulates and changes memory to accelerate the creation of images in a frame buffer for output to at least one display device.
The GPU540may include one or more of a display device connection port, printed circuit board (PCB), a GPU chip, a metal-oxide-semiconductor field-effect transistor (MOSFET), memory (e.g., single data rate random-access memory (SDRAM), double data rate random-access memory (DDR) RAM, etc., and/or combinations thereof), a secondary processing chip (e.g., handling video out capabilities, processing, and/or other functions in addition to the GPU chip, etc.), a capacitor, heatsink, temperature control or cooling fan, motherboard connection, shielding, and the like. The various connectivity managers534,558,562,566manage and/or coordinate communications between the subsystem450and one or more of the systems disclosed herein and one or more other devices/systems. The connectivity managers534,558,562,566include a charging connectivity manager534, a vehicle database connectivity manager558, a remote operating system connectivity manager562, and a sensor connectivity manager566. The charging connectivity manager534can coordinate not only the physical connectivity between the vehicle100and a charging device/vehicle, but can also communicate with one or more of a power management controller, one or more third parties and optionally a billing system(s). As an example, the vehicle100can establish communications with the charging device/vehicle to one or more of coordinate interconnectivity between the two (e.g., by spatially aligning the charging receptacle on the vehicle with the charger on the charging vehicle) and optionally share navigation information. Once charging is complete, the amount of charge provided can be tracked and optionally forwarded to, for example, a third party for billing. In addition to being able to manage connectivity for the exchange of power, the charging connectivity manager534can also communicate information, such as billing information to the charging vehicle and/or a third party. 
This billing information could be, for example, the owner of the vehicle, the driver/occupant(s) of the vehicle, company information, or in general any information usable to charge the appropriate entity for the power received. The vehicle database connectivity manager558allows the subsystem to receive and/or share information stored in the vehicle database. This information can be shared with other vehicle components/subsystems and/or other entities, such as third parties and/or charging systems. The information can also be shared with one or more vehicle occupant devices, such as an app (application) on a mobile device the driver uses to track information about the vehicle100and/or a dealer or service/maintenance provider. In general, any information stored in the vehicle database can optionally be shared with any one or more other devices optionally subject to any privacy or confidentiality restrictions. The remote operating system connectivity manager562facilitates communications between the vehicle100and any one or more autonomous vehicle systems. These communications can include one or more of navigation information, vehicle information, other vehicle information, weather information, occupant information, or in general any information related to the remote operation of the vehicle100. The sensor connectivity manager566facilitates communications between any one or more of the vehicle sensors (e.g., the driving vehicle sensors and systems404, etc.) and any one or more of the other vehicle systems. The sensor connectivity manager566can also facilitate communications between any one or more of the sensors and/or vehicle systems and any other destination, such as a service company, app, or in general to any destination where sensor data is needed. In accordance with one exemplary embodiment, any of the communications discussed herein can be communicated via the conductor(s) used for charging.
One exemplary protocol usable for these communications is Power-line communication (PLC). PLC is a communication protocol that uses electrical wiring to simultaneously carry both data and Alternating Current (AC) electric power transmission or electric power distribution. It is also known as power-line carrier, power-line digital subscriber line (PDSL), mains communication, power-line telecommunications, or power-line networking (PLN). For DC environments in vehicles, PLC can be used in conjunction with CAN bus, LIN-bus over power line (DC-LIN) and DC-BUS. The communications subsystem can also optionally manage one or more identifiers, such as IP (Internet Protocol) address(es), associated with the vehicle and one or more other systems or subsystems or components and/or devices therein. These identifiers can be used in conjunction with any one or more of the connectivity managers as discussed herein. FIG.6illustrates a block diagram of a computing environment600that may function as the servers, user computers, or other systems provided and described herein. The computing environment600includes one or more user computers, or computing devices, such as a vehicle computing device604, a communication device608, and/or more612. The computing devices604,608,612may include general purpose personal computers (including, merely by way of example, personal computers, and/or laptop computers running various versions of Microsoft Corp.'s Windows® and/or Apple Corp.'s Macintosh® operating systems) and/or workstation computers running any of a variety of commercially-available UNIX® or UNIX-like operating systems. These computing devices604,608,612may also have any of a variety of applications, including for example, database client and/or server applications, and web browser applications.
Alternatively, the computing devices604,608,612may be any other electronic device, such as a thin-client computer, Internet-enabled mobile telephone, and/or personal digital assistant, capable of communicating via a network452and/or displaying and navigating web pages or other types of electronic documents or information. Although the exemplary computing environment600is shown with two computing devices, any number of user computers or computing devices may be supported. The computing environment600may also include one or more servers614,616. In this example, server614is shown as a web server and server616is shown as an application server. The web server614may be used to process requests for web pages or other electronic documents from computing devices604,608,612. The web server614can be running an operating system including any of those discussed above, as well as any commercially-available server operating systems. The web server614can also run a variety of server applications, including SIP (Session Initiation Protocol) servers, HTTP(s) servers, FTP servers, CGI servers, database servers, Java® servers, and the like. In some instances, the web server614may publish available operations as one or more web services. The computing environment600may also include one or more file and/or application servers616, which can, in addition to an operating system, include one or more applications accessible by a client running on one or more of the computing devices604,608,612. The server(s)616and/or614may be one or more general purpose computers capable of executing programs or scripts in response to the computing devices604,608,612. As one example, the server616,614may execute one or more web applications.
The web application may be implemented as one or more scripts or programs written in any programming language, such as Java®, C, C#®, or C++, and/or any scripting language, such as Perl, Python, or TCL, as well as combinations of any programming/scripting languages. The application server(s)616may also include database servers, including without limitation those commercially available from Oracle®, Microsoft®, Sybase®, IBM® and the like, which can process requests from database clients running on a computing device604,608,612. The web pages created by the server614and/or616may be forwarded to a computing device604,608,612via a web (file) server614,616. Similarly, the web server614may be able to receive web page requests, web services invocations, and/or input data from a computing device604,608,612(e.g., a user computer, etc.) and can forward the web page requests and/or input data to the web (application) server616. In further embodiments, the server616may function as a file server. Although for ease of description,FIG.6illustrates a separate web server614and file/application server616, those skilled in the art will recognize that the functions described with respect to servers614,616may be performed by a single server and/or a plurality of specialized servers, depending on implementation-specific needs and parameters. The computer systems604,608,612, web (file) server614and/or web (application) server616may function as the system, devices, or components described inFIGS.1-6. The computing environment600may also include a database618. The database618may reside in a variety of locations. By way of example, database618may reside on a storage medium local to (and/or resident in) one or more of the computers604,608,612,614,616. Alternatively, it may be remote from any or all of the computers604,608,612,614,616, and in communication (e.g., via the network452) with one or more of these. 
The database618may reside in a storage-area network (“SAN”) familiar to those skilled in the art. Similarly, any necessary files for performing the functions attributed to the computers604,608,612,614,616may be stored locally on the respective computer and/or remotely, as appropriate. The database618may be a relational database, such as Oracle 20i®, that is adapted to store, update, and retrieve data in response to SQL-formatted commands. FIG.7illustrates one embodiment of a computer system700upon which the servers, user computers, computing devices, or other systems or components described above may be deployed or executed. The computer system700is shown comprising hardware elements that may be electrically coupled via a bus704. The hardware elements may include one or more central processing units (CPUs)708; one or more input devices712(e.g., a mouse, a keyboard, etc.); and one or more output devices716(e.g., a display device, a printer, etc.). The computer system700may also include one or more storage devices720. By way of example, storage device(s)720may be disk drives, optical storage devices, solid-state storage devices such as a random access memory (“RAM”) and/or a read-only memory (“ROM”), which can be programmable, flash-updateable and/or the like. The computer system700may additionally include a computer-readable storage media reader724; a communications system728(e.g., a modem, a network card (wireless or wired), an infra-red communication device, etc.); and working memory736, which may include RAM and ROM devices as described above. The computer system700may also include a processing acceleration unit732, which can include a DSP, a special-purpose processor, and/or the like. 
The computer-readable storage media reader724can further be connected to a computer-readable storage medium, together (and, optionally, in combination with storage device(s)720) comprehensively representing remote, local, fixed, and/or removable storage devices plus storage media for temporarily and/or more permanently containing computer-readable information. The communications system728may permit data to be exchanged with a network and/or any other computer described above with respect to the computer environments described herein. Moreover, as disclosed herein, the term “storage medium” may represent one or more devices for storing data, including read only memory (ROM), random access memory (RAM), magnetic RAM, core memory, magnetic disk storage mediums, optical storage mediums, flash memory devices and/or other machine readable mediums for storing information. The computer system700may also comprise software elements, shown as being currently located within a working memory736, including an operating system740and/or other code744. It should be appreciated that alternate embodiments of a computer system700may have numerous variations from that described above. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, software (including portable software, such as applets), or both. Further, connection to other computing devices such as network input/output devices may be employed. 
Examples of the processors340,708as described herein may include, but are not limited to, at least one of Qualcomm® Snapdragon® 800 and 801, Qualcomm® Snapdragon® 620 and 615 with 4G LTE Integration and 64-bit computing, Apple® A7 processor with 64-bit architecture, Apple® M7 motion coprocessors, Samsung® Exynos® series, the Intel® Core™ family of processors, the Intel® Xeon® family of processors, the Intel® Atom™ family of processors, the Intel Itanium® family of processors, Intel® Core® i5-4670K and i7-4770K 22 nm Haswell, Intel® Core® i5-3570K 22 nm Ivy Bridge, the AMD® FX™ family of processors, AMD® FX-4300, FX-6300, and FX-8450 32 nm Vishera, AMD® Kaveri processors, Texas Instruments® Jacinto C6000™ automotive infotainment processors, Texas Instruments® OMAP™ automotive-grade mobile processors, ARM® Cortex™-M processors, ARM® Cortex-A and ARM926EJ-S™ processors, other industry-equivalent processors, and may perform computational functions using any known or future-developed standard, instruction set, libraries, and/or architecture. Any of the steps, functions, and operations discussed herein can be performed continuously and automatically. The exemplary systems and methods of this disclosure have been described in relation to a PDU router104and a network communications system101. However, to avoid unnecessarily obscuring the present disclosure, the preceding description omits a number of known structures and devices. This omission is not to be construed as a limitation of the scope of the claimed disclosure. Specific details are set forth to provide an understanding of the present disclosure. It should, however, be appreciated that the present disclosure may be practiced in a variety of ways beyond the specific detail set forth herein. 
Furthermore, while the exemplary embodiments illustrated herein show the various components of the system collocated, certain components of the system can be located remotely, at distant portions of a distributed network, such as a LAN and/or the Internet, or within a dedicated system. Thus, it should be appreciated, that the components of the system can be combined into one or more devices, such as a server, communication device, or collocated on a particular node of a distributed network, such as an analog and/or digital telecommunications network, a packet-switched network, or a circuit-switched network. It will be appreciated from the preceding description, and for reasons of computational efficiency, that the components of the system can be arranged at any location within a distributed network of components without affecting the operation of the system. Furthermore, it should be appreciated that the various links connecting the elements can be wired or wireless links, or any combination thereof, or any other known or later developed element(s) that is capable of supplying and/or communicating data to and from the connected elements. These wired or wireless links can also be secure links and may be capable of communicating encrypted information. Transmission media used as links, for example, can be any suitable carrier for electrical signals, including coaxial cables, copper wire, and fiber optics, and may take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. While the flowcharts have been discussed and illustrated in relation to a particular sequence of events, it should be appreciated that changes, additions, and omissions to this sequence can occur without materially affecting the operation of the disclosed embodiments, configuration, and aspects. A number of variations and modifications of the disclosure can be used. 
It would be possible to provide for some features of the disclosure without providing others. In yet another embodiment, the systems and methods of this disclosure can be implemented in conjunction with a special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element(s), an ASIC or other integrated circuit, a digital signal processor, a hard-wired electronic or logic circuit such as a discrete element circuit, a programmable logic device or gate array such as a PLD, PLA, FPGA, or PAL, a special purpose computer, any comparable means, or the like. In general, any device(s) or means capable of implementing the methodology illustrated herein can be used to implement the various aspects of this disclosure. Exemplary hardware that can be used for the present disclosure includes computers, handheld devices, telephones (e.g., cellular, Internet enabled, digital, analog, hybrids, and others), and other hardware known in the art. Some of these devices include processors (e.g., a single or multiple microprocessors), memory, nonvolatile storage, input devices, and output devices. Furthermore, alternative software implementations including, but not limited to, distributed processing or component/object distributed processing, parallel processing, or virtual machine processing can also be constructed to implement the methods described herein. In yet another embodiment, the disclosed methods may be readily implemented in conjunction with software using object or object-oriented software development environments that provide portable source code that can be used on a variety of computer or workstation platforms. Alternatively, the disclosed system may be implemented partially or fully in hardware using standard logic circuits or VLSI design. 
Whether software or hardware is used to implement the systems in accordance with this disclosure is dependent on the speed and/or efficiency requirements of the system, the particular function, and the particular software or hardware systems or microprocessor or microcomputer systems being utilized. In yet another embodiment, the disclosed methods may be partially implemented in software that can be stored on a storage medium, executed on programmed general-purpose computer with the cooperation of a controller and memory, a special purpose computer, a microprocessor, or the like. In these instances, the systems and methods of this disclosure can be implemented as a program embedded on a personal computer such as an applet, JAVA® or CGI script, as a resource residing on a server or computer workstation, as a routine embedded in a dedicated measurement system, system component, or the like. The system can also be implemented by physically incorporating the system and/or method into a software and/or hardware system. Although the present disclosure describes components and functions implemented in the embodiments with reference to particular standards and protocols, the disclosure is not limited to such standards and protocols. Other similar standards and protocols not mentioned herein are in existence and are considered to be included in the present disclosure. Moreover, the standards and protocols mentioned herein and other similar standards and protocols not mentioned herein are periodically superseded by faster or more effective equivalents having essentially the same functions. Such replacement standards and protocols having the same functions are considered equivalents included in the present disclosure. The present disclosure, in various embodiments, configurations, and aspects, includes components, methods, processes, systems and/or apparatus substantially as depicted and described herein, including various embodiments, subcombinations, and subsets thereof. 
Those of skill in the art will understand how to make and use the systems and methods disclosed herein after understanding the present disclosure. The present disclosure, in various embodiments, configurations, and aspects, includes providing devices and processes in the absence of items not depicted and/or described herein or in various embodiments, configurations, or aspects hereof, including in the absence of such items as may have been used in previous devices or processes, e.g., for improving performance, achieving ease, and/or reducing cost of implementation. The foregoing discussion of the disclosure has been presented for purposes of illustration and description. The foregoing is not intended to limit the disclosure to the form or forms disclosed herein. In the foregoing Detailed Description for example, various features of the disclosure are grouped together in one or more embodiments, configurations, or aspects for the purpose of streamlining the disclosure. The features of the embodiments, configurations, or aspects of the disclosure may be combined in alternate embodiments, configurations, or aspects other than those discussed above. This method of disclosure is not to be interpreted as reflecting an intention that the claimed disclosure requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment, configuration, or aspect. Thus, the following claims are hereby incorporated into this Detailed Description, with each claim standing on its own as a separate preferred embodiment of the disclosure. 
Moreover, though the description of the disclosure has included description of one or more embodiments, configurations, or aspects and certain variations and modifications, other variations, combinations, and modifications are within the scope of the disclosure, e.g., as may be within the skill and knowledge of those in the art, after understanding the present disclosure. It is intended to obtain rights, which include alternative embodiments, configurations, or aspects to the extent permitted, including alternate, interchangeable and/or equivalent structures, functions, ranges, or steps to those claimed, whether or not such alternate, interchangeable and/or equivalent structures, functions, ranges, or steps are disclosed herein, and without intending to publicly dedicate any patentable subject matter. Example aspects of the present disclosure include an apparatus including: a layered communication architecture; a processor; and memory in electronic communication with the processor; and instructions stored in the memory, the instructions being executable by the processor to: at a router module included in a first layer of the layered communication architecture, determine, based on a first PDU identifier and using a LUT associated with routing a PDU, a communication bus and an interface module associated with the communication bus; determine, based on the first PDU identifier and using a LUT associated with the interface module, a second PDU identifier associated with the interface module; and transmit the PDU and the second PDU identifier to the interface module. 
Aspects of the above apparatus include: where the instructions are further executable by the processor to: at a PDU service module included in the first layer of the layered communication architecture, determine, based on the first PDU identifier and using a LUT associated with servicing the PDU, a signal identifier associated with a set of signals to include in a PDU; generate, based on the LUT associated with servicing the PDU, the PDU including the set of signals; and transmit the PDU and the first PDU identifier to the router module. Aspects of the above apparatus include: where the instructions are further executable by the processor to: at the PDU service module, identify an event associated with transmitting the PDU; and identify the first PDU identifier, where the first PDU identifier is associated with the event. Aspects of the above apparatus include: where generating the PDU is based on the first PDU identifier and a second LUT associated with generating the PDU, and where the second LUT includes the set of signals. Aspects of the above apparatus include: where transmitting the PDU and the first PDU identifier to the router module is associated with a first temporal instance; and transmitting the PDU and the second PDU identifier to the interface module is associated with a second temporal instance. Aspects of the above apparatus include: where transmitting the PDU and the second PDU identifier to the interface module may include: transmitting the PDU and the second PDU identifier to a transport module associated with the communication bus, where the transport module forwards the PDU and the second PDU identifier to the interface module based on the second PDU identifier. 
Aspects of the above apparatus include: where the instructions are further executable by the processor to: determine, at the interface module, based on the second PDU identifier and using a LUT associated with the interface module, a controller identifier indicating a controller module associated with the communication bus; determine, at the interface module, based on the second PDU identifier and using a LUT associated with the controller module, a message identifier; and generate, using the controller module, a message including the message identifier and the set of signals. Aspects of the above apparatus include: where the instructions are further executable by the processor to: transmit the message across the communication bus, using a device driver associated with the communication bus. Aspects of the above apparatus include: where the instructions are further executable by the processor to: at the router module, determine, based on a third PDU identifier and using the LUT associated with routing the PDU, a second communication bus and a second interface module associated with the second communication bus; determine, based on the third PDU identifier and using a LUT associated with the second interface module, a fourth PDU identifier associated with the second interface module; and transmit the PDU and the fourth PDU identifier to the second interface module. Aspects of the above apparatus include: where transmitting the PDU and the second PDU identifier to the interface module and transmitting the PDU and the fourth PDU identifier to the second interface module are based on a same data request by the router module. 
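The transmit-side flow recited above can be sketched as a chain of lookups: a first PDU identifier selects a bus and an interface module at the router, the interface's LUT maps it to a second PDU identifier, and the controller's LUT maps that to a bus-level message identifier. The following is a minimal illustrative sketch; all table names and contents (ROUTING_LUT, IF_LUT, CTRL_LUT, MSG_LUT, the identifier values) are assumptions for illustration and are not taken from the disclosure.

```python
# Router layer: first PDU identifier -> (communication bus, interface module).
ROUTING_LUT = {0x10: ("CAN0", "if_can0")}

# Interface layer: first PDU identifier -> second PDU identifier.
IF_LUT = {"if_can0": {0x10: 0x20}}

# Interface layer: second PDU identifier -> controller module identifier.
CTRL_LUT = {"if_can0": {0x20: "ctrl_can0"}}

# Controller layer: second PDU identifier -> bus message identifier.
MSG_LUT = {"ctrl_can0": {0x20: 0x123}}

def route_and_frame(first_pdu_id, pdu):
    """Walk the LUT chain: router -> interface -> controller -> message."""
    bus, interface = ROUTING_LUT[first_pdu_id]
    second_pdu_id = IF_LUT[interface][first_pdu_id]
    controller = CTRL_LUT[interface][second_pdu_id]
    message_id = MSG_LUT[controller][second_pdu_id]
    # The generated message carries the bus-level identifier and the PDU's signals.
    return bus, {"id": message_id, "payload": pdu}

bus, msg = route_and_frame(0x10, b"\x01\x02")
```

Each layer consults only its own LUT and its own identifier space, which is the property the claims rely on: no module needs to know the identifiers used by any other layer.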
Aspects of the above apparatus include: a database including a set of LUTs, where each LUT of the set of LUTs is associated with a respective layer of a set of layers of the layered communication architecture, a respective module of a set of modules of the layered communication architecture, or both, where the set of layers may include at least the first layer, and the set of modules may include at least the router module and the interface module. Aspects of the above apparatus include: where at least one LUT of the set of LUTs may include a non-colliding O(1) lookup table. Aspects of the above apparatus include: where the database may include multiple sets of PDU identifiers; and each set of PDU identifiers is associated with a respective LUT of the set of LUTs, a respective layer of the layered communication architecture, a respective module of the layered communication architecture, or a combination thereof. Aspects of the above apparatus include: a database including a set of LUTs, where the set of LUTs may include at least one of: a LUT associated with servicing the PDU; the LUT associated with routing the PDU; a LUT associated with the interface module; and a LUT associated with a controller module associated with the communication bus. 
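One common way to realize the non-colliding O(1) lookup table mentioned above is to assign PDU identifiers as a dense range at build time so that the table is a directly indexed array: every lookup is a single index operation with no hashing and no collision handling. This is an illustrative assumption about the implementation, not a construction stated in the disclosure; the class name and values are hypothetical.

```python
class DirectLUT:
    """Direct-indexed lookup table: non-colliding, O(1) per lookup."""

    def __init__(self, entries):
        # entries: {pdu_id: value}; identifiers are assumed to be small,
        # dense non-negative integers assigned when the tables are built.
        size = max(entries) + 1
        self._table = [None] * size
        for pdu_id, value in entries.items():
            self._table[pdu_id] = value

    def lookup(self, pdu_id):
        # A single array index: constant time, no collisions possible.
        return self._table[pdu_id]

lut = DirectLUT({0: "if_can0", 1: "if_lin0", 2: "if_eth0"})
```

Because the identifier spaces are fixed at design time in such architectures, the dense-assignment assumption is cheap to satisfy and avoids the worst-case behavior of a general hash table.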
Aspects of the above apparatus include: a database including multiple sets of PDU identifiers, where the multiple sets of PDU identifiers may include at least one of: a first set of PDU identifiers associated with multiplexing signals into an outgoing PDU, demultiplexing signals of an incoming PDU, or both; a second set of PDU identifiers associated with routing the outgoing PDU, the incoming PDU, or both; a third set of PDU identifiers associated with determining a transmission interface to which to transmit the outgoing PDU, determining a buffer to which to route the incoming PDU, or both; and a fourth set of PDU identifiers associated with determining a message identifier corresponding to the outgoing PDU, the incoming PDU, or both, where the fourth set of PDU identifiers is associated with determining a communication bus corresponding to the outgoing PDU, the incoming PDU, or both. Example aspects of the present disclosure include a method including: determining, based on a first PDU identifier and using a LUT associated with routing a PDU, a communication bus and an interface module associated with the communication bus; determining, based on the first PDU identifier and using a LUT associated with the interface module, a second PDU identifier associated with the interface module; and transmitting the PDU and the second PDU identifier to the interface module. Aspects of the above method include: determining, based on the first PDU identifier and using a LUT associated with servicing the PDU, a signal identifier associated with a set of signals to include in a PDU; generating, based on the LUT associated with servicing the PDU, the PDU including the set of signals; and transmitting the PDU and the first PDU identifier. 
Aspects of the above method include: determining, based on the second PDU identifier and using a LUT associated with the interface module, a controller identifier indicating a controller module associated with the communication bus; determining, based on the second PDU identifier and using a LUT associated with the controller module, a message identifier; and generating a message including the message identifier and the set of signals. Example aspects of the present disclosure include an apparatus including: a layered communication architecture; a processor; and memory in electronic communication with the processor; and instructions stored in the memory, the instructions being executable by the processor to: receive a message over a communication bus, using a device driver associated with the communication bus, where the message may include a PDU; detect, using a controller module, a message identifier included in the message; and determine, at an interface module associated with the communication bus, based on the message identifier and using a LUT associated with the controller module, a first PDU identifier associated with the interface module. Aspects of the above apparatus include: where the instructions are further executable by the processor to: at a router module included in a first layer of the layered communication architecture, determine, based on the first PDU identifier and using a LUT associated with the interface module, a PDU service module included in the first layer of the layered communication architecture; determine, based on the first PDU identifier and using a second LUT associated with the interface module, a second PDU identifier associated with the PDU service module; and transmit the PDU and the second PDU identifier to the PDU service module. 
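The receive-side aspects above run the same LUT chain in reverse: a detected bus message identifier is mapped back to a first PDU identifier at the interface, and the router then selects a PDU service module and a second, service-layer PDU identifier. The sketch below is a hypothetical model of that flow; the table names, module names, and identifier values are illustrative assumptions.

```python
# Controller layer: bus message identifier -> first PDU identifier.
RX_CTRL_LUT = {0x123: 0x20}

# Router layer: first PDU identifier -> destination PDU service module.
RX_ROUTE_LUT = {0x20: "svc_powertrain"}

# Interface layer: first PDU identifier -> second PDU identifier used by
# the PDU service module.
RX_IF_LUT = {0x20: 0x30}

def receive(message):
    """Map an incoming bus message up to a PDU service module."""
    first_pdu_id = RX_CTRL_LUT[message["id"]]
    service = RX_ROUTE_LUT[first_pdu_id]
    second_pdu_id = RX_IF_LUT[first_pdu_id]
    # Deliver the PDU and its service-layer identifier upward; the service
    # module would then extract the set of signals from the payload.
    return service, second_pdu_id, message["payload"]

service, pdu_id, payload = receive({"id": 0x123, "payload": b"\x05"})
```

As on the transmit side, each layer translates only between its own identifier space and its neighbor's, so adding a new bus or service module touches only the LUTs, not the routing code.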
Aspects of the above apparatus include: where the instructions are further executable by the processor to: at the PDU service module, determine, based on the second PDU identifier and using a LUT associated with servicing the PDU, a signal identifier associated with a set of signals to extract from the PDU; and extract, based on the signal identifier and using the LUT associated with servicing the PDU, the set of signals. The phrases “at least one,” “one or more,” “or,” and “and/or” are open-ended expressions that are both conjunctive and disjunctive in operation. For example, each of the expressions “at least one of A, B and C,” “at least one of A, B, or C,” “one or more of A, B, and C,” “one or more of A, B, or C,” “A, B, and/or C,” and “A, B, or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together. The term “a” or “an” entity refers to one or more of that entity. As such, the terms “a” (or “an”), “one or more,” and “at least one” can be used interchangeably herein. It is also to be noted that the terms “comprising,” “including,” and “having” can be used interchangeably. The term “automatic” and variations thereof, as used herein, refers to any process or operation, which is typically continuous or semi-continuous, done without material human input when the process or operation is performed. However, a process or operation can be automatic, even though performance of the process or operation uses material or immaterial human input, if the input is received before performance of the process or operation. Human input is deemed to be material if such input influences how the process or operation will be performed. Human input that consents to the performance of the process or operation is not deemed to be “material.” Aspects of the present disclosure may take the form of an embodiment that is entirely hardware, an embodiment that is entirely software (including firmware, resident software, micro-code, etc.) 
or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module,” or “system.” Any combination of one or more computer-readable medium(s) may be utilized. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer-readable signal medium may be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including, but not limited to, wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. The terms “determine,” “calculate,” “compute,” and variations thereof, as used herein, are used interchangeably and include any type of methodology, process, mathematical operation or technique.
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS To make objectives, technical solutions, and advantages of this application clearer, the following further describes implementations of this application in detail with reference to accompanying drawings. To facilitate understanding of embodiments of this application, the following first describes a system architecture and related terms used in the embodiments of this application. The embodiments of this application are applicable to an EVPN network. The EVPN network includes a plurality of network devices, and the network device may be a service provider edge (PE) device. The network device may include a forwarding plane and a control plane. The control plane may be used to learn a MAC entry by using a BGP peer, and deliver the learned MAC entry to the forwarding plane. The forwarding plane guides forwarding of received traffic based on the received MAC entry and a locally learned MAC entry. FIG.1is a schematic diagram of networking of an EVPN. In the EVPN, to implement interworking between sites, an EVPN instance is created on network devices (namely, PE devices) on a carrier backbone network, and a customer edge (CE) device at each site is connected to the PE device. In addition, a neighbor relationship is established between the network devices. Because each site is a layer 2 network, the network device learns MAC entries rather than routes from CE devices. The network device forwards the MAC entries learned from the CE devices to other sites by using an EVPN-specific route type (namely, the BGP protocol). In this way, each network device in the EVPN has MAC entries of all CE devices. It should be noted that a network device herein may learn MAC entries of a CE device connected to the network device, where these MAC entries may be referred to as local MAC entries. 
The network device learns, from another network device based on the BGP protocol, a MAC entry of a CE device connected to that other network device, and the other network device may be referred to as a BGP peer of the network device. An embodiment of this application provides a method for managing a MAC table. The method may be performed by a network device, and the network device may be a PE device in an EVPN. FIG.2is a structural block diagram of a network device according to an embodiment of this application. The network device may include at least a receiver201, a processor202, a memory203, and a transmitter204. The receiver201may be configured to receive data, the transmitter204may be configured to send data, the memory203may be configured to store a software program and a module, and the processor202executes various functional applications and data processing by running the software program and the module that are stored in the memory203. The memory203may mainly include a program storage area and a data storage area. The program storage area may store an operating system, an application required by at least one function, and the like. The data storage area may store data created according to use of the network device, and the like. In addition, the memory203may include a high-speed random access memory, and may further include a non-volatile memory, for example, at least one magnetic disk storage device, a flash storage device, or another non-volatile solid-state storage device. Correspondingly, the memory203may further include a memory controller, to provide the processor202, the receiver201, and the transmitter204with access to the memory203. The processor202is a control center of the network device. 
The processor202connects parts of the entire network device through various interfaces and lines, and performs various functions and data processing of the network device by running or executing the software program and/or the module stored in the memory203and invoking data stored in the memory203, to perform overall monitoring on the network device. In a specific implementation, the processor202may include one or more processing cores. Preferably, an application processor and a modem processor may be integrated into the processor202, where the application processor mainly processes an operating system, a user interface, an application, and the like, and the modem processor mainly processes wireless communication. It may be understood that the modem processor may alternatively not be integrated into the processor202. The network device may include a forwarding plane and a control plane. The control plane may be implemented by a central processing unit (CPU), or may be implemented by a network processor (NP) with a control plane function. The forwarding plane may be a switching chip. For example, the forwarding plane may be implemented by an application-specific integrated circuit (ASIC), a programmable logic device (PLD), an NP, a core configured to implement the forwarding plane in a multi-core CPU, or any combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. With reference to a specific implementation, the following describes the processing procedure shown inFIG.3in detail. Step301: A control plane of a network device obtains a first MAC entry. In implementation, during networking in an EVPN, the control plane of the network device may learn, from another network device based on the BGP protocol, a MAC entry (the learned MAC entry includes the first MAC entry) of a CE device connected to that other network device. 
The another network device may be referred to as a BGP peer of the network device, and the MAC entry includes a MAC address, an outbound interface corresponding to the MAC address, a broadcast domain, and the like. Step302: The control plane sends the first MAC entry and a first identifier to a forwarding plane of the network device, where the first identifier is used to instruct the forwarding plane to set the first MAC entry to a valid state in a first MAC table, and the first MAC entry includes a first MAC address. In implementation, after obtaining the first MAC entry, the control plane may send the first MAC entry and the first identifier to the forwarding plane of the network device. The first identifier is used to instruct the forwarding plane to set the first MAC entry to the valid state in the stored first MAC table, and the first MAC entry includes the first MAC address. Step303: The forwarding plane of the network device stores, in the first MAC table, the first MAC entry sent by the control plane, where a state of the first MAC entry is set to the valid state. The first MAC table is a MAC table stored by the forwarding plane of the network device. In implementation, after receiving the first MAC entry and the first identifier that are sent by the control plane, the forwarding plane may store the first MAC entry in the first MAC table, and set the state of the first MAC entry to the valid state (as shown in Table 1, herein, a state flag may be added to the first MAC table to identify a state of a MAC entry).

TABLE 1

MAC address          Broadcast domain    . . .    State flag
First MAC address    A                   . . .    Valid state
. . .                . . .               . . .    . . .

In Table 1, the first MAC address is a MAC address in the first MAC entry, the broadcast domain is A, the state is the valid state, and an ellipsis between the broadcast domain and the state flag indicates another attribute of the MAC entry. 
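The stateful table of Step 303 can be illustrated with a short Python sketch. This is only an illustration of the data structure under assumed names (`MacTable`, `install`, the identifier values); the patent's forwarding plane would realize the same table in a switching chip or NP, not in Python.

```python
VALID = 0    # "first identifier": entry is in the valid state
INVALID = 1  # "second identifier": entry is in the invalid state

class MacTable:
    """Forwarding-plane first MAC table with a per-entry state flag (Table 1)."""
    def __init__(self):
        self.entries = {}  # MAC address -> attribute dict

    def install(self, mac, broadcast_domain, outbound_if):
        # An entry received from the control plane is stored in the valid state.
        self.entries[mac] = {
            "broadcast_domain": broadcast_domain,
            "outbound_if": outbound_if,
            "state": VALID,
        }

table = MacTable()
table.install("00:11:22:33:44:55", "A", "eth1")
```

Keying the dictionary by MAC address mirrors the lookup the forwarding plane performs on a packet's destination MAC address.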
In a specific implementation, when the state of the first MAC entry is set to the valid state, a state flag of the first MAC entry may be set to the first identifier. If a state flag of a MAC entry is the first identifier, it indicates that a state of the MAC entry is a valid state. In addition, in this embodiment of this application, the forwarding plane of the network device may further learn a MAC entry from a CE device locally connected to the network device, and a state of the MAC entry learned from the local CE device is also set to a valid state. To prevent a local traffic attack, a MAC entry limit value of a local interface may be set. Generally, a local MAC entry limit value is related to a quantity of CEs connected to the network device. Generally, the MAC entry limit value is equal to the quantity of CEs connected to the network device. MAC entries mentioned later are MAC entries learned from the BGP peer based on the BGP protocol. Step304: The forwarding plane changes the state of the first MAC entry from the valid state to an invalid state, and sets invalid duration for the first MAC entry. After the first MAC entry is stored in the forwarding plane, the state of the first MAC entry is changed to the invalid state after first preset duration elapses and when a first preset condition is met. The first preset condition is that the forwarding plane does not receive traffic corresponding to the first MAC address within the first preset duration after storing the first MAC entry, and the first MAC address is a MAC address included in the first MAC entry. In implementation, after receiving traffic, the forwarding plane may obtain a destination MAC address of the traffic, and match the destination MAC address in the first MAC table. If the destination MAC address can be matched, the traffic is forwarded based on an outbound interface corresponding to the destination MAC address. 
For the first MAC table, after the first MAC entry is stored in the forwarding plane, if there is no traffic whose destination MAC address is the same as the first MAC address (namely, the MAC address in the first MAC entry) after the first preset duration elapses, the forwarding plane may set, in the first MAC table, the state flag of the first MAC entry to a second identifier, to be specific, change the state of the first MAC entry from the valid state to the invalid state. The second identifier is used to indicate that the state of the MAC entry is the invalid state. In addition, the forwarding plane may obtain pre-stored invalid duration, and set the invalid duration to the invalid duration of the first MAC entry. A table of the invalid duration may be added to the first MAC table. For example, as shown in Table 2, if the invalid duration is five minutes, five minutes may be written into the table of the invalid duration.

TABLE 2

MAC address          Broadcast domain    . . .    State flag           Invalid duration
First MAC address    A                   . . .    Second identifier    Five minutes
. . .                . . .               . . .    . . .                . . .

It should be noted that the invalid duration may be pre-configured in the forwarding plane, and invalid duration of each MAC entry is generally the same. The first identifier may be 0, and the second identifier may be 1. It should further be noted that the destination MAC address rather than a source MAC address is matched because the destination MAC address rather than the source MAC address is used during forwarding. During forwarding, traffic without a matched destination MAC address is broadcast, regardless of a source MAC address. Therefore, the destination MAC address rather than the source MAC address needs to be matched. In a specific implementation, to synchronize the forwarding plane with the control plane, after setting the state of the first MAC entry to the invalid state, the forwarding plane may further notify the control plane. Processing may be as follows. 
The forwarding plane sends a first message to the control plane, where the first message is used to instruct the control plane to set a second MAC entry that is stored in a second MAC table stored in the control plane to an invalid state, and the second MAC entry includes the first MAC address. The control plane receives the first message sent by the forwarding plane, and the control plane sets the second MAC entry that is stored in the second MAC table stored in the control plane to the invalid state. In implementation, after setting the state of the first MAC entry to the invalid state, the forwarding plane may generate the first message, add an identifier (the identifier may be the first MAC address) of the first MAC entry to the first message, and then send the first message to the control plane. The first message is used to instruct the control plane to set the second MAC entry that is stored in the second MAC table stored in the control plane to the invalid state. After receiving the first message, the control plane may parse the first message to obtain the identifier of the first MAC entry, then obtain the stored second MAC table, find, in the second MAC table by using the identifier of the first MAC entry, the second MAC entry corresponding to the identifier of the first MAC entry, and then set a flag of a state of the second MAC entry to the second identifier, to be specific, the state of the second MAC entry is updated from a valid state to the invalid state. In a specific implementation, to save storage space of MAC entries of the forwarding plane, a MAC entry that meets a specific condition may be deleted. Corresponding processing may be as follows: The forwarding plane deletes the first MAC entry from the first MAC table, where after the first MAC entry is set to the invalid state, the first MAC entry is deleted after the invalid duration and when a second preset condition is met. 
The second preset condition is that the forwarding plane does not receive the traffic corresponding to the first MAC address within the invalid duration after setting the first MAC entry to the invalid state. In implementation, after setting the state of the first MAC entry to the invalid state, the forwarding plane may determine whether traffic whose destination address is the first MAC address (namely, the traffic corresponding to the first MAC address) is received within the invalid duration. If the traffic whose destination address is the first MAC address is not received within the invalid duration, the forwarding plane may delete the first MAC entry from the first MAC table, to save the storage space of the MAC entries of the forwarding plane. For example, the invalid duration of the first MAC entry is five minutes, and if the traffic whose destination address is the first MAC address is not received within five minutes after the first MAC entry is set to the invalid state, the first MAC entry may be deleted from the first MAC table. In this way, a MAC entry in a valid state and a MAC entry in an invalid state are identified, and the MAC entry in the invalid state is preferentially filtered out. Therefore, this makes limitation of the MAC entry more accurate and improves an anti-attack effect of the EVPN. In a specific implementation, to reduce a quantity of times that traffic is multicast or broadcast in the forwarding plane, the forwarding plane may restore a MAC entry that meets a specific condition and is in an invalid state to a valid state. Corresponding processing may be as follows: The forwarding plane changes the state of the first MAC entry from the invalid state to the valid state, where the state of the first MAC entry is changed to the valid state after the forwarding plane receives the traffic corresponding to the first MAC address within the invalid duration. 
In implementation, after setting the state of the first MAC entry to the invalid state, the forwarding plane may determine whether the traffic whose destination address is the first MAC address (namely, the traffic corresponding to the first MAC address) is received within the invalid duration. If the traffic whose destination address is the first MAC address is received within the invalid duration, the state of the first MAC entry may be updated from the invalid state to the valid state. For example, the invalid duration of the first MAC entry is five minutes. At the fourth minute after the first MAC entry is set to the invalid state, when the traffic whose destination address is the first MAC address is received, the state of the first MAC entry may be changed from the invalid state to the valid state. In this way, it can be learned from the foregoing process that a frequently used MAC entry is generally not deleted. Therefore, when traffic corresponding to the MAC entry is received, the traffic may be forwarded based on the MAC entry. This can reduce a quantity of times that the traffic is broadcast or multicast. In addition, when the state of the first MAC entry is changed from the invalid state to the valid state, the control plane may further be notified of the change. Corresponding processing may be as follows. The forwarding plane may send a third message to the control plane, where the third message may be used to instruct the control plane to set the second MAC entry that is stored in the second MAC table stored in the control plane to the valid state. After receiving the third message, the control plane may change the state of the second MAC entry to the valid state. In this way, it can be ensured that MAC entries of the forwarding plane and the control plane are synchronized as much as possible. 
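The entry lifecycle described above, from valid, to invalid after the first preset duration, and then either deleted or restored within the invalid duration, can be sketched in Python. The function names and the concrete 300-second values are illustrative assumptions, not part of the claimed method.

```python
FIRST_PRESET = 300      # assumed first preset duration, in seconds
INVALID_DURATION = 300  # assumed pre-configured invalid duration ("five minutes")

def age_out(entry, now):
    """Step 304: a valid entry with no matching traffic for FIRST_PRESET
    seconds is moved to the invalid state and stamped with the invalid duration."""
    if entry["state"] == "valid" and now - entry["last_hit"] >= FIRST_PRESET:
        entry["state"] = "invalid"
        entry["invalid_since"] = now
        entry["invalid_duration"] = INVALID_DURATION

def on_invalid_timer(table, mac, now):
    """Second preset condition: an invalid entry that saw no traffic for its
    whole invalid duration is deleted to free forwarding-plane storage."""
    e = table[mac]
    if e["state"] == "invalid" and now - e["invalid_since"] >= e["invalid_duration"]:
        del table[mac]

def on_traffic(table, mac):
    """Traffic seen within the invalid duration restores the entry to valid."""
    e = table.get(mac)
    if e is not None and e["state"] == "invalid":
        e["state"] = "valid"

table = {
    "aa": {"state": "valid", "last_hit": 0},
    "bb": {"state": "valid", "last_hit": 0},
}
for mac in ("aa", "bb"):
    age_out(table[mac], now=300)        # both entries become invalid
on_traffic(table, "bb")                 # "bb" sees traffic at minute 4: restored
on_invalid_timer(table, "aa", now=600)  # "aa" stays silent: deleted
```

The split into an aging step and a separate deletion step is what lets a frequently used entry escape deletion: any hit during the invalid window flips it back to valid before the timer fires.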
In a specific implementation, as shown inFIG.4, an embodiment of this application further provides the following processing when a destination address of traffic received by the forwarding plane cannot match a MAC entry in the first MAC table. Step401: The forwarding plane sends a second message to the control plane, where the second message carries a second MAC address and is used to request a third MAC entry, and the third MAC entry includes the second MAC address. In implementation, when receiving traffic, the forwarding plane may search, by using a destination MAC address in the traffic, the first MAC table for a MAC entry in which the destination MAC address is located. If the destination MAC address (which may be referred to as the second MAC address subsequently) does not exist, the forwarding plane may generate the second message, enable the second message to carry the second MAC address, and then send the second message to the control plane. Step402: The control plane receives the second message sent by the forwarding plane, where the second message carries the second MAC address. Step403: The control plane sends the third MAC entry to the forwarding plane, where the third MAC entry includes the second MAC address. In implementation, after receiving the second message, the control plane may parse the second message to obtain the second MAC address, and traverse a stored MAC table by using the second MAC address, to determine whether a MAC entry to which the second MAC address belongs exists. If the MAC entry (namely, the third MAC entry) to which the second MAC address belongs exists, the control plane may send the third MAC entry and the first identifier to the forwarding plane, where the first identifier is used to instruct the forwarding plane to set the third MAC entry to a valid state. 
In addition, if the MAC entry to which the second MAC address belongs does not exist in the control plane, the control plane may generate a fourth message, and send the fourth message to the forwarding plane, where the fourth message is used to instruct the forwarding plane to delete traffic corresponding to the second MAC address this time. In this way, an attack can be avoided. In a specific implementation, when an occupation rate of storage space of MAC entries of the forwarding plane is greater than a first preset value, processing in step403may be as follows. When the occupation rate of the storage space of the MAC entries of the forwarding plane exceeds the first preset value, the second message further carries a third MAC address, where the third MAC address is a MAC address that is stored by the forwarding plane and has the longest current invalid duration. If the second MAC address exists in the control plane and the second MAC address exists in the stored MAC table, the third MAC entry is sent to the forwarding plane. If the second MAC address exists in the control plane, the second MAC address does not exist in the stored MAC table, and a border gateway protocol BGP MAC entry limit is deployed, after a fourth MAC entry is deleted from the stored MAC table, the third MAC entry is sent to the stored MAC table, and the third MAC entry is sent to the forwarding plane. The fourth MAC entry includes the third MAC address. The first preset value may be preset and stored in the forwarding plane, for example, may be 95%. The BGP MAC entry limit is a limit set for MAC entries learned based on a BGP peer. For example, if the BGP MAC entry limit of the control plane is 5000, the MAC entries learned based on the BGP peer are actually 10000, but only 5000 MAC entries are stored in the MAC table of the control plane. The MAC table is delivered to the forwarding plane to guide traffic forwarding of the forwarding plane. 
The remaining 5000 MAC entries are not stored in the MAC table of the control plane but stored in another place of the control plane, and the 5000 MAC entries are not delivered to the forwarding plane. In implementation, when receiving the traffic, the forwarding plane may search, by using the destination MAC address in the traffic, the first MAC table for the MAC entry in which the destination MAC address is located. If the destination MAC address (which may be referred to as the second MAC address subsequently) does not exist, a size of the storage space of the MAC entries of the forwarding plane may be obtained, and a current storage capacity of the MAC entries of the forwarding plane is obtained. The current storage capacity is compared with the size of the storage space of the MAC entries, to obtain a current occupation rate of the storage space of the MAC entries. The occupation rate is compared with the first preset value. If the occupation rate exceeds the first preset value, a MAC address (namely, the third MAC address) that is in the MAC entries and that has the longest current invalid duration may be determined in the first MAC table, and the second message is sent to the control plane, where the second message carries the second MAC address and the third MAC address. After receiving the second message, the control plane may determine whether the MAC table of the control plane stores the MAC entry of the second MAC address. If the MAC table of the control plane stores the MAC entry of the second MAC address, the control plane directly sends the third MAC entry to the forwarding plane. 
If the MAC entry of the second MAC address is not stored in the MAC table of the control plane, but the control plane stores the third MAC entry of the second MAC address (in this case, the MAC entry of the second MAC address is generally not stored in the MAC table of the control plane, but is stored in another place), it may be determined whether a BGP MAC entry limit is deployed in the control plane. If the BGP MAC entry limit is deployed, the fourth MAC entry (the fourth MAC entry includes the third MAC address) may be deleted from the MAC table stored in the control plane. Then, the third MAC entry (the third MAC entry includes the second MAC address) is stored in the MAC table that is stored in the control plane, and the third MAC entry is sent to the forwarding plane. In addition, generally, the storage space of the MAC entry of the control plane is relatively sufficient. Therefore, the case in which the second MAC address exists in the control plane, the MAC entry of the second MAC address is not stored in the MAC table of the control plane, and the BGP MAC entry limit is not deployed generally does not exist, and therefore is not described herein. Step404: The forwarding plane receives the third MAC entry sent by the control plane. The forwarding plane stores the third MAC entry in the first MAC table, and sets the third MAC entry to a valid state. In implementation, after receiving the third MAC entry sent by the control plane, the forwarding plane may store the third MAC entry in the first MAC table, and set the third MAC entry to the valid state. In this way, when subsequently receiving traffic whose destination address is the second MAC address, the forwarding plane may directly send the traffic based on the third MAC entry without involving the control plane. 
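The request-and-eviction flow of Steps 401 to 404 can be condensed into a sketch. This is a hypothetical illustration: `handle_miss`, `control_lookup`, the `invalid_for` field, and the hard-coded 95% threshold are assumed names and values standing in for the second-message exchange and the first preset value.

```python
OCCUPATION_THRESHOLD = 0.95  # assumed "first preset value" (95%)

def handle_miss(fwd_table, missed_mac, capacity, control_lookup):
    """On a destination-MAC miss, request the entry from the control plane;
    when the table is nearly full, name the invalid entry with the longest
    current invalid duration as the eviction victim (the fourth MAC entry)."""
    victim = None
    if len(fwd_table) / capacity > OCCUPATION_THRESHOLD:
        invalid = [m for m, e in fwd_table.items() if e["state"] == "invalid"]
        if invalid:
            victim = max(invalid, key=lambda m: fwd_table[m]["invalid_for"])
    entry = control_lookup(missed_mac)  # models the second-message exchange
    if entry is None:
        return None                     # control plane has no entry: drop the traffic
    if victim is not None:
        del fwd_table[victim]           # the fourth MAC entry is replaced by the third
    entry["state"] = "valid"            # the third MAC entry is installed as valid
    fwd_table[missed_mac] = entry
    return entry

fwd = {"old-mac": {"state": "invalid", "invalid_for": 120}}
handle_miss(fwd, "new-mac", capacity=1,
            control_lookup=lambda mac: {"outbound_if": "eth2"})
```

Because the victim is deleted only when the control plane actually returns an entry, the total occupied storage stays constant under replacement, matching the swap described for the third and fourth MAC entries.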
In a specific implementation, when the occupation rate of the storage space of the MAC entries of the forwarding plane exceeds the first preset value, the second message further carries the third MAC address, where the third MAC address is a MAC address that is stored by the forwarding plane and has the longest current invalid duration. The forwarding plane can perform the following processing. The forwarding plane deletes the fourth MAC entry from the first MAC table, where the fourth MAC entry stores the third MAC address. In implementation, when the second message carries the third MAC address, after receiving the third MAC entry sent by the control plane, the forwarding plane may delete the fourth MAC entry (the fourth MAC entry includes the third MAC address) from the first MAC table, then add the third MAC entry, and set the state of the third MAC entry to the valid state. In this way, when the storage space of the MAC entries of the forwarding plane is nearly full, the fourth MAC entry is in effect replaced with the third MAC entry, so that the total occupied storage space remains unchanged. Step405: The forwarding plane forwards, based on the third MAC entry, the traffic corresponding to the second MAC address. In implementation, the forwarding plane may forward, based on the third MAC entry, the traffic whose destination address is the second MAC address. In addition, the fourth message may be further used to instruct the forwarding plane not to send the second message to the control plane when subsequently receiving the traffic whose destination address is the second MAC address. When receiving the fourth message, the forwarding plane may record the second MAC address in another MAC table. In this way, if the forwarding plane cannot find the second MAC address in the first MAC table, the forwarding plane may search the another MAC table for the second MAC address. 
If the second MAC address exists, the second message is not sent to the control plane. If the second MAC address does not exist, the second message may be sent to the control plane. In this way, processing resources can be saved. In addition, herein, when sending the second message to the control plane, the forwarding plane may directly forward, in a broadcasting manner, the traffic whose destination address is the second MAC address. In this way, when receiving the fourth message sent by the control plane, the forwarding plane only records subsequently received traffic corresponding to the second MAC address, and no longer reports the second message to the control plane. In a specific implementation, an embodiment of this application further provides corresponding processing in a process in which the forwarding plane sends the MAC table to a control and management device. The forwarding plane sends a third MAC table to the control and management device at intervals of preset duration, where all MAC entries included in the third MAC table are set to a valid state, and the third MAC table is a subset of the first MAC table. The preset duration may be preset and is stored in the forwarding plane. As shown inFIG.5, a connection is established between the control and management device and the network device, and the control and management device may be a software-defined networking (SDN) server or the like. In implementation, after going online, the forwarding plane may obtain, at the intervals of preset duration, a MAC entry that is marked as a valid state in a current first MAC table, to form the third MAC table, and then send the third MAC table to the control and management device. After receiving the third MAC table sent by the forwarding plane, the control and management device may replace a previously stored MAC table with the third MAC table received this time. In this way, the control and management device backs up the MAC entry in the valid state. 
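The periodic backup amounts to filtering the first MAC table down to its valid-state subset. A minimal sketch, with `snapshot_valid` as an assumed name for the per-interval job:

```python
def snapshot_valid(first_mac_table):
    """Build the third MAC table: the valid-state subset of the first MAC
    table that is periodically sent to the control and management device."""
    return {mac: dict(e) for mac, e in first_mac_table.items()
            if e["state"] == "valid"}

first_table = {"aa": {"state": "valid"}, "bb": {"state": "invalid"}}
third_table = snapshot_valid(first_table)  # only "aa" is backed up
```

Copying each entry (`dict(e)`) keeps the backup decoupled from later state changes in the first MAC table, which matches the control and management device replacing its whole stored table on each interval.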
In a specific implementation, when a BGP session between the network device and the BGP peer corresponding to the network device is disconnected and reestablished, the third MAC table may be obtained from the control and management device to guide forwarding of the forwarding plane. As shown inFIG.6, corresponding processing steps may be as follows: Step601: The control plane sends a MAC table obtaining request to the control and management device when the BGP session between the network device and the BGP peer is disconnected and reestablished. In implementation, the BGP session between the network device and the BGP peer corresponding to the network device is disconnected and reestablished when the network device is restarted. Alternatively, the BGP session between the network device and the BGP peer corresponding to the network device is reestablished after the BGP session between the network device and the BGP peer corresponding to the network device is disconnected. In this case, the control plane may send the MAC table obtaining request to the connected control and management device, and may learn the MAC entries from the BGP peer by using the BGP protocol. After receiving the MAC table obtaining request sent by the control plane of the network device, the control and management device may obtain the latest MAC table (namely, the third MAC table) that was received from the forwarding plane of the network device and that is stored in the control and management device, and then send the third MAC table to the control plane. Step602: The control plane receives the third MAC table sent by the control and management device. Step603: The control plane determines a fifth MAC entry based on the learned MAC entries and the third MAC table, where the learned MAC entries are learned by the control plane based on the BGP peer. 
In implementation, after receiving the third MAC table sent by the control and management device, the control plane may determine the fifth MAC entry by using the learned MAC entries and the third MAC table. In a specific implementation, the control plane may determine the fifth MAC entry in the following manner. Corresponding processing in step603may be as follows: The control plane determines a MAC entry that belongs to the third MAC table and belongs to the learned MAC entries as the fifth MAC entry. In implementation, the control plane may determine an intersection set of the MAC entries in the third MAC table and the MAC entries learned from the BGP peer, and determine a MAC entry in the intersection set as the fifth MAC entry. Step604: The control plane sends the fifth MAC entry to the forwarding plane. In a specific implementation, corresponding to a fact that the fifth MAC entry belongs to the third MAC table and belongs to the learned MAC entries, processing in step604may be as follows. The control plane sends the fifth MAC entry and the first identifier to the forwarding plane, where the first identifier is used to instruct the forwarding plane to set the fifth MAC entry to a valid state. Step605: The forwarding plane receives the fifth MAC entry sent by the control plane, where the fifth MAC entry is determined by the control plane based on the learned MAC entries and the third MAC table, and the learned MAC entries are learned by the control plane based on the border gateway protocol BGP peer. In a specific implementation, for step604of sending the first identifier, processing in step605may be as follows. The forwarding plane receives the fifth MAC entry and the first identifier that are sent by the control plane. Step606: The forwarding plane stores the fifth MAC entry in the first MAC table. In implementation, after receiving the fifth MAC entry sent by the control plane, the forwarding plane may store the fifth MAC entry in the first MAC table. 
In a specific implementation, for step605in which the forwarding plane receives the fifth MAC entry and the first identifier that are sent by the control plane, after step606, the following processing may be further performed: The forwarding plane sets the fifth MAC entry to the valid state. It should be noted that, when the occupation rate of the storage space of the MAC entries of the forwarding plane exceeds a second preset value, the control plane may send only the fifth MAC entry to the forwarding plane and set the fifth MAC entry to the valid state without sending a subsequent sixth MAC entry. This can reduce occupation of the storage space of the MAC entries of the forwarding plane. In this way, regardless of whether the storage space of the MAC entries of the forwarding plane is sufficient, the intersection set of the MAC entries previously determined by the forwarding plane to be in the valid state and the MAC entries currently learned based on the BGP peer is sent to the forwarding plane and set to the valid state. This can reduce occupation of the storage space of the MAC entries of the forwarding plane. In a specific implementation, when the occupation rate of the storage space of the MAC entries of the forwarding plane does not exceed the second preset value, the forwarding plane and the control plane may further perform the following processing: The control plane determines a MAC entry that belongs to only the third MAC table or belongs to only the learned MAC entries as the sixth MAC entry. The control plane sends the sixth MAC entry and the second identifier to the forwarding plane, where the second identifier is used to instruct the forwarding plane to set the sixth MAC entry to an invalid state. 
The forwarding plane receives the sixth MAC entry and the second identifier that are sent by the control plane, where the sixth MAC entry belongs to only the third MAC table or belongs to only the learned MAC entries, and the second identifier is used to instruct the forwarding plane to set the sixth MAC entry to the invalid state. The forwarding plane stores the sixth MAC entry in the first MAC table, and sets the sixth MAC entry to the invalid state. The second preset value may be preset and stored in the control plane, and the second preset value herein may be the same as the first preset value. In implementation, before delivering a MAC entry to the forwarding plane, the control plane may send an obtaining request to the forwarding plane, where the obtaining request is used to request to obtain an occupation rate of storage space of the MAC entry. After receiving the obtaining request, the forwarding plane may determine a current occupation amount of the storage space of the MAC entry and a total amount of the storage space, compare the occupation amount with the total amount to obtain a current occupation rate of the MAC entry, and send the occupation rate to the control plane. After receiving the occupation rate, the control plane may determine whether the occupation rate exceeds the second preset value. If the occupation rate does not exceed the second preset value, the control plane may determine a MAC entry that belongs to only the third MAC table or belongs to only the MAC entries learned from the BGP peer as the sixth MAC entry, and then send the sixth MAC entry and the second identifier to the forwarding plane. After receiving the sixth MAC entry and the second identifier, the forwarding plane may store the sixth MAC entry in the first MAC table, and set the state of the sixth MAC entry to the invalid state. 
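The split between fifth and sixth MAC entries is a pair of set operations over MAC addresses: the fifth entries are the intersection of the backed-up third MAC table and the newly learned set, and the sixth entries are the symmetric difference. A sketch under an assumed function name:

```python
def classify_after_restart(backup_macs, learned_macs):
    """Addresses in both the backed-up third MAC table and the newly learned
    set are fifth MAC entries (delivered with the first identifier, valid);
    addresses in only one of the two sets are sixth MAC entries (delivered
    with the second identifier, invalid, so they age out quickly)."""
    fifth = backup_macs & learned_macs   # intersection -> valid state
    sixth = backup_macs ^ learned_macs   # symmetric difference -> invalid state
    return fifth, sixth

fifth, sixth = classify_after_restart({"aa", "bb"}, {"bb", "cc"})
```

Delivering the symmetric difference as invalid rather than omitting it keeps those entries available for forwarding while still letting the aging mechanism reclaim them if no traffic confirms their use.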
In this way, when the storage space of the forwarding plane is sufficient, the control plane may indicate the forwarding plane to distinguish between the MAC entry in the valid state and a MAC entry in an invalid state, and the MAC entry in the invalid state may be deleted as soon as possible. This can save the storage space of the MAC entries of the forwarding plane. In addition, a large quantity of MAC entries are delivered, which can reduce a probability that traffic is multicast or broadcast. In addition, when the entire storage space of the MAC entries of the forwarding plane is occupied, preset warning information may be sent. In a specific implementation, when a BGP session between the network device and the BGP peer in the network in which the network device is located is disconnected and reestablished, to shorten traffic interruption duration, the following processing may be performed: When the BGP session between the network device and the BGP peer is disconnected and reestablished, the control plane sends the MAC table obtaining request to the control and management device. The control plane receives the third MAC table sent by the control and management device. The control plane sends MAC entries in the third MAC table and the second identifier to the forwarding plane, where the second identifier is used to instruct the forwarding plane to set the MAC entries in the third MAC table to an invalid state. The forwarding plane receives the second identifier and the MAC entries in the third MAC table that are sent by the control plane. The forwarding plane stores the MAC entries in the third MAC table in the first MAC table, and sets the MAC entries in the third MAC table to an invalid state based on the second identifier. In implementation, the BGP session between the network device and the BGP peer corresponding to the network device is disconnected and reestablished when the network device is restarted. 
Alternatively, the BGP session between the network device and the BGP peer corresponding to the network device is reestablished after the BGP session between the network device and the BGP peer corresponding to the network device is disconnected. In this case, the control plane may send the MAC table obtaining request to the connected control and management device, and may learn the MAC entries from the BGP peer by using the BGP protocol. After receiving the obtaining request, the control and management device may obtain a MAC table stored in the control and management device, namely, the third MAC table, and send the third MAC table to the control plane. After receiving the third MAC table sent by the control and management device, the control plane may send the third MAC table and the second identifier to the forwarding plane. After receiving the third MAC table and the second identifier, the forwarding plane may store the MAC entries in the third MAC table in the first MAC table, and set these MAC entries to the invalid state. In this way, because the time required to obtain the third MAC table is far less than that required to learn the MAC entries from the BGP peer, the traffic interruption duration can be shortened. After learning the MAC entries from the BGP peer subsequently, the control plane may determine the intersection set of the MAC entries learned from the BGP peer and the MAC entries in the third MAC table, and send the MAC entry corresponding to the intersection set and the first identifier to the forwarding plane. After receiving the first identifier, the forwarding plane may update states of the MAC entries in the intersection set from an invalid state to a valid state. In this way, these MAC entries can be prevented from being deleted, and a large amount of traffic can be prevented from being broadcast. In this way, the forwarding plane periodically sends the MAC entries in the valid state to the control and management device. 
After the BGP session between the network device and the BGP peer is restored, the MAC entries in the valid state can be delivered to the forwarding plane as soon as possible to guide traffic forwarding of the forwarding plane. This can reduce a risk of broadcasting the large amount of traffic. In the embodiments of this application, the control plane of the network device may obtain the first MAC entry, and send the first MAC entry and the first identifier to the forwarding plane of the network device. The forwarding plane of the network device stores, in the first MAC table, the first MAC entry sent by the control plane, and the state of the first MAC entry is set to the valid state. After the first MAC entry is stored in the forwarding plane, the forwarding plane changes the state of the first MAC entry to the invalid state after the first preset duration elapses and when the first preset condition is met, and sets the invalid duration for the first MAC entry. The first preset condition is that the forwarding plane does not receive the traffic corresponding to the first MAC address within the first preset duration after storing the first MAC entry, and the first MAC address is a MAC address stored in the first MAC entry. In this way, after a MAC entry is stored in the forwarding plane, the MAC entry is set to an invalid state if no traffic matches the MAC entry for a period of time. This makes management of the MAC entry more reasonable. It should be further noted that, after a virtual private LAN service (VPLS) network evolves into the EVPN, because learning manners of MAC entries in the two technologies are different, a network device in the network has a higher requirement for a capacity of the MAC entry.
This application solves this problem based on the foregoing processing, and the reasons are as follows: On the premise of retaining an original advantage of the EVPN, because the MAC entry in the valid state can be identified, the MAC entry in the valid state is preferentially delivered. This reduces the capacity requirement of the EVPN for MAC entries, so that the capacity requirement of the EVPN for a MAC entry in the network device is basically the same as that of the VPLS for the MAC entry in the network device. Therefore, evolution from the VPLS to the EVPN is smoother. FIG. 7 is a structural diagram of a network device for managing a MAC table according to an embodiment of this application. The network device may be implemented as a part or all of the network device by using software, hardware, or a combination of software and hardware. The network device provided in this embodiment of this application may implement the procedures in FIG. 3, FIG. 4, and FIG. 6 in the embodiments of this application. A forwarding plane of the network device includes a storage module 710, a processing module 720, a sending module 730, and a receiving module 740. The storage module 710 is configured to store, in a first MAC table, a first MAC entry sent by a control plane, where a state of the first MAC entry is set to a valid state. The storage module 710 may specifically implement a storage function in the foregoing step 303 and other implicit steps. The processing module 720 is configured to change the state of the first MAC entry from the valid state to an invalid state, and set invalid duration for the first MAC entry. The processing module 720 may specifically implement a processing function in the foregoing step 304 and other implicit steps.
After the first MAC entry is stored in the forwarding plane, the state of the first MAC entry is changed to the invalid state after first preset duration elapses and when a first preset condition is met. The first preset condition is that the forwarding plane does not receive traffic corresponding to a first MAC address within the first preset duration after storing the first MAC entry, and the first MAC address is a MAC address included in the first MAC entry. In a specific implementation, the sending module730is further configured to: send a first message to the control plane, where the first message is used to instruct the control plane to set a second MAC entry that is stored in a second MAC table stored in the control plane to an invalid state, and the second MAC entry includes the first MAC address. In a specific implementation, the processing module720is further configured to: delete the first MAC entry from the first MAC table, where after the first MAC entry is set to the invalid state, the first MAC entry is deleted after the invalid duration elapses and a second preset condition is met, and the second preset condition is that the forwarding plane does not receive the traffic corresponding to the first MAC address within the invalid duration after setting the first MAC entry to the invalid state. In a specific implementation, the processing module720is further configured to: change the state of the first MAC entry from the invalid state to the valid state, where the state of the first MAC entry is changed to the valid state after the forwarding plane receives the traffic corresponding to the first MAC address within the invalid duration. 
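Taken together, the valid/invalid transitions described above form a small aging state machine. A minimal sketch under simplifying assumptions (integer time ticks; the class and function names are hypothetical, not part of this application):

```python
VALID, INVALID = "valid", "invalid"

class MacEntry:
    """A forwarding-plane MAC entry with an aging state."""
    def __init__(self, mac):
        self.mac = mac
        self.state = VALID
        self.idle = 0   # time units since the entry last matched traffic

def on_traffic(entry):
    """Traffic for the entry's MAC address arrived: reset aging and,
    if the entry is within its invalid duration, revive it."""
    entry.idle = 0
    if entry.state == INVALID:
        entry.state = VALID

def tick(table, entry, first_preset, invalid_duration):
    """Advance one time unit and age the entry through its states."""
    entry.idle += 1
    if entry.state == VALID and entry.idle >= first_preset:
        entry.state = INVALID   # no traffic within the first preset duration
        entry.idle = 0          # start the invalid-duration timer
    elif entry.state == INVALID and entry.idle >= invalid_duration:
        del table[entry.mac]    # no traffic within the invalid duration
```

Under this sketch, an entry that sees no traffic for `first_preset` ticks turns invalid, and is deleted after a further `invalid_duration` ticks unless matching traffic revives it in the meantime.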
In a specific implementation, the sending module730is further configured to: send a second message to the control plane, where the second message carries a second MAC address and is used to request a third MAC entry, and the third MAC entry includes the second MAC address; the receiving module740is further configured to: receive the third MAC entry sent by the control plane; and the processing module720is further configured to: store the third MAC entry in the first MAC table, and set the third MAC entry to a valid state; and forward, based on the third MAC entry, traffic corresponding to the second MAC address. In a specific implementation, when an occupation rate of storage space of MAC entries of the forwarding plane exceeds a first preset value, the second message further carries a third MAC address, where the third MAC address is a MAC address that is stored by the forwarding plane and has longest current invalid duration; and the processing module720is further configured to delete a fourth MAC entry from the first MAC table, where the fourth MAC entry stores the third MAC address. In a specific implementation, the sending module730is further configured to: send a third MAC table to a control and management device at intervals of preset duration, where all MAC entries included in the third MAC table are set to a valid state, and the third MAC table is a subset of the first MAC table. In a specific implementation, the receiving module740is further configured to: receive a fifth MAC entry sent by the control plane, where the fifth MAC entry is determined by the control plane based on learned MAC entries and the third MAC table, and the learned MAC entries are learned by the control plane based on a border gateway protocol BGP peer; and the storage module710is further configured to store the fifth MAC entry in the first MAC table. 
In a specific implementation, the receiving module740is further configured to: receive the fifth MAC entry and a first identifier that are sent by the control plane, where the fifth MAC entry belongs to the third MAC table and belongs to the learned MAC entries, and the first identifier is used to instruct the forwarding plane to set the fifth entry to a valid state; and the processing module720is further configured to set the fifth MAC entry to the valid state. In a specific implementation, the occupation rate of the storage space of the MAC entries of the forwarding plane does not exceed a second preset value; the receiving module740is further configured to receive a sixth MAC entry and a second identifier that are sent by the control plane, where the sixth MAC entry belongs to only the third MAC table or belongs to only the learned MAC entries, and the second identifier is used to instruct the forwarding plane to set the sixth entry to an invalid state; the storage module710is further configured to store the sixth MAC entry in the first MAC table; and the processing module720is further configured to set the sixth MAC entry to the invalid state. In a specific implementation, the receiving module740is further configured to: receive the second identifier and the MAC entries in the third MAC table that are sent by the control plane; the storage module710is further configured to store the MAC entries in the third MAC table in the first MAC table; and the processing module720is further configured to set the MAC entries in the third MAC table to an invalid state based on the second identifier. FIG.8is a structural diagram of a network device for managing MAC according to an embodiment of this application. The network device may be implemented as a part or all of the network device by using software, hardware, or a combination of software and hardware. 
The network device provided in this embodiment of this application may implement the procedures in FIG. 3, FIG. 4, and FIG. 6 in the embodiments of this application. A control plane of the network device includes a processing module 810, a sending module 820, and a receiving module 830. The processing module 810 is configured to obtain a first MAC entry, and may specifically implement a processing function in the foregoing step 303 and other implicit steps. The sending module 820 is configured to send the first MAC entry and a first identifier to a forwarding plane of the network device, and may specifically implement a sending function in the foregoing step 302 and other implicit steps. The first identifier is used to instruct the forwarding plane to set the first MAC entry to a valid state in a first MAC table, and the first MAC entry includes a first MAC address. In a specific implementation, the receiving module 830 is further configured to: receive a first message sent by the forwarding plane, where the first message carries the first MAC address and is used to instruct the control plane to set a second MAC entry that is stored in a second MAC table stored in the control plane to an invalid state, and the second MAC entry includes the first MAC address; and the processing module 810 is further configured to set the second MAC entry that is stored in the second MAC table stored in the processing module 810 to the invalid state. In a specific implementation, the receiving module 830 is further configured to: receive a second message sent by the forwarding plane, where the second message carries a second MAC address; and the sending module 820 is further configured to send a third MAC entry to the forwarding plane, where the third MAC entry includes the second MAC address.
In a specific implementation, when an occupation rate of storage space of MAC entries of the forwarding plane exceeds a first preset value, the second message further carries a third MAC address, where the third MAC address is a MAC address that is stored by the forwarding plane and has the longest current invalid duration; and the sending module 820 is further configured to: if the second MAC address exists in the control plane, and the second MAC address exists in a stored MAC table, send the third MAC entry to the forwarding plane; or if the second MAC address exists in the control plane, the second MAC address does not exist in a stored MAC table, and a border gateway protocol (BGP) MAC entry limit is deployed, after a fourth MAC entry is deleted from the stored MAC table, add the third MAC entry to the stored MAC table, and send the third MAC entry to the forwarding plane, where the fourth MAC entry includes the third MAC address. In a specific implementation, the sending module 820 is further configured to: send a MAC table obtaining request to a control and management device when a BGP session between the network device and a BGP peer is disconnected and reestablished; the receiving module 830 is further configured to receive a third MAC table sent by the control and management device; the processing module 810 is further configured to determine a fifth MAC entry based on learned MAC entries and the third MAC table, where the learned MAC entries are learned by the control plane based on the BGP peer; and the sending module 820 is further configured to send the fifth MAC entry to the forwarding plane.
In a specific implementation, the processing module810is further configured to determine a MAC address that belongs to the third MAC table and belongs to the learned MAC entries as the fifth MAC entry; and the sending module820is further configured to send the fifth MAC entry and the first identifier to the forwarding plane, where the first identifier is used to instruct the forwarding plane to set the fifth MAC entry to a valid state. In a specific implementation, the occupation rate of the storage space of the MAC entries of the forwarding plane does not exceed a second preset value; the processing module810is further configured to determine a MAC entry that belongs to only the third MAC table or belongs to only the learned MAC entries as a sixth MAC entry; and the sending module820is further configured to send the sixth MAC entry and a second identifier to the forwarding plane, where the second identifier is used to instruct the forwarding plane to set the sixth MAC entry to an invalid state. In a specific implementation, the sending module820is further configured to: send a MAC table obtaining request to a control and management device when a BGP session between the network device and a BGP peer is disconnected and reestablished; the receiving module830is further configured to receive a third MAC table sent by the control and management device; and the sending module820is further configured to send the MAC entries in the third MAC table and a second identifier to the forwarding plane, where the second identifier is used to instruct the forwarding plane to set the MAC entries in the third MAC table to an invalid state. In the embodiments of this application, the control plane of the network device may obtain the first MAC entry, and send the first MAC entry and the first identifier to the forwarding plane of the network device. The forwarding plane stores the first MAC entry in the first MAC table, and the state of the first MAC entry is set to the valid state. 
After the first MAC entry is stored in the forwarding plane, the forwarding plane changes the state of the first MAC entry to the invalid state after the first preset duration elapses and when the first preset condition is met, and sets the invalid duration for the first MAC entry. The first preset condition is that the forwarding plane does not receive the traffic corresponding to the first MAC address within the first preset duration that elapses after the first MAC entry is stored, and the first MAC address is a MAC address stored in the first MAC entry. In this way, after a MAC entry is stored in the forwarding plane, the MAC entry is set to an invalid state if no traffic matches the MAC entry for a period of time. This makes management of the MAC entry more reasonable. It should be noted that, when the network device for managing a MAC table provided in the foregoing embodiment manages the MAC table, division into the functional modules is merely used as an example for description. In actual application, the foregoing functions may be allocated to different functional modules for implementation based on a requirement. To be specific, an internal structure of the network device is divided into the different functional modules, to implement all or some of the foregoing functions. In addition, the embodiment of the network device for managing a MAC table and the embodiment of the method for managing a MAC table belong to a same inventive concept. For a specific implementation process of the embodiment of the network device for managing a MAC table, refer to the method embodiment, and details are not described herein again. Optionally, an embodiment of this application further provides a computer-readable storage medium. The computer-readable storage medium stores instructions. When the instructions are run on a network device, the network device is enabled to perform the method for managing a MAC table.
Optionally, an embodiment of this application further provides a computer program product including instructions. When the computer program product runs on a network device, the network device is enabled to perform the method for managing a MAC table. All or some of the foregoing embodiments may be implemented by using software, hardware, firmware, or any combination thereof. When software is used for implementation, all or some of the embodiments may be implemented in a form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a server or a terminal, all or some of the procedures or functions according to the embodiments of this application are generated. The computer instructions may be stored in a computer-readable storage medium or may be transmitted from a computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by a server or a terminal, or a data storage device, for example, a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital video disk (DVD)), or a semiconductor medium (for example, a solid-state drive). The foregoing descriptions are merely an embodiment of this application, but are not intended to limit this application.
Any modification, equivalent replacement, or improvement made without departing from the spirit and principle of this application should fall within the protection scope of this application.
DETAILED DESCRIPTION OF THE INVENTION
In the following description, numerous details are set forth for purposes of explanation. However, one of ordinary skill in the art will realize that the invention can be practiced without the use of these specific details. Thus, the present invention is not intended to be limited to the embodiments shown but is to be accorded the widest scope consistent with the principles and features described herein. FIG. 1 illustrates an exemplary 5-dimensional packet matching rule 100. The five dimensions refer to the five header fields: a source IP field (labeled as src.IP), a destination IP field (labeled as dest.IP), a source port field (labeled as src.port), a destination port field (labeled as dest.port), and a protocol field. The rule 100 includes a match key that is a combination of values, one for each header field. The rule 100 also includes control data that indicates a corresponding action to take when an incoming packet matches the match key. Each field in the rule is allowed different types of matches, including exact match, wildcard match, prefix match (e.g., exact match followed by wildcard match) and range match. The use of a wildcard (*) in a field with preceding values represents a prefix match, while the standalone use of a wildcard in a field indicates that any arbitrary value can match. In FIG. 1, the source IP field and the destination IP field both include values that are prefix specifications, the source port field includes a value that is a wildcard specification, the destination port field includes a value that is a range specification, and the protocol field includes a value that is an exact match specification. The corresponding action associated with the match key of the rule 100 is drop, meaning that if a packet arriving at a network device, such as a network switch, matches the specifications or the match key of the rule 100, then the packet will be dropped by the network switch.
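A minimal sketch of such a five-field rule and its matching logic; the predicate helpers, field values, and range bounds below are illustrative assumptions (the patent's figure fixes its own values), not the patent's implementation:

```python
# Hypothetical predicate helpers for the four match types of rule 100.
def prefix(spec):
    """Prefix match, e.g. '162.192.*.*': the part before the first '*'
    must match exactly."""
    fixed = spec.split("*", 1)[0]
    return lambda v: v.startswith(fixed)

def wildcard():
    return lambda v: True               # any value matches

def in_range(lo, hi):
    return lambda v: lo <= v <= hi      # inclusive range match

def exact(x):
    return lambda v: v == x

# An illustrative rule shaped like rule 100 (field values are made up).
rule = {
    "key": {
        "src_ip": prefix("128.*.*.*"),
        "dst_ip": prefix("162.192.*.*"),
        "src_port": wildcard(),
        "dst_port": in_range(1000, 2000),
        "protocol": exact(6),           # e.g. TCP
    },
    "action": "drop",
}

def match(rule, packet):
    """Return the rule's action if every field predicate accepts the packet."""
    if all(pred(packet[f]) for f, pred in rule["key"].items()):
        return rule["action"]
    return None
```

Applying `match` to a packet whose five header fields satisfy all predicates returns "drop"; any field miss returns None.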
Although the control data of the rule100includes a single action, the control data of a rule can include a plurality of actions. As explained above, prior art network devices implement solutions that are based predominantly on ternary content addressable memory (TCAM). In these traditional solutions, as illustrated inFIG.2, a key is indexed into TCAM205, which performs parallel matching of all entries in the TCAM205and returns an address. The returned address is then indexed into an accompanying SRAM210, which provides final data such as control data (e.g., drop). The TCAM205has a built-in priority encoding scheme, meaning that the first matched entry is the final matched entry and is indexed into the SRAM210. In other words, if multiple entries in the TCAM205match the key, then the TCAM205returns the address of the first matched entry in the TCAM205. As such, priority is inherent based on the ordering of the entries in the TCAM205. InFIG.2, the key matches three entries in the TCAM205. The address of the first matched entry (labeled as “Matching pattern 0”) is returned and indexed into the SRAM210, which returns the control data (labeled as “Matching data”). Although TCAM-based solutions are relatively fast to implement, these solutions have high cost and power consumption. In addition, these solutions are inefficient in handling range specifications and inserting new entries. Embodiments of the present invention are directed to a wildcard matching solution that uses a combination of static random access memories (SRAMs) and ternary content addressable memories (TCAMs) in a hybrid solution. In particular, the wildcard matching solution uses a plurality of SRAM pools for lookup and a spillover TCAM pool for unresolved hash conflicts. The wildcard matching solution is based on an observation made by the inventors regarding prefix specifications and range specifications. 
The observation, referring to FIG. 1, is that the most significant bits of each dimension are very likely to be an exact match and, hence, are hashable. In some embodiments, the plurality of SRAM pools is configured to store entries that are hashable and the spillover TCAM pool is configured to store entries that are not hashable as well as entries that cannot be inserted into the plurality of SRAM pools due to hash conflicts. Hashability of entries is further discussed below. FIG. 3 illustrates an exemplary implementation 300 of the wildcard matching solution in accordance with some embodiments. The implementation 300 includes a request interface control logic 305, a plurality of SRAM pools 310 and at least one spillover TCAM pool 315. In some embodiments, each SRAM pool 310 includes 16 SRAM tiles for parallel hashing, and the TCAM pool 315 includes a plurality of TCAM databases as a basic matching unit. Details of the SRAM pool 310 and the TCAM pool 315 are discussed below. Although FIG. 3 shows a single spillover TCAM pool, a plurality of spillover TCAM pools is contemplated. The implementation 300 of the wildcard matching solution is typically carried out on a network device, such as a network switch. First, at a high level, a search request (e.g., search key) is received at the request interface control logic 305. Upon receiving the request, the control logic 305 dispatches the request data to one or more of the pools that are active, where local matching takes place. After each of the active pools performs local matching, the next level arbitration takes place to determine which of the active pools has the priority to return the final data (e.g., control data). The request interface control logic 305 or another logic in the network device performs the arbitration. Table 1 lists exemplary configurations of a hybrid wildcard match (WCM) table. The hybrid WCM table resides in any of one or a combination of one or more of the SRAM pools 310 and the spillover TCAM pool 315.
Alternatively or in addition, the hybrid WCM table resides in another part of memory of the network device.

TABLE 1

Per Request Interface:
  tcam_pool_valid (1 bit) - Use spillover TCAM pool
  sram_pool_bitmap (4 bits) - SRAM pools used
  key_size (4 bits) - 128/192/384-bit key
  data_size (4 bits) - In unit of byte

Per TCAM Pool:
  tcam_db_start (3 bits) - TCAM DB usage within this pool
  tcam_db_end (3 bits) - TCAM DB usage within this pool
  key_size (4 bits) - 128/192/384-bit key
  data_size (4 bits) - In unit of byte

Per SRAM Pool:
  key_size (2 bits) - 128/192/384-bit key
  key_tile_start (4 bits) - key_tile_start ≤ key_tile_end
  key_tile_end (4 bits) - key_tile_start ≤ key_tile_end
  entry_encode_mode (16 bits) - Pattern encoding mode, one bit per tile
  key_map_way_{0..11} (32 × 12 bits) - Mask for bits that participate in hashing, 4/6/12-bit granularity for 128/192/384-bit key, respectively
  data_size (4 bits) - In unit of byte
  data_tile_start (4 bits) - data_tile_start ≤ data_tile_end ≤ 15
  data_tile_end (4 bits) - data_tile_start ≤ data_tile_end ≤ 15

The hybrid WCM table is accessed by the request interface control logic 305, the SRAM pools 310 and the spillover TCAM pool 315 of FIG. 3 and provides each of the units 305-315 corresponding configurations or specifications. For example, the hybrid WCM table includes configurations for the request interface control logic 305 regarding whether to access the spillover TCAM pool, which of the SRAM pools to access, what the size of the key being compared is and what the size of the action being returned is. The hybrid WCM table also includes configurations for each spillover TCAM pool and for each SRAM pool. In some embodiments, the hybrid WCM table is automatically updated, manually updated via software, or both. FIG. 4 illustrates an exemplary diagram of a TCAM pool 400 in accordance with some embodiments. In some embodiments, the TCAM pool 315 of FIG. 3 is similarly configured as the TCAM pool 400. The TCAM pool 400 is organized into eight different databases. A database has six TCAM tiles, which are configurable into one of three different modes with different key sizes. In some embodiments, a TCAM tile is 64-bits wide and 512-entries deep, although other sizes are possible.
However, based on this size, in one mode (mode A), the six TCAM tiles can be arranged as 128-bits wide by 1.5 k entries deep. In another mode (mode B), the six TCAM tiles can be arranged as 192-bits wide by 1 k entries deep. In yet another mode (mode C), the six TCAM tiles can be arranged as 384-bits wide by 512 entries deep. For each database, there is an accompanying dedicated SRAM for data return. The dedicated SRAM can be a part of the SRAM pools or separate from the SRAM pools. In some embodiments, a single dedicated SRAM is used for all eight databases. FIG. 5 illustrates an exemplary diagram of a SRAM pool 500 in accordance with some embodiments. In some embodiments, each of the SRAM pools 310 of FIG. 3 is similarly configured as the SRAM pool 500. The SRAM pool 500 includes multiple SRAM tiles. For example, the SRAM pool 500 includes 16 SRAM tiles. These multiple physical memory tiles can be logically organized to represent a different logic table width. In some embodiments, two physical memory tiles are grouped to store one set of logic data. As such, the SRAM pool 500 includes eight pairs of SRAM tiles. The mechanism for the pairing will become more evident as entry insertions and table lookups are discussed. In some embodiments, the logical organization of these multiple SRAM tiles is reconfigurable. In some embodiments, every SRAM tile is 256-bits wide and 2048-lines deep, although other sizes are possible. Data in the SRAM are stored as separate logical entries in the SRAM. Entries could be of different width based on application. In some embodiments, a logical entry could be 256-bits wide, where every row of a SRAM tile is an entry. A logical entry could also be 512-bits wide, in which case, one row of a SRAM tile (256-bits wide) is concatenated with a row of the other SRAM tile (256-bits wide) in the corresponding pair to be 512-bits wide. As further discussed below, each entry in a SRAM tile stores a pattern.
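The tile pairing described above can be sketched as a simple address-mapping helper; the function name and the adjacent-tile pairing convention are assumptions for illustration, not taken from the patent:

```python
TILES, ROWS, ROW_BITS = 16, 2048, 256   # one SRAM pool: 16 tiles, 256b x 2048

def locate(way, index, entry_bits=256):
    """Map a hash way and row index to physical (tile, row) locations.

    For 256-bit entries each tile is one way, so the entry occupies a
    single row of one tile. For 512-bit entries every pair of tiles forms
    one way, and the entry concatenates the same row of both tiles."""
    if entry_bits == 256:
        assert 0 <= way < TILES and 0 <= index < ROWS
        return [(way, index)]
    elif entry_bits == 512:
        assert 0 <= way < TILES // 2 and 0 <= index < ROWS
        return [(2 * way, index), (2 * way + 1, index)]
    raise ValueError("unsupported entry width")

# A 512-bit entry in way 3 spans row `index` of tiles 6 and 7.
```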
Assume the SRAM pool 500 is an active pool, which would be indicated as such by the sram_pool_bitmap field in the hybrid WCM table. Once a key arrives at each of the pairs of tiles in the SRAM pool 500, a corresponding hash function is applied to the key. The hash function maps the key to a narrower index or address, which is used to address the SRAM tiles to retrieve an entry. In some embodiments, if a table entry (pattern) is 256-bits wide, then the entire entry is stored in one SRAM line (e.g., each tile is one way). In some embodiments, if a table entry is 512-bits wide, then the entire entry spans across the two SRAM tiles (e.g., every two tiles consist of one way). This is regardless of the input key width. As such, when a search key is forwarded to the SRAM pool 500, each way applies a hash mask and hash function to the search key to look up the entry. It should be noted that the hash mask is also able to be referred to as a "key map" or "way mask." In some embodiments, the SRAM pool 500 can do a maximum 16-way hash for narrow entries. In some embodiments, for hardware cost purposes, control data is stored in two pairs of SRAM tiles and entries are stored in six pairs of SRAM tiles. As such, in these embodiments, the SRAM pool 500 is limited to a 12-way hash for a key of 256-bits or less and to a 6-way hash for a key of 512-bits or less. Table 2 lists exemplary entry formats of the SRAM entry table. The SRAM entry table shows support for three different key sizes or formats: 128-bit key, 192-bit key and 384-bit key. The 128-bit key can fit in one of two types of 256-bit entries. The 192-bit key fits in a 512-bit entry. The 384-bit key entry fits in a 512-bit entry.

TABLE 2

128-bit key (256-bit entry):
  priority (12 bits) - In case of multiple matches, select final match based on priority
  pattern (205 bits) - Encoded pattern, 0, 1 or X (don't care)
  dbyte_sel_{0, 1} (6 bits) - Select up to two 16-bit fields as range specification. Value represented in the pattern field is used as lower bound.
  dbyte_up_bound_{0, 1} (32 bits)
  dbyte_range_concat (1 bit) - If set, combine both 16-bit fields for one 32-bit comparison

128-bit key (256-bit entry):
  priority (20 bits) - In case of multiple matches, select final match based on priority
  value (128 bits)
  byte_mask (16 bits) - If a bit is 0, a corresponding byte of the value field is "don't care"
  dbyte_sel_{0..3} (12 bits) - Select up to four 16-bit fields as range specification
  dbyte_up_bound_{0..3} (64 bits)
  dbyte_range_concat (3 bits) - If bit i is set, combine field i and i + 1 for 32-bit comparison
  reserved (13 bits)

192-bit key (512-bit entry):
  priority (19 bits) - In case of multiple matches, select final match based on priority
  value (192 bits)
  bit_mask (192 bits) - If a bit is 0, a corresponding bit of the value field is "don't care"
  dbyte_sel_{0..4} (25 bits) - Select up to five 16-bit fields as range specification
  dbyte_up_bound_{0..4} (80 bits)
  dbyte_range_concat (4 bits) - If bit i is set, combine field i and i + 1 for 32-bit comparison

384-bit key (512-bit entry):
  priority (15 bits) - In case of multiple matches, select final match based on priority
  value (384 bits)
  byte_mask (48 bits) - If a bit is 0, a corresponding byte of the value field is "don't care"
  dbyte_sel_{0, 1, 2} (15 bits) - Select up to three 16-bit fields as range specification
  dbyte_up_bound_{0, 1, 2} (48 bits)
  dbyte_range_concat (2 bits) - If bit i is set, combine field i and i + 1 for 32-bit comparison

As shown in the SRAM entry table, an explicit priority is provided for each entry. As noted above, priority is implicit within TCAM based on the location of the entry. In the hybrid SRAM and TCAM scheme, each TCAM entry also requires explicit priority to be arbitrated with potential SRAM entry matches. Priority must be specified in the SRAM entry table since priority is decoupled from the addressing itself. The pattern of the entry is encoded. The SRAM entry table also provides a range specification and whether concatenation is set for each entry. The SRAM entry table provides, for some formats, a mask that indicates which byte(s) or bit(s) of the value field is a "don't care" or wildcard.
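The explicit-priority arbitration described above can be sketched as follows; the tuple layout and the lowest-value-wins convention are assumptions for illustration, since the text does not fix a numeric ordering:

```python
def arbitrate(matches):
    """Pick the final match among candidates from all active pools.

    Each candidate is (priority, control_data). Unlike a pure TCAM, where
    priority is implicit in entry ordering, the hybrid scheme carries an
    explicit priority with every SRAM and TCAM entry, so candidates from
    different pools can be compared directly. Assumes the lowest numeric
    priority value wins."""
    if not matches:
        return None                       # no pool produced a match
    return min(matches, key=lambda m: m[0])[1]
```

For example, with candidates (5, "permit") from an SRAM pool and (2, "drop") from the spillover TCAM pool, the arbiter returns "drop".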
In some embodiments, the SRAM entry table is either automatically updated or manually updated via software. Representing an N-bit pattern with wildcards as a binary value requires N×log2(3), or about 1.6N, bits. For example, a 5-bit pattern with wildcards fits in an 8-bit binary value. Three coefficient values are used to represent a 0 value, a 1 value and a wildcard. In particular, the coefficient value of 0 indicates a 0 value, the coefficient value of 1 indicates a 1 value, and the coefficient value of 2 indicates a wildcard. For example, the encoded 5-bit pattern 5′b01XXX, with X being the wildcard, is the 8-bit binary value 8′d53, which is equivalent to 8′h35. Specifically, the 8-bit binary value is the total of 2×3^0 (for the 0th bit in the 5-bit pattern), 2×3^1 (for the 1st bit in the 5-bit pattern), 2×3^2 (for the 2nd bit in the 5-bit pattern), 1×3^3 (for the 3rd bit in the 5-bit pattern), and 0×3^4 (for the 4th bit in the 5-bit pattern). Conversely, the 8-bit binary value 8′d53 decodes to the 5-bit pattern 5′b01XXX. Specifically, the coefficient values for the bits of the 5-bit pattern are: (53/3^0)%3=2 (or X for the 0th bit in the 5-bit pattern), (53/3^1)%3=2 (or X for the 1st bit in the 5-bit pattern), (53/3^2)%3=2 (or X for the 2nd bit in the 5-bit pattern), (53/3^3)%3=1 (or 1 for the 3rd bit in the 5-bit pattern), and (53/3^4)%3=0 (or 0 for the 4th bit in the 5-bit pattern). These calculations use integer division. Referring back to Table 2, it should be noted that for a 16-bit or 32-bit range specification, if a value contains an X (e.g., "don't care" or wildcard), then the corresponding dbyte_up_bound field is not valid. Similarly, if a dbyte_up_bound is less than the corresponding value, then the dbyte_up_bound field is not valid. Entries are each inserted as a pattern into the SRAM pools and the TCAM pool. In some embodiments, the entries are software configured into the pools.
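The base-3 encoding and decoding just described can be illustrated with a short sketch; the function names are ours for illustration, not part of the embodiment:

```python
def encode_pattern(bits):
    """Encode a pattern string such as '01XXX' (leftmost character is the
    most significant bit) into a binary value, using coefficient 0 for a
    0 bit, 1 for a 1 bit and 2 for a wildcard (X)."""
    coeff = {'0': 0, '1': 1, 'X': 2}
    value = 0
    for i, ch in enumerate(reversed(bits)):  # i is the bit position
        value += coeff[ch] * (3 ** i)
    return value

def decode_pattern(value, n):
    """Decode a binary value back into an n-bit pattern string using
    integer division, as in the text."""
    digit = {0: '0', 1: '1', 2: 'X'}
    return ''.join(digit[(value // (3 ** i)) % 3] for i in reversed(range(n)))

# The worked example from the text: 5'b01XXX encodes to 8'd53 (8'h35).
assert encode_pattern('01XXX') == 53
assert decode_pattern(53, 5) == '01XXX'
```

The encoder walks the pattern from the least significant bit upward, accumulating coefficient×3^i, which reproduces the 2×3^0 + 2×3^1 + 2×3^2 + 1×3^3 + 0×3^4 = 53 arithmetic above.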
Referring back to Table 1, each SRAM way (an SRAM tile or pair of tiles, depending on entry width) is associated with a key_map, which indicates whether that SRAM way is hashable for an entry to be inserted. The key_map masks the bits of the entry that participate in the hash function. If an SRAM way is hashable, then the entry is inserted into that SRAM way, with any X-bit of the pattern masked off from hashing. For example, suppose a 128-bit entry of {128.*.*.*, 162.192.*.*, 16′d456, 16′d31002, 8′h6, 24′h0} is to be inserted. Assume the key_map of an SRAM way is 32′hC0C0_00FF. Based on this assumption, this SRAM way is hashable for the 128-bit entry. In particular, every bit in the key_map masks a corresponding four bits of the entry (key). The value C in hexadecimal is 1100 in binary and the value 0 in hexadecimal is 0000 in binary. This means the upper eight bits in the pattern participate in hashing, the next 24 bits do not participate in hashing, the next 8 bits participate in hashing, the next 56 bits do not participate in hashing, and the last 32 bits participate in hashing. In this case, since all the bits in the entry that participate in hashing are exact numbers, this SRAM way is hashable. However, assume instead that the key_map of an SRAM way is 32′hF0C0_00FF. Based on this assumption, this SRAM way is not hashable for the 128-bit entry. The value F in hexadecimal is 1111 in binary. This means the upper 16 bits in the pattern participate in hashing, the next 16 bits do not participate in hashing, the next 8 bits participate in hashing, the next 56 bits do not participate in hashing, and the last 32 bits participate in hashing. In this case, since all of the upper 16 bits need to participate in hashing but only the first 8 of those 16 bits are exact numbers and the remaining eight bits are wildcards, this SRAM way is not hashable.
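The hashability check in the example above can be sketched in a few lines. The bit layout assumed here (key_map bit i covering entry bits 4i+3..4i, most significant first) is our reading of the example, not a statement of the hardware's exact wiring:

```python
def is_hashable(key_map, pattern):
    """pattern: a 128-character string of '0'/'1'/'X', where index 0 is
    bit 127 (the most significant bit). key_map: a 32-bit integer whose
    bit i selects entry bits 4i+3..4i for hashing. The way is hashable
    only if every selected entry bit is an exact number, not a wildcard."""
    for i in range(32):
        if (key_map >> i) & 1:
            for b in range(4 * i, 4 * i + 4):   # the four entry bits covered
                if pattern[127 - b] == 'X':
                    return False
    return True

# The 128-bit entry {128.*.*.*, 162.192.*.*, 16'd456, 16'd31002, 8'h6, 24'h0}
entry = (format(128, '08b') + 'X' * 24                      # 128.*.*.*
         + format(162, '08b') + format(192, '08b') + 'X' * 16  # 162.192.*.*
         + format(456, '016b') + format(31002, '016b')      # the two ports
         + format(6, '08b') + '0' * 24)                     # protocol + pad

assert is_hashable(0xC0C000FF, entry)       # all hashed bits are exact
assert not is_hashable(0xF0C000FF, entry)   # hashing would touch wildcard bits
```

With key_map 32′hC0C0_00FF the selected bits land on the exact bytes 128, 162, 6 and the zero pad, so the way is hashable; with 32′hF0C0_00FF the upper 16 selected bits include the wildcarded second byte of 128.*.*.*, so it is not.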
If no SRAM ways are hashable, then the entry is either inserted into the spillover TCAM or, alternatively, each X-bit can be expanded such that the resulting plurality of patterns is hashable. Continuing with the last assumption, since eight bits are missing for hashing, the pattern can be expanded into 2^8, or 256, different patterns, each corresponding to one of 256 individual numbers. As such, all of these different patterns become hashable. FIG. 6 illustrates an exemplary scenario 600 of rehashing to resolve a hash conflict in accordance with some embodiments. In some embodiments, rehashing is carried out using ways and hops. The scenario 600 depicts a 2-way hash. Once a new entry or a pattern comes in, the entry is masked and hashed for each way. Each mask and hash unit points to a location in its corresponding SRAM tile, as illustrated by the black dotted lines in FIG. 6. If one or both locations are empty, then the pattern is inserted in one of the locations. In some embodiments, if both locations are empty, then preference of insertion is given to the lower way (e.g., way_0), which is typically associated with the lower-numbered SRAM tile. A hash conflict occurs when both locations are occupied, such as in the scenario 600. In FIG. 6, the locations pointed to by the mask and hash units are occupied by "entry 0" and "entry 1." In a hash conflict, entries are "kicked out" of their locations to different locations until a location is empty. In some embodiments, when both locations are occupied, the new entry is always inserted via the lower way (e.g., way_0) or is arbitrarily inserted via one of the ways. In FIG. 6, the new entry is inserted via way_0 at the location of "entry 0." Assuming both ways are hashable for "entry 0," "entry 0" is rehashed via the other way (e.g., way_1). "Entry 0" is inserted in a location that is occupied by "entry 2." The insertion of "entry 0" in its new location is referred to as hop-1.
Further assuming both ways are hashable for "entry 2," "entry 2" is rehashed via the other way (e.g., way_0). "Entry 2" is inserted in a location that is unoccupied. The insertion of "entry 2" in its new location is referred to as hop-2. In this scenario, the hash conflict is resolved using two hops. However, in other scenarios, a hash conflict is resolved using more than two hops. The rehash can implement a depth-first insertion algorithm (depth of all tiles), such as illustrated in FIG. 6. Alternatively, the rehash can implement a breadth-first insertion algorithm (width of each tile). Alternatively, the rehash can be based on D-left hashing. If a hash conflict cannot be resolved through rehashing, then the corresponding entry is inserted into one of the at least one spillover TCAM pools. FIG. 7 illustrates a method of entry insertion 700 in accordance with some embodiments. The method of entry insertion 700 is typically implemented on a network switch that includes a plurality of SRAM pools and at least one spillover TCAM pool. At a step 705, an entry to be inserted into one of the pools is received. At a step 710, it is determined whether the entry is hashable. In some embodiments, the determination at the step 710 includes comparing each key_map with the entry. As discussed above, each of the plurality of SRAM pools includes 16 SRAM tiles. Each of the 16 SRAM tiles is associated with a key_map. Each key_map masks the bits of the entry that participate in hashing. The entry is hashable when all bits in the entry that participate in hashing are exact numbers. At a step 715, based on the determination that the entry is hashable, the entry is inserted into one of the plurality of SRAM pools. In some embodiments, the entry is inserted into one of two SRAM tiles of the one of the plurality of SRAM pools. In some embodiments, inserting the entry into one of the plurality of SRAM pools includes rehashing to resolve a hash conflict.
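The kick-out rehashing in the scenario above behaves like 2-way cuckoo hashing. A minimal sketch, with toy hash functions standing in for the hardware's mask-and-hash units:

```python
def cuckoo_insert(tables, hashes, key, max_hops=16):
    """Insert key into one of two ways; on a conflict, evict the occupant
    and rehash it via the other way, up to max_hops hops. Returning False
    models spilling the entry over to the TCAM pool."""
    way = 0  # preference is given to the lower way, as in the text
    for _ in range(max_hops):
        idx = hashes[way](key)
        if tables[way][idx] is None:
            tables[way][idx] = key
            return True
        tables[way][idx], key = key, tables[way][idx]  # kick out the occupant
        way = 1 - way                                  # rehash via the other way
    return False  # conflict unresolved: insert into the spillover TCAM

SIZE = 8
tables = [[None] * SIZE, [None] * SIZE]
hashes = [lambda k: k % SIZE, lambda k: (k // SIZE) % SIZE]  # toy hash functions
cuckoo_insert(tables, hashes, 3)    # lands in way_0, slot 3
cuckoo_insert(tables, hashes, 11)   # conflict at way_0 slot 3: 11 takes the
                                    # slot, the evicted 3 hops to way_1 (hop-1)
```

Here a hop corresponds exactly to one eviction-and-rehash in the scenario above, and the max_hops cap plays the role of giving up and spilling to TCAM.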
The rehashing can implement a depth-first insertion algorithm or a breadth-first insertion algorithm. At a step 720, based on the determination that the entry is not hashable, the entry is inserted into the spillover TCAM pool. The entry is inserted as a pattern into one of the pools. Now assume that after the 128-bit entry of {128.*.*.*, 162.192.*.*, 16′d456, 16′d31002, 8′h6, 24′h0} has been inserted into SRAM pool 0, FIG. 8 illustrates an exemplary flow of table lookup 800 in accordance with some embodiments. At a step 805, a search key of {128.1.2.3, 162.192.1.2, 456, 31002, 8′h6} arrives at the request interface control logic 305 of FIG. 3. At a step 810, the request interface control logic 305, based on the configurations in the hybrid WCM table, dispatches the search key accordingly to the active pools. In this example, tcam_pool_valid is set to 1 and the sram_pool_bitmap is 4′b0011, which corresponds to SRAM pool 0 and SRAM pool 1. As such, the spillover TCAM pool, SRAM pool 0 and SRAM pool 1 are the active pools. At a step 815, the search key is received at SRAM pool 0. At a step 820, the search key is received at SRAM pool 1. At a step 825, the search key is received at the spillover TCAM. The steps 815-825 occur substantially simultaneously. Generally, the lookup steps in FIG. 9 occur simultaneously or in parallel. At the step 825, lookup in the spillover TCAM is similar to conventional TCAM lookup. At a step 855, the spillover TCAM returns the control data and priority to the request interface control logic 305. SRAM pool 0 corresponds to a 2-way hash. Each of the ways is associated with a key_map, which is applied to mask off the corresponding bits of the search (input) key. In FIG. 8, way_0 has the key_map 32′hC0C0_00FF and way_1 has the key_map 32′hF0C0_00FF. The key_map of way_0 and the search key are applied to a hash function, which returns a hash_key of {8′d128, 24′d0, 8′d162, 56′d0, 8′d6, 24′h0} at a step 830.
Similarly, the key_map of way_1 and the search key are applied to the hash function, which returns a hash_key of {8′d128, 8′d1, 16′d0, 8′d162, 88′d0} at a step 835. The hash_key at the step 830 corresponds to the hash_index ′d123, which points to a particular entry in the SRAM tiles of SRAM pool 0 at a step 840. The entry is compared with the search key and the result is returned at a step 845. The results from the SRAM pools 0 and 1 are returned at a step 850, in which priority arbitration is performed and control data is thereafter read from the corresponding entry that has the highest priority and that is located in the remaining SRAM tiles. In this example, nothing is returned by SRAM pool 0, way_1, and the control data is read at address ′d123 since the corresponding entry has the highest priority. The results from the step 850 and the step 855 are returned at a step 860, in which priority arbitration is performed and the result data is returned. In this example, no data is returned by SRAM pool 1. FIG. 9 illustrates a method of table lookup 900 in accordance with some embodiments. The method of table lookup 900 is typically implemented on a network switch that includes a plurality of SRAM pools and at least one spillover TCAM pool. At a step 905, a search key is received at a request interface control logic. The network switch typically includes the request interface control logic. At a step 910, based on a hybrid wildcard match (WCM) table, the search key is dispatched to one or more active pools. Configurations of the hybrid WCM table indicate which of the pools are the one or more active pools. In some embodiments, lookups in the one or more active pools are performed simultaneously. At a step 915, when the one or more active pools include the spillover TCAM pool, the TCAM pool returns a first set of results that includes data and priority information.
At a step 920, when the one or more active pools include at least one of the plurality of SRAM pools, each active SRAM pool returns a second set of results that includes data and priority information. For each active SRAM pool, to determine the second set of results, a first-level arbitration based on priority is performed on all sets of results returned by each SRAM tile of the corresponding active SRAM pool. At a step 925, a second-level arbitration based on priority is performed on all sets of results returned by the one or more active pools. At a step 930, based on the second-level arbitration, the data from the set with the highest priority is outputted. FIG. 10A illustrates formats of entries 1000, 1000′, 1000″ of an SRAM entry table according to some embodiments. The SRAM table is able to be substantially similar to the SRAM table described above except for the differences described herein. Additionally, as described above, each of the entries 1000, 1000′, 1000″ corresponds to a 5-dimensional packet matching rule 100. As shown in FIG. 10A, the different entry formats 1000, 1000′, 1000″ are able to support three different key/entry sizes. A 128-bit format entry 1000 (for a 128-bit key) has 16 bytes dedicated to a value or pattern field 1002, 15 bytes dedicated to a code field 1004 and 1 byte dedicated to a priority/combination field 1006. In some embodiments, the entry 1000 is able to be stored in a single line of the memory. A 384-bit format entry 1000′ (for a 192-bit key) has 24 bytes dedicated to a value or pattern field 1002, 8 bytes dedicated to other data 1003 (e.g. this field is able to be used to store control data that is returned upon a hit on this entry), 15 bytes dedicated to a code field 1004 and 1 byte dedicated to a priority/combination field 1006. A 512-bit format entry 1000″ (for a 384-bit key) has 48 bytes dedicated to a value or pattern field 1002 that starts with the most significant bits (MSB) of the value data in a first line 1005 of the memory (e.g. SRAM) and has the least significant bits (LSB) of the value data in a second line 1005 of the memory (e.g. SRAM), 15 bytes dedicated to a code field 1004 and 1 byte dedicated to a priority/combination field 1006. Alternatively, one or more of the fields are able to be larger or smaller and/or be located in the same or different lines of the memory. The value field 1002 stores the values that are matched or compared to the values of a key according to the codes 1004a-d (see FIG. 10B) of the code field 1004 in order to determine if the key matches the entry 1000, 1000′, 1000″. In some embodiments, the field data and/or the data field 1002 is able to be non-contiguous within the entry 1000, 1000′, 1000″ (e.g. within the lines of the memory storing the entry). For example, one or more of the values of the value field 1002 are able to be located adjacent to or at an end of one or more of the codes 1004a-d (e.g. match/no-match codes) within the entry (e.g. the SRAM lines of the entry), separate from the remainder of the field data and/or the data field 1002 within the entry. The priority/combination data field 1006 is able to include combination data (e.g. a combination vector or logic tree) that indicates what combination of results/outcomes from the application of the codes 1004a-d of the entry 1000, 1000′, 1000″ to the key indicates a match between the key and the entry 1000, 1000′, 1000″. For example, the combination data is able to indicate that the outcome of three of the codes 1004a-d needs to be a 1 (e.g. satisfy the code) and the outcome of the remaining codes 1004a-d needs to be a zero (e.g. not satisfy the code) in order for the key to match the entry 1000, 1000′, 1000″. Additionally, the priority/combination data field 1006 is able to include priority data that indicates a priority of an output result when there is a match between the key and the entry/rule (e.g. according to the codes and combination data).
Specifically, when multiple results are received from one or more SRAM pools 310 and/or one or more TCAM pools 315, the priority data of the priority field for each of the pools 310, 315 is compared by the second-level arbitration logic in order to arbitrate between the results, as described in the steps 850-860 above. Indeed, this explicitly assigned priority is in contrast to TCAM priority, which merely selects the match that is stored in the earliest entry within the TCAM. The code field 1004 is able to comprise one or more 2-byte and/or 3-byte codes 1004a-d that fit within the size of the field 1004 and define in what ways the selected data of the key must correspond to the selected value data of the value field 1002 in order to satisfy or not satisfy the rule 100. Thus, for a code field 1004 of 15 bytes, a maximum of seven 2-byte match codes 1004a, b, equaling 14 bytes, or five 3-byte codes 1004c, d, equaling 15 bytes, or a combination of 2- and 3-byte codes 1004a-d totaling no more than 15 bytes, are able to be used for each entry 1000, 1000′, 1000″. FIG. 10B illustrates four different types of codes 1004a-d that are able to be used for implementing an entry 1000, 1000′, 1000″ according to some embodiments. As shown in FIG. 10B, the code 1004a is an exact (or equal) match code that indicates to check whether a key value (at a designated portion of the key) exactly matches a value (at a designated portion of the value field 1002) of the entry 1000, 1000′, 1000″, and the code 1004b is a no-exact-match code that indicates to check whether a key value (at a designated portion of the key) does not exactly match a value (at a designated portion of the value field 1002) of the entry 1000, 1000′, 1000″.
As also shown in FIG. 10B, the code 1004c is an in-range code that indicates to check whether a key value (at a designated portion of the key) falls within a specified range, and the code 1004d is an out-of-range (or not-in-range) code that indicates to check whether a key value (at a designated portion of the key) falls outside a specified range. As described above, in some embodiments the match/no-match codes 1004a, b are 2 bytes and the within-range/not-within-range codes 1004c, d are 3 bytes. Alternatively, one or more of the code types 1004a-d are able to be smaller or larger in size. Each of the exact match codes 1004a and the exact no-match codes 1004b is able to comprise a two-bit code type identifier field 1010 that identifies the code 1004a, b as either exact match or exact no match; a seven-bit nibble index field 1012 that identifies the starting location of the desired match/no-match data within the key (e.g. the subkey) and the starting location of the desired match/no-match data within the value field 1002 of the entry 1000, 1000′, 1000″; and a seven-bit length field 1014 that identifies the length/number of bits after the starting location (indicated by the nibble index 1012) to select, such that the selected data represents all the value field data and all the key data to be used in the match/no-match code 1004a, b determination. Additionally, in some embodiments, if the number of bits of the bit length field 1014 is equal to zero, the bit length field 1014 for that code 1004a, b is able to be changed to a single bounded range value. Specifically, if the code was an exact match code 1004a, the code determines if the selected key value (e.g. subkey) is larger than or equal to the single bounded range value within the bit length field 1014. Similarly, if the code was an exact no-match code 1004b, the code determines if the selected key value (e.g. subkey) is less than the single bounded range value within the bit length field 1014.
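As an illustration of the exact match/no-match codes, a sketch of evaluating one code against a key follows. The bit layout (the nibble index counted from the least significant end) is an assumption made for illustration, not the embodiment's defined layout:

```python
def apply_match_code(key, value, nibble_index, bit_length, exact_match=True):
    """Evaluate a match/no-match style code: select bit_length bits
    starting at nibble offset nibble_index (counted from the least
    significant end here, purely for illustration) in both the key and
    the entry's value field, then compare the two selections."""
    start = 4 * nibble_index            # a nibble is four bits
    mask = (1 << bit_length) - 1
    sub_key = (key >> start) & mask
    sub_val = (value >> start) & mask
    return sub_key == sub_val if exact_match else sub_key != sub_val

# Upper 16 bits of key and value agree; lower 16 bits differ:
assert apply_match_code(0xABCD1234, 0xABCD9999, 4, 16)                    # match
assert not apply_match_code(0xABCD1234, 0xABCD9999, 0, 16)                # mismatch
assert apply_match_code(0xABCD1234, 0xABCD9999, 0, 16, exact_match=False) # no-match code satisfied
```

The nibble index and length fields of the code play the roles of nibble_index and bit_length here: together they carve the same subfield out of the key and out of the entry's value field before comparing.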
Each of the in-range/out-of-range codes 1004c, d is able to comprise a two-bit code type identifier field 1010 that identifies the code as either in-range 1004c or out-of-range 1004d; a six-bit byte index field 1016 that identifies the starting location of the desired data within the key (e.g. a 16-bit subkey or other predetermined subkey length) and the starting location of the lower bound value within the value field 1002 of the entry 1000, 1000′, 1000″; and a 2-byte upper bound field 1018 that indicates the upper bound of the range code 1004c, d. If the code is an in-range code 1004c, the upper and lower bound values are inclusive, and if the code is an out-of-range code 1004d, the upper and lower bound values are exclusive. Thus, the in-range/out-of-range codes 1004c, d enable it to be determined whether the subkey value is between or not between the lower and upper bounds (i.e. the range). Alternatively, different allocations of bits of each of the codes 1004a-d for one or more of the various fields of the codes 1004a-d are contemplated. Additionally, in some embodiments, one or more of the codes 1004a-d of one or more of the entries 1000, 1000′, 1000″ are able to not be byte aligned within the entry 1000, 1000′, 1000″. In some embodiments, out-of-range or in-range codes for larger bit ranges are able to be created by logically combining smaller-bit out-of-range codes, in-range codes and/or match codes. As a result, an in/out-of-range comparison code for a larger bit range (e.g. 32 bits, 48 bits) is able to be accomplished in a rule/entry 1000, 1000′, 1000″ using multiple smaller/standard-bit in/out-of-range comparison codes (e.g. 16-bit) such that the entry effectively comprises the larger-bit-range code (that is represented by the combination of the smaller-bit-size codes).
In such embodiments, the combination data is able to implement/indicate the desired logical combination of the standard-bit-range codes 1004a-d that is required for causing the combination of those codes to "equal" or effectively generate one or more out-of-range or in-range codes for larger bit ranges. For example, an entry 1000, 1000′, 1000″ whose range codes 1004c, d are limited to 16 bits is able to implement a 32-bit in-range or out-of-range code by including multiple 16-bit range codes and one or more exact match codes that, when logically combined (as indicated in the combination data), have the same result as the desired 32-bit in-range or out-of-range code would. Or, to put it logically, in order to determine whether a key value K is within a 32-bit range that has a lower boundary L and an upper boundary U (e.g. is L≤K≤U?), each 32-bit value is treated as composed of two 16-bit portions: K as upper half K1 and lower half K0, L as L1 and L0, and U as U1 and U0 (where L≤U and, for this decomposition, L1<U1; if L1 equals U1, a single 16-bit range code on the lower halves suffices). As a result, L≤K≤U becomes L1L0≤K1K0≤U1U0, which can then be broken down into a logical combination of two 16-bit exact match codes 1004a and three 16-bit in-range codes 1004c: ((L1+1≤K1≤U1−1) OR (L1==K1 && L0≤K0≤0xFFFF) OR (U1==K1 && 0x0≤K0≤U0)) = L1L0≤K1K0≤U1U0. Thus, the logical combination of the two 16-bit exact match codes 1004a and three 16-bit in-range codes 1004c is able to effectively create a 32-bit in-range code for use by the entry 1000, 1000′, 1000″. FIG. 11 illustrates a method of determining whether a received key and a determined entry 1000, 1000′, 1000″ result in a match for a way (e.g. way_0) of a pool (e.g. SRAM pool 0) according to some embodiments. In particular, the method is able to supplement and/or replace the steps 815, 830, 840, 845 and 850 in the method described in FIG. 8 above, such that the method is able to be combined with the method of FIG. 8 to form a combined method. As shown in FIG. 11, a key (e.g. 128 bytes) arrives at a pool/way and is masked for that way/pool and then hashed down to a hash key or hash (e.g. 11 bits) at the step 1102.
The hash key is then used as the index to select an entry 1000, 1000′, 1000″ within the pool at the step 1104. For the selected entry 1000, 1000′, 1000″, each of the codes 1004a-d of that entry is evaluated based on the data within the received key and/or the relevant data of the value field 1002 of the entry in order to determine whether (the data of) the received key satisfies or does not satisfy the code 1004a-d (e.g. is an exact match, is not an exact match, is within the range, is not within the range) at the step 1106. In some embodiments, the indexing starts from the most significant bit of the codes 1004a-d of the selected entry 1000, 1000′, 1000″. Based on which combination of the codes 1004a-d is determined to be satisfied, it is determined whether the received key and the entry indicated by the hash key are a match at the step 1108. In some embodiments, the combination data (e.g. the combination vector or logic tree of the priority/combination field 1006 of the selected entry 1000, 1000′, 1000″) indicates what combination of satisfying/not satisfying the codes 1004a-d of the selected entry indicates a match between the key and the entry. For example, if the entry comprises three codes (e.g. exact match, in-range, out-of-range), the combination data is able to indicate that, for a key to match the entry 1000, 1000′, 1000″, the first two codes must be satisfied and the third code must not be satisfied. In other words, for each code, the combination data is able to indicate what result of evaluating the code for the received key is desired, such that if all of the desired results are found the key matches the entry. If a match is found by the pool 310, 315 between the received key and the entry 1000, 1000′, 1000″, the pool/way 310, 315 returns data indicating that a match was found (e.g. a match bit) along with the priority value (of the priority/combination field 1006 of the selected entry) to the request interface control logic 305 to be arbitrated among results from other pools 310, 315 at the step 1110.
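The 32-bit range decomposition described earlier can be cross-checked against a direct comparison. This sketch models the three in-range disjuncts (strictly between the upper halves, plus the two equal-upper-half boundary cases) and assumes the upper 16-bit halves of the bounds differ:

```python
MASK16 = 0xFFFF

def in_range_via_16bit_codes(k, lo, hi):
    """Emulate a 32-bit inclusive in-range check (lo <= k <= hi) with
    16-bit comparisons, assuming lo <= hi and that the upper 16-bit
    halves of lo and hi differ (if they are equal, a single 16-bit
    range code on the lower halves suffices instead)."""
    k1, k0 = k >> 16, k & MASK16
    l1, l0 = lo >> 16, lo & MASK16
    u1, u0 = hi >> 16, hi & MASK16
    return ((l1 + 1 <= k1 <= u1 - 1)              # strictly between the halves
            or (k1 == l1 and l0 <= k0 <= MASK16)  # lower half equal: bound below
            or (k1 == u1 and 0x0 <= k0 <= u0))    # upper half equal: bound above

# Cross-check against the direct 32-bit comparison around both boundaries:
lo, hi = 0x000100FF, 0x00030010
for k in (0x000100FE, 0x000100FF, 0x0002FFFF, 0x00030010, 0x00030011):
    assert in_range_via_16bit_codes(k, lo, hi) == (lo <= k <= hi)
```

The two equality tests on k1 correspond to the two 16-bit exact match codes and the three chained comparisons to the three 16-bit in-range codes in the decomposition.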
Thus, the entry format and evaluation method provide the advantage of enabling processing power and storage area to be saved by using a hybrid of SRAM pools and TCAM pools instead of just TCAMs. Indeed, this also beneficially enables the priority of each entry to be selectively specified, whereas in TCAMs priority is just according to the position of the entry within the TCAM. One of the key ideas of the wildcard matching solution is that most entries are inserted into the plurality of SRAM pools. The SRAM pools may not be able to perform matching on tough patterns, in which case the spillover TCAM pool will perform the matching on these tough patterns. Depending on whether there is high-level software algorithm support, the hardware can be decoupled from the software algorithm. If the software algorithm is confident in performing the matching, very few TCAM resources are used. On the other hand, if the software algorithm is not confident or is lazy in performing the matching, more TCAM resources can be used. One of ordinary skill in the art will realize other uses and advantages also exist. While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. Thus, one of ordinary skill in the art will understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.
DETAILED DESCRIPTION In the following description, for purposes of explanation, numerous examples and specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. Some embodiments as expressed in the claims may include some or all of the features in these examples, alone or in combination with other features described below, and may further include modifications and equivalents of the features and concepts described herein. Some embodiments provide policy-based proximity routing techniques for traffic over stretched networks across multiple sites, such as a remote site and a local site (e.g., an on-premises data center and a cloud system). The policy-based proximity routing techniques eliminate asymmetric routing paths between workloads in the local site and the remote site such that the traffic paths taken between the local site and the remote site are symmetric. Additionally, the policy-based proximity routing techniques may avoid changing any configuration in the remote site to make the routes symmetric, which is desirable because network management of the remote site may be more difficult than the local site. In some embodiments, the routing techniques allow the remote site to route packets between a migrated workload on a stretched network in the local site and workloads in the remote site in symmetric paths. The local site sends traffic from the migrated workload in the local site through a layer 2 VPN channel to the remote site. The techniques use layer 3 handling to determine a policy that causes the local site to use layer 2 forwarding of the traffic from the migrated workload to the remote site instead of layer 3 routing through a layer 3 VPN. When a return packet is sent from the workload in the remote site, the packet is also sent via the layer 2 VPN channel to the workload in the local site. 
To perform policy-based proximity routing, the system identifies a stretched network and sets the Internet protocol (IP) address of the layer 3 interface for a logical router in the local site to the same IP address as a router in the remote site, but the subnet mask of the IP address for the logical router in the local site is set to a value that makes the IP address specific, such as setting the mask to the longest possible value. For example, a /32 route may be used when using the protocol IPv4. The /32 route in IPv4 is the most specific for a 32-bit address. However, routes other than /32 routes may be used, such as when the protocol IPv6 is being used. When a workload migrates from the remote site to the local site, the system adds a /32 route for the IP address of the migrated workload to a local routing table for the logical router. This /32 route causes the logical router to route traffic locally between a migrated workload and another workload in the local site. For example, the logical router looks up the /32 route for the migrated workload in the local routing table and knows the migrated workload is located locally, and thus forwards a packet locally to the migrated workload. The system also generates a policy table that covers the IP address scope that is being used in the remote site. The IP address scope may be the range of IP addresses being used in the remote site. An example policy may be "10.1.1.0/24 allow", which matches IP addresses within the range specified by 10.1.1.0/24 and indicates that these IP addresses match a policy. In some examples, a migrated workload that is on a stretched network in the local site may send a packet to a workload in the remote site, such as a workload on a non-stretched network in the remote site. The packet is directed to the logical router in the local site first.
When the logical router does not find a matching route in the local routing table for the IP address of the workload in the remote site, the logical router consults the policy table to determine if there is a matching route. When there is a match, the logical router may apply a policy that forwards the packet via the layer 2 VPN channel instead of routing the packet via the layer 3 VPN channel. For example, the logical router forwards the packet to the router in the remote site, which then forwards the packet to the workload in the remote site. When a workload in the remote site sends a return packet to the workload in the local site, the router in the remote site sends the packet through the layer 2 VPN to the workload in the local site. This results in symmetric paths through the layer 2 VPN between the local site and the remote site.

System Overview

FIG. 1 depicts a simplified system 100 for performing policy-based proximity routing according to some embodiments. System 100 includes a remote site 102 and a local site 104. In some embodiments, remote site 102 may be an on-premises data center, which may be an internal cloud system for an enterprise or a "private" cloud system in which a tenant and a cloud service provider are part of the same organization. Local site 104 may be a public cloud system that may be provided by an organization that is separate from a tenant that accesses the external cloud system. For example, the tenant may be part of an enterprise, and the external cloud system may be part of a cloud service provider that is separate from the enterprise of the tenant. The cloud service provider may provide cloud services to different enterprises and/or individuals. A hybrid cloud or heterogeneous network may be a cloud architecture in which a tenant is provided with seamless access to both remote site resources and local site cloud resources.
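The two-stage lookup described above (specific /32 host routes first, then the policy table, with layer 3 routing as the default) can be sketched as a toy model; the class and method names are illustrative only, not any product's API:

```python
import ipaddress

class LogicalRouterSketch:
    """Illustrative model: /32 host routes for migrated workloads are
    consulted first; on a miss, a policy table covering the remote site's
    address scope selects layer 2 forwarding; otherwise the packet is
    routed via the layer 3 VPN."""
    def __init__(self):
        self.local_routes = []   # /32 host routes for migrated workloads
        self.policy_table = []   # (network, action) pairs

    def add_migrated_workload(self, ip):
        self.local_routes.append(ipaddress.ip_network(ip + '/32'))

    def add_policy(self, cidr, action):
        self.policy_table.append((ipaddress.ip_network(cidr), action))

    def next_hop(self, dst_ip):
        dst = ipaddress.ip_address(dst_ip)
        if any(dst in route for route in self.local_routes):
            return 'local'                  # deliver within the local site
        for network, action in self.policy_table:
            if dst in network and action == 'allow':
                return 'l2vpn'              # forward via the layer 2 VPN channel
        return 'l3vpn'                      # default: route via the layer 3 VPN

router = LogicalRouterSketch()
router.add_migrated_workload('10.1.1.5')   # migrated workload on the stretched network
router.add_policy('10.1.1.0/24', 'allow')  # remote site address scope
```

Because the /32 host route is the most specific possible IPv4 route, checking it before the broader policy prefixes mirrors the "most specific first" behavior described above.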
Local site 104 may be configured as a software-defined networking (SDN) system, such as an overlay network supported by NSX™ by VMware Inc. Local site 104 may include a dedicated cloud service for a single tenant or may be configured for multiple tenants with logically isolated virtualized computing resources on a shared physical infrastructure. Remote site 102 may be a virtual network or a physical network. Remote site 102 and local site 104 may communicate through a layer 2 channel and a layer 3 channel, such as a layer 2 VPN (L2 VPN) 112 and a layer 3 VPN (L3 VPN) 114. Although VPNs are discussed, other links between remote site 102 and local site 104 may be used, such as high-throughput, dedicated links for the layer 2 channel and layer 3 channel. Workloads 106-1 to 106-4 may be running in remote site 102 and local site 104. Workloads may refer to virtual machines that are running on a respective host, but this is one example of a virtualized computing instance or compute node. Any suitable technology may be used to provide a workload. Workloads may include not only virtual machines, but also containers (e.g., running on top of a host or guest operating system without the need for a hypervisor or separate operating system, or implemented as operating-system-level virtualization), virtual private servers, client computers, etc. The workloads may also be complete computation environments containing virtual equivalents of the hardware and software components of a physical computing system. Also, as used herein, the term hypervisor may refer generally to a software layer or component that supports the execution of multiple workloads, including system-level software that supports name space containers. Although a virtualized environment is described, some embodiments may be used in an environment that is not virtualized. Also, when the term workload is discussed, the term workload may also be interchanged with a host.
Workloads106may exist on non-stretched networks and stretched networks. A non-stretched network may not be stretched across remote site102and local site104, but rather exists only in one of remote site102or local site104. A stretched network, or extended network, may be stretched across a network between remote site102and local site104via layer 2 VPN112. Layer 2 concentrators (L2C)110-1and110-2are network devices (e.g., virtual devices implemented in software, that may, for example, be incorporated into routers or hosts) that connect the stretched layer 2 network across remote site102and local site104via layer 2. Each of the L2 concentrators may be configured to bridge (e.g., bridges created by L2 concentrators110-1and110-2) each connected local area network (LAN) link(s) via a wide area network (WAN) link, such as via a wide area network124. The stretched network may be separate from a layer 3 network. Stretched networks may be created using different methods, such as creating a new network in local site104corresponding to an existing network in remote site102with the new network having the same IP address as router R1108-1in remote site102, plugging a port of L2C110-2into the new network on local site104and a port of L2C110-1into the existing network on remote site102, and bridging the L2C pair. Examples of stretched networks may be found in U.S. Pat. No. 10,250,685, entitled “Creating Layer 2 Extension Network in a Hybrid Cloud Computing System”, filed on Aug. 29, 2017, which is incorporated herein by reference. Layer 3 VPN router120in remote site102and edge router122in local site104provide layer 3 connectivity through wide area network124via a layer 3 channel, such as a layer 3 VPN114. Layer 3 VPN router120and edge router122provide VPN connectivity over network124between remote site102and local site104. Router R1108-1performs routing functions in remote site102. Router R1108-1may be a logical router or may not use virtualization. 
Logical router DR1108-2performs routing functions in local site104. Logical router DR1108-2may span multiple sites, such as multiple data centers. The logical router may include a distributed component referred to as a distributed router that is distributed across hosts in the multiple sites (on one or more hosts in each site) and may also include a centralized component referred to as a service router that may perform centralized functions on an edge node, such as network address translation (NAT), firewall services, and other services. Logical router DR1108-2is distributed on multiple host computing devices (e.g., hypervisors) that are running on multiple sites, such as remote site102and local site104and may be a distributed router. Logical router DR1108-2provides connectivity to workloads hosted on hypervisors. Further details of logical routers and logical switches are described in U.S. Pat. No. 9,369,426, entitled "DISTRIBUTED LOGICAL L3 ROUTING", filed Aug. 17, 2012, which claims priority to U.S. provisional application No. 61/524,754, filed on Aug. 17, 2011, U.S. provisional application No. 61/643,753394, filed on May 6, 2012, U.S. provisional application No. 61/654,121, filed on Jun. 1, 2012, and U.S. provisional application No. 61/666,876, filed on Jul. 1, 2012, all of which are incorporated by reference in their entirety. Another example implementation of this type of logical router architecture is described in detail in U.S. Pat. No. 9,787,605, granted Oct. 10, 2017, which is also incorporated herein by reference in its entirety. Router R1108-1includes an IP address of "10.1.1.1" and a media access control (MAC) address of "MAC1". While IPv4 addressing is described here by way of example, the principles described herein may be adapted for any Layer 3 addressing scheme, including IPv6.
The IP address of an L3 interface of logical router DR1108-2is assigned the same IP address as router R1108-1, but the subnet mask of the IP address of logical router DR1108-2is set to the most specific (longest) prefix for the protocol being used (e.g., the IP address "10.1.1.1/32"). The specific IP address may be the explicit IP address assigned to logical router DR1108-2. That is, the /32 route is the specific address of "10.1.1.1" while the /24 route of "10.1.1.0/24" represents a range of addresses because the subnet mask is shorter than the full length of the address. Although /32 routes are described and examples are presented using the protocol IPv4, the same concepts can be applied using other protocols, such as IPv6 using different prefix masks. Also, workload WL1106-1and workload WL4106-4are on a stretched network that is assigned the range of IP addresses of "10.1.1.0/24". Workload WL1106-1includes the IP address of 10.1.1.2 and workload WL4106-4includes the IP address of 10.1.1.3. Router R1108-1includes a layer 3 interface with the IP address of 10.1.2.1 for a non-stretched network that is assigned a range of IP addresses of 10.1.2.0/24. Workload WL3106-3includes an IP address of 10.1.2.3 and is on the non-stretched network in remote site102. Logical router DR1108-2includes an L3 interface with the IP address of 100.1.1.1 for a non-stretched network that is assigned a range of addresses of 100.1.1.0/24. Workload WL2106-2is on the non-stretched network in local site104and includes the IP address of 100.1.1.2. To differentiate logical router DR1108-2from router R1108-1, logical router DR1108-2is assigned a MAC address of MAC2, which is different from the MAC address MAC1 of router R1108-1. Assigning the same IP address to both logical router DR1108-2and router R1108-1prevents packets from being sent to remote site102first and then back to local site104.
Rather, packets sent by workload WL1106-1are first sent to logical router DR1108-2. This will allow local traffic in local site104to be sent locally, such as from workload WL2106-2to workload WL1106-1via logical router DR1108-2. Local routing table116is used by logical router DR1108-2to make routing decisions at layer 3. When workloads are migrated from remote site102to local site104, the system adds a /32 route to local routing table116. As discussed above, the /32 routes avoid an issue where traffic is sent to remote site102and back to local site104for migrated workloads in local site104. For example, when logical router DR1108-2encounters a /32 route for a migrated workload in local routing table116, logical router DR1108-2can determine that the packet should be routed locally within local site104. For example, packets sent from workload WL2106-2to workload WL1106-1are locally routed by logical router DR1108-2because the /32 route in local routing table116indicates to logical router DR1108-2that workload WL1106-1is located in local site104. Policy table118includes policies that cover the scope of IP addresses found in remote site102. In some embodiments, policy table118adds policies that cover the ranges of IP addresses for stretched networks and non-stretched networks connected to router R1108-1. When logical router DR1108-2does not find a matching route in local routing table116, logical router DR1108-2consults policy table118. If the destination IP address matches a route in policy table118, logical router DR1108-2applies an associated policy for the route, which may be to send the packet through layer 2 VPN112. The route processing in this case processes the packet in the layer 3 processing pipeline to determine the policy, but logical router DR1108-2performs layer 2 forwarding of a packet to router R1108-1. Router R1108-1thinks the packet is from the source workload that sent the packet. Router R1108-1can then forward the packet to the destination workload. 
When the destination workload responds and sends a packet back to the source workload, router R1108-1forwards the packet via the layer 2 VPN112to the workload on the stretched network. Accordingly, a symmetric path is taken in the communication through layer 2 VPN112. For example, as discussed in more detail below, the paths taken for traffic between workload WL1106-1and workload WL3106-3are symmetric.
Local Routing Table and Policy Table Example
FIG.2depicts an example of local routing table116and policy table118according to some embodiments. Local routing table116is a layer 3 routing table for logical router DR1108-2and includes local routes for local site104. For example, local routing table116may include a default route 0.0.0.0/0, which is an uplink route via network124. The default route may be a route that is matched if no other routes in local routing table116are matched. Local routing table116may also include other routes, such as /32 routes and /24 routes. A system, such as a management plane (not shown), may add /32 routes to local routing table116for IP addresses of workloads that are on a stretched network in local site104. Additionally, when workloads migrate from remote site102to local site104, the management system may add a /32 route pointing to the IP address of the migrated workload. As shown, local routing table116includes the IP addresses of 10.1.1.1/32 for logical router DR1108-2and 10.1.1.2/32 for workload WL1106-1. Additionally, the route of 100.1.1.0/24 is for the non-stretched network connected to logical router DR1108-2that includes workload WL2106-2. Policy table118includes routes that cover the IP address scope in remote site102. For example, the scope may be a range of IP addresses in remote site102. In some embodiments, different policies may be associated with the routes, such as an allow policy or a deny policy.
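A longest-prefix-match lookup over tables like those just described can be sketched as below. The helper function and next-hop labels are illustrative assumptions; the route entries mirror the examples in the text (local /32 host routes beating the default route, and a more specific deny entry beating a broader allow entry):

```python
import ipaddress

def longest_prefix_match(dst_ip, table):
    """Return the entry whose prefix is longest among those containing
    dst_ip, e.g. a /32 host route beats the 0.0.0.0/0 default route."""
    addr = ipaddress.ip_address(dst_ip)
    hits = [(ipaddress.ip_network(route), entry)
            for route, entry in table.items()
            if addr in ipaddress.ip_network(route)]
    if not hits:
        return None
    return max(hits, key=lambda h: h[0].prefixlen)[1]

# Local routing table entries from the example (next-hop labels illustrative).
local_table = {
    "0.0.0.0/0": "default-uplink",
    "10.1.1.1/32": "DR1-interface",
    "10.1.1.2/32": "local-WL1",
    "100.1.1.0/24": "local-non-stretched",
}
# Policy-table entries with allow/deny actions for overlapping scopes.
policy_table = {
    "10.1.0.0/16": ("allow", "MAC-R"),
    "10.1.3.0/24": ("deny", None),
}

print(longest_prefix_match("10.1.1.2", local_table))   # /32 beats the default
print(longest_prefix_match("10.1.3.5", policy_table))  # deny is more specific
```

The same lookup is usable for both tables, which matches how the deny policy resolves overlapping scopes: the more specific /24 deny entry wins over the broader /16 allow entry.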
An allow policy allows logical router DR1108-2to forward packets to IP addresses within the range of IP addresses associated with the route. A deny policy may be used to solve some overlapping IP address scopes between remote site102and local site104. For example, if remote site102has an IP address scope of "10.1.0.0/16" but local site104has an IP address scope of "10.1.3.0/24", there may be some overlap in IP addresses due to the prefixes used. A policy in policy table118can be defined to distinguish between the overlap, such as "10.1.0.0/16 allow" and "10.1.3.0/24 deny". Logical router DR1108-2may use the longest prefix match when determining whether a destination IP address matches a route in policy table118. The deny policy may be used to bypass policy-based routing using policy table118. That is, logical router DR1108-2may not use policy-based routing when an IP address falls within a deny route. A MAC address is associated with the policy. The MAC address MAC-R may represent the MAC address that is required to forward the packet to eventually reach router R1108-1. In some examples, the MAC address MAC-R is the MAC address for router R1108-1. However, the MAC address MAC-R may be associated with L2C110, which tunnels the packet via layer 2 VPN112. In this case, the MAC address MAC-R may be associated with L2C110and is translated to the MAC address of router R1108-1during forwarding of the packet.
Network Configuration
A management plane or other device in local site104may configure the virtualized network in local site104to perform policy-based proximity routing.FIG.3depicts a simplified flowchart300of a method for configuring the virtualized network in local site104according to some embodiments. At302, the management plane identifies a stretched network between local site104and remote site102. As discussed above, the stretched network is assigned the range of IP addresses 10.1.1.0/24 and includes workloads WL4106-4and WL1106-1.
At304, the management plane may set an IP address of the layer 3 interface for logical router DR1108-2to the same IP address as router R1108-1. At306, the management plane adds policies into policy table118to cover the IP address scope in remote site102. As discussed above, routes for the range of IP addresses for the non-stretched network 10.1.2.0/24 and the stretched network 10.1.1.0/24 are added to policy table118. At308, for workloads in the stretched network that are located in local site104, the management plane adds a /32 route for the IP address of the respective workloads in local routing table116of logical router DR1108-2. For example, if there are existing workloads (not shown) in local site104on a stretched network when the configuration occurs, the management plane adds a /32 route for the IP address for those workloads in local routing table116. At310, the management plane may detect a migrated workload from remote site102to local site104. For example, workload WL1106-1that resides on stretched network 10.1.1.0/24 may be migrated from remote site102to local site104. At312, the management plane installs a /32 route for the migrated workload in local routing table116of logical router DR1108-2. As will be discussed in more detail below, the /32 route for workload WL1106-1in local routing table116makes sure that local traffic, such as from workload WL2106-2to workload WL1106-1, is routed locally by logical router DR1108-2instead of through remote site102.
Routing Optimization Using Policy-Based Proximity Routing
The following will describe different paths that result when different workloads communicate in system100using policy-based proximity routing. As will be discussed below, the paths between workloads are symmetric. Also, although workloads are described, it will be understood that packets may be sent between a host or another computing device instead of a workload.
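The configuration steps of FIG.3 above can be sketched roughly as follows; the table structures, entry values, and function names are illustrative assumptions rather than the management plane's actual interface:

```python
# Sketch of the FIG. 3 configuration flow: seed the policy table with the
# remote site's IP scope, then install a /32 host route whenever a
# stretched-network workload lands in the local site.
def configure_local_site():
    local_routing_table = {"0.0.0.0/0": "uplink"}
    # Policies covering the remote site's stretched and non-stretched scopes.
    policy_table = {"10.1.1.0/24": "MAC-R", "10.1.2.0/24": "MAC-R"}
    return local_routing_table, policy_table

def on_workload_migrated(local_routing_table, workload_ip):
    # Step 312: the /32 route keeps traffic to the migrated workload local.
    local_routing_table[workload_ip + "/32"] = "local"

routes, policies = configure_local_site()
on_workload_migrated(routes, "10.1.1.2")  # WL1 migrates to the local site
print(sorted(routes))
```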
Workload WL1and Workload WL3
FIG.4depicts an example of paths taken to send packets between workload WL1106-1and workload WL3106-3according to some embodiments. Workload WL1106-1is on a stretched network between remote site102and local site104, and workload WL3106-3is on a non-stretched network in remote site102. A packet from workload WL1106-1for workload WL3106-3is sent to logical router DR1108-2. A corresponding route for the IP address for workload WL3106-3is not included in local routing table116. Without using policy-based proximity routing, logical router DR1108-2would route the packet via layer 3 VPN114in network124to remote site102. Router R1108-1would receive the packet and route the packet to workload WL3106-3. However, when workload WL3106-3sends a packet to workload WL1106-1, router R1108-1receives the packet and sends the packet via the stretched network to workload WL1106-1using layer 2 VPN112. This is because router R1108-1is connected to the stretched network and is configured to route packets to workloads on the stretched network via layer 2 VPN112. The above paths are asymmetric because one direction takes the layer 3 path and the other takes the layer 2 path. Using policy-based proximity routing, the route from workload WL1106-1to workload WL3106-3is changed from layer 3 routing to layer 2 forwarding. As can be seen, a path at402goes from WL1106-1to logical router DR1108-2. Logical router DR1108-2then forwards the packet via layer 2 VPN112to router R1108-1. Router R1108-1can then route the packet to workload WL3106-3. At404, the return path from workload WL3106-3to workload WL1106-1remains the same through the layer 2 VPN112. The packet goes from workload WL3106-3to router R1108-1, which forwards the packet directly to workload WL1106-1. That is, router R1108-1does not route or forward the packet to logical router DR1108-2. This is because workload WL1106-1is connected to the stretched network and can be reached by router R1108-1via layer 2 forwarding.
The paths are symmetric through layer 2 VPN112even though they may go through different network devices. The symmetry allows services to be performed at the edge of networks in remote site102and local site104.
FIG.5depicts a simplified flowchart500of a method for processing a packet received at logical router DR1108-2using policy-based proximity routing according to some embodiments. At502, logical router DR1108-2receives a packet from workload WL1106-1that includes a destination IP address 10.1.2.3 for workload WL3106-3and a destination MAC address of MAC2 for logical router DR1108-2. The use of the MAC address MAC2 forwards the packet to logical router DR1108-2first instead of router R1108-1. At504, logical router DR1108-2looks up the IP address for workload WL3106-3in local routing table116. If a route is found in local routing table116that includes the destination IP address, then this indicates that the workload is located within local site104. In some embodiments, a matching route may be an exact match or may be a match to a range of IP addresses for a /24 route. This process will be described in more detail below, but at508, logical router DR1108-2sends the packet locally within local site104without having the packet routed to remote site102. In some cases, logical router DR1108-2may not consider the default route when the packet is from a stretched network when attempting to find a match. This is because the default route may be set to be a default match if no other matches occur. For example, if the default route is 0.0.0.0/0, this route will always match the destination IP address, in which case a situation where a match is not found will never occur. Accordingly, in some embodiments, logical router DR1108-2first determines that this packet is from a stretched network. The stretched network may be identified based on the interface from which the packet is received or from metadata.
For example, the interface 10.1.1.1 is designated as being associated with a stretched network. Also, metadata may be set for the packet that indicates the packet is from a stretched network. If the packet is from a stretched network, logical router DR1108-2searches for a match of routes other than the default route. When no match is found for a route in local routing table116, at510, logical router DR1108-2looks up the destination IP address for workload WL3106-3in policy table118. At512, logical router DR1108-2determines if a route is found in policy table118that matches the destination IP address. If a route is not found, at514, logical router DR1108-2may perform default routing, which may route the packet via layer 3. However, if a route is found, logical router DR1108-2may route the packet based on an associated policy, such as by using layer 2 forwarding. For example, the IP address 10.1.2.3 matches the policy of "10.1.2.0/24 Allow MAC-R" in policy table118because the IP address is within the range of IP addresses defined by the rule. A policy may indicate that destination IP addresses of packets within this range of IP addresses are allowed and should be forwarded using layer 2 forwarding via the layer 2 connection between local site104and remote site102instead of via layer 3. To forward the packet via layer 2, logical router DR1108-2may perform some actions to mask the processing of the packet by logical router DR1108-2such that router R1108-1believes the packet is sent from workload WL1106-1. For example, at516, logical router DR1108-2preserves the source MAC address of workload WL1106-1and keeps the same time-to-live (TTL) value for the packet. However, logical router DR1108-2changes the destination MAC address of the packet to MAC-R. Performing these actions hides the processing of the packet at logical router DR1108-2. That is, the TTL value is not decremented, which hides the hop at logical router DR1108-2.
Also, keeping the source MAC address as workload WL1106-1(instead of changing the MAC address to an interface of logical router DR1108-2) causes router R1108-1to think the packet is from workload WL1106-1. Accordingly, the hop associated with processing the packet at logical router DR1108-2is not recorded. In other embodiments, the TTL value may be decremented. The TTL may be decremented if it is desirable to have a hint that one hidden hop is in the path. At518, logical router DR1108-2forwards the packet through layer 2 VPN112to router R1108-1. As discussed above, the MAC address MAC-R may represent the MAC address that is required for the packet to reach router R1108-1. In some examples, the MAC address MAC-R is the MAC address for router R1108-1. However, the MAC address MAC-R may be associated with L2C110and is translated to the MAC address of MAC1 of router R1108-1during forwarding of the packet. When router R1108-1receives the packet, router R1108-1routes the packet to workload WL3106-3in remote site102using the destination IP addresses. Accordingly, traffic from migrated workloads is forwarded through the layer 2 VPN112channel to remote site102. The processing of the packet to be forwarded through the layer 2 VPN112channel is performed by logical router DR1108-2as part of layer 3 handling using local routing table116. This does not change the processing at logical router DR1108-2because logical router DR1108-2has to consult local routing table116to determine if any packets should be forwarded locally. However, the packet received at router R1108-1from logical router DR1108-2appears as a layer 2 forwarding of the packet to remote site102. Also, performing policy-based proximity routing does not change the configuration of remote site102. For example, the routing from workload WL3106-3to workload WL1106-1follows a normal return path from router R1108-1to workload WL1106-1through layer 2 VPN channel112as discussed above. 
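The masking performed at 516 can be sketched as below: only the destination MAC is rewritten, while the source MAC and TTL are left untouched so that router R1 sees the packet as if it came straight from the source workload. The dictionary packet representation is an illustrative assumption:

```python
# Sketch of the masking step at 516 (packet fields are illustrative).
def mask_for_l2_forwarding(packet, mac_r):
    forwarded = dict(packet)
    forwarded["dst_mac"] = mac_r  # steer over the layer 2 VPN toward R1
    # src_mac and ttl intentionally unchanged: the DR1 hop stays hidden
    return forwarded

pkt = {"src_mac": "MAC-WL1", "dst_mac": "MAC2", "ttl": 64,
       "src_ip": "10.1.1.2", "dst_ip": "10.1.2.3"}
out = mask_for_l2_forwarding(pkt, "MAC-R")
print(out["dst_mac"], out["src_mac"], out["ttl"])
```

As the text notes, a variant could decrement the TTL to leave a hint that one hidden hop is in the path.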
In the return path, the packet from workload WL3106-3may bypass logical router DR1108-2as router R1108-1is connected to workload WL1106-1via the stretched network that includes workload WL1106-1.
Workload WL2to Workload WL1
Packets may also be sent locally between workloads in addition to being sent between local site104and remote site102.FIG.6depicts an example of traffic paths taken when sending packets between workload WL2106-2and workload WL1106-1according to some embodiments. Workload WL2106-2is on a non-stretched network in local site104and workload WL1106-1is on a stretched network. The /32 routes in local routing table116are used to locally route the packet instead of having the packet routed to remote site102and then back to local site104. When workload WL2106-2sends a packet with a destination IP address of 10.1.1.2 of workload WL1106-1and a destination MAC address of MAC2 for logical router DR1108-2, the packet is sent in a path602via logical router DR1108-2to workload WL1106-1. Using a /32 route in local routing table116, logical router DR1108-2determines that workload WL1106-1is located locally in local site104. Then, logical router DR1108-2forwards the packet to workload WL1106-1without sending the packet to remote site102. When workload WL1106-1sends a return packet to workload WL2106-2, the packet is also routed locally via logical router DR1108-2in a path604because workload WL2106-2is local in local site104.
FIG.7depicts a simplified flowchart700of the processing performed when routing packets locally to a stretched network according to some embodiments. At702, logical router DR1108-2receives a packet from workload WL2106-2with a destination address of 10.1.1.2 for workload WL1106-1and a destination MAC address of MAC2 for logical router DR1108-2. At704, logical router DR1108-2looks up the IP address for workload WL1106-1in local routing table116.
For example, logical router DR1108-2looks up the destination IP address 10.1.1.2 in local routing table116. As shown inFIG.2, local routing table116includes a /32 route of 10.1.1.2/32, which may have been added when workload WL1106-1migrated to local site104. At706, logical router DR1108-2determines if a route is found in local routing table116for the destination IP address. If a route is not found in local routing table116, at708, logical router DR1108-2may perform policy-based proximity routing or use a default route to route the packet. However, if a route is found in local routing table116, at710, logical router DR1108-2routes the packet locally. For example, logical router DR1108-2sends the packet to workload WL1106-1using the destination IP address without routing the packet to remote site102. The return path from workload WL1106-1to workload WL2106-2is processed normally by logical router DR1108-2as the network to which workload WL2106-2is connected is found in local routing table116and the packet is locally routed.
Route Processing from One Stretched Network to Another Stretched Network
FIG.8depicts an example of traffic paths when routing packets from a first stretched network to a second stretched network according to some embodiments. As shown, a second stretched network is assigned the IP address range of 10.0.0.0/24 and is stretched across remote site102and local site104. The second stretched network includes workload WL5106-5with an IP address of 10.0.0.100 in remote site102and workload WL6106-6with an IP address of 10.0.0.99 in local site104. The first stretched network and the second stretched network are connected via L2 VPN112. When workload WL1106-1sends a packet for workload WL5106-5, logical router DR1108-2receives the packet, and cannot find the IP address in local routing table116. After consulting policy table118, logical router DR1108-2finds a policy for the route 10.0.0.0/24.
Accordingly, logical router DR1108-2forwards the packet in a path802via layer 2 VPN112to router R1108-1similar to the processing described above with respect toFIGS.4and5, except that the packet is sent to a workload on another stretched network. Router R1108-1receives the packet and forwards the packet to workload WL5106-5on the connected stretched network. When workload WL5106-5sends a packet back to workload WL1106-1, the path at804is followed. The path sends the packet to router R1108-1, which then sends the packet via layer 2 VPN112to workload WL1106-1without going through logical router DR1108-2. This is similar to the processing discussed above with respect toFIGS.4and5, except that the packet originates from the stretched network instead of a non-stretched network. The above paths are also symmetric similar to the paths inFIG.4.
Packet Flow from Workload WL2to Workload WL4
FIG.9depicts an example of traffic paths when routing packets from workload WL2106-2on a non-stretched network in local site104to workload WL4106-4on a stretched network in remote site102according to some embodiments. Workload WL2106-2may send a packet that goes through the path at902to logical router DR1108-2. Logical router DR1108-2looks up the IP address in local routing table116. Because workload WL2106-2is on a non-stretched network, policy-based proximity routing does not apply to this packet. For example, logical router DR1108-2may be configured to not apply policy-based proximity routing to packets received from workloads on non-stretched networks. In this case, the default route is eligible to be matched and the default route is the matching route because the IP address of 10.1.1.3 is not found in a route in local routing table116. This is because workload WL4106-4is not located in local site104and has not had a /32 route added to local routing table116. Accordingly, logical router DR1108-2sends the packet via layer 3 VPN114in network124.
The packet is processed by layer 3 VPN router120and received at router R1108-1. Router R1108-1forwards the packet to workload WL4106-4. The same path is used when workload WL4106-4sends the packet back to workload WL2106-2. For example, at904, workload WL4106-4sends the packet to router R1108-1, which routes the packet via layer 3 because workload WL2106-2is on a non-stretched network in local site104(e.g., the destination IP address is not in the local routing table of router R1108-1). Layer 3 VPN router120routes the packet via layer 3 VPN114in network124to edge router122, which sends the packet to logical router DR1108-2. Logical router DR1108-2can then forward the packet to workload WL2106-2. The paths shown at902and904are also symmetric through L3 VPN114.
Conclusion
Accordingly, using policy-based proximity routing, traffic paths taken between local site104and remote site102are symmetric through either L2 VPN112or L3 VPN114. This is important when providing services that require symmetric routes between workloads, such as firewall services. The policies are used when communicating with workloads in remote site102to steer traffic from a workload on a stretched network to the layer 2 VPN. Also, /32 routes are added to local routing table116to forward packets sent within local site104without sending the packets to remote site102. The above packet processing can be performed without adjusting the configuration of remote site102. Additionally, when using policy-based proximity routing, the paths do not need to use the layer 3 VPN.
Site Aware Policy-Based Proximity Routing
A tenant may have multiple remote sites. In this case, using policy table118may not be possible because IP addresses in different remote sites for the same tenant may be the same or overlapping. Also, it is possible that a stretched network may stretch among local site104and multiple remote sites102.
To support multiple sites, local site104may use site aware policy-based proximity routing.FIG.10depicts a simplified system1000for performing site aware policy-based proximity routing according to some embodiments. System1000is similar to system100described above; however, multiple remote sites102-1to102-X are included. Remote sites102-1to102-X may be associated with the same tenant. In some embodiments, a stretched network, such as the network assigned the IP address range 10.1.1.0/24, may be stretched across multiple remote sites102and local site104. In addition, each remote site102may have respective stretched networks with local site104that are not stretched across the other sites. Because IP addresses may overlap between remote sites102and networks may stretch across multiple remote sites102, local site104cannot use a single policy table118for multiple sites as described above. Rather, local site104uses site specific policies1002. For example, each remote site102-1to102-X may have an associated site policy1002-1to1002-X.
FIG.11depicts a more detailed example of multi-site system1000according to some embodiments. Layer 3 connections between remote site102-1and remote site102-2with local site104are not shown, but each remote site may have a layer 3 VPN channel with local site104. A first stretched network assigned the IP address range of 10.1.1.0/24 may be stretched across remote site102-1, local site104, and remote site102-2. The stretched network may be stretched across a layer 2 VPN112-1between remote site102-1and local site104and a layer 2 VPN112-2between local site104and remote site102-2. A workload WL1106-1may have migrated to local site104from remote site102-1. Workload WL1106-1is assigned a site ID of #100, which corresponds to remote site102-1. Another workload WL8106-8may have been migrated from remote site102-2to local site104on the same stretched network as workload WL1106-1.
Workload WL8may be assigned the site ID #101to differentiate the workloads that are migrated from different remote sites. Logical router DR1108-2includes an IP address for an L3 interface of 10.1.1.1/32, which is the same IP address as a router R1in remote site102-1. Logical router DR1108-2includes a different MAC address of MAC2 compared to the MAC address of router R1108-1of MAC1. Router R2108-3in remote site102-2may also include the same IP address as logical router DR1108-2, but a different MAC address of MAC3. A second stretched network between remote site102-2and local site104is assigned the IP address range of 10.2.1.0/24. A workload WL7106-7has migrated from remote site102-2to local site104on the stretched network. Workload WL7106-7is assigned the site ID of #101and maintains the same IP address that was used in remote site102-2of 10.2.1.100. Logical router DR1108-2includes a layer 3 interface with the same IP address as another interface of router R2108-3(or another router) in remote site102-2of 10.2.1.1/32 and the MAC address of MAC2. Different configurations to implement the multiple stretched networks may be used.
FIG.12depicts examples of site policies1002according to some embodiments. A management plane may maintain the site-specific policies1002for site ID #100and site ID #101. Local routing table116may be similar to the routing table discussed above and include /32 routes for workloads that have been migrated to local site104. The /32 routes may be added for workloads that are migrated from all remote sites102. For example, /32 routes are added for workload WL1106-1(e.g., 10.1.1.2/32), workload WL8106-8(e.g., 10.1.1.4/32), workload WL7106-7(e.g., 10.2.1.100/32), and the logical interfaces for logical router DR1108-2(e.g., 10.1.1.1/32 and 10.2.1.1/32). Each remote site102may be associated with a unique site identifier which identifies the remote site, such as the identifiers site #100and site #101used for the two remote sites.
Additionally, site-specific policies for a site #100and a site #101are shown in site #100policy1002-1and site #101policy1002-2, respectively. Site #100policy1002-1includes the routes 10.1.1.0/24 and 10.1.2.0/24. These routes correspond to the IP address scope in remote site #100. The MAC address for router R1in site #100of MAC-R1is included in site #100policy1002-1. This indicates that logical router DR1108-2should send the packets using MAC address MAC-R1, which sends packets to router R1108-1. Site #101policy1002-2includes the routes 10.1.1.0/24 and 10.2.1.0/24. This includes the IP address scope for networks in site #101. The MAC address MAC-R2for a router R2is included in the site #101policy. This indicates that logical router DR1108-2should send the packets using MAC address MAC-R2, which sends packets to router R2108-3. Logical router DR1108-2may use the site-specific policies as follows.FIG.13depicts a simplified flowchart1300of a method for performing site-aware policy-based proximity routing according to some embodiments. At1302, logical router DR1108-2receives a packet for workload WL3106-3in remote site #100with a MAC address of MAC2. At1304, logical router DR1108-2looks up the IP address for workload WL3106-3in local routing table116. If the route is found, at1308, logical router DR1108-2routes the packet locally. However, if the route is not found, logical router DR1108-2determines that the IP address 10.1.2.3 is not in the local site. Because workload WL1106-1is on a stretched network, logical router DR1108-2may perform site-aware policy-based proximity routing. However, because multiple sites may be connected to local site104, at1310, logical router DR1108-2may determine if a site ID is associated with the packet. Logical router DR1108-2may determine the site ID in different ways. 
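As a rough illustration, the local routing table and the site-specific policies just described can be modeled as plain data structures with a lookup helper for each. The structures, names, and helper functions below are hypothetical; the addresses and MAC labels mirror the examples above:

```python
import ipaddress

# Local routing table: /32 host routes for workloads migrated to the local
# site, plus the logical router's own interfaces (illustrative values).
LOCAL_ROUTES = {
    "10.1.1.2/32",    # workload WL1, migrated from remote site #100
    "10.1.1.4/32",    # workload WL8, migrated from remote site #101
    "10.2.1.100/32",  # workload WL7, migrated from remote site #101
    "10.1.1.1/32",    # logical router DR1 interface
    "10.2.1.1/32",    # logical router DR1 interface
}

# Site-specific policies: per-site allowed prefixes and the MAC address of
# the router in that remote site to which matching packets are forwarded.
SITE_POLICIES = {
    100: {"routes": ["10.1.1.0/24", "10.1.2.0/24"], "mac": "MAC-R1"},
    101: {"routes": ["10.1.1.0/24", "10.2.1.0/24"], "mac": "MAC-R2"},
}

def local_route_lookup(dst_ip):
    """Return True if a local route covers dst_ip in the local site."""
    addr = ipaddress.ip_address(dst_ip)
    return any(addr in ipaddress.ip_network(r) for r in LOCAL_ROUTES)

def site_policy_lookup(site_id, dst_ip):
    """Return the next-hop router MAC if the site policy allows dst_ip, else None."""
    policy = SITE_POLICIES.get(site_id)
    if policy is None:
        return None
    addr = ipaddress.ip_address(dst_ip)
    if any(addr in ipaddress.ip_network(r) for r in policy["routes"]):
        return policy["mac"]
    return None
```

With these tables, a destination such as 10.1.2.3 misses the local routing table but matches site #100's policy, yielding MAC-R1 as the next hop.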
For example, when a stretched network does not stretch between multiple remote sites, the site ID may be associated with the logical interface of logical router DR1108-2that connects to the stretched network in which the packet is received. For example, one stretched network is between remote site102-2and local site104. If the stretched network is assigned the IP address range of 10.2.1.0/24, logical router DR1108-2may determine the site ID when the packet is received on the interface with the IP address 10.2.1.1/32. The site ID of site #101may be associated with the logical interface in this case because the network is only stretched to remote site102-2. However, if a stretched network is stretched across multiple remote sites, then the logical interface for that stretched network may not be used to identify the site because packets may be received from multiple remote sites on the same interface. In that case, other methods of identifying the site may be used. For example, workloads on the stretched network may be tagged with a site ID. For example, workload WL1106-1is tagged with a site ID #100when the workload migrates from remote site102-1. Another workload WL8106-8may be tagged with the site ID #101when the workload is migrated from remote site102-2. The tagging of workloads when they migrate differentiates the workloads that are migrated from different remote sites. In some embodiments, the site ID may be carried in metadata for the packets sent from each respective workload. Metadata may be generated separately from the packet header or packet body. In other embodiments, a workload may insert the site ID in the packet, such as in a packet header. Logical router DR1108-2can then use the metadata to determine the site ID. Other methods may also be used; for example, a table may associate workloads with their associated sites. The use of metadata and the table may be used when the stretched network connects multiple remote sites or only a single remote site with local site104. 
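The site-ID determination described above (interface-based for networks stretched to a single remote site, tag- or metadata-based otherwise) can be sketched as follows. All table contents and the function itself are illustrative assumptions, not taken from an actual implementation:

```python
# Interface IP -> site ID, populated only for networks stretched to exactly
# one remote site (illustrative values from the examples above).
INTERFACE_SITE_ID = {
    "10.2.1.1": 101,
}

# Workload source IP -> site ID tag, recorded when the workload migrated.
WORKLOAD_SITE_TAG = {
    "10.1.1.2": 100,  # WL1, migrated from remote site #100
    "10.1.1.4": 101,  # WL8, migrated from remote site #101
}

def determine_site_id(rx_interface_ip, src_ip, metadata=None):
    """Return the site ID associated with a packet, or None if there is none."""
    # 1. Interface-based: the network is stretched to exactly one remote site.
    site = INTERFACE_SITE_ID.get(rx_interface_ip)
    if site is not None:
        return site
    # 2. Metadata-based: the sending workload carried its site ID with the packet.
    if metadata and "site_id" in metadata:
        return metadata["site_id"]
    # 3. Table-based: look up the tag recorded when the workload migrated.
    return WORKLOAD_SITE_TAG.get(src_ip)
```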
Referring back toFIG.13, if a site ID is not associated with a packet, at1312, logical router DR1108-2looks up the IP address for workload WL3106-3in the default site policy. In this case, the processing described above for a single remote site may be used. However, if a site ID is associated with the packet, at1314, logical router DR1108-2looks up the destination IP address for workload WL3106-3in the specific site policy, in this case site #100policy table1002-1. At1316, logical router DR1108-2determines if a route is found in site #100policy table1002-1. If the route is not found, logical router DR1108-2may perform default routing, which may route the packet via layer 3. However, if a route is found, logical router DR1108-2may route the packet using layer 2 forwarding. For example, the IP address 10.1.2.3 matches the policy of “10.1.2.0/24 Allow MAC-R1” in site #100policy table1002-1, which indicates that the destination IP address of the packet is within a range of IP addresses that are allowed and that the MAC address that is used forwards the packet to router R1108-1of remote site102-1instead of router R2108-3of remote site102-2. The policy of “Allow” indicates that this packet is to be sent via the layer 2 connection between local site104and remote site102-1. To forward the packet via layer 2, at1318, logical router DR1108-2may perform actions similar to those described above to mask the processing of the packet by logical router DR1108-2such that router R1108-1believes the packet is sent from workload WL1106-1, such as preserving the source MAC address, keeping the same TTL value, and changing the destination MAC address to the MAC address of the router in the specific site (here, MAC-R1). At1320, logical router DR1108-2forwards the packet through layer 2 VPN112-1to router R1108-1in remote site102-1. 
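The decision flow of flowchart1300can be summarized in a short sketch: try the local routing table first, then the site-specific policy, then fall back to default layer 3 routing. The function and its return shape are hypothetical; the masking step preserves the source MAC and TTL as described above:

```python
import ipaddress

# Illustrative tables mirroring the example addresses in the text.
LOCAL_ROUTES = {"10.1.1.2/32", "10.1.1.4/32", "10.2.1.100/32"}
SITE_POLICIES = {
    100: {"routes": ["10.1.1.0/24", "10.1.2.0/24"], "router_mac": "MAC-R1"},
    101: {"routes": ["10.1.1.0/24", "10.2.1.0/24"], "router_mac": "MAC-R2"},
}

def route_packet(dst_ip, site_id, src_mac, ttl):
    """Sketch of the logical router's proximity-routing decision."""
    addr = ipaddress.ip_address(dst_ip)
    # A matching local route means the destination is in the local site.
    if any(addr in ipaddress.ip_network(r) for r in LOCAL_ROUTES):
        return {"action": "route_local"}
    # With a site ID, consult that site's policy table.
    policy = SITE_POLICIES.get(site_id)
    if policy and any(addr in ipaddress.ip_network(r) for r in policy["routes"]):
        # Mask the routing hop: preserve the source MAC and TTL, and set the
        # destination MAC to the router in the identified remote site.
        return {
            "action": "forward_l2_vpn",
            "src_mac": src_mac,              # unchanged, as if sent by the workload
            "dst_mac": policy["router_mac"],
            "ttl": ttl,                      # not decremented
        }
    # No matching policy: default routing over layer 3.
    return {"action": "route_l3_default"}
```

For the example packet to 10.1.2.3 with site ID #100, the sketch forwards over the layer 2 VPN toward MAC-R1 without touching the source MAC or TTL.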
When router R1108-1receives the packet, router R1108-1routes the packet to workload WL3106-3in remote site102-1. In the return path, the packet from workload WL3106-3may bypass logical router DR1108-2as router R1108-1includes workload WL1106-1in its local routing table and is connected to the stretched network that includes workload WL1106-1. Logical router DR1108-2may also receive a packet from workload WL8106-8for a workload in remote site #101. Logical router DR1108-2determines the site ID associated with workload WL8106-8is site #101. In this case, logical router DR1108-2uses site #101policy1002-2to perform policy-based proximity routing to forward the packet to the workload in site #101. A similar process is used as described above. However, the MAC address is MAC-R2, which forwards the packet to router R2108-3. In another scenario, logical router DR1108-2may also receive a packet from workload WL7106-7for a workload in remote site #101. The packet is received at the interface with the IP address 10.2.1.1/32. Logical router DR1108-2determines the site ID associated with workload WL7106-7is site #101. For example, the site ID may be associated with the interface in this case because the stretched network is between local site104and remote site102-2, and not other remote sites. Alternatively, logical router DR1108-2may use metadata associated with the packet to determine the site ID. In this case, logical router DR1108-2uses site #101policy1002-2to perform policy-based proximity routing to forward the packet to the workload in site #101. This process is similar to the process to forward the packet to workload WL3106-3in remote site #100.
Conclusion
Accordingly, a tenant may have multiple remote sites and still use policy-based proximity routing. A site ID may be used to identify a site or a policy. Then, logical router DR1108-2may apply the policy for the site. 
Embodiments
In some embodiments, a method comprising: setting, by a first network device in a first site, a first Internet Protocol (IP) address for an interface of the first network device to a value of a second IP address of a second network device in a second site; adding a set of policies in a policy table to cover a set of IP addresses used in the second site; adding a specific route for a third IP address associated with a first workload that is migrated from the second site to the first site into a routing table for the first network device, wherein the first workload is on a stretched network between the first site and the second site that is coupled via a layer 2 channel, wherein: the specific route in the routing table configures the first network device to send a first packet from a second workload local to the first site to the first workload, and the policy table configures the first network device to send a second packet from the first workload to a third workload in the second site via the layer 2 channel between the first site and the second site when an IP address for the third workload does not match an eligible route in the routing table. In some embodiments, the first network device is configured to review the policy table after not finding the eligible route in the routing table. In some embodiments, the first network device includes a first Media Access Control (MAC) address and the second network device includes a second MAC address, and the second packet is sent to the IP address of the third workload and a MAC address associated with the first MAC address of the first network device. In some embodiments, the set of IP addresses found in the second site are for networks connected to the second network device in the second site. 
In some embodiments, the first network device keeps a MAC address for the first workload as a source MAC address for the second packet and keeps a same time to live value for the second packet when sending the second packet via the layer 2 channel. In some embodiments, the second packet is sent to the second network device, and the second network device sends the packet to the third workload in the second site. In some embodiments, the third workload sends a third packet for the first workload, the second network device sends the third packet to the first workload via the layer 2 channel, and paths for the second packet and the third packet are symmetric in that the paths go through the layer 2 channel. In some embodiments, a non-transitory computer-readable storage medium containing instructions, that when executed, control a computer system to be operable for: setting, by a first network device in a first site, a first Internet Protocol (IP) address for an interface of the first network device to a value of a second IP address of a second network device in a second site; adding a set of policies in a policy table to cover a set of IP addresses used in the second site; adding a specific route for a third IP address associated with a first workload that is migrated from the second site to the first site into a routing table for the first network device, wherein the first workload is on a stretched network between the first site and the second site that is coupled via a layer 2 channel, wherein: the specific route in the routing table configures the first network device to send a first packet from a second workload local to the first site to the first workload, and the policy table configures the first network device to send a second packet from the first workload to a third workload in the second site via the layer 2 channel between the first site and the second site when an IP address for the third workload does not match an eligible route in the routing table. 
In some embodiments, the first network device is configured to review the policy table after not finding the eligible route in the routing table. In some embodiments, the first network device includes a first Media Access Control (MAC) address and the second network device includes a second MAC address, and the second packet is sent to the IP address of the third workload and a MAC address associated with the first MAC address of the first network device. In some embodiments, the set of IP addresses found in the second site are for networks connected to the second network device in the second site. In some embodiments, the first network device keeps a MAC address for the first workload as a source MAC address for the second packet and keeps a same time to live value for the second packet when sending the second packet via the layer 2 channel. In some embodiments, the second packet is sent to the second network device, and the second network device sends the packet to the third workload in the second site. In some embodiments, the third workload sends a third packet for the first workload, the second network device sends the third packet to the first workload via the layer 2 channel, and paths for the second packet and the third packet are symmetric in that the paths go through the layer 2 channel. 
In some embodiments, a method comprising: receiving, by a first network device, a packet from a first workload that is located in a network that is stretched across a first site and a second site, wherein the first network device includes an interface associated with an Internet Protocol (IP) address that is a same IP address as a second network device in the second site, and the packet includes a destination IP address for a device in the second site; determining that the destination IP address of the packet does not match an eligible route in a routing table for the first network device; reviewing a policy table to determine that the destination IP address matches a route for a policy in the policy table, wherein the policy table includes routes to cover a set of IP addresses used in the second site; and based on the determining the route in the policy table, sending the packet through a layer 2 channel to the second network device in the second site based on the policy, wherein the second network device sends the packet to the device. In some embodiments, the packet comprises a first packet, the device sends a second packet to the first workload, and the second network device sends the second packet to the first workload via the layer 2 channel. In some embodiments, the first network device keeps a MAC address for the first workload as a source MAC address for the packet and keeps a same time to live value for the packet when sending the packet via the layer 2 channel. In some embodiments, the packet comprises a first packet, the method further comprising: receiving a second packet from a third workload on a non-stretched network in the first site; determining that the IP address of the second packet matches an eligible route in the routing table for the first network device; and based on the determining the match of the eligible route in the routing table, sending the second packet locally to the first workload. 
In some embodiments, the eligible route in the routing table is a specific route for the IP address associated with the first workload. In some embodiments, the method further comprising: when the first workload migrates to the first site, adding a specific route for the IP address associated with the first workload in the routing table. Many variations, modifications, additions, and improvements are possible, regardless of the degree of virtualization. The virtualization software can therefore include components of a host, console, or guest operating system that performs virtualization functions. Plural instances may be provided for components, operations or structures described herein as a single instance. Finally, boundaries between various components, operations and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of the disclosure(s). In general, structures and functionality presented as separate components in exemplary configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. Some embodiments described herein can employ various computer-implemented operations involving data stored in computer systems. For example, these operations can require physical manipulation of physical quantities—usually, though not necessarily, these quantities take the form of electrical or magnetic signals, where they (or representations of them) are capable of being stored, transferred, combined, compared, or otherwise manipulated. Such manipulations are often referred to in terms such as producing, identifying, determining, comparing, etc. Any operations described herein that form part of one or more embodiments can be useful machine operations. 
Further, one or more embodiments can relate to a device or an apparatus for performing the foregoing operations. The apparatus can be specially constructed for specific required purposes, or it can be a general purpose computer system selectively activated or configured by program code stored in the computer system. In particular, various general purpose machines may be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations. The various embodiments described herein can be practiced with other computer system configurations including handheld devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like. Yet further, one or more embodiments can be implemented as one or more computer programs or as one or more computer program modules embodied in one or more non-transitory computer readable storage media. The term non-transitory computer readable storage medium refers to any data storage device that can store data which can thereafter be input to a computer system. The non-transitory computer readable media may be based on any existing or subsequently developed technology for embodying computer programs in a manner that enables them to be read by a computer system. Examples of non-transitory computer readable media include a hard drive, network attached storage (NAS), read-only memory, random-access memory, flash-based nonvolatile memory (e.g., a flash memory card or a solid state disk), a CD (Compact Disc) (e.g., CD-ROM, CD-R, CD-RW, etc.), a DVD (Digital Versatile Disc), a magnetic tape, and other optical and non-optical data storage devices. The non-transitory computer readable media can also be distributed over a network coupled computer system so that the computer readable code is stored and executed in a distributed fashion. 
These and other variations, modifications, additions, and improvements may fall within the scope of the appended claim(s). As used in the description herein and throughout the claims that follow, “a”, “an”, and “the” includes plural references unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise. The above description illustrates various embodiments of the present disclosure along with examples of how aspects of the present disclosure may be implemented. The above examples and embodiments should not be deemed to be the only embodiments, and are presented to illustrate the flexibility and advantages of the present disclosure as defined by the following claims. Based on the above disclosure and the following claims, other arrangements, embodiments, implementations and equivalents may be employed without departing from the scope of the disclosure as defined by the claims.
11943144
DETAILED DESCRIPTION
As is set forth in greater detail below, embodiments of the present disclosure are generally directed to systems and methods for providing dynamic, adaptive, real-time traffic rules for applications employing a microservice architecture. Embodiments of the present disclosure can utilize historical and real-time traffic information, as well as real-time computing resource information, to determine flow and retry rules that are to be enforced for one or more services associated with an application. The flow and retry rules can be continuously and iteratively updated based on changing real-time traffic and computing conditions. Advantageously, embodiments of the present disclosure can mitigate failures encountered by client devices in response to application programming interface (API) calls to the various services. According to exemplary embodiments of the present disclosure, one or more machine learning models may be trained to determine flow and retry rules to be enforced for one or more services associated with an application. For example, the trained machine learning model may determine a historical traffic prediction based on historical traffic information accessed from a historical traffic data store. Alternatively, a historical traffic prediction for a certain time period (e.g., specific date, week, holiday, etc.) may be generated and stored and maintained in a historical traffic prediction datastore. The historical traffic information for the relevant time period and real-time traffic information, as well as real-time computing information, may be provided as inputs to the trained machine learning model to generate a traffic prediction, which may be used to generate flow and retry rules for one or more services. 
Accordingly, the real-time traffic and computing information may be continuously monitored and provided to the trained machine learning system such that the flow and retry rules can be continuously and dynamically updated based on the real-time traffic and computing information. According to one aspect of the present disclosure, embodiments of the present disclosure may be implemented on the networking layer, where a controller may communicate with virtual network elements associated with each of the various services employed in a microservices architecture to obtain real-time traffic and computing information associated with each of the services. The real-time traffic and computing information, as well as historical traffic prediction information, may be provided to one or more trained machine learning models to determine a traffic prediction. The controller may, based on the traffic prediction generated by the trained machine learning model, determine flow and retry rules for each of the various services, and may communicate with the virtual network elements to enforce the flow and retry rules for the various services. According to another aspect of the present disclosure, embodiments of the present disclosure may be implemented on the application layer, where a controller may communicate with an agent associated with each of the various services employed in a microservices architecture to obtain real-time traffic and computing information associated with each of the services. The real-time traffic and computing information, as well as historical traffic prediction information, may be provided to one or more trained machine learning models to determine a traffic prediction. The controller may, based on the traffic prediction generated by the trained machine learning model, determine flow and retry rules for each of the various services, and may communicate with the agents to enforce the flow and retry rules for the various services. 
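The control loop that is common to these networking-layer and application-layer variants might be sketched as below. The element interface, the model, and the rule shape are all stand-in assumptions; in particular, the headroom heuristic is a toy substitute for the trained machine learning model:

```python
class NetworkElement:
    """Stand-in for a virtual network element (or agent) of one service."""
    def __init__(self, tps, cpu_pct):
        self._tps, self._cpu_pct = tps, cpu_pct
        self.rules = None
    def get_metrics(self):
        # Real-time traffic (TPS) and computing (CPU %) information.
        return {"tps": self._tps, "cpu_pct": self._cpu_pct}
    def enforce(self, rules):
        self.rules = rules

class HeadroomModel:
    """Toy stand-in for the trained model: scale expected TPS by CPU headroom."""
    def predict(self, historical, realtime):
        prediction = {}
        for name, m in realtime.items():
            baseline = max(m["tps"], historical.get(name, m["tps"]))
            headroom_pct = max(0, 100 - m["cpu_pct"])
            # Integer math keeps the toy prediction deterministic.
            prediction[name] = baseline * (100 + headroom_pct) // 100
        return prediction

def control_cycle(elements, model, historical_prediction):
    """One iteration of the continuous, dynamic rule-update loop."""
    # 1. Collect real-time traffic and computing info from every element.
    realtime = {name: el.get_metrics() for name, el in elements.items()}
    # 2. Combine with the historical prediction to obtain a traffic prediction.
    prediction = model.predict(historical_prediction, realtime)
    # 3. Derive flow (max TPS) and retry rules, and push them back out.
    rules = {}
    for name, predicted_tps in prediction.items():
        rules[name] = {"max_tps": predicted_tps, "max_retries": 3}
        elements[name].enforce(rules[name])
    return rules
```

Running `control_cycle` repeatedly (e.g., on a timer) gives the continuous, iterative rule updates the text describes; only the transport between controller and elements differs across the variants.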
According to yet another aspect of the present disclosure, embodiments of the present disclosure may be implemented on the application layer with a cache memory. In such an exemplary implementation, each of the various services employed in a microservices architecture may communicate with the cache to log real-time traffic and computing information, as well as pull flow and retry rules for each of the services. Further, a controller may communicate with the cache to obtain real-time traffic and computing information associated with each of the services. The real-time traffic and computing information, as well as historical traffic prediction information, may be provided to one or more trained machine learning models to determine a traffic prediction. The controller may, based on the traffic prediction generated by the trained machine learning model, determine flow and retry rules for each of the various services, and may communicate with the cache to enforce the flow and retry rules. According to yet another aspect of the present disclosure, embodiments of the present disclosure may be implemented on the application layer, where a controller may communicate with a cache memory and with an agent associated with each of the various services employed in a microservices architecture. In such an exemplary implementation, the controller may obtain real-time traffic and computing information associated with each of the services from either the agents associated with each of the various services or the cache memory. The real-time traffic and computing information, as well as historical traffic prediction information, may be provided to one or more trained machine learning models to determine a traffic prediction. 
The controller may, based on the traffic prediction generated by the trained machine learning model, determine flow and retry rules for each of the various services, and may communicate with either the agents associated with each of the various services or the cache memory to enforce the flow and retry rules. According to yet another aspect of the present disclosure, embodiments of the present disclosure may be implemented on the networking layer with a cache memory. In such an exemplary implementation, a controller may communicate with virtual network elements associated with each of the various services employed in a microservices architecture to obtain real-time traffic and computing information associated with each of the services. The real-time traffic and computing information, as well as historical traffic prediction information, may be provided to one or more trained machine learning models to determine a traffic prediction. The controller may, based on the traffic prediction generated by the trained machine learning model, determine flow and retry rules for each of the various services, and may communicate with the cache memory to enforce the flow and retry rules for the various services. According to yet another aspect of the present disclosure, embodiments of the present disclosure may be implemented such that the various services that make up the application may query/poll/gossip with each other to obtain real-time traffic and computing information associated with each of the services. The real-time traffic and computing information, as well as historical traffic prediction information, may be provided to one or more trained machine learning models to determine a traffic prediction. The traffic prediction may be utilized by each of the services in making calls to one or more upstream services. FIG.1Ais an illustration of an exemplary computing environment100, according to exemplary embodiments of the present disclosure. 
As shown inFIG.1A, computing environment100may include one or more client devices102,104, and/or106, also referred to as user devices, for connecting over network108to access an online application executing on remote computing resources110. Remote computing resource(s)110, and each of the other computing resources discussed herein, do not require end-user knowledge of the physical premises and configuration of the system that delivers the services. Common expressions associated with these remote computing resource(s)110include “on-demand computing,” “software as a service (SaaS),” “platform computing,” “network-accessible platform,” “cloud services,” “data centers,” and so forth. Example components of a server that may be, or be included in, computing resources110are discussed below with respect toFIG.5. Client devices102,104,106and/or remote computing resources110may communicate via wired and/or wireless connections to network108. Client devices102,104, and/or106may include any type of computing device, such as a smartphone, tablet, laptop computer, desktop computer, wearable, etc., and network108may include any wired or wireless network (e.g., the Internet, cellular, satellite, Bluetooth, Wi-Fi, etc.) that can facilitate communications between client devices102,104, and/or106and the online application executing on remote computing resources110. The online application executing on remote computing resources110may include one or more applications that may be implemented using a micro-services architecture (e.g., networked e-commerce platform, social media platform, advertising platform, etc.). Accordingly, the online application executing on remote computing resources110may be implemented using a micro-service architecture such that it includes a collection of independent services where each service performs an application process as a service. 
Upon execution of the online application executing on remote computing resources110, calls may be placed (e.g., via an API) to the various components such that each of the various services may perform a respective application process. Further, the various services may be configured such that certain services may have dependencies to one or more other upstream services. Accordingly, the services may place calls (e.g., via an API) to the upstream services on which they depend. According to exemplary embodiments of the present disclosure, the traffic being handled by the various services in connection with the online application executing on remote computing resources110may be monitored and managed by a traffic management service so that certain settings, parameters, and/or configurations (e.g., traffic rules) may be determined to manage the traffic handled by the services that make up the online application. For example, real-time traffic information, real-time computing information, and historical traffic information may be processed by one or more trained machine learning models of the traffic management service to generate a traffic prediction. Based on the traffic prediction, traffic and/or retry rules/configurations may be determined to dynamically manage the traffic among and between the various services. Accordingly, the rate at which calls are made to one or more upstream services can be managed based on the traffic and/or retry rules/configurations determined based on the real-time traffic information, real-time computing information, and historical traffic information. This can advantageously dynamically increase the capacity of transactions handled by the various components and mitigate failures in response to calls made by the various components. 
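To make the flow and retry behavior concrete, the sketch below models a per-second transaction cap with a retry policy for throttled calls. The rule classes and window handling are simplified assumptions, not the disclosed implementation:

```python
class FlowRule:
    """Flow rule: maximum transactions per second for a service."""
    def __init__(self, max_tps):
        self.max_tps = max_tps

class RetryRule:
    """Retry rule: how many times a rejected call may be re-attempted."""
    def __init__(self, max_attempts):
        self.max_attempts = max_attempts

class ThrottledService:
    """Counts calls within a one-second window against the flow rule."""
    def __init__(self, flow_rule):
        self.flow_rule = flow_rule
        self.calls_this_second = 0
    def call(self):
        if self.calls_this_second >= self.flow_rule.max_tps:
            return "throttled"  # call exceeds the enforced max TPS
        self.calls_this_second += 1
        return "ok"
    def tick(self):
        """Advance to the next one-second window."""
        self.calls_this_second = 0

def call_with_retries(service, retry_rule):
    """Re-attempt a throttled call in later windows, up to max_attempts."""
    for attempt in range(retry_rule.max_attempts):
        result = service.call()
        if result == "ok":
            return result, attempt + 1
        service.tick()  # stand-in for waiting out a backoff interval
    return "failed", retry_rule.max_attempts
```

In the described system, `max_tps` and `max_attempts` would not be static: the controller would rewrite them each cycle from the traffic prediction.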
According to an exemplary implementation, historical information114may be stored and maintained in one or more data store(s)112and may include historical traffic information and/or historical traffic prediction information. According to certain aspects, historical traffic prediction information may be determined (e.g., using a trained machine learning model) based on historical traffic information and may correspond to certain time periods (e.g., specific dates, specific weeks of the year, specific holidays (e.g., Black Friday, Cyber Monday, New Year's Day, etc.), specific months, specific time periods (e.g., morning, afternoon, evening, overnight, etc.), specific seasons, and the like). Further, the historical traffic predictions may be periodically updated as more recent historical traffic information becomes available. FIG.1Bis a block diagram illustrating an exemplary computing environment120, according to exemplary embodiments of the present disclosure. FIG.1Billustrates an exemplary implementation showing exemplary computing environment120where client device122may execute an application employing a microservices architecture (e.g., the online application executing on remote computing resources110), according to exemplary embodiments of the present disclosure. Rather than accessing services126directly, client device122may access services126-1,126-2, through126-N via API gateway124. For example, client device122may initiate a call to API gateway124, which may initiate calls to the appropriate services. As illustrated inFIG.1B, client device122may initiate a call to API gateway124, which may initiate calls to Service A126-1and Service B126-2. Additionally, Service B126-2may have an upstream dependency on upstream Service N126-N. Accordingly, Service B126-2may initiate a call (e.g., via an API) to Service N126-N. 
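The call pattern of FIG.1B (client to API gateway, gateway to services, service to upstream dependency) can be sketched as a simple dispatch. The registry mechanism and service names are illustrative only:

```python
# Registry of service handlers, keyed by hypothetical service name.
SERVICES = {}

def service(name, upstream=None):
    """Register a toy service that optionally calls one upstream dependency."""
    def handler():
        calls = [name]
        if upstream:
            # Service-to-service call, e.g. Service B calling Service N.
            calls.extend(SERVICES[upstream]())
        return calls
    SERVICES[name] = handler
    return handler

service("service_n")                         # upstream dependency
service("service_a")
service("service_b", upstream="service_n")   # depends on service_n

def api_gateway(request_services):
    """Dispatch a client call to each requested service (and its upstreams)."""
    trace = []
    for name in request_services:
        trace.extend(SERVICES[name]())
    return trace
```

A single client request through the gateway thus fans out into a trace of service calls, which is exactly where per-service flow and retry rules get applied.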
AlthoughFIG.1Billustrates a simplified implementation where only Service B126-2has an upstream dependency to Service N126-N, the microservice architecture may include any number of services, upstream dependencies, etc. In the implementation illustrated inFIG.1B, as calls are made to each service126, traffic and retry rules and configurations may be enforced to ensure that none of services126are overloaded. The traffic rules and configurations may establish maximum transactions per second (TPS) that may be handled by each respective service126. Accordingly, calls to one or more of services126that exceed the traffic rules and configurations being enforced may result in a failure in response to the call, and the retry rules and configurations may be utilized to attempt the call again to the respective service. According to exemplary embodiments of the present disclosure, the traffic and retry rules and configurations may be determined based on real-time traffic information, real-time computing information, and historical traffic information associated with each of services126. For example, real-time traffic information, real-time computing information, and historical traffic information may be provided as inputs to one or more trained machine learning models, which may generate a traffic prediction based on the real-time traffic information, real-time computing information, and historical traffic information. Based on the traffic prediction generated by the trained machine learning model, traffic and/or retry rules and configurations may be determined and established for each of services126. The traffic and/or retry rules and configurations may then be enforced for each of services126. Exemplary implementations are described in further detail herein in connection withFIGS.2A-2E and4. FIGS.2A-2Eare block diagrams illustrating exemplary traffic management services200,220,240,260, and280, according to exemplary embodiments of the present disclosure. 
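The interplay between a maximum-TPS traffic rule and a retry rule described above can be sketched as follows. This is a minimal illustration rather than the disclosed implementation: the `TpsLimiter` and `call_with_retries` names, the token-bucket approximation of a TPS limit, and the exponential-backoff retry policy are all assumptions introduced for the example.

```python
import time


class TpsLimiter:
    """Token-bucket approximation of a maximum-TPS traffic rule (hypothetical)."""

    def __init__(self, max_tps):
        self.max_tps = max_tps
        self.tokens = float(max_tps)  # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at max_tps.
        self.tokens = min(self.max_tps, self.tokens + (now - self.last) * self.max_tps)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # call exceeds the traffic rule and results in a failure


def call_with_retries(limiter, make_call, max_retries=3, base_backoff=0.01):
    """Retry rule: reattempt a rate-limited call with exponential backoff."""
    for attempt in range(max_retries + 1):
        if limiter.allow():
            return make_call()
        time.sleep(base_backoff * (2 ** attempt))
    raise RuntimeError("call failed after retries")
```

A call that exceeds the enforced limit fails immediately, and the retry rule governs how (and how often) the caller reattempts it.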
As shown inFIG.2A, exemplary traffic management service200may include controller202, which may communicate with network elements212associated with services210of an application implemented using a microservices architecture to obtain real-time traffic information (e.g., from network elements212) associated with services210. Accordingly, services210may make up the collection of independent services that form the application and may each perform an application process. Additionally, controller202may also communicate with resource manager204, traffic prediction model206, and historical information datastore208. According to certain aspects of the present disclosure, traffic management service200may be implemented on the network layer, and controller202may include a software defined network (SDN) controller. Additionally, traffic prediction model206may communicate with historical information datastore208. Historical information datastore208may be configured to store and maintain historical traffic information associated with services210and/or historical traffic prediction information associated with services210. Historical traffic information may include actual traffic information (e.g., TPS, packets per second, etc.) for certain periods of time (e.g., specific dates, specific weeks of the year, specific holidays (e.g., Black Friday, Cyber Monday, New Year's Day, etc.), specific months, specific time periods (e.g., morning, afternoon, evening, overnight, etc.), specific seasons, and the like) and historical prediction information may include a traffic prediction for a specific period of time based on the historical traffic information. 
For example, at least a portion of the historical traffic information may be processed by a trained machine learning system (e.g., trained traffic prediction model206or another trained machine learning model) to generate traffic predictions for a period of time (e.g., a specific date, a specific week of the year, a specific holiday (e.g., Black Friday, Cyber Monday, New Year's Day, etc.), a specific month, a specific time period (e.g., morning, afternoon, evening, overnight, etc.), specific seasons, and the like) based on the historical traffic information. The historical prediction information may be periodically updated (e.g., reprocessed by the trained machine learning model) for one or more time periods as additional historical traffic information becomes available. For example, controller202may provide the real-time traffic information associated with services210that is obtained from network elements212to historical information datastore208, which may be stored and maintained in historical information datastore208as additional historical traffic information. Accordingly, historical information datastore208may store and maintain historical traffic information associated with services210and/or historical traffic prediction information associated with services210. In operation, controller202may obtain real-time traffic information (e.g., TPS, packets per second, etc.) from network elements212. Network elements212may include, for example, a virtual switch, virtual router, virtual bridge, virtual network interface card, and the like and can provide current real-time traffic information to controller202. In exemplary implementations where traffic management service200is implemented in a virtual private cloud, network elements212may include elastic network interfaces (ENI) and traffic rules may be enforced via the ENIs and/or virtual private cloud endpoints and/or endpoint interfaces. 
Additionally, controller202may also obtain real-time computing information associated with services210from resource manager204. For example, controller202may obtain processor usage information, memory usage information, and the like from resource manager204, which may monitor and obtain computing information from the various components on the network (e.g., all hosts, virtual machines, containers, etc.). As shown inFIG.2A, the real-time traffic information and the real-time computing information may be provided to traffic prediction model206. Traffic prediction model206may include one or more trained machine learning models that have been trained to generate traffic predictions based on historical traffic information for the relevant time period, real-time traffic information, and real-time computing information. Accordingly, traffic prediction model206may determine a traffic prediction based on historical traffic information obtained for the relevant time period from historical information datastore208, as well as the real-time traffic information and the real-time computing information obtained from controller202, and provide the traffic prediction to controller202. The traffic prediction may include a predicted TPS for each service210. Based on the traffic prediction generated by traffic prediction model206, controller202may determine one or more traffic rules, which may specify metering flow limits, to enforce in connection with each service210. Accordingly, the traffic rules may be continuously and dynamically updated based on real-time traffic and computing conditions experienced by services210. For example, if it is determined that any service210is overloaded, this may be reflected in the real-time computing information (e.g., high processor and/or memory usage), which may produce a lower traffic prediction, thereby resulting in dynamic modification of the traffic rules to reduce traffic to the overloaded service. 
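The prediction-to-rule flow described above can be sketched as follows. The fixed blending weight, the 80% utilization threshold, the safety margin, and the `predict_tps`/`traffic_rule` names are illustrative assumptions; the actual traffic prediction model206is a trained machine learning model, not this fixed formula.

```python
def predict_tps(historical_tps, realtime_tps, cpu_usage, mem_usage, alpha=0.5):
    """Illustrative stand-in for the trained traffic prediction model:
    blend the historical prediction for the time period with observed
    real-time traffic, then lower the prediction as compute saturates."""
    blended = alpha * historical_tps + (1 - alpha) * realtime_tps
    load = max(cpu_usage, mem_usage)  # busiest resource dominates
    if load > 0.8:                    # overloaded service -> lower prediction
        blended *= (1.0 - load) / 0.2
    return blended


def traffic_rule(predicted_tps, margin=0.9):
    """Derive a traffic rule whose metering flow limit sits slightly
    below the predicted TPS (the margin is an assumed safety factor)."""
    return {"metering_flow_limit_tps": predicted_tps * margin}
```

Because the prediction is recomputed as new real-time information arrives, the derived rule changes with it, which is the mechanism behind the continuous, dynamic updating described above.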
Conversely, if it is determined that any service210has excess capacity, an increased traffic prediction may be generated by traffic prediction model206, thereby resulting in dynamic modification of the traffic rules to increase traffic to the services having excess capacity. According to certain aspects of the present disclosure, the metering flow limit of the traffic rule may correspond to the predicted TPS included in the traffic prediction. Alternatively and/or in addition, the metering flow limit of the traffic rule may include a value that corresponds to a rate that is lower or higher than the predicted TPS included in the traffic prediction. For example, the metering flow limit of the traffic rule may include a metering flow limit that corresponds to a rate that is lower or higher than the predicted TPS by a predetermined value. In determining the traffic rule, controller202may determine a correlation between TPS and a metering value (e.g., packets per second/number of packets in a transaction). To enforce the traffic rules, controller202may program network elements212with the traffic rule to enforce the traffic flow to and between services210. For example, controller202may program network elements212with the determined traffic rule so that network elements212may enforce the traffic rule determined by controller202by limiting and/or making calls in accordance with the traffic rules to ensure that services210are not overloaded. Based on the programming of network elements212with the traffic rules, when traffic exceeds the metering flow limit, network elements212may drop the excess packets to prevent the calls from reaching their respective services. Additionally, the traffic for upstream services may be controlled at each respective source service. 
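The correlation between a TPS limit and a packet-level metering value, and the dropping of excess packets by a programmed network element, can be illustrated with the following sketch. The helper names are assumptions, and a real network element would meter continuously rather than over a fixed one-second batch.

```python
def metering_value_pps(tps_limit, packets_per_transaction):
    """Convert a TPS limit into a packets-per-second metering flow limit,
    using an (assumed) average number of packets per transaction."""
    return tps_limit * packets_per_transaction


def enforce_metering(packets, pps_limit):
    """Forward packets up to the metering flow limit for a one-second
    window and drop the excess, as a programmed network element would."""
    return packets[:pps_limit], packets[pps_limit:]  # (forwarded, dropped)
```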
For example, in the implementation illustrated inFIG.2A, since Service A210-1and Service B210-2have upstream dependencies on Service N210-N, in addition to enforcing the traffic rule at network element212-N, which is associated with Service N210-N, the traffic to Service N210-N may be controlled by enforcing the traffic rules at network element212-1, which is associated with Service A210-1, and network element212-2, which is associated with Service B210-2. Accordingly, this can limit the number of calls each source service initiates to each upstream service to ensure that each upstream service is not overloaded. Conversely, traffic may also be increased and/or prioritized at the source service based on the traffic prediction. In the exemplary implementation shown inFIG.2A, if Service A210-1initiates fewer calls to Service N210-N, thereby resulting in excess capacity for Service N210-N, traffic may be increased at Service B210-2to consume the excess capacity of Service N210-N. Subsequently, if the calls initiated by Service A210-1increase, an assured forwarding scheme may be applied, whereby the excess traffic from Service B210-2may be dropped. Accordingly, embodiments of the present disclosure facilitate continuous and dynamic management of traffic to and between services210utilized in a microservices architecture based on real-time traffic and computing conditions experienced by services210. Optionally, traffic management service200may also include a distributed cache, which may be in communication with controller202and network elements212. 
In such a configuration, controller202may continue to obtain the real-time traffic information from network elements212, which may then be processed, along with real-time computing information (e.g., obtained from resource manager204) and historical traffic information for the relevant time period (e.g., obtained from historical information datastore208), to determine a traffic prediction (e.g., by traffic prediction model206). Similarly, the traffic prediction may be used to determine one or more traffic rules, which may specify metering flow limits, to enforce in connection with each service210. However, rather than programming network elements directly, the traffic rules may be written to the distributed cache. Accordingly, to enforce the traffic rules, services210may periodically pull the traffic rules from the cache, and limit and/or make calls in accordance with the traffic rules to ensure that services210are not overloaded. Optionally, the traffic rules may also be directly programmed onto network elements212. FIG.2Billustrates exemplary traffic management service220, according to exemplary embodiments of the present disclosure. Exemplary traffic management service220may be similar to traffic management service200shown inFIG.2A, however, exemplary traffic management service220may be implemented on the application layer. Accordingly, controller222may include an application controller rather than an SDN controller and may communicate with agents232associated with services230rather than network elements associated with each respective service. As shown inFIG.2B, exemplary traffic management service220may include controller222, which may communicate with agents232associated with services230of an application implemented using a microservices architecture to obtain real-time traffic information (e.g., from agents232) associated with services230. 
Accordingly, services230may make up the collection of independent services that form the application and may each perform an application process. Additionally, controller222may communicate with resource manager224, traffic prediction model226, and historical information datastore228. Additionally, traffic prediction model226may communicate with historical information datastore228. Historical information datastore228may be configured to store and maintain historical traffic information associated with services230and/or historical traffic prediction information associated with services230. Historical traffic information may include actual traffic information (e.g., TPS, packets per second, etc.) for certain periods of time (e.g., specific dates, specific weeks of the year, specific holidays (e.g., Black Friday, Cyber Monday, New Year's Day, etc.), specific months, specific time periods (e.g., morning, afternoon, evening, overnight, etc.), specific seasons, and the like) and historical prediction information may include a traffic prediction for a specific period of time based on the historical traffic information. For example, at least a portion of the historical traffic information may be processed by a trained machine learning system (e.g., trained traffic prediction model226or another trained machine learning model) to generate traffic predictions for a period of time (e.g., a specific date, a specific week of the year, a specific holiday (e.g., Black Friday, Cyber Monday, New Year's Day, etc.), a specific month, a specific time period (e.g., morning, afternoon, evening, overnight, etc.), specific seasons, and the like) based on the historical traffic information. The historical prediction information may be periodically updated (e.g., reprocessed by the trained machine learning model) for one or more time periods as additional historical traffic information becomes available. 
For example, controller222may provide the real-time traffic information associated with services230that is obtained from agents232to historical information datastore228, which may be stored and maintained in historical information datastore228as additional historical traffic information. Accordingly, historical information datastore228may store and maintain historical traffic information associated with services230and/or historical traffic prediction information associated with services230. In operation, controller222may obtain real-time traffic information (e.g., TPS, packets per second, etc.) from agents232, which may be associated with a respective service230. For example, the real-time traffic information obtained by controller222may be in the form of a periodic “heartbeat” signal from each agent232, which may include information such as the number of calls made to each respective service during a specified period of time. Additionally, controller222may also obtain real-time computing information associated with services230from resource manager224. For example, controller222may obtain processor usage information, memory usage information, and the like from resource manager224, which may monitor and obtain computing information from the various components on the network (e.g., all hosts, virtual machines, containers, etc.). As shown inFIG.2B, the real-time traffic information and the real-time computing information may be provided to traffic prediction model226. Traffic prediction model226may include one or more trained machine learning models that have been trained to generate traffic predictions based on historical traffic information for the relevant time period, real-time traffic information, and real-time computing information. 
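Aggregating agent heartbeats into per-service real-time traffic information might look like the following sketch; the `Heartbeat` fields are assumptions about what such a periodic signal carries.

```python
from dataclasses import dataclass


@dataclass
class Heartbeat:
    service: str     # which service the reporting agent is associated with
    calls: int       # calls handled during the reporting window
    window_s: float  # reporting window length, in seconds


def realtime_tps(heartbeats):
    """Sum per-agent heartbeat rates into a per-service TPS estimate."""
    tps = {}
    for hb in heartbeats:
        tps[hb.service] = tps.get(hb.service, 0.0) + hb.calls / hb.window_s
    return tps
```

The resulting per-service TPS estimates are one of the real-time inputs the controller passes on to the traffic prediction model.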
Accordingly, traffic prediction model226may determine a traffic prediction based on historical traffic information for the relevant time period obtained from historical information datastore228, as well as the real-time traffic information and the real-time computing information obtained from controller222, and provide the traffic prediction to controller222. The traffic prediction may include a predicted TPS for each service230. Based on the traffic prediction generated by traffic prediction model226, controller222may determine one or more traffic rules, which may specify metering flow limits, to enforce in connection with each service230. Accordingly, the traffic rules may be continuously and dynamically updated based on real-time traffic and computing conditions experienced by services230. For example, if it is determined that any service230is overloaded, this may be reflected in the real-time computing information (e.g., high processor and/or memory usage), which may produce a lower traffic prediction, thereby resulting in dynamic modification of the traffic rules to reduce traffic to the overloaded service. Conversely, if it is determined that any service230has excess capacity, an increased traffic prediction may be generated by traffic prediction model226, thereby resulting in dynamic modification of the traffic rules to increase traffic to the services having excess capacity. According to certain aspects of the present disclosure, the metering flow limit of the traffic rule may correspond to the predicted TPS included in the traffic prediction. Alternatively and/or in addition, the metering flow limit of the traffic rule may include a value that corresponds to a rate that is lower or higher than the predicted TPS included in the traffic prediction. For example, the metering flow limit of the traffic rule may include a metering flow limit that corresponds to a rate that is lower or higher than the predicted TPS by a predetermined value. 
In determining the traffic rule, controller222may determine a correlation between TPS and a metering value (e.g., packets per second/number of packets in a transaction). To enforce the traffic rules, controller222may program agents232with the traffic rule to enforce the traffic flow to and between services230. For example, controller222may program agents232with the determined traffic rule so that agents232may enforce the traffic rule determined by controller222by limiting and/or making calls in accordance with the traffic rules to ensure that services230are not overloaded. Based on the programming of agents232with the traffic rules, agents232can limit the number of calls made to each respective service230. Additionally, the traffic for upstream services may be controlled at each respective source service. For example, in the implementation illustrated inFIG.2B, since Service A230-1and Service B230-2have upstream dependencies on Service N230-N, in addition to enforcing the traffic rule at agent232-N, which is associated with Service N230-N, the traffic to Service N230-N may be controlled by enforcing the traffic rules at agent232-1, which is associated with Service A230-1, and agent232-2, which is associated with Service B230-2. Accordingly, this can limit the number of calls each source service initiates to each upstream service to ensure that each upstream service is not overloaded. Conversely, traffic may also be increased and/or prioritized at the source service based on the traffic prediction. In the exemplary implementation shown inFIG.2B, if Service A230-1initiates fewer calls to Service N230-N, thereby resulting in excess capacity for Service N230-N, traffic may be increased at Service B230-2to consume the excess capacity of Service N230-N. Subsequently, if the calls initiated by Service A230-1increase, an assured forwarding scheme may be applied, whereby the excess traffic from Service B230-2may be dropped. 
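The assured-forwarding behavior described above, where a lower-priority source service's excess traffic is dropped once a higher-priority source reclaims upstream capacity, can be sketched as follows. The priority ordering and the `assured_forwarding` name are assumptions for the illustration.

```python
def assured_forwarding(demands, upstream_capacity, priority):
    """Grant upstream capacity to source services in priority order;
    demand beyond the remaining capacity is the excess that gets
    dropped (a simplified assured-forwarding scheme)."""
    granted = {}
    remaining = upstream_capacity
    for source in sorted(demands, key=lambda s: priority[s]):
        granted[source] = min(demands[source], remaining)
        remaining -= granted[source]
    return granted
```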
Accordingly, embodiments of the present disclosure facilitate continuous and dynamic management of traffic to and between services230utilized in a microservices architecture based on real-time traffic and computing conditions experienced by services230. FIG.2Cillustrates exemplary traffic management service240, according to exemplary embodiments of the present disclosure. Exemplary traffic management service240may be similar to traffic management services200and220shown inFIGS.2A and2B. However, as shown inFIG.2C, exemplary traffic management service240may employ cache memory243(e.g., a distributed cache) and network monitoring service245. According to aspects of the present disclosure, traffic management service240may be implemented on either the application layer or on the network layer. In an exemplary implementation where traffic management service240is implemented on the application layer, controller242may include an application controller and may communicate with cache memory243and network monitoring service245, both of which may communicate with services250. As shown inFIG.2C, exemplary traffic management service240may include controller242, which may communicate with cache243and network monitoring service245, both of which may communicate with services250of an application implemented using a microservices architecture. According to certain exemplary implementations, controller242may obtain real-time traffic information associated with services250from network monitoring service245. Accordingly, services250may make up the collection of independent services that form the application and may each perform an application process. Additionally, controller242may communicate with resource manager244, traffic prediction model246, and historical information datastore248. Traffic prediction model246may also communicate with historical information datastore248. 
Historical information datastore248may be configured to store and maintain historical traffic information associated with services250and/or historical traffic prediction information associated with services250. Historical traffic information may include actual traffic information (e.g., TPS, packets per second, etc.) for certain periods of time (e.g., specific dates, specific weeks of the year, specific holidays (e.g., Black Friday, Cyber Monday, New Year's Day, etc.), specific months, specific time periods (e.g., morning, afternoon, evening, overnight, etc.), specific seasons, and the like) and historical prediction information may include a traffic prediction for a specific period of time based on the historical traffic information. For example, at least a portion of the historical traffic information may be processed by a trained machine learning system (e.g., trained traffic prediction model246or another trained machine learning model) to generate traffic predictions for a period of time (e.g., a specific date, a specific week of the year, a specific holiday (e.g., Black Friday, Cyber Monday, New Year's Day, etc.), a specific month, a specific time period (e.g., morning, afternoon, evening, overnight, etc.), specific seasons, and the like) based on the historical traffic information. The historical prediction information may be periodically updated (e.g., reprocessed by the trained machine learning model) for one or more time periods as additional historical traffic information becomes available. For example, controller242may provide the real-time traffic information associated with services250that is obtained from network monitoring service245to historical information datastore248, which may be stored and maintained in historical information datastore248as additional historical traffic information. 
Accordingly, historical information datastore248may store and maintain historical traffic information associated with services250and/or historical traffic prediction information associated with services250. In operation, controller242may obtain real-time traffic information (e.g., TPS, packets per second, etc.) from logs maintained by network monitoring service245, which may communicate with services250. For example, services250may periodically provide real-time traffic information to network monitoring service245. The real-time traffic information may include information such as TPS, the number of calls made to each respective service during a specified period of time, and the like. Additionally, controller242may also obtain real-time computing information associated with services250from resource manager244. For example, controller242may obtain processor usage information, memory usage information, and the like from resource manager244, which may monitor and obtain computing information from the various components on the network (e.g., all hosts, virtual machines, containers, etc.). As shown inFIG.2C, the real-time traffic information and the real-time computing information may be provided to traffic prediction model246. Traffic prediction model246may include one or more trained machine learning models that have been trained to generate traffic predictions based on historical traffic information for the relevant time period, real-time traffic information, and real-time computing information. Accordingly, traffic prediction model246may determine a traffic prediction based on historical traffic information for the relevant time period obtained from historical information datastore248, as well as the real-time traffic information and the real-time computing information obtained from controller242, and provide the traffic prediction to controller242. The traffic prediction may include a predicted TPS for each service250. 
Based on the traffic prediction generated by traffic prediction model246, controller242may determine one or more traffic rules, which may specify metering flow limits, to enforce in connection with each service250. Accordingly, the traffic rules may be continuously and dynamically updated based on real-time traffic and computing conditions experienced by services250. For example, if it is determined that any service250is overloaded, this may be reflected in the real-time computing information (e.g., high processor and/or memory usage), which may produce a lower traffic prediction, thereby resulting in dynamic modification of the traffic rules to reduce traffic to the overloaded service. Conversely, if it is determined that any service250has excess capacity, an increased traffic prediction may be generated by traffic prediction model246, thereby resulting in dynamic modification of the traffic rules to increase traffic to the services having excess capacity. According to certain aspects of the present disclosure, the metering flow limit of the traffic rule may correspond to the predicted TPS included in the traffic prediction. Alternatively and/or in addition, the metering flow limit of the traffic rule may include a value that corresponds to a rate that is lower or higher than the predicted TPS included in the traffic prediction. For example, the metering flow limit of the traffic rule may include a metering flow limit that corresponds to a rate that is lower or higher than the predicted TPS by a predetermined value. In determining the traffic rule, controller242may determine a correlation between TPS and a metering value (e.g., packets per second/number of packets in a transaction). To enforce the traffic rules, controller242may write the traffic rules to cache243. Services250may periodically pull the traffic rules from cache243and make calls in accordance with the traffic rules to enforce the traffic flow to and between services250. 
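The cache-based enforcement pattern, where the controller writes traffic rules to a distributed cache and each service periodically pulls and honors its rule, can be sketched as follows. `RuleCache` and `PullingService` are hypothetical names, and the per-second call counter is a simplification of how a service might honor a pulled metering flow limit.

```python
class RuleCache:
    """Minimal stand-in for the distributed cache of traffic rules."""

    def __init__(self):
        self._rules = {}

    def write(self, service_name, rule):
        self._rules[service_name] = rule  # controller writes rules here

    def pull(self, service_name):
        # Services periodically pull their latest rule; default to unlimited.
        return self._rules.get(service_name, {"metering_flow_limit_tps": float("inf")})


class PullingService:
    """A service that enforces its pulled traffic rule on its own calls."""

    def __init__(self, name, cache):
        self.name = name
        self.cache = cache
        self.limit = float("inf")
        self.calls_this_second = 0

    def refresh_rule(self):
        self.limit = self.cache.pull(self.name)["metering_flow_limit_tps"]

    def try_call_upstream(self):
        if self.calls_this_second < self.limit:
            self.calls_this_second += 1
            return True
        return False  # rule exceeded; the call is not initiated
```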
For example, controller242may program cache243with the determined traffic rule so that services250are not overloaded. Based on the programming of cache243with the traffic rules, the number of calls made to each respective service250can be limited in accordance with the traffic rules. Additionally, the traffic for upstream services may be controlled at each respective source service. For example, in the implementation illustrated inFIG.2C, since Service A250-1and Service B250-2have upstream dependencies on Service N250-N, the traffic to Service N250-N may be controlled by limiting the number of calls that Service A250-1and Service B250-2may make to Service N250-N. Accordingly, this can limit the number of calls each source service initiates to each upstream service to ensure that each upstream service is not overloaded. Conversely, traffic may also be increased and/or prioritized at the source service based on the traffic prediction. In the exemplary implementation shown inFIG.2C, if Service A250-1initiates fewer calls to Service N250-N, thereby resulting in excess capacity for Service N250-N, traffic may be increased at Service B250-2to consume the excess capacity of Service N250-N. Subsequently, if the calls initiated by Service A250-1increase, an assured forwarding scheme may be applied, whereby the excess traffic from Service B250-2may be dropped. Accordingly, embodiments of the present disclosure facilitate continuous and dynamic management of traffic to and between services250utilized in a microservices architecture based on real-time traffic and computing conditions experienced by services250. FIG.2Dillustrates exemplary traffic management service260, according to exemplary embodiments of the present disclosure. Exemplary traffic management service260may be substantially similar to traffic management service240shown inFIG.2C. 
However, as shown inFIG.2Dand similar to traffic management service220shown inFIG.2B, each service270-1,270-2, and270-N may include an associated agent272-1,272-2, and272-N. Accordingly, in operation, traffic management service260may incorporate functionality described in connection with traffic management services220and/or240. For example, in the exemplary implementation illustrated inFIG.2D, the real-time traffic information may be obtained from agents272via a periodic “heartbeat” signal. Similar to other described implementations, the real-time traffic information may be used, along with historical traffic information for the relevant time period (e.g., from historical information datastore268) and real-time computing information (e.g., from resource manager264), to generate (e.g., by traffic prediction model266) a traffic prediction, which may be used to determine one or more traffic rules. Similar to traffic management service240shown inFIG.2C, the traffic rules may be written to cache263(e.g., a distributed cache), which may be periodically pulled by services270so that calls can be made in accordance with the traffic rules to enforce the traffic rules. Further, the traffic rules may be continuously and dynamically updated in view of the real-time traffic and computing information. Additionally, controller262may provide the real-time traffic information associated with services270that is obtained from agents272to historical information datastore268, which may be stored and maintained in historical information datastore268as additional historical traffic information. Alternatively and/or in addition, similar to the traffic management service240, real-time traffic information may be obtained from network monitoring service265. 
The real-time traffic information may be used, along with historical traffic information for the relevant time period (e.g., from historical information datastore268) and real-time computing information (e.g., from resource manager264), to generate (e.g., by traffic prediction model266) a traffic prediction, which may be used to determine one or more traffic rules. Similar to traffic management service220shown inFIG.2B, agents272may be programmed with the traffic rules, which may manage calls initiated by services270in accordance with the traffic rules to enforce the traffic rules. Further, the traffic rules may be continuously and dynamically updated in view of the real-time traffic and computing information. FIG.2Eillustrates exemplary traffic management service280, according to exemplary embodiments of the present disclosure, where one or more of the services can provide the real-time traffic information and enforce the traffic rules. Preferably, exemplary traffic management service280may be implemented on the application layer. As shown inFIG.2E, Service A290-1may periodically query/poll/gossip with upstream dependency services (i.e., Service B290-2and Service N290-N) to obtain real-time traffic and computing information. Service A290-1can then provide the real-time traffic and computing information directly to traffic prediction model286. Similar to other described implementations, a traffic prediction may be generated by traffic prediction model286based on historical traffic information for the relevant time period (e.g., obtained from historical information datastore288), and the real-time traffic and computing information obtained from Service A290-1. The traffic prediction may be used to determine one or more traffic rules, which may be provided to Service A290-1. 
Service A290-1may then initiate calls to upstream services (e.g., Service B290-2and Service N290-N) in accordance with the traffic rules to enforce the traffic rules which were determined based on the real-time traffic and computing conditions. Accordingly, the traffic rules may be continuously and dynamically updated in view of the real-time traffic and computing information. Additionally, Service A290-1may provide the real-time traffic information associated with services290to historical information datastore288, which may be stored and maintained in historical information datastore288as additional historical traffic information. Alternatively and/or in addition, exemplary traffic management service280may be implemented on the networking layer. In an exemplary implementation, network elements (e.g., virtual switches, virtual network interface cards, etc.) associated with each service290may periodically query/poll/gossip with upstream dependency services (i.e., Service B290-2and Service N290-N) to obtain real-time traffic and computing information. The network element associated with Service A290-1can then provide the real-time traffic and computing information directly to traffic prediction model286. Similar to other described implementations, a traffic prediction may be generated by traffic prediction model286based on historical traffic information for the relevant time period (e.g., obtained from historical information datastore288), and the real-time traffic and computing information obtained from the network element associated with Service A290-1. The traffic prediction may be used to determine one or more traffic rules. The network element associated with Service A290-1may then manage traffic and initiate calls to upstream services (e.g., Service B290-2and Service N290-N) in accordance with the traffic rules to enforce the traffic rules which were determined based on the real-time traffic and computing conditions.
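The periodic query/poll/gossip with upstream dependency services might be sketched as below. The metric names (tps, cpu) and the skip-on-failure behavior are illustrative assumptions; in a real deployment each poll would be a network call rather than a local callable.

```python
def gossip_upstream_status(poll_fns):
    """Collect real-time traffic and computing information from upstream
    dependency services.

    `poll_fns` maps a service name to a callable returning that service's
    self-reported metrics. Failed polls are skipped so that one unreachable
    upstream cannot block the snapshot. (Hypothetical sketch; the metric
    keys "tps" and "cpu" are assumptions.)
    """
    snapshot = {}
    for service, poll in poll_fns.items():
        try:
            metrics = poll()          # e.g. an HTTP call in a real deployment
        except Exception:
            continue                  # tolerate unreachable upstreams
        snapshot[service] = {"tps": metrics.get("tps"), "cpu": metrics.get("cpu")}
    return snapshot
```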
Accordingly, the traffic rules may be continuously and dynamically updated in view of the real-time traffic and computing information. FIG.3is a flow diagram of an exemplary traffic management process300, according to exemplary embodiments of the present disclosure. As shown inFIG.3, process300may begin with obtaining historical traffic information for the relevant time period, as in step302. The historical traffic information may include actual traffic information (e.g., TPS, packets per second, etc.) associated with certain services for certain periods of time (e.g., specific dates, specific weeks of the year, specific holidays (e.g., Black Friday, Cyber Monday, New Year's Day, etc.), specific months, specific time periods (e.g., morning, afternoon, evening, overnight, etc.), specific seasons, and the like). The historical traffic information may be processed (e.g., by a trained machine learning model) to determine historical traffic predictions associated with the services for a period of time (e.g., a specific date, a specific week of the year, a specific holiday (e.g., Black Friday, Cyber Monday, New Year's Day, etc.), a specific month, a specific time period (e.g., morning, afternoon, evening, overnight, etc.), specific seasons, and the like), as in step304. According to certain exemplary embodiments, the historical traffic predictions may be determined in advance (e.g., in batch) and may be stored and maintained in a datastore where they may be accessed as needed. Further, the historical prediction information may be periodically updated (e.g., reprocessed by the trained machine learning model) for one or more time periods as additional historical traffic information becomes available. In step306, real-time traffic and computing information (e.g., TPS, packets per second, processor/CPU usage, memory usage, etc.)
may be obtained (e.g., from network elements associated with services, agents associated with services, a distributed cache, a network monitoring service, one or more services, a network resource management service, etc.). The historical traffic prediction for the relevant time period, real-time traffic information, and real-time computing information may be processed by a trained machine learning model to dynamically determine a real-time traffic prediction, as in step308. The traffic prediction may include a predicted TPS for each service. In step310, one or more traffic and/or retry rules may be determined based on the traffic prediction. The traffic rules may specify metering flow limits, TPS, packets per second, etc. for each service. The determined traffic rules may then be enforced, as in step312, to mitigate failures in response to calls made to services and/or ensure that services are not overloaded. For example, network elements (e.g., virtual switches, routers, network interface cards, elastic network interfaces, virtual private cloud endpoints, etc.) associated with the various services may be programmed with the traffic rules, and the network elements may manage the traffic in accordance with the traffic rules (e.g., dropping packets, limiting calls to upstream services, etc.) to enforce the traffic rules. Alternatively and/or in addition, agents associated with the various services may be programmed with the traffic rules, and the agents may manage the traffic in accordance with the traffic rules (e.g., dropping packets, limiting calls to upstream services, etc.) to enforce the traffic rules. According to yet another aspect, the traffic rules may be written to a distributed cache, and the various services may periodically pull the traffic rules to be enforced and may manage the traffic in accordance with the traffic rules (e.g., dropping packets, limiting calls to upstream services, etc.) to enforce the traffic rules.
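Steps 308 and 310, which blend the historical prediction with real-time traffic and computing information into per-service traffic rules, might be sketched as below. The blending weights, the headroom factor, and the CPU-based tightening are placeholders for the trained machine learning model, whose internals the process leaves unspecified.

```python
def determine_traffic_rules(historical_prediction, realtime_tps, cpu_usage,
                            headroom=1.2, cpu_ceiling=0.8):
    """Hypothetical sketch of steps 308-310: derive a per-service TPS limit
    from a historical traffic prediction plus real-time traffic/computing
    information. The simple 50/50 blend stands in for the trained model."""
    rules = {}
    for service, hist_tps in historical_prediction.items():
        observed = realtime_tps.get(service, hist_tps)
        predicted = 0.5 * hist_tps + 0.5 * observed       # stand-in for the ML model
        limit = predicted * headroom                      # allow some burst headroom
        if cpu_usage.get(service, 0.0) > cpu_ceiling:     # overloaded -> tighten limit
            limit *= 0.5
        rules[service] = {"max_tps": round(limit)}
    return rules
```

The resulting rules could then be written to a distributed cache or pushed to agents/network elements for enforcement, as the process describes.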
Further, any combination of the various configurations for enforcing the traffic rules may be employed. In step314, it may be determined whether a new historical traffic prediction is needed. If a new historical traffic prediction is needed, process300returns to step302, otherwise, process300may return to step306. FIG.4is a flow diagram of an exemplary training process400for training a machine learning model, according to exemplary embodiments of the present disclosure. As shown inFIG.4, training process400is configured to train an untrained machine learning model434operating on computer system440to transform untrained machine learning model434into trained machine learning model436that operates on the same or another computer system, such as remote computing resource110. In the course of training, as shown inFIG.4, at step402, untrained machine learning model434is initialized with training criteria430. Training criteria430may include, but is not limited to, information as to a type of training, number of layers to be trained, candidate labels, etc. At step404of training process400, corpus of training data432(e.g., labeled or unlabeled), may be accessed. For example, if training is to generate a trained machine learning model that predicts traffic flow, training data432may include actual traffic information associated with certain services, such as packets per second, transactions per second (TPS), and the like, for specific periods of time (e.g., specific dates, specific weeks of the year, specific holidays (e.g., Black Friday, Cyber Monday, New Year's Day, etc.), specific months, specific time periods (e.g., morning, afternoon, evening, overnight, etc.), specific seasons, and the like). With training data432accessed, at step406, training data432is divided into training and validation sets. 
Generally speaking, the items of data in the training set are used to train untrained machine learning model434and the items of data in the validation set are used to validate the training of the machine learning model. As those skilled in the art will appreciate, and as described below in regard to much of the remainder of training process400, there are numerous iterations of training and validation that occur during the training of the machine learning model. At step408of training process400, the data items of the training set are processed, often in an iterative manner. Processing the data items of the training set includes capturing the processed results. After processing the items of the training set, at step410, the aggregated results of processing the training set are evaluated, and at step412, a determination is made as to whether a desired accuracy level has been achieved. If the desired accuracy level is not achieved, in step414, aspects of the machine learning model are updated in an effort to guide the machine learning model to generate more accurate results, and processing returns to step406, where a new set of training data is selected, and the process repeats. Alternatively, if the desired accuracy level is achieved, training process400advances to step416. At step416, and much like step408, the data items of the validation set are processed, and at step418, the processing accuracy of this validation set is aggregated and evaluated. At step420, a determination is made as to whether a desired accuracy level, in processing the validation set, has been achieved. If the desired accuracy level is not achieved, in step414, aspects of the machine learning model are updated in an effort to guide the machine learning model to generate more accurate results, and processing returns to step406. Alternatively, if the desired accuracy level is achieved, the training process400advances to step422. 
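The train/validate/update loop of steps 406 through 420 can be sketched generically as below. The fit/evaluate/update method names and the toy model are illustrative assumptions, not part of the described process.

```python
def train_until_accurate(model, data, target_accuracy, max_rounds=10, split=0.8):
    """Minimal sketch of training process 400 (steps 406-420): split data into
    training and validation sets, train, evaluate, and update the model until a
    desired accuracy level is achieved on both sets."""
    for _ in range(max_rounds):
        cut = int(len(data) * split)
        train_set, validation_set = data[:cut], data[cut:]  # step 406: divide data
        model.fit(train_set)                                # step 408: process training set
        if model.evaluate(train_set) < target_accuracy:     # steps 410-412
            model.update()                                  # step 414: adjust the model
            continue
        if model.evaluate(validation_set) >= target_accuracy:  # steps 416-420
            return model                                    # step 422: finalized model
        model.update()
    raise RuntimeError("desired accuracy not achieved within max_rounds")


class ToyModel:
    """Toy stand-in whose accuracy improves each round (illustration only)."""
    def __init__(self):
        self.rounds = 0
    def fit(self, data):
        self.rounds += 1
    def evaluate(self, data):
        return min(1.0, 0.4 + 0.2 * self.rounds)
    def update(self):
        pass
```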
At step422, a finalized, trained machine learning model436is generated. Typically, though not exclusively, as part of finalizing the now-trained machine learning model436, portions of the machine learning model that are included in the model during training for training purposes are extracted, thereby generating a more efficient trained machine learning model436. FIG.5is a block diagram conceptually illustrating example components of a remote computing device, such as remote server500, that may include and/or execute one or more of the above discussed trained machine learning models, in accordance with described implementations. Multiple such servers500may be included in the system. Each of server(s)500may include one or more controllers/processors514, that may each include a central processing unit (CPU) for processing data and computer-readable instructions, and memory516for storing data and instructions. Memory516may individually include volatile random-access memory (RAM), non-volatile read only memory (ROM), non-volatile magnetoresistive (MRAM) and/or other types of memory. Each server may also include a data storage component518, for storing data, controller/processor-executable instructions, training data, labels, etc. Each data storage component may individually include one or more non-volatile storage types such as magnetic storage, optical storage, solid-state storage, etc. Each server500may also be connected to removable or external non-volatile memory and/or storage (such as a removable memory card, memory key drive, networked storage, etc.), internal, and/or external networks550(e.g., the Internet) through respective input/output device interfaces532. Computer instructions for operating each server500and its various components may be executed by the respective server's controller(s)/processor(s)514, using the memory516as temporary “working” storage at runtime. 
Server500's computer instructions may be stored in a non-transitory manner in non-volatile memory516, data storage518, and/or an external device(s). Alternatively, some or all of the executable instructions may be embedded in hardware or firmware on the respective device in addition to or instead of software. Each server500includes input/output device interfaces532. A variety of components may be connected through the input/output device interfaces. Additionally, each server500may include address/data bus524for conveying data among components of the respective server. Each component within server500may also be directly connected to other components in addition to (or instead of) being connected to other components across bus524. Each server500may also include one or more trained machine learning models536, as discussed herein. The components of the server(s)500, as illustrated inFIG.5, are exemplary, and may be located as a stand-alone device or may be included, in whole or in part, as a component of a larger device or system. The above aspects of the present disclosure are meant to be illustrative. They were chosen to explain the principles and application of the disclosure and are not intended to be exhaustive or to limit the disclosure. Many modifications and variations of the disclosed aspects may be apparent to those of skill in the art. Persons having ordinary skill in the field of computers, communications, media files, and machine learning should recognize that components and process steps described herein may be interchangeable with other components or steps, or combinations of components or steps, and still achieve the benefits and advantages of the present disclosure. Moreover, it should be apparent to one skilled in the art that the disclosure may be practiced without some, or all of the specific details and steps disclosed herein.
Moreover, with respect to the one or more methods or processes of the present disclosure shown or described herein, including but not limited to the flow charts shown inFIGS.3and4, orders in which such methods or processes are presented are not intended to be construed as any limitation on the claims, and any number of the method or process steps or boxes described herein can be combined in any order and/or in parallel to implement the methods or processes described herein. In addition, some process steps or boxes may be optional. Also, the drawings herein are not drawn to scale. Aspects of the disclosed system may be implemented as a computer method or as an article of manufacture such as a memory device or non-transitory computer readable storage medium. The computer readable storage medium may be readable by a computer and may comprise instructions for causing a computer or other device to perform processes described in the present disclosure. The computer readable storage media may be implemented by a volatile computer memory, non-volatile computer memory, hard drive, solid-state memory, flash drive, removable disk, and/or other media. In addition, components of one or more of the modules and engines may be implemented in firmware or hardware. Disjunctive language such as the phrase “at least one of X, Y, or Z,” or “at least one of X, Y and Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be any of X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain implementations require at least one of X, at least one of Y, or at least one of Z to each be present. Unless otherwise explicitly stated, articles such as “a” or “an” should generally be interpreted to include one or more described items. 
Accordingly, phrases such as “a device configured to” or “a device operable to” are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, “a processor configured to carry out recitations A, B and C” can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C. Language of degree used herein, such as the terms “about,” “approximately,” “generally,” “nearly” or “substantially” as used herein, represent a value, amount, or characteristic close to the stated value, amount, or characteristic that still performs a desired function or achieves a desired result. For example, the terms “about,” “approximately,” “generally,” “nearly” or “substantially” may refer to an amount that is within less than 10% of, within less than 5% of, within less than 1% of, within less than 0.1% of, and within less than 0.01% of the stated amount. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey in a permissive manner that certain implementations could include, or have the potential to include, but do not mandate or require, certain features, elements and/or steps. In a similar manner, terms such as “include,” “including” and “includes” are generally intended to mean “including, but not limited to.” Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more implementations or that one or more implementations necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular implementation. 
Although the invention has been described and illustrated with respect to illustrative implementations thereof, the foregoing and various other additions and omissions may be made therein and thereto without departing from the spirit and scope of the present disclosure.
DETAILED DESCRIPTION OF THE FIGURES In the following certain embodiments are explained with reference to mobile communication devices capable of communication via a wireless cellular system and mobile communication systems serving such mobile communication devices. Before explaining in detail the exemplifying embodiments, certain general principles of a wireless communication system, access systems thereof, and mobile communication devices are briefly explained with reference toFIGS.1,2and3to assist in understanding the technology underlying the described examples. FIG.1shows a schematic representation of a 5G system (5GS). The 5GS may comprise a terminal, a 5G radio access network (5G RAN), a 5G core network (5GC), one or more application functions (AF) and one or more data networks (DN). The 5G RAN may comprise one or more gNodeB (gNB) distributed unit functions connected to one or more gNodeB (gNB) centralized unit functions. The 5GC may comprise an access and mobility management function (AMF), a session management function (SMF), an authentication server function (AUSF), a user data management (UDM), a user plane function (UPF) and/or a policy control function (PCF). FIG.2illustrates an example of a control apparatus200for controlling a function of the 5G RAN or 5GC as illustrated onFIG.1. The control apparatus may comprise at least one random access memory (RAM)211a, at least one read only memory (ROM)211b, at least one processor212,213and an input/output interface214. The at least one processor212,213may be coupled to the RAM211aand the ROM211b. The at least one processor212,213may be configured to execute an appropriate software code215. The software code215may, for example, allow performing one or more steps of one or more of the present aspects. The software code215may be stored in the ROM211b. The control apparatus200may be interconnected with another control apparatus200controlling another function of the 5G RAN or 5GC.
In some embodiments, each function of the 5G RAN or 5GC comprises a control apparatus200. In alternative embodiments, two or more functions of the 5G RAN or 5GC may share a control apparatus. FIG.3illustrates an example of a terminal300, such as the terminal illustrated onFIG.1. The terminal300may be provided by any device capable of sending and receiving radio signals. Non-limiting examples comprise a user equipment, a mobile station (MS) or mobile device such as a mobile phone or what is known as a ‘smart phone’, a computer provided with a wireless interface card or other wireless interface facility (e.g., USB dongle), a personal data assistant (PDA) or a tablet provided with wireless communication capabilities, a machine-type communications (MTC) device, an Internet of things (IoT) type communication device or any combinations of these or the like. The terminal300may provide, for example, communication of data for carrying communications. The communications may be one or more of voice, electronic mail (email), text message, multimedia, data, machine data and so on. The terminal300may receive signals over an air or radio interface307via appropriate apparatus for receiving and may transmit signals via appropriate apparatus for transmitting radio signals. InFIG.3transceiver apparatus is designated schematically by block306. The transceiver apparatus306may be provided for example by means of a radio part and associated antenna arrangement. The antenna arrangement may be arranged internally or externally to the mobile device. The terminal300may be provided with at least one processor301, at least one memory ROM302a, at least one RAM302band other possible components303for use in software and hardware aided execution of tasks it is designed to perform, including control of access to and communications with access systems and other communication devices. The at least one processor301is coupled to the RAM302band the ROM302a.
The at least one processor301may be configured to execute an appropriate software code308. The software code308may, for example, allow performing one or more of the present aspects. The software code308may be stored in the ROM302a. The processor, storage and other relevant control apparatus can be provided on an appropriate circuit board and/or in chipsets. This feature is denoted by reference304. The device may optionally have a user interface such as key pad305, touch sensitive screen or pad, combinations thereof or the like. Optionally one or more of a display, a speaker and a microphone may be provided depending on the type of the device. Although one or more of the preceding and following aspects relate to 5GS, it will be understood that similar aspects may relate to other systems. One or more of the following aspects relate to tactile industrial networks, also known as Industrial IoT (IIoT) or Industry 4.0 networks. In such networks, 3GPP technologies (i.e. cellular technologies) may be applied in addition to wired time sensitive networking (TSN) in industrial environments to provide flexibility (in terms of mobility) and scalability (in terms of number of sensors or actuators). Time sensitive communication assistance information (TSCAI) has been standardized by the 3GPP and reflects the traffic pattern of time sensitive communication (TSC) flows. TSCAI may be used by a gNB to derive an efficient scheduling policy. The determination of TSCAI per TSC flow cannot be done by using gate scheduling information provided by the time sensitive network (TSN) centralized network configuration (CNC) according to IEEE 802.1Qbv. The gate scheduling information according to IEEE 802.1Qbv does not provide any information on the individual TSN streams or information on the ingress port where a particular TSN stream is received. IEEE 802.1Qbv mainly provides information on the egress port and the gates at the egress port in terms of gates opening and closing schedules.
For example, the parameters configured by TSN CNC may be the gate state (“open” or “closed”), time interval for which the gate remains in the assigned state, CycleStartTime, AdminCycleTime, queueMaxSDU, etc. Such parameters may be used to determine the time at which the egress port needs to be open, for which duration of time it needs to remain open, and with what periodicity it will repeat such operation. However, using solely these parameters, there may be no means for the 5GS to determine for which particular quality of service (QoS) flow such pattern is applicable, such that the gNB and the UE can be accordingly configured (in order for a 5GS to deliver the expected frames at expected egress at expected time). In other words, there is no means to unambiguously map the IEEE 802.1Qbv schedule and the QoS flow which needs to be compliant with it. The 3GPP standard [TS 23.501] defines a quality of service (QoS) feature called “periodic deterministic QoS” in order to support periodic deterministic communications for which the traffic patterns are known beforehand. Such feature comprises the following extensions to the 5GS QoS framework: Time sensitive communication assistance information (TSCAI)—a set of parameters describing time sensitive communication (TSC) flow traffic patterns for a 5GS QoS flow. Support for hold and forward buffering in the time sensitive networking (TSN) Translator (UE side and UPF side) for de-jittering TSN flows. The TSCAI may be useful for a gNB to allow for efficiently scheduling periodic, deterministic traffic flows either via Configured Grants, Semi-Persistent Scheduling or with dynamic grants. The TSCAI is provided from SMF to 5G RAN (e.g. after QoS flow establishment). Table 1 taken from [TS 23.501] shows the parameters that are included in TSCAI.
Assistance Information | Description
Flow Direction | The direction of the TSC flow (uplink or downlink)
Periodicity | It refers to the time period between the start of two data bursts
Burst Arrival Time | The arrival time of a data burst at either an ingress interface of the RAN (downlink flow direction) or an egress interface of the UE (uplink flow direction)

In the 3GPP SA2 meeting #133, May 2019, the following has been agreed on the basis of the contribution [S2-1906663]. The TSCAI Burst Size may be used to set a maximum data burst volume (MDBV). The packet delay budget (PDB) may be explicitly divided into 5G RAN PDB and CN PDB. The 5G RAN PDB may be the packet delay budget applicable to the radio interface, including RAN processing. The CN PDB may be the delay between a UPF terminating N6 interface and a 5G RAN. Separate delay budgets may be used for calculation of expected packet transmit times on 5GS interfaces. The TSCAI Burst Arrival Time calculation may use user equipment device side time sensitive networking translator (DS-TT) residence time and CN PDB as per clause 5.27. For downlink (DL) TSCAI Burst Arrival Time determination, CN PDB may be used. For uplink (UL) TSCAI Burst Arrival Time determination, DS-TT residence time may be used. FIG.4shows the IEEE 802.1Q forwarding process (taken from [IEEE 802.1Q-2018, FIG. 8-12, page 188]). A relevant part is the Frame Filtering where based on e.g. Destination MAC address and virtual local area network identifier (VLAN ID), a particular Ethernet frame may be forwarded to a set of egress (transmission) ports. The rules for this forwarding may be defined by a Filtering Database (Fdb, [IEEE 802.1Q-2018, Section 8.8]). It may be noted that all TSN streams may be assigned uniquely to a unicast or multicast MAC address. The unicast or multicast MAC address may not be used as an actual MAC address by any device within the Ethernet network but may have been assigned by a network management tool.
Another relevant part is the Flow Metering (or more specifically the Per-Stream Filtering and Policing) where based on filtering and policing rules, frames may be forwarded to individual port gates based on per-TSN-stream decisions. Hence, these rules may provide information on the characteristics of individual TSN streams and how they may be forwarded to individual port gates. For example, IEEE 802.1Qci defines a Stream Filter Instance Table (Section 12.31.2 of IEEE 802.1Qci) which defines such rules via a set of parameters such as StreamHandle (identifier of the stream), StreamGateInstanceID (identifier of the gate), priority of the rule, etc. These rules and a bridge behaviour are described in more detail in IEEE 802.1Qci and IEEE 802.1Q-2018, Section 8.6.5. The Transmission Selection of the bridge may be responsible for deciding which frame of which traffic class queue is transmitted at which point in time on a particular port (IEEE 802.1Qbv and IEEE 802.1Q-2018, Section 8.6.8 and in particular Section 8.6.8.4). This may allow deriving the exact time window when a frame of a certain traffic class is supposed to be transmitted to an adjacent input output (IO) device or TSN bridge. One or more of the following aspects describe how the above information can be exploited to derive information for setting up a TSC flow using TSCAI. One or more of the following aspects describe techniques for derivation of TSCAI parameters and to unambiguously map these parameters to a corresponding QoS flow: Based on gate schedule information from IEEE standards and protocols (e.g. 802.1Q, 802.1Qbv, 802.1Qci), the TSCAI parameters burst arrival time and periodicity may be derived. For example, the following parameters may be used: AdminBaseTime: The administrative value of base time, expressed as an IEEE 1588 precision time protocol (PTP) timescale (only used implicitly).
CycleStartTime: The time at which a next gate control list execution cycle is to start, expressed as an IEEE 1588 precision time protocol (PTP) timescale (only used implicitly). Oper(Admin)ControlList: defines the sequence with which the individual gates are opened/closed. Oper(Admin)CycleTime: defines the periodicity of the gate cycle. queueMaxSDU: An unsigned integer value, denoting the maximum SDU size supported by the queue. It may be noted that IEEE 802.1Qci comprises similar parameters as listed above, such as PSFPAdminControlList, PSFPOperControlList, PSFPAdminCycleTime, PSFPOperCycleTime, PSFPAdminBaseTime, Maximum SDU size, etc. Table 8-8 from IEEE 802.1Q and IEEE 802.1Qci shows the correspondence between the procedures/variables used in 802.1Q and 802.1Qbv and the PSFP versions of these procedures/variables. These parameters may be used to derive the TSCAI parameters burst arrival time and periodicity for each TSN stream. It may be noted that the filter and policy rules may be defined for each egress port, hence, this information may already be sufficient to derive where a particular TSN stream has to be delivered. The next step is to derive where a TSN stream is received. Based on static filtering tables information in the filtering database [IEEE 802.1Q-2018, RFC 4363] and the IEEE 802.1Qci stream filter instance table, the mapping between the TSCAI parameters derived above and a specific QoS flow may be done (for which the reception port is required). Supporting per-stream filtering and policing by means of 802.1Qci may require implementation of a stream identification function [IEEE 802.1CB, see Clause 6] where the “stream_handle” (identifier of the stream) provided by this function is used for policing and queuing decisions.
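Deriving the TSCAI periodicity and the candidate gate-open times from the 802.1Qbv-style parameters above might look like the following sketch. The (gate_states, interval) encoding of the control list is a simplification of the Oper(Admin)ControlList; the function names and units are illustrative assumptions.

```python
def derive_tscai_from_gate_schedule(cycle_start_time, cycle_time, control_list,
                                    gate_id):
    """Sketch of deriving TSCAI Periodicity and candidate gate-open times for
    one gate from IEEE 802.1Qbv-style parameters.

    `control_list` is modeled as a list of (gate_states, interval) entries,
    where gate_states is the set of gates open during the interval -- a
    simplification of the OperControlList encoding.
    """
    periodicity = cycle_time                 # TSCAI Periodicity = gate cycle time
    open_times = []
    offset = 0
    for gate_states, interval in control_list:
        if gate_id in gate_states:           # this gate is open during the entry
            open_times.append(cycle_start_time + offset)
        offset += interval
    return {"periodicity": periodicity, "gate_open_times": open_times}
```

Each gate-open time repeats every cycle, so the pattern for subsequent cycles follows by adding multiples of the periodicity.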
Furthermore, in order to perform mapping between the TSCAI parameters derived above and a specific QoS flow, the identification of the TSN stream may need to comprise the destination MAC address, which is unique to each TSN stream: The filtering database tables entries may contain the information on the (ingress/egress) port pairs for specific destination MAC address and VLAN identifier. Furthermore, as a stream identifier may comprise the destination MAC address, the stream may be unambiguously mapped to the (ingress/egress) port pair. Based on a stream filter instance [802.1Qci] which may contain the stream_handle (stream identifier as specified by IEEE 802.1CB) and the identifier of the gate to which the stream with a specific stream_handle should be mapped, the unambiguous mapping between the stream identifier and the stream gate instance identifier may be done. As a result, the unambiguous mapping between (ingress/egress) port pair (PDU session), stream identifier, stream gate instance identifier and corresponding gate schedules may be done. It may be noted that in TSN, gates (queues) may be associated with different traffic classes, and bridges may be configured to map incoming traffic to specific traffic classes/queues/gates based on the priority code point (PCP) field in frame header and pre-configured “PCP-to-traffic class” mapping table. A 5GS bridge may implement different traffic classes by means of different QoS flows. Therefore, a stream gate instance identifier may be mapped to a QoS flow identifier. This may allow accurate derivation of RAN schedules which may need to be applied to particular QoS flows, irrespective of the mapping between TSN streams and QoS flows (one to one or many to one mapping). One or more of the aspects of this disclosure relate to the mapping of TSCAI to QoS flows. It may be assumed that one QoS flow serves one gate of a port of a 5GS bridge.
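The filtering-database lookup described above, where a destination MAC address plus VLAN identifier resolves to an (ingress/egress) port pair, might be sketched as follows. The dictionary representation of the database entries is an assumption made for illustration.

```python
def resolve_stream_ports(filtering_db, dest_mac, vlan_id):
    """Sketch of a filtering-database lookup: entries keyed by
    (destination MAC, VLAN ID) yield the ingress port and the set of egress
    ports for frames of the TSN stream that owns that MAC address.

    In RFC 4363, a reception port of 0 means "all ports without a more
    specific entry"; this sketch just passes the stored value through.
    """
    entry = filtering_db.get((dest_mac.lower(), vlan_id))
    if entry is None:
        return None
    ingress, egress_ports = entry
    return {"ingress": ingress, "egress": sorted(egress_ports)}
```

Because each TSN stream is associated with a unique destination MAC address, this lookup suffices to tie the stream (and hence its derived TSCAI) to a port pair, i.e. a PDU session.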
For this purpose, a relevant part is to identify the ports where TSN streams are received and where frames of a TSN stream have to be delivered. For this purpose, the information of the filtering database and the per-stream filtering and policing may be combined. The filtering database may define a dot1qStaticMulticastTable: “A table containing filtering information for Multicast and Broadcast MAC addresses for each VLAN, configured into the device by (local or network) management specifying the set of ports to which frames received from specific ports and containing specific Multicast and Broadcast destination addresses are allowed to be forwarded. A value of zero in this table (as the port number from which frames with a specific destination address are received) is used to specify all ports for which there is no specific entry in this table for that particular destination address. Entries are valid for Multicast and Broadcast addresses only” [RFC 4363]. Similarly, the dot1qStaticUnicastTable may be defined for unicast MAC addresses. Hence, the entries in the filtering database may allow for deriving the ingress and egress ports of frames with a particular destination MAC address. This address, however, may be unique to each TSN Stream as defined by the StreamHandle applied in IEEE 802.1Qci and defined in IEEE 802.1CB: StreamHandle: the stream handle may be associated with the information provided for a specific TSN Stream. A TSN Stream may be associated with a unique unicast/multicast MAC address (see IEEE 802.1Q-2018, Section 35.2.2.8.3). The corresponding MAC address may be derived by using IEEE802.1CB, Section 9.1: A tsnStreamIdEntry, Option 1: tsnStreamIdIdentificationType=1, i.e., Null Stream identification, and tsnCpeNullDownDestMac, which identifies the destination MAC address. Option 2: tsnStreamIdIdentificationType=3, i.e., destination MAC address and VLAN, and tsnCpeDmacVlanDownDestMac, which identifies the destination MAC address. 
Option 3: tsnStreamIdIdentificationType=4, i.e., IP stream identification, and tsnCpeIpIdDestMac, which identifies the destination MAC address. Using the above information, a 5GS bridge may be able to identify the ingress (reception) ports and egress (transmission) ports of each TSN stream. In the next step, the TSCAI parameters burst arrival time, periodicity and flow direction may be derived using a stream filter instance table defined in IEEE 802.1Qci (cf. point (1) in Sec.5). The stream filter instance table may define: Stream handle, which uniquely associates a filter instance of a specific gate ID (i.e. StreamGateInstance ID) with a TSN stream. Priority Specification. Stream gate instance, which defines among others (see IEEE 802.1Q-2018, Table 12-33): PSFPAdminControlList and PSFPOperControlList: Defines the control list for the gate, i.e., a sequence of entries each defining the gate state, the time interval, and the maximum number of bits. PSFPAdminCycleTime/PSFPOperCycleTime: Periodicity of each cycle (see IEEE 802.1Q, Section 8.6.9). Maximum SDU size, which allows for deriving a Maximum Data Burst Size for the QoS flow. Using this information, the following may be derived. TSCAI Burst Arrival Time (BAT) from the PSFPOperControlList and each individual timestamp tGate_Open state within this list. In one example, for uplink BAT=tGate_Open−PDB, where PDB is the delay from UE egress to UPF egress (including NW-TT). In another example, for downlink BAT=tGate_Open−(AN part of the PDB+UE residence time including DS-TT). TSCAI Periodicity from the PSFPOperCycleTime. Such TSCAI may apply to a specific stream gate instance identifier (i.e. StreamGateInstance identifier). Overall, the mapping between the derived TSCAI, the stream identifier, (ingress/egress) port pairs (i.e. PDU session) and the stream gate instance identifier (i.e. QoS flow) may be done. FIG.5summarizes how the above mappings may be derived. 
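The two burst arrival time formulas above can be written out as a small sketch (Python is used for illustration; the parameter names and nanosecond units are assumptions, not part of the specifications):

```python
def tscai_uplink(t_gate_open_ns: int, pdb_ns: int, cycle_time_ns: int) -> dict:
    """Uplink: BAT = tGate_Open - PDB, where PDB is the delay from
    UE egress to UPF egress (including NW-TT)."""
    return {"burst_arrival_time_ns": t_gate_open_ns - pdb_ns,
            "periodicity_ns": cycle_time_ns}

def tscai_downlink(t_gate_open_ns: int, pdb_an_part_ns: int,
                   ue_residence_ns: int, cycle_time_ns: int) -> dict:
    """Downlink: BAT = tGate_Open - (AN part of the PDB + UE residence
    time including DS-TT)."""
    return {"burst_arrival_time_ns": t_gate_open_ns - (pdb_an_part_ns + ue_residence_ns),
            "periodicity_ns": cycle_time_ns}
```

In both cases the periodicity is taken directly from the PSFPOperCycleTime, while the burst arrival time is the gate-open timestamp shifted backwards by the relevant delay budget.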
The relation between ingress port, egress port, destination MAC address and VLAN identifier may be derived based on one or more filtering database tables. The relation between stream identifier and destination MAC address may be derived based on one or more stream filter instance tables. The relation between stream identifier and stream gate instance identifier may be based on one or more stream filter instance tables. The relation between the parameters PSFPAdminControlList, PSFPOperControlList, PSFPAdminCycleTime, PSFPOperCycleTime and maxSDU size, etc. defining the operation of a gate, and the gate itself, may be based on one or more stream filter instance tables. A destination MAC address may be mapped to an ingress port and an egress port using one or more filtering database table entries. The destination MAC address may further be mapped to a stream identifier using one or more stream filter instance tables. In this way, the stream identifier may be mapped to the ingress port and the egress port via the destination MAC address. The stream identifier may be mapped to a stream gate instance identifier using one or more stream filter instance tables. The stream gate instance identifier uniquely identifies a gate associated with a QoS flow. The stream gate instance identifier may be mapped to one or more of the parameters PSFPAdminControlList, PSFPOperControlList, PSFPAdminCycleTime, PSFPOperCycleTime and maxSDU size, etc. defining the operation of a gate using one or more stream filter instance tables. TSCAI may then be derived for the stream gate instance identifier and therefore for the QoS flow associated with the stream gate instance identifier. The TSCAI may be provided to the RAN. The RAN may derive scheduling rules for the QoS flow using the TSCAI. It will be understood that a filtering database table may also provide information about the TSCAI Flow Direction. If the ingress port is located at a UE and the egress port is located at the UPF, then the TSCAI flow direction may be uplink. 
If the ingress port is located at the UPF and the egress port is located at a UE, then the TSCAI Flow Direction may be downlink; the burst arrival time may be derived from the PSFPOperControlList and the PDB within the core network. If the ingress and egress ports are located at a UE, two TSC flows and their respective TSCAI may have to be derived. For the uplink flow from the UE where the ingress port is located, the corresponding TSCAI may be derived using the Burst Arrival Time as described above. For the downlink flow from the UPF to the UE where the egress port is located, the Burst Arrival Time may be derived from the expected arrival of the frame in the uplink flow. The burst arrival time of the downlink flow may take into account the processing required to switch from the uplink TSC flow to the downlink TSC flow. The above derivation of TSCAI parameters may take place in the AF network TSN translator (NW-TT) based on the information provided by the TSN CNC. The derived parameters may be provided to the SMF directly or via the PCF for setting up the corresponding QoS flows. FIG.6illustrates the described procedures and involved entities. Alternatively, the PCF or SMF can perform the TSCAI determination based on the information received from the AF. In step1, a PDU session is established between the UE (DS-TT) and the UPF. In step2a, guaranteed min/max independent/dependent delays for established PDU sessions' (ingress/egress) ports and traffic classes may be determined. The determination may be performed by the PCF or the AF. In step2b, capabilities (i.e. the guaranteed min/max independent/dependent delays for established PDU sessions' (ingress/egress) ports and traffic classes determined in step2a) may be reported to the CNC by the AF. In step3, the CNC may compute the transmission schedule for the bridge by using TSN stream requirements and reported bridge delay values. Details of computing the transmission schedule may be found in the IEEE 802.1Q specifications. 
In step4, the CNC may configure a TSN bridge by sending filtering database tables and IEEE 802.1Q parameters to the AF. In step5, the AF may determine and map TSCAI to QoS flows based on the filtering database tables and IEEE 802.1Q parameters. In step6, the AF may signal the TSCAI information and mapping to QoS flows to the SMF. In step7, the SMF may inform the RAN about the derived TSCAI for specific QoS flows upon QoS flow establishment. In step8, the RAN may use the TSCAI to schedule traffic for the specific QoS flows. In the case that both ingress and egress ports of a TSN stream are located at a UE, the AF may further configure the UPF such that frames received in the uplink flow may be directly switched/scheduled for the downlink flow. This is not depicted in the flow chart ofFIG.6. Each UE may be configured such that one or more destination MAC addresses (each of which unambiguously corresponds to a TSN stream) are assigned to the QoS flow. Using the destination MAC address, the UE may then be able to assign the received frame to the correct QoS flow. FIG.7shows a schematic representation of a method for determining time sensitive communication assistance information for a QoS flow. The method may be performed by an apparatus such as an AF apparatus, a PCF apparatus, or an SMF apparatus. In step702, the apparatus may determine ingress and egress ports of a destination address included in the entries of the filtering database tables. The apparatus may for example use one or more filtering database tables as described above (i.e. the destination address may be included in an entry of a filtering database). A frame corresponding to this destination address may be sent by a neighbouring TSN end station or TSN bridge. Generally, the frame structure may be specified in IEEE 802.1Q documents. On the highest level, it contains a header and a payload (Protocol Data Unit, PDU, and Service Data Unit, SDU, respectively). 
Destination (MAC) address may be part of the header, as are VLAN ID, source address, priority code point (PCP) value, etc. The filtering database tables may be provided by the CNC. It will be understood that during operation, the bridge may look up the destination address included in a received frame. During TSCAI derivation, a tuple <ingress port, egress port, destination address, TSCAI, QoS flow> may be available and may be stored. The bridge may use this tuple to select TSCAI for the looked up destination address, and forward the frame using the QoS flow that corresponds to this TSCAI. In step704, the apparatus may determine a stream identifier uniquely identifying a stream associated with the destination address. The apparatus may for example use one or more stream filter instance tables as described above. The stream filter instance tables may be provided by the CNC. In step706, the apparatus may map the stream identifier determined in step704to the ingress and egress ports determined in step702. The apparatus may for example use the one or more stream filter instance tables as described above. In step708, the apparatus may map the stream identifier determined in step704to a stream gate instance identifier. The apparatus may for example use the one or more stream filter instance tables as described above. In step710, the apparatus may identify a gate uniquely identified by the stream gate instance identifier determined in step708and associated with a QoS flow. The association may be stored at the AF, the PCF or at a generic storage function. In step712, the apparatus may determine TSCAI for the QoS flow based on parameters defining the operation of the gate determined in step710. The apparatus may for example use the one or more stream filter instance tables as described above. In step714, the apparatus may send the TSCAI for the quality of service flow determined in step712to another apparatus. 
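Taken together, steps 702 to 712 form a chain of table lookups. This chain might be sketched as follows (the table shapes and names are hypothetical simplifications of the filtering database and stream filter instance tables, and the TSCAI computation is reduced to the uplink formula given earlier):

```python
def derive_tscai_for_destination(dest_mac,
                                 filtering_db,    # dest MAC -> (ingress, egress) ports
                                 stream_filters,  # dest MAC -> stream identifier
                                 stream_gates,    # stream id -> stream gate instance id
                                 gate_to_qos,     # gate id -> QoS flow id
                                 gate_params):    # gate id -> gate operation parameters
    ingress, egress = filtering_db[dest_mac]       # step 702
    stream_id = stream_filters[dest_mac]           # steps 704/706
    gate_id = stream_gates[stream_id]              # step 708
    qos_flow = gate_to_qos[gate_id]                # step 710
    params = gate_params[gate_id]                  # step 712: gate operation parameters
    tscai = {"periodicity": params["cycle_time"],  # e.g. from PSFPOperCycleTime
             # uplink example: BAT = gate-open time minus the delay budget
             "burst_arrival_time": params["gate_open_time"] - params["pdb"]}
    return {"ingress_port": ingress, "egress_port": egress,
            "stream_id": stream_id, "gate_id": gate_id,
            "qos_flow": qos_flow, "tscai": tscai}
```

The resulting tuple of ports, stream identifier, gate identifier, QoS flow and TSCAI corresponds to the mapping that may then be sent onwards in step 714.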
When the apparatus is an AF or a PCF apparatus, the other apparatus may be an SMF apparatus. When the apparatus is an SMF apparatus, the other apparatus may be a RAN apparatus. FIG.8shows a schematic representation of a method for forwarding TSCAI for a QoS flow. The method may be performed by an apparatus such as an SMF apparatus. In step802, the apparatus may receive TSCAI for a QoS flow from a first other apparatus. The first other apparatus may be an AF apparatus or a PCF apparatus. In step804, the apparatus may forward the TSCAI for the QoS flow received in step802to a second other apparatus. The second other apparatus may be a RAN apparatus. FIG.9shows a schematic representation of a method for using TSCAI for a QoS flow. The method may be performed by an apparatus such as a RAN apparatus. In step902, the apparatus may receive TSCAI for a QoS flow from another apparatus. The other apparatus may be an SMF apparatus. In step904, the apparatus may use the TSCAI for the QoS flow received in step902to schedule the QoS flow. FIG.10shows a schematic representation of non-volatile memory media1000a(e.g. computer disc (CD) or digital versatile disc (DVD)) and1000b(e.g. universal serial bus (USB) memory stick) storing instructions and/or parameters1002which when executed by a processor allow the processor to perform one or more of the steps of the methods described herein. It is noted that while the above describes example embodiments, there are several variations and modifications which may be made to the disclosed solution without departing from the scope of the present invention. The embodiments may thus vary within the scope of the attached claims. In general, some embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. 
For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although embodiments are not limited thereto. While various embodiments may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof. The embodiments may be implemented by computer software stored in a memory and executable by at least one data processor of the involved entities or by hardware, or by a combination of software and hardware. Further in this regard it should be noted that any procedures described herein may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions. The software may be stored on such physical media as memory chips, or memory blocks implemented within the processor, magnetic media such as hard disk or floppy disks, and optical media such as for example DVD and the data variants thereof, CD. The memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. 
The data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), gate level circuits and processors based on multi-core processor architecture, as non-limiting examples. Alternatively or additionally some embodiments may be implemented using circuitry. The circuitry may be configured to perform one or more of the functions and/or method steps previously described. That circuitry may be provided in the base station and/or in the communications device. As used in this application, the term “circuitry” may refer to one or more or all of the following:(a) hardware-only circuit implementations (such as implementations in only analogue and/or digital circuitry);(b) combinations of hardware circuits and software, such as:(i) a combination of analogue and/or digital hardware circuit(s) with software/firmware and(ii) any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as the communications device or base station, to perform the various functions previously described; and(c) hardware circuit(s) and/or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g., firmware) for operation, but the software may not be present when it is not needed for operation. This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers, for example, an integrated device. 
The foregoing description has provided by way of exemplary and non-limiting examples a full and informative description of some embodiments. However, various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings and the appended claims. Nevertheless, all such and similar modifications of the teachings will still fall within the scope as defined in the appended claims.
11943146
DETAILED DESCRIPTION In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed. Some embodiments provide a method for an SD-WAN that handles both traffic for a cellular broadband (e.g., 5G) network as well as traffic outside of the cellular broadband network to intelligently prioritize different types of cellular broadband traffic. When an edge device of the SD-WAN receives a data message, the edge device can determine (e.g., by looking at an Internet Protocol (IP) header of the data message) whether the data message is cellular broadband traffic or not and prioritize the traffic relative to other types of traffic. In addition, if the data message is a cellular broadband data message, the edge device of some embodiments examines a set of header fields of a particular type of tunnel header (e.g., a general packet radio service (GPRS) tunneling protocol (GTP) header) associated with the cellular broadband network. Based on the examination of the set of header fields, the edge device identifies a specified traffic priority applicable to the data message and applies that traffic priority to the data message within the SD-WAN. The application of the traffic priority helps to ensure end-to-end application of the traffic priority specified for the data message by the cellular broadband network. Thus, for example, the SD-WAN will not result in the deprioritization of latency-sensitive user traffic for cellular broadband applications such as automated vehicles, drones, etc. 
Similarly, cellular broadband control traffic (e.g., traffic between components of a distributed radio access network) may be latency-sensitive, so the SD-WAN will also avoid deprioritizing this traffic. FIG.1conceptually illustrates an example of a situation in which an SD-WAN also carries cellular broadband traffic (specifically, 5G traffic). It should be understood that while much of the discussion in this application refers specifically to 5G traffic, similar concepts apply to other cellular broadband traffic, such as 3G, 4G, etc. In the example ofFIG.1, numerous connected endpoint devices105-125operate at an enterprise location100. The enterprise location100could be, e.g., a retail store (e.g., a department store, a home improvement retailer, a supercenter, etc.), a manufacturing plant, a hospital, a research organization, etc. In this example, the connected endpoint devices include a connected camera105, a mobile device110(e.g., a smart phone, a tablet, etc.), an automated device115(e.g., automated manufacturing equipment, a self-driving vehicle, etc.), a laptop120, and a virtual machine125. Three of these devices105-115are 5G devices that connect to a 5G base station130(e.g., a 5G small cell that operates within the enterprise location100), while two of the devices120-125connect via wired (e.g., Ethernet) or wireless (e.g., 802.11 Wi-Fi) forms of connectivity. In many situations, various other types of connected devices (both 5G devices and non-5G devices) will operate within such an environment. Examples of such 5G devices include remote surgery equipment, remote controlled drones, smart watches, health monitoring equipment, Internet of Things (IoT) appliances, etc., while other non-5G devices could include desktop computers, servers, containers, etc. Both the 5G base station130and the other forms of communication communicate externally via a software-defined wide area network (SD-WAN)135. 
This communication includes enterprise application traffic (for billing, inventory, etc.) from the laptop120and/or virtual machine125as well as various types of 5G traffic (e.g., 5G voice traffic and/or 5G Internet traffic). The SD-WAN135of some embodiments may be used to connect multiple branch sites (e.g., the enterprise location100) to at least one datacenter hub. In this example, the SD-WAN includes at least an edge device140, a gateway145, and a controller150, though in many cases an SD-WAN will include multiple edge devices (e.g., for each of many branch offices, retail locations, etc.), multiple gateways, and/or a hub. The SD-WAN edge device140may be a virtual machine, a container, a program executing on a physical or virtual machine, a stand-alone appliance, etc., that operates at the enterprise location100to connect the devices at that location to other devices, hubs, etc. in the SD-WAN or connected to the SD-WAN. Though shown as a single edge140, this represents a high-availability pair or cluster in some embodiments. The edge device140(and, in many cases, other edge devices of the SD-WAN) exchanges data traffic with the SD-WAN gateway145through one or more connection links (e.g., a cable modem link, a fiber optic link, and/or an MPLS link, etc.). The gateway145(or a hub, which is not shown in this figure) connects the edge device140to other edges (e.g., at other enterprise locations). The gateway145may be located in an on-premises datacenter (e.g., a primary datacenter for an enterprise). In some embodiments, traffic from the endpoint devices105-125at the enterprise location100that is directed externally (e.g., to external destinations not associated with the enterprise location) is routed through the SD-WAN to the gateway145before being sent through a public network to its external destination. This includes the 5G traffic from the devices105-115. 
The controller150, which may be a single controller or a cluster of controllers, serves as a central point for managing (e.g., defining and modifying) configuration data that is provided to the edge140and gateway145, as well as any other SD-WAN nodes (e.g., other edges, hubs, gateways, etc.), to configure some or all of the operations of these SD-WAN nodes. The controller150may be located in a private datacenter (e.g., a primary enterprise datacenter) or in a public cloud datacenter (or in multiple datacenters, if operating as a cluster). In some embodiments, the controller150actually includes a set of manager servers that define and modify the configuration data, and a set of controller servers that distribute the configuration data to the edge140, gateway145, and any other SD-WAN nodes. The configuration data provided to the edge140may include routing information (e.g., directing the edge140to route data to the gateway145) as well as tunneling information, quality of service (QoS) configuration information, etc. Because the 5G base station130is located within the enterprise location100, the SD-WAN135is used to carry 5G traffic between the base station130and other components of the 5G network. User data traffic from the endpoint devices105-115may need to travel to the 5G packet core, traffic may need to be sent from the base station130to another base station, a radio access network (RAN) component at the base station130may communicate with another RAN component located in a public cloud, etc. FIG.2conceptually illustrates an example of an SD-WAN200being situated between the 5G RAN205and the 5G packet core210. The 5G network, like previous cellular broadband networks, includes a RAN that is often situated at each base station of the 5G network as well as a packet core, both of which are typically operated by a mobile provider. 
The RAN205implements the radio access technology that allows endpoint devices to wirelessly connect to the 5G network and is generally the access point for these devices. For a cellular broadband network, there are typically many RANs (e.g., at each base station), which serve separate (though often partially overlapping) geographic areas. The packet core is a more centralized network of the mobile provider (though this core may also be implemented in a distributed manner in multiple public and/or private clouds). In some embodiments, the packet core applies various network functions (e.g., load balancing, firewall, etc.) to 5G data messages. In the example ofFIG.2, traffic from the devices215-225that connect to the 5G base station205has RAN processing applied and is then sent to the packet core210. However, for data messages to reach the packet core210, these data messages must pass through the SD-WAN200. Thus, the edge device230processes this traffic after RAN processing and tunnels the traffic to the gateway device235, which processes the traffic prior to handing the traffic to the packet core210. FIG.3, on the other hand, illustrates a case when traffic within a RAN300passes through an SD-WAN305. In many cases, the distributed unit (DU) of a RAN is located at the base station along with a radio unit (RU), while the centralized unit (CU) may be located in a cloud and control numerous DUs. The RU310, in some embodiments, provides functions such as analog to digital conversion, filtering, power amplification, and TX/RX, and may be integrated with the antenna of a base station. The DU315, in some embodiments, provides digital processing, including signal modulation, encoding, scheduling, and other functions. The CU320provides radio resource control and packet data convergence protocol layers. 
In addition to communication from endpoint devices325-335being sent through the RAN300and therefore from the DU315through the SD-WAN305to the CU320, control communications are often sent between the DU315and the CU320as well. In this case, for messages to reach the CU320from the DU315, these messages must pass through the SD-WAN305. Thus, the edge device340processes this traffic after the DU315and tunnels the traffic to the gateway device345, which processes the traffic prior to sending the traffic to the CU320. FIG.4illustrates the case of traffic between two 5G base stations405and410being carried through an SD-WAN400. The X2 interface defines communication between base stations. These communications can include user data or signaling information. For instance, one base station communicates with another when handling handover of a device from one base station to another (e.g., because that device is in motion and moves from the geographic area of the first base station to the geographic area of the second base station). In the figure, the mobile device415is more likely to cause such communication between the base stations405and410than either the smart home device420or the remote surgery controller425, as the latter two endpoint devices are more likely to be static. Like the two previous examples, in order for X2 messages to be sent from the first 5G base station405to the second 5G base station410, the edge device430processes this traffic after the first base station405and tunnels the traffic to the gateway device435, which processes the traffic prior to sending the traffic to the second base station410. Other 5G traffic might pass through an SD-WAN as well, in some embodiments. For instance, Open RAN (O-RAN) implementations can push many of the RAN functionalities (e.g., DU and/or CU operations) to a near-real-time RAN intelligent controller (RIC), which hosts applications (referred to as xApps). 
Communication on the E2 interface between the CU and DU components and the near-real-time RIC, as well as between RIC components and the xApps or between xApps, may be very latency-sensitive while passing through an SD-WAN. Further detail about the near-real-time RIC, xApps, and messages between these components can be found in U.S. patent Ser. No. 17/384,777, filed Jul. 25, 2021, which is incorporated herein by reference. Many of these examples of 5G traffic have extremely low latency requirements. Applications like remote surgery, automated cars or manufacturing, remote machinery control, as well as various types of control messaging, require very high quality of service and cannot be caught up in delays that could occur if there is a bottleneck in the SD-WAN (e.g., if other non-5G traffic increases for a period of time). However, the edge nodes in SD-WAN deployments generally prioritize traffic based on the standard network layer (e.g., IP) data message headers (e.g., by examining the differentiated services code point (DSCP) field) in order to differentiate between traffic. While this can work in some cases to differentiate 5G traffic from other traffic, not all 5G traffic should be given the same priority. For instance, remote surgery or automated vehicle traffic should have a higher priority than mobile phone web browsing. Therefore, to identify the traffic priority for a cellular broadband message, the SD-WAN edge device is configured in some embodiments (e.g., by the SD-WAN controller) to examine fields beyond just the IP header of the message.FIG.5conceptually illustrates a process500of some embodiments for applying different traffic priorities to different types of traffic, including different types of 5G traffic. The process500is performed by an SD-WAN edge node in some embodiments (e.g., by an SD-WAN edge) that handles 5G traffic in addition to non-5G traffic. 
In some embodiments, SD-WAN gateways also perform similar operations (e.g., for reverse direction traffic). As shown, the process500begins by receiving (at505) a data message at the edge device. In some embodiments, this is a data message with an outer IP header. Internally, the data message may be a 5G user data message, a 5G control message, a non-5G message (e.g., an enterprise application message, etc.). The process500determines (at510) whether stored instructions exist for the flow to which the data message belongs. In some embodiments, the edge device stores a mapping of flow identifiers to processing instructions, which include QoS level information as well as forwarding instructions. The flow identifier of some embodiments includes only IP and/or transport layer (e.g., TCP, UDP, etc.) fields, so that for these data messages the edge device does not need to examine any of the other fields. For instance, in some embodiments the flow identifier includes IP header fields and/or transport layer port numbers (e.g., source and/or destination port numbers) of an outer (encapsulation) header. Some embodiments also include IP and/or transport layer fields from an inner (overlay network) header. As described below, when a data message from a new flow is received and processed, the edge device stores a mapping of the flow identifier for that flow to processing instructions for the flow. If instructions are stored for the flow to which the received data message belongs, then the process500processes (at515) the data message according to the stored instructions. As mentioned, in addition to specifying forwarding instructions (e.g., routing) for the data message, the stored instructions also indicate a priority for the data message and other QoS-related instructions (e.g., the maximum allowable delay, etc.). This priority may be used by the edge device to schedule transmission of the processed data message relative to other data messages with different priorities. 
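A minimal sketch of the flow-cache lookup in operations 505 to 515 might look as follows (the message layout and the `classify` stub are hypothetical placeholders for the deeper 5G/GTP inspection the edge device performs):

```python
flow_cache = {}  # five-tuple flow identifier -> stored processing instructions

def classify(msg):
    # Placeholder for the full classification (5G vs. non-5G detection,
    # GTP header inspection); returns the instructions to store for the flow.
    return {"priority": "high" if msg.get("is_5g") else "normal"}

def process(msg):
    # Flow identifier built from outer IP and transport-layer fields only,
    # so cached flows need no further header inspection.
    fid = (msg["src"], msg["dst"], msg["proto"], msg["sport"], msg["dport"])
    instr = flow_cache.get(fid)
    if instr is None:            # first message of a new flow
        instr = classify(msg)    # deep inspection happens only once
        flow_cache[fid] = instr  # later messages in the flow hit the cache
    return instr
```

The cache ensures that the per-packet fast path only touches the outer headers, while the priority decision derived from deeper fields is reused for the rest of the flow.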
The process 500 then ends. On the other hand, if no instructions are stored for the flow (e.g., because the data message is the first message in a new flow), the process 500 determines (at 520) whether the data message is a 5G data message. In some embodiments, the edge device can identify a data message as a 5G data message based on a DSCP marking or by identifying the presence of a tunnel header of a particular type associated with 5G data messages (e.g., a GTP header). If the data message is not a 5G data message, then the process 500 processes (at 525) the data message as IP traffic, using any forwarding and traffic priority configuration configured for the edge device by the SD-WAN controller. This processing can differentiate between different applications based on various factors. For instance, an administrator could configure the SD-WAN to assign higher priority to certain enterprise applications that require lower latency as compared to less latency-sensitive applications (e.g., billing). The process then ends. When the received data message is a 5G message, the process 500 examines (at 530) a set of fields within the GTP header to determine a QoS level to apply to the data message. In some embodiments, a GTP header includes, among other fields, a tunnel endpoint identifier (TEID) field. This TEID field specifies a tunnel and therefore conveys both endpoints of the tunnel rather than using separate fields for the source and destination tunnel endpoints. However, the TEID field on its own is not enough to determine the QoS level, as each GTP tunnel maps to a protocol data unit (PDU) session, which itself may contain numerous separate flows that should be treated differently. The GTP tunnel, in some embodiments, provides connectivity between the endpoint device (UE) and a destination network (e.g., the Internet or a private corporate network).
Other fields typically found in the GTP header include a version field, a protocol type, an extension header flag, a sequence number flag, an N-PDU number flag, a message type, a length, and an extension header type.

FIG. 6 conceptually illustrates a pair of PDU sessions (GTP-U tunnels) for an endpoint device that illustrate how multiple flows (e.g., for different applications) are contained within a single PDU session. A first PDU session 605 is used for “voice” applications, which can include flows for videoconferencing, conversational voice (telephone), push-to-talk, and short message service (SMS), among other traffic. A second PDU session 610 is used for broadband applications, which can include flows for a multitude of different applications depending on the sort of endpoint device initiating the PDU session, including web browsing, connected camera, automated vehicle applications, health monitoring applications, smart home applications, etc. While DSCP marking can be associated with a PDU session in some embodiments, this does not help differentiate between the different flows within a PDU session. Thus, the edge device is configured in some embodiments to examine additional information within the GTP header that provides additional information about the particular flow to which a data message belongs. In some embodiments, an extension header within the GTP header includes a PDU session container header, which in turn includes a QoS flow identifier (QFI) field. This QFI field can be used to differentiate between different flows and even to map to a particular QoS level that should be applied. In FIG. 6, for example, the voice PDU session 605 includes two different flows: a first flow 615 identified by a QFI value of 1 and a second flow 620 identified by a QFI value of 2. The broadband PDU session 610 includes three flows: a first flow 625 identified by a QFI value of 2, a second flow 630 identified by a QFI value of 1, and a third flow 635 identified by a QFI value of 3.
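As a rough illustration of where these fields sit on the wire, the following simplified parser extracts the TEID and, when a PDU Session Container extension header is present, the 6-bit QFI. This is a sketch under simplifying assumptions (a single G-PDU layout, extension header type 0x85 for the PDU session container, QFI in the low six bits of the container's second byte); a production GTP-U parser would handle many more cases.

```python
# Simplified GTP-U field extraction: TEID plus optional QFI from a
# PDU Session Container extension header. Assumed layout, common case only.

import struct

GTP_EXT_PDU_SESSION = 0x85  # assumed extension header type for the container

def parse_gtpu(packet: bytes):
    flags, msg_type, length = struct.unpack_from("!BBH", packet, 0)
    teid = struct.unpack_from("!I", packet, 4)[0]
    qfi = None
    if flags & 0x07:  # E, S or PN flag set -> 4 optional bytes follow the TEID
        next_ext = packet[11]
        offset = 12
        while next_ext != 0:
            ext_len = packet[offset] * 4  # length field counts 4-byte units
            if next_ext == GTP_EXT_PDU_SESSION:
                # second byte of the container carries the QFI in its low 6 bits
                qfi = packet[offset + 2] & 0x3F
            next_ext = packet[offset + ext_len - 1]
            offset += ext_len
    return teid, qfi
```

Given such a parser, the edge device would use the TEID to identify the tunnel and the QFI to distinguish flows within the corresponding PDU session.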
It should be noted that the QFI values need not be assigned starting from 1 and increasing as flows are added to a PDU session; rather, the flow identifier is assigned to correspond to a type of application in some embodiments. In addition, as evidenced by this example, the same QFI can be used (and have a different meaning) within a voice PDU session and a broadband PDU session. The edge device is configured to map the QFI to a QoS level in some embodiments. In some such embodiments, the QoS level is determined according to a 5G QoS identifier (5QI) value, which itself maps to a specific set of QoS characteristics. In the example shown in FIG. 6, the first flow 615 is mapped (based on a QFI of 1) to a 5QI value of 5 and the second flow 620 is mapped (based on a QFI of 2) to a 5QI value of 1. For the broadband flows, the first flow 625 is mapped (based on a QFI of 2) to a 5QI value of 8, the second flow 630 is mapped (based on a QFI of 1) to a 5QI value of 9, and the third flow 635 is mapped (based on a QFI of 3) to a 5QI value of 3. In some embodiments, rather than using the QFI field, the edge device is configured to use a service class identifier (SCI) field, which is also part of the GTP extension header. A large range of this SCI field is unassigned and can be customized, so the SD-WAN edge can be customized to map SCI values to QoS levels (e.g., to 5QI values). In various embodiments, the edge device classifies traffic into different QoS levels based on some combination of the TEID, transport layer port numbers (e.g., from the outer or inner header), the QFI, the DSCP marking in the outer IP header, and the SCI. In addition, in some embodiments, a RAN container (or new radio (NR) RAN container) field within the extension header may be used to classify fronthaul traffic vs. backhaul traffic and prioritize traffic for distributed RAN solutions between X2-U, Xn-U, and F1-U user plane interfaces.
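The per-session QFI-to-5QI mapping from the FIG. 6 example, followed by priority scheduling on the resulting 5QI, can be sketched as below. The QFI table mirrors the figure; the 5QI characteristics are illustrative values loosely following 3GPP defaults, and the scheduler is an assumed mechanism, not the patented one.

```python
# QFI -> 5QI lookup per PDU session type (values from the FIG. 6 example),
# then a priority queue that orders messages by the 5QI's priority level.

import heapq
from itertools import count

QFI_TO_5QI = {
    "voice":     {1: 5, 2: 1},           # same QFI means different things...
    "broadband": {2: 8, 1: 9, 3: 3},     # ...in different session types
}

# Illustrative 5QI characteristics (priority level, packet delay budget).
FIVEQI_TABLE = {
    1: {"priority": 20, "delay_budget_ms": 100},  # conversational voice
    3: {"priority": 30, "delay_budget_ms": 50},   # low-latency data
    5: {"priority": 10, "delay_budget_ms": 100},  # signalling
    8: {"priority": 80, "delay_budget_ms": 300},  # buffered streaming / web
    9: {"priority": 90, "delay_budget_ms": 300},  # default best effort
}

class Scheduler:
    def __init__(self) -> None:
        self._queue = []
        self._seq = count()  # tie-breaker keeps FIFO order within a priority

    def enqueue(self, five_qi: int, message: str) -> None:
        prio = FIVEQI_TABLE[five_qi]["priority"]
        heapq.heappush(self._queue, (prio, next(self._seq), message))

    def dequeue(self) -> str:
        # Lowest priority value (i.e. most important traffic) comes out first.
        return heapq.heappop(self._queue)[2]
```

With this structure, a voice frame (5QI 1) is always transmitted ahead of queued web-browsing traffic (5QI 9), regardless of arrival order.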
Finally, the PDU session container is used in some embodiments to classify backhaul traffic to N3 and N9 user plane interfaces, between NG-RAN and user plane function (UPF), or between two UPFs. Returning to FIG. 5, the process 500 applies (at 535) the traffic priority and other constraints according to the identified QoS level. In some embodiments, 5QI values map to both guaranteed bit-rate (including delay-critical guaranteed bit-rate) as well as non-guaranteed bit-rate processing. The 5QI values, in addition, map to a priority level, a packet delay budget, a packet error rate, a maximum data burst volume, and a default averaging window. Thus, the 5QI value enables the edge device to treat traffic for different 5G applications differently, depending on the requirements of that application. As part of processing the data message, the edge device also performs forwarding (e.g., routing of the data message) and outputs the data message (e.g., via an SD-WAN link). Because instructions were not previously stored, the process 500 then stores (at 540) instructions to apply to subsequent data messages in the flow. In some embodiments, the edge device stores a mapping of flow identifiers to processing instructions (e.g., the QoS level information as well as forwarding instructions). As described above, the flow identifier of some embodiments includes IP header fields and/or transport layer port numbers (e.g., source and/or destination port numbers) of an outer (encapsulation) header and/or from an inner (overlay network) header. The process 500 then ends.

FIG. 7 conceptually illustrates an electronic system 700 with which some embodiments of the invention are implemented. The electronic system 700 may be a computer (e.g., a desktop computer, personal computer, tablet computer, server computer, mainframe, a blade computer, etc.), phone, PDA, or any other sort of electronic device.
Such an electronic system includes various types of computer readable media and interfaces for various other types of computer readable media. Electronic system 700 includes a bus 705, processing unit(s) 710, a system memory 725, a read-only memory 730, a permanent storage device 735, input devices 740, and output devices 745. The bus 705 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 700. For instance, the bus 705 communicatively connects the processing unit(s) 710 with the read-only memory 730, the system memory 725, and the permanent storage device 735. From these various memory units, the processing unit(s) 710 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments. The read-only memory (ROM) 730 stores static data and instructions that are needed by the processing unit(s) 710 and other modules of the electronic system. The permanent storage device 735, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the electronic system 700 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 735. Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device 735, the system memory 725 is a read-and-write memory device. However, unlike storage device 735, the system memory is a volatile read-and-write memory, such as a random-access memory. The system memory stores some of the instructions and data that the processor needs at runtime.
In some embodiments, the invention's processes are stored in the system memory 725, the permanent storage device 735, and/or the read-only memory 730. From these various memory units, the processing unit(s) 710 retrieve instructions to execute and data to process in order to execute the processes of some embodiments. The bus 705 also connects to the input and output devices 740 and 745. The input devices enable the user to communicate information and select commands to the electronic system. The input devices 740 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 745 display images generated by the electronic system. The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as a touchscreen that function as both input and output devices. Finally, as shown in FIG. 7, bus 705 also couples electronic system 700 to a network 765 through a network adapter (not shown). In this manner, the computer can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an Intranet), or a network of networks, such as the Internet. Any or all components of electronic system 700 may be used in conjunction with the invention. Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media).
Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter. While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself. As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” mean displaying on an electronic device. As used in this specification, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer.
These terms exclude any wireless signals, wired download signals, and any other ephemeral signals. This specification refers throughout to computational and network environments that include virtual machines (VMs). However, virtual machines are merely one example of data compute nodes (DCNs) or data compute end nodes, also referred to as addressable nodes. DCNs may include non-virtualized physical hosts, virtual machines, containers that run on top of a host operating system without the need for a hypervisor or separate operating system, and hypervisor kernel network interface modules. VMs, in some embodiments, operate with their own guest operating systems on a host using resources of the host virtualized by virtualization software (e.g., a hypervisor, virtual machine monitor, etc.). The tenant (i.e., the owner of the VM) can choose which applications to operate on top of the guest operating system. Some containers, on the other hand, are constructs that run on top of a host operating system without the need for a hypervisor or separate guest operating system. In some embodiments, the host operating system uses name spaces to isolate the containers from each other and therefore provides operating-system level segregation of the different groups of applications that operate within different containers. This segregation is akin to the VM segregation that is offered in hypervisor-virtualized environments that virtualize system hardware, and thus can be viewed as a form of virtualization that isolates different groups of applications that operate in different containers. Such containers are more lightweight than VMs. A hypervisor kernel network interface module, in some embodiments, is a non-VM DCN that includes a network stack with a hypervisor kernel network interface and receive/transmit threads. One example of a hypervisor kernel network interface module is the vmknic module that is part of the ESXi™ hypervisor of VMware, Inc.
It should be understood that while the specification refers to VMs, the examples given could be any type of DCNs, including physical hosts, VMs, non-VM containers, and hypervisor kernel network interface modules. In fact, the example networks could include combinations of different types of DCNs in some embodiments. While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. In addition, a number of the figures (including FIG. 5) conceptually illustrate processes. The specific operations of these processes may not be performed in the exact order shown and described. The specific operations may not be performed in one continuous series of operations, and different specific operations may be performed in different embodiments. Furthermore, the process could be implemented using several sub-processes, or as part of a larger macro process. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.
11943147
DETAILED DESCRIPTION

In FIG. 1, reference numeral 1 refers to a telecommunications system 1 for exchanging data packets 2a-2d in accordance with a data transmission protocol between a first device 4 and a second device 6 operatively connected to the telecommunications system 1, by an intermediate node 5 of an access network, such as a Radio Access Network, RAN, for example. The first device 4 is identified by a first device identification, and the second device 6 is identified by a second device identification. The data packets 2a-2d, among a content part, preamble and/or other parts for transmission purposes, for example, each comprise an address part 3 including a destination address 3a and a source address 3b. The term address part used in the present disclosure and claims is to be construed as referring to any part of a data packet on any level of a respective network or data transport protocol level relating to an origin or source or a destination of the data packet. In the event of a data packet transmitted from the first device 4 to the second device 6, the source address 3b of this data packet refers to the first device identification and the destination address 3a refers to the second device identification. In the case of a data packet transmitted from the second device 6 to the first device 4, the source address 3b of this data packet refers to the second device identification and the destination address 3a refers to the first device identification. The node 5 comprises transceiver equipment 8 arranged for exchanging the data packets 2a-2d in the telecommunications system 1 between the first device 4 and the second device 6. When a data packet 2a, transmitted from the first device 4 to the second device 6, is received by the node 5, the node 5 modifies the source address part 3b of the data packet 2a before transmitting the data packet to the second device 6. The modified data packet 2b, having a modified source address part 3b, is then transmitted by the node 5 to the second device 6.
In the node 5, an address translation table 7 is maintained which links at least the source address part 3b of data packet 2a to the modified address part 3 of the data packet 2b. Before transmitting data packet 2b to the second device 6, at a first point in time, the node maintains a first timestamp in order to identify the first point in time at which the data packet 2b was transmitted by the node 5. At a second point in time, after the first point in time, the second device 6 transmits, in response to the received data packet 2b, a further data packet 2c. The destination address part 3a of this further data packet 2c comprises the modified source address part 3b of the data packet 2b and the source address part 3b of this further data packet 2c refers to the second device identification. At receipt of this further data packet 2c with the node 5, the node 5, prior to transmitting the further data packet 2c to the first device 4, inspects the address part 3 of the further data packet 2c for correspondence with address parts 3 stored in the address translation table 7. As the address part 3 of the further data packet 2c contains the content of the address part 3 of data packet 2b, which content is stored in the address translation table 7, based on the entry of the address part 3 of data packet 2b, the node 5 modifies the address part 3 of the further data packet 2c by replacing the destination address part 3a of this further data packet 2c by the source address part of data packet 2a maintained in the address translation table 7. This results in the data packet 2d to be transmitted by the node 5 to the first device 4. At the second point in time of receiving the further data packet 2c at the node 5, the node 5 maintains a second timestamp in order to identify the time at which the data packet 2c was received by the node 5.
From the difference between the two timestamps, or otherwise taking into account processing times at the node 5, for example, the node 5 then determines the respective RTT relating to the transmission of the data packets 2b and 2c in the part of the telecommunications system 1 connecting the node 5 and the second device 6. It will be appreciated that the above disclosed method may be applied, in a similar manner, for data packet exchange between the node 5 and the first device 4, to measure the RTT at the part of the telecommunications system 1 between the first device 4 and the node 5.

FIG. 2 shows the above disclosed steps in a flowchart type diagram 20, wherein time is assumed running from top to bottom of the figure. Block 21, “Receiving data packet”, refers to receiving of a data packet 2a at the node 5. Block 22, “Modifying source address”, refers to the modification, by the node 5, of the source address 3b of the received data packet 2a, i.e. of the first device identification. Block 23, “Providing modified data packet”, illustrates modification, by the node 5, of the received data packet 2a, providing a modified data packet 2b having an address part comprising the modified first device identification as source address 3b and the second device identification as a destination address 3a. Block 24, “Linking address parts”, refers to the linking, by the node 5, in the address translation table 7, of the address part 3 of the received data packet 2a with the address part 3 of the modified data packet 2b. Block 25, “Transmit modified data packet”, refers to the transmitting, by the node 5, to the second device 6, at a first point in time, of the modified data packet 2b. Block 26, “Receiving further data packet”, shows receipt, at a second point in time later than the first point in time, at the node 5, of a further data packet 2c having an address part 3 comprising the modified first device identification as destination address 3a and the second device identification as source address 3b.
Block 27, “Providing modified further data packet”, refers to the modification, by the node 5, of the received further data packet 2c using the address translation table 7, providing a modified further data packet 2d having an address part 3 comprising the first device identification as destination address 3a and the second device identification as source address 3b. Block 28, “Transmitting modified further data packet”, illustrates the transmission, by the node 5, of the modified further data packet 2d to the first device 4. Block 29, “Determining RTT delay”, refers to the calculation, by the node 5, of the RTT delay from the first and second points in time. The skilled person will appreciate that the method may be used to determine the RTT between any two peers or between two devices situated on different hierarchical levels, for example a UE and a server. For the uplink direction, the first device 4 or client may be a User Equipment, UE, connecting to an access network of the telecommunications system 1, such as a mobile UE connecting to a RAN. For Round Trip Time client, RTTc, delay in the uplink communication direction, the second device 6 may be a server of the telecommunications system 1. For measuring Round Trip Time server, RTTs, delay in the downlink communication direction, the first device 4 is the server of the telecommunications system 1 and the second device 6 is the User Equipment, UE. Once a modified data packet 2b is transmitted by the node 5, receiving a further data packet 2c at the node 5 includes inspecting, by the node 5, the address part 3 of each data packet received from the second device 6, until receiving a first further data packet 2c having an address part 3 comprising the modified source address of the first device 4 as destination address and a source address referring to the second device 6. Modification of the address part 3 of a data packet may comprise modification of a respective device identification to which the address part refers.
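The node behaviour of blocks 21 through 29 can be sketched as a small class. This is a hedged illustration, assuming a monotonic clock, a single in-flight measurement, and (IP, port) tuples as device identifications; the real node operates on raw packets and maintains the table per flow.

```python
# Minimal sketch of the passive-RTT node: rewrite the source identifier of an
# uplink packet, remember the mapping with a first timestamp, and compute the
# RTT when the first downlink packet addressed to the modified identifier
# arrives (the mapping is then used to restore the original identifier).

import time

class RttNode:
    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._table = {}  # modified src id -> (original src id, t_sent)

    def forward_uplink(self, src_id, dst_id, payload):
        modified = self._modify(src_id)
        self._table[modified] = (src_id, self._clock())  # first timestamp
        return (modified, dst_id, payload)  # packet as sent onward

    def forward_downlink(self, src_id, dst_id, payload):
        """Returns (packet_for_client, rtt_seconds_or_None)."""
        if dst_id in self._table:
            original, t_sent = self._table[dst_id]
            rtt = self._clock() - t_sent  # second timestamp minus first
            # Restore the original identification before forwarding (block 27).
            return (src_id, original, payload), rtt
        return (src_id, dst_id, payload), None

    @staticmethod
    def _modify(src_id):
        ip, port = src_id
        return (ip, port + 1)  # one of the modification options in the text
```

Driving it with a fake clock shows the RTT falling out of the two timestamps, while the client-side packet carries its original identification again.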
In an Internet Protocol, IP, type data transmission protocol, the address part 3 of a data packet comprises 5-tuple information including the IP address and port number of the first device 4 as the first device identification, the IP address and port number of the second device 6 as the second device identification, and a protocol identification identifying the data transmission protocol. The node 5, prior to modifying the address part 3 of a received data packet 2b, may check the protocol identification included in the address part 3. For completeness sake it is noted that the port number refers to the Transport level, i.e. level four, and IP addresses refer to the Network level, i.e. level three, of the conceptual seven-layer Open Systems Interconnection, OSI, communications model. Hence, in an IP type data transmission protocol, it is possible to modify the device identification by modifying one or both of the source IP address and port number.

Modifying the source port number may comprise one of:
incrementing the source port number by 1;
decrementing the source port number by 1;
incrementing the source port number by an arbitrary integer value;
decrementing the source port number by an arbitrary integer value.

The source IP address comprises a host part and a network part, wherein modifying the source IP address comprises modifying the host part by one of:
incrementing the source IP address by 1;
decrementing the source IP address by 1;
incrementing the source IP address by an arbitrary integer value;
decrementing the source IP address by an arbitrary integer value.

When modifying the IP address, it can be checked whether the modified IP address is unique, that is, not being in use in communications of the node, either temporarily or permanently, for example, or whether the modified IP address is part of a reserved range or is otherwise unique.
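The modification options above can be sketched as two small helpers. The wraparound into the ephemeral port range and the uniqueness check against an `in_use` set are assumptions added for illustration; the text only requires that the modified value differ from the original and, for IP addresses, be unique.

```python
# Illustrative address-part modification: a port helper that increments or
# decrements with wraparound, and a host-part helper with a uniqueness check.

import ipaddress

def modify_port(port: int, delta: int = 1) -> int:
    # Keep the result inside the range 1024-65535 (assumed safe range).
    return (port - 1024 + delta) % (65536 - 1024) + 1024

def modify_host(ip: str, delta: int, in_use: set[str]) -> str:
    # ipaddress supports integer arithmetic directly on address objects.
    candidate = str(ipaddress.ip_address(ip) + delta)
    if candidate in in_use:
        raise ValueError(f"{candidate} is not unique")
    return candidate
```

A delta of +1 or -1 matches the simplest options listed above; any other integer delta works the same way.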
Modifying the address part of data packet 2a received from the first device 4 may be applied, by node 5, to all subsequent data packets 2a for a same data stream received from the first device 4, such that modifying the address part of the further data packets 2c received from the second device 6, by node 5, applies to all subsequent further data packets 2c for a same data stream received from the second device 6. The first point in time and second point in time may be stored as a first timestamp and a second timestamp in the address translation table 7, stored in the node 5. The node 5 may be a Packet Data Network, PDN, Gateway, P-GW, or a Policy Control and Enforcement Point, PCEP, arranged in an access network of the telecommunications system 1, for example. A node 5 for operating in a telecommunications system 1, for determining the passive Round Trip Time, RTT, delay in accordance with the present disclosure, is to be arranged with a storage device 9 for storing the address translation table 7, and a computer controlled RTT measuring device, schematically indicated by reference numeral 10 in FIG. 1, wherein the RTT measuring device 10 is arranged for controlling the transceiver equipment 8 and storage device 9 for performing the passive Round Trip Time calculation. The techniques and mechanisms described in this invention are applicable to any cloud implementation. This technique provides a transparent marking mechanism for the passive calculation of RTT towards the client and towards the server. The technique is in all scenarios completely transparent towards one of the two endpoints. Towards the other endpoint the passive monitoring may result in an effect similar to a change of the access network (a change in assigned IP address or port). The solution allows for multiple RTT measurements per flow. The solution works for Quick UDP Internet Connections, QUIC, protocol data traffic and other half-transport protocols over UDP that allow for resiliency against access network changes.
Passive calculation of RTT allows for the implementation of QoE measurements, analytics use cases and optimization of traffic. The following example will describe, with reference to FIG. 3, the calculation of RTTs in the case of the QUIC protocol. This example assumes an implementation of the functionality required in a Packet Gateway, P-GW, 52 or a Policy Control and Enforcement Point Function, PCEF, 52, this being the node 5 of a Mobile Access Network as in a telecommunications system 1 according to FIG. 1. In this example the following steps should be followed for the calculation of RTTs using the address part 3 of a data packet 2; in particular, in this example a port value is the address part 3, the first device 4 is a source, and the second device 6 is a server. In the particular example referring to FIGS. 3-6, the device 51 is a UE or a QUIC client and the device 53 is a QUIC server with which the UE 51 communicates. The implementation on the P-GW 52 identifies 55 a QUIC flow with the following 5-tuple information: UDP as the transport protocol, client IP address and port being ueIP and uePort, and server IP address and port being srvIP and srvPort. When a new passive RTTs measurement is required, the P-GW 52 will wait for a new data packet 2a on the uplink for the chosen QUIC flow. When a data packet is received 57 at the P-GW 52, the source port 3 is incremented 58 by one. Alternatively, the source port 3 may be incremented or decremented by any other value. The P-GW 52 will store this modification 58 in a table by storing the “old source port” from the UE 51, the “new source port” and the rest of the values of the UDP 5-tuple. The data packet is then sent 59 to the server 53 with the new source port 3. The P-GW 52 stores a timestamp of this event 59, i.e. the modification and forwarding of the uplink data packet 2b. For all subsequent data packets 2b on the uplink for this QUIC flow (UDP 5-tuple) the source port 3 with value “old source port” should be replaced by the “new source port” value. The P-GW 52 inspects downlink traffic 61 that matches the four original parameters of the 5-tuple 3.
Specifically: protocol UDP, source IP address srvIP, source port srvPort and destination IP address ueIP (IP addresses and ports are mirrored from the uplink). When a data packet 2c on the downlink matches this filter, the destination port is checked. If the destination port matches the “old source port”, the data packet 2d is forwarded and it is assumed that this data packet 2d is not an acknowledgment for the modified uplink data packet 2c. For the first data packet 61 that matches the previous filter with the destination port matching the “new source port”, a new timestamp is stored in the P-GW 52 signalling the first acknowledgment of the modified uplink data packet 2c. RTTs is calculated 62 as the difference of the two timestamps recorded. Therefore, the RTTs is measured as the time difference between the first timestamp obtained when transmitting 59 the modified data packet and the second timestamp obtained when a first data packet matching the filter is received 61. From this data packet 2c on, and for all the data packets 2c received on the downlink for this QUIC flow (UDP 5-tuple), the destination port containing a value “new source port” should be replaced 63 by the original value “old source port”. This mechanism introduces, per RTTs measurement, only one port change on the uplink (detected by the server) and no changes on the downlink (no changes detectable by the client) as the change is undone on the downlink for all subsequent data packets 2c. Once an RTTs measurement is completed, another one can be performed on the same flow, by introducing additional increments (or decrements) of source ports 3. The source port 3 change introduced simulates an Access Network change of the client/UE 51 or a Carrier Grade Network Address Translation, CGNAT, change affecting the client/UE 51. This is more typical since such changes might only affect client/UE ports and not IP addresses.
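The downlink-inspection step above can be sketched as a classifier. This is an assumed packet representation (a flat 5-tuple) rather than the P-GW's actual data path; the stored flow state holds the old/new ports and the mirrored server-side parameters.

```python
# Classify a downlink packet for the port-marking measurement: packets still
# addressed to the old source port are simply forwarded; the first packet
# addressed to the new port is the acknowledgment that closes the measurement.

def classify_downlink(pkt, flow):
    """pkt is (protocol, src_ip, src_port, dst_ip, dst_port); flow holds the
    mirrored 4-tuple plus the stored old/new source ports."""
    proto, src_ip, src_port, dst_ip, dst_port = pkt
    if (proto, src_ip, src_port, dst_ip) != flow["mirror"]:
        return "other-flow"
    if dst_port == flow["old_port"]:
        return "forward-unchanged"  # not an ack of the modified uplink packet
    if dst_port == flow["new_port"]:
        return "first-ack"          # store second timestamp, compute RTTs
    return "other-flow"
```

In the full mechanism, the first "first-ack" result triggers the RTTs calculation, and from then on the destination port is rewritten back to the old value before forwarding.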
Similar steps should be followed for the calculation of RTTc using port marking, which is explained in reference to FIG. 4. In this example 100, a P-GW 52 is the Access Network 5, the first device 4 is a QUIC server 53, the second device 6 is a client/UE 51, and the modified information of an address part 3 of a data packet is a port value. The implementation on the P-GW 52 identifies a QUIC flow 55 with the following 5-tuple information: UDP as the transport protocol, client IP address and port being ueIP and uePort and server IP address and port being srvIP and srvPort. When a new passive RTTc measurement is required, the P-GW 52 will wait for a new data packet 101 on the downlink for the chosen QUIC flow 56. When the data packet is received 101, the source port is incremented 102 by one. Alternatively, the source port can be incremented or decremented by any other value. The P-GW 52 will store this modification in a table by storing the “old source port” from the server, the “new source port” and the rest of the values of the UDP 5-tuple. The data packet 2b is then sent 103 to the UE 51 with the new source port. The P-GW 52 stores a timestamp of this event (the modification and forwarding of the downlink data packet 2b). For all subsequent data packets 2b on the downlink for this QUIC flow (UDP 5-tuple) the source port with value “old source port” should be replaced by the “new source port” value. The P-GW 52 inspects uplink traffic 2c that matches the four original parameters of the 5-tuple. Specifically: protocol UDP, source IP address ueIP, source port uePort and destination IP address srvIP (IP addresses and ports are mirrored from the downlink). When a data packet 105 on the uplink matches this filter, the destination port is checked. If the destination port matches the “old source port”, the data packet 2d is forwarded and it is assumed that this data packet 2d is not an acknowledgment for the modified downlink data packet 2c.
For the first data packet105that matches the previous filter with the destination port matching the “new source port”, a new timestamp is stored in the P-GW52signalling the first acknowledgment of the modified downlink data packet2c. RTTc is calculated as the difference of the two timestamps recorded. From this data packet105on—and for all the data packets2creceived on the uplink for this QUIC flow (UDP 5-tuple)—the destination port containing a value “new source port” should be replaced by the original value “old source port”. This mechanism introduces, per RTTc measurement, only one port change on the downlink (detected by the client) and no changes on the uplink (no changes detectable by the server) as the change is undone on the uplink for all subsequent data packets2c. Once an RTTc measurement is completed another one can be performed on the same flow, by introducing additional increments (or decrements) of source ports. The source port change introduced simulates an Access Network5change of the server53or a CGNAT change affecting the server53—more typical since such changes might only affect server53ports and not IP addresses. In any case these changes are rarer in the case of the server53(measuring RTTc) than of the client/UE51(measuring RTTs). If the server53is sitting in a Mobile Access Network1then the client/UE51and server53roles can be reversed and the calculation of RTTs and RTTc becomes symmetrical.
IP Modification Example
The same techniques can be implemented using marking on the IP addresses and not on the ports. The techniques are further elaborated usingFIGS.5and6. This example150assumes an implementation on the functionality required in a P-GW52or a PCEF function, this being the node5of a Mobile Access Network as in a telecommunications system1, according toFIG.1. 
In this example the following steps should be followed for the calculation of RTTs using the address part3of a data packet2; in particular, in this example the address part3is an IP address value, the first device4is a UE51and the second device6is a server53. The implementation on the P-GW52identifies a QUIC flow155with the following 5-tuple information: UDP as the transport protocol, client IP address and port being ueIP and uePort and server IP address and port being srvIP and srvPort. When a new passive RTTs measurement is required the P-GW52will wait for a new data packet2aon the uplink for the chosen QUIC flow. When the packet2ais received152, the source IP address is incremented158by one. Alternatively, the source IP address can be incremented or decremented by any other value. The P-GW52will store this modification in a table by storing the “old source IP address” from the UE, the “new source IP address” and the rest of the values of the UDP 5-tuple. The data packet2bis sent159to the server53with the new source IP address. The P-GW52stores a timestamp of this event—the modification and forwarding of the uplink packet. For all subsequent data packets2bon the uplink for this QUIC flow (UDP 5-tuple) the source IP address with value “old source IP address” should be replaced by the “new source IP address” value. The P-GW52inspects downlink traffic that matches the four original parameters of the 5-tuple. Specifically: protocol UDP, source IP address srvIP, source port srvPort and destination port uePort (IP addresses and ports are mirrored from the uplink). When a data packet2con the downlink matches this filter161, check the destination IP address. If the destination IP address matches the “old source IP address” the data packet2dis forwarded and it is assumed that this data packet2dis not an acknowledgment for the modified uplink data packet2c. 
For the first data packet2cthat matches the previous filter161with the destination IP address matching the “new source IP address”, a new timestamp is stored in the P-GW52signalling the first acknowledgment of the modified uplink data packet2c. RTTs is calculated162as the difference of the two timestamps recorded. From this data packet2con—and for all the data packets2creceived on the downlink for this QUIC flow (UDP 5-tuple)—the destination IP address containing a value “new source IP address” should be replaced by the original value “old source IP address”. This mechanism introduces, per RTTs measurement, only one IP address change on the uplink (detected by the server) and no changes on the downlink (no changes detectable by the client) as the change is undone on the downlink for all subsequent data packets2c. Once an RTTs measurement162is completed another one can be performed on the same flow, by introducing additional increments (or decrements) of source IP addresses. The source IP address change introduced simulates an Access Network5change of the client/UE4or a CGNAT change affecting the client/UE4. Similar steps should be followed for the calculation of RTTc using IP marking. This example assumes an implementation on the functionality required in a P-GW52or a PCEF, this being the node5of a Mobile Access Network as in a telecommunications system1. In this example200inFIG.6, the following steps should be followed for the calculation of RTTc using the address part3of a data packet2; in particular, in this example the address part3is an IP address value, the first device6is a UE51and the second device4is a server53. The implementation on the P-GW52identifies a QUIC flow202with the following 5-tuple information: UDP as the transport protocol, client IP address and port being ueIP and uePort, respectively, and server IP address and port being srvIP and srvPort, respectively. 
When a new passive RTTc measurement is required, the P-GW52will wait for a new data packet2aon the downlink for the chosen QUIC flow. When the data packet2ais received204, the source IP address is incremented205by one. Alternatively, the source IP address can be incremented or decremented by any other value. The P-GW52will store this modification in a table by storing the “old source IP address” from the server, the “new source IP address” and the rest of the values of the UDP 5-tuple. The data packet2bis sent206to the UE51with the new source IP address. The P-GW52stores a timestamp of this event—the modification and forwarding of the downlink packet. For all subsequent data packets2bon the downlink for this QUIC flow (UDP 5-tuple) the source IP address with value “old source IP address” should be replaced by the “new source IP address” value. The P-GW52inspects uplink traffic that matches the four original parameters of the 5-tuple. Specifically: protocol UDP, source IP address ueIP, source port uePort and destination port srvPort, wherein the IP addresses and ports are mirrored from the downlink. When a data packet2con the uplink matches this filter, check the destination IP address. If the destination IP address matches the “old source IP address” the data packet2dis forwarded and it is assumed that this data packet2dis not an acknowledgment for the modified downlink data packet2c. For the first data packet2cthat matches208the previous filter with the destination IP address matching the “new source IP address”, a new timestamp is stored in the P-GW52signalling the first acknowledgment of the modified downlink data packet2c. RTTc is calculated209as the difference of the two timestamps recorded. From this data packet2con—and for all the data packets2creceived on the uplink for this QUIC flow (UDP 5-tuple)—the destination IP address containing a value “new source IP address” should be replaced by the original value “old source IP address”. 
This mechanism introduces, per RTTc measurement, only one IP address change on the downlink (detected by the client) and no changes on the uplink (no changes detectable by the server) as the change is undone on the uplink for all subsequent data packets2c. Once an RTTc measurement is completed another one can be performed on the same flow, by introducing additional increments (or decrements) of source IP addresses. The source IP address change introduced simulates an Access Network5change of the server or a CGNAT change affecting the server. In any case these changes are rarer in the case of the server (measuring RTTc) than of the client/UE (measuring RTTs). If the server is sitting in a Mobile Access Network then the client/UE and server roles can be reversed and the calculation of RTTs and RTTc becomes symmetrical. The examples listed above change either the IP addresses or the ports. However, other implementations could easily change both the IP address and the port. These scenarios are a natural evolution of the above listed examples and are easily understood by the person skilled in the art. Therefore an explicit description of such scenarios has been avoided. The use cases listed calculate RTTs and RTTc for QUIC. As has been shown, no requirement on QUIC exists other than supporting Network Access changes. Any other protocol running on UDP that implements similar techniques for resilience to Access Network changes could also be supported for measurements of RTT—for example P2P protocols. As will be readily understood by those familiar with communications design, the functions, means or modules described may be implemented using digital logic and/or one or more microcontrollers, microprocessors, or other digital hardware. 
In some embodiments, several or all of the various functions may be implemented together, such as in a single application-specific integrated circuit (ASIC), or in two or more separate devices with appropriate hardware and/or software interfaces between them. Several of the functions may be implemented on a processor shared with other functional components of a wireless terminal or network node, for example. Alternatively, several of the functional elements of the processing means discussed may be provided through the use of dedicated hardware, while others are provided with hardware for executing software, in association with the appropriate software or firmware. Thus, the term “processor” or “controller” as used herein does not exclusively refer to hardware capable of executing software and may implicitly include, without limitation, digital signal processor (DSP) hardware, read-only memory (ROM) for storing software, random-access memory for storing software and/or program or application data, and non-volatile memory. Other hardware, conventional and/or custom, may also be included. Modifications and other embodiments of the disclosed embodiments will come to mind to one skilled in the art having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the embodiment(s) is/are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of this disclosure. Although specific terms may be employed herein, they are used in a generic and descriptive sense only, and not for purposes of limitation.
DESCRIPTION OF EMBODIMENTS In embodiments of this application, a traffic flow is a data flow of a service. A traffic flow includes a data packet. A data packet is an Ethernet frame in an Ethernet technology; a data packet is an IP packet in an internet protocol (IP) technology; and a data packet is an MPLS packet in a multi-protocol label switching (MPLS) technology. In a current high-reliability network transmission technology, there is no technical solution to configuring each node for transmission of a traffic flow, for traffic flow transmission and protection. For example, a deterministic network (Detnet) technology is a technology that provides an extremely low packet loss rate and a bounded latency for a real-time application traffic flow. A Detnet architecture includes two different types of nodes: an edge node and a relay node. The edge node is a source node and/or a destination node at a Detnet service layer, and the relay node is a node on a Detnet path from the source node to the destination node. A network path from the source node to the destination node includes a path segment, and the relay node is configured to connect neighboring path segments together, and provide a function of service protection. The Detnet architecture defines functions that can be implemented on a node, including a packet replication function (PRF), a packet elimination function (PEF), and a packet ordering function (POF), which are collectively referred to as PREOF. The packet replication function means to replicate a packet and forward the packet to one or more next-hop nodes through one or more segments. The packet elimination function means to eliminate a redundant packet of packets from one or more segments, to prevent excessive packet flooding on a network or sending a repeated packet beyond a Detnet domain. The packet ordering function means to re-order disordered packets. 
Although the Detnet architecture defines the functions that can be implemented on a node, the Detnet architecture does not specify that a specific function or specific functions is/are configured on a specific node for traffic flow transmission and protection. To resolve the foregoing technical problem, embodiments of this application provide a traffic flow processing method and apparatus for traffic flow transmission and protection. The following describes, with reference to the accompanying drawings, the traffic flow processing method and apparatus, and the device provided in the embodiments of this application. Referring toFIG.1, a traffic flow processing method provided in an embodiment of this application includes the following steps. S101: Obtain configuration information of a plurality of nodes. In this embodiment of this application, the plurality of nodes include edge nodes and relay nodes, where the edge nodes include a first edge node and a second edge node. An edge node may be connected to a customer edge node through an attachment circuit. For example, the first edge node is connected to a first customer edge node, and the second edge node is connected to a second customer edge node. The first customer edge node transmits a traffic flow to the first edge node, and the first edge node may adaptively encapsulate the traffic flow as data packets, so that the first edge node performs transmission. There are at least two paths between the first edge node and the second edge node, and there is at least one relay node on each path. A data packet starts from the first edge node and arrives at the second edge node through the at least two paths. The second edge node can receive the data packet provided that one path is not faulty. This ensures reliability of data transmission. 
After receiving the data packet, the second edge node may decapsulate non-repeated data packets, restore the non-repeated data packets to a traffic flow, and send the traffic flow to the second customer edge node. FIG.2is a schematic diagram of a possible architecture of a plurality of nodes. In this figure, a first edge node (Edge A), a second edge node (Edge B), and two relay nodes are included, where the two relay nodes are a first relay node (Relay C) and a second relay node (Relay D). The edge A, the relay C, and the relay D are connected to each other, and both the relay C and the relay D are connected to the edge B. The edge A is connected to a CE1through an AC1, so that the edge A can receive a traffic flow from the CE1and adaptively encapsulate the traffic flow as data packets. The edge B is connected to a CE2through an AC2, so that the edge B can restore received data packets to a traffic flow and transmit the traffic flow to the CE2. InFIG.2, there may be a total of four paths for a data packet from the edge A to the edge B, where the four paths are as follows:path 1: edge A→relay C→edge B;path 2: edge A→relay C→relay D→edge B;path 3: edge A→relay D→edge B; andpath 4: edge A→relay D→relay C→edge B. The traffic flow sent by the CE1may be transmitted from the edge A to the edge B through the four paths, and then arrives at the CE2. The CE2can receive the traffic flow provided that any one of the paths is not faulty. To ensure that a data packet of the edge A can pass through the four paths, corresponding configuration information may be generated for each of the first edge node, the second edge node, the first relay node, and the second relay node, to configure a corresponding functional module. First configuration information may be generated for the edge A. 
The first configuration information is used to: replicate a data packet included in a received traffic flow; output, to the relay C, a first data packet obtained through replication; and output, to the relay D, a second data packet obtained through replication. That is, the first configuration information is used to configure a replication module for the edge A. The replication module can enable the edge A to replicate the data packet included in the traffic flow received from the CE1, to obtain the first data packet and the second data packet, to send the first data packet to the relay C, and to send the second data packet to the relay D. There may be one or more packets included in the received traffic flow. Second configuration information may be generated for the relay C. The second configuration information is used to: replicate the first data packet; output, to the relay D, a third data packet obtained through replication; obtain a data packet in a fourth data packet obtained through replication and a data packet from the relay D; and send the obtained data packet to the edge B. Optionally, the obtained data packet may be a data packet that is first received by the relay C between the fourth data packet obtained through replication and the data packet from the relay D. That is, the second configuration information is used to configure a replication module and an elimination module for the relay C. The replication module can enable the relay C to replicate the received first data packet to obtain the third data packet and the fourth data packet, and to send the third data packet to the relay D. The elimination module can enable the relay C to obtain a first received data packet between the fourth data packet and the data packet from the relay D (namely, the following described fifth data packet), and to send the first received data packet to the edge B. 
If the relay C first receives the fourth data packet, the relay C sends the fourth data packet to the edge B. If the relay C first receives the fifth data packet, the relay C sends the fifth data packet to the edge B. Fourth configuration information may be generated for the relay D. The fourth configuration information is used to: replicate the second data packet from the edge A; output, to the relay C, the fifth data packet obtained through replication; obtain a data packet from a sixth data packet obtained through replication and the third data packet from the relay C; and send the data packet to the edge B. Optionally, the obtained data packet may be a first received data packet between the sixth data packet obtained through replication and the third data packet from the relay C. That is, the fourth configuration information is used to configure a replication module and an elimination module for the relay D. The replication module of the relay D can enable the relay D to replicate the received second data packet to obtain the fifth data packet and the sixth data packet, and to send the fifth data packet to the relay C. The elimination module of the relay D can enable the relay D to obtain a first received data packet between the sixth data packet obtained through replication and the third data packet from the relay C, to send the first received data packet to the edge B, and may delete a later received data packet. In this embodiment of this application, the relay D may obtain data packets in a receiving order of the data packets, and obtains a data packet that is first received. If the relay D receives the sixth data packet before the third data packet, the relay D sends the sixth data packet to the edge B. If the relay D receives the third data packet before the sixth data packet, the relay D sends the third data packet to the edge B. 
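The four candidate paths from the edge A to the edge B listed above (path 1 to path 4) can be enumerated with a simple depth-first search over the forwarding topology of FIG. 2. The graph encoding below is an illustrative assumption; only the link directions used for forwarding toward the edge B are modelled.

```python
def simple_paths(graph, src, dst, prefix=None):
    """Depth-first enumeration of all loop-free paths from src to dst."""
    prefix = (prefix or []) + [src]
    if src == dst:
        return [prefix]
    found = []
    for nxt in graph.get(src, ()):
        if nxt not in prefix:  # no node is visited twice on one path
            found += simple_paths(graph, nxt, dst, prefix)
    return found

# Forwarding topology of FIG. 2: edge A feeds relay C and relay D, the
# relays interconnect, and both reach edge B.
topo = {"A": ["C", "D"], "C": ["D", "B"], "D": ["C", "B"]}
paths = sorted(simple_paths(topo, "A", "B"))
```

Running the search yields exactly the four paths of the text: A→C→B, A→C→D→B, A→D→B and A→D→C→B.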
To avoid excessive flooding on a network or sending a repeated data packet beyond a domain, third configuration information may be generated for the edge B. The third configuration information is used to obtain and output a data packet in a data packet received from the relay C and a data packet received from the relay D. Optionally, the obtained data packet may be a first received data packet between the data packet from the relay C and the data packet from the relay D. That is, the third configuration information is used to configure an elimination module for the edge B, and the module can enable the edge B to obtain a first received data packet between the data packet from the relay C and the data packet from the relay D, and to send the first received data packet to the CE2. For example, if the data packet from the relay C is the fourth data packet, the data packet from the relay D is the sixth data packet, and the edge B receives the fourth data packet before the sixth data packet, the edge B may delete the sixth data packet, and send the fourth data packet to the CE2. Based on the foregoing analysis, refer toFIG.3(a).FIG.3(a)is a schematic diagram of a possible functional module architecture of the plurality of nodes according to the embodiment shown inFIG.2. InFIG.3(a), a replication module (Rep for short hereinafter)101may be configured for the edge A, a rep102and an elimination module (Elm for short hereinafter)103may be configured for the relay C, a rep104and an elm105may be configured for the relay D, and an elm106may be configured for the edge B. The rep101is connected to both the rep102and the rep104, the rep102is connected to both the elm103and the elm105, the rep104is connected to both the elm103and the elm105, and both the elm103and the elm105are connected to the elm106. 
The rep101is configured to: replicate a received data packet to obtain a first data packet and a second data packet, send the first data packet to the rep102, and send the second data packet to the rep104. The rep102replicates the first data packet to obtain a third data packet and a fourth data packet, sends the third data packet to the elm105, and sends the fourth data packet to the elm103. The rep104replicates the second data packet to obtain a fifth data packet and a sixth data packet, sends the fifth data packet to the elm103, and sends the sixth data packet to the elm105. The elm103obtains a first received data packet between the fourth data packet and the fifth data packet, and sends the first received data packet to the elm106. The elm105obtains a first received data packet between the third data packet and the sixth data packet, and sends the first received data packet to the elm106. The elm106obtains a first received data packet between a data packet from the elm103and a data packet from the elm105, and sends the first received data packet to the CE2. It should be noted that, as described above, a traffic flow may include one or more data packets. To identify different data packets, a sequence number may be used to identify each data packet of the traffic flow. Specifically, the edge A may identify data packets in a traffic flow in ascending order or descending order of sequence numbers, and continuity of the sequence numbers of the data packets reflects continuity of content of the data packets. For example, a data packet 1 indicates a 1stdata packet (an initial data packet) in a traffic flow, a data packet 2 indicates a 2nddata packet in the traffic flow, and a packet 3 indicates a 3rdpacket in the traffic flow. When replicating the data packets in the traffic flow, the replication module of the edge A may replicate the data packets one by one in ascending order or descending order of the sequence numbers. 
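The module wiring of FIG. 3(a) and the per-sequence-number elimination described above can be simulated in a few lines. The `Rep` and `Elm` class names are shorthand assumptions for illustration; packets are reduced to their sequence numbers.

```python
class Elm:
    """Elimination module sketch: forwards the first copy of each
    sequence number and silently drops later duplicates."""
    def __init__(self, out):
        self.out = out      # callable that receives forwarded packets
        self.seen = set()   # sequence numbers already forwarded
    def recv(self, seq):
        if seq not in self.seen:
            self.seen.add(seq)
            self.out(seq)

class Rep:
    """Replication module sketch: copies each packet to every output."""
    def __init__(self, *outs):
        self.outs = outs
    def recv(self, seq):
        for out in self.outs:
            out(seq)

# Wire up FIG. 3(a): rep101 -> rep102/rep104 -> elm103/elm105 -> elm106 -> CE2
ce2 = []
elm106 = Elm(ce2.append)
elm103 = Elm(elm106.recv)
elm105 = Elm(elm106.recv)
rep102 = Rep(elm103.recv, elm105.recv)  # fourth packet local, third to relay D
rep104 = Rep(elm105.recv, elm103.recv)  # sixth packet local, fifth to relay C
rep101 = Rep(rep102.recv, rep104.recv)

for seq in (1, 2, 3):
    rep101.recv(seq)
# Despite four redundant copies per packet, the CE2 sees each exactly once.
```

The same wiring tolerates a failed link: removing any single output from a `Rep` still leaves at least one path on which each sequence number reaches `elm106`.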
For example, the edge A replicates the data packet 1, the data packet 2, and the data packet 3 that are included in the traffic flow. To be specific, the edge A first replicates the data packet 1, then replicates the data packet 2, and finally replicates the data packet 3. After a data packet is replicated, content and a sequence number of a data packet obtained by replicating the data packet are respectively the same as the content and sequence number of the replicated data packet. For example, each of the at least two data packets obtained by replicating the data packet 1 is still the data packet 1, and each of the at least two data packets obtained by replicating the data packet 2 is still the data packet 2. Therefore, if a traffic flow includes a plurality of data packets, there are accordingly a plurality of first data packets obtained through replication and a plurality of second data packets obtained through replication. For example, if a traffic flow includes a data packet 1, a data packet 2, and a data packet 3, the first data packet obtained through replication also includes the data packet 1, the data packet 2, and the data packet 3; and the second data packet obtained through replication also includes the data packet 1, the data packet 2, and the data packet 3. In this embodiment, when replicating a data packet, a replication module of a relay node may replicate data packets one by one based on a receiving order of the data packets. For example, if the replication module of the relay C receives the data packet 1, the data packet 2, and the data packet 3 in order, the replication module of the relay C may replicate the data packets in the receiving order. If the data packets are out of order during transmission, the packets can still be replicated in a receiving order. 
For example, if the replication module of the relay C receives the data packet 1, the data packet 3, and the data packet 2 in order, the replication module of the relay C may replicate the data packets in the particular receiving order. For the elimination module, there are two cases. In one case, the elimination module obtains at least two data packets with a same sequence number, and may select one of the at least two data packets for sending. In another case, if the elimination module receives only one data packet with a specific sequence number due to packet loss during transmission of data packets, the elimination module may send the data packet. In the former case, if the elimination module successively receives at least two data packets with a same sequence number, the elimination module may output a first received data packet; or if the elimination module synchronously receives at least two data packets with a same sequence number, the elimination module may select any one of the data packets for outputting. For example, if the fourth data packet includes the data packet 1, and the fifth data packet also includes the data packet 1, the elimination module of the relay C selects a first received data packet 1 for sending. If the two data packets 1 are received simultaneously, the elimination module of the relay C may select either of the data packets 1 for outputting. In an embodiment, an indication flag may be set for each data packet on the elimination module, and the indication flag indicates whether a data packet with a sequence number is sent. For example, if an indication flag of the data packet 1 is 1, it indicates that a data packet with a sequence number 1 has been sent; and if the indication flag of the data packet 1 is 0, it indicates that the data packet with the sequence number 1 has not been sent. 
In this way, if the elimination module receives a data packet with a sequence number, the elimination module may query for an indication flag corresponding to the data packet with the sequence number. If the corresponding indication flag is 0, it indicates that the elimination module has not sent the data packet with the sequence number before, and therefore the elimination module may send the data packet. If the corresponding indication flag is 1, it indicates that the elimination module has sent the data packet with the sequence number before, and therefore the elimination module may delete the data packet. To enable a data packet to be transmitted according to a flow direction shown inFIG.3(a), configuration information of each node may include ingress information and egress information of a corresponding module. The ingress information may be, for example, ingress segment information and/or ingress port information. The ingress segment information includes at least an ingress segment identifier, and optionally, may further include an ingress segment IP address and/or an ingress segment label. The ingress segment IP address includes an IP address of a start node of an ingress segment and an IP address of an end node of the ingress segment. The ingress port information may include an ingress port identifier. Egress information may be, for example, egress segment information and/or egress port information. The egress segment information includes at least an egress segment identifier, and optionally, may further include an egress segment IP address and/or an egress segment label. The egress segment IP address includes an IP address of a start node of an egress segment and an IP address of an end node of the egress segment. The egress port information may include an egress port identifier. 
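The indication-flag bookkeeping described above (flag 0: not yet sent; flag 1: already sent) can be captured directly; the dictionary encoding and the `"send"`/`"delete"` return values are assumptions for illustration.

```python
class IndicationFlags:
    """Elimination-module bookkeeping sketch: one indication flag per
    sequence number, 0 = data packet not yet sent, 1 = already sent."""
    def __init__(self):
        self.flag = {}  # sequence number -> 0/1 (absent means 0)

    def handle(self, seq):
        # Query the indication flag for this sequence number.
        if self.flag.get(seq, 0) == 0:
            self.flag[seq] = 1  # mark the sequence number as sent
            return "send"
        return "delete"         # duplicate: drop the data packet
```

Note that the packet-loss case falls out naturally: if only one copy of a sequence number ever arrives, its flag is 0 on arrival and the copy is sent.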
In this embodiment, the ingress segment identifier and the egress segment identifier each may be a 32-bit unsigned integer, and the IP address of the start node and the IP address of the end node each may be a 32-bit IPv4 address or a 128-bit IPv6 address. In this specification, only the IPv4 address is used as an example for description. The ingress segment label and the egress segment label each may be an integer within a range of [16, 1048575]. Before ingress segment information and egress segment information of each node are described, a concept of segment and concepts related to the segment are first described. In the embodiments of this application, a link between nodes and a connection inside a node may be referred to as a segment, and a segment identifier is an identifier of a segment. A segment is directional. For example, a segment from the edge A to the relay C and a segment from the relay C to the edge A are not a same segment. Accordingly, segment identifiers corresponding to the two segments are also different. For example, inFIG.2andFIG.3(a), a segment identifier of the segment from the edge A to the relay C is “1001”, a segment identifier of a segment from the relay C to the edge B is “1002”, a segment identifier of a segment from the relay C to the relay D is “1003”, a segment identifier of a segment from the relay D to the relay C is “1004”, a segment identifier of a segment from the edge A to the relay D is “1005”, and a segment identifier of a segment from the relay D to the edge B is “1006”. The segment identifiers “1001” to “1006” are identifiers of segments between nodes. Segment identifiers inside nodes include, for example, a segment identifier “2001” of a segment from the rep102to the elm103in the relay C and a segment identifier “2002” of a segment from the rep104to the elm105in the relay D. In this embodiment, a segment label is a label corresponding to a segment identifier. 
For example, inFIG.2andFIG.3(a), a segment label of the segment from the edge A to the relay C is “501”, a segment label of the segment from the relay C to the edge B is “502”, a segment label of the segment from the relay C to the relay D is “503”, a segment label of the segment from the relay D to the relay C is “504”, a segment label of the segment from the edge A to the relay D is “505”, and a segment label of the segment from the relay D to the edge B is “506”. The segment labels “501” to “506” are labels of segments between nodes. Segment labels inside nodes include, for example, a segment label “20” of the segment from the rep102to the elm103in the relay C and a segment label “21” of the segment from the rep104to the elm105in the relay D. In addition, each node may further have an IP address. For example, inFIG.2andFIG.3(a), an IP address of the edge A is 10.0.0.1, an IP address of the edge B is 10.0.0.2, an IP address of the relay C is 10.0.0.3, and an IP address of the relay D is 10.0.0.4. In this case, configuration information of the edge A, namely, the first configuration information, may also be considered as configuration information of the rep101, and may include an egress segment identifier “1001” and an egress segment identifier “1005”. Optionally, the first configuration information may further include: an egress segment IP address 10.0.0.1 (an IP address of a start node) and an egress segment IP address 10.0.0.3 (an IP address of an end node) that correspond to the egress segment identifier “1001”, and the egress segment IP address 10.0.0.1 (the IP address of the start node) and an egress segment IP address 10.0.0.4 (an IP address of an end node) that correspond to the egress segment identifier “1005”. 
In certain embodiments, the first configuration information may include or further include: the segment label “501” corresponding to the egress segment identifier “1001” and the segment label “505” corresponding to the egress segment identifier “1005”. Configuration information of the relay C, namely, the second configuration information, includes configuration information of the rep102and configuration information of the elm103. The configuration information of the rep102includes: an ingress segment identifier “1001”, an egress segment identifier “2001”, and an egress segment identifier “1003”. Optionally, the configuration information of the rep102may further include: the segment IP address 10.0.0.1 (the IP address of the start node) and the segment IP address 10.0.0.3 (the IP address of the end node) that correspond to the ingress segment identifier “1001”, and the segment IP address 10.0.0.3 (the IP address of the start node) and the segment IP address 10.0.0.3 (the IP address of the end node) that correspond to the egress segment identifier “2001”, and the segment IP address 10.0.0.3 (the IP address of the start node) and the segment IP address 10.0.0.4 (the IP address of the end node) that correspond to the egress segment identifier 1003. In an embodiment, the configuration information of the rep102may include or further include: the segment label “501” corresponding to the ingress segment identifier “1001”, the segment label “20” corresponding to the egress segment identifier “2001”, and the segment label “503” corresponding to the egress segment identifier “1003”. The configuration information of the elm103includes: an ingress segment identifier “2001”, an ingress segment identifier “1004”, and an egress segment identifier “1002”. 
Optionally, the configuration information of the elm103may further include: the segment IP address 10.0.0.3 (the IP address of the start node) and the segment IP address 10.0.0.3 (the IP address of the end node) that correspond to the ingress segment identifier “2001”, the segment IP address 10.0.0.4 (the IP address of the start node) and the segment IP address 10.0.0.3 (the IP address of the end node) that correspond to the ingress segment identifier “1004”, and the segment IP address 10.0.0.3 (the IP address of the start node) and the segment IP address 10.0.0.2 (the IP address of the end node) that correspond to the egress segment identifier “1002”. In an embodiment, the configuration information of the elm103may include or further include: the segment label “20” corresponding to the ingress segment identifier “2001”, the segment label “504” corresponding to the ingress segment identifier “1004”, and the segment label “502” corresponding to the egress segment identifier “1002”. Configuration information of the relay D, namely, the fourth configuration information, includes configuration information of the rep104and configuration information of the elm105. The configuration information of the rep104includes: an ingress segment identifier “1005”, an egress segment identifier “2002”, and an egress segment identifier “1004”. 
Optionally, the configuration information of the rep104may further include: the segment IP address 10.0.0.1 (the IP address of the start node) and the segment IP address 10.0.0.4 (the IP address of the end node) that correspond to the ingress segment identifier “1005”, the segment IP address 10.0.0.4 (the IP address of the start node) and the segment IP address 10.0.0.4 (the IP address of the end node) that correspond to the egress segment identifier “2002”, and the segment IP address 10.0.0.4 (the IP address of the start node) and the segment IP address 10.0.0.3 (the IP address of the end node) that correspond to the egress segment identifier 1004. In an embodiment, the configuration information of the rep104may include or further include: the segment label “505” corresponding to the ingress segment identifier “1005”, the segment label “21” corresponding to the egress segment identifier “2002”, and the segment label “504” corresponding to the egress segment identifier “1004”. The configuration information of the elm105includes: an ingress segment identifier “2002”, an ingress segment identifier “1003”, and an egress segment identifier “1006”. Optionally, the configuration information of the elm105may further include: the segment IP address 10.0.0.4 (the IP address of the start node) and the segment IP address 10.0.0.4 (the IP address of the end node) that correspond to the ingress segment identifier “2002”, the segment IP address 10.0.0.3 (the IP address of the start node) and the segment IP address 10.0.0.4 (the IP address of the end node) that correspond to the ingress segment identifier “1003”, and the segment IP address 10.0.0.4 (the IP address of the start node) and the segment IP address 10.0.0.2 (the IP address of the end node) that correspond to the egress segment identifier “1006”. 
In an embodiment, the configuration information of the elm105may include or further include: the segment label “21” corresponding to the ingress segment identifier “2002”, the segment label “503” corresponding to the ingress segment identifier “1003”, and the segment label “506” corresponding to the egress segment identifier “1006”. Configuration information of the edge B, namely, the third configuration information, may also be considered as configuration information of the elm106, and may include an ingress segment identifier “1002” and an ingress segment identifier “1006”. Optionally, the third configuration information may further include: the ingress segment IP address 10.0.0.3 (the IP address of the start node) and the ingress segment IP address 10.0.0.2 (the IP address of the end node) that correspond to the ingress segment identifier “1002”, and the ingress segment IP address 10.0.0.4 (the IP address of the start node) and the ingress segment IP address 10.0.0.2 (the IP address of the end node) that correspond to the ingress segment identifier “1006”. In an embodiment, the third configuration information may include or further include: the segment label “502” corresponding to the ingress segment identifier “1002” and the segment label “506” corresponding to the ingress segment identifier “1006”. Certainly, it may be understood that the modules included in the foregoing nodes and compositions of the modules do not constitute any limitation on this application, and a person skilled in the art may design the modules depending on an actual situation. For example, fifth configuration information may be generated for the relay D. 
The fifth configuration information is used to: obtain and replicate a first received data packet between the second data packet from the edge A and the third data packet from the relay C; output, to the relay C, the fifth data packet obtained through replication; and output, to the edge B, the sixth data packet obtained through replication. That is, the fifth configuration information is used to configure an elimination module and a replication module for the relay D. The elimination module of the relay D can enable the relay D to obtain a first received data packet between the second data packet from the edge A and the third data packet from the relay C. The replication module of the relay D is configured to: replicate the obtained data packet to obtain the fifth data packet and the sixth data packet, send the fifth data packet to the relay C, and send the sixth data packet to the edge B. As shown inFIG.3(b), a rep201may be configured for an edge A, an elm206may be configured for an edge B, a rep202and an elm203may be configured for a relay C, and an elm204and a rep205may be configured for a relay D. The rep201, the rep202, and the elm204are connected to each other; the elm203, the rep205, and the elm206are connected to each other; the rep202is further connected to the elm203; and the elm204is further connected to the rep205. The rep201is configured to replicate a received data packet to obtain a first data packet and a second data packet, send the first data packet to the rep202, and send the second data packet to the elm204. The rep202replicates the first data packet to obtain a third data packet and a fourth data packet, sends the third data packet to the elm204, and sends the fourth data packet to the elm203. The elm204obtains a first received data packet between the second data packet and the third data packet, and sends the first received data packet to the rep205. 
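The replicate-then-eliminate behavior described above can be simulated in a few lines. This is a minimal sketch, not the patented implementation: the module names follow FIG.3(b), and the use of a per-packet sequence number as the duplicate key is an assumption made for illustration.

```python
# A replication module copies a packet onto two outputs; an elimination
# module forwards only the first received copy of each sequence number
# and drops later duplicates.
class Replicator:
    def process(self, pkt):
        return [dict(pkt), dict(pkt)]  # two identical copies

class Eliminator:
    def __init__(self):
        self.seen = set()
    def process(self, pkt):
        if pkt["seq"] in self.seen:
            return None          # duplicate: eliminated
        self.seen.add(pkt["seq"])
        return pkt               # first received: forwarded

rep_201 = Replicator()   # e.g. the rep 201 in the edge A
elm_204 = Eliminator()   # e.g. the elm 204 in the relay D

first, second = rep_201.process({"seq": 7, "payload": "x"})
out_a = elm_204.process(second)  # whichever copy arrives first is forwarded
out_b = elm_204.process(first)   # the later duplicate is dropped
```

The "first received data packet" semantics fall out naturally: whichever copy reaches the eliminator first is the one that survives, so the traffic flow tolerates the loss of either upstream path.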
The rep205replicates a data packet from the elm204to obtain a fifth data packet and a sixth data packet, sends the fifth data packet to the elm203, and sends the sixth data packet to the elm206. The elm203obtains a first received data packet between the fourth data packet and the fifth data packet, and sends the first received data packet to the elm206. The elm206obtains a first received data packet between a data packet from the elm203and a data packet from the rep205, and sends the first received data packet to the CE2. Correspondingly, for segment identifiers and segment labels of segments between the four nodes: the edge A, the relay C, the relay D, and the edge B, refer to the foregoing descriptions. Details are not repeated herein. Segment identifiers inside nodes include, for example, a segment identifier “3001” of a segment from the rep202to the elm203in the relay C and a segment identifier “3002” of a segment from the elm204to the rep205in the relay D. Segment labels inside nodes include, for example, a segment label “31” of the segment from the rep202to the elm203in the relay C and a segment label “32” of the segment from the elm204to the rep205in the relay D. Configuration information of the rep201in the edge A is the same as the configuration information of the rep101, and configuration information of the elm206in the edge B is the same as the configuration information of the elm106. Therefore, details are not repeated herein. Configuration information of the rep202and configuration information of the elm203in the relay C are as follows. The configuration information of the rep202includes: an ingress segment identifier “1001”, an egress segment identifier “3001”, and an egress segment identifier “1003”. 
Optionally, the configuration information of the rep202may further include: a segment IP address 10.0.0.1 (an IP address of a start node) and a segment IP address 10.0.0.3 (an IP address of an end node) that correspond to the ingress segment identifier “1001”, the segment IP address 10.0.0.3 (the IP address of the start node) and the segment IP address 10.0.0.3 (the IP address of the end node) that correspond to the egress segment identifier “3001”, and the segment IP address 10.0.0.3 (the IP address of the start node) and a segment IP address 10.0.0.4 (an IP address of an end node) that correspond to the egress segment identifier 1003. In an embodiment, the configuration information of the rep202may include or further include: the segment label “501” corresponding to the ingress segment identifier “1001”, the segment label “31” corresponding to the egress segment identifier “3001”, and the segment label “503” corresponding to the egress segment identifier “1003”. The configuration information of the elm203includes: an ingress segment identifier “3001”, an ingress segment identifier “1004”, and an egress segment identifier “1002”. Optionally, the configuration information of the elm203may further include: the segment IP address 10.0.0.3 (the IP address of the start node) and the segment IP address 10.0.0.3 (the IP address of the end node) that correspond to the ingress segment identifier “3001”, the segment IP address 10.0.0.4 (the IP address of the start node) and the segment IP address 10.0.0.3 (the IP address of the end node) that correspond to the ingress segment identifier “1004”, and the segment IP address 10.0.0.3 (the IP address of the start node) and a segment IP address 10.0.0.2 (an IP address of an end node) that correspond to the egress segment identifier “1002”. 
In an embodiment, the configuration information of the elm203may include or further include: the segment label “31” corresponding to the ingress segment identifier “3001”, the segment label “504” corresponding to the ingress segment identifier “1004”, and the segment label “502” corresponding to the egress segment identifier “1002”. Configuration information of the elm204and configuration information of the rep205in the relay D are as follows: The configuration information of the elm204includes: an ingress segment identifier “1005”, an ingress segment identifier “1003”, and an egress segment identifier “3002”. Optionally, the configuration information of the elm204may further include: the segment IP address 10.0.0.1 (the IP address of the start node) and the segment IP address 10.0.0.4 (the IP address of the end node) that correspond to the ingress segment identifier “1005”, the segment IP address 10.0.0.3 (the IP address of the start node) and the segment IP address 10.0.0.4 (the IP address of the end node) that correspond to the ingress segment identifier “1003”, and the segment IP address 10.0.0.4 (the IP address of the start node) and the segment IP address 10.0.0.4 (the IP address of the end node) that correspond to the egress segment identifier “3002”. In an embodiment, the configuration information of the elm204may include or further include: a segment label “505” corresponding to the ingress segment identifier “1005”, the segment label “503” corresponding to the ingress segment identifier “1003”, and a segment label “32” corresponding to the egress segment identifier “3002”. The configuration information of the rep205includes: an ingress segment identifier “3002”, an egress segment identifier “1004”, and an egress segment identifier “1006”. 
Optionally, the configuration information of the rep205may further include: the segment IP address 10.0.0.4 (the IP address of the start node) and the segment IP address 10.0.0.4 (the IP address of the end node) that correspond to the ingress segment identifier “3002”, the segment IP address 10.0.0.4 (the IP address of the start node) and the segment IP address 10.0.0.3 (the IP address of the end node) that correspond to the egress segment identifier “1004”, and the segment IP address 10.0.0.4 (the IP address of the start node) and the segment IP address 10.0.0.2 (the IP address of the end node) that correspond to the egress segment identifier “1006”. In an embodiment, the configuration information of the rep205may include or further include: the segment label “32” corresponding to the ingress segment identifier “3002”, a segment label “504” corresponding to the egress segment identifier “1004”, and a segment label “506” corresponding to the egress segment identifier “1006”. In addition to the ingress information and the egress information, the configuration information corresponding to each node may further include one or more of the following: a flow identifier, a module identifier, a node identifier, and the like. The flow identifier is an identifier of a data packet. In this embodiment of this application, the flow identifier may be a 32-bit unsigned integer. Each traffic flow received by the edge A corresponds to one flow identifier, and configuration information, for a same traffic flow, of nodes includes a same flow identifier. For example, a flow identifier in each of the first configuration information, the second configuration information, the third configuration information, the fourth configuration information, and the fifth configuration information is “101”. The module identifier is an identifier of a functional module in a node. 
For example, an identifier of a replication module is “PRF”, an identifier of an elimination module is “PEF”, and an identifier of an ordering module is “POF”. The node identifier is an identifier of a node. The identifier of the node may include an IP address of the node, a number of the node, and the like. For example, a node identifier of the first relay node may include “10.0.0.3”, “relay C”, “node C”, and the like. The configuration information may be stored in a form of a configuration file, and the configuration file may be, for example, an extensible markup language (XML) configuration file. In this embodiment, after the configuration information of the plurality of nodes is obtained, S102may be performed. S102: Configure the corresponding configuration information for the plurality of nodes. In this embodiment, S102and S202are performed by a controller. The controller may be independent of each node, or may be deployed on one of the nodes, for example, the first edge node. If the controller is independent of each node, the controller may configure the corresponding configuration information for each node by sending the corresponding configuration information to each node. To be specific, the controller sends first configuration information to the first edge node, sends second configuration information to a first relay node, sends third configuration information to the second edge node, and sends fourth configuration information or fifth configuration information to a second relay node. Specifically, the controller may send the corresponding configuration information to each node according to a protocol such as a network configuration protocol (NETCONF), a RESTCONF protocol, a path computation element communication protocol (PCEP), a border gateway protocol (BGP), or an OpenFlow protocol. 
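Since the text says the configuration file may be an XML file but does not fix its schema, the following sketch shows one hypothetical XML layout (all element and attribute names are assumptions) for the rep 102 entry of the relay C, combining the flow identifier, the module identifier, and the ingress/egress segment information:

```python
import xml.etree.ElementTree as ET

# Build a hypothetical configuration file for the rep 102 in the relay C.
cfg = ET.Element("config", {"node": "relay C", "flow-id": "101"})
mod = ET.SubElement(cfg, "module", {"id": "PRF"})  # "PRF" = replication module
ET.SubElement(mod, "ingress", {"segment-id": "1001", "label": "501"})
ET.SubElement(mod, "egress", {"segment-id": "2001", "label": "20"})
ET.SubElement(mod, "egress", {"segment-id": "1003", "label": "503"})
xml_text = ET.tostring(cfg, encoding="unicode")

# A node receiving the file can parse it back into the same structure.
parsed = ET.fromstring(xml_text)
```

Any schema that round-trips the same fields would do; the point is only that the flow identifier, module identifier, and segment entries travel together in one document per node.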
The RESTCONF protocol is used to provide a hypertext transfer protocol (HTTP) interface that complies with a representational state transfer (REST) principle, and is compatible with a NETCONF data storage model. Optionally, the controller may send a corresponding remote procedure call (RPC) instruction to each node, where the instruction includes the configuration information corresponding to each node. If the controller is deployed on the first edge node, the first edge node may directly perform configuration based on the first configuration information, and configure the corresponding configuration information for each node by sending the corresponding configuration information to the other nodes. To be specific, the first edge node sends the second configuration information to the first relay node, sends the third configuration information to the second edge node, and sends the fourth configuration information or the fifth configuration information to the second relay node. Optionally, the first edge node may send a path message to another node according to the resource reservation protocol-traffic engineering (RSVP-TE), where the path message carries the configuration information corresponding to the another node. Specifically, the path message includes explicit route objects (EROs), where the explicit route object is in a message format shown in Table 1. The message format may include a PREOF module type (namely, the foregoing described module identifier), a flow identifier (flow ID), a field length, and a TLV field. The TLV field may include ingress information and egress information of a functional module corresponding to the module identifier.

TABLE 1
PREOF module type (Function Type) | Field length | Flow identifier (flow ID) | TLV

For example, the relay C includes the rep102and the elm103. Table 2 shows a message format of a path message corresponding to the relay C. 
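A single TLV entry of such a path message can be packed into bytes. The byte layout below is an assumption for illustration only (the text names the fields of a tuple — segment identifier, flow identifier, start IP address, end IP address, label — but not their wire encoding):

```python
import struct
import ipaddress

# Hypothetical encoding: five network-order 32-bit fields per TLV entry.
def pack_tlv(segment_id, flow_id, start_ip, end_ip, label):
    return struct.pack(
        "!IIIII",
        segment_id,
        flow_id,
        int(ipaddress.IPv4Address(start_ip)),
        int(ipaddress.IPv4Address(end_ip)),
        label,
    )

def unpack_tlv(data):
    seg, flow, s, e, label = struct.unpack("!IIIII", data)
    return (seg, flow,
            str(ipaddress.IPv4Address(s)),
            str(ipaddress.IPv4Address(e)),
            label)

# First TLV entry of the Rep row for the relay C.
raw = pack_tlv(1001, 201, "10.0.0.1", "10.0.0.3", 501)
```

A receiving node would walk the TLV field entry by entry and recover the same tuples it needs to build its ingress and egress configuration.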
TABLE 2
Rep | 64 | 201 | (1001, 201, 10.0.0.1, 10.0.0.3, 501) (2001, 201, 10.0.0.3, 10.0.0.3, 0) (1003, 201, 10.0.0.3, 10.0.0.4, 503)
Elm | 64 | 201 | (2001, 201, 10.0.0.3, 10.0.0.3, 0) (1004, 201, 10.0.0.4, 10.0.0.3, 504) (1002, 201, 10.0.0.3, 10.0.0.2, 502)

In this embodiment, after obtaining the corresponding configuration information, a corresponding module may be configured for each of the plurality of nodes based on the corresponding configuration information. For example, after obtaining the first configuration information, a replication module may be configured for the first edge node. The replication module of the first edge node is configured to: replicate a received data packet; output, to the first relay node, a first data packet obtained through replication; and output, to the second relay node, a second data packet obtained through replication. After obtaining the second configuration information, a replication module and an elimination module may be configured for the first relay node. The replication module of the first relay node is configured to replicate the first data packet; and output, to the second relay node, a third data packet obtained through replication. The elimination module of the first relay node is configured to: obtain a first received data packet between a fourth data packet obtained through replication and a data packet from the second relay node, and send the first received data packet to the second edge node. After obtaining the third configuration information, an elimination module may be configured for the second edge node. The elimination module of the second edge node is configured to: obtain a first received data packet between a data packet received from the first relay node and a data packet received from the second relay node, and output the first received data packet. After obtaining the fourth configuration information, a replication module and an elimination module may be configured for the second relay node. 
The replication module of the second relay node is configured to replicate the second data packet from the first edge node; and output, to the first relay node, a fifth data packet obtained through replication. The elimination module of the second relay node is configured to: obtain a first received data packet between a sixth data packet obtained through replication and the third data packet from the first relay node, and send the first received data packet to the second edge node. Alternatively, after obtaining the fifth configuration information, an elimination module and a replication module may be configured for the second relay node. The elimination module of the second relay node is configured to obtain a first received data packet between the second data packet from the first edge node and the third data packet from the first relay node. The replication module of the second relay node is configured to: replicate the data packet; output, to the first relay node, a fifth data packet obtained through replication; and output, to the second edge node, a sixth data packet obtained through replication. In an example embodiment, each node may generate a forwarding entry of a forwarding plane based on ingress information and egress information that are included in configuration information of the node, to implement a corresponding forwarding function. A forwarding entry of a node on a forwarding plane may include an identifier of a functional module included in the node, and an incoming label and an outgoing label of the functional module. If the configuration information does not include a segment label, the forwarding plane may determine, based on a mapping relationship between a segment identifier and a segment label, the segment label corresponding to the segment identifier. Specifically, a segment label corresponding to an ingress segment identifier is an incoming label, and a segment label corresponding to an egress segment identifier is an outgoing label. 
For example, Table 3 shows forwarding entries of the relay C on the forwarding plane.

TABLE 3
Functional module identifier | Incoming label | Outgoing label
Rep | 501 | 31, 503
Elm | 31, 504 | 502

When a node forwards a data packet on the forwarding plane, each functional module needs to attach an outgoing label (add a label or replace a label) to the data packet when outputting the data packet. In this way, after receiving the data packet, a current node may obtain, through parsing, the outgoing label that was attached to the data packet on a previous node, where the outgoing label is an incoming label from the perspective of the current node, and then the current node queries for a forwarding entry based on the incoming label to determine a corresponding outgoing label. For example, if the replication module of the relay C receives a data packet with the label 501, the replication module looks up Table 3 to determine that the outgoing labels corresponding to the functional module with the identifier Rep are 31 and 503. This indicates that the data packet with the label 501 needs to be replicated, an outgoing label 31 and an outgoing label 503 are attached to the two data packets obtained through replication, and the two data packets are each sent on a corresponding link. After receiving the data packet with the outgoing label 31 attached by the replication module of the relay C, the elimination module of the relay C looks up Table 3 to determine that the outgoing label corresponding to the functional module with the identifier Elm is 502. Therefore, after receiving a data packet with an outgoing label 504 attached by a previous node, the elimination module of the relay C obtains one of the two data packets and outputs the data packet on a link with the outgoing label 502. In addition, it should be noted that the “link” described in this embodiment of this application may be a physical connection, or may be a virtual connection. In the foregoing embodiments, there are two relay nodes. 
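The label-driven forwarding of the relay C can be sketched as a table lookup. This is an illustrative simplification (the dictionary structure is an assumption, and duplicates are keyed here on the flow identifier, whereas a real eliminator would key on a per-packet sequence number):

```python
# Forwarding entries of the relay C (values from Table 3): the incoming
# label selects a functional module; a Rep entry emits one copy per
# outgoing label, an Elm entry forwards only the first received copy.
FORWARDING = {
    501: ("Rep", [31, 503]),  # replicate: toward own elm (31) and relay D (503)
    31:  ("Elm", [502]),      # eliminate, then forward toward the edge B
    504: ("Elm", [502]),
}

seen = set()  # simplified duplicate memory of the elimination module

def forward(label, flow_id):
    kind, out_labels = FORWARDING[label]
    if kind == "Rep":
        return [(out, flow_id) for out in out_labels]
    if flow_id in seen:
        return []                        # duplicate eliminated
    seen.add(flow_id)
    return [(out_labels[0], flow_id)]    # first received copy forwarded

copies = forward(501, 101)   # replication: two copies with labels 31 and 503
kept = forward(31, 101)      # first copy re-labeled 502 and forwarded
dropped = forward(504, 101)  # second copy eliminated
```

The same lookup shape covers both module types, which is why one forwarding table per node with an identifier column suffices.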
In actual application, there may be more than two relay nodes. The following provides detailed descriptions by using an example in which there are four relay nodes. Because an embodiment in the case of four relay nodes is similar to the embodiment in the case of two relay nodes, same or similar content is not repeated, and reference may be made to the foregoing descriptions. FIG.4is a schematic diagram of a possible architecture of a plurality of nodes. In this figure, a first edge node edge A′, a second edge node edge B′, and four relay nodes are included, where the four relay nodes include a first relay node relay C′, a second relay node relay D′, a third relay node relay E′, and a fourth relay node relay F′. The edge A′, the relay C′, and the relay D′ are connected to each other, the relay C′, the relay D′, the relay E′, and the relay F′ are connected to each other, and the relay E′, the relay F′, and the edge B′ are connected to each other. An edge node may be connected to a customer edge node through an attachment circuit. Specifically, the edge A′ is connected to a CE1through an AC1, so that the edge A′ can receive a traffic flow from the CE1. The edge B′ is connected to a CE2through an AC2, so that the edge B′ can transmit, to the CE2, a traffic flow into which received data packets are integrated. 
InFIG.4, there are a total of 16 paths from the edge A′ to the edge B′, where the 16 paths are as follows:
path 1: edge A′→relay C′→relay E′→edge B′;
path 2: edge A′→relay C′→relay E′→relay F′→edge B′;
path 3: edge A′→relay C′→relay F′→edge B′;
path 4: edge A′→relay C′→relay F′→relay E′→edge B′;
path 5: edge A′→relay C′→relay D′→relay E′→edge B′;
path 6: edge A′→relay C′→relay D′→relay F′→edge B′;
path 7: edge A′→relay C′→relay D′→relay F′→relay E′→edge B′;
path 8: edge A′→relay C′→relay D′→relay E′→relay F′→edge B′;
path 9: edge A′→relay D′→relay F′→edge B′;
path 10: edge A′→relay D′→relay F′→relay E′→edge B′;
path 11: edge A′→relay D′→relay E′→edge B′;
path 12: edge A′→relay D′→relay E′→relay F′→edge B′;
path 13: edge A′→relay D′→relay C′→relay E′→edge B′;
path 14: edge A′→relay D′→relay C′→relay E′→relay F′→edge B′;
path 15: edge A′→relay D′→relay C′→relay F′→edge B′; and
path 16: edge A′→relay D′→relay C′→relay F′→relay E′→edge B′.
The traffic flow sent by the CE1may be transmitted from the edge A′ to the edge B′ through the 16 paths, and then arrives at the CE2. The CE2can receive the traffic flow provided that at least one of the paths is not faulty. To ensure that a data packet in the traffic flow can pass through the 16 paths, corresponding configuration information may be generated for each of the first edge node edge A′, the second edge node edge B′, the first relay node relay C′, the second relay node relay D′, the third relay node relay E′, and the fourth relay node relay F′, to configure a corresponding module. First configuration information may be generated for the edge A′. The first configuration information is used to: replicate a data packet included in a received traffic flow; output, to the relay C′, a first data packet obtained through replication; and output, to the relay D′, a second data packet obtained through replication. That is, the first configuration information is used to configure a replication module for the edge A′. 
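The count of 16 paths can be cross-checked with a small depth-first search. The sketch below models the directed links that the replication scheme actually uses in FIG.4 (both directions between the relay C′ and the relay D′ and between the relay E′ and the relay F′ are traversed; the single-letter node names are shorthand for edge A′, relay C′, and so on):

```python
# Directed links used by the scheme: A feeds C and D; C and D feed each
# other and both feed E and F; E and F feed each other and both feed B.
LINKS = {
    "A": ["C", "D"],
    "C": ["D", "E", "F"],
    "D": ["C", "E", "F"],
    "E": ["F", "B"],
    "F": ["E", "B"],
    "B": [],
}

def paths(node, goal, visited=()):
    """Enumerate simple directed paths from node to goal."""
    if node == goal:
        return [visited + (node,)]
    found = []
    for nxt in LINKS[node]:
        if nxt not in visited:
            found += paths(nxt, goal, visited + (node,))
    return found

all_paths = paths("A", "B")  # the 16 paths from edge A' to edge B'
```

The search reproduces exactly the 16 paths enumerated above: 8 entering the relay layer via C′ and, by symmetry, 8 via D′.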
The module can enable the edge A′ to replicate the data packet in the traffic flow received from the CE1, to obtain the first data packet and the second data packet, send the first data packet to the relay C′, and send the second data packet to the relay D′. Second configuration information may be generated for the relay C′. The second configuration information is used to: replicate the first data packet; output, to the relay D′, a third data packet obtained through replication; obtain and replicate a first received data packet between a fourth data packet obtained through replication and a data packet (the following described fifth data packet) from the relay D′; output, to the relay E′, a seventh data packet obtained through replication; and output, to the relay F′, an eighth data packet obtained through replication. That is, the second configuration information is used to configure two replication modules and one elimination module for the relay C′. A first replication module (for example, a rep302inFIG.6(a)orFIG.8) of the relay C′ is configured to: replicate the first data packet to obtain the third data packet and the fourth data packet, and send the third data packet to the relay D′. The elimination module of the relay C′ is configured to obtain a first received data packet between the fourth data packet and the fifth data packet that is from the relay D′. A second replication module (for example, a rep304inFIG.6(a)orFIG.8) of the relay C′ is configured to replicate the first received data packet to obtain the seventh data packet and the eighth data packet, send the seventh data packet to the relay E′, and send the eighth data packet to the relay F′. Sixth configuration information may be generated for the relay E′. 
The sixth configuration information is used to: obtain and replicate a first received data packet between the seventh data packet and a data packet (namely, the following described eleventh data packet) from the relay D′; output, to the relay F′, a ninth data packet obtained through replication; obtain a first received data packet between a tenth data packet obtained through replication and a data packet (namely, the following described thirteenth data packet) from the relay F′; and send the first received data packet to the edge B′. That is, the sixth configuration information is used to configure two elimination modules and a replication module for the relay E′. A first elimination module (for example, an elm305inFIG.6(a)orFIG.8) of the relay E′ is configured to obtain a first received data packet between the seventh data packet and the eleventh data packet. The replication module of the relay E′ is configured to: replicate the first received data packet to obtain the ninth data packet and the tenth data packet, and send the ninth data packet to the relay F′. A second elimination module (for example, an elm307inFIG.6(a)orFIG.8) of the relay E′ is configured to: obtain a first received data packet between the tenth data packet and the thirteenth data packet, and send the first received data packet to the edge B′. Fourth configuration information may be generated for the relay D′. The fourth configuration information is used to: replicate the second data packet from the edge A′; output, to the relay C′, the fifth data packet obtained through replication; obtain and replicate a first received data packet between the sixth data packet obtained through replication and the third data packet from the relay C′; send, to the relay E′, the eleventh data packet obtained through replication; and send, to the relay F′, a twelfth data packet obtained through replication. 
That is, the fourth configuration information is used to configure two replication modules and one elimination module for the relay D′. A first replication module (for example, a rep308inFIG.7(a)orFIG.8) of the relay D′ is configured to: replicate the second data packet to obtain the fifth data packet and the sixth data packet, and send the fifth data packet to the relay C′. The elimination module of the relay D′ is configured to obtain a first received data packet between the sixth data packet and the third data packet from the relay C′. A second replication module (for example, a rep310inFIG.7(a)orFIG.8) of the relay D′ is configured to replicate the first received data packet to obtain the eleventh data packet and the twelfth data packet, send the eleventh data packet to the relay E′, and send the twelfth data packet to the relay F′. Seventh configuration information may be generated for the relay F′. The seventh configuration information is used to: obtain and replicate a first received data packet between the eighth data packet and the twelfth data packet; send, to the relay E′, the thirteenth data packet obtained through replication; obtain a first received data packet between the fourteenth data packet obtained through replication and the ninth data packet from the relay E′; and send the first received data packet to the edge B′. That is, the seventh configuration information is used to configure two elimination modules and one replication module for the relay F′. A first elimination module (for example, an elm311inFIG.7(a)orFIG.8) of the relay F′ is configured to obtain a first received data packet between the eighth data packet and the twelfth data packet. The replication module of the relay F′ is configured to: replicate the first received data packet to obtain the thirteenth data packet and the fourteenth data packet, and send the thirteenth data packet to the relay E′. 
A second elimination module (for example, an elm313inFIG.7(a)orFIG.8) of the relay F′ is configured to: obtain a first received data packet between the fourteenth data packet and the ninth data packet, and send the first received data packet to the edge B′. Third configuration information may be generated for the edge B′. The third configuration information is used to obtain a first received data packet between a data packet received from the relay E′ and a data packet received from the relay F′, and may send the first received data packet to the CE2. That is, the third configuration information is used to configure an elimination module for the edge B′, and the module can enable the edge B′ to obtain the first received data packet in the data packet from the relay E′ and the data packet from the relay F′, and to send the first received data packet to the CE2. Based on the foregoing analysis, refer toFIG.5. In the figure, a rep301may be configured for the edge A′, and an elm314may be configured for the edge B′. The rep301is configured to: obtain the data packet from the CE1, replicate the received data packet to obtain the first data packet and the second data packet, send the first data packet to the relay C′, and send the second data packet to the relay D′. The elm314is configured to: receive a data packet from the relay E′ and a data packet from the relay F′, and may obtain a first received data packet and send the first received data packet to the CE2. There may be a plurality of implementations for a quantity of modules included in each of the relay C′, the relay D′, the relay E′, and the relay F′ and manners of connections between the modules. For the relay C′, as shown inFIG.6(a), in an example embodiment, the rep302, an elm303, and the rep304may be configured for the relay C′. 
The rep302is configured to: receive and replicate the first data packet to obtain the third data packet and the fourth data packet, send the third data packet to the relay D′, and send the fourth data packet to the elm303. The elm303obtains the first received data packet in the fourth data packet and the fifth data packet from the relay D′, and sends the first received data packet to the rep304. The rep304replicates the received data packet to obtain the seventh data packet and the eighth data packet, sends the seventh data packet to the relay E′, and sends the eighth data packet to the relay F′. Correspondingly, the elm305, a rep306, and the elm307may be configured for the relay E′. The elm305is configured to: obtain the first received data packet in the seventh data packet and the eleventh data packet from the relay D′, and send the first received data packet to the rep306. The rep306replicates the received data packet to obtain the ninth data packet and the tenth data packet, sends the ninth data packet to the relay F′, and sends the tenth data packet to the elm307. As shown inFIG.6(b), in another embodiment, a rep402and an elm403may be configured for the relay C′. The rep402is configured to: receive and replicate the first data packet to obtain the third data packet, the fourth data packet, and the eighth data packet; send the third data packet to the relay D′; send the fourth data packet to the elm403; and send the eighth data packet to the relay F′. The elm403obtains the first received data packet in the fourth data packet and the fifth data packet that is from the relay D′, and sends the first received data packet to the relay E′. Correspondingly, a rep404and an elm405may be configured for the relay E′. The rep404is configured to: replicate a data packet from the relay C′ to obtain the ninth data packet and the tenth data packet, send the ninth data packet to the relay F′, and send the tenth data packet to the elm405. 
The elm405obtains a first received data packet between the tenth data packet, the eleventh data packet from the relay D′, and the thirteenth data packet from the relay F′, and sends the first received data packet to the edge B′. For the relay D′, as shown inFIG.7(a), in an example embodiment, the rep308, an elm309, and the rep310may be configured for the relay D′. The rep308replicates the second data packet to obtain the fifth data packet and the sixth data packet, sends the fifth data packet to the relay C′, and sends the sixth data packet to the elm309. The elm309obtains a first received data packet between the sixth data packet and the third data packet that is from the relay C′, and sends the first received data packet to the rep310. The rep310replicates a data packet from the elm309to obtain the eleventh data packet and the twelfth data packet, sends the eleventh data packet to the relay E′, and sends the twelfth data packet to the relay F′. Correspondingly, the elm311, a rep312, and the elm313may be configured for the relay F′. The elm311is configured to: obtain a first received data packet between the twelfth data packet and the eighth data packet that is from the relay C′, and send the first received data packet to the rep312. The rep312replicates the received data packet to obtain the thirteenth data packet and the fourteenth data packet, sends the thirteenth data packet to the relay E′, and sends the fourteenth data packet to the elm313. The elm313is configured to: obtain the first received data packet in the fourteenth data packet and the ninth data packet that is from the relay E′, and send the first received data packet to the edge B′. As shown inFIG.7(b), in another embodiment, a rep408and an elm409may be configured for the relay D′. 
The rep408is configured to: replicate the second data packet from the edge A′ to obtain the fifth data packet, the sixth data packet, and the eleventh data packet; send the fifth data packet to the relay C′; send the sixth data packet to the elm409; and send the eleventh data packet to the relay E′. The elm409is configured to: obtain a first received data packet between the sixth data packet and the third data packet that is from the relay C′, and send the first received data packet to the relay F′. Correspondingly, a rep410and an elm411may be configured for the relay F′. The rep410is configured to: replicate a data packet from the relay D′ to obtain the thirteenth data packet and the fourteenth data packet, send the thirteenth data packet to the relay E′, and send the fourteenth data packet to the elm411. The elm411is configured to: obtain a first received data packet between the fourteenth data packet, the eighth data packet from the relay C′, and the ninth data packet from the relay E′; and send the first received data packet to the edge B′. Certainly, the modules included in the nodes and manners of connections between the modules shown inFIG.6(a),FIG.6(b),FIG.7(a), andFIG.7(b)do not constitute any limitation on the technical solutions of this application. A person skilled in the art may further perform designing depending on an actual situation. In addition, in embodiments of this application, either ofFIG.6(a)andFIG.6(b)and either ofFIG.7(a)andFIG.7(b)may be randomly combined. The following uses an example architecture obtained by combiningFIG.5,FIG.6(a), andFIG.7(a), namely, an architecture shown inFIG.8, for detailed description. 
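The replication and elimination behavior that the rep and elm modules perform above can be sketched as follows. This is a minimal illustrative model only, not the normative implementation of this application: the Packet, Rep, and Elm names, and the use of a per-packet sequence number as the data packet identifier, are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    seq: int      # data packet identifier (assigned in generation order)
    payload: str

class Rep:
    """Replication module: copies each incoming packet to every egress."""
    def __init__(self, egresses):
        self.egresses = egresses              # downstream receive callables

    def receive(self, pkt):
        for egress in self.egresses:
            egress(Packet(pkt.seq, pkt.payload))  # one copy per egress

class Elm:
    """Elimination module: forwards only the first-received copy of each
    data packet identifier and discards later duplicate copies."""
    def __init__(self, egress):
        self.egress = egress
        self.seen = set()

    def receive(self, pkt):
        if pkt.seq in self.seen:
            return                            # duplicate copy: eliminate
        self.seen.add(pkt.seq)
        self.egress(pkt)                      # first-received copy: forward

# Two copies of each packet travel disjoint paths and merge at one elm.
delivered = []
elm = Elm(delivered.append)
rep = Rep([elm.receive, elm.receive])         # models two parallel paths
rep.receive(Packet(1, "a"))
rep.receive(Packet(2, "b"))
# delivered now holds exactly one copy of packets 1 and 2
```

Because the elm keeps only the first arrival per identifier, the CE side sees a single copy of each packet even though every packet was transmitted twice.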
Specifically, inFIG.8, a rep301is connected to both a rep302and a rep308; the rep302is connected to both an elm303and an elm309; the rep308is connected to both the elm303and the elm309; the elm303is connected to a rep304; the rep304is connected to both an elm305and an elm311; the elm309is connected to a rep310; the rep310is connected to both the elm305and the elm311; the elm305is connected to a rep306; the rep306is connected to both an elm307and an elm313; the elm311is connected to a rep312; the rep312is connected to both the elm307and the elm313; and both the elm307and the elm313are connected to an elm314. The rep301receives a data packet from a CE1, replicates the received data packet to obtain a first data packet and a second data packet, sends the first data packet to the rep302, and sends the second data packet to the rep308. The rep302replicates the first data packet to obtain a third data packet and a fourth data packet, sends the third data packet to the elm309, and sends the fourth data packet to the elm303. The rep308replicates the second data packet to obtain a fifth data packet and a sixth data packet, sends the fifth data packet to the elm303, and sends the sixth data packet to the elm309. The elm303obtains a first received data packet between the fourth data packet and the fifth data packet, and sends the first received data packet to the rep304. The rep304replicates the received data packet to obtain a seventh data packet and an eighth data packet, sends the seventh data packet to the elm305, and sends the eighth data packet to the elm311. The elm309obtains a first received data packet between the third data packet and the sixth data packet, and sends the first received data packet to the rep310. The rep310replicates the received data packet to obtain an eleventh data packet and a twelfth data packet, sends the eleventh data packet to the elm305, and sends the twelfth data packet to the elm311. 
The elm305obtains a first received data packet between the seventh data packet and the eleventh data packet, and sends the first received data packet to the rep306. The rep306replicates the received data packet to obtain a ninth data packet and a tenth data packet, sends the ninth data packet to the elm313, and sends the tenth data packet to the elm307. The elm311obtains a first received data packet between the eighth data packet and the twelfth data packet, and sends the first received data packet to the rep312. The rep312replicates the received data packet to obtain a thirteenth data packet and a fourteenth data packet, sends the thirteenth data packet to the elm307, and sends the fourteenth data packet to the elm313. The elm307obtains a first received data packet between the tenth data packet and the thirteenth data packet, and sends the first received data packet to the elm314. The elm313obtains a first received data packet between the ninth data packet and the fourteenth data packet, and sends the first received data packet to the elm314. The elm314obtains a first received data packet between a data packet from the elm307and a data packet from the elm313, and sends the first received data packet to the CE2. In this embodiment, a link between nodes and a connection between modules inside a node may be represented by a segment, and different segments have different segment identifiers. 
For example, inFIG.8, a segment identifier of a segment from the edge A′ to the relay C′ is "4001", a segment identifier of a segment from the edge A′ to the relay D′ is "4002", a segment identifier of a segment from the relay C′ to the relay D′ is "4003", a segment identifier of a segment from the relay D′ to the relay C′ is "4004", a segment identifier of a segment from the relay C′ to the relay F′ is "4005", a segment identifier of a segment from the relay D′ to the relay E′ is "4006", a segment identifier of a segment from the relay C′ to the relay E′ is "4007", a segment identifier of a segment from the relay D′ to the relay F′ is "4008", a segment identifier of a segment from the relay E′ to the relay F′ is "4009", a segment identifier of a segment from the relay F′ to the relay E′ is "4010", a segment identifier of a segment from the relay E′ to the edge B′ is "4011", and a segment identifier of a segment from the relay F′ to the edge B′ is "4012". The segment identifiers "4001" to "4012" are identifiers of segments between nodes. Segment identifiers inside the nodes, for example, include: a segment identifier "5001" of a segment from the rep302to the elm303and a segment identifier "5002" of a segment from the elm303to the rep304in the relay C′; a segment identifier "5003" of a segment from the elm305to the rep306and a segment identifier "5004" of a segment from the rep306to the elm307in the relay E′; a segment identifier "5005" of a segment from the rep308to the elm309and a segment identifier "5006" of a segment from the elm309to the rep310in the relay D′; and a segment identifier "5007" of a segment from the elm311to the rep312and a segment identifier "5008" of a segment from the rep312to the elm313in the relay F′. In this embodiment, a segment between nodes and a segment between modules inside a node may further have a segment label. 
For example, inFIG.8, a segment label of the segment from the edge A′ to the relay C′ is "601", a segment label of the segment from the edge A′ to the relay D′ is "602", a segment label of the segment from the relay C′ to the relay D′ is "603", a segment label of the segment from the relay D′ to the relay C′ is "604", a segment label of the segment from the relay C′ to the relay F′ is "605", a segment label of the segment from the relay D′ to the relay E′ is "606", a segment label of the segment from the relay C′ to the relay E′ is "607", a segment label of the segment from the relay D′ to the relay F′ is "608", a segment label of the segment from the relay E′ to the relay F′ is "609", a segment label of the segment from the relay F′ to the relay E′ is "610", a segment label of the segment from the relay E′ to the edge B′ is "611", and a segment label of the segment from the relay F′ to the edge B′ is "612". The segment labels "601" to "612" are labels of segments between nodes. Segment labels inside nodes, for example, include: a segment label "41" of a segment from the rep302to the elm303and a segment label "42" of a segment from the elm303to the rep304in the relay C′, a segment label "43" of a segment from the elm305to the rep306and a segment label "44" of a segment from the rep306to the elm307in the relay E′, a segment label "45" of a segment from the rep308to the elm309and a segment label "46" of a segment from the elm309to the rep310in the relay D′, and a segment label "47" of a segment from the elm311to the rep312and a segment label "48" of a segment from the rep312to the elm313in the relay F′. In addition, each node may further have an IP address. For example, inFIG.4, an IP address of the edge A′ is 10.1.0.1, an IP address of the edge B′ is 10.1.0.2, an IP address of the relay C′ is 10.1.0.3, an IP address of the relay D′ is 10.1.0.4, an IP address of the relay E′ is 10.1.0.5, and an IP address of the relay F′ is 10.1.0.6. 
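The inter-node segment identifiers, segment labels, and node IP addresses given above can be collected into a single lookup table. The dictionary layout and the helper name below are assumptions for illustration; the values are taken directly from the example above.

```python
# Illustrative table of the inter-node segments of FIG. 8:
# segment identifier -> (segment label, start-node IP, end-node IP).
SEGMENTS = {
    "4001": ("601", "10.1.0.1", "10.1.0.3"),  # edge A' -> relay C'
    "4002": ("602", "10.1.0.1", "10.1.0.4"),  # edge A' -> relay D'
    "4003": ("603", "10.1.0.3", "10.1.0.4"),  # relay C' -> relay D'
    "4004": ("604", "10.1.0.4", "10.1.0.3"),  # relay D' -> relay C'
    "4005": ("605", "10.1.0.3", "10.1.0.6"),  # relay C' -> relay F'
    "4006": ("606", "10.1.0.4", "10.1.0.5"),  # relay D' -> relay E'
    "4007": ("607", "10.1.0.3", "10.1.0.5"),  # relay C' -> relay E'
    "4008": ("608", "10.1.0.4", "10.1.0.6"),  # relay D' -> relay F'
    "4009": ("609", "10.1.0.5", "10.1.0.6"),  # relay E' -> relay F'
    "4010": ("610", "10.1.0.6", "10.1.0.5"),  # relay F' -> relay E'
    "4011": ("611", "10.1.0.5", "10.1.0.2"),  # relay E' -> edge B'
    "4012": ("612", "10.1.0.6", "10.1.0.2"),  # relay F' -> edge B'
}

def segment_label(segment_id):
    """Return the segment label associated with a segment identifier."""
    return SEGMENTS[segment_id][0]
```

A table of this shape makes the per-module configuration records below easy to derive, since each ingress or egress segment identifier maps to its label and its start-node and end-node IP addresses.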
In this case, configuration information of the edge A′, namely, the first configuration information, may also be considered as configuration information of the rep301, and may include an egress segment identifier "4001" and an egress segment identifier "4002". Optionally, the first configuration information may further include: an egress segment IP address 10.1.0.1 (an IP address of a start node) and an egress segment IP address 10.1.0.3 (an IP address of an end node) that correspond to the egress segment identifier "4001", and the egress segment IP address 10.1.0.1 (the IP address of the start node) and an egress segment IP address 10.1.0.4 (an IP address of an end node) that correspond to the egress segment identifier "4002". In an embodiment, the first configuration information may include or further include: the segment label "601" corresponding to the egress segment identifier "4001" and the segment label "602" corresponding to the egress segment identifier "4002". Configuration information of the relay C′, namely, the second configuration information, includes configuration information of the rep302, configuration information of the elm303, and configuration information of the rep304. The configuration information of the rep302includes: an ingress segment identifier "4001", an egress segment identifier "4003", and an egress segment identifier "5001". 
Optionally, the configuration information of the rep302may further include: the ingress segment IP address 10.1.0.1 (the IP address of the start node) and the ingress segment IP address 10.1.0.3 (the IP address of the end node) that correspond to the ingress segment identifier “4001”, the egress segment IP address 10.1.0.3 (an IP address of a start node) and the egress segment IP address 10.1.0.4 (the IP address of the end node) that correspond to the egress segment identifier “4003”, and the egress segment IP address 10.1.0.3 (the IP address of the start node) and the egress segment IP address 10.1.0.3 (the IP address of the end node) that correspond to the egress segment identifier 5001. In an embodiment, the configuration information of the rep302may include or further include: the segment label “601” corresponding to the ingress segment identifier “4001”, the segment label “603” corresponding to the egress segment identifier “4003”, and the segment label “41” corresponding to the egress segment identifier “5001”. The configuration information of the elm303includes: an ingress segment identifier “5001”, an ingress segment identifier “4004”, and an egress segment identifier “5002”. Optionally, the configuration information of the elm303may further include: the ingress segment IP address 10.1.0.3 (the IP address of the start node) and the ingress segment IP address 10.1.0.3 (the IP address of the end node) that correspond to the ingress segment identifier “5001”, an ingress segment IP address 10.1.0.4 (an IP address of a start node) and the ingress segment IP address 10.1.0.3 (the IP address of the end node) that correspond to the ingress segment identifier “4004”, and the egress segment IP address 10.1.0.3 (the IP address of the start node) and the egress segment IP address 10.1.0.3 (the IP address of the end node) that correspond to the egress segment identifier “5002”. 
In an embodiment, the configuration information of the elm303may include or further include: the segment label “41” corresponding to the ingress segment identifier “5001”, the segment label “604” corresponding to the ingress segment identifier “4004”, and the segment label “42” corresponding to the egress segment identifier “5002”. The configuration information of the rep304includes: an ingress segment identifier “5002”, an egress segment identifier “4005”, and an egress segment identifier “4007”. Optionally, the configuration information of the rep304may further include: the ingress segment IP address 10.1.0.3 (the IP address of the start node) and the ingress segment IP address 10.1.0.3 (the IP address of the end node) that correspond to the ingress segment identifier “5002”, the egress segment IP address 10.1.0.3 (the IP address of the start node) and an egress segment IP address 10.1.0.6 (an IP address of an end node) that correspond to the egress segment identifier “4005”, and the egress segment IP address 10.1.0.3 (the IP address of the start node) and an egress segment IP address 10.1.0.5 (an IP address of an end node) that correspond to the egress segment identifier 4007. In an embodiment, the configuration information of the rep304may include or further include: the segment label “42” corresponding to the ingress segment identifier “5002”, the segment label “605” corresponding to the egress segment identifier “4005”, and the segment label “607” corresponding to the egress segment identifier “4007”. Configuration information of the relay D′, namely, the fourth configuration information, includes configuration information of the rep308, configuration information of the elm309, and configuration information of the rep310. The configuration information of the rep308includes: an ingress segment identifier “4002”, an egress segment identifier “4004”, and an egress segment identifier “5005”. 
Optionally, the configuration information of the rep308may further include: the ingress segment IP address 10.1.0.1 (the IP address of the start node) and the ingress segment IP address 10.1.0.4 (the IP address of the end node) that correspond to the ingress segment identifier “4002”, the egress segment IP address 10.1.0.4 (the IP address of the start node) and the egress segment IP address 10.1.0.3 (the IP address of the end node) that correspond to the egress segment identifier “4004”, and the egress segment IP address 10.1.0.4 (the IP address of the start node) and the egress segment IP address 10.1.0.4 (the IP address of the end node) that correspond to the egress segment identifier 5005. In an embodiment, the configuration information of the rep308may include or further include: the segment label “602” corresponding to the ingress segment identifier “4002”, the segment label “604” corresponding to the egress segment identifier “4004”, and the segment label “45” corresponding to the egress segment identifier “5005”. The configuration information of the elm309includes: an ingress segment identifier “5005”, an ingress segment identifier “4003”, and an egress segment identifier “5006”. Optionally, the configuration information of the elm309may further include: the ingress segment IP address 10.1.0.4 (the IP address of the start node) and the ingress segment IP address 10.1.0.4 (the IP address of the end node) that correspond to the ingress segment identifier “5005”, the ingress segment IP address 10.1.0.3 (the IP address of the start node) and the ingress segment IP address 10.1.0.4 (the IP address of the end node) that correspond to the ingress segment identifier “4003”, and the egress segment IP address 10.1.0.4 (the IP address of the start node) and the egress segment IP address 10.1.0.4 (the IP address of the end node) that correspond to the egress segment identifier 5006. 
In an embodiment, the configuration information of the elm309may include or further include: the segment label "45" corresponding to the ingress segment identifier "5005", the segment label "603" corresponding to the ingress segment identifier "4003", and the segment label "46" corresponding to the egress segment identifier "5006". The configuration information of the rep310includes: an ingress segment identifier "5006", an egress segment identifier "4006", and an egress segment identifier "4008". Optionally, the configuration information of the rep310may further include: the ingress segment IP address 10.1.0.4 (the IP address of the start node) and the ingress segment IP address 10.1.0.4 (the IP address of the end node) that correspond to the ingress segment identifier "5006", the egress segment IP address 10.1.0.4 (the IP address of the start node) and the egress segment IP address 10.1.0.5 (the IP address of the end node) that correspond to the egress segment identifier "4006", and the egress segment IP address 10.1.0.4 (the IP address of the start node) and the egress segment IP address 10.1.0.6 (the IP address of the end node) that correspond to the egress segment identifier "4008". In an embodiment, the configuration information of the rep310may include or further include: the segment label "46" corresponding to the ingress segment identifier "5006", the segment label "606" corresponding to the egress segment identifier "4006", and the segment label "608" corresponding to the egress segment identifier "4008". Configuration information of the relay E′, namely, the sixth configuration information, includes configuration information of the elm305, configuration information of the rep306, and configuration information of the elm307. The configuration information of the elm305includes: an ingress segment identifier "4006", an ingress segment identifier "4007", and an egress segment identifier "5003". 
Optionally, the configuration information of the elm305may further include: the ingress segment IP address 10.1.0.4 (the IP address of the start node) and the ingress segment IP address 10.1.0.5 (the IP address of the end node) that correspond to the ingress segment identifier “4006”, the ingress segment IP address 10.1.0.3 (the IP address of the start node) and the ingress segment IP address 10.1.0.5 (the IP address of the end node) that correspond to the ingress segment identifier “4007”, and the egress segment IP address 10.1.0.5 (an IP address of a start node) and the egress segment IP address 10.1.0.5 (the IP address of the end node) that correspond to the egress segment identifier 5003. In an embodiment, the configuration information of the elm305may include or further include: the segment label “606” corresponding to the ingress segment identifier “4006”, the segment label “607” corresponding to the ingress segment identifier “4007”, and the segment label “43” corresponding to the egress segment identifier “5003”. The configuration information of the rep306includes: an ingress segment identifier “5003”, an egress segment identifier “5004”, and an egress segment identifier “4009”. Optionally, the configuration information of the rep306may further include: the ingress segment IP address 10.1.0.5 (the IP address of the start node) and the ingress segment IP address 10.1.0.5 (the IP address of the end node) that correspond to the ingress segment identifier “5003”, the egress segment IP address 10.1.0.5 (the IP address of the start node) and the egress segment IP address 10.1.0.5 (the IP address of the end node) that correspond to the egress segment identifier “5004”, and the egress segment IP address 10.1.0.5 (the IP address of the start node) and the egress segment IP address 10.1.0.6 (the IP address of the end node) that correspond to the egress segment identifier 4009. 
In an embodiment, the configuration information of the rep306may include or further include: the segment label "43" corresponding to the ingress segment identifier "5003", the segment label "44" corresponding to the egress segment identifier "5004", and the segment label "609" corresponding to the egress segment identifier "4009". The configuration information of the elm307includes: an ingress segment identifier "4010", an ingress segment identifier "5004", and an egress segment identifier "4011". Optionally, the configuration information of the elm307may further include: the ingress segment IP address 10.1.0.6 (an IP address of a start node) and the ingress segment IP address 10.1.0.5 (the IP address of the end node) that correspond to the ingress segment identifier "4010", the ingress segment IP address 10.1.0.5 (the IP address of the start node) and the ingress segment IP address 10.1.0.5 (the IP address of the end node) that correspond to the ingress segment identifier "5004", and the egress segment IP address 10.1.0.5 (the IP address of the start node) and an egress segment IP address 10.1.0.2 (an IP address of an end node) that correspond to the egress segment identifier "4011". In an embodiment, the configuration information of the elm307may include or further include: the segment label "610" corresponding to the ingress segment identifier "4010", the segment label "44" corresponding to the ingress segment identifier "5004", and the segment label "611" corresponding to the egress segment identifier "4011". Configuration information of the relay F′, namely, the seventh configuration information, includes configuration information of the elm311, configuration information of the rep312, and configuration information of the elm313. The configuration information of the elm311includes: an ingress segment identifier "4005", an ingress segment identifier "4008", and an egress segment identifier "5007". 
Optionally, the configuration information of the elm311may further include: the ingress segment IP address 10.1.0.3 (the IP address of the start node) and the ingress segment IP address 10.1.0.6 (the IP address of the end node) that correspond to the ingress segment identifier “4005”, the ingress segment IP address 10.1.0.4 (the IP address of the start node) and the ingress segment IP address 10.1.0.6 (the IP address of the end node) that correspond to the ingress segment identifier “4008”, and the egress segment IP address 10.1.0.6 (the IP address of the start node) and the egress segment IP address 10.1.0.6 (the IP address of the end node) that correspond to the egress segment identifier 5007. In an embodiment, the configuration information of the elm311may include or further include: the segment label “605” corresponding to the ingress segment identifier “4005”, the segment label “608” corresponding to the ingress segment identifier “4008”, and the segment label “47” corresponding to the egress segment identifier “5007”. The configuration information of the rep312includes: an ingress segment identifier “5007”, an egress segment identifier “5008”, and an egress segment identifier “4010”. Optionally, the configuration information of the rep312may further include: the ingress segment IP address 10.1.0.6 (the IP address of the start node) and the ingress segment IP address 10.1.0.6 (the IP address of the end node) that correspond to the ingress segment identifier “5007”, the egress segment IP address 10.1.0.6 (the IP address of the start node) and the egress segment IP address 10.1.0.6 (the IP address of the end node) that correspond to the egress segment identifier “5008”, and the egress segment IP address 10.1.0.6 (the IP address of the start node) and the egress segment IP address 10.1.0.5 (the IP address of the end node) that correspond to the egress segment identifier 4010. 
In an embodiment, the configuration information of the rep312may include or further include: the segment label “47” corresponding to the ingress segment identifier “5007”, the segment label “48” corresponding to the egress segment identifier “5008”, and the segment label “610” corresponding to the egress segment identifier “4010”. The configuration information of the elm313includes: an ingress segment identifier “4009”, an ingress segment identifier “5008”, and an egress segment identifier “4012”. Optionally, the configuration information of the elm313may further include: the ingress segment IP address 10.1.0.5 (the IP address of the start node) and the ingress segment IP address 10.1.0.6 (the IP address of the end node) that correspond to the ingress segment identifier “4009”, the ingress segment IP address 10.1.0.6 (the IP address of the start node) and the ingress segment IP address 10.1.0.6 (the IP address of the end node) that correspond to the ingress segment identifier “5008”, and the egress segment IP address 10.1.0.6 (the IP address of the start node) and the egress segment IP address 10.1.0.2 (the IP address of the end node) that correspond to the egress segment identifier 4012. In an embodiment, the configuration information of the elm313may include or further include: the segment label “609” corresponding to the ingress segment identifier “4009”, the segment label “48” corresponding to the ingress segment identifier “5008”, and the segment label “612” corresponding to the egress segment identifier “4012”. Configuration information of the edge B′, namely, the third configuration information, may be considered as configuration information of the elm314. The configuration information of the elm314includes: an ingress segment identifier “4011” and an ingress segment identifier “4012”. 
Optionally, the configuration information of the elm 314 may further include: the ingress segment IP address 10.1.0.5 (the IP address of the start node) and the ingress segment IP address 10.1.0.2 (the IP address of the end node) that correspond to the ingress segment identifier "4011", and the ingress segment IP address 10.1.0.6 (the IP address of the start node) and the ingress segment IP address 10.1.0.2 (the IP address of the end node) that correspond to the ingress segment identifier "4012". In an embodiment, the configuration information of the elm 314 may include or further include: the segment label "611" corresponding to the ingress segment identifier "4011" and the segment label "612" corresponding to the ingress segment identifier "4012". In the foregoing embodiment, the second edge node has only the elimination module. In actual application, the second edge node may further have an ordering module, configured to order a plurality of received data packets. As described above, to distinguish between different data packets, different data packets may correspond to different data packet identifiers, and the data packet identifiers may be ordered in ascending order based on a generation order of the data packets. When receiving data packets with different data packet identifiers, the second edge node may order the data packets based on the data packet identifiers, so that the CE2 can receive the data packets that are sequentially ordered. Therefore, the third configuration information generated for the second edge node may be further used to order a plurality of data packets output by the elimination module of the second edge node. In addition, it should be noted that the replication module may perform a packet replication function, the elimination module may perform a packet elimination function, and the ordering module may perform a packet ordering function. 
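The packet ordering function described above can be sketched as follows. This is a hedged illustration, not the application's implementation: because data packet identifiers are assigned in ascending order of packet generation, the second edge node can restore the original order by sorting on the identifier. The function name and the (identifier, payload) tuple representation are assumptions.

```python
# Sketch of the ordering module's function: restore generation order by
# sorting received data packets on their ascending data packet identifiers.
def order_packets(received):
    """Return packets sorted by ascending data packet identifier."""
    return sorted(received, key=lambda pkt: pkt[0])

# Packets may arrive out of order over the two disjoint paths:
received = [(3, "c"), (1, "a"), (2, "b")]
assert order_packets(received) == [(1, "a"), (2, "b"), (3, "c")]
```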
FIG. 9 is a schematic structural diagram of a traffic flow processing apparatus according to an embodiment of this application. The traffic flow processing apparatus provided in this embodiment may be used in a controller. The traffic flow processing apparatus 500 specifically includes: a first configuration module 501, configured to generate and configure first configuration information for a first edge node, where the first configuration information is used to: replicate a data packet included in a received traffic flow; output, to a first relay node, a first data packet obtained through replication; and output, to a second relay node, a second data packet obtained through replication; a second configuration module 502, configured to generate and configure second configuration information for the first relay node, where the second configuration information is used to: replicate the first data packet; output, to the second relay node, a third data packet obtained through replication; and obtain and output a first received data packet between a fourth data packet obtained through replication and a fifth data packet from the second relay node; and a third configuration module 503, configured to generate and configure third configuration information for a second edge node, where the third configuration information is used to obtain a first received data packet between a data packet from the first relay node and a data packet from the second relay node, and the second relay node is not on a link from the first edge node to the second edge node through the first relay node. In this embodiment, protection and transmission of a traffic flow are implemented by using the first configuration module 501, the second configuration module 502, and the third configuration module 503. 
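The protection scheme that modules 501 to 503 configure can be sketched end to end: the first edge node replicates each data packet onto two disjoint paths, and the second edge node's elimination module outputs only the first received copy of each data packet identifier. The sketch below is a minimal illustrative model under assumed names; the (identifier, payload) packet representation is not from this application.

```python
# Hedged sketch of replicate-at-ingress / eliminate-at-egress protection:
# the first edge node emits two copies of each packet, and the second edge
# node forwards only the first received copy of each packet identifier.
class Replicator:
    def replicate(self, packet):
        """Return the two copies output toward the two relay nodes."""
        return packet, packet  # first data packet, second data packet

class Eliminator:
    def __init__(self):
        self._seen = set()

    def receive(self, packet):
        """Forward only the first received copy; eliminate duplicates."""
        pkt_id, payload = packet
        if pkt_id in self._seen:
            return None  # later copy of an already-forwarded packet
        self._seen.add(pkt_id)
        return payload

edge_a, edge_b = Replicator(), Eliminator()
copy1, copy2 = edge_a.replicate((1, "data"))
assert edge_b.receive(copy1) == "data"  # first received copy is output
assert edge_b.receive(copy2) is None    # duplicate copy is eliminated
```

Because either copy may arrive first, the receiving CE sees exactly one copy of each packet even if one of the two paths fails.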
Optionally, the apparatus 500 further includes: a fourth configuration module, configured to generate and configure fourth configuration information for the second relay node, where the fourth configuration information is used to: replicate the second data packet from the first edge node; output, to the first relay node, the fifth data packet obtained through replication; and obtain and output a first received data packet between a sixth data packet obtained through replication and the third data packet from the first relay node. Optionally, the first configuration information includes: a first egress segment identifier of a replication module of the first edge node and a second egress segment identifier of the replication module of the first edge node. The first egress segment identifier of the replication module of the first edge node is used to identify a segment from the first edge node to the first relay node. The second egress segment identifier of the replication module of the first edge node is used to identify a segment from the first edge node to the second relay node. Optionally, the second configuration information includes: an ingress segment identifier of a replication module of the first relay node, a first egress segment identifier of the replication module of the first relay node, and a second egress segment identifier of the replication module of the first relay node. The ingress segment identifier of the replication module of the first relay node is used to identify a segment from the first edge node to the first relay node. The first egress segment identifier of the replication module of the first relay node is used to identify a segment from the replication module of the first relay node to an elimination module of the first relay node. The second egress segment identifier of the replication module of the first relay node is used to identify a segment from the first relay node to the second relay node. 
The second configuration information further includes: a first ingress segment identifier of the elimination module of the first relay node, a second ingress segment identifier of the elimination module of the first relay node, and an egress segment identifier of the elimination module of the first relay node. The first ingress segment identifier of the elimination module of the first relay node is used to identify the segment from the replication module of the first relay node to the elimination module of the first relay node. The second ingress segment identifier of the elimination module of the first relay node is used to identify a segment from the second relay node to the first relay node. The egress segment identifier of the elimination module of the first relay node is used to identify a segment from the first relay node to the second edge node. Optionally, the third configuration information includes: a first ingress segment identifier of an elimination module of the second edge node and a second ingress segment identifier of the elimination module of the second edge node. The first ingress segment identifier of the elimination module of the second edge node is used to identify the segment from the first relay node to the second edge node. The second ingress segment identifier of the elimination module of the second edge node is used to identify a segment from the second relay node to the second edge node. Optionally, the first configuration information further includes: a first egress segment IP address of the replication module of the first edge node and a second egress segment IP address of the replication module of the first edge node. 
The first egress segment IP address of the replication module of the first edge node includes an IP address of the first edge node and an IP address of the first relay node, and the first egress segment IP address of the replication module of the first edge node corresponds to the segment from the first edge node to the first relay node. The second egress segment IP address of the replication module of the first edge node includes the IP address of the first edge node and an IP address of the second relay node, and the second egress segment IP address of the replication module of the first edge node corresponds to the segment from the first edge node to the second relay node. Optionally, the second configuration information further includes: an ingress segment IP address of the replication module of the first relay node, a first egress segment IP address of the replication module of the first relay node, and a second egress segment IP address of the replication module of the first relay node. The ingress segment IP address of the replication module of the first relay node includes an IP address of the first edge node and an IP address of the first relay node, and the ingress segment IP address of the replication module of the first relay node corresponds to the segment from the first edge node to the first relay node. The first egress segment IP address of the replication module of the first relay node includes the IP address of the first relay node, and the first egress segment IP address of the replication module of the first relay node corresponds to the segment from the replication module of the first relay node to the elimination module of the first relay node. 
The second egress segment IP address of the replication module of the first relay node includes the IP address of the first relay node and an IP address of the second relay node, and the second egress segment IP address of the replication module of the first relay node corresponds to the segment from the first relay node to the second relay node. The second configuration information further includes: a first ingress segment IP address of the elimination module of the first relay node, a second ingress segment IP address of the elimination module of the first relay node, and an egress segment IP address of the elimination module of the first relay node. The first ingress segment IP address of the elimination module of the first relay node includes the IP address of the first relay node, and the first ingress segment IP address of the elimination module of the first relay node corresponds to the segment from the replication module of the first relay node to the elimination module of the first relay node. The second ingress segment IP address of the elimination module of the first relay node includes the IP address of the second relay node and the IP address of the first relay node, and the second ingress segment IP address of the elimination module of the first relay node corresponds to the segment from the second relay node to the first relay node. The egress segment IP address of the elimination module of the first relay node includes the IP address of the first relay node and an IP address of the second edge node, and the egress segment IP address of the elimination module of the first relay node corresponds to the segment from the first relay node to the second edge node. Optionally, the third configuration information further includes: a first ingress segment IP address of the elimination module of the second edge node and a second ingress segment IP address of the elimination module of the second edge node. 
The first ingress segment IP address of the elimination module of the second edge node includes an IP address of the first relay node and an IP address of the second edge node, and the first ingress segment IP address of the elimination module of the second edge node corresponds to the segment from the first relay node to the second edge node. The second ingress segment IP address of the elimination module of the second edge node includes an IP address of the second relay node and an IP address of the second edge node, and the second ingress segment IP address of the elimination module of the second edge node corresponds to the segment from the second relay node to the second edge node. Optionally, the first configuration information further includes: a first egress segment label of the replication module of the first edge node and a second egress segment label of the replication module of the first edge node. The first egress segment label of the first edge node corresponds to the segment from the first edge node to the first relay node. The second egress segment label of the first edge node corresponds to the segment from the first edge node to the second relay node. Optionally, the second configuration information further includes: an ingress segment label of the replication module of the first relay node, a first egress segment label of the replication module of the first relay node, and a second egress segment label of the replication module of the first relay node. The ingress segment label of the replication module of the first relay node corresponds to the segment from the first edge node to the first relay node. The first egress segment label of the replication module of the first relay node corresponds to the segment from the replication module of the first relay node to the elimination module of the first relay node. 
The second egress segment label of the replication module of the first relay node corresponds to the segment from the first relay node to the second relay node. The second configuration information further includes: a first ingress segment label of the elimination module of the first relay node, a second ingress segment label of the elimination module of the first relay node, and an egress segment label of the elimination module of the first relay node. The first ingress segment label of the elimination module of the first relay node corresponds to the segment from the replication module of the first relay node to the elimination module of the first relay node. The second ingress segment label of the elimination module of the first relay node corresponds to the segment from the second relay node to the first relay node. The egress segment label of the elimination module of the first relay node corresponds to the segment from the first relay node to the second edge node. Optionally, the third configuration information further includes: a first ingress segment label of the elimination module of the second edge node and a second ingress segment label of the elimination module of the second edge node. The first ingress segment label of the elimination module of the second edge node corresponds to the segment from the first relay node to the second edge node. The second ingress segment label of the elimination module of the second edge node corresponds to the segment from the second relay node to the second edge node. Optionally, the fourth configuration information further includes: an ingress segment identifier of a replication module of the second relay node, a first egress segment identifier of the replication module of the second relay node, and a second egress segment identifier of the replication module of the second relay node. 
The ingress segment identifier of the replication module of the second relay node is used to identify a segment from the first edge node to the second relay node. The first egress segment identifier of the replication module of the second relay node is used to identify a segment from the replication module of the second relay node to an elimination module of the second relay node. The second egress segment identifier of the replication module of the second relay node is used to identify a segment from the second relay node to the first relay node. The fourth configuration information further includes: a first ingress segment identifier of the elimination module of the second relay node, a second ingress segment identifier of the elimination module of the second relay node, and an egress segment identifier of the elimination module of the second relay node. The first ingress segment identifier of the elimination module of the second relay node is used to identify the segment from the replication module of the second relay node to the elimination module of the second relay node. The second ingress segment identifier of the elimination module of the second relay node is used to identify a segment from the first relay node to the second relay node. The egress segment identifier of the elimination module of the second relay node is used to identify a segment from the second relay node to the second edge node. Optionally, the fourth configuration information further includes: an ingress segment IP address of the replication module of the second relay node, a first egress segment IP address of the replication module of the second relay node, and a second egress segment IP address of the replication module of the second relay node. 
The ingress segment IP address of the replication module of the second relay node includes an IP address of the first edge node and an IP address of the second relay node, and the ingress segment IP address of the replication module of the second relay node corresponds to the segment from the first edge node to the second relay node. The first egress segment IP address of the replication module of the second relay node includes the IP address of the second relay node, and the first egress segment IP address of the replication module of the second relay node corresponds to the segment from the replication module of the second relay node to the elimination module of the second relay node. The second egress segment IP address of the replication module of the second relay node includes the IP address of the second relay node and an IP address of the first relay node, and the second egress segment IP address of the replication module of the second relay node corresponds to the segment from the second relay node to the first relay node. The fourth configuration information further includes: a first ingress segment IP address of the elimination module of the second relay node, a second ingress segment IP address of the elimination module of the second relay node, and an egress segment IP address of the elimination module of the second relay node. The first ingress segment IP address of the elimination module of the second relay node includes the IP address of the second relay node, and the first ingress segment IP address of the elimination module of the second relay node corresponds to the segment from the replication module of the second relay node to the elimination module of the second relay node. 
The second ingress segment IP address of the elimination module of the second relay node includes the IP address of the second relay node, and the second ingress segment IP address of the elimination module of the second relay node corresponds to the segment from the first relay node to the second relay node. The egress segment IP address of the elimination module of the second relay node includes the IP address of the second relay node and the IP address of the second edge node, and the egress segment IP address of the elimination module of the second relay node corresponds to the segment from the second relay node to the second edge node. Optionally, the fourth configuration information further includes: an ingress segment label of the replication module of the second relay node, a first egress segment label of the replication module of the second relay node, and a second egress segment label of the replication module of the second relay node. The ingress segment label of the replication module of the second relay node corresponds to the segment from the first edge node to the second relay node. The first egress segment label of the replication module of the second relay node corresponds to the segment from the replication module of the second relay node to the elimination module of the second relay node. The second egress segment label of the replication module of the second relay node corresponds to the segment from the second relay node to the first relay node. The fourth configuration information further includes: a first ingress segment label of the elimination module of the second relay node, a second ingress segment label of the elimination module of the second relay node, and an egress segment label of the elimination module of the second relay node. 
The first ingress segment label of the elimination module of the second relay node corresponds to the segment from the replication module of the second relay node to the elimination module of the second relay node. The second ingress segment label of the elimination module of the second relay node corresponds to the segment from the first relay node to the second relay node. The egress segment label of the elimination module of the second relay node corresponds to the segment from the second relay node to the second edge node. Optionally, the apparatus further includes: a fifth configuration module, configured to generate and configure fifth configuration information for the second relay node. The fifth configuration information is used to: obtain and replicate a first received data packet between the second data packet from the first edge node and the third data packet from the first relay node; output, to the first relay node, the fifth data packet obtained through replication; and output, to the second edge node, a sixth data packet obtained through replication. Optionally, the second configuration information is further used to: replicate the obtained data packet that is first received; output, to a third relay node, a seventh data packet obtained through replication; and output, to a fourth relay node, an eighth data packet obtained through replication. The apparatus further includes a sixth configuration module, configured to generate and configure sixth configuration information for the third relay node. 
The sixth configuration information is used to: obtain and replicate a first received data packet between the seventh data packet from the first relay node and a data packet from the second relay node; output, to the fourth relay node, a ninth data packet obtained through replication; obtain a first received data packet between a tenth data packet obtained through replication and a data packet from the fourth relay node; and send the first received data packet to the second edge node. The third relay node is a node on the link between the first relay node and the second edge node, and the fourth relay node is not on a link from the first edge node to the second edge node through the third relay node. Optionally, the apparatus further includes a fourth configuration module, configured to generate and configure fourth configuration information for the second relay node. The fourth configuration information is used to: replicate the second data packet from the first edge node; output, to the first relay node, the fifth data packet obtained through replication; obtain and replicate a first received data packet between a sixth data packet obtained through replication and the third data packet from the first relay node; send, to the third relay node, an eleventh data packet obtained through replication; and send, to the fourth relay node, a twelfth data packet obtained through replication. Optionally, the apparatus further includes a seventh configuration module, configured to generate and configure seventh configuration information for the fourth relay node. 
The seventh configuration information is used to: obtain and replicate a first received data packet between the eighth data packet from the first relay node and the twelfth data packet from the second relay node; send, to the third relay node, a thirteenth data packet obtained through replication; obtain a first received data packet between a fourteenth data packet obtained through replication and the ninth data packet from the third relay node; and send the first received data packet to the second edge node. Optionally, the second configuration information is further used to: send, to a fourth relay node, an eleventh data packet obtained by replicating the first data packet; and output, to a third relay node, the obtained data packet that is first received. The apparatus further includes a sixth configuration module, configured to generate and configure sixth configuration information for the third relay node. The sixth configuration information is used to: replicate a data packet from the first relay node; send, to the fourth relay node, a twelfth data packet obtained through replication; obtain a first received data packet between the data packet from the second relay node, a data packet from the fourth relay node, and a thirteenth data packet obtained through replication; and send the first received data packet to the second edge node. The third relay node is a node on the link between the first relay node and the second edge node, and the fourth relay node is not on a link from the first edge node to the second edge node through the third relay node. As shown in FIG. 10, an embodiment of this application provides a controller 600. The controller 600 may include a memory 601 and a processor 602. The memory 601 is configured to store an instruction. The processor 602 is configured to execute the instruction in the memory 601, to perform the foregoing traffic flow processing method. The memory 601 and the processor 602 are connected to each other through a bus 603. 
The bus 603 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus 603 may be classified into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is used to represent the bus in FIG. 10, but this does not mean that there is only one bus or only one type of bus. The memory 601 may be a random access memory (RAM), a flash memory, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a register, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known to a person skilled in the art. The processor 602 may be, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The processor may implement or execute various example logical blocks, modules, and circuits described with reference to content disclosed in this application. Alternatively, the processor may be a combination of processors implementing a computing function, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. An embodiment of this application further provides a computer-readable storage medium, including one or more instructions. When the one or more instructions are run on a computer, the computer is enabled to perform the foregoing traffic flow processing method applied to a controller. In the specification, claims, and accompanying drawings of this application, the terms "first", "second", "third", "fourth", and the like (if existent) are intended to distinguish between similar objects but do not necessarily indicate a specific order or sequence. 
It should be understood that the data used in such a way are interchangeable in proper circumstances so that the embodiments described herein can be implemented in other orders than the order illustrated or described herein. Moreover, the terms “include”, “have” and any other variants mean to cover the non-exclusive inclusion, for example, a process, method, system, product, or device that includes a list of steps or units is not necessarily limited to those steps or units, but may include other steps or units not expressly listed or inherent to such a process, method, system, product, or device. It may be clearly understood by a person skilled in the art that, for convenient and brief description, for a detailed working process of the foregoing system, apparatus, and unit, refer to a corresponding process in the foregoing method embodiments, and details are not repeated herein. In the several embodiments provided in this application, it should be understood that, the disclosed system, apparatus, and method may be implemented in another manner. For example, the described apparatus embodiments are merely examples. For example, the division into units is merely logical function division and may be other division in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one location, or may be distributed on a plurality of network units. 
Some or all of the units may be obtained depending on actual requirements to achieve the objectives of the solutions in the embodiments. In addition, functional units in the embodiments of this application may be integrated into one processing unit, or each of the units may exist alone physically, or at least two units are integrated into one unit. The integrated unit may be implemented in a form of hardware, or may be implemented in a form of a software module unit. When the integrated unit is implemented in the form of a software module unit and sold or used as an independent product, the integrated unit may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the conventional technology, or all or some of the technical solutions may be implemented in the form of a software product. The computer software product is stored in a storage medium and includes instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, and an optical disc. A person skilled in the art should be aware that in the one or more examples, functions described in the present invention may be implemented by hardware, software, firmware, or any combination thereof. When the functions described in the embodiments are implemented by software, the foregoing functions may be stored in a computer-readable medium or transmitted as one or more instructions or code in a computer-readable medium. 
The computer-readable medium includes a computer storage medium and a communications medium, where the communications medium includes any medium that facilitates transmission of a computer program from one place to another. The storage medium may be any available medium accessible to a general-purpose or a dedicated computer. In the foregoing specific implementations, the objectives, technical solutions, and benefits of the present invention are further described in detail. It should be understood that the foregoing descriptions are merely example implementations of the present invention. In conclusion, the foregoing embodiments are merely intended to describe the technical solutions of this application, but not to limit this application. Although this application is described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features in the technical solutions may be equivalently replaced. These modifications or replacements do not make the essence of the corresponding technical solutions fall outside the scope of the technical solutions of the embodiments of this application.
11943149
DETAILED DESCRIPTION OF SPECIFIC EXAMPLES In many cases when the arbiter is active multiple options may have equal priorities. When this is the case it is important that the arbiter select evenly from the equal options. If an arbiter has multiple route options, but has a bias to select one or more particular paths, then fabric traffic will experience greater congestion resulting in longer transaction latencies to cross the fabric. If an arbiter is biased toward selecting equal age transactions from one source over others, then fabric latencies will become unequal based on the source of the transactions. Implementations of the disclosed technology provide a tree-based arbiter that provides fair, randomized tie-breaking when candidates under arbitration have equal prioritizations. FIG.1illustrates an example device including an arbitration system providing randomized and balanced tie-breaking. In various implementations, the illustrated device may be implemented as hardware, as software stored on a non-transitory computer readable medium, or as a combination thereof. The device determines an overall winning candidate124for resource allocation from amongst a plurality of competing candidates101. The illustrated device may be implemented in various manners to provide arbitration for various resource types. For example, the illustrated device may be a route arbiter. In this example, a fabric may have multiple allowed routes for a packet to take to reach its destination and the candidates101may comprise those allowed routes for a packet to be transmitted. As another example, the illustrated device may be a packet arbiter. For instance, the device may be an arbiter to select amongst a plurality of packets101for transmission on a particular egress port of a network device. The device includes a priority weighting system107. The priority weighting system107assigns a corresponding priority weight109,111,113,115,117to each candidate102,103,104,105,106, respectively. 
The priority weighting system107assigns the priority weights based on a metric determined according to an arbitration technique. For example, a packet arbiter may apply age-based arbitration, where the priority weighting system107may assign priority values to the candidates102-106based on their age such that older packets receive higher priority values. For example, the priority weighting system107may assign the priority weights based on the values of time-to-live (TTL) packet fields. As another example, the priority weighting system107may assign priority values to candidate routes based on total route hop counts, route congestion, time since a particular route or egress port has been used, or other factors. The device further includes a random weighting system108. The random weighting system108assigns an uncorrelated random weight110,112,114,116,118to each candidate102,103,104,105,106, respectively. In some implementations, the random weighting system108may use a pseudorandom/statistically random number generator to generate the random weights. For example, the random weighting system108may comprise a linear feedback shift register (LFSR) configured to generate a pseudorandom bit stream. In other implementations, the random weighting system108may use a true random number generator to generate the random weights. For example, the random weighting system108may comprise a hardware random number generator based on the binary sampling of noise signals, such as thermal noise. In some implementations, the random weighting system108may assign all of its random weights according to a uniform distribution. For example, if the random weights are each 4 bits long (i.e., decimals between 0 and 15), the weights may be randomly drawn uniformly between 0000 and 1111. In other implementations, the random weighting system108uses different distributions for different candidates. 
This may enable the arbitration system to preferentially select preferred candidates while retaining a balanced distribution of overall winning candidates. For example, the random weighting system108may use a first distribution for candidate routes having a first bandwidth class and a second distribution for candidate routes having a second bandwidth class. As an example, with 4 bit random weights and two routes, with a first route having twice the bandwidth of the second route, the random weighting system108might draw randomly from between 0000 and 1111 for the first route and from between 0000 and 0111 for the second route. (For example, the random weighting system may perform a single bit right shift for the random weight assigned to the second route.) In some implementations, the priority weights109,111,113,115,117and the random weights110,112,114,116,118are combined into a single overall weight for each candidate. For example, the priority weight may be the most significant digits of the overall weight and the random weight may be the least significant digits of the overall weight. For example, the overall weights may be formed by concatenating the outputs of the random weighting system108to the outputs of the priority weighting system107. In one example, the priority weights109,111,113,115,117and the random weights110,112,114,116,118are binary numbers and the length of the random weights is at least log2(x) where x is the total number of leaf nodes of the tree119. For example, in a system with up to 16 possible candidates, the random weights110,112,114,116,118are at least 4 bits long. The device further comprises a tree119of selectors120-123. Each selector receives a plurality of inputs corresponding to candidates to be evaluated and outputs a winning candidate of its evaluated candidates. The lowest level selector123outputs the overall winning candidate124. Each selector compares the input weights of the candidates it evaluates. 
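The weight combination just described — priority bits as the most significant digits, random bits as the least significant — can be sketched as follows (a minimal illustration; the function names are ours, not the device's, and the bit widths follow the 16-candidate example):

```python
import math

def combined_weight(priority: int, rand: int, rand_bits: int) -> int:
    # The priority weight occupies the high bits and the random weight the
    # low bits, so priority always dominates a comparison and the random
    # bits only ever break ties between equal priorities.
    return (priority << rand_bits) | rand

def min_rand_bits(num_leaves: int) -> int:
    # Per the text, the random weight is at least log2(x) bits wide,
    # where x is the total number of leaf nodes of the tree.
    return math.ceil(math.log2(num_leaves))

bits = min_rand_bits(16)                                   # 16 candidates -> 4 bits
w = combined_weight(priority=0b11111, rand=0b1010, rand_bits=bits)
```

A candidate with any higher priority always outranks a lower-priority candidate regardless of its random draw, e.g. `combined_weight(2, 0, 4) > combined_weight(1, 15, 4)`.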
If a single evaluated candidate has a greatest priority weight109,111,113,115,117, then the selector selects that evaluated candidate as its winning candidate. If at least two evaluated candidates are tied for the greatest priority weight, the selector selects the evaluated candidate of the tied candidates that has a greatest random weight as its winning candidate. In various implementations, different techniques may be employed if multiple evaluated candidates are tied for greatest priority and random weight. For example, each selector may randomly select one of the tied candidates as its winning candidate. In the illustrated example, each candidate102-106has a combined weight comprising its priority weight109,111,113,115,117as the most significant digits and its random weight110,112,114,116,118as the least significant digits. The tree119is a binary tree with each selector120-123comprising a digital comparator. The use of the random weights that carry through the entire tree provides a fair selection of equally prioritized candidates regardless of how the candidates are spread amongst the tree's119inputs. For example, consider the case where the three candidates C1, C3, and C5are routinely tied for the greatest priority weight (e.g., 11111b in a 5 bit weighted arbitration system). In a system where each comparator simply randomly selected a winner between two tied candidates, C1and C3would each win 25% of the time while C5would win 50% of the time. This is because C1would win 50% of the time at stage122and C3would win 50% of the time at stage122. The winner of stage122would then be evaluated at stage123against C5. At stage123the winner of stage122would win 50% of the time (i.e. C1wins 25% of the time and C3wins 25% of the time) and C5would win 50% of the time. Continuing the example, in the illustrated device, each of the candidates102-106is assigned a random weight at the input stage that is carried through the entire tree119. 
If C1, C3, and C5are all tied for the highest priority weight, the candidate assigned the highest random weight will eventually be the overall winner124. Accordingly, each of the three candidates will win approximately ⅓ of the time, where the total number of bits of the random weights determines how close to a completely fair selection occurs (i.e., how infrequently an independent random tie breaker occurs at each comparator stage). FIG.2illustrates an example device200as an implementation of a device according toFIG.1. Example device200is a network switch200including a fair weighted arbiter217for route selection. The switch200comprises a plurality of input ports201,202,203,204,205,206and a plurality of output ports208,209,210,211,212,213. In the illustrated example, the input ports and output ports are connected by a crossbar207. Other examples may use any of a variety of different switch architectures. A network controller214controls the coupling of the input ports to output ports to route a packet received on one of the input ports201,202,203,204,205,206to one of the output ports208,209,210,211,212,213. In this example, the switch200employs adaptive routing on received packets. When a packet is received via an input port, the controller214determines candidate egress ports for the packet corresponding to the packet's candidate routes. The controller214includes a priority weighting system215that includes a routing table that defines the potential routes for different destinations. The priority weighting system215provides a weight for each candidate route. The weight may be based on static conditions, such as number of network hops on the candidate route, the total bandwidth on the route, whether the route is dedicated for packets having a particular trait (such as packet type, traffic class), or other static factors. 
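The fairness argument above can be checked with a small simulation (an illustration only; the candidate names follow the C1/C3/C5 example, while the 8-bit weights, seed, and trial count are our choices):

```python
import random

rng = random.Random(1)
TRIALS = 30000

def per_stage_coin_flip():
    # Biased scheme from the text: stage 122 flips between C1 and C3,
    # then stage 123 flips between that winner and C5.
    w = rng.choice(["C1", "C3"])
    return rng.choice([w, "C5"])

def input_stage_random_weight():
    # Fair scheme: each tied candidate draws one random weight at the
    # inputs that carries through the whole tree; the largest wins
    # (re-draw on the rare exact tie).
    while True:
        weights = {c: rng.getrandbits(8) for c in ("C1", "C3", "C5")}
        best = max(weights.values())
        winners = [c for c, v in weights.items() if v == best]
        if len(winners) == 1:
            return winners[0]

def tally(select):
    counts = {"C1": 0, "C3": 0, "C5": 0}
    for _ in range(TRIALS):
        counts[select()] += 1
    return {c: n / TRIALS for c, n in counts.items()}

biased = tally(per_stage_coin_flip)       # roughly 25% / 25% / 50%
fair = tally(input_stage_random_weight)   # roughly one third each
```

With per-comparator coin flips, C5's share converges near 50% exactly as the text computes; with input-stage random weights, all three shares converge near ⅓.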
The weight may also be based on dynamic conditions, such as network congestion on each candidate route, or the historical use of the route (for example, to allow load balancing over routes). The controller214further comprises a random weighting system216to generate a random weight for each of the candidate routes. For example, the source216may concatenate a string of random bits to the priority weight bits generated by the weighting system215. The resulting combined weights are then input to the arbiter217for determining an overall winning route. In one example, the arbiter217comprises an input218,219,220,221,222,223for each of the egress ports208,209,210,211,212,213. The weights for the ports corresponding to the candidate routes are input to the corresponding input. For example, if a received packet has 3 potential routes over egress ports208,210,213, then the weights for those routes are input on inputs218,220,223, respectively. The other inputs219,221,222may have null inputs. In another example, the inputs218-223are not tied to particular egress ports and controller214inputs the candidate weights in an arbitrary order. For example, if a received packet has 3 potential routes over egress ports208,210,213, then the weights for those routes may be input on inputs218,219,220, respectively, and the remaining inputs are null. The winning route is output on output224. In response, the controller214routes the packet from the received ingress port to the winning egress port using the crossbar207. For example, the controller214may place the packet in an output queue for the winning egress port for future transmission. FIG.3illustrates an example device300as an implementation of a device described according toFIG.1. In this example, the network device300includes a controller311to select a packet from one of a plurality of queues301,302,303,304,305,306,307,308,309,310for transmission over an egress port317. 
For example, the plurality of queues may comprise queues corresponding to different packet traffic classes. The number of potential candidate packets for transmission over the egress port317depends on the number of queues and how packets may be drawn from queues. As an example, if packets may be selected only from the head of a queue, then each queue with a head of queue packet has a candidate for transmission over the egress port317. As another example, if a packet may be selected from anywhere in a queue, then each packet in any queue is a candidate for transmission over the egress port317. In some examples, the potential queues may be limited to a subset of the queues301-310. For example, the network may employ credit-based flow control with different amounts of credits for different queues. In this example, the queues providing potential packets may be limited to those queues having sufficient credits. The controller311includes a weighting system312to assign a priority weight to each candidate packet. The priority weights may be assigned based on various factors. For example, in an age-based arbitration system, the candidate packets may be weighted based on the value of their time-to-live (TTL) fields (where a packet with less time to live is given a higher priority than a packet with more time to live). As a further example, the age-based weights may be combined with priority values for different queues301-310to generate a combined prioritization. In other implementations employing different arbitration techniques, further packet characteristics may be employed in determining the priority weights. For example, such additional characteristics may include: historical information such as when the packet's queue was last selected, when the packet's source was last selected, or when the packet's destination was last selected; or congestion information, such as congestion along each candidate packet's potential route to its destination. 
The switch controller311further comprises a random weighting system313to generate a random weight for each of the candidate packets. For example, the source313may concatenate a string of random bits to the priority weight bits generated by the weighting system312. The resulting combined weights are then input to the tree314for determining an overall winning packet for transmission on the egress port317. In the illustrated example, the tree314includes one input315for each queue301-310because a packet may only be retrieved from the head of queue. In examples where other packets may be retrieved from a queue, the tree314may include an input315for the maximum possible number of packets that may be evaluated in a given candidate selection cycle. FIG.4illustrates an example method of operation of a device implemented as described with respect toFIG.1. The method includes block401. Block401includes assigning each of a plurality of arbitration candidates a corresponding priority weight. As described above, the candidates may be any resource selected through a weighted arbitration technique. For example, the candidates may be candidate packets for transmission or candidate routes for transmitting a packet. The priority weights may be assigned based on a prioritization of the candidates under evaluation according to a variety of weighted arbitration schemes as discussed above with regards toFIGS.1-3. The method further includes block402. Block402includes assigning each of the plurality of candidates a corresponding random weight. For example, the random weights may be generated and assigned as described above with respect toFIGS.1-3. In various implementations, the random weights may be generated using a deterministic random number generator such as an LFSR or a non-deterministic random number generator (for example, based on sampling of some noise signal such as thermal noise). 
In some implementations, the random weights may be assigned using different random number distributions for different candidates. The method further includes block403. Block403includes, for each candidate of the plurality, combining the candidate's corresponding priority weight and the candidate's corresponding random weight to generate a corresponding overall weight for the candidate. For example, block403may be performed by concatenating the priority weight and the random weight such that the priority weight occupies the most significant digits of the overall weight and the random weight occupies the least significant digits of the overall weight. The method further includes block404. Block404includes determining an overall winning candidate from the plurality of candidates. As described above, the overall winning candidate may be determined using a tree of selectors, such as a binary tree of comparators. Each comparator in the binary tree compares a pair of candidates and selects a winning candidate based on a comparison of the overall weights of each of the pair of candidates. The comparison is performed such that a candidate having the greatest priority weight is the winning candidate. If each of the pair has equal priorities, then the candidate with the greatest random weight is selected as the winning candidate. Different implementations may handle ties of both the priority and random weights differently. For example, each comparator may select the winning candidate randomly if the overall weights of each of the pair of candidates are equal. FIG.5illustrates an example non-transitory computer readable medium500storing a design for a hardware implementation for a device as described with respect toFIGS.1-3. For example, the medium500may comprise computer storage or memory, or a portable computer readable storage. The design501may be for a portion or block of an integrated circuit (IC), an IC in its entirety, or a device comprising multiple ICs. 
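The LFSR mentioned as a deterministic random source can be sketched as follows (a textbook 16-bit Fibonacci LFSR with a common maximal-length feedback polynomial, x^16 + x^14 + x^13 + x^11 + 1; the tap positions and register width are illustrative assumptions, not values given in the text):

```python
def lfsr16(seed: int):
    # Generator yielding one pseudorandom bit per shift of the register.
    state = seed & 0xFFFF
    while True:
        # XOR the tap bits to form the feedback bit, then shift it in.
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        yield state >> 15  # emit the newly inserted bit

def random_weight(bitstream, n_bits: int) -> int:
    # Assemble an n-bit random weight from the pseudorandom bit stream,
    # as block 402 assigns a random weight to each candidate.
    return sum(next(bitstream) << i for i in range(n_bits))
```

Being deterministic, the same seed reproduces the same stream, which is why a hardware design can cheaply generate per-candidate weights this way; a true random source (e.g., sampled thermal noise) would replace `lfsr16` without changing `random_weight`.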
The design501may be stored in various formats. For example, the design501may comprise a synthesizable register transfer level (RTL) block design, gate-level netlist, or a transistor layout. The overall designs may include designs for any of the implementations discussed above. In one implementation, the design501for the device includes a design502for a priority weighting system to assign each of a plurality of candidates a corresponding priority weight. For example, the design501may be for a network device and the candidates may be for network device resources that are allocated based on a weighted arbitration scheme, such as packets to be selected for transmission or routes to be used for a packet. In this implementation, the design501includes a design503for a random weighting system to assign each of the plurality of candidates a corresponding random weight. For example, the random weighting system may be designed to assign a random weight to the candidates by concatenating random bits to each candidate's priority weight such that each corresponding priority weight forms the most significant digits and each corresponding random weight forms the least significant digits of a corresponding overall weight assigned to each of the plurality of candidates. The design501further includes a design504for an arbitration system to select an overall winning candidate, the arbitration system comprising a tree of selectors. Each selector evaluates a plurality of candidates and selects a winning candidate based on the candidate weights. The evaluated candidates may be winning candidates output by earlier selectors in the tree, original inputs to the tree, or a combination thereof, depending on the selector's location in the tree. As discussed above, if a single evaluated candidate has the greatest priority weight, then the selector selects the single evaluated candidate as the winning candidate. 
If more than one of the candidates are tied for the greatest priority weight, then the selector selects the candidate having the greatest random weight as the winning candidate. In one example, the selectors are binary comparators that compare the concatenated overall weights of the candidates and select the evaluated candidate having the greatest overall weight. In the foregoing description, numerous details are set forth to provide an understanding of the subject disclosed herein. However, implementations may be practiced without some or all of these details. Other implementations may include modifications and variations from the details discussed above. It is intended that the appended claims cover such modifications and variations.
11943150
DESCRIPTION OF EXAMPLE EMBODIMENTS Overview This disclosure describes various technologies for tracking the compute capacity of a scalable application service platform to perform dynamic bandwidth allocation for data flows associated with applications hosted by the service platform. By way of example, and not limitation, a method according to the various technologies presented herein may include allocating a first amount of bandwidth of a physical underlay of a network for data flows associated with an application. The method may also include receiving, at a network controller associated with the network and from a scalable application service hosting the application, an indication of an amount of computing resources of the scalable application service that are allocated to host the application. Based at least in part on the indication, a second amount of bandwidth of the physical underlay to allocate for the data flows may be determined. In this way, the method may include allocating the second amount of bandwidth of the physical underlay of the network for the data flows associated with the application. Additionally, the techniques described herein may be performed as a method and/or by a system having non-transitory computer-readable media storing computer-executable instructions that, when executed by one or more processors, perform the techniques described above. Example Embodiments As discussed above, in typical overlay network deployments that provide connectivity to applications (e.g., SD-WAN), bandwidth in the underlay is usually over-provisioned and/or dynamically billed since bandwidth requirements of different applications might unpredictably change. Generally, underlay connectivity providers need to allocate enough bandwidth for these applications and constantly monitor actual bandwidth consumption to ensure adequate provisioning and billing, resulting in the provisioning and de-provisioning of bandwidth over time. 
On the other hand, overlay network controllers need not only make sure that the necessary bandwidth is actually provisioned in the underlay, but also that the application requirements in terms of bandwidth are actually met. Today, these operations are usually based on estimations by human operators of both the overlay and the underlay networks, observed behavior in the past, predictions of the future, and the like, all combined with a fair amount of guesswork. Additionally, in some cases, to compensate for required over-provisioning of bandwidth on the underlay, overlay operators may opt-in for a dynamic billing model, where the underlay connectivity provider may only bill for the bandwidth actually consumed by customers. This, in turn, forces the underlay provider to appropriately instrument their network to reliably measure for actual bandwidth usage and, equally important, to provide verifiable proof to the overlay operator. While meeting application bandwidth requirements has been a long-standing problem, it has become even more relevant with modern applications. As applications moved from bare metal to virtual machines (VMs), and from there to the cloud, the elastic vision of compute resources has finally become a reality. In other words, as load on an application increases, new compute resources can be allocated for that specific application. Furthermore, with the move to containers and scalable container-orchestration systems (e.g., Kubernetes, Amazon ECS, Docker Swarm, Apache Mesos, etc.), the elastic model of compute resources has reached another level. While VMs can request more resources, or be cloned to allow for dynamic application loads, the transition to containers has made automated application scaling the baseline for modern applications. For instance, the Kubernetes application replication model introduces clear and well-defined mechanisms to dynamically accommodate the compute requirements of auto-scalable applications. 
At their core, these mechanisms monitor computing resource utilization and/or other metrics of a current set of replicas for a given application to decide when an application needs to be scaled. Although different autoscaling approaches may be used by these container-orchestration systems, in many cases the extra resources and/or replicas are allocated according to pre-defined usage thresholds defined by the DevOps managing the application. Unfortunately, however, wide area networks providing customers access to applications hosted by these container-orchestration systems have yet to catch up to the elasticity that these systems enable. While using a software-defined/application programming interface (API)-driven underlay provider (e.g., PacketFabric, Megaport, Equinix Fabric, PCCW Global's ConsoleConnect, etc.) could offer flexible models to plan capacity on WAN connections, as of today they do not provide a way to automatically select and balance the provisioned network capacity according to the actual application load. Accordingly, this disclosure describes various improvements in technologies that, among other things, provides for tracking of compute capacity of a scalable application service platform (e.g., a container-orchestration system such as Kubernetes, Amazon ECS, Docker Swarm, Apache Mesos, etc.) to perform dynamic bandwidth allocation for data flows associated with applications hosted by the service platform. For instance, if API-driven underlays are available (e.g., PacketFabric), the techniques disclosed herein extend network controllers (e.g., vManage on Cisco Viptela SD-WAN) to offer to the application service an autoscaling network by tracking automated application scaling of a container-orchestration system (e.g., Kubernetes, etc.). 
Thus, rather than having to monitor the application's bandwidth utilization and guessing when extra bandwidth may be required, the network controller tracks the allocated computing resources and/or number of replicas of a certain application in a container-orchestration system cluster. The network controller may then tie the dynamic allocation of bandwidth of a software-defined/API-driven underlay (e.g., PacketFabric, Megaport, etc.) with a scalable application service's scalability mechanisms. Accordingly, the techniques of this disclosure realize an improvement in computing technology by more precisely allocating bandwidth of an underlay of a network when it is needed, and de-allocating bandwidth when it is not needed. This improves the efficiency of software-defined/API-driven underlays. Additionally, by performing the techniques herein as described, the scalable application service, including individual applications hosted by the service, is completely agnostic to the autoscaling of bandwidth in the network underlay. In this way, the application configuration/infrastructure is unmodified, and no change is required to the application or the scalable application service to perform the techniques described herein. By way of example and not limitation, a method according to the techniques disclosed herein may include allocating a first amount of bandwidth of a physical underlay of a network for data flows associated with an application. For instance, a computing device that is in whole or in part programmed as a network controller of the network may send a request to allocate the first amount of bandwidth to an application programming interface (API) associated with the physical underlay of the network. In turn, the API may cause the first amount of bandwidth of the physical underlay to be allocated for the data flows associated with the application. 
In this way, application traffic/data may flow between client devices and a scalable application service that is hosting the application via the data flows. In some instances, the physical underlay may include one or more networking devices and/or components, such as routers, switches, modems, cables, and the like. In some examples, the method may include receiving, from the scalable application service hosting the application, an indication of an amount of computing resources of the scalable application service that are allocated to host the application (e.g., a number of replicas of the application generated and/or hosted by the scalable application service, a compute capacity of the scalable application service, an amount of containers currently hosted by the scalable application service, an amount of computing resources allocated by the scalable application service, and the like). As used herein, a “scalable application service” means a container-orchestration system as described above, such as, for example, Kubernetes, Amazon ECS, Docker Swarm, Apache Mesos, and the like. Additionally, the scalable application service may be cloud-based, cloud-native, and/or datacenter-based. In some instances, the indication may be received by the network controller of the network. Additionally, or alternatively, the network controller may receive the indication based at least in part on polling an API associated with the scalable application service for the indication (e.g., by sending, to the API, a request for the API to provide the indication of the amount of computing resources that are allocated to host the application). Based at least in part on the indication of the amount of computing resources of the scalable application service that are allocated to host the application, the method may include determining that the first amount of bandwidth of the physical underlay is to be increased or decreased. 
In some examples, this may include determining a second amount of bandwidth of the physical underlay to allocate for the data flows in addition to the first amount of bandwidth. Additionally, or alternatively, this may include determining a second amount of bandwidth of the physical underlay to de-allocate for the data flows (e.g., by de-allocating a portion of the first amount of bandwidth such that less bandwidth is allocated for the data flows). In some instances, determining that the first amount of bandwidth of the physical underlay is to be increased or decreased may be based at least in part on a bandwidth-allocation model. The bandwidth-allocation model may be a user-defined bandwidth allocation model or may be determined based at least in part on using machine learning (e.g., by associating prior amounts of allocated bandwidth with prior amounts of allocated computing resources for applications). For example, the network controller may receive a user-defined bandwidth-allocation model specifying respective amounts of bandwidth of the physical underlay to allocate for the data flows based at least in part on respective amounts of computing resources of the scalable application service that are allocated to host applications. Additionally, or alternatively, determining the amount of bandwidth that is to be allocated or de-allocated may be based at least in part on a prior amount of bandwidth of the physical underlay that was allocated for the data flows at a time when the amount of the computing resources of the scalable application service were allocated to host the application. That is, a history of prior amounts of bandwidth of the physical underlay that were allocated to serve respective amounts of computing resources of the scalable application may be used to determine how much bandwidth of the physical underlay should be allocated or de-allocated. 
The method may also include, in some examples, causing the first amount of bandwidth of the physical underlay to be increased or decreased. That is, the first amount of bandwidth may be increased or decreased such that a second amount of bandwidth of the physical underlay is allocated for the data flows associated with the application. In some instances, the second amount of bandwidth may comprise more bandwidth than the first amount of bandwidth, and in other instances the second amount of bandwidth may comprise less bandwidth than the first amount of bandwidth. In any event, however, whether the second amount of bandwidth comprises more or less bandwidth than the first amount of bandwidth depends on the amount of computing resources of the scalable application service that are allocated to host the application. That is, if the amount of computing resources has increased then bandwidth may increase as well, and if the amount of computing resources has decreased then bandwidth may decrease as well. In this way, more or fewer data flows associated with the application may utilize the physical underlay to communicate traffic/data between client devices and the application service. In some instances, if the first amount of bandwidth of the physical underlay is decreased such that a portion of the first amount of bandwidth is de-allocated, then the portion of the first amount of bandwidth may be re-allocated to be used by other data flows associated with other applications. In some examples, causing the first amount of bandwidth of the physical underlay to be increased or decreased may comprise sending, to an API associated with the network, a request for the API to allocate the second amount of bandwidth of the physical underlay of the network. In some instances, the API may be associated with the physical underlay. 
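The allocate/de-allocate flow above might look roughly like the following sketch (the service and underlay client classes and their methods are hypothetical stand-ins for the APIs the text describes, not real interfaces):

```python
class ScalableAppService:
    # Hypothetical stub standing in for polling the application service's
    # API for allocated compute (here simplified to a replica count).
    def __init__(self, replicas):
        self.replicas = replicas

    def get_replica_count(self, app):
        return self.replicas

class UnderlayApi:
    # Hypothetical stub standing in for the software-defined/API-driven
    # physical underlay that provisions bandwidth on request.
    def __init__(self):
        self.allocated = {}

    def set_bandwidth(self, app, mbps):
        self.allocated[app] = mbps

def reconcile_bandwidth(service, underlay, app, mbps_per_replica, current_mbps):
    # Poll the service for the compute allocated to the application,
    # map it to a target bandwidth via a simple linear model, and call
    # the underlay API only when the allocation actually changes
    # (de-allocated bandwidth then becomes available to other flows).
    replicas = service.get_replica_count(app)
    target_mbps = replicas * mbps_per_replica
    if target_mbps != current_mbps:
        underlay.set_bandwidth(app, target_mbps)
    return target_mbps

service = ScalableAppService(replicas=4)
underlay = UnderlayApi()
target = reconcile_bandwidth(service, underlay, "app-a", 100, current_mbps=300)
```

After the call, the underlay holds 400 MB/s for "app-a" (4 replicas × 100 MB/s); a later poll seeing fewer replicas would shrink the allocation the same way.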
In some examples, the amount of underlay bandwidth that is to be allocated based at least in part on the current application load may be established by a network operator. By way of example, and not limitation, the operator could establish that each (milli) CPU unit allocated to the application might require 5 MB/s of bandwidth, or that each replica of the application requires 100 MB/s on a WAN connection. As extra CPU units are allocated or new application replicas are instantiated, the network may automatically provision more bandwidth. Further fine tuning may be possible as well. For instance, the operator could configure that the first 10 replicas require 100 MB/s each but that any replicas above 10 require 70 MB/s each. Additionally, or alternatively, the amount of underlay bandwidth that is to be allocated based at least in part on the current application load may be controller-driven (e.g., determined by the network controller device). For instance, the network controller may automatically determine the bandwidth required per CPU unit and/or application replica. In some examples, this may be determined by observing actual throughput generated by the application over time and correlating it with the application autoscaling state. Those skilled in the art should understand that this method may require some trial and error on the heuristics of the network controller before efficient bandwidth allocation can be achieved and thus might require overprovisioning of bandwidth on the underlay for some time. However, by gathering a history of prior network needs by application replica, various methods exist to estimate the predicted bandwidth need as well as probability of outliers, etc. 
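The operator-defined model just described, with its tiered per-replica rates (5 MB/s per CPU unit, 100 MB/s for each of the first 10 replicas, 70 MB/s for any replica above the tenth), can be sketched as a simple lookup. The class and field names are hypothetical; this is a minimal illustration of the example numbers in the text, not the claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class OperatorBandwidthModel:
    """Operator-defined bandwidth-allocation model (illustrative).

    Defaults mirror the example figures above: 5 MB/s per allocated
    CPU unit, 100 MB/s for each of the first 10 replicas, and
    70 MB/s for every replica beyond the tenth.
    """
    mbps_per_cpu_unit: float = 5.0
    mbps_per_replica: float = 100.0
    replica_tier_threshold: int = 10
    mbps_per_replica_above_tier: float = 70.0

    def required_bandwidth(self, cpu_units: int, replicas: int) -> float:
        """Return the total underlay bandwidth (MB/s) for the given load."""
        base_replicas = min(replicas, self.replica_tier_threshold)
        extra_replicas = max(replicas - self.replica_tier_threshold, 0)
        return (cpu_units * self.mbps_per_cpu_unit
                + base_replicas * self.mbps_per_replica
                + extra_replicas * self.mbps_per_replica_above_tier)

model = OperatorBandwidthModel()
# 20 CPU units * 5 + 10 replicas * 100 + 2 extra replicas * 70 = 1240 MB/s
print(model.required_bandwidth(cpu_units=20, replicas=12))  # 1240.0
```

As extra CPU units are allocated or new replicas are instantiated, the controller would re-evaluate this model and provision the difference.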
Furthermore, as the number of application replicas increases, the accuracy percentage of predicting the needed bandwidth generally increases (via central limit theorem, machine learning, etc.), so generally prediction performance improves with increasing history and as the number of application replicas for which the bandwidth needs to be predicted increases. Additionally, or alternatively, the amount of underlay bandwidth that is to be allocated based at least in part on the current application load may be a hybrid of operator-driven and controller-driven methods. That is, the operator may define some baseline numbers of bandwidth associated with each application replica and/or compute units, but the controller may enable fine tuning of the recommended bandwidth as it observes the actual consumption over time. The recommendations may then be applied automatically by the controller or given as an option to the operator. Certain implementations and embodiments of the disclosure will now be described more fully below with reference to the accompanying figures, in which various aspects are shown. However, the various aspects may be implemented in many different forms and should not be construed as limited to the implementations set forth herein. The disclosure encompasses variations of the embodiments, as described herein. Like numbers refer to like elements throughout. FIGS.1A and1Billustrate a system-architecture diagram of an example environment100in which a network controller102may track compute capacity of an application service104to dynamically allocate bandwidth106for data flows108associated with one or more applications110A-110N (hereinafter referred to collectively as “applications110,” and where N represents any number of applications greater than or equal to 1) hosted by the application service104. 
For example, the network controller102of the network112may receive telemetry data114(e.g., indicating an amount of computing resources120A-120N of the application service104that are currently allocated to host the applications110) from the application service104and, based at least in part on the telemetry data114, the network controller102may determine that more or less of the bandwidth106is to be allocated or de-allocated. Accordingly, the network controller102may send one or more bandwidth allocation requests116to an underlay API118(e.g., an application programming interface (API) associated with an underlay of the network112) and, in response, the underlay API118may cause more or less bandwidth106to be allocated or de-allocated (as shown by the differences betweenFIGS.1A and1B) for increased or decreased amounts of the data flows108(also shown by the differences betweenFIGS.1A and1B) associated with the applications110. In this way, when applications110are replicated by the application service and/or when additional computing resources120are being utilized by the applications110, the bandwidth106for the data flows108between the client devices122and the applications110(e.g., using the underlay devices124A and124B of the network112) may more accurately be allocated based on current demand to improve efficiency of the underlay. In some examples, the network controller102may include one or more processors126and memory128communicatively coupled with the one or more processors126. In examples, the one or more processors126may execute instructions stored in the memory128to perform one or more operations on behalf of the network controller102. The memory128of the network controller102stores a bandwidth allocation component130, one or more bandwidth allocation models132, and a learning component134. 
In at least one example, the bandwidth allocation component130may include functionality to analyze the telemetry data114received from the application service104to determine a current amount of computing resources120of the application service104that are allocated to host the applications110. In this way, the bandwidth allocation component130may determine whether the bandwidth106should be increased or decreased. Additionally, the bandwidth allocation component130may access one of the other components of the memory128(e.g., the bandwidth allocation models132and/or the learning component134) to determine how much bandwidth to allocate between the underlay devices124A and124B. As noted above, the memory128of the network controller102may store one or more bandwidth allocation models132. In some examples, the network controller102may receive one or more of the bandwidth allocation models132from one or more network operator devices136and store the models in the memory128. In additional, or alternative examples, the bandwidth allocation models132may be generated by the network controller102(e.g., using the learning component134). For instance, the learning component134may generate the bandwidth allocation models132based at least in part on determining associations between prior amounts of bandwidth106allocated to serve respective amounts of computing resources120allocated for applications110. The processors126of the network controller102may be any suitable processor capable of executing instructions to process data and perform operations as described herein. By way of example and not limitation, the processors126can comprise one or more Central Processing Units (CPUs), Graphics Processing Units (GPUs), or any other device or portion of a device that processes electronic data to transform that electronic data into other electronic data that can be stored in registers and/or memory. 
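One minimal way a learning component could associate prior amounts of allocated bandwidth with prior amounts of allocated computing resources, as described above, is a least-squares fit through the origin over historical (CPU units, observed bandwidth) pairs. The function below is a hypothetical sketch of that idea, not the disclosed learning component134.

```python
def fit_mbps_per_cpu(observations):
    """Least-squares fit (through the origin) of bandwidth vs CPU units.

    `observations` is a list of (cpu_units, observed_mbps) pairs, a
    hypothetical record of prior allocations. Returns the estimated
    MB/s needed per CPU unit, i.e. the slope of the best-fit line.
    """
    num = sum(cpu * mbps for cpu, mbps in observations)
    den = sum(cpu * cpu for cpu, _ in observations)
    return num / den

# History consistent with 5 MB/s per CPU unit
print(fit_mbps_per_cpu([(10, 50), (20, 100)]))  # 5.0
```

A fitted rate like this could then be stored as one of the bandwidth allocation models and refined as new telemetry arrives.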
In some examples, integrated circuits (e.g., ASICs, etc.), gate arrays (e.g., FPGAs, etc.), and other hardware devices can also be considered processors in so far as they are configured to implement encoded instructions. The memory128of the network controller102is an example of non-transitory computer-readable media. The memory128can store an operating system and one or more software applications, instructions, programs, and/or data to implement the methods described herein and the functions attributed to the various systems. In various implementations, the memory128can be implemented using any suitable memory technology, such as static random-access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory capable of storing information. The architectures, systems, and individual elements described herein can include many other logical, programmatic, and physical components, of which those shown in the accompanying figures are merely examples that are related to the discussion herein. The application service104may comprise a scalable application service that hosts one or more third-party applications, virtual machines, containers, and the like using infrastructure (e.g., physical devices, such as the computing resources120) of the scalable application service. For instance, the application service104may comprise a container-orchestration system as described herein, such as, for example, Kubernetes, Amazon ECS, Docker Swarm, Apache Mesos, and the like. The one or more computing resources120of the application service104may be used to host the applications110. The computing resources120may comprise hardware servers, software servers that are running on computer hardware, processors, general purpose computers, and the like. In some examples, the application service104may comprise multiple application services that are located in different geographical regions, different networks, etc. 
That is, although a single instance of the application service104is shown inFIGS.1A and1Bfor illustration purposes, it should be appreciated that the network112may facilitate communication of traffic for multiple instances of the application service104. For instance, the computing resources120and/or the applications110may be distributed between different data centers associated with the application service104, different networks associated with the application service, different zones of a cloud service provider, and the like. The network112may facilitate the communication of traffic between applications110of the application service104and client devices122. The network112may comprise an overlay network and an underlay network. The overlay network may comprise a telecommunications network that is built on top of the underlay network and is supported by its infrastructure (e.g., the underlay network's physical devices, such as the underlay devices124A and124B). The underlay network may comprise a software-defined/API-driven underlay provider (e.g., PacketFabric, Megaport, PCCW Global's ConsoleConnect, etc.). Accordingly, the network112may include the network controller102that communicates with the underlay API118to instruct the underlay API118how much bandwidth should be allocated in the underlay of the network112. It should be appreciated that, although shown as residing in the same network112for simplicity, the network controller102may reside in a different network than the underlay API118and the underlay devices124A and124B. In some instances, the network controller102may additionally, or alternatively, reside in a different geographic location than the underlay API118and/or the underlay devices124A and124B. The underlay devices124A and124B of the network112may comprise routers, switches, general purpose computers, software nodes, gateways, and/or any other networking device capable of forwarding packets through the network112. 
FIG.2illustrates a data flow diagram of an example process200for tracking compute capacity of an application service to dynamically allocate bandwidth for data flows associated with applications hosted by the application service. The process200may be performed, at least in part, by the network controller102, the underlay API118, an application service API202, other devices described herein, and/or any other networking device capable of communicating packets. It should be understood that the various operations of the process200shown inFIG.2are merely for illustrative purposes, and that, in some examples, more or fewer operations may be performed, operations may be omitted, operations may be performed in parallel, etc. At operation204, the network controller102may poll the application service API202for the current application load of the application service. For instance, the network controller may poll the application service API202to receive telemetry data114indicating a current amount of computing resources120of the application service104that are allocated to host the applications110, a current load of the computing resources120, a number of application replicas currently hosted by the application service104, a number of containers currently hosted by the application service104, and the like. At operation206, the application service API202may send telemetry data associated with the application load of the application service. In some examples, sending the telemetry data may be based at least in part on the network controller102polling the application service API. The telemetry data may indicate the current application load of the application service. 
For instance, the telemetry data may indicate a current amount of computing resources120of the application service104that are allocated to host the applications110, a current load of the computing resources120, a number of application replicas currently hosted by the application service104, a number of containers currently hosted by the application service104, and the like. At operation208, the network controller102may determine an amount of bandwidth to allocate for application data flows. The data flows may be associated with an application hosted by the application service. In some instances, determining the amount of bandwidth may be based at least in part on analyzing one or more bandwidth allocation tables. The bandwidth allocation tables may indicate associations between amounts of bandwidth and amounts of consumed computing resources of the application service. That is, the bandwidth allocation tables may indicate how much bandwidth106is suggested to be allocated based on a current amount of computing resources120that are being consumed by applications110hosted by the application service104. In some examples, determining the amount of bandwidth may be based at least in part on one or more prior amounts of bandwidth allocated at a time when the same or similar amount of computing resources of the application service were being consumed. At operation210, the network controller102may send a bandwidth allocation request to the underlay API118. The bandwidth allocation request may indicate the amount of bandwidth of the underlay that is to be allocated or de-allocated for data flows associated with applications hosted by the application service. In some examples, the bandwidth allocation request may program or otherwise cause the underlay API118to allocate or de-allocate the bandwidth. At operation212, the underlay API118may cause the underlay devices to allocate or de-allocate the amount of bandwidth. 
For instance, the underlay API118may cause the underlay devices124A and124B to allocate or de-allocate the amount of bandwidth for the data flows108associated with the applications110hosted by the application service104. In some instances, causing the underlay devices to allocate or de-allocate the amount of bandwidth may comprise sending, from the underlay API118and to the underlay devices, instructions to program or otherwise cause the underlay devices to allocate or de-allocate the amount of bandwidth. FIGS.3and4illustrate flow diagrams of example methods300and400that illustrate aspects of the functions performed at least partly by the network controller102and/or other devices as described inFIGS.1A,1B, and2. The logical operations described herein with respect toFIGS.3and4may be implemented (1) as a sequence of computer-implemented acts or program modules running on a computing system and/or (2) as interconnected machine logic circuits or circuit modules within the computing system. The implementation of the various components described herein is a matter of choice dependent on the performance and other requirements of the computing system. Accordingly, the logical operations described herein are referred to variously as operations, structural devices, acts, or modules. These operations, structural devices, acts, and modules can be implemented in software, in firmware, in special-purpose digital logic, and in any combination thereof. It should also be appreciated that more or fewer operations might be performed than shown in FIGS.3and4and described herein. These operations can also be performed in parallel, or in a different order than those described herein. Some or all of these operations can also be performed by components other than those specifically identified. 
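The poll/decide/allocate cycle of process200 (operations 204 through 212) can be sketched as a simple control loop. The API client stubs, method names, and the flat 5 MB/s-per-CPU heuristic below are hypothetical placeholders standing in for the application service API202 and underlay API118.

```python
import time

# Hypothetical client stubs for the two APIs in process 200; a real
# deployment would wrap the application service API (operations 204/206)
# and the underlay API (operations 210/212).
class AppServiceAPI:
    def get_telemetry(self) -> dict:
        # Placeholder telemetry: 20 CPU units allocated, 12 replicas.
        return {"cpu_units": 20, "replicas": 12}

class UnderlayAPI:
    def allocate(self, mbps: float) -> None:
        print(f"underlay: allocating {mbps} MB/s")

def control_loop(app_api, underlay_api, mbps_per_cpu=5.0,
                 poll_interval=30.0, iterations=1):
    """Run poll/decide/allocate cycles; returns the final allocation."""
    allocated = 0.0
    for _ in range(iterations):
        telemetry = app_api.get_telemetry()              # operations 204/206
        needed = telemetry["cpu_units"] * mbps_per_cpu   # operation 208
        if needed != allocated:
            underlay_api.allocate(needed)                # operations 210/212
            allocated = needed
        time.sleep(poll_interval)
    return allocated
```

For example, `control_loop(AppServiceAPI(), UnderlayAPI(), poll_interval=0.0)` requests 100.0 MB/s for the stubbed 20 CPU units; a request is sent only when the needed amount differs from what is currently allocated.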
Although the techniques described in this disclosure are described with reference to specific components, in other examples, the techniques may be implemented by fewer components, more components, different components, or any configuration of components. FIG.3illustrates a flow diagram of an example method300for tracking compute capacity of an application service to dynamically allocate bandwidth for data flows associated with applications hosted by the application service. For instance, the network controller102may track the compute capacity of the application service104to dynamically allocate bandwidth106for the data flows108associated with the applications110. At operation302, the method300includes allocating a first amount of bandwidth of a physical underlay of a network for data flows associated with an application. For instance, the network controller102of the network112may send a request to allocate the first amount of bandwidth to the underlay API118. In turn, the API may cause the first amount of bandwidth of the physical underlay to be allocated for the data flows108associated with the application110. In this way, application traffic/data may flow between client devices122and a scalable application service104that is hosting the application via the data flows108. In some instances, the physical underlay may include one or more networking devices and/or components, such as routers, switches, modems, cables, and the like. At operation304, the method300includes receiving, from a scalable application service hosting the application, an indication of an amount of computing resources of the scalable application service that are allocated to host the application. For instance, the indication may comprise telemetry data114that is received by the network controller102from the application service104, an API that is associated with the application service104, a computing resource120of the application service, etc. 
Additionally, or alternatively, the indication may include a number of replicas of the application generated and/or hosted by the scalable application service, a compute capacity of the scalable application service, a number of containers currently hosted by the scalable application service, and the like. At operation306, the method300includes determining, based at least in part on the indication, a second amount of bandwidth of the physical underlay to allocate for the data flows. In some examples, determining the second amount of bandwidth may comprise determining a second amount of bandwidth of the physical underlay to allocate for the data flows in addition to the first amount of bandwidth. Additionally, or alternatively, determining the second amount of bandwidth may comprise determining a second amount of bandwidth of the physical underlay to de-allocate for the data flows (e.g., by de-allocating a portion of the first amount of bandwidth such that less bandwidth is allocated for the data flows). In some instances, determining the second amount of bandwidth of the physical underlay to allocate for the data flows may be based at least in part on a bandwidth-allocation model. The bandwidth-allocation model may be a user-defined bandwidth allocation model or may be determined based at least in part on using machine learning (e.g., by associating prior amounts of allocated bandwidth with prior amounts of allocated computing resources for applications). For example, the network controller may receive a user-defined bandwidth-allocation model specifying respective amounts of bandwidth of the physical underlay to allocate for the data flows based at least in part on respective amounts of computing resources of the scalable application service that are allocated to host applications. 
Additionally, or alternatively, determining the amount of bandwidth that is to be allocated or de-allocated may be based at least in part on a prior amount of bandwidth of the physical underlay that was allocated for the data flows at a time when the amount of the computing resources of the scalable application service were allocated to host the application. That is, a history of prior amounts of bandwidth of the physical underlay that were allocated to serve respective amounts of computing resources of the scalable application may be used to determine how much bandwidth of the physical underlay should be allocated or de-allocated. At operation308, the method300includes allocating the second amount of bandwidth of the physical underlay of the network for the data flows associated with the application. For instance, the network controller102may send, to the underlay API118, a request for the underlay API118to allocate the second amount of bandwidth of the physical underlay for the data flows. In some instances, allocating the second amount of bandwidth may comprise de-allocating the second amount of bandwidth from the first amount of bandwidth (e.g., subtracting the second amount of bandwidth from the first amount of bandwidth). FIG.4illustrates a flow diagram of another example method400for tracking compute capacity of an application service to dynamically allocate bandwidth for data flows associated with applications hosted by the application service. The method400begins at operation402, which includes allocating a first amount of bandwidth of a physical underlay of a network for data flows associated with an application. The data flows may facilitate the communication of data between the application and a client device that may be consuming the application. For instance, the network controller102of the network112may send a request to allocate the first amount of bandwidth to the underlay API118. 
In turn, the API may cause the first amount of bandwidth of the physical underlay to be allocated for the data flows108associated with the application110. In this way, application traffic/data may flow between client devices122and a scalable application service104that is hosting the application via the data flows108. In some instances, the physical underlay may include one or more networking devices and/or components, such as routers, switches, modems, cables, and the like. At operation404, the method400includes receiving, from a scalable application service hosting the application, an indication of an amount of computing resources of the scalable application service that are allocated to host the application. For instance, the indication may comprise telemetry data114that is received by the network controller102from the application service104, an API that is associated with the application service104, a computing resource120of the application service, etc. Additionally, or alternatively, the indication may include a number of replicas of the application generated and/or hosted by the scalable application service, a compute capacity of the scalable application service, a number of containers currently hosted by the scalable application service, and the like. At operation406, the method400includes determining, based at least in part on the indication, that the first amount of bandwidth of the physical underlay is to be increased or decreased. In some examples, determining that the first amount of bandwidth is to be increased or decreased may comprise determining a second amount of bandwidth of the physical underlay to allocate for the data flows in addition to the first amount of bandwidth. 
Additionally, or alternatively, determining that the first amount of bandwidth is to be increased or decreased may comprise determining a second amount of bandwidth of the physical underlay to de-allocate for the data flows (e.g., by de-allocating a portion of the first amount of bandwidth such that less bandwidth is allocated for the data flows). In some instances, determining that the first amount of bandwidth is to be increased or decreased may be based at least in part on a bandwidth-allocation model. The bandwidth-allocation model may be a user-defined bandwidth allocation model or may be determined based at least in part on using machine learning (e.g., by associating prior amounts of allocated bandwidth with prior amounts of allocated computing resources for applications). For example, the network controller may receive a user defined bandwidth-allocation model specifying respective amounts of bandwidth of the physical underlay to allocate for the data flows based at least in part on respective amounts of computing resources of the scalable application service that are allocated to host applications. Additionally, or alternatively, determining that the first amount of bandwidth is to be increased or decreased may be based at least in part on a prior amount of bandwidth of the physical underlay that was allocated for the data flows at a time when the amount of the computing resources of the scalable application service were allocated to host the application. That is, a history of prior amounts of bandwidth of the physical underlay that were allocated to serve respective amounts of computing resources of the scalable application may be used to determine how much bandwidth of the physical underlay should be allocated or de-allocated. 
At operation408, the method400includes causing the first amount of bandwidth of the physical underlay to be increased or decreased such that a second amount of bandwidth of the physical underlay is allocated for the data flows associated with the application, wherein the second amount of bandwidth comprises more bandwidth or less bandwidth than the first amount of bandwidth. For instance, the network controller102may send, to the underlay API118, a request for the underlay API118to increase or decrease the first amount of bandwidth of the physical underlay for the data flows. In some instances, increasing or decreasing the first amount of bandwidth may comprise de-allocating a portion of the first amount of bandwidth. In this way, the de-allocated bandwidth may be re-allocated for use by other data flows associated with other applications. FIG.5is a computer architecture diagram showing an illustrative computer hardware architecture for implementing a networking device that can be utilized to implement aspects of the various technologies presented herein. The computer architecture shown inFIG.5illustrates a conventional server computer, network controller102, computing resource120, underlay device124, client device122, network operator device136, workstation, desktop computer, laptop, tablet, network appliance, e-reader, smartphone, or other computing device, and can be utilized to execute any of the software components presented herein. The computer500includes a baseboard502, or “motherboard,” which is a printed circuit board to which a multitude of components or devices can be connected by way of a system bus or other electrical communication paths. In one illustrative configuration, one or more central processing units (“CPUs”)504operate in conjunction with a chipset506. The CPUs504can be standard programmable processors that perform arithmetic and logical operations necessary for the operation of the computer500. 
The CPUs504perform operations by transitioning from one discrete, physical state to the next through the manipulation of switching elements that differentiate between and change these states. Switching elements generally include electronic circuits that maintain one of two binary states, such as flip-flops, and electronic circuits that provide an output state based on the logical combination of the states of one or more other switching elements, such as logic gates. These basic switching elements can be combined to create more complex logic circuits, including registers, adders-subtractors, arithmetic logic units, floating-point units, and the like. The chipset506provides an interface between the CPUs504and the remainder of the components and devices on the baseboard502. The chipset506can provide an interface to a RAM508, used as the main memory in the computer500. The chipset506can further provide an interface to a computer-readable storage medium such as a read-only memory (“ROM”)510or non-volatile RAM (“NVRAM”) for storing basic routines that help to start up the computer500and to transfer information between the various components and devices. The ROM510or NVRAM can also store other software components necessary for the operation of the computer500in accordance with the configurations described herein. The computer500can operate in a networked environment using logical connections to remote computing devices and computer systems through a network, such as the network112. The chipset506can include functionality for providing network connectivity through a NIC512, such as a gigabit Ethernet adapter. The NIC512is capable of connecting the computer500to other computing devices over the network112. It should be appreciated that multiple NICs512can be present in the computer500, connecting the computer to other types of networks and remote computer systems. 
In some examples, the NIC512may be configured to perform at least some of the techniques described herein. The computer500can be connected to a storage device518that provides non-volatile storage for the computer. The storage device518can store an operating system520, programs522, and data, which have been described in greater detail herein. The storage device518can be connected to the computer500through a storage controller514connected to the chipset506. The storage device518can consist of one or more physical storage units. The storage controller514can interface with the physical storage units through a serial attached SCSI (“SAS”) interface, a serial advanced technology attachment (“SATA”) interface, a fiber channel (“FC”) interface, or other type of interface for physically connecting and transferring data between computers and physical storage units. The computer500can store data on the storage device518by transforming the physical state of the physical storage units to reflect the information being stored. The specific transformation of physical state can depend on various factors, in different embodiments of this description. Examples of such factors can include, but are not limited to, the technology used to implement the physical storage units, whether the storage device518is characterized as primary or secondary storage, and the like. For example, the computer500can store information to the storage device518by issuing instructions through the storage controller514to alter the magnetic characteristics of a particular location within a magnetic disk drive unit, the reflective or refractive characteristics of a particular location in an optical storage unit, or the electrical characteristics of a particular capacitor, transistor, or other discrete component in a solid-state storage unit. 
Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this description. The computer500can further read information from the storage device518by detecting the physical states or characteristics of one or more particular locations within the physical storage units. In addition to the mass storage device518described above, the computer500can have access to other computer-readable storage media to store and retrieve information, such as program modules, data structures, or other data. It should be appreciated by those skilled in the art that computer-readable storage media is any available media that provides for the non-transitory storage of data and that can be accessed by the computer500. In some examples, the operations performed by the network112and/or any components included therein, may be supported by one or more devices similar to computer500. Stated otherwise, some or all of the operations performed by the network112, and/or any components included therein, may be performed by one or more computer devices500operating in a scalable arrangement. By way of example, and not limitation, computer-readable storage media can include volatile and non-volatile, removable and non-removable media implemented in any method or technology. Computer-readable storage media includes, but is not limited to, RAM, ROM, erasable programmable ROM (“EPROM”), electrically-erasable programmable ROM (“EEPROM”), flash memory or other solid-state memory technology, compact disc ROM (“CD-ROM”), digital versatile disk (“DVD”), high definition DVD (“HD-DVD”), BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information in a non-transitory fashion. 
As mentioned briefly above, the storage device518can store an operating system520utilized to control the operation of the computer500. According to one embodiment, the operating system comprises the LINUX operating system. According to another embodiment, the operating system comprises the WINDOWS® SERVER operating system from MICROSOFT Corporation of Redmond, Wash. According to further embodiments, the operating system can comprise the UNIX operating system or one of its variants. It should be appreciated that other operating systems can also be utilized. The storage device518can store other system or application programs and data utilized by the computer500. In one embodiment, the storage device518or other computer-readable storage media is encoded with computer-executable instructions which, when loaded into the computer500, transform the computer from a general-purpose computing system into a special-purpose computer capable of implementing the embodiments described herein. These computer-executable instructions transform the computer500by specifying how the CPUs504transition between states, as described above. According to one embodiment, the computer500has access to computer-readable storage media storing computer-executable instructions which, when executed by the computer500, perform the various processes described above with regard toFIGS.1A-4. The computer500can also include computer-readable storage media having instructions stored thereupon for performing any of the other computer-implemented operations described herein. The computer500can also include one or more input/output controllers516for receiving and processing input from a number of input devices, such as a keyboard, a mouse, a touchpad, a touch screen, an electronic stylus, or other type of input device. Similarly, an input/output controller516can provide output to a display, such as a computer monitor, a flat-panel display, a digital projector, a printer, or other type of output device. 
It will be appreciated that the computer500might not include all of the components shown inFIG.5, can include other components that are not explicitly shown inFIG.5, or might utilize an architecture completely different than that shown inFIG.5. The computer500may include one or more hardware processors504(processors) configured to execute one or more stored instructions. The processor(s)504may comprise one or more cores. Further, the computer500may include one or more network interfaces configured to provide communications between the computer500and other devices, such as the communications described herein as being performed by the source device106and network device102. The network interfaces may include devices configured to couple to personal area networks (PANs), wired and wireless local area networks (LANs), wired and wireless wide area networks (WANs), and so forth. For example, the network interfaces may include devices compatible with Ethernet, Wi-Fi™, and so forth. The programs522may comprise any type of programs or processes to perform the techniques described in this disclosure for determining connectivity in multi-hop paths using BFD Echo packet(s). The programs522may enable the network device102to perform various operations. While the invention is described with respect to the specific examples, it is to be understood that the scope of the invention is not limited to these specific examples. Since other modifications and changes varied to fit particular operating requirements and environments will be apparent to those skilled in the art, the invention is not considered limited to the example chosen for purposes of disclosure, and covers all changes and modifications which do not constitute departures from the true spirit and scope of this invention. 
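The programs522are described as determining connectivity in multi-hop paths using BFD Echo packets. The real BFD protocol (RFC 5880/5881) defines its own packet format and state machine, which is not reproduced here; purely as an illustration of the echo-style liveness idea, the following Python sketch times a round trip against a stand-in local echo endpoint (the server thread, payload, and timeout values are invented for the example).

```python
import socket
import threading
import time

def udp_echo_server(sock):
    """Echo a single datagram back to its sender (stands in for the far end)."""
    data, addr = sock.recvfrom(2048)
    sock.sendto(data, addr)

def probe_connectivity(host, port, payload=b"echo-probe", timeout=2.0):
    """Send one echo datagram and wait for it to come back.

    Returns (reachable, round_trip_seconds). This is only an echo-style
    liveness check; real BFD (RFC 5880/5881) defines its own packet format.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        start = time.monotonic()
        s.sendto(payload, (host, port))
        try:
            data, _ = s.recvfrom(2048)
        except socket.timeout:
            return False, None
        return data == payload, time.monotonic() - start

# Demonstrate against a local stand-in echo endpoint.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server_sock.bind(("127.0.0.1", 0))
echo_port = server_sock.getsockname()[1]
threading.Thread(target=udp_echo_server, args=(server_sock,), daemon=True).start()

reachable, rtt = probe_connectivity("127.0.0.1", echo_port)
print(reachable)
```

A probe like this returns a failure indication after the timeout when no echo comes back, which is the signal a liveness monitor would act on.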
Although the application describes embodiments having specific structural features and/or methodological acts, it is to be understood that the claims are not necessarily limited to the specific features or acts described. Rather, the specific features and acts are merely illustrative of some embodiments that fall within the scope of the claims of the application.
DETAILED DESCRIPTION FIG.2Ashows an embodiment of cloud-based system200adapted according to concepts of the present invention. System200of the illustrated embodiment includes network210in which cloud-based application211is present. It should be appreciated that although a single cloud-based application is shown for simplicity, embodiments of a cloud-based system may comprise any number of cloud-based applications, wherein such cloud-based applications may be similar and/or different with respect to any other cloud-based application. Cloud-based application211comprises a cloud-based deployment of an application which is stored and executed by computing equipment disposed in the “cloud” of network210. Accordingly, cloud-based application211may comprise computer executable code executing on one or more processor-based systems (e.g., web servers) deployed in network210. Cloud-based application211may, for example, comprise a system providing various desired functionality such as in the form of software as a service (SaaS), infrastructure as a service (IaaS), platform as a service (PaaS), unified communications as a service (UCaaS), etc. Cloud-based application211may, for example, comprise a shipping management system operable to provide the automation of tasks associated with the shipment of items, including the management of item orders, managing the picking and packing of items for order fulfilment, generation of invoices, packing slips, manifests, shipping labels, and postage or other prepaid shipping indicia, and/or tracking of shipment of items through a shipping service provider. In an exemplary embodiment, a shipping management system configuration of cloud-based application211may be operable to provide various functionality associated with the processing and shipment of various items. 
Such a shipping management system of embodiments may, for example, provide functionality for the processing and shipment of various items by a merchant or other shipper to a purchaser or other recipient as well as for the processing and return shipment of such items by the purchaser or other recipient to the merchant or other original shipper. Such functionality provided according to embodiments of a shipping management system includes the automation of tasks associated with the shipment of items, including the management of item orders, managing the picking and packing of items for order fulfilment, generation of invoices, packing slips, manifests, shipping labels, and postage or other prepaid shipping indicia, and/or tracking of shipment of items through a shipping service provider. For example, a shipping management system may comprise some or all of the functionality of a shipping management system, such as the SHIPSTATION shipping management system provided by Auctane LLC of Austin, Texas, adapted to further provide for the return shipment of items according to the concepts herein. Additionally, embodiments of a shipping management system may provide functionality for managing return requests, processing of return requests, authorizing returns, tracking of the processing of a return request, tracking of return shipments, generating return shipping labels, generating return postage indicia, and/or the like. In particular, embodiments of a shipping management system provide a multi-carrier shipping management system capable of providing shipping functionality (e.g., rating, shipping indicia generation, packing list printing, shipping label printing, tracking, carrier pickup request, manifest printing, account management and payment, etc.) 
for a plurality of carriers or shipping service providers (e.g., the United States Postal Service (USPS), United Parcel Service (UPS), Federal Express, DHL, local couriers, etc.), thereby providing a multi-carrier shipping management platform. It should be appreciated that the “cloud” environment associated with system200, and network210thereof, may comprise various applications, systems (e.g., server systems232and233), devices (e.g., storage234), etc. in addition to cloud-based application211, as may be utilized by or otherwise operable with cloud-based application211for providing desired functionality. For example, embodiments of system200wherein cloud-based application211comprises a shipping management system may comprise one or more postage system (e.g., as server system232), one or more shipping service provider system (e.g., as server system233), and/or one or more databases (e.g., as storage234), such as may be utilized by or otherwise operable with the shipping management system for providing shipping functionality (e.g., multi-carrier (multiple shipping service provider) and/or multi-channel (multiple commerce channel) shipping functionality). Network210of the illustrated embodiment provides communication links facilitating the communication between various systems and components of system200, such as storage234, server systems232and233, user devices220A-220D, and cloud-based application211. Accordingly, network210may comprise any number of network configurations, such as the Internet, an intranet, the public switched telephone network (PSTN), a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), a cellular network, a wireless network, a cable transmission network, and/or the like. User devices220A-220D of embodiments comprise processor-based systems (e.g., a terminal in the form of one or more computer, workstation, kiosk terminal, portable processor-based device, etc.) 
operable under control of one or more instruction set (e.g., software, firmware, applet, etc.) adapted to cooperate with cloud-based application211to provide desired functionality, such as the aforementioned exemplary shipping management functionality. User devices220A and220B of the illustrated embodiment, for example, comprise computer systems having a processor, memory, and input/output to enable interaction with cloud-based application211for providing at least some portion of functionality available through operation of system200. User device220C of the illustrated embodiment, however, comprises a tablet device having a processor, memory, and input/output to enable interaction with cloud-based application211for providing at least some portion of functionality available through operation of system200. User device220D of the illustrated embodiment comprises a smart phone having a processor, memory, and input/output to enable interaction with cloud-based application211for providing at least some portion of functionality available through operation of system200. The communication links between the user devices and the other nodes of system200(e.g., cloud-based application211, computing resources230A-230D, and other ones of user devices220A-220D) may comprise wired links (e.g., as illustrated for user devices220A and220B) or wireless links (e.g., as illustrated for user devices220C and220D), as well as combinations thereof. It should be appreciated that, although the illustrated embodiment shows a representative example of the number and configuration of user devices as may be utilized with respect to system200, the concepts of the present invention are not limited to application to the particular number or configuration of user devices shown. 
A postage system configuration of server system232may comprise one or more processor-based systems (e.g., web servers) operable to provide prepaid postage account and postage indicia generation services (e.g., to generate postage indicia, shipping labels, etc. acceptable by the USPS and/or other shipping service providers). For example, server system232may comprise one or more server of the online postage service offered by Stamps.com Inc., or other third party service providers (e.g., postage service providers, pack and ship service providers, shipping insurance service providers, etc.). Various individuals (shippers, purchasers, etc.) and/or other entities (e.g., business entities, merchants, etc.) may maintain an account with respect to server system232facilitating the generation of postage indicia, whether individually, in batch, in combination with other documentation (e.g., shipping labels, bills of lading, manifests, customs documents, etc.), and whether directly by the account holder or indirectly, such as by operation of a shipping management system of cloud-based application211. Prepaid value associated with such accounts may be stored by, or in association with, the postage system, such as within a postage security device (PSD). Additional or alternative information may be stored by, or in association with, the postage system, such as user credentials utilized in validating users for account access, usage logs for accounting and/or fraud prevention/detection, rating information for use in shipping service rate determinations, etc. A shipping service provider system configuration of server system233may comprise one or more processor-based systems (e.g., web servers) operable to provide functionality with respect to shipping services provided by a shipping service provider. 
For example, server system233may comprise one or more server associated with the USPS providing an interface for accessing tracking data with respect to shipments made via USPS provided shipping services. Accordingly, various individuals (shippers, intended recipients, etc.) and/or other entities (e.g., businesses, merchants, etc.) may utilize tracking numbers and/or other information to access shipment tracking information throughout the shipment of items by the shipping service provider. Functionality in addition to or in the alternative to the aforementioned postage generation services provided by server system232and the aforementioned shipment tracking services provided by server system233may be provided by these and/or one or more other systems of system200(not shown), according to embodiments. For example, functionality may be provided by one or more servers (e.g., manufacturers' systems, distributors' systems, product database systems, etc.) to provide information relevant to shipping various items (e.g., weight information used to determine shipping service rates, item content information used to determine shipping services available for use with respect to particular items, such as in the case of hazardous materials, package size information used to determine shipping service rates and/or compatibility with respect to particular items, etc.). It should be appreciated that the functionality provided by various servers and/or other systems of the cloud environment of system200of embodiments is not limited to functionality directly related to the management of the shipment of items. For example, one or more servers may be present in the cloud environment which are operable to provide online electronic commerce sites (e.g., online merchant stores), electronic marketplaces (e.g., eBay and Amazon Marketplace), etc. 
(collectively referred to herein as commerce channels). Multiple such commerce channel sites may be in communication with a shipping management system configuration of cloud-based application211of embodiments, thereby providing a multi-channel shipping management platform. Storage234shown in the illustrated embodiment disposed in the cloud environment of network210may, for example, comprise various forms of tangible, non-transitory computer readable media. For example, storage234of embodiments may comprise random access memory (RAM), read only memory (ROM), flash memory, disk memory, magnetic memory, optical memory, etc. Storage234may comprise a storage array or other storage system, such as a redundant array of independent disks (RAID) array, a storage area network (SAN), a network attached storage (NAS) array, and/or the like. Such computer readable storage may be utilized to store various information, including program code of cloud-based application211, program code of client applications utilized with respect to cloud-based application211, server system232, and/or server system233, data utilized by and/or with any or all of cloud-based application211, server system232, server system233, and their client applications, etc. Portions of storage234may comprise a part of one or more systems of the cloud (e.g., server system232and/or server system233) and/or may be separate from such systems (e.g., provided as an independent database system). From the foregoing it can be appreciated that user devices220A-220D, server system232, and/or server system233of embodiments comprise processor-based systems operating under control of one or more instruction set (e.g., software, firmware, applet, etc.) to provide operation as described herein. 
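The rating functionality of a multi-carrier shipping management system, i.e., quoting and comparing rates across a plurality of carriers, can be sketched as a simple table lookup and comparison. The carriers' weight tiers and prices below are invented for illustration; actual rates would come from the carriers' own rating services.

```python
# Hypothetical per-carrier rate tables: ordered (max_weight_oz, price_usd) tiers.
RATE_TABLES = {
    "USPS":  [(8, 4.50), (16, 5.25), (80, 9.10)],
    "UPS":   [(16, 7.80), (160, 12.40)],
    "FedEx": [(16, 8.10), (160, 11.95)],
}

def rate_for(carrier, weight_oz):
    """Return the first tier price whose weight ceiling covers the item."""
    for max_wt, price in RATE_TABLES[carrier]:
        if weight_oz <= max_wt:
            return price
    return None  # this carrier cannot ship the item at this weight

def cheapest_service(weight_oz):
    """Quote every carrier and return (carrier, price) for the lowest rate."""
    quotes = {c: rate_for(c, weight_oz) for c in RATE_TABLES}
    available = {c: p for c, p in quotes.items() if p is not None}
    return min(available.items(), key=lambda kv: kv[1])

print(cheapest_service(12.0))
print(cheapest_service(100.0))
```

Note that carriers offering no tier for a given weight simply drop out of the comparison, which mirrors how certain shipping services are unavailable for particular items.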
Such processor-based systems may comprise a central processing unit (CPU) (e.g., a processor from the PENTIUM or CORE line of processors available from Intel Corporation), memory (e.g., random access memory (RAM), read only memory (ROM), flash memory, disk memory, optical memory, etc.) storing the aforementioned one or more instruction set and/or other data used herein, and appropriate input/output circuitry and devices (e.g., network interface card (NIC), keyboard, digital pointer, display screen, peripheral interface, microphone, speaker, wireless interface, etc.). Various peripherals and other computing resources useful in providing functionality available through operation of system200may be coupled to or otherwise in communication with user devices of embodiments. For example, computing resources, such as one or more printer (e.g., thermal printer, ink printer, laser printer, etc.), imaging device (e.g., optical scanner, barcode scanner, handheld scanner, camera, biometric sample scanner, etc.), scale (e.g., electronic scale, digital scale), memory device (e.g., optical disk drive, hard disk drive, solid state disk drive, etc.), and/or the like, may be provided for use with any or all of the foregoing processor-based systems according to embodiments herein. Computing resources230A and230B, shown in the illustrated embodiment as printers (e.g., printer for printing proof of payment indicia, bills of lading, manifests, receipts, shipping labels, etc.) are shown as being coupled to user devices220A and220B, respectively. Similarly, computing resource230C, shown as a scale (e.g., an electronic scale for weighing items for shipping), and computing resource230D, shown as an imaging device (e.g., a scanner for obtaining images of shipping items, for scanning barcodes, for scanning addresses or other shipping item information, etc.), are shown as being coupled to user device220A of the illustrated embodiment. 
Of course, configurations and deployments of computing resources different than that shown may be utilized according to embodiments of the invention. User device220A may, for example, be configured for use by a merchant or other shipper to operate with an electronic commerce or marketplace system of the cloud environment to offer items for sale and/or with shipping management system of cloud-based application211for managing orders of such items, outbound shipments of the items of such orders, and return of items by purchasers or other recipients thereof. For example, user device220A may be configured to operate with a shipping management system implementation of cloud-based application211to import orders from a plurality of marketplace systems of a multi-channel (e.g., Amazon.com, eBay, etc.) ecommerce environment, convert the order data to a unified format for facilitating shipping management functionality of the shipping management system, and provide shipping management control, reporting, and operation under control of a user. Accordingly, the illustrated embodiment of user device220A includes a number of computing resources230A,230C, and230D, as may be utilized in facilitating such functionality. Correspondingly, user device220B may, for example, be configured for use by a purchaser or other acquirer/recipient of an item to operate with an electronic commerce or marketplace system of the cloud environment to purchase items and/or with shipping management system of cloud-based application211for requesting and processing return shipment of various purchased or otherwise acquired items. Accordingly, the illustrated embodiment of user device220B includes computing resource230B, as may be utilized in facilitating such functionality. 
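The multi-channel order import described above, i.e., importing orders from a plurality of marketplace systems and converting the order data to a unified format, can be sketched as a per-channel field mapping. The channel names, field names, and unified schema below are hypothetical; each real marketplace API defines its own order payloads.

```python
def normalize_order(channel, raw):
    """Map a channel-specific order record onto one unified schema."""
    if channel == "marketplace_a":
        return {
            "order_id": raw["OrderId"],
            "recipient": raw["ShipTo"]["Name"],
            "address": raw["ShipTo"]["AddressLine"],
            "items": [(line["SKU"], line["Qty"]) for line in raw["Lines"]],
        }
    if channel == "marketplace_b":
        return {
            "order_id": raw["id"],
            "recipient": raw["buyer_name"],
            "address": raw["ship_address"],
            "items": [(line["sku"], line["quantity"]) for line in raw["line_items"]],
        }
    raise ValueError("unknown channel: " + channel)

# Two invented channel-specific payloads with differing schemas.
order_a = {"OrderId": "A-100", "ShipTo": {"Name": "Pat", "AddressLine": "1 Main St"},
           "Lines": [{"SKU": "W-1", "Qty": 2}]}
order_b = {"id": "B-7", "buyer_name": "Sam", "ship_address": "2 Oak Ave",
           "line_items": [{"sku": "W-1", "quantity": 1}]}

unified = [normalize_order("marketplace_a", order_a),
           normalize_order("marketplace_b", order_b)]
print([o["order_id"] for o in unified])
```

Once in the unified format, downstream shipping management functionality (rating, label generation, reporting) can be written once rather than per channel, which is the point of the conversion step.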
User device220C of the illustrated embodiment comprises a tablet device having a processor, memory, and input/output useful to enable interaction with an electronic commerce or marketplace system of the cloud environment and/or shipping management system of cloud-based application211, according to embodiments herein. User device220D of the illustrated embodiment comprises a smartphone having a processor, memory, and input/output useful to enable interaction with an electronic commerce or marketplace system of the cloud environment and/or shipping management system of cloud-based application211, according to embodiments herein. Such user device configurations may be particularly well suited for use by purchasers or other recipients of items in an electronic commerce marketplace cloud environment. Of course, any or all such user devices may likewise be utilized by a merchant or other shipper in such an electronic commerce marketplace cloud environment. Additionally, any or all such user devices may comprise or otherwise be in communication with any number of computing resources, as may or may not be utilized through operation of various embodiments herein. Although, in a cloud-based application configuration, a cloud-based application generally does not have direct access to computing resources coupled to the various user devices, system200of the illustrated embodiment is adapted according to concepts of the present invention to facilitate access to computing resources230A-230D by cloud-based application211. Accordingly, system200of embodiments of the invention enables cloud-based application211to provide output to and/or obtain input from any or all of computing resources230A-230D for performing various tasks associated with the functionality provided by the cloud-based application. 
For example, in a shipping management system embodiment of cloud-based application211, a merchant or other shipper may be enabled to generate shipping labels (e.g., having prepaid postage indicia thereon) and/or other documentation related to the order and/or shipment thereof, such as utilizing computing resources230A and/or230B. Accounts maintained with respect to a postage system embodiment of server system232by various individuals (shippers, purchasers, etc.) and/or other entities (e.g., business entities, merchants, etc.) may be utilized to facilitate such generation of postage indicia, whether individually, in batch, in combination with other documentation (e.g., shipping labels, bills of lading, manifests, customs documents, etc.), whether directly by the account holder or indirectly, by operation of the shipping management system of embodiments. Generation of such shipping documentation may comprise input of various information regarding the item being shipped (e.g., the weight of the item, such as using a scale computing resource such as computing resource230C, hazardous material disclosures or other information relevant to the shipment of the item, such as using a scanner computing resource such as computing resource230D), regarding the recipient and/or shipper of the item (e.g., the origination and/or destination address, such as using a scanner computing resource such as computing resource230D), etc. Additional information (e.g., user credentials, as may include biometric information scanned using a scanner computing resource such as computing resource230D) stored by, or in association with, a shipping management system and/or postage system, may be utilized in facilitating the generation of postage indicia and/or other shipment documents. 
System200of embodiments of the invention enables cloud-based application211to provide output to and/or obtain input from any or all of computing resources230A-230D for performing various tasks associated with the foregoing shipping management system functionality. As a further example of shipping management system functionality which may implicate the use of various computing resources, such as computing resources230A-230D, the shipping service provider, the shipper, the recipient, and/or others may utilize information regarding the progress of an item through the mail stream for various purposes, including management, monitoring, and tracking of the shipment. For example, a user (e.g., purchaser of the shipped item, merchant shipping the item, etc.) may use a scanner computing resource, such as computing resource230D, to scan a tracking number, such as from shipping documentation. Such tracking number information may be utilized to identify the relevant entry or entries within a shipping service provider system embodiment of server system233for the shipped item. Using such tracking numbers, a user may be provided information regarding the last recorded status of the shipment as it progresses through the mail stream. Various reports regarding the status of the shipment(s) may be generated and printed, such as using a printing computing resource such as computing resources230A and/or230B. As shown in the illustration ofFIG.2B, user devices220(e.g., user devices220A-220D) of system200comprise one or more client applications for interacting with cloud-based application211to provide operation as described herein. For example, user devices220may comprise user interface client application221operable to facilitate user interaction with cloud-based application211. 
User interface client application221may comprise a web browser (e.g., INTERNET EXPLORER, GOOGLE CHROME, or FIREFOX) hosting a cloud client application or applet associated with cloud-based application211and operable within the web browser. For example, a user may interact with cloud-based application211via cloud client application221A executing within user interface client application221. Continuing with the above example of a shipping management system, the user may provide input to control cloud-based application211to perform functions such as managing item orders, managing the picking and packing of items for order fulfilment, generating invoices, packing slips, manifests, shipping labels, and/or postage indicia, and/or tracking shipment of items using cloud client application221A executing within user interface client application221. However, such cloud client applications executing within a user interface client application generally do not have direct access to computing resource230coupled to or otherwise in communication with user device220. Accordingly, tasks involving the providing of data from cloud-based application211to computing resource230(e.g., printing, storing data, etc.) or obtaining data by cloud-based application211from computing resource230(e.g., scanning, weighing, reading data, etc.) generally require multiple steps in order to invoke operations of the host user interface client application or cannot be performed. For example, such access may allow a cloud-based application to output data to be printed to a user device printer, to place data into and/or obtain data from a clipboard or other memory of a user device (e.g., tracking numbers, customer address data, lists of data for pasting into a spreadsheet application, etc.), to issue commands (e.g., reset or feature selections) to a user device and/or user device peripherals in communication therewith, etc. 
Similarly, such access may allow a cloud-based application to obtain data from an imaging device such as to read a barcode or optically scan data, to take “screen shots” of a user device screen, to obtain diagnostic information (e.g., operating system version information and/or configuration, screen size information, installed virus protection software information, firewall settings, etc.) from a user device and/or user device peripherals in communication therewith, etc. Of course, any or all such access may be performed at the request or control of a user, according to embodiments. User devices220may, therefore, comprise user agent client application222operable to facilitate access to computing resource230(e.g., any or all of computing resources230A-230D) by cloud-based application211. In operation according to embodiments, user agent client application222(embodiments of which are referred to herein as a “Connect Workstation”) comprises an application which is separate from user interface client application221and which is executed by user device220to implement localized access to computing resource230. Utilizing such localized access to computing resource230, user agent client application222may interact with cloud-based application211and/or cloud client application221A to enable the cloud-based application to output to and/or obtain input from one or more of the computing resource. Accordingly, embodiments of the invention provide a localized application in the form of user agent client application222that has direct or localized access to computing resources, to directly communicate with and control the operation of those resources, whereby through cooperation of the localized application and cloud-based application the cloud-based application is enabled to access and utilize the computing resources. FIG.3shows an implementation wherein a user agent client application is implemented as a Connect Workstation. 
It should be appreciated that, although a single instance of Connect Workstation322is shown in the illustrated embodiment, any number of such Connect Workstations may be installed according to embodiments. For example, multiple instances of the Connect Workstation may be installed upon a same host system (e.g., user device220A), such as to facilitate operation as described herein with respect to a number of different cloud-based applications, a number of different computing resources, etc. For example, one or more instances of the Connect Workstation may be installed to provide computing resource access (e.g., printing, scanning, data storage, etc.) to a plurality of cloud-based applications providing different services, such as shipping management services, postage indicia generation services, word processing services, accounting services, communication services, calendaring services, etc. Additionally or alternatively, instances of the Connect Workstation may be installed upon a plurality of host systems (e.g., any or all of user devices220A-220D), such as to facilitate operation as described herein with respect to a number of different computing resources, a number of different cloud-based applications, etc. Similarly, it should be appreciated that, although a single instance of computing resource230is shown in the illustrated embodiment, any number of such computing resources may be provided with respect to system200and/or any particular user device thereof. Connect Workstation322of embodiments provides a relatively small localized application that has direct or local access to computing resource230, to directly communicate with and control the operation of that resource. Although embodiments of Connect Workstation provide a localized deployment of an application, disadvantages typically associated with the deployment and maintenance of such localized applications may be avoided or mitigated by configurations of the Connect Workstation according to embodiments. 
As discussed above, an advantage of a cloud-based deployment is realized in the user or other user entity not having to obtain and maintain computing equipment (e.g., processor capability, memory capacity, etc.) sufficient to store and execute the localized application(s). However, in the case of embodiments of the Connect Workstation, the application is small and requires relatively little in the way of processing resources. Thus, most any user device having capabilities sufficient to utilize a user interface client application and cloud client application will likely already have sufficient capabilities to accommodate the Connect Workstation. In operation according to embodiments, communication is provided between the cloud client applications of system200(e.g., cloud client applications221A-1and221A-2) and cloud-based application211using inter-process communication flows anchored with network sockets (e.g., Internet sockets based on the Berkeley sockets standard) at the respective cloud client applications and cloud-based application. Accordingly communication links301and302of the illustrated embodiment may comprise socket pairs facilitating secure inter-process communications. The cloud client application may, for example, initiate and establish an open channel (e.g., comprising a communication link of communication links301and302) with cloud-based application211, whereby messages can be sent from the cloud client application or the cloud-based application at any time. The user and/or cloud client application may be authenticated, such as by operation of authentication module312using various user name/account and password credentials, biometric credentials, digital signature credentials, cryptographic key credentials, etc., in order to verify the user and/or cloud client application's authorization to access the cloud-based application and/or functions thereof, to establish the secure channel, etc. 
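The open, message-oriented channel described above, where either endpoint may send a message at any time once the client has been authenticated, can be sketched with an ordinary connected socket pair. The newline-delimited JSON framing and the token check below are invented for illustration; a production system would use TLS and a real credential exchange via an authentication module such as authentication module312.

```python
import json
import socket

def send_msg(sock, obj):
    """Frame one JSON message with a newline delimiter."""
    sock.sendall((json.dumps(obj) + "\n").encode())

def recv_msg(sock_file):
    """Read one newline-delimited JSON message."""
    return json.loads(sock_file.readline())

# A connected socket pair stands in for the client <-> cloud channel.
client, server = socket.socketpair()
server_file = server.makefile("r")
client_file = client.makefile("r")

# The client authenticates once when the channel is opened ...
send_msg(client, {"type": "auth", "user": "demo", "token": "secret"})
hello = recv_msg(server_file)
authenticated = hello["type"] == "auth" and hello["token"] == "secret"

# ... after which either endpoint may send on the open channel at any time.
if authenticated:
    send_msg(server, {"type": "notify", "body": "label ready"})
    send_msg(client, {"type": "request", "body": "print manifest"})

to_client = recv_msg(client_file)   # server-initiated message
to_server = recv_msg(server_file)   # client-initiated message
print(to_client["body"], to_server["body"])
```

The essential property illustrated is symmetry: after the handshake, the server can push messages without waiting for a client request, which is what enables the server-initiated messaging discussed below.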
Operation of the channel may be managed, such as by operation of channel management module313, to maintain/terminate the channel as appropriate, to control communications transmitted via the channel, etc. A plurality of the aforementioned channels may be bundled to provide a grouped channel, such as to facilitate interaction and functionality for a number of affiliated users (e.g., employees of a same company, members of a family, etc.). For example, authentication module312may identify particular users and/or cloud client applications as being associated with a same group and thus provide grouping of the channels provided thereto, such as represented by channel group300. The communications provided via the various channels of the channel group may comprise various directives or notifications for implementing functionality of the cloud-based application by or on behalf of the affiliated users thereof. Connect Workstation322of embodiments of the invention initiates and establishes an open channel (e.g., comprising a communication link303) with cloud-based application211, whereby messages can be sent from the Connect Workstation or the cloud-based application at any time. The Connect Workstation may be authenticated, such as by operation of authentication module312, and likewise may be associated with a particular channel group. For example, communication link303for Connect Workstation322may be grouped with one or more communication links for cloud client applications (e.g., communication links301and302), such as represented by channel group300. Channel management module313may operate to arbitrate communications transmitted via a channel of the channel group to direct the appropriate communications to another channel of the channel group, thereby providing virtual communication links between various ones of the client applications (examples of such virtual communication links being shown as virtual communication links304and305). 
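The channel-group arbitration described above, where the channel manager directs a message arriving on one member's channel to another member's channel to form a virtual communication link, can be sketched as follows. This is an illustrative data structure only; the class and member names are hypothetical and not drawn from the patent.

```python
from collections import defaultdict

class ChannelManager:
    """Arbitrates messages among channels grouped by affiliation.
    Each group maps a member id to that member's inbound message queue."""

    def __init__(self):
        self.groups = defaultdict(dict)  # group_id -> {member_id: inbox}

    def join(self, group_id: str, member_id: str) -> None:
        """Associate a member's channel with a channel group."""
        self.groups[group_id][member_id] = []

    def route(self, group_id: str, sender_id: str, target_id: str,
              message: dict) -> None:
        """Deliver a message from one group member to another,
        providing a 'virtual communication link' between them."""
        self.groups[group_id][target_id].append({"from": sender_id, **message})

    def broadcast(self, group_id: str, sender_id: str, message: dict) -> None:
        """Deliver a message to every other member of the group."""
        for member_id, inbox in self.groups[group_id].items():
            if member_id != sender_id:
                inbox.append({"from": sender_id, **message})

mgr = ChannelManager()
mgr.join("group300", "cloud-client-221A-1")
mgr.join("group300", "connect-workstation-322")
mgr.route("group300", "cloud-client-221A-1", "connect-workstation-322",
          {"type": "print-command"})
```

The same routing table supports both the directed command/control messages and the group-wide broadcasts used for member awareness.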
Although embodiments of Connect Workstation provide a localized deployment of an application, disadvantages typically associated with the deployment and maintenance of such localized applications may be avoided or mitigated using the aforementioned open channel. As discussed above, an advantage of a cloud-based deployment is realized in the users of a cloud-based application not having the burden of managing and monitoring the lifecycle of the application, such as to obtain and install updates in order to maintain the functionality and compatibility of the application. Embodiments of a Connect Workstation herein utilize the aforementioned open channel to facilitate server initiated messaging to manage the lifecycle of the Connect Workstation application. For example, as cloud-based application211is updated, possibly rendering it incompatible with or otherwise non-optimal for use with Connect Workstation322, commands may be provided using the open channel of communication link303to appropriately patch and/or update the Connect Workstation. It should be appreciated that, although channel group300of the illustrated embodiment shows a plurality of instances of cloud client applications (possibly associated with the same or different users) and a single instance of a Connect Workstation as members of the group, a channel group utilized for facilitating cloud-based application resource access according to embodiments of the present invention may include any number of members so long as at least one Connect Workstation is provided a channel to the cloud-based application. For example, a single instance of a cloud client application and a single instance of a Connect Workstation associated as members of a channel group may be utilized to initiate cloud-based application access with respect to a computing resource. 
Additionally, a plurality of instances of Connect Workstations may be utilized to provide access to a variety of computing resources, such as may be disposed at different physical locations and/or which may comprise different computing resource types/configurations. It should further be appreciated that, although the illustrated embodiment of channel group300shows communication links with a single cloud-based application, any number of cloud-based applications may be provided communication by a channel group of embodiments, wherein each such cloud-based application may be enabled to access some or all of the computing resources associated with that channel group. FIG.4shows an embodiment of flow400representing operation of system200implementing Connect Workstation322to provide cooperation with cloud-based application211and enable access to and utilization of computing resource230according to embodiments of the invention. At block401of the illustrated embodiment Connect Workstation322is installed and configured. For example, the program code for Connect Workstation322may be downloaded from network210by user device220A and installed in memory thereof. Connect Workstation322is preferably configured at block401for association with one or more appropriate channel groups. For example, having installed Connect Workstation322upon user device220A, a user thereof may provide their credentials (such as by a login dialog box presented by a user interface of the Connect Workstation installation utility) for cloud-based application211to enable Connect Workstation322to establish communication link303with cloud-based application211and to associate a channel thereof with channel group300to which cloud client application221A-1utilized by the user is also associated. 
The installed instance of Connect Workstation may be associated with the user, and thus instances of the user's cloud client application, automatically, such as through the use of the aforementioned user's credentials during installation/configuration of the Connect Workstation. For example, having provided valid user credentials during installation, a unique identifier for the Connect Workstation (e.g., workstation ID) may be associated with the particular user by the cloud-based application. Thus, when the user accesses the cloud-based application through a cloud client application executing in a user interface client application, the Connect Workstation and computing resources in communication therewith may be automatically associated with, and preferably available to, the user. Moreover, the Connect Workstation may be associated with the user by the cloud-based application, such as for delivery of messages and/or other functionality, even when the user is not logged into the cloud-based application through a cloud client application. Because Connect Workstation322of embodiments facilitates access to computing resources, such as computing resource230, configuration of Connect Workstation322at block401may additionally or alternatively include configuration for management of access to computing resources which are in communication with the Connect Workstation. Such configuration may enable/disable access to particular computing resources through the Connect Workstation, enable/disable sharing of particular computing resources through the Connect Workstation, etc. For example, user device220A may have a plurality of printers coupled thereto, wherein a subset thereof is to be provided access by Connect Workstation according to the concepts herein. 
Accordingly, having installed Connect Workstation322upon user device220A, a user thereof may select/deselect (such as by radio button or checkbox within a dialog box presented by a user interface of the Connect Workstation configuration utility) particular ones of the plurality of printers to thereby make selected printers, but not deselected printers, available for access to cloud-based application211. Similarly, the user may select/deselect particular ones of the plurality of printers to thereby share selected printers, but not deselected printers, with other users of the channel group. Accordingly, access of the computing resource provided by the Connect Workstation of embodiments may be by or on behalf of various users and/or instances of cloud client applications. Additionally or alternatively, a user may select or otherwise provide configuration information with respect to the computing resources, such as to establish default ways in which computing resources are accessed, identify or select particular computing resources preferred or to be made available with respect to particular tasks, etc. Although the foregoing example has been made with reference to printers, such configuration operations may be implemented with respect to any configuration and/or combination of computing resources. At block402of the illustrated embodiment, Connect Workstation322is executed to provide operation as described herein. In operation according to embodiments, Connect Workstation322comprises a terminate-and-stay-resident (TSR) configuration (e.g., a program executing in the tool tray of MICROSOFT WINDOWS) which is operable as a background process providing operation as described herein. Through cooperation of the Connect Workstation and the cloud-based application, cloud-based application211is enabled to access and utilize computing resource230. 
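The enable/disable and share/unshare configuration just described can be sketched as a small filtering rule: local users of the host see every enabled resource, while other channel-group members see only resources also marked shared. The configuration shape, resource names, and function name below are all hypothetical illustrations, not the patent's data model.

```python
# Hypothetical per-resource configuration kept by a Connect Workstation
# after the user selects/deselects resources in the configuration utility.
resource_config = {
    "label-printer":  {"enabled": True,  "shared": True},
    "office-printer": {"enabled": True,  "shared": False},
    "old-printer":    {"enabled": False, "shared": False},
}

def available_resources(config: dict, requester_is_local: bool) -> list:
    """Resources exposed via the Connect Workstation: disabled resources
    are never exposed; non-shared resources are exposed only to the
    local user, not to other channel-group members."""
    return sorted(
        name for name, flags in config.items()
        if flags["enabled"] and (requester_is_local or flags["shared"])
    )
```

Under this rule the remote user of the earlier example would see only the printers the owner chose to share, while the owner retains access to every enabled printer.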
Embodiments of the invention preferably implement real-time member awareness with respect to channel group300, wherein both Connect Workstations and cloud client applications are considered as members of the channel group when associated therewith. Accordingly, as Connect Workstation322comes online or is otherwise initialized (e.g., after having established the channel and joined the channel group) the Connect Workstation's presence is announced to the other members of the channel group. For example, Connect Workstation322may broadcast a message providing information useful for implementing computing resource access according to the concepts herein. In accordance with an embodiment of the invention, Connect Workstation322broadcasts a member awareness message including the host system's identification (e.g., the user device computer or terminal name, such as JIM-LAPTOP-1) and/or Connect Workstation's identification (e.g., a unique or substantially unique identification string, such as CD05E0F2-5FF1-45CD-BB41-ADD8C56754AA). It should be appreciated that substantially unique, as used herein, comprises information or other item which is unique in the context of the operation described, although perhaps not unique in the universe at large. The member awareness message broadcast by Connect Workstation322of embodiments may additionally include a list of computing resources (e.g., printers, scanners, scales, memory devices, etc.) that have been configured as enabled or shared, preferably including information regarding the particular computing resources (e.g., indicating whether sharing is enabled for the computing resource, indicating a current operating state of the computing resource, providing information regarding a configuration of the computing resource, providing information regarding features of a computing resource, providing a name, model, and/or version of a computing resource, etc.). 
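A member awareness message of the kind described above, carrying the host name, the workstation's substantially unique identifier, and the advertised resource list, might be assembled as in the sketch below. The field names and function name are assumptions for illustration; only the example host name and identifier string come from the text.

```python
def build_member_awareness_message(host_name: str, workstation_id: str,
                                   resources: list) -> dict:
    """Assemble the announcement a Connect Workstation broadcasts to its
    channel group on coming online (field names are hypothetical)."""
    return {
        "type": "member-awareness",
        "host": host_name,                 # e.g., the terminal name
        "workstation_id": workstation_id,  # substantially unique id string
        "resources": [
            {
                "name": r["name"],
                "kind": r["kind"],      # printer, scanner, scale, ...
                "shared": r["shared"],  # whether sharing is enabled
                "state": r["state"],    # current operating state
            }
            for r in resources
        ],
    }

msg = build_member_awareness_message(
    "JIM-LAPTOP-1",
    "CD05E0F2-5FF1-45CD-BB41-ADD8C56754AA",
    [{"name": "label-printer", "kind": "printer",
      "shared": True, "state": "online"}],
)
```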
Accordingly, all members of the channel group, or at least all cloud client applications of the channel group of embodiments of the invention, may be made aware of the existence and availability of the Connect Workstation and/or the computing resources thereof. It should be appreciated that member awareness messages may be broadcast according to embodiments of the invention at times other than when a Connect Workstation comes online or is otherwise initialized. For example, the foregoing member awareness message, or some portion thereof, may be broadcast to provide updated information. In operation according to embodiments, in order to provide real-time computing resource availability and/or status information, member awareness messages may be broadcast by a Connect Workstation as computing resources thereof are taken offline or are placed online. Likewise, member awareness messages may be broadcast by a Connect Workstation as the Connect Workstation itself is taken offline. Such updated member awareness messages may comprise less information than the above described member awareness messages, such as to include only updated information, information for members having updated information, etc. Member awareness messages may be communicated by means other than by broadcast according to embodiments of the invention. Member awareness messages of embodiments herein may be direct messaged or multi-cast in appropriate situations. For example, cloud client applications may come online quietly, without announcing their presence to the channel group despite their having joined the channel group. The joining of the channel group by such a cloud client application may be subsequent to a Connect Workstation coming online and broadcasting its member awareness messages. 
Accordingly, Connect Workstations of embodiments monitor activity with respect to the channel group and/or interact with channel group control functionality (e.g., authentication module312and/or channel management module313) to detect the arrival of other members, such as cloud client applications. In accordance with some embodiments, such channel group control functionality may provide signaling to make members aware of when a new member joins and/or leaves the channel group. In operation according to embodiments, such Connect Workstations proactively transmit a direct message to the newly joined member, to thereby provide a member awareness message to the newly joined member which was not present for the broadcast member awareness message. In operation according to embodiments of the invention, a registry is kept of members of the channel group and their status to thereby facilitate members determining the status of other members at any time. For example, each Connect Workstation and/or each cloud client application may keep a registry of all Connect Workstations and/or all cloud client applications which are currently part of the channel group. Such a registry may be generated and maintained using the aforementioned member awareness messages. Using such a registry, a cloud client application, Connect Workstation, and/or cloud-based application, as well as users thereof, can immediately detect changes in other members' presence and can respond accordingly. At block403of the embodiment illustrated inFIG.4, a cloud client application is employed to interact with the cloud-based application for performing desired functionality. Staying with the foregoing example of shipping management operation, a user may utilize cloud client application221A-1to interact with cloud-based application211and invoke various desired shipping management functionality. 
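The member registry generated and maintained from awareness messages, as described above, can be sketched as a small state machine that applies each incoming message to a table of members. The class, method, and message-type names are hypothetical.

```python
class MemberRegistry:
    """Tracks channel-group members and their advertised resources,
    built and maintained from member awareness messages."""

    def __init__(self):
        self.members = {}  # workstation_id -> member record

    def apply(self, message: dict) -> None:
        """Fold one channel-group message into the registry."""
        if message["type"] == "member-awareness":
            self.members[message["workstation_id"]] = {
                "host": message["host"],
                "resources": message.get("resources", []),
                "online": True,
            }
        elif message["type"] == "member-offline":
            self.members[message["workstation_id"]]["online"] = False

    def online_members(self) -> list:
        """Members currently present, usable to populate resource pickers."""
        return [wid for wid, m in self.members.items() if m["online"]]

reg = MemberRegistry()
reg.apply({"type": "member-awareness", "workstation_id": "ws-322",
           "host": "JIM-LAPTOP-1", "resources": []})
reg.apply({"type": "member-offline", "workstation_id": "ws-322"})
```

Because every member folds the same message stream into its own registry, each can immediately detect changes in the others' presence, as the text notes.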
Where such functionality employs the use of computing resource230, such as for printing a shipping label or other document, cloud client application221A-1may provide command messages to cloud-based application211via communication link301to cause cloud-based application211to perform a particular task associated with the desired functionality, such as to generate a shipping label. For example, in operation according to the illustrated embodiment, if, while employing the cloud client application, it is determined that a task employs access to a computing resource for which the cloud-based application is not itself provided control at block404, processing proceeds to block405for operation to employ the appropriate Connect Workstation to access the desired computing resource. Where, however, no such computing resource is employed by the task, processing by block404returns to block403for continued performance of the desired functionality. At block405of the illustrated embodiment, cloud client application221A-1may provide one or more command messages to perform a particular task associated with the desired functionality, which employs operation of computing resource230, such as to print the shipping label. For example, interaction with cloud-based application211which initiates a task employing a computing resource for which the cloud-based application does not otherwise have control of, may result in a dialog box being presented by cloud client application221A-1for selection of a particular resource of the computing resources available through channel group300, such as may be populated using the aforementioned registry maintained by the cloud client application. This information, perhaps as well as other information collected from the user and/or available to the cloud client application, may be utilized to generate a command message (e.g., a print control message in the present example). 
Because cloud-based application211is not itself provided control of computing resource230, the aforementioned command for printing is provided to Connect Workstation322which is provided control of computing resource230. For example, using virtual communication link304, cloud client application221A-1may provide a print command message to Connect Workstation322. In operation according to embodiments, the command messages provided by a cloud client application to a Connect Workstation for implementing access to a computing resource includes information describing the desired operation. For example, the command message may include identification of the computing resource for which access is desired, the particular access or operation desired, data for facilitating the access or operation, etc. Such information may be provided by the user and/or may be collected or provided automatically, such as by the cloud client application, the cloud-based application, and/or the Connect Workstation. For example, a user may provide default configuration information to cloud-based application211in association with the user's account, such that the default configuration information is available when that user accesses the cloud-based application. This default configuration information may include information regarding a preference of computing resources or computing resource type to be used with respect to particular tasks, the configuration of the computing resources, how the status of particular tasks are to be reported, etc. When initiating a task which accesses a computing resource, the default configuration information may be accessed to populate appropriate information within a command message. Of course, the user may be provided a dialog box or other means by which any or all such information being applied to a particular task may be changed, whether permanently or on an instance by instance basis. 
Continuing with the foregoing shipping label example of the shipping management operation, the print command message may identify the printer to be used, the print file(s) generated and stored by the cloud-based application to be printed (e.g., the document descriptor(s)), and the identity of the user and/or cloud client application requesting the access. This information may be utilized to select the appropriate computing resource, the appropriate data for the printing operation, and to determine if the user or cloud client application is authorized to invoke access to the computing resource. Assuming that the computing resource is appropriate for the requested task and the user/cloud client application is authorized to utilize the computing resource, Connect Workstation322of embodiments may use the information in the command message to communicate with cloud-based application211and obtain an electronic copy of the document(s) (e.g., one or more shipping labels) to be printed and to identify the particular computing resource (e.g., the particular printer) to be utilized for the printing operation. When obtaining the electronic copy of the document(s) or otherwise communicating with the cloud-based application for implementing access to a computing resource, the Connect Workstation may provide information regarding the particular computing resource selected (e.g., information regarding the computing resource included in the command message, or some portion thereof, or which is otherwise available to the Connect Workstation) to the cloud-based application, such as to facilitate formatting of the electronic document for printing (e.g., to provide appropriate page format, compatible print file, aspect ratio, data format, etc.) or otherwise providing processing to facilitate the desired access of the selected computing resource. The command message may comprise additional or alternative information. 
The additional or alternative information included in a command message may comprise information such as information to configure the computing resource for the desired access action, to facilitate communications and control between the cloud client application and Connect Workstation, and/or the like. For example, the command message is preferably part of a dialog between the cloud client application and the Connect Workstation. Accordingly, the command message may include unique or substantially unique dialog identification information (e.g., a request identifier) to be included in messages of the dialog for facilitating identification of the messages with the particular dialog. As an example, after having provided the print command message to the Connect Workstation, the cloud client application may receive an acknowledgement/negative-acknowledgement (e.g., pass/fail response) message from the Connect Workstation indicating that the print command has been received and whether the requested action is to be attempted on behalf of the cloud client application. Assuming the action is attempted, one or more status messages may be communicated by the Connect Workstation to the cloud client application, such as to provide information regarding the interim status of the action and/or to provide notification of the successful or unsuccessful termination of the action. Having received the aforementioned command message, and assuming the identified computing resource is online and properly accessible by or on behalf of cloud client application221A-1, operation at block405preferably implements the desired access, such as to result in the printing of the shipping label by computing resource230(comprising a printer in this example). Thus, Connect Workstation322of embodiments may operate to arbitrate access to computing resource230by cloud-based application211. 
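The command/acknowledgement/status dialog described above, tied together by a request identifier, might look like the following sketch. The message shapes and function names are hypothetical; only the idea of a dialog id echoed in every reply comes from the text.

```python
import uuid

def build_print_command(resource_name: str, document_ids: list,
                        requester: str) -> dict:
    """A hypothetical print command message; the request_id is the
    dialog identification information tying later replies to it."""
    return {
        "type": "print-command",
        "request_id": str(uuid.uuid4()),
        "resource": resource_name,     # which computing resource to use
        "documents": document_ids,     # document descriptors to print
        "requester": requester,        # identity for authorization checks
    }

def ack_for(command: dict, accepted: bool) -> dict:
    """Acknowledgement/negative-acknowledgement echoing the dialog id."""
    return {"type": "ack" if accepted else "nack",
            "request_id": command["request_id"]}

def status_for(command: dict, state: str) -> dict:
    """Interim or terminal status message for the same dialog."""
    return {"type": "status", "request_id": command["request_id"],
            "state": state}

cmd = build_print_command("label-printer", ["shipping-label-001"],
                          "user-221A-1")
reply = ack_for(cmd, accepted=True)
done = status_for(cmd, "complete")
```

Matching replies to requests by id lets the cloud client application run several resource-access dialogs concurrently over the same channel.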
It should be appreciated that operation according to embodiments provides for communication of data (e.g., print documents, scan files, scale data, etc.) to/from computing resource230between cloud-based application211and Connect Workstation322without transmission to user interface client application221-1or an intermediate server (e.g., a cloud print server). Moreover, Connect Workstation322of embodiments is configured to utilize data which is optimized for the operation of cloud-based application functionality, such as to use Zebra Programming Language (ZPL) page description language files which are highly efficient with respect to labeling applications. Additionally, the access to computing resource230appears to the user of cloud client application221A-1as having been initiated and controlled in accordance with operation of a localized application rather than a cloud-based application. Embodiments implementing configurations of a Connect Workstation as described above not only facilitate operation whereby a cloud-based application is enabled to provide output to and/or obtain input from computing resources, but also facilitate remote access to and/or sharing of various computing resources. For example, a user of computing resource230need not be utilizing cloud client application221A-1executing on user device220A local to the computing resource in order to access computing resource230. The user may instead be utilizing cloud client application221A-2executing on user device220D, as may be disposed remotely with respect to user device220A hosting computing resource230. Utilizing the channel group as discussed above, operation of cloud client application221A-2would be essentially as described above for cloud client application221A-1, except that virtual communication link305would be utilized for command and control messages between cloud client application221A-2and Connect Workstation322. 
Not only does such a configuration facilitate remote access to the computing resource by the user, but also allows the user to utilize user devices which otherwise are not capable of supporting the use of the computing resource. For example, user device220D may comprise a smart phone which itself does not directly support access to computing resource230. Nevertheless, the user may invoke the cooperation of cloud client application221A-2and Connect Workstation322to access computing resource230in performing one or more tasks for the functionality of cloud-based application211. Users might, for example, use such robust access to the computing resources to facilitate their use of highly portable user devices (e.g., smart phones, PDAs, and/or tablet devices) to utilize the cloud-based application “in the field.” Continuing with the above example of shipping management operation, a user utilizing a tablet device may walk the floor of a fulfilment warehouse, interacting with shipping management functionality of the cloud-based application to update order fulfilment records and print associated documents, such as packing slips, bills of lading, shipping labels, etc. Similarly, the access to computing resource230provided by the Connect Workstation of embodiments is not limited to a particular user having ownership or possession of the computing resource. For example, user device220D may be utilized by a user which, although not the owner or possessor of computing resource230, is nevertheless a member of the channel group (e.g., employee of a same company or member of the same family as the owner or possessor of computing resource230). Accordingly, assuming that the computing resource230was selected for sharing in configuration of Connect Workstation322and that the user and/or cloud client application221A-2is otherwise authorized to access computing resource230, cloud client application221A-2may be provided shared access to computing resource230. 
Such embodiments may be utilized to provide a companywide network of computing resources which are made available to authorized users, irrespective of the physical locations of the computing resources and/or the particular user devices utilized by a user. It should be appreciated that the access to computing devices provided according to embodiments of the invention is not limited to access directly associated with performing tasks of the cloud-based application functionality. For example, embodiments implementing configurations of a Connect Workstation as described above facilitate operation whereby support personnel may be provided robust remote access to computing resources, such as for diagnostic and troubleshooting purposes. By joining a channel group of which a Connect Workstation in communication with a particular computing resource is a member, such support personnel may utilize a support application, having functionality similar to that of a cloud client application for communicating with the Connect Workstation, to control access to the particular computing resource, such as to determine an operational state, currently implemented configuration parameters, to communicate command and control instructions to the computing resource, etc. Additionally or alternatively, such support personnel may utilize such a support application to obtain various data, such as capturing “screen shots” from the user's device, to facilitate an understanding of the operation the user is experiencing and to enable efficient diagnostics and problem resolution. Thus, such support personnel may (e.g., with the consent and permission of the user) avoid engaging a user to interact with a problematic computing resource and relay information with respect thereto. 
As can be appreciated from the foregoing, although embodiments implement a user interface client application and associated cloud client application which is separate from the Connect Workstation, tight integration between a cloud client application and Connect Workstation is provided according to embodiments. Thus, a user may be provided control of various computing resources, whether disposed locally or remotely with respect to the user, through the cloud client application and associated cloud-based application using the aforementioned communication with the Connect Workstation. Additionally or alternatively, a user may be provided information regarding the various computing resources of one or more channel group, such as the availability and/or operational state of a resource (e.g., online/offline, requires attention, toner low, paper jam, etc.), the status of a task (e.g., printing in process/complete, scanning in process/complete, etc.), and/or the like. Embodiments of the invention which implement a user interface client application and associated cloud client application which is separate from the Connect Workstation provide advantages in the separate, possibly continued, operation of the Connect Workstation. For example, a user may not continually utilize the cloud client application to access the corresponding cloud-based application or may otherwise experience periods of not being logged into the cloud-based application. However, embodiments of the separate Connect Workstation operating as a background task may nevertheless remain connected to the cloud-based application and thus in communication therewith, such as to receive update messages, alerts, initiate operation of the cloud client application, etc. Continuing with the foregoing example of shipping management functionality, the cloud-based shipping management application may send out general messages regarding the client's account from time to time. 
A user of the channel may not be logged into the cloud-based shipping management application when such messages are transmitted and thus, without operation of an embodiment of the Connect Workstation, would not see the notifications of these events. An example of such an event notification is when the cloud-based shipping management application automatically (and intermittently) imports orders to a client's account from their external selling platform (e.g., Amazon.com, eBay.com, etc.). When this occurs, the cloud-based shipping management application may operate to send out a notification to the client's channel that “X new orders were just imported to the shipping management application.” Since the Connect Workstation of embodiments is actively running in the background of a user's device, and since the Connect Workstation is actively listening in on the group channel, the Connect Workstation can show a notification on the user's device (e.g., present a dialog box on a computer desktop). Such message presentation may be in addition to or in the alternative to any messaging that may be provided by the cloud client application (should it also be operating and logged in at a time of the message transmission). For many smaller entities that do not spend extended periods of time logged into a cloud-based application, the foregoing operation facilitates their being notified instantly when new orders are imported (or a similar noteworthy event occurs for their account). In operation according to embodiments, a user may click on the notification to open their browser to the cloud client application. This means that such users are alleviated of a need to intermittently log into the cloud-based application throughout the day to check for new activity. 
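The background-listening behavior just described, in which channel messages that arrive while no cloud client is logged in are surfaced as local notifications, can be sketched as below. The class is a hypothetical stand-in: desktop notifications are modeled as a list rather than actual dialog boxes.

```python
class BackgroundNotifier:
    """Sketch of the Connect Workstation listening in on the group
    channel in the background; account-event messages received while
    the user is not logged into the cloud-based application are
    surfaced locally (modeled here by collecting them in a list)."""

    def __init__(self):
        self.shown = []  # stands in for desktop notification dialogs

    def on_channel_message(self, message: dict,
                           user_logged_in: bool) -> None:
        # When the cloud client is logged in, it handles its own
        # messaging; otherwise the background workstation notifies.
        if message["type"] == "account-event" and not user_logged_in:
            self.shown.append(message["text"])

notifier = BackgroundNotifier()
notifier.on_channel_message(
    {"type": "account-event",
     "text": "3 new orders were just imported"},
    user_logged_in=False,
)
```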
Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
11943153
DETAILED DESCRIPTION The inventors have recognized significant disadvantages of conventional approaches to supporting telephone calls. Each telephony technology has the potential of introducing brief interruptions in a call. For example, a call supported by a wireless network may be briefly interrupted in both directions when a wireless phone moves to a location where it no longer has a line-of-sight to the wireless network tower to which it has been connected, requiring the phone to negotiate a connection to a tower to which it does now have a line-of-sight, or when the wireless phone moves from one cell of the wireless network to another. Voice over IP connections may face IP network congestion, and Public Switched Telephone Networks, Session Initiation Protocol (SIP), H.323, and Media Gateway Control Protocol connections can encounter electromagnetic interference. The inventors have observed that these interruptions—which typically substitute silence or loud, discordant noise for their calling partner's voice—often throw conversations off course, forcing one or more participants to try to understand their partner's later speech after the interruption ends without the benefit of hearing their earlier speech during the interruption. To overcome these disadvantages, the inventors have conceived and reduced to practice a software and/or hardware facility for using buffered audio to overcome lapses in telephony signal (“the facility”). In a telephone call, each “side” of the call has a directional path conveying an audio signal from a microphone near one of the participants to a speaker near the other participant. 
For example, for a call in which each of the participants is using a mobile phone, each side of the call might have a path that begins in a speaking participant's mobile phone; traverses a radio link from the speaking participant's mobile phone to a wireless tower; continues as the tower transmits the data to a local data center, where the core and IMS reside and the data is processed; traverses wired links to a switch; traverses wired links to a wireless tower near the listening participant's phone; and traverses a radio link from that wireless tower to the listening participant's phone. In some embodiments, the facility selects one or more of a call's links to protect with buffering. In the above example, the four wireless links may be regarded as particularly vulnerable to lapses, and are therefore each protected by the facility. For each link protected by the facility with buffering, the facility operates a buffer at the upstream end of the link. Audio received at this buffer during a lapse of the downstream link is recorded by the buffer, and played from the buffer through the link when the lapse ends. In some embodiments, the facility continues recording audio newly-received at the buffer while earlier-received audio is being played from the buffer, so that none of this side of the conversation is lost. In some embodiments, the facility plays buffer contents at a faster rate than they were recorded, such as twice as fast, in order to “catch up to live” more quickly. In some embodiments, the facility plays a tone or other brief sound immediately before playing buffer contents, in order to alert the listening participant that cached audio for this side is about to be played. In some embodiments, some or all of the buffers operated by the facility are continuous buffers that are always recording the audio received at the buffer, and are indexed by time of day. 
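A continuous, time-of-day-indexed buffer of this kind can be sketched as follows (a minimal illustration in Python; the class name, chunk layout, and residency window are assumptions of this sketch, not details from the disclosure):

```python
import time
from collections import deque

class ContinuousBuffer:
    """Sketch of a circular audio buffer that always records and is
    indexed by time of day. Chunk sizes, method names, and the residency
    window are hypothetical choices for illustration."""

    def __init__(self, residency_seconds=30.0):
        self.residency = residency_seconds
        self._chunks = deque()  # (timestamp, audio_bytes), oldest first

    def record(self, audio_bytes, timestamp=None):
        """Append newly received audio and discard audio older than the
        residency window (the circular-buffer behavior)."""
        ts = time.time() if timestamp is None else timestamp
        self._chunks.append((ts, audio_bytes))
        while self._chunks and ts - self._chunks[0][0] > self.residency:
            self._chunks.popleft()

    def replay_from(self, lapse_start):
        """Return chunks recorded at or after the lapse starting time,
        as a lapse remediator would play them back."""
        return [audio for ts, audio in self._chunks if ts >= lapse_start]
```

A shorter residency window uses less memory but can only cover lapses up to that length, which is the trade-off the text describes.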
At the downstream end of the protected link, a lapse detector and notifier monitors for lapses, and stores their starting time. The lapse detector and notifier is connected to a lapse remediator at the upstream end of the protected link. As soon after the lapse starting time as the lapse detector and notifier can communicate with the lapse remediator (the lapse itself may interrupt this communication), the lapse detector and notifier sends a lapse message to the lapse remediator containing the lapse starting time. In response to receiving the lapse message, the lapse remediator controls the buffer to begin playback from the starting time index, and the buffer continues to play these buffer contents until it catches up to live. In some embodiments, the facility uses a circular buffer as this continuous buffer. In some embodiments, some or all of the buffers operated by the facility are selective buffers that are controlled to record only during lapses. At the upstream end of the protected link, a lapse detector and remediator monitors for lapses of the link. When the lapse detector and remediator detects the beginning of a lapse, it controls the buffer to begin recording the audio received at the buffer. When the lapse detector and remediator detects the end of a lapse, it controls the buffer to begin playing back the recorded audio, and the buffer continues to play these buffer contents until it catches up to live. In some embodiments, the facility performs voice transcription on audio stored in some or all of its buffers, and transmits the resulting text for display to the listening participant, such as in an SMS message. In some embodiments, the facility operates with respect to calls in which more than two people are simultaneously communicating. 
In a manner similar to that discussed elsewhere herein, the facility uses buffers to protect some or all of the links used to connect the participants in such larger calls. By performing in some or all of the ways discussed above, the facility provides all of the audio spoken by each call partner to the other, even audio spoken during time periods when a link connecting the call lapsed. This measure of resiliency added by the facility makes calls more straightforward, useful, time-efficient, and comfortable for their participants. Also, the facility improves the functioning of computer or other hardware, such as by reducing the dynamic display area, processing, storage, and/or data transmission resources needed to perform a certain task, thereby enabling the task to be performed by less capable, capacious, and/or expensive hardware devices, and/or be performed with less latency, and/or preserving more of the conserved resources for use in performing other tasks or additional instances of the same task. For example, the facility can significantly reduce the duration of a call—by eliminating portions of the call during which the participants would discuss the interruption caused by the lapse, and those during which they would repeat the information each provided during the lapse that was not heard—and thus the length of time for which the call occupies hardware resources. This permits the same number of calls to be supported using lower levels of hardware resources, or a greater number of calls to be supported using the same hardware resources. FIG.1is a context diagram of an environment in which the facility operates in some embodiments. Environment100includes a plurality of cells112a-112c, a plurality of wireless phones or other user devices124a-124b, and a communication network110. 
Illustratively, the cells112a-112ccorrespond to cell sites (e.g., cellular towers) that together implement a 5G cellular communications network, or a wireless communications network using another standard. The cells112a-112cmay include or be in communication with base stations, radio back haul equipment, antennas, or other devices, which are not illustrated for ease of discussion. Each cell112a-112cprovides cellular communications over a coverage area. The coverage area of each cell112a-112cmay vary depending on the elevation of the antenna of the cell, the height of the antenna of the cell above the ground, the electrical tilt of the antenna, the transmit power utilized by the cell, or other capabilities that can be different from one type of cell to another or from one type of hardware to another. Although embodiments are directed to 5G cellular communications, embodiments are not so limited and other types of cellular communications technology may also be utilized or implemented. In various embodiments, the cells112a-112cmay communicate with each other via communication network110. Communication network110includes one or more wired or wireless networks, which may include a series of smaller or private connected networks that carry information between the cells112a-112c. The user devices124a-124care computing devices that receive and transmit cellular communication messages with the cells112a-112c, e.g., via antennas or other means. Examples of user devices124a-124cmay include, but are not limited to, mobile devices, smartphones, tablets, cellular-enabled laptop computers, or other UE or computing devices that can communicate with a cellular network. FIG.2is a block diagram showing some of the components typically incorporated in at least some of the computer systems and other devices on which the facility operates. 
In various embodiments, these computer systems and other devices200can include server computer systems, cloud computing platforms or virtual machines in other configurations, desktop computer systems, laptop computer systems, netbooks, mobile phones, personal digital assistants, televisions, cameras, automobile computers, electronic media players, etc. In various embodiments, the computer systems and devices include zero or more of each of the following: a processor201for executing computer programs and/or training or applying machine learning models, such as a CPU, GPU, TPU, NNP, FPGA, or ASIC; a computer memory202for storing programs and data while they are being used, including the facility and associated data, an operating system including a kernel, and device drivers; a persistent storage device203, such as a hard drive or flash drive for persistently storing programs and data; a computer-readable media drive204, such as a floppy, CD-ROM, or DVD drive, for reading programs and data stored on a computer-readable medium; and a network connection205for connecting the computer system to other computer systems to send and/or receive data, such as via the Internet or another network and its networking hardware, such as switches, routers, repeaters, electrical cables and optical fibers, light emitters and receivers, radio transmitters and receivers, and the like. In some embodiments, devices that are wireless phones or other devices capable of placing and conducting audio calls include additional components, such as a microphone for capturing speech; a speaker for outputting speech; one or more radios and antennas for wireless communication; data encoding, compression, encryption, and routing mechanisms; etc. 
While computer systems configured as described above are typically used to support the operation of the facility, those skilled in the art will appreciate that the facility may be implemented using devices of various types and configurations, and having various components. FIG.3is a data flow diagram showing operation of the facility in some embodiments in a generalized environment using a continuous buffer. The diagram shows a call between participant A310and participant B390. The call is made up of two paths. The first is from participant A's microphone311, through link313to node320, through link330to node340, and through link334to participant B's speaker392. This path conveys a first “side” of the call, in which participant A's speech is conveyed to participant B. The second depicted path corresponds to a second side of the call, in which participant B's speech is conveyed to participant A. In this path, data travels from participant B's microphone391through link393to node340, through link350to node320, and through link316to participant A's speaker312. As shown in this generalized environment, the intermediate nodes are nodes320and340, and the links are links313,330,334,393,350, and316. The nodes may variously be mobile phones, landline phones, VOIP phones, phones of other types, or devices of other types capable of operating as a telephony terminal and making and/or receiving phone or other voice calls. The nodes may also be nodes that are intermediate to phones or other telephony devices, mobile base stations, routers, switches, servers, etc. Similarly, the links can be any of a wide variety of links capable of conveying telephony data, including wireless links, wired links, optical fiber links, unguided laser or light links, etc. The telephony data may be encoded in any of a wide variety of ways on these links, including any kind and/or number of layers for compression, encryption, routing, data integrity, billing, routing security, etc. 
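The generalized environment above can be modeled as each side of the call being an ordered list of links, with the facility selecting which links to protect. A minimal sketch, assuming wireless links are the ones regarded as vulnerable (the `Link` type, link names, and selection rule are illustrative, not from the disclosure):

```python
from dataclasses import dataclass

@dataclass
class Link:
    """One link in a call side's directional path."""
    name: str
    kind: str  # e.g. "wireless", "wired", "fiber"

# One side of the call, modeled loosely on the mobile-phone example:
# a radio link, a wired intermediate link, and a radio link to the listener.
side_a_to_b = [
    Link("313", "wireless"),
    Link("330", "wired"),
    Link("334", "wireless"),
]

def links_to_protect(path, vulnerable_kinds=("wireless",)):
    """Select the links regarded as particularly vulnerable to lapses,
    which the facility would protect with buffering."""
    return [link for link in path if link.kind in vulnerable_kinds]
```

A logical protected link can also be a compound of several physical links, as the text notes; in that case one `Link` entry would stand for the whole compound span.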
The selection of nodes in which the facility implements lapse detector and notifiers and lapse remediators defines the logical links that are protected. In some embodiments, these logical links can be compound links made up of two or more physical links, which may be of the same or different types, and which may be joined by nodes in which the facility does not implement lapse detector and notifiers or lapse remediators. The diagram shows the facility being used to protect two links with voice data buffering: protected link330from node320to node340in the first path, and protected link350from node340to node320in the second path. With respect to protected link330, the facility causes a continuous buffer324outfitted in node320to continuously record the audio data from participant A's microphone311received by node320in link313. In some embodiments, continuous buffer324and the other continuous buffers discussed herein are circular buffers. A circular buffer retains the last m minutes or s seconds of received audio, and discards audio that is older. In various embodiments, the facility selects this buffer residency time in a manner that balances the ability to fully store audio for the length of expected lapses against the amount of memory consumed by the buffer and the fidelity of the audio stored in the buffer. In various embodiments, the facility uses a continuous buffer length of five seconds, 10 seconds, 15 seconds, 20 seconds, 30 seconds, 45 seconds, 60 seconds, 90 seconds, two minutes, five minutes, etc. As participant A's audio signal continues on through link330toward node340, a lapse detector and notifier341of node340monitors receipt of audio data via link330. 
If the lapse detector and notifier detects a lapse—such as the failure to receive any signal from node320, receiving data from node320that contains no discernible audio, or audio determined by the detector to be of low quality—then the lapse detector and notifier stores the time at which the beginning of the lapse is detected. The lapse detector and notifier continues to monitor link330, seeking to identify the time at which the lapse ends, such as when a signal via the link is restored, or when audio received via the link is determined to be of an adequate quality. At this time, the lapse detector and notifier stores the lapse ending time, and notifies node320of the just-ended lapse. It does this by sending an A-to-B lapse signal342from the lapse detector and notifier to a lapse remediator323of node320. In various embodiments, the lapse signal is sent by the same or a different means than is link330. When the lapse remediator receives the A-to-B lapse signal, it controls buffer324in order to begin playing the buffer's contents starting at the lapse beginning time contained in the lapse signal. In some embodiments, before beginning the playing of the buffer's contents, the facility plays a distinctive tone or other short sound, or a recorded or synthesized voice message, indicating that buffered call audio will follow. As the buffer plays this audio to replace the corresponding audio that was lost during the lapse, the buffer continues to record participant A's audio received via link313, without immediately passing it through to link330. In some embodiments, the buffer plays its contents at a higher rate than they were recorded, such as 1.25 times as fast, 1.5 times as fast, 1.75 times as fast, twice as fast, 2.5 times as fast, three times as fast, four times as fast, etc. This acceleration of the played-back audio permits the first side of the call to catch up with participant A's present speech. 
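Under the simplifying assumption that new audio keeps arriving in real time while the buffer plays at an accelerated rate, the backlog shrinks by (rate − 1) seconds per second of playback, so the catch-up time can be estimated directly (a hypothetical helper, not part of the disclosure):

```python
def catch_up_time(lapse_seconds, playback_rate):
    """Seconds needed to catch up to live after a lapse, assuming audio
    keeps arriving at 1x while the buffer drains at `playback_rate`x.

    The backlog is B(t) = lapse_seconds - (playback_rate - 1) * t,
    which reaches zero at t = lapse_seconds / (playback_rate - 1).
    """
    if playback_rate <= 1.0:
        return float("inf")  # playback at 1x or slower never catches up
    return lapse_seconds / (playback_rate - 1.0)
```

For example, a 10-second lapse played back twice as fast catches up in 10 more seconds, while at 1.5 times as fast the same lapse takes 20 seconds, which illustrates the trade-off against intelligibility that the text discusses next.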
The facility chooses a playback rate that balances catch-up time against intelligibility. In order to boost intelligibility, in some embodiments the facility processes the played-back audio to reduce its frequency, via techniques such as frequency filtering, frequency reduction, pitch scaling, audio time stretching, etc. In various embodiments, the facility uses additional techniques in order to hasten catchup with participant A, including deletion or shortening of periods of silence in the played-back audio. This audio played back from buffer324is sent via link330to node340, and through link334to participant B's speaker392. When the buffer playback catches up in the sense that the end of the recorded audio is reached—the last thing participant A said having just been replayed—then the lapse remediator causes audio received from the participant A microphone via link313to be routed again to link330toward node340. Bracket331shows the extent of protection for A-to-B audio, from a transmitter in node320through link330. A similar range of protection351is provided by the facility to the second side of the call. It can be seen that the facility operates in an analogous way along the second path, conveying participant B's speech from the participant B microphone391to the participant A speaker312, protecting this side of the call from lapses that occur traveling from node340to node320. In particular: lapse detector and notifier321behaves in a way analogous to lapse detector and notifier341, sending B-to-A lapse signal322in a way analogous to A-to-B lapse signal342; lapse remediator343behaves in a way analogous to lapse remediator323; and continuous buffer344behaves in a way analogous to continuous buffer324. FIG.4is a flow diagram showing a process performed by the facility in some embodiments in order to operate the lapse detector and notifier in a node downstream from a link that is protected by the facility with a continuous buffer. 
In act401, the facility determines whether a lapse has begun, in some or all of the manners discussed above in connection withFIG.3. If a lapse has not begun, then the facility continues in act401to continue monitoring, else the facility continues in act402. In act402, the facility stores the time at which the lapse began. In act403, the facility monitors to identify the end of the lapse in some or all of the ways discussed above in connection withFIG.3. If the lapse has not ended, then the facility continues in act403to continue monitoring, else the facility continues in act404. In act404, the facility stores the time at which the lapse ended. In act405, the facility notifies the upstream lapse remediator of the lapse, including in the lapse message the lapse beginning time stored in act402and the lapse ending time stored in act404. After act405, the facility continues in act401to monitor for the beginning of the next lapse. Those skilled in the art will appreciate that the acts shown inFIG.4and in each of the flow diagrams discussed below may be altered in a variety of ways. For example, the order of the acts may be rearranged; some acts may be performed in parallel; shown acts may be omitted, or other acts may be included; a shown act may be divided into subacts, or multiple shown acts may be combined into a single act, etc. FIG.5is a flow diagram showing a process performed by the facility in some embodiments in order to operate the lapse remediator used by the facility in connection with continuous buffers. In act501, the facility determines whether a lapse notification has been received; if so, the facility continues in act502, else the facility continues in act501to continue monitoring. In act502, the facility causes an alert sound to be sent via the protected link. In act503, the facility causes the contents of the buffer between the beginning time included in the received lapse notification and catch-up time to be sent via the protected link. 
As discussed above, in various embodiments, the facility performs various approaches to rate acceleration, silence shortening, and frequency correction to the played buffer contents before sending them across the protected link. After act503, the facility continues in act501. FIG.6is a data flow diagram showing operation of the facility in some embodiments using a continuous buffer with respect to a call in which both participants are using wireless phones. The diagram shows a wireless phone610used by participant A, as well as a wireless phone660used by participant B. The diagram shows two nodes630and640intermediate to the wireless phones. A link620from wireless phone A to node630is protected, as is a link650from node640to wireless phone B660. In the second path, conveying the second side of the call, a link670from wireless phone B to node640is protected, as is a link680from node630to wireless phone A. In some embodiments, these four protected links each contain the wireless communications between a wireless phone and a base station. In some embodiments, the intermediate node is in the base station, such that the protected link contains only the wireless communications to the base station. In some embodiments, the node is somewhere in a wired network between the base station and the other wireless phone, such as a switch, a server, etc.; in these cases, the protected link covers the entire span from the wireless phone to this other node. As shown, links635and645between nodes630and640are not protected. It can be seen by comparingFIG.6toFIG.3that the facility protects these four protected links in the same way as is shown inFIG.3and described in connection therewith, using lapse detector and notifiers611,631,641, and661, which can send lapse signals612,632,642, and662; lapse remediators613,633,643, and663; and continuous buffers614,634,644, and664. Those skilled in the art will appreciate that a particular side of the call can include any number of protected links. 
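The monitoring loop described in connection with FIG. 4 (acts 401 through 405) can be sketched over a timeline of link samples; the input format below is a hypothetical stand-in for live link monitoring, not an interface from the disclosure:

```python
def detect_lapses(link_states):
    """Sketch of the FIG. 4 loop over (timestamp, link_up) samples:
    for each detected lapse, emit one lapse message carrying its
    beginning and ending times, as sent to the upstream remediator."""
    lapses = []
    lapse_begin = None
    for timestamp, link_up in link_states:
        if lapse_begin is None and not link_up:
            lapse_begin = timestamp  # acts 401-402: lapse began; store time
        elif lapse_begin is not None and link_up:
            # acts 403-405: lapse ended; store time and notify upstream
            lapses.append({"begin": lapse_begin, "end": timestamp})
            lapse_begin = None       # resume monitoring for the next lapse
    return lapses
```

Each returned dictionary corresponds to one lapse message; the remediator would use the `begin` index to start playback from the continuous buffer.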
Also, though not shown, the two sides of a single call may traverse different paths, which may be protected similarly or differently by the facility. FIG.7is a data flow diagram showing operation of the facility in some embodiments in a generalized environment where a selective buffer is used by the facility rather than a continuous buffer. When audio from the microphone711in participant A's wireless phone710is received by node720via link713, it is passed forward to protected link730, and not initially recorded by selective buffer724. Audio from node720via link730is received at node740, which passes it via link734to participant B's wireless phone790, where it is played over speaker791. A lapse detector and remediator723in node720monitors the status of protected link730to detect any lapses. In some embodiments, the lapse detector and remediator does this by monitoring a stream of acknowledgment messages that would be received from node740if link730was intact and operating properly. In various embodiments, the facility uses various other approaches to monitoring for lapses in the lapse detector and remediator723. When the lapse detector and remediator identifies the beginning of a lapse, it controls selective buffer724to begin recording participant A's audio received via link713. The lapse detector and remediator proceeds to monitor for the end of the present lapse. When the lapse detector and remediator detects the end of the present lapse, it directs the selective buffer to play its complete contents for transmission in link730, continuing to record participant A's audio received via link713without immediately passing it to link730. This playing proceeds in the various manners discussed above. When playing catches up to participant A's live audio—that is, the buffer is emptied—the lapse detector and remediator routes participant A's audio received via link713directly to link730. 
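The selective-buffer behavior just described can be sketched, ignoring the accelerated catch-up phase, as a pass-through that records during lapses and flushes the buffer, preceded by an alert sound, once the link recovers (the chunked inputs and alert placeholder are assumptions of this sketch):

```python
ALERT = "alert-tone"  # hypothetical placeholder for the short alert sound

def selective_buffer_cycle(chunks, link_up_during):
    """Sketch of selective buffering: `chunks` is a sequence of audio
    chunks and `link_up_during[i]` says whether the downstream protected
    link was up when chunk i arrived. Returns what is sent downstream."""
    sent, buffer = [], []
    for chunk, link_up in zip(chunks, link_up_during):
        if link_up:
            if buffer:
                # Lapse just ended: send the alert sound, then the
                # entire recorded contents of the selective buffer.
                sent.append(ALERT)
                sent.extend(buffer)
                buffer = []
            sent.append(chunk)    # link up: pass audio straight through
        else:
            buffer.append(chunk)  # link lapsed: record instead of sending
    return sent
```

In the actual facility the flush would be paced by accelerated playback while new audio keeps accumulating; this sketch flushes instantaneously for clarity.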
It can be seen by comparing the bottom half of the diagram to the top half that the facility protects protected link750in a similar manner. In particular: microphone792behaves analogously to microphone711; links793,750, and716behave analogously to links713,730, and734, respectively; lapse detector and remediator743behaves analogously to lapse detector and remediator723; and selective buffer744behaves analogously to selective buffer724. Bracket731shows the extent to which the facility's protection extends for the first side of the call, as does bracket751with respect to second side of the call. FIG.8is a flow diagram showing a process performed by the facility in some embodiments in order to operate the lapse detector and remediator. In act801, if the downstream protected connection is operating, then the facility continues in act801, else the facility continues in act802. In act802, the facility begins recording upstream audio in the selective buffer. In act803, if the downstream connection is operating, then the facility continues in act804, else the facility continues in act803. In act804, the facility inserts the alert sound into the downstream connection. In act805, the facility inserts the entire contents of the buffer. In act806, after the entire contents of the buffer have been inserted, the facility stops recording. After act806, the facility continues in act801. FIG.9is a data flow diagram showing operation of the facility in some embodiments using a selective buffer for a call in which both participants are using wireless phones.FIG.9is similar toFIG.6, in the sense that nodes930and940are each at some point intermediate to the two wireless phones910and960, which may be either at the wireless base station that is communicating directly with the adjacent wireless phone, or nodes in the path that are further from the corresponding wireless phone. 
By comparingFIG.9toFIG.7, it can be seen that the facility protects protected links920,950,970, and980in a similar manner to links730and750shown inFIG.7and discussed above. In particular: links935and945pass audio in a manner analogous to links635and645; lapse detector and remediators913,933,943, and963operate in a manner similar to lapse detector and remediators723and743; and selective buffers914,934,944, and964behave in a manner analogous to selective buffers724and744. In various embodiments, the facility detects and remediates call lapses of a variety of types, including some or all of the following, among others:
1) Suddenly, many SIP errors are observed on the IMS nodes. A burst of call quality issues or call failures would start to occur as a result of RFC 3261-defined SIP errors from different IMS (IP Multimedia Subsystem) nodes.
2) Link flaps suddenly occur. There will be a high level of audio/RTP/RTCP/video quality issues that would be noticeable as packets are lost or dropped.
3) Severity 1 connectivity alarms are suddenly raised. There will be a high level of audio/RTP/RTCP/video quality issues that would be noticeable as packets are lost or dropped.
4) A large spike in memory/CPU use is suddenly seen. Due to the memory/CPU spike there will be added delay in the processing of packets at various elements, resulting in out-of-order, delayed packets that degrade audio quality.
5) A signaling storm is seen. A large amount of signaling traffic would hit the nodes; signaling traffic gets priority over media, so the media packets take more time to process at each hop in the network, causing noticeable audio buffering and out-of-order issues.
6) A K8s worker node detects a hardware issue. Due to a memory/CPU spike there will be added delay in the processing of packets at various elements, resulting in out-of-order, delayed packets that degrade audio quality. 
7) A K8s master node detects a connectivity/hardware issue. There will be a high level of audio/RTP/RTCP/video quality issues that would be noticeable as packets are lost or dropped.
8) Audio becomes bad as the device moves into a tunnel. When traveling through tunnels or underground, the nature of wireless communication causes a reduction in RSRP/RSRQ/SNR for the device; mechanisms can be put in place to detect such behavior, which causes buffered, choppy, or out-of-sync audio.
9) The device has a low battery. Due to a memory/CPU spike there will be added delay in the processing of packets at various elements, resulting in out-of-order, delayed packets that degrade audio quality.
10) The device is in coverage with a poor signal-to-noise ratio. When traveling through tunnels or underground, the nature of wireless communication causes a reduction in RSRP/RSRQ/SNR for the device; mechanisms can be put in place to detect such behavior, which causes buffered, choppy, or out-of-sync audio.
11) The device sees bad RSRP/RSRQ. When traveling through tunnels or underground, the nature of wireless communication causes a reduction in RSRP/RSRQ/SNR for the device; mechanisms can be put in place to detect such behavior, which causes buffered, choppy, or out-of-sync audio.
12) A radio link failure occurs when the device enters a coverage gap. When traveling through tunnels or underground, the nature of wireless communication causes a reduction in RSRP/RSRQ/SNR for the device; mechanisms can be put in place to detect such behavior, which causes buffered, choppy, or out-of-sync audio.
13) AI/ML detections are used for advance notice of these faults. Data gets collected continuously around the performance of network elements; as a result, AI/ML could be utilized to detect various fault conditions in the network.
The various embodiments described above can be combined to provide further embodiments. All of the U.S. 
patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in the Application Data Sheet are incorporated herein by reference, in their entirety. Aspects of the embodiments can be modified, if necessary to employ concepts of the various patents, applications and publications to provide yet further embodiments. These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.
11943154
DETAILED DESCRIPTION OF EMBODIMENTS Example embodiments of the present invention enable operation of multiple physical uplink control channel (PUCCH) groups. Embodiments of the technology disclosed herein may be employed in the technical field of multicarrier communication systems. More particularly, the embodiments of the technology disclosed herein may relate to operation of PUCCH groups. The following acronyms are used throughout the present disclosure:
ASIC application-specific integrated circuit
BPSK binary phase shift keying
CA carrier aggregation
CSI channel state information
CDMA code division multiple access
CSS common search space
CPLD complex programmable logic devices
CC component carrier
DL downlink
DCI downlink control information
DC dual connectivity
EPC evolved packet core
E-UTRAN evolved-universal terrestrial radio access network
FPGA field programmable gate arrays
FDD frequency division duplexing
HDL hardware description languages
HARQ hybrid automatic repeat request
IE information element
LTE long term evolution
MCG master cell group
MeNB master evolved node B
MIB master information block
MAC media access control
MME mobility management entity
NAS non-access stratum
OFDM orthogonal frequency division multiplexing
PDCP packet data convergence protocol
PDU packet data unit
PHY physical
PDCCH physical downlink control channel
PHICH physical HARQ indicator channel
PUCCH physical uplink control channel
PUSCH physical uplink shared channel
PCell primary cell
PCC primary component carrier
PSCell primary secondary cell
pTAG primary timing advance group
QAM quadrature amplitude modulation
QPSK quadrature phase shift keying
RBG resource block groups
RLC radio link control
RRC radio resource control
RA random access
RB resource blocks
SCC secondary component carrier
SCell secondary cell
Scell secondary cells
SCG secondary cell group
SeNB secondary evolved node B
sTAGs secondary timing advance group
SDU service data unit
S-GW serving gateway
SRB signaling radio bearer
SC-OFDM single carrier-OFDM
SFN system frame number
SIB system information block
TAI tracking area identifier
TAT time alignment timer
TDD time division duplexing
TDMA time division multiple access
TA timing advance
TAG timing advance group
TB transport block
UL uplink
UE user equipment
VHDL VHSIC hardware description language
Example embodiments of the invention may be implemented using various physical layer modulation and transmission mechanisms. Example transmission mechanisms may include, but are not limited to: CDMA, OFDM, TDMA, Wavelet technologies, and/or the like. Hybrid transmission mechanisms such as TDMA/CDMA, and OFDM/CDMA may also be employed. Various modulation schemes may be applied for signal transmission in the physical layer. Examples of modulation schemes include, but are not limited to: phase, amplitude, code, a combination of these, and/or the like. An example radio transmission method may implement QAM using BPSK, QPSK, 16-QAM, 64-QAM, 256-QAM, and/or the like. Physical radio transmission may be enhanced by dynamically or semi-dynamically changing the modulation and coding scheme depending on transmission requirements and radio conditions. FIG.1is a diagram depicting example sets of OFDM subcarriers as per an aspect of an embodiment of the present invention. As illustrated in this example, arrow(s) in the diagram may depict a subcarrier in a multicarrier OFDM system. The OFDM system may use technology such as OFDM technology, SC-OFDM technology, or the like. For example, arrow101shows a subcarrier transmitting information symbols.FIG.1is for illustration purposes, and a typical multicarrier OFDM system may include more subcarriers in a carrier. For example, the number of subcarriers in a carrier may be in the range of 10 to 10,000 subcarriers.FIG.1shows two guard bands106and107in a transmission band. As illustrated inFIG.1, guard band106is between subcarriers103and subcarriers104. 
The example set of subcarriers A 102 includes subcarriers 103 and subcarriers 104. FIG. 1 also illustrates an example set of subcarriers B 105. As illustrated, there is no guard band between any two subcarriers in the example set of subcarriers B 105. Carriers in a multicarrier OFDM communication system may be contiguous carriers, non-contiguous carriers, or a combination of both contiguous and non-contiguous carriers. FIG. 2 is a diagram depicting an example transmission time and reception time for two carriers as per an aspect of an embodiment of the present invention. A multicarrier OFDM communication system may include one or more carriers, for example, ranging from 1 to 10 carriers. Carrier A 204 and carrier B 205 may have the same or different timing structures. Although FIG. 2 shows two synchronized carriers, carrier A 204 and carrier B 205 may or may not be synchronized with each other. Different radio frame structures may be supported for FDD and TDD duplex mechanisms. FIG. 2 shows an example FDD frame timing. Downlink and uplink transmissions may be organized into radio frames 201. In this example, radio frame duration is 10 msec. Other frame durations, for example, in the range of 1 to 100 msec may also be supported. In this example, each 10 msec radio frame 201 may be divided into ten equally sized subframes 202. Other subframe durations, such as 0.5 msec, 1 msec, 2 msec, and 5 msec, may also be supported. Subframe(s) may consist of two or more slots (e.g. slots 206 and 207). For the example of FDD, 10 subframes may be available for downlink transmission and 10 subframes may be available for uplink transmissions in each 10 msec interval. Uplink and downlink transmissions may be separated in the frequency domain. Slot(s) may include a plurality of OFDM symbols 203. The number of OFDM symbols 203 in a slot 206 may depend on the cyclic prefix length and subcarrier spacing. FIG. 3 is a diagram depicting OFDM radio resources as per an aspect of an embodiment of the present invention.
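The example frame timing above (a 10 msec radio frame divided into ten equally sized subframes, each consisting of two or more slots) can be sketched numerically. This is an illustrative sketch only; the constants and function names are assumptions mirroring the example numbers in the text, and the symbols-per-slot values assume LTE-style 15 kHz subcarrier spacing.

```python
# Illustrative sketch (not from the patent text) of the example FDD frame
# structure described above: a 10 msec radio frame divided into ten 1 msec
# subframes, each consisting of two slots.
FRAME_MS = 10
SUBFRAMES_PER_FRAME = 10
SLOTS_PER_SUBFRAME = 2

def slots_per_frame():
    # 10 subframes x 2 slots = 20 slots per 10 msec frame.
    return SUBFRAMES_PER_FRAME * SLOTS_PER_SUBFRAME

def symbols_per_slot(cyclic_prefix="normal"):
    # The number of OFDM symbols in a slot depends on the cyclic prefix
    # length; these values assume 15 kHz subcarrier spacing.
    return 7 if cyclic_prefix == "normal" else 6  # "extended" CP

print(slots_per_frame())           # 20
print(symbols_per_slot("normal"))  # 7
```

With these example numbers, the text's statement that 10 subframes are available per 10 msec interval in each direction for FDD follows directly from the frame/subframe division.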
The resource grid structure in time 304 and frequency 305 is illustrated in FIG. 3. The quantity of downlink subcarriers or RBs (in this example 6 to 100 RBs) may depend, at least in part, on the downlink transmission bandwidth 306 configured in the cell. The smallest radio resource unit may be called a resource element (e.g. 301). Resource elements may be grouped into resource blocks (e.g. 302). Resource blocks may be grouped into larger radio resources called Resource Block Groups (RBG) (e.g. 303). The transmitted signal in slot 206 may be described by one or several resource grids of a plurality of subcarriers and a plurality of OFDM symbols. Resource blocks may be used to describe the mapping of certain physical channels to resource elements. Other pre-defined groupings of physical resource elements may be implemented in the system depending on the radio technology. For example, 24 subcarriers may be grouped as a radio block for a duration of 5 msec. In an illustrative example, a resource block may correspond to one slot in the time domain and 180 kHz in the frequency domain (for 15 kHz subcarrier bandwidth and 12 subcarriers). FIG. 5A, FIG. 5B, FIG. 5C and FIG. 5D are example diagrams for uplink and downlink signal transmission as per an aspect of an embodiment of the present invention. FIG. 5A shows an example uplink physical channel. The baseband signal representing the physical uplink shared channel may undergo the following processes. These functions are illustrated as examples and it is anticipated that other mechanisms may be implemented in various embodiments.
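The resource-grid bookkeeping described above (resource elements grouped into resource blocks, resource blocks grouped into RBGs, and a resource block spanning 180 kHz for 15 kHz subcarrier spacing and 12 subcarriers) can be sketched as simple arithmetic. The function names and the RBG ceiling-division rule are illustrative assumptions, not from the patent text.

```python
# Rough sketch of the resource-grid arithmetic described above, using the
# illustrative numbers from the text (15 kHz subcarrier spacing, 12
# subcarriers per resource block, 6 to 100 RBs of downlink bandwidth).
SUBCARRIER_SPACING_KHZ = 15
SUBCARRIERS_PER_RB = 12

def rb_bandwidth_khz():
    # 12 subcarriers x 15 kHz = 180 kHz per resource block.
    return SUBCARRIER_SPACING_KHZ * SUBCARRIERS_PER_RB

def num_rbgs(n_rbs, rbg_size):
    # Resource blocks are grouped into RBGs; a partially filled last
    # group still counts as one RBG (ceiling division).
    return -(-n_rbs // rbg_size)

print(rb_bandwidth_khz())  # 180
print(num_rbgs(100, 4))    # 25 RBGs for a 100-RB downlink bandwidth
```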
The functions may comprise scrambling, modulation of scrambled bits to generate complex-valued symbols, mapping of the complex-valued modulation symbols onto one or several transmission layers, transform precoding to generate complex-valued symbols, precoding of the complex-valued symbols, mapping of precoded complex-valued symbols to resource elements, generation of a complex-valued time-domain SC-FDMA signal for each antenna port, and/or the like. Example modulation and up-conversion to the carrier frequency of the complex-valued SC-FDMA baseband signal for each antenna port and/or the complex-valued PRACH baseband signal is shown in FIG. 5B. Filtering may be employed prior to transmission. An example structure for Downlink Transmissions is shown in FIG. 5C. The baseband signal representing a downlink physical channel may undergo the following processes. These functions are illustrated as examples and it is anticipated that other mechanisms may be implemented in various embodiments. The functions include scrambling of coded bits in each of the codewords to be transmitted on a physical channel; modulation of scrambled bits to generate complex-valued modulation symbols; mapping of the complex-valued modulation symbols onto one or several transmission layers; precoding of the complex-valued modulation symbols on each layer for transmission on the antenna ports; mapping of complex-valued modulation symbols for each antenna port to resource elements; generation of a complex-valued time-domain OFDM signal for each antenna port, and/or the like. Example modulation and up-conversion to the carrier frequency of the complex-valued OFDM baseband signal for each antenna port is shown in FIG. 5D. Filtering may be employed prior to transmission. FIG. 4 is an example block diagram of a base station 401 and a wireless device 406, as per an aspect of an embodiment of the present invention. A communication network 400 may include at least one base station 401 and at least one wireless device 406.
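The first two stages of the baseband chains described above (scrambling of coded bits, then modulation of the scrambled bits to complex-valued symbols) can be illustrated with a toy sketch. This is not the standardized processing: the scrambling sequence here is a stand-in for the real pseudo-random sequence, and only QPSK is shown.

```python
# Toy sketch of the first two baseband steps described above: scrambling of
# coded bits followed by modulation to complex-valued symbols. The repeating
# scrambling sequence is an illustrative stand-in, not the LTE Gold sequence.
import itertools

def scramble(bits, seq):
    # XOR each coded bit with the (cyclically repeated) scrambling sequence.
    return [b ^ s for b, s in zip(bits, itertools.cycle(seq))]

def qpsk(bits):
    # Map bit pairs to unit-power QPSK symbols (Gray mapping).
    table = {(0, 0): 1+1j, (0, 1): 1-1j, (1, 0): -1+1j, (1, 1): -1-1j}
    return [table[(bits[i], bits[i+1])] / (2 ** 0.5)
            for i in range(0, len(bits), 2)]

coded = [1, 0, 1, 1, 0, 0, 1, 0]
symbols = qpsk(scramble(coded, [1, 0]))
print(len(symbols))  # 4 complex-valued symbols from 8 bits
```

The remaining stages (layer mapping, precoding, resource-element mapping, time-domain signal generation) operate on these complex-valued symbols.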
The base station 401 may include at least one communication interface 402, at least one processor 403, and at least one set of program code instructions 405 stored in non-transitory memory 404 and executable by the at least one processor 403. The wireless device 406 may include at least one communication interface 407, at least one processor 408, and at least one set of program code instructions 410 stored in non-transitory memory 409 and executable by the at least one processor 408. Communication interface 402 in base station 401 may be configured to engage in communication with communication interface 407 in wireless device 406 via a communication path that includes at least one wireless link 411. Wireless link 411 may be a bi-directional link. Communication interface 407 in wireless device 406 may also be configured to engage in communication with communication interface 402 in base station 401. Base station 401 and wireless device 406 may be configured to send and receive data over wireless link 411 using multiple frequency carriers. According to some of the various aspects of embodiments, transceiver(s) may be employed. A transceiver is a device that includes both a transmitter and a receiver. Transceivers may be employed in devices such as wireless devices, base stations, relay nodes, and/or the like. Example embodiments for radio technology implemented in communication interfaces 402 and 407 and wireless link 411 are illustrated in FIG. 1, FIG. 2, FIG. 3, FIG. 5, and associated text. An interface may be a hardware interface, a firmware interface, a software interface, and/or a combination thereof. The hardware interface may include connectors, wires, electronic devices such as drivers, amplifiers, and/or the like. A software interface may include code stored in a memory device to implement protocol(s), protocol layers, communication drivers, device drivers, combinations thereof, and/or the like.
A firmware interface may include a combination of embedded hardware and code stored in and/or in communication with a memory device to implement connections, electronic device operations, protocol(s), protocol layers, communication drivers, device drivers, hardware operations, combinations thereof, and/or the like. The term configured may relate to the capacity of a device whether the device is in an operational or non-operational state. Configured may also refer to specific settings in a device that affect the operational characteristics of the device whether the device is in an operational or non-operational state. In other words, the hardware, software, firmware, registers, memory values, and/or the like may be “configured” within a device, whether the device is in an operational or non-operational state, to provide the device with specific characteristics. Terms such as “a control message to cause in a device” may mean that a control message has parameters that may be used to configure specific characteristics in the device, whether the device is in an operational or non-operational state. According to some of the various aspects of embodiments, an LTE network may include a multitude of base stations, providing a user plane PDCP/RLC/MAC/PHY and control plane (RRC) protocol terminations towards the wireless device. The base station(s) may be interconnected with other base station(s) (e.g. employing an X2 interface). The base stations may also be connected employing, for example, an S1 interface to an EPC. For example, the base stations may be interconnected to the MME employing the S1-MME interface and to the S-GW employing the S1-U interface. The S1 interface may support a many-to-many relation between MMEs/Serving Gateways and base stations. A base station may include many sectors, for example: 1, 2, 3, 4, or 6 sectors. A base station may include many cells, for example, ranging from 1 to 50 cells or more.
A cell may be categorized, for example, as a primary cell or secondary cell. At RRC connection establishment/re-establishment/handover, one serving cell may provide the NAS (non-access stratum) mobility information (e.g. TAI), and at RRC connection re-establishment/handover, one serving cell may provide the security input. This cell may be referred to as the Primary Cell (PCell). In the downlink, the carrier corresponding to the PCell may be the Downlink Primary Component Carrier (DL PCC), while in the uplink, it may be the Uplink Primary Component Carrier (UL PCC). Depending on wireless device capabilities, Secondary Cells (SCells) may be configured to form, together with the PCell, a set of serving cells. In the downlink, the carrier corresponding to an SCell may be a Downlink Secondary Component Carrier (DL SCC), while in the uplink, it may be an Uplink Secondary Component Carrier (UL SCC). An SCell may or may not have an uplink carrier. A cell, comprising a downlink carrier and optionally an uplink carrier, may be assigned a physical cell ID and a cell index. A carrier (downlink or uplink) may belong to only one cell. The cell ID or cell index may also identify the downlink carrier or uplink carrier of the cell (depending on the context in which it is used). In the specification, cell ID may be equally referred to as carrier ID, and cell index may be referred to as carrier index. In implementation, the physical cell ID or cell index may be assigned to a cell. A cell ID may be determined using a synchronization signal transmitted on a downlink carrier. A cell index may be determined using RRC messages. For example, when the specification refers to a first physical cell ID for a first downlink carrier, the specification may mean the first physical cell ID is for a cell comprising the first downlink carrier. The same concept may apply to, for example, carrier activation.
When the specification indicates that a first carrier is activated, the specification may equally mean that the cell comprising the first carrier is activated. Embodiments may be configured to operate as needed. The disclosed mechanism may be performed when certain criteria are met, for example, in a wireless device, a base station, a radio environment, a network, a combination of the above, and/or the like. Example criteria may be based, at least in part, on, for example, traffic load, initial system set up, packet sizes, traffic characteristics, a combination of the above, and/or the like. When the one or more criteria are met, various example embodiments may be applied. Therefore, it may be possible to implement example embodiments that selectively implement disclosed protocols. A base station may communicate with a mix of wireless devices. Wireless devices may support multiple technologies, and/or multiple releases of the same technology. Wireless devices may have some specific capability(ies) depending on their wireless device category and/or capability(ies). A base station may comprise multiple sectors. When this disclosure refers to a base station communicating with a plurality of wireless devices, this disclosure may refer to a subset of the total wireless devices in a coverage area. This disclosure may refer to, for example, a plurality of wireless devices of a given LTE release with a given capability and in a given sector of the base station. The plurality of wireless devices in this disclosure may refer to a selected plurality of wireless devices, and/or a subset of total wireless devices in a coverage area which perform according to disclosed methods, and/or the like. There may be a plurality of wireless devices in a coverage area that may not comply with the disclosed methods, for example, because those wireless devices perform based on older releases of LTE technology.
FIG. 6 and FIG. 7 are example diagrams for protocol structure with CA and DC as per an aspect of an embodiment of the present invention. E-UTRAN may support Dual Connectivity (DC) operation whereby a multiple RX/TX UE in RRC_CONNECTED may be configured to utilize radio resources provided by two schedulers located in two eNBs connected via a non-ideal backhaul over the X2 interface. eNBs involved in DC for a certain UE may assume two different roles: an eNB may either act as an MeNB or as an SeNB. In DC a UE may be connected to one MeNB and one SeNB. Mechanisms implemented in DC may be extended to cover more than two eNBs. FIG. 7 illustrates one example structure for the UE side MAC entities when a Master Cell Group (MCG) and a Secondary Cell Group (SCG) are configured, and it may not restrict implementation. Multimedia Broadcast Multicast Service (MBMS) reception is not shown in this figure for simplicity. In DC, the radio protocol architecture that a particular bearer uses may depend on how the bearer is set up. Three alternatives may exist: an MCG bearer, an SCG bearer and a split bearer, as shown in FIG. 6. RRC may be located in the MeNB and SRBs may be configured as an MCG bearer type and may use the radio resources of the MeNB. DC may also be described as having at least one bearer configured to use radio resources provided by the SeNB. DC may or may not be configured/implemented in example embodiments of the invention. In the case of DC, the UE may be configured with two MAC entities: one MAC entity for the MeNB, and one MAC entity for the SeNB. In DC, the configured set of serving cells for a UE may comprise two subsets: the Master Cell Group (MCG) containing the serving cells of the MeNB, and the Secondary Cell Group (SCG) containing the serving cells of the SeNB.
For an SCG, one or more of the following may be applied: at least one cell in the SCG has a configured UL CC, and one of them, named the PSCell (or PCell of SCG, or sometimes called PCell), is configured with PUCCH resources; when the SCG is configured, there may be at least one SCG bearer or one split bearer; upon detection of a physical layer problem or a random access problem on a PSCell, or when the maximum number of RLC retransmissions associated with the SCG has been reached, or upon detection of an access problem on a PSCell during an SCG addition or an SCG change: an RRC connection re-establishment procedure may not be triggered, UL transmissions towards cells of the SCG are stopped, an MeNB may be informed by the UE of an SCG failure type, and, for a split bearer, DL data transfer over the MeNB is maintained; the RLC AM bearer may be configured for the split bearer; like the PCell, the PSCell may not be de-activated; the PSCell may be changed with an SCG change (e.g. with a security key change and a RACH procedure); and/or neither a direct bearer type change between a split bearer and an SCG bearer nor simultaneous configuration of an SCG and a split bearer are supported.
With respect to the interaction between an MeNB and an SeNB, one or more of the following principles may be applied: the MeNB may maintain the RRM measurement configuration of the UE and may (e.g., based on received measurement reports, traffic conditions or bearer types) decide to ask an SeNB to provide additional resources (serving cells) for a UE; upon receiving a request from the MeNB, an SeNB may create a container that may result in the configuration of additional serving cells for the UE (or decide that it has no resource available to do so); for UE capability coordination, the MeNB may provide (part of) the AS configuration and the UE capabilities to the SeNB; the MeNB and the SeNB may exchange information about a UE configuration by employing RRC containers (inter-node messages) carried in X2 messages; the SeNB may initiate a reconfiguration of its existing serving cells (e.g., PUCCH towards the SeNB); the SeNB may decide which cell is the PSCell within the SCG; the MeNB may not change the content of the RRC configuration provided by the SeNB; in the case of an SCG addition and an SCG SCell addition, the MeNB may provide the latest measurement results for the SCG cell(s); both an MeNB and an SeNB may know the SFN and subframe offset of each other by OAM (e.g., for the purpose of DRX alignment and identification of a measurement gap). In an example, when adding a new SCG SCell, dedicated RRC signalling may be used for sending required system information of the cell as for CA, except for the SFN acquired from a MIB of the PSCell of an SCG. According to some of the various aspects of embodiments, serving cells having an uplink to which the same time alignment (TA) applies may be grouped in a TA group (TAG). Serving cells in one TAG may use the same timing reference. For a given TAG, user equipment (UE) may use one downlink carrier as a timing reference at a given time. The UE may use a downlink carrier in a TAG as a timing reference for that TAG.
For a given TAG, a UE may synchronize uplink subframe and frame transmission timing of uplink carriers belonging to the same TAG. According to some of the various aspects of embodiments, serving cells having an uplink to which the same TA applies may correspond to serving cells hosted by the same receiver. A TA group may comprise at least one serving cell with a configured uplink. A UE supporting multiple TAs may support two or more TA groups. One TA group may contain the PCell and may be called a primary TAG (pTAG). In a multiple TAG configuration, at least one TA group may not contain the PCell and may be called a secondary TAG (sTAG). Carriers within the same TA group may use the same TA value and the same timing reference. When DC is configured, cells belonging to a cell group (MCG or SCG) may be grouped into multiple TAGs including a pTAG and one or more sTAGs. FIG. 8 shows example TAG configurations as per an aspect of an embodiment of the present invention. In Example 1, a pTAG comprises a PCell, and an sTAG comprises SCell1. In Example 2, a pTAG comprises a PCell and SCell1, and an sTAG comprises SCell2 and SCell3. In Example 3, a pTAG comprises a PCell and SCell1, an sTAG1 comprises SCell2 and SCell3, and an sTAG2 comprises SCell4. Up to four TAGs may be supported in a cell group (MCG or SCG) and other example TAG configurations may also be provided. In various examples in this disclosure, example mechanisms are described for a pTAG and an sTAG. The operation with one example sTAG is described, and the same operation may be applicable to other sTAGs. The example mechanisms may be applied to configurations with multiple sTAGs. According to some of the various aspects of embodiments, TA maintenance, pathloss reference handling and a timing reference for a pTAG may follow LTE release 10 principles in the MCG and/or SCG. The UE may need to measure downlink pathloss to calculate uplink transmit power.
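The example TAG configurations of FIG. 8 can be expressed as a small data model. This sketch is an assumption for illustration only: the dictionary layout and helper names are not from the patent, but the cell groupings follow Example 3 above, and the rule that the pTAG is the TAG containing the PCell follows the text.

```python
# Minimal illustrative data model for the FIG. 8 TAG configurations.
# Example 3: pTAG = {PCell, SCell1}, sTAG1 = {SCell2, SCell3}, sTAG2 = {SCell4}.
example_3 = {
    "pTAG":  ["PCell", "SCell1"],
    "sTAG1": ["SCell2", "SCell3"],
    "sTAG2": ["SCell4"],
}

def is_ptag(tag_cells):
    # Per the text, the TA group containing the PCell is the primary TAG.
    return "PCell" in tag_cells

def tag_of(cell, config):
    # Each serving cell belongs to exactly one TAG in a configuration.
    for tag, cells in config.items():
        if cell in cells:
            return tag
    return None  # cell not configured in any TAG

print(tag_of("SCell3", example_3))  # sTAG1
```

Carriers within one TAG share the same TA value and timing reference, so a lookup like `tag_of` is the natural first step when applying a received TA command.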
A pathloss reference may be used for uplink power control and/or transmission of random access preamble(s). A UE may measure downlink pathloss using signals received on a pathloss reference cell. For SCell(s) in a pTAG, the choice of a pathloss reference for cells may be selected from and/or be limited to the following two options: a) the downlink SCell linked to an uplink SCell using system information block 2 (SIB2), and b) the downlink PCell. The pathloss reference for SCells in a pTAG may be configurable using RRC message(s) as a part of an SCell initial configuration and/or reconfiguration. According to some of the various aspects of embodiments, a PhysicalConfigDedicatedSCell information element (IE) of an SCell configuration may include a pathloss reference SCell (downlink carrier) for an SCell in a pTAG. The downlink SCell linked to an uplink SCell using system information block 2 (SIB2) may be referred to as the SIB2 linked downlink of the SCell. Different TAGs may operate in different bands. For an uplink carrier in an sTAG, the pathloss reference may only be configurable to the downlink SCell linked to an uplink SCell using the system information block 2 (SIB2) of the SCell. To obtain initial uplink (UL) time alignment for an sTAG, an eNB may initiate an RA procedure. In an sTAG, a UE may use any one of the activated SCells from this sTAG as a timing reference cell. In an example embodiment, the timing reference for SCells in an sTAG may be the SIB2 linked downlink of the SCell on which the preamble for the latest RA procedure was sent. There may be one timing reference and one time alignment timer (TAT) per TA group. A TAT for different TAGs may be configured with different values. In a MAC entity, when a TAT associated with a pTAG expires: all TATs may be considered as expired, the UE may flush HARQ buffers of serving cells, the UE may clear any configured downlink assignments/uplink grants, and the RRC in the UE may release PUCCH/SRS for all configured serving cells.
When the pTAG TAT is not running, an sTAG TAT may not be running. When the TAT associated with an sTAG expires: a) SRS transmissions may be stopped on the corresponding SCells, b) the SRS RRC configuration may be released, c) the CSI reporting configuration for corresponding SCells may be maintained, and/or d) the MAC in the UE may flush the uplink HARQ buffers of the corresponding SCells. An eNB may initiate an RA procedure via a PDCCH order for an activated SCell. This PDCCH order may be sent on a scheduling cell of this SCell. When cross carrier scheduling is configured for a cell, the scheduling cell may be different than the cell that is employed for preamble transmission, and the PDCCH order may include an SCell index. At least a non-contention based RA procedure may be supported for SCell(s) assigned to sTAG(s). FIG. 9 is an example message flow in a random access process in a secondary TAG as per an aspect of an embodiment of the present invention. An eNB transmits an activation command 600 to activate an SCell. A preamble 602 (Msg1) may be sent by a UE in response to a PDCCH order 601 on an SCell belonging to an sTAG. In an example embodiment, preamble transmission for SCells may be controlled by the network using PDCCH format 1A. A Msg2 message 603 (RAR: random access response) in response to the preamble transmission on the SCell may be addressed to RA-RNTI in a PCell common search space (CSS). Uplink packets 604 may be transmitted on the SCell in which the preamble was transmitted. According to some of the various aspects of embodiments, initial timing alignment may be achieved through a random access procedure. This may involve a UE transmitting a random access preamble and an eNB responding with an initial TA command NTA (amount of timing advance) within a random access response window. The start of the random access preamble may be aligned with the start of a corresponding uplink subframe at the UE assuming NTA=0.
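The asymmetric TAT-expiry handling described above (pTAG expiry cascades to everything; sTAG expiry only stops SRS and flushes the affected SCells' uplink HARQ buffers while CSI configuration is kept) can be sketched as follows. The state keys and handler structure are illustrative assumptions, not from the patent or any specification.

```python
# Hedged sketch of the TAT-expiry behavior described above. The UE state is
# modeled as a plain dict; field names are assumptions for illustration.
def on_tat_expiry(tag, ue):
    if tag == "pTAG":
        # pTAG expiry cascades: all TATs are considered expired.
        ue["running_tats"].clear()
        ue["harq_buffers"].clear()          # flush HARQ buffers of serving cells
        ue["dl_assignments"].clear()        # clear configured assignments/grants
        ue["ul_grants"].clear()
        ue["pucch_srs_configured"] = False  # RRC releases PUCCH/SRS
    else:
        # sTAG expiry is local to that TAG's SCells.
        ue["running_tats"].discard(tag)
        ue["srs_active"].discard(tag)       # stop SRS on corresponding SCells
        # CSI reporting configuration for the SCells is maintained.

ue = {"running_tats": {"pTAG", "sTAG1"}, "harq_buffers": {"PCell": [b"tb"]},
      "dl_assignments": {"PCell": 1}, "ul_grants": {},
      "pucch_srs_configured": True, "srs_active": {"sTAG1"}}
on_tat_expiry("pTAG", ue)
print(ue["pucch_srs_configured"])  # False
```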
The eNB may estimate the uplink timing from the random access preamble transmitted by the UE. The TA command may be derived by the eNB based on the estimation of the difference between the desired UL timing and the actual UL timing. The UE may determine the initial uplink transmission timing relative to the corresponding downlink of the sTAG on which the preamble is transmitted. The mapping of a serving cell to a TAG may be configured by a serving eNB with RRC signaling. The mechanism for TAG configuration and reconfiguration may be based on RRC signaling. According to some of the various aspects of embodiments, when an eNB performs an SCell addition configuration, the related TAG configuration may be configured for the SCell. In an example embodiment, an eNB may modify the TAG configuration of an SCell by removing (releasing) the SCell and adding (configuring) a new SCell (with the same physical cell ID and frequency) with an updated TAG ID. The new SCell with the updated TAG ID may initially be inactive subsequent to being assigned the updated TAG ID. The eNB may activate the updated new SCell and start scheduling packets on the activated SCell. In an example implementation, it may not be possible to change the TAG associated with an SCell, but rather, the SCell may need to be removed and a new SCell may need to be added with another TAG. For example, if there is a need to move an SCell from an sTAG to a pTAG, at least one RRC message, for example, at least one RRC reconfiguration message, may be sent to the UE to reconfigure TAG configurations by releasing the SCell and then configuring the SCell as a part of the pTAG (when an SCell is added/configured without a TAG index, the SCell may be explicitly assigned to the pTAG). The PCell may not change its TA group and may always be a member of the pTAG. The purpose of an RRC connection reconfiguration procedure may be to modify an RRC connection (e.g.
to establish, modify and/or release RBs, to perform handover, to set up, modify, and/or release measurements, to add, modify, and/or release SCells). If the received RRC Connection Reconfiguration message includes the sCellToReleaseList, the UE may perform an SCell release. If the received RRC Connection Reconfiguration message includes the sCellToAddModList, the UE may perform SCell additions or modifications. In LTE Release-10 and Release-11 CA, a PUCCH is only transmitted on the PCell (PSCell) to an eNB. In LTE Release-12 and earlier, a UE may transmit PUCCH information on one cell (PCell or PSCell) to a given eNB. As the number of CA capable UEs and also the number of aggregated carriers increase, the number of PUCCHs and also the PUCCH payload size may increase. Accommodating the PUCCH transmissions on the PCell may lead to a high PUCCH load on the PCell. A PUCCH on an SCell may be introduced to offload the PUCCH resource from the PCell. More than one PUCCH may be configured, for example, a PUCCH on a PCell and another PUCCH on an SCell. FIG. 10 is an example grouping of cells into PUCCH groups as per an aspect of an embodiment of the present invention. In the example embodiments, one, two or more cells may be configured with PUCCH resources for transmitting CSI/ACK/NACK to a base station. Cells may be grouped into multiple PUCCH groups, and one or more cells within a group may be configured with a PUCCH. In an example configuration, one SCell may belong to one PUCCH group. SCells with a configured PUCCH transmitted to a base station may be called PUCCH SCells, and a cell group with a common PUCCH resource transmitted to the same base station may be called a PUCCH group. In Release-12, a PUCCH can be configured on a PCell and/or a PSCell, but cannot be configured on other SCells. In an example embodiment, a UE may transmit a message indicating that the UE supports PUCCH configuration on a PCell and an SCell.
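The TAG-change procedure described earlier, in which an SCell cannot simply switch TAGs but is instead released and re-added with an updated TAG ID (landing in a deactivated state until the eNB activates it), can be sketched as follows. The helper and field names are assumptions for illustration, not from the patent.

```python
# Sketch (assumed names) of the release-and-re-add TAG change described above.
def change_scell_tag(config, cell_id, new_tag):
    scell = config.pop(cell_id)                      # release the SCell
    scell = dict(scell, tag=new_tag, active=False)   # re-add with updated TAG ID
    config[cell_id] = scell                          # initially deactivated
    return config

cfg = {"SCell1": {"tag": "sTAG1", "active": True, "freq": 1800}}
change_scell_tag(cfg, "SCell1", "pTAG")
print(cfg["SCell1"])  # {'tag': 'pTAG', 'active': False, 'freq': 1800}
```

The `active=False` step mirrors the text's point that a newly added SCell with the updated TAG ID is initially inactive; the eNB would then activate it before scheduling packets on it.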
Such an indication may be separate from an indication of dual connectivity support by the UE. In an example embodiment, a UE may support both DC and PUCCH groups. In an example embodiment, either DC or PUCCH groups may be configured, but not both. In another example embodiment, more complicated configurations comprising both DC and PUCCH groups may be supported. When a UE is capable of configuring PUCCH groups, and if the UE indicates that it supports simultaneous PUCCH/PUSCH transmission capability, it may imply that the UE supports simultaneous PUCCH/PUSCH transmission on both a PCell and an SCell. When multiple PUCCH groups are configured, a PUCCH may or may not be configured with simultaneous PUCCH/PUSCH transmission. In an example embodiment, PUCCH transmission to a base station on two serving cells may be realized as shown in FIG. 10. A first group of cells may employ a PUCCH on the PCell and may be called PUCCH group 1 or a primary PUCCH group. A second group of cells may employ a PUCCH on an SCell and may be called PUCCH group 2 or a secondary PUCCH group. One, two or more PUCCH groups may be configured. In an example, cells may be grouped into two PUCCH groups, and each PUCCH group may include a cell with PUCCH resources. A PCell may provide PUCCH resources for the primary PUCCH group and an SCell in the secondary PUCCH group may provide PUCCH resources for the cells in the secondary PUCCH group. In an example embodiment, no cross-carrier scheduling between cells in different PUCCH groups may be configured. When cross-carrier scheduling between cells in different PUCCH groups is not configured, ACK/NACK on the PHICH channel may be limited to within a PUCCH group. Both downlink and uplink scheduling activity may be separate between cells belonging to different PUCCH groups. A PUCCH on an SCell may carry HARQ-ACK and CSI information. A PCell may be configured with PUCCH resources.
In an example embodiment, RRC parameters for SCell PUCCH power control for a PUCCH on an SCell may be different from those of a PCell PUCCH. A transmit power control command for a PUCCH on an SCell may be transmitted in DCI(s) on the SCell carrying the PUCCH. UE procedures on a PUCCH transmission may be different and/or independent between PUCCH groups. For example, determination of DL HARQ-ACK timing, PUCCH resource determination for HARQ-ACK and/or CSI, higher-layer configuration of simultaneous HARQ-ACK+CSI on a PUCCH, and higher-layer configuration of simultaneous HARQ-ACK+SRS in one subframe may be configured differently for a PUCCH PCell and a PUCCH SCell. A PUCCH group may be a group of serving cells configured by RRC that use the same serving cell in the group for transmission of a PUCCH. A primary PUCCH group may be a PUCCH group containing a PCell. A secondary PUCCH group may be a PUCCH cell group not containing the PCell. In an example embodiment, an SCell may belong to one PUCCH group. When one SCell belongs to a PUCCH group, ACK/NACK or CSI for that SCell may be transmitted over the PUCCH in that PUCCH group (over a PUCCH SCell or a PUCCH PCell). A PUCCH on an SCell may reduce the PUCCH load on the PCell. A PUCCH SCell may be employed for UCI transmission of SCells in the corresponding PUCCH group. In an example embodiment, a flexible PUCCH configuration in which control signalling is sent on one, two or more PUCCHs may be possible. Besides the PCell, it may be possible to configure a selected number of SCells for PUCCH transmission (herein called PUCCH SCells). Control signalling information conveyed in a certain PUCCH SCell may be related to a set of SCells in a corresponding PUCCH group that are configured by the network via RRC signalling. PUCCH control signalling carried by a PUCCH channel may be distributed between a PCell and SCells for off-loading or robustness purposes.
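The routing rule described above, under which ACK/NACK or CSI for a serving cell is carried on the PUCCH of that cell's PUCCH group (the PCell for the primary group, a PUCCH SCell for a secondary group), can be sketched as a simple lookup. The group layout and the PCell fallback in this sketch are assumptions mirroring the text, not a normative mapping.

```python
# Illustrative UCI routing per the PUCCH-group rule described above.
# Primary group uses the PCell PUCCH; the secondary group uses a PUCCH SCell.
pucch_groups = {
    "primary":   {"pucch_cell": "PCell",  "cells": ["PCell", "SCell1"]},
    "secondary": {"pucch_cell": "SCell2", "cells": ["SCell2", "SCell3"]},
}

def uci_carrier(serving_cell):
    # UCI (HARQ-ACK/CSI) for a cell rides on its group's PUCCH cell.
    for group in pucch_groups.values():
        if serving_cell in group["cells"]:
            return group["pucch_cell"]
    # Fallback: with no PUCCH SCell signalled, UCI goes on the PCell PUCCH.
    return "PCell"

print(uci_carrier("SCell3"))  # SCell2
print(uci_carrier("SCell1"))  # PCell
```

Because each SCell belongs to exactly one PUCCH group, the lookup returns a single PUCCH cell per serving cell, matching the text's point that an SCell's HARQ feedback and CSI are not split across different PUCCH SCells.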
By enabling a PUCCH in an SCell, it may be possible to distribute the overall CSI reports for a given UE between a PCell and a selected number of SCells (e.g. PUCCH SCells), thereby limiting PUCCH CSI resource consumption by a given UE on a certain cell. It may be possible to map CSI reports for a certain SCell to a selected PUCCH SCell. An SCell may be assigned a certain periodicity and time-offset for transmission of control information. Periodic CSI for a serving cell may be mapped on a PUCCH (on the PCell or on a PUCCH-SCell) via RRC signalling. The possibility of distributing CSI reports, HARQ feedback, and/or Scheduling Requests across PUCCH SCells may provide flexibility and capacity improvements. HARQ feedback for a serving cell may be mapped on a PUCCH (on the PCell or on a PUCCH SCell) via RRC signalling. In example embodiments, PUCCH transmission may be configured on a PCell, as well as on one SCell in CA. An SCell PUCCH may be realized using the concept of PUCCH groups, where aggregated cells are grouped into two or more PUCCH groups. One cell from a PUCCH group may be configured to carry a PUCCH. More than 5 carriers may be configured. In the example embodiments, up to n carriers may be aggregated. For example, n may be 16, 32, or 64. Some CCs may have non-backward compatible configurations supporting only advanced UEs (e.g. supporting licensed assisted access SCells). In an example embodiment, one SCell PUCCH (e.g. two PUCCH groups) may be supported. In another example embodiment, a PUCCH group concept with multiple (more than one) SCells carrying PUCCH may be employed (e.g., there can be more than two PUCCH groups). In an example embodiment, a given PUCCH group may not comprise serving cells of both an MCG and an SCG. One of the PUCCHs may be configured on the PCell. In an example embodiment, PUCCH mapping of serving cells may be configured by RRC messages.
In an example embodiment, a maximum value of an SCellIndex and a ServCellIndex may be 31 (ranging from 0 to 31). In an example, a maximum value of stag-Id may be 3. The CIF for a scheduled cell may be configured explicitly. A PUCCH SCell may be configured by giving a PUCCH configuration for an SCell. A HARQ feedback and CSI report of a PUCCH SCell may be sent on the PUCCH of that PUCCH SCell. The HARQ feedback and CSI report of an SCell may be sent on a PUCCH of a PCell if no PUCCH SCell is signalled for that SCell. The HARQ feedback and CSI report of an SCell may be sent on the PUCCH of one PUCCH SCell; hence they may not be sent on the PUCCHs of different PUCCH SCells. The UE may report a Type 2 PH for serving cells configured with a PUCCH. In an example embodiment, a MAC activation/deactivation may be supported for a PUCCH SCell. An eNB may manage the activation/deactivation status for SCells. A newly added PUCCH SCell may be initially deactivated. In an example embodiment, independent configuration of PUCCH groups and TAGs may be supported. FIG. 11 and FIG. 12 show example configurations of TAGs and PUCCH groups. For example, one TAG may contain multiple serving cells with a PUCCH. For example, each TAG may only comprise cells of one PUCCH group. For example, a TAG may comprise the serving cells (without a PUCCH) which belong to different PUCCH groups. There may not be a one-to-one mapping between TAGs and PUCCH groups. For example, in a configuration, a PUCCH SCell may belong to a primary TAG. In an example implementation, the serving cells of one PUCCH group may be in different TAGs and the serving cells of one TAG may be in different PUCCH groups. Configuration of PUCCH groups and TAGs may be left to eNB implementation. In another example implementation, restriction(s) on the configuration of a PUCCH cell may be specified. For example, in an example embodiment, cells in a given PUCCH group may belong to the same TAG.
In an example, an sTAG may only comprise cells of one PUCCH group. In an example, one-to-one mapping between TAGs and PUCCH groups may be implemented. In an implementation, cell configurations may be limited to some of the examples. In other implementations, some or all of the below configurations may be allowed. In an example embodiment, for an SCell in a pTAG, the timing reference may be a PCell. For an SCell in an sTAG, the timing reference may be any activated SCell in the sTAG. For an SCell (configured with PUCCH or not) in a pTAG, a pathloss reference may be configured to be a PCell or an SIB-2 linked SCell. For an SCell in an sTAG, the pathloss reference may be the SIB-2 linked SCell. When the TAT associated with a pTAG expires, the TATs associated with sTAGs may be considered as expired. When the TAT of an sTAG containing a PUCCH SCell expires, the MAC may indicate to the RRC to release PUCCH resources for the PUCCH group. When the TAT of an sTAG containing a PUCCH SCell is not running, the uplink transmission (PUSCH) for SCells in the secondary PUCCH group not belonging to the sTAG including the PUCCH SCell may not be impacted. The TAT expiry of an sTAG containing a PUCCH SCell may not trigger TAT expiry of other TAGs to which other SCells in the same PUCCH group belong. When the TAT associated with an sTAG not containing a PUCCH SCell is not running, the wireless device may stop the uplink transmission for the SCells in the sTAG and may not impact other TAGs. In an example embodiment, a MAC entity may have a configurable timer timeAlignmentTimer per TAG. The timeAlignmentTimer may be used to control how long the MAC entity considers the Serving Cells belonging to the associated TAG to be uplink time aligned. The MAC entity may, when a Timing Advance Command MAC control element is received, apply the Timing Advance Command for the indicated TAG and start or restart the timeAlignmentTimer associated with the indicated TAG.
The MAC entity may, when a Timing Advance Command is received in a Random Access Response message for a serving cell belonging to a TAG and/or if the Random Access Preamble was not selected by the MAC entity, apply the Timing Advance Command for this TAG and start or restart the timeAlignmentTimer associated with this TAG. Otherwise, if the timeAlignmentTimer associated with this TAG is not running, the Timing Advance Command for this TAG may be applied and the timeAlignmentTimer associated with this TAG started. When the contention resolution is considered not successful, the timeAlignmentTimer associated with this TAG may be stopped. Otherwise, the MAC entity may ignore the received Timing Advance Command. Example embodiments of the invention may enable operation of multiple PUCCH groups. Other example embodiments may comprise a non-transitory tangible computer readable medium comprising instructions executable by one or more processors to cause operation of PUCCH groups. Yet other example embodiments may comprise an article of manufacture that comprises a non-transitory tangible computer readable machine-accessible medium having instructions encoded thereon for enabling programmable hardware to cause a device (e.g. wireless communicator, UE, base station, etc.) to enable operation of PUCCH groups. The device may include processors, memory, interfaces, and/or the like. Other example embodiments may comprise communication networks comprising devices such as base stations, wireless devices (or user equipment: UE), servers, switches, antennas, and/or the like. In an example embodiment, one or more TAGs may be configured along with a PUCCH group configuration. FIG. 13 is an example MAC PDU as per an aspect of an embodiment of the present invention. In an example embodiment, a MAC PDU may comprise a MAC header, zero or more MAC Service Data Units (MAC SDUs), zero or more MAC control elements, and optionally padding. The MAC header and the MAC SDUs may be of variable sizes.
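The per-TAG timeAlignmentTimer behavior described above (start or restart from the initial value when a Timing Advance Command is applied; serving cells of the TAG are considered uplink time aligned only while the timer runs) can be sketched as follows. This is an illustrative sketch only; the class names, the subframe-based clock, and the duration value are hypothetical.

```python
class TimeAlignmentTimer:
    """Hypothetical per-TAG timeAlignmentTimer, clocked in subframes."""

    def __init__(self, duration_subframes):
        self.duration = duration_subframes
        self.expiry = None  # subframe at which the timer expires; None = never started

    def start_or_restart(self, now):
        # Starting or restarting always begins from the initial value.
        self.expiry = now + self.duration

    def is_running(self, now):
        return self.expiry is not None and now < self.expiry


class Tag:
    """A TAG with its associated timeAlignmentTimer."""

    def __init__(self, duration_subframes):
        self.tat = TimeAlignmentTimer(duration_subframes)

    def on_timing_advance_command(self, now):
        # Apply the Timing Advance Command for this TAG, then start or
        # restart the associated timeAlignmentTimer.
        self.tat.start_or_restart(now)


tag = Tag(duration_subframes=500)
assert not tag.tat.is_running(now=0)     # never started
tag.on_timing_advance_command(now=10)
assert tag.tat.is_running(now=100)       # running until subframe 510
assert not tag.tat.is_running(now=510)   # expired
```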
A MAC PDU header may comprise one or more MAC PDU subheaders. A subheader may correspond to either a MAC SDU, a MAC control element or padding. A MAC PDU subheader may comprise header fields R, F2, E, LCID, F, and/or L. The last subheader in the MAC PDU and subheaders for fixed-sized MAC control elements may comprise the four header fields R, F2, E, and/or LCID. A MAC PDU subheader corresponding to padding may comprise the four header fields R, F2, E, and/or LCID. In an example embodiment, the LCID or Logical Channel ID field may identify the logical channel instance of the corresponding MAC SDU or the type of the corresponding MAC control element or padding. There may be one LCID field for a MAC SDU, MAC control element or padding included in the MAC PDU. In addition to that, one or two additional LCID fields may be included in the MAC PDU when single-byte or two-byte padding is required but cannot be achieved by padding at the end of the MAC PDU. The LCID field size may be, e.g., 5 bits. The L or Length field may indicate the length of the corresponding MAC SDU or variable-sized MAC control element in bytes. There may be one L field per MAC PDU subheader except for the last subheader and subheaders corresponding to fixed-sized MAC control elements. The size of the L field may be indicated by the F field and F2 field. The F or Format field may indicate the size of the Length field. There may be one F field per MAC PDU subheader except for the last subheader and subheaders corresponding to fixed-sized MAC control elements, and except for when F2 is set to 1. The size of the F field may be 1 bit. In an example, if the F field is included and the size of the MAC SDU or variable-sized MAC control element is less than 128 bytes, the value of the F field may be set to 0; otherwise it may be set to 1. The F2 or Format2 field may indicate the size of the Length field. There may be one F2 field per MAC PDU subheader. The size of the F2 field may be 1 bit.
In an example, if the size of the MAC SDU or variable-sized MAC control element is larger than 32767 bytes and if the corresponding subheader is not the last subheader, the value of the F2 field may be set to 1; otherwise it may be set to 0. The E or Extension field may be a flag indicating if more fields are present in the MAC header or not. The E field may be set to “1” to indicate another set of at least R/F2/E/LCID fields. The E field may be set to “0” to indicate that either a MAC SDU, a MAC control element or padding starts at the next byte. The R field is a reserved bit, set to “0”. MAC PDU subheaders may have the same order as the corresponding MAC SDUs, MAC control elements and padding. MAC control elements may be placed before any MAC SDU. Padding may occur at the end of the MAC PDU, except when single-byte or two-byte padding is required. Padding may have any value and the MAC entity may ignore it. When padding is performed at the end of the MAC PDU, zero or more padding bytes may be allowed. When single-byte or two-byte padding is required, one or two MAC PDU subheaders corresponding to padding may be placed at the beginning of the MAC PDU before any other MAC PDU subheader. In an example, a maximum of one MAC PDU may be transmitted per TB per MAC entity, and a maximum of one MCH MAC PDU may be transmitted per TTI. At least one RRC message may provide configuration parameters for at least one cell and configuration parameters for PUCCH groups. The information elements in one or more RRC messages may provide a mapping between configured cells and PUCCH SCells. Cells may be grouped into a plurality of cell groups and a cell may be assigned to one of the configured PUCCH groups. There may be a one-to-one relationship between PUCCH groups and cells with configured PUCCH resources. At least one RRC message may provide a mapping between an SCell and a PUCCH group, and a PUCCH configuration on a PUCCH SCell.
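The F/F2 selection rules for the Length field described above can be sketched as follows. This is an illustrative sketch only; the function name is hypothetical, and the Length-field bit widths (7, 15 and 16 bits) are an assumption consistent with the byte thresholds given in the text (128 and 32767), not values stated in the text itself.

```python
def length_field_format(payload_len, is_last_subheader):
    """Return (F2, F, L_bits) for a subheader, per the rules above.

    F is None where the F field is omitted (last subheader, or F2 = 1).
    L_bits is the assumed Length-field width implied by the thresholds.
    """
    if is_last_subheader:
        # The last subheader carries no L field (only R/F2/E/LCID).
        return (0, None, 0)
    if payload_len > 32767:
        # F2 = 1: extended Length field; F is omitted (assumed 16-bit L).
        return (1, None, 16)
    if payload_len < 128:
        # F = 0: short Length field (assumed 7-bit L).
        return (0, 0, 7)
    # F = 1: long Length field (assumed 15-bit L).
    return (0, 1, 15)


assert length_field_format(50, False) == (0, 0, 7)
assert length_field_format(1000, False) == (0, 1, 15)
assert length_field_format(40000, False) == (1, None, 16)
assert length_field_format(40000, True) == (0, None, 0)
```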
System information (common parameters) for an SCell may be carried in a RadioResourceConfigCommonSCell in a dedicated RRC message. Some of the PUCCH related information may be included in common information of an SCell (e.g. in the RadioResourceConfigCommonSCell). Dedicated configuration parameters of an SCell and PUCCH resources may be configured by dedicated RRC signaling using, for example, RadioResourceConfigDedicatedSCell. The IE PUCCH-ConfigCommon and IE PUCCH-ConfigDedicated may be used to specify the common and the UE specific PUCCH configuration respectively. In an example, PUCCH-ConfigCommon may include: deltaPUCCH-Shift: ENUMERATED {ds1, ds2, ds3}; nRB-CQI: INTEGER (0 . . . 98); nCS-AN: INTEGER (0 . . . 7); and/or n1PUCCH-AN: INTEGER (0 . . . 2047). The parameters deltaPUCCH-Shift (ΔshiftPUCCH), nRB-CQI (NRB(2)), nCS-AN (NCS(1)), and n1PUCCH-AN (NPUCCH(1)) may be physical layer parameters of the PUCCH. PUCCH-ConfigDedicated may be employed. PUCCH-ConfigDedicated may include: ackNackRepetition CHOICE{release: NULL, setup: SEQUENCE {repetitionFactor: ENUMERATED {n2, n4, n6, spare1}, n1PUCCH-AN-Rep: INTEGER (0 . . . 2047)}}, tdd-AckNackFeedbackMode: ENUMERATED {bundling, multiplexing} OPTIONAL}. The ackNackRepetition parameter indicates whether ACK/NACK repetition is configured. For the repetitionFactor parameter (NANRep), n2 corresponds to repetition factor 2, n4 to 4, and so on. The n1PUCCH-AN-Rep parameter may be nPUCCH,ANRep(1,p) for antenna port P0 and for antenna port P1. The tdd-AckNackFeedbackMode parameter may indicate which of the TDD ACK/NACK feedback modes is used. The value bundling may correspond to use of ACK/NACK bundling, whereas the value multiplexing may correspond to ACK/NACK multiplexing. The same value may apply to both ACK/NACK feedback modes on PUCCH as well as on PUSCH. The parameter PUCCH-ConfigDedicated may include a simultaneousPUCCH-PUSCH parameter indicating whether simultaneous PUCCH and PUSCH transmission is configured.
An E-UTRAN may configure this field for the PCell when the nonContiguousUL-RA-WithinCC-Info is set to supported in the band on which the PCell is configured. The E-UTRAN may configure this field for the PSCell when the nonContiguousUL-RA-WithinCC-Info is set to supported in the band on which the PSCell is configured. The E-UTRAN may configure this field for the PUCCH SCell when the nonContiguousUL-RA-WithinCC-Info is set to supported in the band on which the PUCCH SCell is configured. A UE may transmit radio capabilities to an eNB to indicate whether the UE supports the configuration of PUCCH groups. The simultaneous PUCCH-PUSCH in the UE capability message may be applied to both a PCell and an SCell. Simultaneous PUCCH+PUSCH may be configured separately (using separate IEs) for a PCell and a PUCCH SCell. For example, a PCell and a PUCCH SCell may have different or the same configurations related to simultaneous PUCCH+PUSCH. The eNB may select the PUCCH SCell among current SCells or candidate SCells considering cell loading, carrier quality (e.g. using measurement reports), carrier configuration, and/or other parameters. From a functionality perspective, a PUCCH cell group management procedure may include a PUCCH cell group addition, a PUCCH cell group release, a PUCCH cell group change and/or a PUCCH cell group reconfiguration. The PUCCH cell group addition procedure may be used to add a secondary PUCCH cell group (e.g., to add a PUCCH SCell and one or more SCells in the secondary PUCCH cell group). In an example embodiment, cells may be released and added employing one or more RRC messages. In another example embodiment, cells may be released employing a first RRC message and then added employing a second RRC message. SCells, including a PUCCH SCell, may be in a deactivated state when they are configured. A PUCCH SCell may be activated after an RRC configuration procedure by an activation MAC CE. An eNB may transmit a MAC CE activation command to a UE.
The UE may activate an SCell in response to receiving the MAC CE activation command. In example embodiments, a timer is running once it is started, until it is stopped or until it expires; otherwise it may not be running. A timer can be started if it is not running or restarted if it is running. For example, a timer may be started or restarted from its initial value. The scheduling request (SR) is used for requesting UL-SCH resources for new transmission(s). In DC, an SR may be directly transmitted from a UE to an SeNB via a PSCell. This may reduce scheduling delay and related signaling load. When PUCCH groups are configured, SR resources may be configured on the PCell, a PUCCH SCell, or both. The possibility to have SR resources in PUCCH SCell(s) may allow better distribution of the SR load among the serving cells. In an example configuration, an SR for a UE may be transmitted on a serving cell, e.g. either on the PCell or on a given PUCCH SCell. In some scenarios, there may be more capacity available on the SCell, and this may be a reason to allocate more SR resources on a PUCCH SCell. If a PUCCH on an SCell carries SR signals, the chance of a UE-initiated RA on the PCell due to a scheduling request may be reduced, and signalling overhead and RACH resource usage may be reduced. In an example implementation, SR resources may be configured on a PUCCH SCell and no SR resources may be configured on the PCell. In an example implementation, the SR load may be shared among a PUCCH SCell and a PCell. SR resources may be configured on both the PCell and a PUCCH SCell. Whether to configure SR resources on the PCell, on the PUCCH SCell, or on both the PCell and the PUCCH SCell may be up to eNB and/or UE implementation. An SR_COUNTER may be increased when an SR is sent on either the PUCCH SCell or the PCell, and an sr-ProhibitTimer may be implemented to control the timing of SR transmission.
An SR process may employ SR resources on both a PCell and a PUCCH SCell, when both resources are configured. In an example implementation, SR resources may be interleaved in the time domain; for example, some subframes (TTIs) may include a valid SR resource on the PCell, and some other subframes may include a valid SR resource on the PUCCH SCell. In an example, as shown in FIG. 15, some TTIs may include a valid SR resource on the PCell, and some TTIs may include a valid SR resource on the PUCCH SCell. In an example implementation, some TTIs may include a valid SR resource on both the PCell and the PUCCH SCell. When SR is configured on both an activated PUCCH SCell and a PCell, the MAC entity uses whichever SR resource comes first. When an SR is triggered, it may be transmitted on the first valid SR resource available, regardless of whether the SR resource is on the PCell or the SCell. When SR is on a PUCCH SCell, there may be gain in terms of load balancing by allowing transmission of SR on an SCell. There may be some latency gain since there may be more SR resources available on the SCell. The UE may choose the first SR resource available for transmission of an SR. In an example, a valid SR resource on the PCell and the PUCCH SCell may overlap in time. A TTI may not include any valid SR resource, or may include more than one valid SR resource (on both the PCell and the PUCCH SCell). An eNB may employ different IEs for configuration of SR resources on the PCell and the PUCCH SCell. Example embodiments may be applicable to various SR configuration implementations on the PCell and the PUCCH SCell. In an example embodiment, SR resources may be configured by one or more information elements in an RRC message. For example, the SchedulingRequestConfig IE may be employed for configuration of PUCCH resources on the PCell and/or on a PUCCH SCell. The SchedulingRequestConfig IE may be used to specify some of the scheduling request related parameters.
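The first-come-first-used rule above (an SR triggered by the MAC entity is transmitted on the earliest valid SR resource, whether it lies on the PCell or on the PUCCH SCell) can be sketched as follows. This is an illustrative sketch only; the function name and the TTI-set representation are hypothetical.

```python
def first_sr_opportunity(trigger_tti, pcell_sr_ttis, scell_sr_ttis):
    """Return (tti, cell) of the earliest valid SR resource at or after
    the TTI in which the SR was triggered, or None if none remain."""
    candidates = [(t, "PCell") for t in pcell_sr_ttis if t >= trigger_tti]
    candidates += [(t, "PUCCH SCell") for t in scell_sr_ttis if t >= trigger_tti]
    return min(candidates, default=None)


# Interleaved configuration: PCell SR resources every 10 TTIs at offset 0,
# PUCCH SCell SR resources every 10 TTIs at offset 5 (cf. FIG. 15).
pcell = range(0, 100, 10)
scell = range(5, 100, 10)
assert first_sr_opportunity(12, pcell, scell) == (15, "PUCCH SCell")
assert first_sr_opportunity(16, pcell, scell) == (20, "PCell")
```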
The SchedulingRequestConfig IE may be included in a dedicated physical layer configuration IE of a UE configuration. The SchedulingRequestConfig IE may comprise an information element to set up or release scheduling resources and other parameters. The SchedulingRequestConfig IE may comprise PUCCH resource index (sr-PUCCH-ResourceIndex), SR configuration index (sr-ConfigIndex), and SR maximum transmission (dsr-TransMax) IEs. At least one RRC message may include a first SchedulingRequestConfig IE for configuration of SR resources on the PCell, and a second SchedulingRequestConfig IE for configuration of SR resources on the PUCCH SCell. sr-ConfigIndex may be configured, and sr-PUCCH-ResourceIndex (e.g. sr-PUCCH-ResourceIndex, sr-PUCCH-ResourceIndexP1) may be configured. sr-PUCCH-ResourceIndex and sr-PUCCH-ResourceIndexP1 may be nPUCCH,SRI(1) for antenna port P0 and for antenna port P1 respectively. E-UTRAN may configure sr-PUCCH-ResourceIndexP1 if sr-PUCCH-ResourceIndex is configured. At least one RRC message configuring the SR configuration may also include an sr-ProhibitTimer IE to be employed to determine a timer value for scheduling request processes. When an SR is triggered, the corresponding SR process may be considered as pending until it is cancelled. Pending SR(s) may be cancelled and sr-ProhibitTimer may be stopped when a MAC PDU is assembled and this PDU includes a BSR (Buffer Status Report) which contains buffer status up to (and including) the last event that triggered a BSR, or when the UL grant(s) can accommodate pending data available for transmission. If an SR is triggered and there is no other SR pending, the MAC entity may set the SR_COUNTER to 0. As long as one SR is pending, if no UL-SCH resources are available for a transmission in this TTI, and if the MAC entity has no valid PUCCH resource for SR configured in any TTI, the UE (e.g. the MAC entity) may initiate a Random Access procedure on the SpCell and cancel pending SRs.
In an example embodiment, if SR resources are configured on a PUCCH SCell and the PUCCH SCell is deactivated, the MAC entity may not have a valid PUCCH resource for transmitting an SR signal on the deactivated PUCCH SCell. If SR resources are not configured on a PUCCH SCell, the MAC entity may not have a valid PUCCH resource for SR on the PUCCH SCell. In an example embodiment, a UE may receive at least one RRC message comprising configuration parameters of one or more cells, and the RRC message may comprise configuration parameters of scheduling request resources and processes. At least one RRC message may comprise a first SR maximum transmission information element (IE) for the PCell and a second SR maximum transmission information element for the PUCCH SCell. The at least one message may comprise a common SR prohibit timer information element which is used for both the PCell and the PUCCH SCell. The at least one message may comprise a first scheduling request configuration index for scheduling request resources on the primary PUCCH, if SR resources on the PCell are configured. The first scheduling request configuration index may indicate a first scheduling request period and a first offset as shown in example FIG. 14. The at least one message may further comprise a second scheduling request configuration index for scheduling request resources on the secondary PUCCH, if SR resources are configured for a PUCCH SCell. The second scheduling request configuration index may indicate a second scheduling request period and a second offset as shown in example FIG. 14. In an example embodiment, an RRC message may comprise configuration parameters of SR resources on both a PCell and an SCell. In another example embodiment, a first RRC message may comprise configuration parameters of SR resources on the PCell and a second RRC message may comprise configuration parameters of SR resources on an SCell. The at least one RRC message may comprise the first RRC message and the second RRC message.
At least one RRC message configuring the SR configuration may also include an sr-ProhibitTimer information element comprising a timer value for scheduling request processes. The value of the IE sr-ProhibitTimer may be in number of SR period(s). Value 0 means no timer for SR transmission on PUCCH is configured. Value 1 corresponds to one SR period, value 2 corresponds to 2*SR periods, and so on. At least one RRC message configuring the SR configuration may also include a dsr-TransMax IE in the SchedulingRequestConfig IE. In an example embodiment, dsr-TransMax may take the value of n4, n8, n16, n32, or n64. The value n4 corresponds to 4 transmissions, n8 corresponds to 8 transmissions, and so on. A UE may be configured by higher layers to transmit the SR on one antenna port or two antenna ports of the serving cell with configured PUCCH. The scheduling request may be transmitted on the PUCCH resource(s) nPUCCH(1,{tilde over (p)})=nPUCCH,SRI(1,{tilde over (p)}) for {tilde over (p)} mapped to antenna port p, where nPUCCH,SRI(1,{tilde over (p)}) may be configured by higher layers unless the SR coincides in time with the transmission of HARQ-ACK using PUCCH Format 3, in which case the SR may be multiplexed with the HARQ-ACK. The SR configuration for SR transmission periodicity SRPERIODICITY and SR subframe offset NOFFSET,SR may be defined, for example, as shown in FIG. 14 by the parameter sr-ConfigIndex ISR given by higher layers. SR transmission instances in a serving cell configured with SR are the uplink subframes satisfying (10×nf+└ns/2┘−NOFFSET,SR) mod SRPERIODICITY=0. In an example embodiment, SR resources may be configured by the eNB in a way that TTIs with available SR resources in a PCell and an SCell do not overlap. The time difference between two subsequent subframes with SR resources may be reduced when SR resources are configured on both the PCell and the PUCCH SCell. When an SR is triggered, it may be considered as pending until it is cancelled.
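The SR transmission-instance condition above, (10×nf+└ns/2┘−NOFFSET,SR) mod SRPERIODICITY=0, where nf is the system frame number and ns the slot number (two slots per subframe), can be checked directly. This is an illustrative sketch only; the function name and the example periodicity/offset values are hypothetical.

```python
def is_sr_instance(nf, ns, sr_periodicity, n_offset_sr):
    """True if slot ns of frame nf is an SR transmission instance.

    Implements (10*nf + floor(ns/2) - N_OFFSET_SR) mod SR_PERIODICITY == 0;
    ns // 2 converts the slot number to a subframe number.
    """
    return (10 * nf + ns // 2 - n_offset_sr) % sr_periodicity == 0


# With a 10-subframe periodicity and offset 3, the SR opportunity in
# each frame falls in subframe 3 (slots 6 and 7).
assert is_sr_instance(nf=0, ns=6, sr_periodicity=10, n_offset_sr=3)
assert not is_sr_instance(nf=0, ns=4, sr_periodicity=10, n_offset_sr=3)
assert is_sr_instance(nf=1, ns=6, sr_periodicity=10, n_offset_sr=3)
```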
Pending SR(s) may be cancelled and sr-ProhibitTimer may be stopped when a MAC PDU is assembled and this PDU includes a BSR (Buffer Status Report) which contains buffer status up to (and including) the last event that triggered a BSR, or when the UL grant(s) can accommodate pending data available for transmission. If an SR is triggered and there is no other SR pending, the MAC entity may set the SR_COUNTER to 0. In an example embodiment, whether to configure scheduling request only on the PCell, only on the PUCCH SCell, or on both the PCell and the PUCCH SCell is up to eNB implementation. When SR is configured on both an activated PUCCH SCell and the PCell, the MAC entity may use whichever SR opportunity comes first for SR transmission. Based on the UE implementation, the MAC entity may choose one of the SR resources when SR resources are configured on PUCCH SCell(s) and the PCell in the same TTI. In a MAC entity, there may be only one scheduling request procedure regardless of whether scheduling request is configured on multiple cells, e.g. one SR_COUNTER which is increased when an SR is sent on either the PCell or the PUCCH SCell, and one sr-ProhibitTimer. In a wireless device, as long as one SR is pending, if no UL-SCH resources are available for a transmission in this TTI, and if the MAC entity has no valid PUCCH resource for SR configured in any TTI, the wireless device may initiate a Random Access procedure and cancel pending SRs. In an example embodiment, if SR resources are configured on a PUCCH SCell and the PUCCH SCell is deactivated, the MAC entity may not have a valid PUCCH resource for SR on the deactivated PUCCH SCell. If SR is not configured on a PUCCH SCell, the MAC entity may not have a valid PUCCH resource for SR on the PUCCH SCell. If SR resources are configured on a PUCCH SCell and the TAT associated with the TAG of the PUCCH SCell is not running, the MAC entity may not have a valid PUCCH resource for transmitting SR on the PUCCH SCell.
In an example embodiment, a PUCCH SCell has valid SR resources in a subframe if SR is configured for the SCell in the subframe, the PUCCH SCell is activated in the subframe, and the TAT associated with the TAG of the PUCCH SCell is running in the subframe. If the TAT of a PUCCH SCell expires, then the PUCCH resources of the SCell are released and the PUCCH SCell is no longer considered an SCell with configured PUCCH and SR resources. SR resources may be configured for an SCell that is in a TAG whose TAT is not running. In such a case, the SCell does not have valid SR resources until the TAG is uplink synchronized. When SR resources are not configured for a serving cell, that serving cell does not have valid SR resources. Various example scenarios when an SCell has valid or invalid SR resources are illustrated in FIGS. 16, 17, 18, and 19. FIG. 16 shows example events and states corresponding to a secondary cell as per an aspect of an embodiment of the present invention. When an SCell with configured PUCCH resources is deactivated, the SCell does not include valid SR resources. For example, subframe A does not have valid SR resources. In an example embodiment, a PUCCH SCell has valid SR resources in a subframe if SR is configured for the SCell in the subframe, the PUCCH SCell is activated in the subframe, and the TAT associated with the TAG of the PUCCH SCell is running in the subframe. For example, subframe B is configured with SR resources and includes valid SR resources. FIG. 17 shows example events and states corresponding to a secondary cell as per an aspect of an embodiment of the present invention. When an SCell with configured PUCCH resources is deactivated, the SCell does not include valid SR resources. For example, subframe C does not have valid SR resources. When an SCell with configured PUCCH resources is activated but is in a TAG whose TAT is not running, the SCell does not include valid SR resources. For example, subframe D does not have valid SR resources.
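The three-part validity rule above (SR configured in the subframe, SCell activated, and the TAG's TAT running) can be stated as a single predicate. This is an illustrative sketch only; the function name is hypothetical.

```python
def scell_has_valid_sr_resource(sr_configured, scell_activated, tat_running):
    """A PUCCH SCell has valid SR resources in a subframe only when all
    three conditions described above hold in that subframe."""
    return sr_configured and scell_activated and tat_running


# Subframes A/C: SCell deactivated -> invalid SR resources.
assert not scell_has_valid_sr_resource(True, False, True)
# Subframe D: SCell activated but TAT of its TAG not running -> invalid.
assert not scell_has_valid_sr_resource(True, True, False)
# Subframe B: all conditions met -> valid SR resources.
assert scell_has_valid_sr_resource(True, True, True)
```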
In an example embodiment, a PUCCH SCell has valid SR resources in a subframe if SR is configured for the SCell in the subframe, the PUCCH SCell is activated in the subframe, and the TAT associated with the TAG of the PUCCH SCell is running in the subframe. For example, subframe E is configured with SR resources and includes valid SR resources. In an example embodiment, a wireless device may receive at least one message comprising: i) first configuration parameters of a primary cell in a plurality of cells, the plurality of cells being grouped into a plurality of timing advance groups (TAGs) comprising a primary TAG and a secondary TAG; and ii) second configuration parameters of a secondary cell, the secondary cell having configured second SR resources in a first plurality of subframes. The wireless device may receive a MAC activation command for activation of the secondary cell in the secondary TAG. The secondary TAG may have a timing alignment timer (TAT) that is not running. The wireless device may determine that the secondary cell has invalid SR resources in a subframe in the first plurality of subframes when: an SR is pending in the subframe; the TAT is not running in the subframe; the second SR resources are configured in the subframe; and the wireless device has no uplink data channel resources available for transmission in the subframe. In an example embodiment, a wireless device may receive at least one message comprising: i) first configuration parameters of a primary cell in a plurality of cells, the plurality of cells being grouped into a plurality of timing advance groups (TAGs) comprising a secondary TAG; and ii) second configuration parameters of a secondary cell, the secondary cell having configured second SR resources in a first plurality of subframes. The wireless device may receive a MAC activation command for activation of the secondary cell in the secondary TAG. The secondary TAG may have a timing alignment timer (TAT) that is not running.
The wireless device may determine that the secondary cell has invalid SR resources in a subframe in the first plurality of subframes when: an SR is pending in the subframe; the TAT is not running in the subframe; and the second SR resources are configured in the subframe. In an example embodiment, a UE may receive at least one RRC message comprising configuration parameters of scheduling request resources and processes. At least one RRC message may comprise a first Scheduling Request Configuration IE for the PCell and a second Scheduling Request Configuration IE for an SCell. Scheduling Request Configuration IE(s) may comprise parameters (IEs) such as sr-PUCCH-ResourceIndex, sr-ConfigIndex, dsr-TransMax, and/or sr-PUCCH-ResourceIndexP1. The at least one RRC message may also comprise a MAC-Main configuration IE comprising an SR prohibit timer information element. In an example embodiment, if the MAC entity has at least one valid PUCCH resource for SR configured for this TTI, if this TTI is not part of a measurement gap, and if the sr-ProhibitTimer is not running, then: if SR_COUNTER<dsr-TransMax, the UE may perform one or more of the following: increment SR_COUNTER by 1; instruct the physical layer to signal the SR on one valid SR resource; and/or start the sr-ProhibitTimer. Otherwise (this may be considered an SR failure), the UE may perform one, more than one, or all of the following: notify the RRC to release PUCCH/SRS for one or more first serving cells; clear configured downlink assignments and uplink grants; and/or initiate a Random Access procedure (e.g. on the PCell) and cancel pending SRs. In an example embodiment, in a MAC entity, there may be only one scheduling request procedure regardless of whether scheduling request is configured on multiple cells, e.g., one SR_COUNTER which is increased when an SR is sent on either the PCell or the PUCCH SCell, and one sr-ProhibitTimer.
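The single-procedure behavior above (one SR_COUNTER and one sr-ProhibitTimer shared across the PCell and the PUCCH SCell, with fallback to Random Access when SR_COUNTER reaches dsr-TransMax) can be sketched as a small state machine. This is an illustrative sketch only; the class name, the string return values, and the simplified timer modeling are hypothetical, and the SR-failure branch is collapsed into a single "random access" action.

```python
class SrProcedure:
    """Hypothetical single SR procedure shared across PUCCH cells."""

    def __init__(self, dsr_trans_max):
        self.dsr_trans_max = dsr_trans_max
        self.sr_counter = 0                    # one SR_COUNTER per MAC entity
        self.prohibit_timer_running = False    # one sr-ProhibitTimer

    def on_sr_opportunity(self):
        """Action in a TTI with one valid SR resource (PCell or SCell)."""
        if self.prohibit_timer_running:
            return "wait"                      # sr-ProhibitTimer still running
        if self.sr_counter < self.dsr_trans_max:
            self.sr_counter += 1               # increment SR_COUNTER by 1
            self.prohibit_timer_running = True # start the sr-ProhibitTimer
            return "signal SR"                 # signal SR on the valid resource
        # SR failure: release PUCCH/SRS, clear grants, start Random Access.
        return "random access"


proc = SrProcedure(dsr_trans_max=2)
assert proc.on_sr_opportunity() == "signal SR"
assert proc.on_sr_opportunity() == "wait"     # blocked by sr-ProhibitTimer
proc.prohibit_timer_running = False           # assume the timer expired
assert proc.on_sr_opportunity() == "signal SR"
proc.prohibit_timer_running = False
assert proc.on_sr_opportunity() == "random access"
```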
In an example embodiment, a UE may transmit an SR signal on a valid PUCCH resource for SR. A UE may initiate a Random Access procedure (e.g. on the PCell) and cancel all pending SRs if the UE has no valid SR resource configured in any TTI. A valid SR resource may be available for SR on a PCell or SPCell when SR is configured on the cell. A valid SR resource may be available for SR on an SCell if SR resources are configured on the SCell, the SCell is activated, and the TAT of the TAG associated with the PUCCH SCell is running. SR signal transmissions may not be allowed on a deactivated SCell. The SR resource on a deactivated SCell may be considered an invalid SR resource. The SR resource on an SCell of a TAG whose TAT is not running may not be considered a valid SR resource. The SR configuration on an SCell of a TAG whose TAT has expired may be released. When the SR configuration on a cell is released, SR resources are not available on the cell. In an example embodiment, valid SR resources for SR may include the SR resources configured on a PCell that belongs to a TAG whose TAT is running. Valid SR resources for SR may include the SR resources configured on an activated PUCCH SCell that belongs to a TAG whose TAT is running. In an example embodiment, SR resources may be configured on an SCell but not on the PCell. If SR is triggered and the PUCCH SCell is deactivated, Random Access may be triggered (e.g. on the PCell). SR resources on an SCell may not be released upon PUCCH SCell deactivation by a UE. In an example embodiment, SR resources may be configured on an SCell but not on the PCell. If SR is triggered and the PUCCH SCell is activated in a TAG whose TAT is not running, Random Access may be triggered (e.g. on the PCell). In an example embodiment, a wireless device may receive at least one RRC message comprising: i) first configuration parameters of a primary cell in a plurality of cells. 
The plurality of cells are grouped into a plurality of timing advance groups (TAGs) comprising a primary and a secondary TAG. The primary cell may have no scheduling request (SR) resources. ii) second configuration parameters of a secondary cell. The secondary cell may have configured second SR resources in a first plurality of subframes. The wireless device may receive a MAC activation command for activation of the secondary cell in the secondary TAG. The secondary TAG may have a timing alignment timer (TAT) that is not running. The wireless device may initiate a random access procedure when: an SR is pending in a subframe in the first plurality of subframes; the TAT is not running in the subframe; and no uplink data channel resources are available for transmission in the subframe. The second SR resources may be configured in the subframe. In an example embodiment, a wireless device may receive at least one first RRC message comprising first configuration parameters of a primary cell in a plurality of cells. The plurality of cells are grouped into a plurality of timing advance groups (TAGs) comprising a secondary TAG. The primary cell has no scheduling request (SR) resources. The wireless device may receive at least one second message comprising second configuration parameters of a secondary cell in the secondary TAG. The secondary cell has configured second SR resources and is in an activated state. The secondary TAG has a timing alignment timer (TAT) that is not running. The wireless device may initiate a random access procedure when: the secondary cell is activated; an SR is pending in a subframe; the TAT is not running in the subframe; and no uplink data channel resources are available for transmission in the subframe. Various example scenarios in which an SCell has valid or invalid SR resources are illustrated in FIGS. 16, 17, 18, and 19. FIG. 18 shows example events and states corresponding to a secondary cell as per an aspect of an embodiment of the present invention. 
When an SCell with configured PUCCH/SR resources is activated and uplink synchronized (the corresponding TAT is running), the SCell includes valid SR resources. For example, subframe F may include valid SR resources. In an example embodiment, when the TAT of a TAG including the SCell expires, the UE/eNB may release PUCCH and SR resources on the SCell. The SCell may not have any valid SR resources. For example, subframe G may not have valid SR resources. The eNB may transmit one or more RRC messages to configure the SCell with PUCCH/SR resources. The SCell may then be activated and have configured PUCCH/SR resources. The TAT of the TAG including the SCell may not be running, and the SCell may not include any valid SR resources. For example, subframe H may not have any valid SR resources. The eNB may initiate a random access process on the TAG. The PUCCH SCell has valid SR resources in a subframe if SR is configured for the SCell in the subframe, the PUCCH SCell is activated in the subframe, and the TAT associated with the TAG of the PUCCH SCell is running in the subframe. For example, subframe I is configured with SR resources and includes valid SR resources. FIG. 19 shows example events and states corresponding to a secondary cell as per an aspect of an embodiment of the present invention. The PUCCH SCell has valid SR resources in a subframe if SR is configured for the SCell in the subframe, the PUCCH SCell is activated in the subframe, and the TAT associated with the TAG of the PUCCH SCell is running in the subframe. For example, subframe J is configured with SR resources and includes valid SR resources. A deactivated PUCCH SCell may not include valid SR resources. For example, subframe J1 may not include any valid SR resources. FIG. 20 shows example events and states corresponding to a secondary cell as per an aspect of an embodiment of the present invention. The SCell with configured SR resources may belong to a TAG that is uplink synchronized (its TAT is running). 
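The validity conditions repeated in the figure descriptions above (SR configured, SCell activated, TAT of the associated TAG running) can be collected into a small predicate. This is an illustrative sketch only: the boolean inputs and function name are assumptions, and the PCell/SPCell case is simplified per the statement that a PCell is always activated when configured.

```python
def has_valid_sr_resource(is_scell: bool, sr_configured: bool,
                          activated: bool, tat_running: bool) -> bool:
    # SR resources must be configured on the cell in the subframe in any case.
    if not sr_configured:
        return False
    if not is_scell:
        # Simplification: a PCell/SPCell is always activated when configured.
        return True
    # A PUCCH SCell must additionally be activated and uplink synchronized
    # (the TAT of its TAG must be running).
    return activated and tat_running

# Examples mirroring the subframes discussed above.
assert has_valid_sr_resource(True, True, True, True)        # e.g. subframe F
assert not has_valid_sr_resource(True, True, False, True)   # deactivated SCell
assert not has_valid_sr_resource(True, True, True, False)   # TAT not running
```
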
The PUCCH SCell has valid SR resources in a subframe if SR is configured for the SCell in the subframe, the PUCCH SCell is activated in the subframe, and the TAT associated with the TAG of the PUCCH SCell is running in the subframe. For example, subframe K is configured with SR resources and includes valid SR resources. When the TAT of the TAG expires, the PUCCH of the SCell may be released and the SCell may not have valid SR resources. A PUCCH SCell that is not uplink synchronized (is out of sync) may not include valid SR resources. For example, subframe L may not include any valid SR resources. FIGS. 16, 17, 18, and 19 show various example events and states corresponding to a secondary cell as per an aspect of an embodiment of the present invention. In an example, in subframe A in FIG. 16, a wireless device may initiate a random access process when an SR is pending in subframe A and no uplink shared channel resources are available in subframe A. The same applies to subframes C, D, G, and H in FIG. 17 and FIG. 18. The wireless device may transmit an SR request when the PUCCH SCell has valid SR resources, for example, in subframes B, E, F, and I, when these subframes are configured with SR resources on the PUCCH SCell. In an example embodiment, consider a scenario in which SR resources are not configured on the PCell and are configured on the PUCCH SCell. The wireless device may transmit an SR on a valid SR resource of an SCell in a first subframe as a part of an SR process. In an example, in a second subframe after the first subframe, the PUCCH SCell may be deactivated, for example because the wireless device received a MAC activation/deactivation command deactivating the PUCCH SCell. In another example, in a second subframe after the first subframe, a TAT of a timing advance group associated with the PUCCH SCell may expire. 
The wireless device may cancel the SR and initiate a random access process if the SR process is pending in the second subframe and no uplink data channel resources are available for transmission in the second subframe. In an example embodiment, the wireless device may not wait for the SR prohibit timer to expire before initiating the random access procedure. In an example embodiment, the wireless device may not wait for an uplink grant in response to the SR and may initiate the random access procedure. For example, the SR prohibit timer may be 80 msec. The wireless device may transmit an SR in subframe n. In subframe n+2, the wireless device may cancel the SR process and initiate a random access process if the PUCCH SCell is deactivated and/or the TAT of the TAG associated with the PUCCH SCell is expired. This process may reduce the delay for receiving an uplink grant. In example FIG. 19 and FIG. 20, the wireless device may transmit an SR in subframes J and K. In FIG. 19, if an SR process is pending and the SCell is deactivated and no uplink shared channel resources are available to the wireless device, then the wireless device may initiate a random access process. In FIG. 20, if an SR is pending and the TAT of the TAG associated with the PUCCH SCell is expired and no uplink shared channel resources are available to the wireless device, then the wireless device may initiate a random access process. An eNB may or may not have received the SR request. The wireless device may not count on the eNB to respond to the pending SR and may initiate the random access process. In some example scenarios, the eNB may receive both the SR and the random access preamble. In an example embodiment, a wireless device may receive at least one message comprising: a) first configuration parameters of a primary cell in a plurality of cells. The primary cell has no scheduling request (SR) resources. b) second configuration parameters of a secondary cell. The secondary cell has configured second SR resources. 
The wireless device may transmit an SR associated with an SR process in the second SR resources in a first subframe. The wireless device may monitor at least one downlink control channel for a grant for uplink data channel resources. The wireless device may cancel the SR process and initiate a random access procedure when: the SR process is pending; a TAT of a timing advance group associated with the secondary cell expires (the TAT is not running); and no uplink data channel resources are available for transmission. In an example embodiment, a wireless device may receive at least one message comprising: a) first configuration parameters of a primary cell in a plurality of cells. The primary cell has no scheduling request (SR) resources. b) second configuration parameters of a secondary cell. The secondary cell has configured second SR resources. The wireless device may transmit an SR associated with an SR process in the second SR resources in a first subframe. The wireless device may monitor at least one downlink control channel for a grant for uplink data channel resources. The wireless device may cancel the SR process and initiate a random access procedure when: the SR process is pending; said secondary cell is deactivated; and no uplink data channel resources are available for transmission. The random access procedure may be initiated by the UE on the PCell and may be a contention based random access procedure. SR resources may remain configured on the SCell during the random access procedure, until and after the random access process is successful and the UE is granted uplink resources. If a TAT of the sTAG including the SCell is not running, the eNB may trigger a random access process to uplink synchronize the secondary TAG. When the secondary TAG is uplink synchronized (its TAT is running), then the UE may be able to employ the SR resources of the SCell to transmit an SR request. 
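The fallback behavior described above, in which a pending SR on a PUCCH SCell is cancelled in favor of random access when the SCell is deactivated or the TAT of its TAG stops running, can be sketched as follows. The function name, string outcomes, and boolean inputs are illustrative assumptions, not an actual MAC implementation.

```python
def handle_pending_sr(sr_pending: bool, scell_deactivated: bool,
                      tat_running: bool, ul_resources_available: bool) -> str:
    # No pending SR, or uplink resources already cover the data: nothing to do.
    if not sr_pending or ul_resources_available:
        return "no_action"
    # SCell deactivated or TAT expired: cancel the SR process and initiate
    # random access (e.g. on the PCell), without waiting for the SR prohibit
    # timer to expire or for an uplink grant in response to the SR.
    if scell_deactivated or not tat_running:
        return "cancel_sr_and_initiate_random_access"
    # Otherwise keep the SR pending and continue monitoring for a grant.
    return "keep_monitoring_for_grant"
```

For example, in subframe n+2 after an SR transmitted in subframe n, a deactivated PUCCH SCell (`handle_pending_sr(True, True, True, False)`) takes the random access branch.
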
An eNB may configure SR of a UE on the PCell or a PUCCH SCell or both, depending on many factors, e.g. resource availability, reliability, and/or other implementation-related inputs. In an example embodiment, the MAC entity may consider that it has a valid PUCCH resource for SR in an SCell in a subframe if a PUCCH resource is configured in the SCell in the subframe, the SCell is activated in the subframe, and the TAT of the TAG associated with the SCell is running in the subframe. The MAC entity considers that it has a valid PUCCH resource for SR in a PCell/SPCell if a PUCCH resource is configured in the PCell/SPCell (a PCell/SPCell is always activated when configured). The MAC entity may not consider that it has a valid PUCCH resource for SR in an SCell if no PUCCH resource is configured in the SCell. In a subframe, the MAC entity may not consider that it has a valid PUCCH resource for SR in an SCell if a PUCCH resource is configured in the SCell in the subframe and the SCell is deactivated in the subframe. In a subframe, the MAC entity may not consider that it has a valid PUCCH resource for SR in an SCell if a PUCCH resource is configured in the SCell in the subframe and the TAT of the TAG associated with the SCell is not running in the subframe. A PUCCH SCell may be configured and then activated in a TAG that is not yet synchronized or is out-of-sync (its TAT is expired). In such scenarios, the TAT of the TAG associated with the PUCCH SCell is not running, while PUCCH resources and SR resources are configured. The UE (e.g. MAC entity) may not consider that the PUCCH SCell has valid SR resources. FIG. 21 is an example flow diagram as per an aspect of an embodiment of the present invention. A wireless device may receive at least one message from a base station at 2110. The wireless device may comprise one or more processors and memory storing instructions that, when executed, cause the wireless device to perform a method. The message(s) may comprise configuration parameters of a plurality of cells. 
According to an embodiment, the plurality of cells may be grouped into a plurality of physical uplink control channel (PUCCH) groups. The PUCCH groups may comprise a primary PUCCH group comprising the primary cell with a primary PUCCH transmitted to a base station. The PUCCH groups may comprise a secondary PUCCH group comprising the secondary cell with a secondary PUCCH transmitted to the base station. According to an embodiment, the configuration parameters may comprise: first configuration parameters of a primary PUCCH for the primary cell; and second configuration parameters of a secondary PUCCH for the secondary cell. The plurality of cells may comprise a primary cell and a secondary cell. The primary cell may have no configured scheduling request (SR) resources. The secondary cell may have configured SR resources. The secondary cell may be in a secondary timing advance group (sTAG). According to an embodiment, the SR resources may be configured on the secondary cell in the subframe. According to an embodiment, the plurality of cells may be grouped into a plurality of timing advance groups (TAGs). The TAGs may comprise a primary TAG and an sTAG. The primary TAG may comprise the primary cell. The sTAG may comprise the secondary cell. According to an embodiment, the at least one message may comprise a time alignment timer parameter for the sTAG and a time alignment timer for the pTAG. An activation command indicating activation of the secondary cell may be received by the wireless device from the base station at 2120. The secondary cell may be in an activated state in the subframe. According to an embodiment, the method may further comprise initiating the SR process when the wireless device has one or more uplink packets for transmission. A random access procedure may be initiated at 2130. 
The random access procedure may be initiated when in the subframe an SR process is pending; no uplink data channel resources are available for transmission; and a time alignment timer of the sTAG is not running. According to an embodiment, the method may further comprise transmitting a random access preamble in random access resources of the primary cell. According to an embodiment, the method may further comprise receiving a random access response comprising an uplink grant. FIG. 22 is an example flow diagram as per an aspect of an embodiment of the present invention. A wireless device may receive at least one message at 2210. The wireless device may comprise one or more processors and memory storing instructions that, when executed, cause the wireless device to perform a method. The at least one message may comprise configuration parameters of a secondary cell configured with scheduling request (SR) resources. The secondary cell may be in a secondary timing advance group (sTAG). According to an embodiment, the at least one message may comprise configuration parameters of a plurality of cells. The plurality of cells may be grouped into a plurality of physical uplink control channel (PUCCH) groups. The PUCCH groups may comprise a primary PUCCH group and a secondary PUCCH group. The primary PUCCH group may comprise a primary cell with a primary PUCCH transmitted to a base station. The secondary PUCCH group may comprise the secondary cell with a secondary PUCCH transmitted to the base station. According to an embodiment, the at least one message may comprise first configuration parameters of a primary PUCCH for a primary cell, and second configuration parameters of a secondary PUCCH for the secondary cell. An activation command may be received at 2220. The activation command may indicate activation of the secondary cell. According to an embodiment, the secondary cell may be in an activated state in the subframe. 
According to an embodiment, the at least one message may comprise second configuration parameters of a plurality of cells. The plurality of cells may be grouped into a plurality of timing advance groups (TAGs). The plurality of TAGs may comprise: a primary TAG comprising a primary cell; and the sTAG comprising the secondary cell. According to an embodiment, the at least one message may comprise a time alignment timer parameter for the sTAG and a time alignment timer for the pTAG. According to an embodiment, the method may further comprise initiating the SR process when the wireless device has one or more uplink packets for transmission. At 2230, the secondary cell may be determined to have an invalid SR resource when first conditions are met. For example, at 2230, the secondary cell may be determined to have an invalid SR resource when in a subframe: an SR process is pending; the SR resources are configured in the subframe; and a time alignment timer of the sTAG is not running in the subframe. According to an embodiment, the SR resources may be configured on the secondary cell in the subframe. According to an embodiment, no uplink data channel resources may be available for transmission in the subframe. According to an embodiment, the method may further comprise initiating a random access process on the primary cell when no uplink data channel resources are available for transmission in the subframe. FIG. 23 is an example flow diagram as per an aspect of an embodiment of the present invention. A wireless device may receive at least one message at 2310. The wireless device may comprise one or more processors and memory storing instructions that, when executed, cause the wireless device to perform a method. The at least one message may comprise configuration parameters of a plurality of cells. The plurality of cells may comprise: a primary cell with no configured scheduling request (SR) resources; and a secondary cell with configured SR resources. 
The secondary cell may be in a secondary timing advance group (sTAG). According to an embodiment, the secondary cell may be in an activated state in the subframe. According to an embodiment, the plurality of cells may be grouped into a plurality of physical uplink control channel (PUCCH) groups. The PUCCH groups may comprise a primary PUCCH group and a secondary PUCCH group. The primary PUCCH group may comprise the primary cell with a primary PUCCH transmitted to a base station. The secondary PUCCH group may comprise the secondary cell with a secondary PUCCH transmitted to the base station. According to an embodiment, the at least one message may comprise: first configuration parameters of a primary PUCCH for the primary cell; and second configuration parameters of a secondary PUCCH for the secondary cell. According to an embodiment, the method may further comprise the wireless device initiating the SR process when the wireless device has one or more uplink packets for transmission. At 2320, an SR associated with an SR process may be transmitted in the SR resources. An SR prohibit timer may not be running. At 2330, at least one downlink control channel may be monitored for uplink grants. At 2340, a random access procedure may be initiated and the SR process cancelled when in a subframe: the SR process is pending; no uplink data channel resources are available for transmission; and a time alignment timer of the sTAG is not running. According to an embodiment, the plurality of cells may be grouped into a plurality of timing advance groups (TAGs). The TAGs may comprise a primary TAG and the sTAG. The primary TAG may comprise the primary cell. The sTAG may comprise the secondary cell. According to an embodiment, the at least one message may comprise a time alignment timer parameter for the sTAG and a time alignment timer for the pTAG. 
According to an embodiment, the method may further comprise transmitting a random access preamble in random access resources of the primary cell. According to an embodiment, the method may further comprise receiving a random access response comprising an uplink grant. The configured set of serving cells for a UE may, according to some embodiments, comprise one PCell and one or more SCells. According to an embodiment, a PhysicalConfigDedicated IE may be employed to specify UE specific physical channel configuration(s). According to an embodiment: PhysicalConfigDedicated may comprise SEQUENCE {schedulingRequestConfig SchedulingRequestConfig OPTIONAL, —Need ON [ . . . ]}. According to an embodiment: PhysicalConfigDedicatedSCell may comprise SEQUENCE {schedulingRequestConfig SchedulingRequestConfig OPTIONAL, —Need ON [ . . . ]}. According to an embodiment, a Timing Advance Group may comprise a group of serving cells that may be configured by RRC and that, for the cells with an UL configured, use the same timing reference cell and the same Timing Advance value. A Primary Timing Advance Group may comprise a Timing Advance Group comprising the PCell. A Secondary Timing Advance Group may comprise a Timing Advance Group not containing the PCell. According to an embodiment, the Activation/Deactivation MAC control element may be identified by a MAC PDU subheader with an LCID. The Scheduling Request (SR) may be employed for requesting UL-SCH resources for a new transmission. When an SR is triggered, it may be considered as pending until it is cancelled. Pending SR(s) may be cancelled and sr-ProhibitTimer may be stopped when a MAC PDU is assembled and this PDU comprises a BSR which contains buffer status up to (and including) the last event that triggered a BSR. 
Pending SR(s) may be cancelled and sr-ProhibitTimer may be stopped if pending SR(s) are triggered by Sidelink BSR, when a MAC PDU is assembled and this PDU comprises a Sidelink BSR which comprises buffer status up to (and including) the last event that triggered a Sidelink BSR. Pending SR(s) may be cancelled and sr-ProhibitTimer may be stopped if pending SR(s) are triggered by Sidelink BSR, when upper layers configure autonomous resource selection. Pending SR(s) may be cancelled and sr-ProhibitTimer may be stopped when the UL grant(s) may accommodate pending data available for transmission. According to an embodiment, if an SR is triggered and there is no other SR pending, the MAC entity may set an SR_COUNTER to 0. According to an embodiment, as long as one SR is pending, the MAC entity may for each TTI, if no UL-SCH resources are available for a transmission in this TTI, if the MAC entity has no valid PUCCH resource for SR configured in any TTI: initiate a Random Access procedure on the SpCell and cancel pending SRs. Else if the MAC entity has at least one valid PUCCH resource for SR configured for this TTI and if this TTI is not part of a measurement gap and if sr-ProhibitTimer is not running: if SR_COUNTER<dsr-TransMax: increment SR_COUNTER by 1; instruct the physical layer to signal the SR on one valid PUCCH resource for SR; and start the sr-ProhibitTimer. Else: notify RRC to release PUCCH for serving cells; notify RRC to release SRS for serving cells; clear any configured downlink assignments and uplink grants; and initiate a Random Access procedure on the SpCell and cancel pending SRs. The selection of which valid PUCCH resource for SR to signal SR on when the MAC entity has more than one valid PUCCH resource for SR in one TTI may be left to a UE implementation. 
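The per-TTI pending-SR handling summarized above can be sketched as a single state-machine step. The dictionary-based state, function name, and returned action strings are illustrative assumptions for readability; the actual procedure is specified in MAC terms, not code.

```python
def sr_step(state: dict, valid_resource_this_tti: bool,
            valid_resource_any_tti: bool, ul_sch_available: bool,
            in_measurement_gap: bool) -> str:
    # Only act while an SR is pending and no UL-SCH resources are available.
    if ul_sch_available or not state["sr_pending"]:
        return "none"
    if not valid_resource_any_tti:
        # No valid PUCCH resource for SR configured in any TTI: initiate
        # Random Access on the SpCell and cancel pending SRs.
        state["sr_pending"] = False
        return "random_access"
    if (valid_resource_this_tti and not in_measurement_gap
            and not state["sr_prohibit_timer_running"]):
        if state["SR_COUNTER"] < state["dsr_trans_max"]:
            state["SR_COUNTER"] += 1
            state["sr_prohibit_timer_running"] = True  # start sr-ProhibitTimer
            return "signal_sr"
        # SR failure: release PUCCH/SRS, clear assignments and grants,
        # then Random Access on the SpCell and cancel pending SRs.
        state["sr_pending"] = False
        return "release_and_random_access"
    return "none"
```

A step that signals the SR increments SR_COUNTER and marks the prohibit timer as running, so the next TTI with the timer still running returns no action, mirroring the procedure described above.
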
In this specification, “a” and “an” and similar phrases are to be interpreted as “at least one” and “one or more.” In this specification, the term “may” is to be interpreted as “may, for example.” In other words, the term “may” is indicative that the phrase following the term “may” is an example of one of a multitude of suitable possibilities that may, or may not, be employed in one or more of the various embodiments. If A and B are sets and every element of A is also an element of B, A is called a subset of B. In this specification, only non-empty sets and subsets are considered. For example, possible subsets of B={cell1, cell2} are: {cell1}, {cell2}, and {cell1, cell2}. In this specification, parameters (information elements: IEs) may comprise one or more objects, and each of those objects may comprise one or more other objects. For example, if parameter (IE) N comprises parameter (IE) M, and parameter (IE) M comprises parameter (IE) K, and parameter (IE) K comprises parameter (information element) J, then, for example, N comprises K, and N comprises J. In an example embodiment, when one or more messages comprise a plurality of parameters, it implies that a parameter in the plurality of parameters is in at least one of the one or more messages, but does not have to be in each of the one or more messages. Many of the elements described in the disclosed embodiments may be implemented as modules. A module is defined here as an isolatable element that performs a defined function and has a defined interface to other elements. The modules described in this disclosure may be implemented in hardware, software in combination with hardware, firmware, wetware (i.e. hardware with a biological element) or a combination thereof, all of which are behaviorally equivalent. 
For example, modules may be implemented as a software routine written in a computer language configured to be executed by a hardware machine (such as C, C++, Fortran, Java, Basic, Matlab or the like) or a modeling/simulation program such as Simulink, Stateflow, GNU Octave, or LabVIEW MathScript. Additionally, it may be possible to implement modules using physical hardware that incorporates discrete or programmable analog, digital and/or quantum hardware. Examples of programmable hardware comprise: computers, microcontrollers, microprocessors, application-specific integrated circuits (ASICs); field programmable gate arrays (FPGAs); and complex programmable logic devices (CPLDs). Computers, microcontrollers and microprocessors are programmed using languages such as assembly, C, C++ or the like. FPGAs, ASICs and CPLDs are often programmed using hardware description languages (HDL) such as VHSIC hardware description language (VHDL) or Verilog that configure connections between internal hardware modules with lesser functionality on a programmable device. Finally, it needs to be emphasized that the above mentioned technologies are often used in combination to achieve the result of a functional module. The disclosure of this patent document incorporates material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, for the limited purposes required by law, but otherwise reserves all copyright rights whatsoever. While various embodiments have been described above, it should be understood that they have been presented by way of example, and not limitation. It will be apparent to persons skilled in the relevant art(s) that various changes in form and detail can be made therein without departing from the spirit and scope. 
In fact, after reading the above description, it will be apparent to one skilled in the relevant art(s) how to implement alternative embodiments. Thus, the present embodiments should not be limited by any of the above described exemplary embodiments. In particular, it should be noted that, for example purposes, the above explanation has focused on the example(s) using FDD communication systems. However, one skilled in the art will recognize that embodiments of the invention may also be implemented in a system comprising one or more TDD cells (e.g. frame structure 2 and/or frame structure 3-licensed assisted access). The disclosed methods and systems may be implemented in wireless or wireline systems. The features of various embodiments presented in this invention may be combined. One or many features (method or system) of one embodiment may be implemented in other embodiments. Only a limited number of example combinations are shown to indicate to one skilled in the art the possibility of features that may be combined in various embodiments to create enhanced transmission and reception systems and methods. In addition, it should be understood that any figures which highlight the functionality and advantages are presented for example purposes only. The disclosed architecture is sufficiently flexible and configurable, such that it may be utilized in ways other than that shown. For example, the actions listed in any flowchart may be re-ordered or only optionally used in some embodiments. Further, the purpose of the Abstract of the Disclosure is to enable the U.S. Patent and Trademark Office and the public generally, and especially the scientists, engineers and practitioners in the art who are not familiar with patent or legal terms or phraseology, to determine quickly from a cursory inspection the nature and essence of the technical disclosure of the application. The Abstract of the Disclosure is not intended to be limiting as to the scope in any way. 
Finally, it is the applicant's intent that only claims that include the express language “means for” or “step for” be interpreted under 35 U.S.C. 112, paragraph 6. Claims that do not expressly include the phrase “means for” or “step for” are not to be interpreted under 35 U.S.C. 112.
11943155
DETAILED DESCRIPTION Overview This disclosure encompasses numerous inventive principles relating to cross-carrier scheduling (CCS). These principles may have independent utility and may be embodied individually, and not every embodiment may utilize every principle. Moreover, the principles may also be embodied in various combinations, some of which may amplify the benefits of the individual principles in a synergistic manner. In some embodiments, a primary cell may be scheduled by a secondary cell using CCS, for example, to reduce the burden of control signaling on the primary cell (e.g., to reduce physical downlink control channel (PDCCH) transmissions on the primary cell). Since a PDCCH may not be needed on a scheduled cell, a user equipment (UE) may not monitor a common search space for a PDCCH on a scheduled cell. However, for initial access and/or random access procedures, a UE may need certain broadcast and/or system information that may only be obtained by monitoring a common search space on the primary cell. Therefore, in some embodiments in accordance with the disclosure, a UE may monitor a common search space for a primary cell when the primary cell is configured as a scheduled cell. Some embodiments may only allow PDCCH overbooking on primary cells. However, when secondary cells are used to schedule primary cells, a greater load may be placed on the secondary cells. Thus, in some embodiments in accordance with the disclosure, PDCCH overbooking may be allowed on secondary cells that are configured as scheduling cells. In some embodiments, if a secondary cell that schedules a primary cell is deactivated before another scheduling mechanism is configured for the primary cell, it may be problematic for the scheduling of the primary cell. Therefore, in some embodiments in accordance with the disclosure, one or more techniques may be used to prevent the deactivation of a scheduling secondary cell from interfering with the operation of a scheduled primary cell. 
For example, the deactivation of the secondary cell may be coordinated with the reconfiguration of another secondary cell to schedule the primary cell, the primary cell may be configured for self-scheduling, another secondary cell may be configured as a fallback cell to schedule the primary cell, and/or other techniques may be implemented. Some embodiments may allow cross-carrier scheduling of downlink transmissions (e.g., physical downlink shared channel (PDSCH) transmissions) on multiple cells by a single downlink control information (DCI) in a PDCCH. However, a UE may need to provide a separate acknowledgment for the downlink transmission on each cell, and therefore, the DCI may configure uplink resources for transmitting the acknowledgments. In some embodiments in accordance with the disclosure, a single physical uplink control channel (PUCCH) may be used for all of the acknowledgments for the downlink transmissions scheduled by a single DCI. In such embodiments, the DCI may include a single entry for one or more parameters for the PUCCH. In some other embodiments in accordance with the disclosure, multiple PUCCHs may be used for the acknowledgments, and the DCI may include multiple entries for the PUCCHs. In some embodiments, a UE may use a counter index (e.g., a counter downlink assignment index (C-DAI)) provided in a DCI to detect missed transmissions (e.g., lost scheduling messages) when constructing an acknowledgment codebook. However, in embodiments that allow a single DCI to schedule multiple downlink transmissions on multiple cells, it may be difficult to determine the value of C-DAI to use in the DCI. In some embodiments in accordance with the disclosure, a DCI that schedules multiple cells may include multiple C-DAIs that may be used to construct a codebook as if each C-DAI was provided in a separate DCI for each cell. In some other embodiments in accordance with the disclosure, a DCI that schedules multiple cells may include a single C-DAI. 
In such embodiments, a UE may substitute a manipulated serving cell index (e.g., a minimum or maximum index) into an algorithm for constructing the codebook. In such embodiments, the algorithm may reserve (e.g., skip) one or more codebook entries for certain serving cells. The reserved entries may be replaced with actual acknowledgment values for the serving cells. EXAMPLE EMBODIMENTS Some example embodiments of systems, apparatus, devices, processes, methods, and/or the like illustrating some possible implementation details according to the disclosure are described herein. These examples are provided for purposes of illustrating the principles of this disclosure, but the principles are not limited to or defined by these embodiments, implementation details, and/or the like. For example, some embodiments may be described in the context of 5G and/or New Radio (NR) wireless communication systems, but the principles may also be applied to any other types of wired and/or wireless systems including 3G, 4G and/or future generations of wireless networks, and/or any other communication systems that may implement cross-carrier scheduling for carrier aggregation, dual connectivity, and/or the like. Carrier Aggregation Wireless networks may use carrier aggregation (CA) to increase the rate at which data may be exchanged between devices. In a CA scheme, two or more component carriers (CCs) may be combined into one data channel that may have a greater bandwidth than a single CC. To implement CA, a device such as a user equipment (UE) may use CCs from multiple cells, which may be referred to as serving cells. A UE may initially find and connect to a first cell, which may be referred to as a primary cell or PCell. The primary cell may communicate with the UE using a first CC. One or more additional cells, which may be referred to as secondary cells or SCells, may then be configured to communicate with the UE using an additional CC per secondary cell. 
Thus, the UE may simultaneously exchange data with the network using multiple CCs on multiple cells, thereby aggregating the bandwidth of the component carriers. Carrier aggregation may be used for uplink (UL) and/or downlink (DL) transmissions. Carrier aggregation may be implemented with one or more of the following modes: intra-band frequency aggregation with contiguous CCs; intra-band frequency aggregation with non-contiguous CCs; and/or inter-band frequency aggregation with non-contiguous CCs. In some embodiments, the categorization of CA modes may be dependent on a collection of bands containing the CCs, which may be referred to as a band combination. Carrier aggregation techniques may also be implemented in dual connectivity (DC) systems. Thus, any of the principles disclosed herein in the context of CA deployment scenarios may also be applied to DC deployment scenarios. Dual connectivity systems may provide higher per-user throughput by offloading data transfers from a master node to a secondary node, for example, if the master node becomes overloaded. An example use case of DC may be offloading data from a macro cell to a small cell. In some embodiments of DC, a UE may first connect to a master node and then to a secondary node. Some examples of DC scenarios may include EN-DC, NE-DC and NN-DC, in which the master node and secondary nodes may be implemented as (eNB, gNB), (gNB, eNB) and (gNB, gNB), respectively, where eNB may refer to an eNodeB, and gNB may refer to a gNodeB. Nodes such as eNB and gNB may also be referred to as base stations. Deployment scenarios where the nodes are of different radio access technologies may be referred to as multi-radio access technology (Multi-RAT or MR) dual connectivity (MR-DC). Examples of MR-DC may include NR-E-UTRA dual connectivity (NE-DC), where E-UTRA may refer to Evolved Universal Terrestrial Radio Access, E-UTRA-NR dual connectivity (EN-DC), and NR dual connectivity (NN-DC). 
FIG. 1 illustrates an example embodiment of an NN-DC deployment in accordance with the disclosure. In the embodiment illustrated in FIG. 1, UE-3 may be simultaneously connected to two NR nodes (e.g., gNBs). The master gNB (MgNB) may configure a set of serving cells within a master cell group (MCG) and the secondary gNB (SgNB) may configure a set of serving cells within a secondary cell group (SCG). The primary cell of the MCG may be referred to as a PCell while the secondary cells of the MCG may be referred to as SCells. The primary cell of the SCG may be referred to as a PSCell. The PCell and PSCells may also be referred to as special cells (SpCells). Cross-Carrier Scheduling When using aggregated carriers, control information for scheduling a data transfer may be sent on the same CC that transfers the data. This may be referred to as self-scheduling. Alternatively, a data transfer on one CC may be scheduled by control information sent on a different CC. This may be referred to as cross-carrier scheduling (CCS). A cell that schedules another cell may be referred to as a scheduling cell. A cell that is scheduled by another cell may be referred to as a scheduled cell. In some embodiments of wireless networks, a UE may receive downlink control information (DCI) from a base station (e.g., a gNB) in a physical downlink control channel (PDCCH). The DCI may include scheduling information for a physical downlink shared channel (PDSCH) and/or a physical uplink shared channel (PUSCH). A physical uplink control channel (PUCCH) may include various information such as hybrid automatic repeat request acknowledgments (HARQ-ACKs) that may be used to implement error correction of data sent on PDSCHs. The DCI may also include scheduling information for PUCCHs that include acknowledgments for the PDSCHs scheduled by the DCI. 
Thus, in some embodiments, the shared channels (PDSCH and PUSCH) may generally transfer data, while the control channels (PDCCH and PUCCH) may generally provide control signaling for the shared channels. In a wireless network that implements carrier aggregation, some cells may be configured for self-scheduling such that the PDCCH that schedules a PDSCH and/or a PUSCH on a cell is transmitted on the same cell. Some cells, however, may be configured for cross-carrier scheduling such that a PDCCH transmitted on a first cell may schedule one or more PDSCHs and/or PUSCHs on one or more other cells. For purposes of illustration, a scheduling cell may have a numerology μ1 and a scheduled cell may have a numerology μ2. In some embodiments, cross-carrier scheduling may be performed by a scheduling cell having the same numerology as the scheduled cell (μ1=μ2) or a different numerology from the scheduled cell (μ1≠μ2). FIG. 2 illustrates an example embodiment of cross-carrier scheduling with cells having different numerologies in accordance with the disclosure. In the example illustrated in FIG. 2, the scheduling cell has a subcarrier spacing (SCS) of 15 kHz, and the scheduled cell has an SCS of 30 kHz. A PDCCH may be transmitted on the first three symbols of slot n of the scheduling cell, which schedules a PDSCH on slot m+1 of the scheduled cell. In some embodiments, cross-carrier scheduling of cells with different numerologies (μ1≠μ2) may facilitate using a cell in frequency range 1 (FR1) to schedule a cell in FR2. For example, in some embodiments, cells in FR1 (e.g., below 6 GHz) may provide better coverage and/or may be more reliable for delivering downlink control information. Thus, cross-carrier scheduling may be an effective technique for using cells in FR1 to deliver DL control information for FR2 on FR1. Search Space Monitoring In some embodiments, monitoring of DCI to decode PDCCH may be performed on a search space (SS) of a scheduling cell. 
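The cross-numerology slot relationship above can be sketched as follows. This is an illustrative helper only, not part of the disclosed embodiments: the function name, arguments, and the assumption that the slot grids are aligned at slot 0 are all assumptions. Slot duration in NR scales as 2^(-μ), so a cell with a larger μ has proportionally more slots in the same time span.

```python
def scheduled_slot(n, mu_scheduling, mu_scheduled):
    """Map slot n on the scheduling cell to the first overlapping slot
    on the scheduled cell. Slot duration scales as 2**-mu, so the cell
    with the larger mu has 2**(delta mu) slots per slot of the other.
    Illustrative helper; aligned slot grids are assumed."""
    if mu_scheduled >= mu_scheduling:
        return n * 2 ** (mu_scheduled - mu_scheduling)
    return n // 2 ** (mu_scheduling - mu_scheduled)

# 15 kHz SCS (mu=0) scheduling cell, 30 kHz SCS (mu=1) scheduled cell:
# slot n on the scheduling cell overlaps slots 2n and 2n+1 on the
# scheduled cell, so a PDSCH one scheduled-cell slot later lands in 2n+1.
first_overlapping = scheduled_slot(3, 0, 1)  # -> 6
```

For the FIG. 2 example, a PDCCH in slot n of the 15 kHz cell overlaps scheduled-cell slots 2n and 2n+1 of the 30 kHz cell.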
A search space may be implemented, for example, as a common SS (CSS) and/or a UE-specific SS (USS). In some embodiments, search spaces may be implemented as different types of search spaces as described in Appendix 1, which may be based, for example, on Technical Specification TS 38.213 by the 3rd Generation Partnership Project (3GPP). In some embodiments, CSS (except for type3 group common (GC) PDCCH SS) may be monitored only on primary cells, whereas USS and type3 CSS may be monitored in all cells. In some embodiments where cross-carrier scheduling is used, no SS may be monitored on a scheduled cell. UE Processing Times In some embodiments, the processing by a UE of DCI to receive PDSCH or DCI to transmit PUSCH may be subject to a processing time. In some embodiments, a UE may have two different processing capabilities, which may be referred to as capability 1 (cap #1) and capability 2 (cap #2). In some embodiments, the capability may be determined in terms of the number of OFDM symbols (e.g., N1 or N2) a UE may use to process PDSCH or PUSCH. These processing capabilities may depend, for example, on various parameters including a subcarrier spacing (SCS), numerology, and/or the like. In some embodiments, N1 or N2 may be smaller for cap #2 (shortened processing time) than for cap #1. Some example embodiments of UE processing capabilities are described in Appendix 2, which may be based, for example, on Technical Specification TS 38.213 by the 3GPP. Dynamic Spectrum Sharing Dynamic spectrum sharing (DSS) is a technique in which spectrum space may be dynamically shared by different types of networks. For example, DSS may enable 4G long-term evolution (LTE) devices (e.g., UEs and eNBs) to operate in the same frequency bands and/or channels as 5G NR devices (e.g., UEs and gNBs). 
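The capability/numerology dependence of the N1 processing time can be sketched as a lookup, as follows. The numeric values shown are illustrative assumptions consistent with commonly cited 3GPP PDSCH processing-time tables for front-loaded DMRS; the function name and table layout are likewise assumptions, not the normative tables referenced in Appendix 2.

```python
# Illustrative N1 values (PDSCH processing time in OFDM symbols) per
# numerology mu; treat these numbers as assumptions, not a normative
# reference. Capability 2 (shortened processing) has smaller N1.
N1_SYMBOLS = {
    1: {0: 8, 1: 10, 2: 17, 3: 20},  # capability 1 (cap #1)
    2: {0: 3, 1: 4.5, 2: 9},         # capability 2 (cap #2)
}

def pdsch_processing_symbols(capability, mu):
    """Look up the number of OFDM symbols a UE may use to process
    PDSCH for a given capability and numerology (hypothetical helper)."""
    try:
        return N1_SYMBOLS[capability][mu]
    except KeyError:
        raise ValueError("capability/numerology combination not defined")

# Cap #2 is shorter than cap #1 for the same numerology:
assert pdsch_processing_symbols(2, 1) < pdsch_processing_symbols(1, 1)
```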
Because the time required to completely refarm 4G frequency bands to 5G frequency bands may delay the deployment of 5G NR systems, DSS may enable 5G NR systems to be deployed while 4G LTE systems are still in operation. In some embodiments, DSS may exploit the fact that 4G LTE and 5G NR systems are both based on OFDM technology. Thus, available resources may be dynamically shared between 4G and 5G users. In some embodiments, DSS may be deployed at relatively low frequencies (e.g., below 6 GHz), which may tend to provide better coverage for 5G systems. In wireless networks that use carrier aggregation, primary cells may be deployed at lower frequencies because of the better coverage lower frequencies may provide. Thus, primary cells may be used in the shared frequency bands of DSS deployments. However, in DSS deployments, 4G and 5G systems may both use the same frequency resources, and therefore, the shared frequency bands and/or channels may become crowded and cause bottlenecks that may reduce bandwidth and/or increase latency for both 4G and 5G systems. Thus, there is a motivation to reduce or eliminate any sources of overhead in the cells that share spectrum space in DSS systems. FIG. 3 illustrates an example embodiment of a scheduling operation in which a primary cell (PCell) is self-scheduling in accordance with the disclosure. In the embodiment illustrated in FIG. 3, time may be represented in the horizontal direction (with time progressing from left to right) and frequency (spectrum space) may be represented in the vertical direction. Because it is self-scheduled, the primary cell (PCell) may receive a PDCCH including a DCI which may schedule a PDSCH in the PCell as shown by the arrow 300. Thus, the time (horizontal space) taken up by the PDCCH represents resources on the primary cell that may not be used by other devices, processes, and/or the like. 
Primary Cell Scheduled By a Secondary Cell FIG. 4 illustrates an example embodiment of a cross-carrier scheduling operation in which a primary cell (PCell) is scheduled by a secondary cell (SCell) in accordance with the disclosure. As shown in FIG. 4, the PDSCH on the primary cell may be scheduled by a DCI in the PDCCH on the secondary cell. Thus, the PDCCH used by the primary cell in FIG. 4 (shown by the dashed lines) may be available for use by other devices, processes, and/or the like on the primary cell. In some embodiments, moving the scheduling PDCCH to the secondary cell may be facilitated by the fact that some or all of the information transmitted in a PDCCH may not need to be transmitted on a primary cell. In some embodiments, the cross-carrier scheduling operation in FIG. 4 may be used in a DSS system in which the primary cell may operate in a relatively crowded shared 4G/5G frequency space. In such an embodiment, the PDCCH may be relocated to the secondary cell, which may operate in a relatively uncrowded dedicated 5G frequency space. Depending on the implementation details, this may reduce the scheduling overhead of the primary cell, thereby reducing crowding on the primary cell. In some embodiments of CCS procedures, PDCCH may not be monitored in a scheduled cell. Thus, no PDCCH may be monitored on the primary cell if it is configured as a scheduled cell in CCS. However, a primary cell may be used to obtain broadcast and/or system information that may be used during an initial access procedure and/or to support random access procedures. In some embodiments, this broadcast and/or system information may only be available by monitoring a CSS for the primary cell. Moreover, because other UEs in the network may still need to access the broadcast and/or system information, the CSS through which the broadcast and/or system information is provided for the primary cell may still be available on the network. 
In some embodiments in accordance with the disclosure, a UE may monitor a CSS for a primary cell if the primary cell is configured as a scheduled cell. For example, a UE may monitor a CSS for a scheduled primary cell to obtain broadcast and/or system information that may be used during an initial access procedure and/or to support random access procedures. In some embodiments in accordance with the disclosure, a UE that monitors a CSS for a scheduled primary cell may exclude certain types of CSS such as type-3 CSS (e.g., CSS that may be decoded using a cell-radio network temporary identifier (C-RNTI)). For example, in some embodiments, a UE may monitor DCI for unicast communications by decoding the CSS using C-RNTI. However, unicast communications may not be relevant to acquiring broadcast and/or system information for initial access and/or random access procedures. Therefore, as a refinement to monitoring a CSS for a scheduled primary cell, in some embodiments in accordance with the disclosure, a UE may not monitor a CSS of a primary cell using C-RNTI. Overbooking Monitoring PDCCH may require a UE to perform blind decoding of PDCCH candidates that may or may not contain valid DCI. This may place a relatively large processing burden on the UE. To limit or reduce this burden, some networks may use one or more techniques to limit the amount of processing a UE may perform on decoding. For example, in some embodiments, a network may include no more than a fixed number of PDCCH candidates that a UE may be expected to decode per slot. However, implementing this type of blind decoding limit may place a processing burden on the network (e.g., on a gNB). Therefore, in some embodiments, a network may allow overbooking, in which search spaces may be configured with more PDCCH candidates than a UE may be expected to decode. 
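The CSS-monitoring refinement above can be sketched as a decision function. This is a simplified illustration under stated assumptions: the function, its arguments, and the string RNTI labels are hypothetical, and type-3 CSS handling on secondary cells is omitted for brevity.

```python
def monitor_css(cell_is_primary, cell_is_scheduled, rnti):
    """Decide whether a UE monitors a common search space on a cell,
    per the approach above: a scheduled primary cell's CSS is still
    monitored for broadcast/system information (e.g., SI-RNTI), but
    not with C-RNTI, since unicast monitoring is not needed for
    initial/random access. Hypothetical helper for illustration."""
    if not cell_is_primary:
        return False  # CSS (other than type-3, omitted here) is on primary cells
    if cell_is_scheduled and rnti == "C-RNTI":
        return False  # refinement: skip unicast (C-RNTI) monitoring
    return True

# A scheduled PCell: system-information monitoring continues,
# C-RNTI monitoring does not.
assert monitor_css(True, True, "SI-RNTI") is True
assert monitor_css(True, True, "C-RNTI") is False
```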
However, to limit the processing burden on the UE, the UE may only be expected to monitor the search space for a limited number of PDCCH candidates by selecting certain possible candidates. These selected candidates may be coordinated with the network through a set of selection rules that may ensure that the UE and the network have the same understanding of which candidates are selected. The UE may drop the PDCCH candidates and/or search spaces that are not selected. Example embodiments of such selection rules are described in Appendix 3, which may be based, for example, on Technical Specification TS 38.213 by 3GPP. In some embodiments, such selection rules may only be applied to allow overbooking of primary cells. The network may still implement a blind decoding limit on PDCCH candidates for secondary cells because the burden on the network to enforce a limit for secondary cells may be substantially less than the burden for primary cells. Thus, in some embodiments, a UE may perform a selection procedure to limit the number of blind decoding candidates in search spaces of primary cells, but the UE may assume that it should decode all candidates in search spaces of secondary cells because the network will limit the number of candidates. However, allowing secondary cells to schedule primary cells for CCS may increase the load on the secondary cells. For example, referring to FIG. 4, because the secondary cell may schedule the primary cell, it may include an additional PDCCH to carry the DCI that schedules the primary cell. This, in turn, may increase the burden on the network to implement a blind decoding limit for secondary cells. In some embodiments in accordance with the disclosure, overbooking may be allowed on secondary cells that schedule other cells, for example, primary cells. Depending on the implementation details, this may reduce or eliminate the burden on a network to implement a blind decoding limit for secondary cells. 
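The selection-rule idea above can be sketched as a greedy allocation. This is a simplified sketch of a TS 38.213-style overbooking rule, not the normative procedure of Appendix 3: the deterministic ordering (ascending search-space set index) and the counting of candidates only (ignoring non-overlapping CCEs) are simplifying assumptions.

```python
def select_uss(uss_sets, budget_after_css):
    """Allocate UE-specific search space sets in ascending index order
    against the remaining per-slot blind-decoding budget; the first
    set that no longer fits is dropped along with all later sets.
    'uss_sets' is a list of (ss_index, num_candidates) pairs.
    Simplified sketch of an overbooking selection rule."""
    kept, remaining = [], budget_after_css
    for ss_id, n_candidates in sorted(uss_sets):
        if n_candidates > remaining:
            break  # drop this set and every higher-indexed set
        kept.append(ss_id)
        remaining -= n_candidates
    return kept

# Budget of 14 candidates left after CSS: the third USS set is dropped.
assert select_uss([(0, 6), (1, 6), (2, 6)], 14) == [0, 1]
```

Because both the UE and the network apply the same deterministic rule, they agree on which candidates survive without extra signaling.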
In some embodiments, the network may still implement a blind decoding limit for secondary cells that do not schedule other cells such as primary cells. In such embodiments, a UE may not expect a number of PDCCH candidates, and/or a number of corresponding non-overlapping control channel elements (CCEs), per slot on a secondary cell to be larger than the corresponding numbers that the UE is capable of monitoring on the secondary cell per slot, except for a secondary cell that schedules a primary cell. In some embodiments, when overbooking is allowed on secondary cells that schedule other cells, a set of selection rules similar to those for overbooking of primary cells may be applied to overbooking of secondary cells. In some embodiments, when overbooking is allowed on scheduling secondary cells, overbooking may or may not be allowed on primary cells. For example, if overbooking is allowed on secondary cells that schedule primary cells for CCS, this may result in multiple secondary cells being overbooked. However, in some embodiments, a UE may use the same parallel processing resources for primary cells and secondary cells, so additional overbooking on secondary cells may affect the amount of processing resources available for overbooking on primary cells. Thus, the processing burden from overbooking on secondary cells may be considered jointly with the processing burden from overbooking on primary cells. In some embodiments, when overbooking is allowed on secondary cells that schedule other cells, overbooking may not be allowed on primary cells that are scheduled by secondary cells. Depending on the implementation details, this may counteract the increased processing burden on secondary cells that are allowed to be overbooked because they are scheduling cells. 
Thus, in some embodiments, a UE may not expect a number of PDCCH candidates, and/or a number of corresponding non-overlapped CCEs, per slot on a primary cell to be larger than the corresponding numbers that the UE is capable of monitoring on the primary cell per slot if the primary cell is a scheduled cell in CCS (or if the primary cell does not have a UE-specific search space (USS)). In some embodiments, CSS for broadcast and/or system information for initial access and/or random access may remain in a scheduled primary cell, for example, because it may be used for other purposes within the network. However, from the perspective of a UE, such CSS may not necessarily need to remain on a scheduled primary cell. Hence, in some embodiments, PDCCH may not be monitored on a primary cell. A secondary cell overbooking scheme as described above may be applied. Thus, overbooking may be allowed on a secondary cell that schedules a primary cell, and the scheduling secondary cell may contain CSS. In some embodiments, a secondary cell containing CSS may be different from a secondary cell that schedules a primary cell, and overbooking may be allowed on one or both of the types of secondary cells. In some embodiments, a potential motivation for allowing overbooking on a primary cell and having a UE perform a selection procedure is that CSS except for type-3 CSS may be monitored only on a primary cell, which may cause the primary cell to become more crowded. However, if C-RNTI is not monitored on a primary cell, then such motivation may not apply, and thus, overbooking may be allowed on secondary cells that schedule primary cells. Secondary Cell Deactivation In some embodiments, CCS configuration may be performed by radio resource control (RRC) signaling, whereas secondary cell activation and/or deactivation may be performed by a medium access control (MAC) control element (MAC-CE). 
In some embodiments, a MAC-CE may be used for secondary cell activation because it may provide faster secondary cell activation and/or deactivation than RRC reconfiguration. An example embodiment of a procedure for deactivating a secondary cell is described in Appendix 4, which may be based, for example, on Technical Specification TS 38.213 by the 3GPP. In some embodiments, the configuration of a secondary cell that schedules a primary cell may be performed by RRC signaling. However, if the network decides to deactivate the secondary cell using a MAC-CE before a new scheduling configuration is provided by RRC, the primary cell may be left in a state without scheduling. In some embodiments in accordance with the disclosure, one or more techniques may be used to prevent the deactivation of a scheduling secondary cell from interfering with the operation of a primary cell that is scheduled by the secondary cell. For example, in some embodiments, the network may ensure that a secondary cell that schedules a primary cell may not be deactivated. The secondary cell may be deactivated after RRC reconfigures the primary cell to be scheduled by another secondary cell. This may be accomplished, for example, by coordinating the MAC-CE with the RRC. As another example, in some embodiments, a MAC-CE may be configured to change the CCS configuration of the primary cell such that a different secondary cell schedules the primary cell before the secondary cell is deactivated. Alternatively, or additionally, the MAC-CE may be configured to change the CCS configuration of the primary cell to self-scheduling. In this example, a search space may be configured for the primary cell, e.g., using search space configuration information contained in the MAC-CE. 
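The reconfiguration options above (repoint the scheduled cell at another secondary cell, or fall back to self-scheduling) can be sketched as follows. The function name and the dictionary representation of the CCS configuration are illustrative assumptions, not part of the disclosed signaling.

```python
def reconfigure_on_scell_deactivation(ccs_config, deactivated_scell,
                                      replacement_scell=None):
    """Before a scheduling SCell is deactivated, update the CCS
    configuration of every cell it schedules: point it at a
    replacement SCell if one is configured, otherwise fall back to
    self-scheduling. 'ccs_config' maps scheduled cell index ->
    scheduling cell index (hypothetical representation)."""
    updated = dict(ccs_config)
    for scheduled, scheduling in ccs_config.items():
        if scheduling == deactivated_scell:
            if replacement_scell is not None:
                updated[scheduled] = replacement_scell  # new scheduling SCell
            else:
                updated[scheduled] = scheduled          # self-scheduling
    return updated

cfg = {0: 3, 1: 3, 2: 2}  # cells 0 and 1 scheduled by SCell 3
assert reconfigure_on_scell_deactivation(cfg, 3, replacement_scell=5) == {0: 5, 1: 5, 2: 2}
assert reconfigure_on_scell_deactivation(cfg, 3) == {0: 0, 1: 1, 2: 2}
```

Applying such an update before (or together with) the deactivation MAC-CE avoids leaving the primary cell without any scheduling mechanism.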
As a further example, the primary cell may be configured to automatically become self-scheduled if the scheduling secondary cell is deactivated. In this example, if a CSS is already monitored in the primary cell, the UE may continue to use the CSS for the primary cell. If the CSS that was already being monitored was not being monitored with C-RNTI, the UE may begin monitoring the CSS with C-RNTI when the scheduling secondary cell is deactivated. As yet another example, the RRC may configure another secondary cell as a default cell that may begin scheduling the primary cell if the secondary cell currently scheduling the primary cell is deactivated. This may be implemented, for example, by the RRC providing the UE a list of secondary cells that may be used as a fallback for scheduling a primary cell if a scheduling secondary cell is deactivated. DCI Scheduling Multiple Cells Another approach to reducing overhead on crowded primary cells may be to allow a single DCI to schedule transport blocks (TBs) in multiple cells using carrier aggregation. For example, in some embodiments, a baseline scheme for multi-CC scheduling by a DCI may involve allowing a DCI to schedule one TB corresponding to a PDSCH or PUSCH for each of multiple scheduled cells. In some embodiments, the baseline scheme may be enhanced, for example, by allowing a single DCI to schedule multiple TBs for one or more of the multiple cells. FIG. 5 illustrates an example embodiment of a cross-carrier scheduling operation in which a single DCI schedules multiple cells in accordance with the disclosure. As shown in FIG. 5, a single DCI in a PDCCH on a primary cell (PCell) may schedule a PDSCH on a first secondary cell (SCell1) as shown by arrow 500. The DCI may also schedule two PDSCHs on a second secondary cell (SCell2) as shown by arrows 502 and 504. In some embodiments in which a DCI schedules multiple cells, one or more parameters in the DCI related to such allocation may be duplicated to provide multiple copies. 
Some examples of such allocation parameters may include time domain resource allocation (TDRA), frequency domain resource allocation (FDRA), redundancy version (RV), modulation and coding scheme (MCS), PDCCH-to-PDSCH timing parameter K0, PDSCH-to-PUCCH timing parameter K1, PDCCH-to-PUSCH timing parameter K2, downlink assignment index (DAI), and/or the like. In some embodiments, such duplication may increase the size of the DCI and/or degrade efficiency. Alternatively, or additionally, RRC may provide a list of allocation parameters and/or groups of allocation parameters for all cells, and the DCI may provide an index into the list. Alternatively, or additionally, one or more parameters may be shared by multiple cells. Depending on the implementation details, these alternative/additional techniques may reduce the size and/or improve the efficiency of the DCI. The use of a single DCI to schedule multiple cells may affect the handling of the PDSCH-to-PUCCH timing parameter K1 and PUCCH resource indicator (PRI), especially when multiple cells belong to the same PUCCH group. In some embodiments in which multiple cells belong to the same PUCCH group, it may be beneficial to use a single PUCCH. In some embodiments, a single parameter for K1 and a single parameter for the PRI may be provided, and the actual PUCCH may be determined based on the latest PUCCH among hypothetically constructed PUCCHs corresponding to the PDSCH numerology and/or allocation parameter for each cell. In some other embodiments, a single parameter for K1 and a single parameter for the PRI may be provided, and the actual PUCCH may be determined based on the earliest PUCCH satisfying a PDSCH processing time for all cells among hypothetically constructed PUCCHs corresponding to the PDSCH numerology and/or allocation parameter of each cell. In some other embodiments, one or more PDSCH cells may be used as a reference cell to determine the actual PUCCH. 
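The RRC-list-plus-index alternative above can be sketched as follows. The table contents, the cell labels, and the function name are hypothetical illustrations; in a real system the entries would be RRC-signaled TDRA/FDRA/MCS combinations and the index would be a small DCI field.

```python
# Hypothetical RRC-configured table of per-cell allocation-parameter
# groups. Instead of duplicating TDRA/MCS/etc. per cell inside the
# DCI, the DCI carries a single small index into this shared table.
RRC_ALLOCATION_TABLE = [
    {"scell1": {"tdra": 0, "mcs": 4},  "scell2": {"tdra": 1, "mcs": 4}},
    {"scell1": {"tdra": 2, "mcs": 10}, "scell2": {"tdra": 2, "mcs": 16}},
]

def allocations_from_dci(dci_index):
    """Resolve per-cell allocation parameters from a single DCI index,
    rather than carrying duplicated fields per scheduled cell."""
    return RRC_ALLOCATION_TABLE[dci_index]

# One index in the DCI yields a full per-cell parameter set:
assert allocations_from_dci(1)["scell2"]["mcs"] == 16
```

The DCI-size saving comes from replacing N copies of each field with one log2(table size)-bit index.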
PUCCH for Acknowledgments In some embodiments in which a DCI schedules multiple cells, one PUCCH resource may be used to transmit the acknowledgments (e.g., HARQ-ACK bits) for the PDSCHs scheduled by the DCI. In some other embodiments, multiple PUCCH resources may be used to transmit the acknowledgments for the PDSCHs scheduled by a DCI. The following are example embodiments, including some alternatives, of techniques for determining the PUCCH resources in accordance with the disclosure. In these examples, a PDCCH on a scheduling cell may schedule N PDSCHs on N serving cells. Individual acknowledgment bits may also be referred to as ACK/NACK or A/N bits. Example 1 may provide some techniques for using a single PUCCH resource. Example 2 may provide some techniques for using multiple PUCCH resources. Example 1 (Using One PUCCH) Alternative 1.1 (Single field for TDRA, K1, and PRI): One field may be used for all N cells. The field may include TDRA, PDSCH-to-HARQ_feedback timing indicator K1, and PRI. Sub-Alternative 1.1-1: The network may ensure that applying the single field for all cells indicates the same PUCCH slot. The PUCCH resource may be determined according to the PRI field. For example, the network may configure the N cells to have the same SCS numerology. Sub-Alternative 1.1-2: N possibly different PUCCH slots and PUCCH resources may be determined according to the single DCI field. The UE may then select a single PUCCH slot and resource from the N slots and resources according to one or more of the following: (1) a reference cell may be used to obtain the PUCCH slot and resource; (2) the UE may select the PUCCH slot and resource as the earliest PUCCH resource that satisfies the PDSCH processing time of all the cells; and/or (3) the UE may select the PUCCH slot and resource as the latest PUCCH resource among the N PUCCH slots and PUCCH resources. 
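The slot/resource selection of Sub-Alternative 1.1-2 can be sketched as follows. The function, its arguments, and the (slot, resource) tuple representation are illustrative assumptions; the "earliest" rule keeps the earliest candidate that still satisfies the PDSCH processing time of all cells, and the "latest" rule keeps the latest candidate.

```python
def select_single_pucch(candidates, processing_done_slot, rule):
    """Choose one PUCCH from the N hypothetically constructed
    (slot, resource) pairs, one per scheduled cell.
    'earliest': earliest slot no earlier than the slot by which all
    cells' PDSCH processing is complete; 'latest': latest slot.
    Illustrative sketch only."""
    if rule == "latest":
        return max(candidates, key=lambda c: c[0])
    if rule == "earliest":
        feasible = [c for c in candidates if c[0] >= processing_done_slot]
        return min(feasible, key=lambda c: c[0])
    raise ValueError("unknown rule")

# Three hypothetical per-cell PUCCHs; processing finishes by slot 7.
pucchs = [(8, "r1"), (6, "r2"), (9, "r0")]
assert select_single_pucch(pucchs, 7, "earliest") == (8, "r1")
assert select_single_pucch(pucchs, 7, "latest") == (9, "r0")
```

The reference-cell option (1) would simply return the candidate of a designated cell instead of comparing slots.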
Alternative 1.2 (N fields for TDRA, K1, and PRI): N different fields may be used for TDRA, the PDSCH-to-HARQ_feedback timing indicator K1, and the PRI for the N cells. The network may ensure that applying the N fields for all cells indicates the same PUCCH slot. In this case, the PUCCH resource may be determined according to the PRI field corresponding to the serving cell with the largest or smallest cell index among the N cells. Alternative 1.3 (N fields for TDRA, K1, and 1 field for PRI): N different fields may be used for TDRA and the PDSCH-to-HARQ_feedback timing indicator K1 for the N cells. The network may ensure that applying the N fields for all cells indicates the same PUCCH slot. The PUCCH resource may be determined according to the single PRI field. Example 2 (Using Multiple PUCCHs) Alternative 2.1 (Single field for TDRA, K1, and PRI): One field may be used for TDRA, the PDSCH-to-HARQ_feedback timing indicator K1, and the PRI for all N cells. N possibly different PUCCH slots and PUCCH resources may be determined according to the single DCI field for the N cells. PUCCH resource overriding may only be applied for the serving cells with the same PUCCH slot. The actual number M of PUCCH transmissions may be less than N because some of the fields may indicate the same PUCCH slot. Alternative 2.2 (N fields for TDRA, K1, and PRI): N different fields may be used for TDRA, the PDSCH-to-HARQ_feedback timing indicator K1, and the PRI for the N cells. N possibly different PUCCH slots and PUCCH resources may be determined according to the N DCI fields for the N cells. PUCCH resource overriding may only be applied for the serving cells with the same PUCCH slot. The actual number M of PUCCH transmissions may be less than N because some of the fields may indicate the same PUCCH slot. Codebook Construction In some embodiments in which a single DCI may be used to schedule multiple cells, multiple PUCCHs may be used to transmit the acknowledgments for the PDSCHs scheduled by a DCI. 
For example, a single parameter for K1 and a single parameter for PRI may be utilized, and multiple PUCCHs may be constructed based on the single parameters. In some embodiments, multiple DAI fields may be used since a DAI may relate to one reference PUCCH slot.

In some embodiments in which a single DCI schedules multiple cells, a single PUCCH may be used to transmit the acknowledgments for the PDSCHs scheduled by a DCI. When using a single PUCCH, one or multiple DAI fields may be provided. If only one DAI field is provided, a procedure of constructing an acknowledgment codebook (e.g., a type-2 HARQ-ACK codebook) may be implemented in a manner that may accommodate the presence of only one DAI field. For example, a codebook construction procedure such as that provided in TS 38.213 by 3GPP may be modified such that an A/N bit location in the codebook may be generated as "N" consecutive positions where the starting position may correspond to the position of the lowest scheduled cell index, where "N" may be the number of scheduled cells in DCI. In this case, a DAI related operation in the codebook may be skipped for all other scheduled cell indices, and a DAI increment of one may be used by this DCI. An example embodiment of a codebook construction algorithm is described in Appendix 5, which may be based, for example, on Technical Specification TS 38.213 by the 3GPP.

FIG.6 illustrates an example embodiment of a cross-carrier scheduling method using multiple DAI fields in accordance with the disclosure. The PDSCH on CC #0 may be scheduled by the PDCCH on CC #0 having a DAI field (1,4), in which the DAI field may include a (C-DAI, T-DAI) pair. The PDSCH on CC #1 may be scheduled by one of the PDCCHs on CC #3 having a DAI field (2,4). The PDSCH on CC #2 may be scheduled by the PDCCH on CC #2 having a DAI field (3,4). The PDSCH on CC #3 may be scheduled by the other PDCCH on CC #3 having a DAI field (4,4).
In the embodiment illustrated in FIG.6, one PUCCH slot may be assumed, as each DAI field may be with respect to one PUCCH slot. The embodiment illustrated in FIG.6 may use "N" DAI fields to construct a Type-2 codebook. The UE may treat the detected DCI as N detected DCIs, each with corresponding DAI fields. The following DAI definition may be used: a value of the counter downlink assignment indicator (DAI) field in DCI formats denotes the accumulative number of {serving cell, PDCCH monitoring occasion}-pair(s) in which PDSCH reception(s) or SPS PDSCH release associated with the DCI formats is present up to the current serving cell and current PDCCH monitoring occasion, first in ascending order of serving cell index and then in ascending order of PDCCH monitoring occasion index m, where 0≤m<M, and where the serving cell may refer to the scheduled cell.

However, in some embodiments in accordance with the disclosure, if only one DAI field is present in the scheduling DCI, the DAI field may be redefined. For example, referring to FIG.6, if a single DCI replaces the two DCIs in CC #3 scheduling the PDSCHs on CC #1 and CC #3, it may be difficult to determine the value to use for the single C-DAI in the PDCCH on CC #3. If C-DAI is to provide the accumulative number of {serving cell, PDCCH monitoring occasion}-pair(s) up to CC #1, the value may be 2. However, if C-DAI is to provide the accumulative number up to CC #3, the value may be 4. In some embodiments with a codebook construction algorithm in accordance with the disclosure, either value may work properly in terms of the HARQ-ACK payload size determination, as described below.

FIG.7 illustrates an example embodiment of a cross-carrier scheduling method using a single DAI field in accordance with the disclosure. The method illustrated in FIG.7 may be similar to that illustrated in FIG.6; however, the two DCIs in CC #3 scheduling the PDSCHs on CC #1 and CC #3 shown in FIG.6 may be replaced by a single DCI as shown in FIG.7.
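The quoted C-DAI/T-DAI counting (one DAI per {serving cell, PDCCH monitoring occasion} pair, first ascending in cell index within an occasion, then ascending in occasion index) can be sketched as follows. The function name and data layout are illustrative assumptions.

```python
def dai_fields(assignments):
    """Compute (C-DAI, T-DAI) per scheduled {cell, occasion} pair.

    assignments: set of (cell, occasion) pairs in which a PDSCH (or SPS
    PDSCH release) is present. C-DAI is the accumulative count of pairs
    up to the current cell/occasion; T-DAI is the total count up to the
    current occasion. Both are 1-based, as in the quoted definition.
    """
    # Counting order: within an occasion by ascending cell index,
    # occasions themselves in ascending order.
    ordered = sorted(assignments, key=lambda p: (p[1], p[0]))
    c_dai = {pair: i + 1 for i, pair in enumerate(ordered)}
    return {
        pair: (c_dai[pair], sum(1 for (_, m) in assignments if m <= pair[1]))
        for pair in assignments
    }
```

Applied to the FIG.6-style example (cells 0-3 scheduled in one monitoring occasion), this yields the (1,4), (2,4), (3,4), (4,4) pairs described above.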
Some embodiments may construct a codebook according to the following method. For a PDCCH scheduling N different cells, a single field for (C-DAI, T-DAI) may be present in the DCI. The value of C-DAI on a PDCCH scheduling serving cells with indices i1, i2, . . . , iN may denote the accumulative number of {serving cell, PDCCH monitoring occasion}-pairs in which PDSCH reception or semi-persistent scheduling (SPS) PDSCH release associated with a DCI format is present up to the current serving cell and current PDCCH monitoring occasion, first in ascending order of serving cell index and then in ascending order of PDCCH monitoring occasion index m, where 0≤m<M, where the current serving cell is the serving cell with the largest or smallest index among i1, . . . , iN. That is, the C-DAI may be associated with the cell index c′=max(i1, . . . , iN) or c′=min(i1, . . . , iN). In some embodiments, the value of T-DAI may be determined, for example, using the definition provided by Release 15 or Release 16 of the 3GPP standards.

In some embodiments, the codebook operation may be implemented in a manner similar to that described in Appendix 5, however with the following modifications: (a) in the "while c < N_cells^DL" loop, all the cell indices in the set {i1, . . . , iN}\c′ may be skipped (which may also be referred to as being reserved); (b) the NACK values used for the skipped indices may not be included; and (c) for the valid A/N bits for the skipped indices, positions in the codebook may be replaced by the original positions of NACK values or the new positions consecutively following the A/N value of the largest or smallest index.

Applying the method described above to the example illustrated in FIG.7, if c′=max(1,3)=3, a=4, and cell index c=1 in the while loop may be skipped. By skipping cell index c=1, a NACK value may be generated for the PDSCH on CC #1. With the modifications (a), (b), and/or (c) described above, the NACK bit may be replaced by a valid A/N bit for the PDSCH on CC #1.
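A minimal sketch of this modified construction for a single monitoring occasion is shown below. It assumes the skipped indices' A/N bits are placed consecutively after the bit of c′ (one of the orderings contemplated in modification (c)); the function and parameter names are illustrative, not from the disclosure or any specification.

```python
def build_codebook(sched_groups, acks, num_cells, use_max=True):
    """Type-2-style codebook sketch for one PDCCH monitoring occasion.

    sched_groups: list of tuples of scheduled cell indices, one tuple per
    DCI (a 1-tuple models an ordinary single-cell DCI). acks: {cell: 0/1}.
    For a multi-cell DCI, the C-DAI is associated with c' = max (or min)
    of the scheduled indices; the remaining indices are skipped in the
    cell loop, and their A/N bits follow the bit of c' consecutively.
    """
    anchor = {}    # c' -> ordering of the cells that DCI represents
    skipped = set()
    for group in sched_groups:
        c_prime = max(group) if use_max else min(group)
        anchor[c_prime] = (c_prime,) + tuple(sorted(set(group) - {c_prime}))
        skipped |= set(group) - {c_prime}
    book = []
    for c in range(num_cells):
        if c in skipped:
            continue  # modification (a): skip {i1, ..., iN} \ c'
        if c in anchor:
            # bit for c' followed by the valid A/N bits of the skipped cells
            book.extend(acks[x] for x in anchor[c])
    return book
```

For the FIG.7-style case (single-cell DCIs on CC #0 and CC #2, one DCI scheduling CC #1 and CC #3), the max and min variants place the CC #1 and CC #3 bits after cell 3 or after cell 1, respectively.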
If c′=min(1,3)=1, a=2, and cell index c=3 may be skipped. By skipping cell index c=3, a NACK value may be generated for the PDSCH on CC #3. With the modifications (a), (b), and/or (c) described above, the NACK bit may be replaced by a valid A/N bit for the PDSCH on CC #3.

User Equipment

FIG.8 illustrates an example embodiment of a user equipment (UE) in accordance with the disclosure. The embodiment800illustrated in FIG.8 may include a radio transceiver802and a controller804which may control the operation of the transceiver802and/or any other components in the UE800. The UE800may be used, for example, to implement any of the functionality described in this disclosure. The transceiver802may transmit/receive one or more signals to/from a base station, and may include an interface unit for such transmissions/receptions. The controller804may include, for example, one or more processors806and a memory808which may store instructions for the one or more processors806to execute code to implement any of the functionality described in this disclosure. For example, the UE800and/or the controller804may be used to implement functionality relating to monitoring CSS in a primary cell that may be scheduled by a secondary cell, overbooking of a secondary cell that schedules a primary cell, determining PUCCH resources for multiple cells scheduled by a single DCI, constructing a codebook for multiple cells scheduled by a single DCI, and/or the like.

Base Station

FIG.9 illustrates an example embodiment of a base station in accordance with the disclosure. The embodiment900illustrated in FIG.9 may include a radio transceiver902and a controller904which may control the operation of the transceiver902and/or any other components in the base station900. The base station900may be used, for example, to implement any of the functionality described in this disclosure.
The transceiver902may transmit/receive one or more signals to/from a user equipment, and may include an interface unit for such transmissions/receptions. The controller904may include, for example, one or more processors906and a memory908which may store instructions for the one or more processors906to execute code to implement any of the base station functionality described in this disclosure. For example, the base station900and/or the controller904may be used to implement functionality relating to overbooking of secondary cells that may schedule primary cells, coordinating the deactivation of a secondary cell that may schedule a primary cell, and/or the like.

In the embodiments illustrated in FIGS.8and9, the transceivers802and902may be implemented with various components to receive and/or transmit RF signals such as amplifiers, filters, modulators and/or demodulators, A/D and/or D/A converters, antennas, switches, phase shifters, detectors, couplers, conductors, transmission lines, and/or the like. The controllers804and904may be implemented with hardware, software, and/or any combination thereof. For example, full or partial hardware implementations may include combinational logic, sequential logic, timers, counters, registers, gate arrays, amplifiers, synthesizers, multiplexers, modulators, demodulators, filters, vector processors, complex programmable logic devices (CPLDs), field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), systems on chip (SOC), state machines, data converters such as ADCs and DACs, and/or the like. Full or partial software implementations may include one or more processor cores, memories, program and/or data storage, and/or the like, which may be located locally and/or remotely, and which may be programmed to execute instructions to perform one or more functions of the controllers.
Some embodiments may include one or more CPUs, such as complex instruction set computer (CISC) processors (e.g., x86 processors) and/or reduced instruction set computer (RISC) processors (e.g., ARM processors) executing instructions stored in any type of memory, as well as graphics processing units (GPUs), neural processing units (NPUs), tensor processing units (TPUs), and/or the like.

Additional Embodiments

FIG.10 illustrates an embodiment of a method for cross-carrier scheduling in a communication network in accordance with the disclosure. The method may begin at operation1002. At operation1004, the method may schedule, by a first scheduling mechanism, a primary cell, wherein the first scheduling mechanism may include scheduling the primary cell by a secondary cell. At operation1006, the method may schedule, by a second scheduling mechanism, the primary cell. At operation1008, the method may deactivate the secondary cell based on switching from the first scheduling mechanism to the second scheduling mechanism. The method may end at operation1010.

FIG.11 illustrates another embodiment of a method for cross-carrier scheduling in a communication network in accordance with the disclosure. The method may begin at operation1102. At operation1104, the method may schedule a first downlink channel for a first cell based on a downlink control information (DCI). At operation1106, the method may schedule a second downlink channel for a second cell based on the DCI. At operation1108, the method may schedule an uplink channel for a first acknowledgment for the first downlink channel. The method may end at operation1110.

FIG.12 illustrates a further embodiment of a method for cross-carrier scheduling in a communication network in accordance with the disclosure. The method may begin at operation1202. At operation1204, the method may schedule a first downlink channel for a first cell based on a downlink control information (DCI).
At operation1206, the method may schedule a second downlink channel for a second cell based on the DCI. At operation1208, the method may construct an acknowledgment codebook for the first cell and the second cell based on a counter index in the DCI, a first serving cell index for the first cell, and a second serving cell index for the second cell. The method may end at operation1210.

In the embodiments illustrated in FIGS.10-12, as well as the other embodiments illustrated herein, the illustrated components and/or operations are exemplary only. Some embodiments may involve various additional components and/or operations not illustrated, and some embodiments may omit some components and/or operations. Moreover, in some embodiments, the arrangement of components and/or temporal order of the operations may be varied. Although some components may be illustrated as individual components, in some embodiments, some components shown separately may be integrated into single components, and/or some components shown as single components may be implemented with multiple components.

The embodiments disclosed herein may be described in the context of various implementation details, but the principles of this disclosure are not limited to these or any other specific details. Some functionality has been described as being implemented by certain components, but in other embodiments, the functionality may be distributed between different systems and components in different locations. A reference to a component or element may refer to only a portion of the component or element. The use of terms such as "first" and "second" in this disclosure and the claims may only be for purposes of distinguishing the things they modify and may not indicate any spatial or temporal order unless apparent otherwise from context. A reference to a first thing may not imply the existence of a second thing.
Moreover, the various details and embodiments described above may be combined to produce additional embodiments according to the inventive principles of this patent disclosure. Various organizational aids such as section headings and the like may be provided as a convenience, but the subject matter arranged according to these aids and the principles of this disclosure are not defined or limited by these organizational aids. Since the inventive principles of this patent disclosure may be modified in arrangement and detail without departing from the inventive concepts, such changes and modifications are considered to fall within the scope of the following claims.

APPENDIX 1

a Type0-PDCCH CSS set configured by pdcch-ConfigSIB1 in MIB or by searchSpaceSIB1 in PDCCH-ConfigCommon or by searchSpaceZero in PDCCH-ConfigCommon for a DCI format with CRC scrambled by a SI-RNTI on the primary cell of the MCG;
a Type0A-PDCCH CSS set configured by searchSpaceOtherSystemInformation in PDCCH-ConfigCommon for a DCI format with CRC scrambled by a SI-RNTI on the primary cell of the MCG;
a Type1-PDCCH CSS set configured by ra-SearchSpace in PDCCH-ConfigCommon for a DCI format with CRC scrambled by a RA-RNTI, a MsgB-RNTI, or a TC-RNTI on the primary cell;
a Type2-PDCCH CSS set configured by pagingSearchSpace in PDCCH-ConfigCommon for a DCI format with CRC scrambled by a P-RNTI on the primary cell of the MCG;
a Type3-PDCCH CSS set configured by SearchSpace in PDCCH-Config with searchSpaceType=common for DCI formats with CRC scrambled by INT-RNTI, SFI-RNTI, TPC-PUSCH-RNTI, TPC-PUCCH-RNTI, TPC-SRS-RNTI, CI-RNTI, or PS-RNTI and, only for the primary cell, C-RNTI, MCS-C-RNTI, or CS-RNTI(s); and
a USS set configured by SearchSpace in PDCCH-Config with searchSpaceType=ue-Specific for DCI formats with CRC scrambled by C-RNTI, MCS-C-RNTI, SP-CSI-RNTI, CS-RNTI(s), SL-RNTI, SL-CS-RNTI, or SL-L-CS-RNTI.
APPENDIX 2

PDSCH Processing Time for PDSCH Processing Capability 1

PDSCH decoding time N1 [symbols]:

μ | dmrs-AdditionalPosition = pos0 in DMRS-DownlinkConfig in both of dmrs-DownlinkForPDSCH-MappingTypeA, dmrs-DownlinkForPDSCH-MappingTypeB | dmrs-AdditionalPosition ≠ pos0 in DMRS-DownlinkConfig in either of dmrs-DownlinkForPDSCH-MappingTypeA, dmrs-DownlinkForPDSCH-MappingTypeB, or if the higher layer parameter is not configured
0 | 8 | N1,0
1 | 10 | 13
2 | 17 | 20
3 | 20 | 24

PDSCH Processing Time for PDSCH Processing Capability 2

PDSCH decoding time N1 [symbols], with dmrs-AdditionalPosition = pos0 in DMRS-DownlinkConfig in both of dmrs-DownlinkForPDSCH-MappingTypeA, dmrs-DownlinkForPDSCH-MappingTypeB:

μ | N1
0 | 3
1 | 4.5
2 | 9 for frequency range 1

PUSCH Preparation Time for PUSCH Timing Capability 1

μ | PUSCH preparation time N2 [symbols]
0 | 10
1 | 12
2 | 23
3 | 36

PUSCH Preparation Time for PUSCH Timing Capability 2

μ | PUSCH preparation time N2 [symbols]
0 | 5
1 | 5.5
2 | 11 for frequency range 1

APPENDIX 3

A UE does not expect to be configured CSS sets that result in corresponding total, or per scheduled cell, numbers of monitored PDCCH candidates and non-overlapped CCEs per slot that exceed the corresponding maximum numbers per slot. For same cell scheduling or for cross-carrier scheduling where a scheduling cell and scheduled cell(s) have DL BWPs with the same SCS configuration μ, a UE does not expect a number of PDCCH candidates, and a number of corresponding non-overlapped CCEs per slot, on a secondary cell to be larger than the corresponding numbers that the UE is capable of monitoring on the secondary cell per slot. For cross-carrier scheduling, the number of PDCCH candidates for monitoring and the number of non-overlapped CCEs per slot are separately counted for each scheduled cell. For all search space sets within a slot n, denote by S_css a set of CSS sets with cardinality of I_css and by S_uss a set of USS sets with cardinality of J_uss. The location of USS sets S_uss(j), 0 ≤ j < J_uss, in S_uss is according to an ascending order of the search space set index.
Denote by M_Scss(i)^(L), 0 ≤ i < I_css, the number of counted PDCCH candidates for monitoring for CSS set S_css(i) and by M_Suss(j)^(L), 0 ≤ j < J_uss, the number of counted PDCCH candidates for monitoring for USS set S_uss(j). For the CSS sets, a UE monitors

M_PDCCH^CSS = Σ_{i=0}^{I_css−1} Σ_L M_Scss(i)^(L)

PDCCH candidates requiring a total of C_PDCCH^CSS non-overlapping CCEs in a slot. The UE allocates PDCCH candidates for monitoring to USS sets for the primary cell having an active DL BWP with SCS configuration μ in a slot if the UE is not provided PDCCHMonitoringCapabilityConfig for the primary cell or if the UE is provided PDCCHMonitoringCapabilityConfig = R15 PDCCH monitoring capability for all serving cells, or in a span if the UE is provided PDCCHMonitoringCapabilityConfig = R16 PDCCH monitoring capability for the primary cell, according to the following pseudocode. If for the USS sets for scheduling on the primary cell the UE is not provided CORESETPoolIndex for first CORESETs, or is provided CORESETPoolIndex with value 0 for first CORESETs, and is provided CORESETPoolIndex with value 1 for second CORESETs, and if

min(γ·M_PDCCH^max,slot,μ, M_PDCCH^total,slot,μ) > min(M_PDCCH^max,slot,μ, M_PDCCH^total,slot,μ) or min(γ·C_PDCCH^max,slot,μ, C_PDCCH^total,slot,μ) > min(C_PDCCH^max,slot,μ, C_PDCCH^total,slot,μ),

the following pseudocode applies only to USS sets associated with the first CORESETs. A UE does not expect to monitor PDCCH in a USS set without allocated PDCCH candidates for monitoring. Denote by V_CCE(S_uss(j)) the set of non-overlapping CCEs for search space set S_uss(j) and by C(V_CCE(S_uss(j))) the cardinality of V_CCE(S_uss(j)), where the non-overlapping CCEs for search space set S_uss(j) are determined considering the allocated PDCCH candidates for monitoring for the CSS sets and the allocated PDCCH candidates for monitoring for all search space sets S_uss(k), 0 ≤ k ≤ j.
Set M_PDCCH^USS = min(M_PDCCH^max,slot,μ, M_PDCCH^total,slot,μ) − M_PDCCH^CSS
Set C_PDCCH^USS = min(C_PDCCH^max,slot,μ, C_PDCCH^total,slot,μ) − C_PDCCH^CSS
Set j = 0
while Σ_L M_Suss(j)^(L) ≤ M_PDCCH^USS AND C(V_CCE(S_uss(j))) ≤ C_PDCCH^USS
    allocate Σ_L M_Suss(j)^(L) PDCCH candidates for monitoring to USS set S_uss(j)
    M_PDCCH^USS = M_PDCCH^USS − Σ_L M_Suss(j)^(L)
    C_PDCCH^USS = C_PDCCH^USS − C(V_CCE(S_uss(j)))
    j = j + 1
end while

APPENDIX 4

With reference to slots for PUCCH transmissions, when a UE receives in a PDSCH an activation command [11, TS 38.321] for a secondary cell ending in slot n, the UE applies the corresponding actions in [11, TS 38.321] no later than the minimum requirement defined in [10, TS 38.133] and no earlier than slot n+k, except for the following:
the actions related to CSI reporting on a serving cell that is active in slot n+k;
the actions related to the sCellDeactivationTimer associated with the secondary cell [11, TS 38.321], which the UE applies in slot n+k;
the actions related to CSI reporting on a serving cell which is not active in slot n+k, which the UE applies in the earliest slot after n+k in which the serving cell is active.

The value of k is k1 + 3·N_slot^subframe,μ + 1, where k1 is a number of slots for a PUCCH transmission with HARQ-ACK information for the PDSCH reception and is indicated by the PDSCH-to-HARQ_feedback timing indicator field in the DCI format scheduling the PDSCH reception as described in Clause 9.2.3, and N_slot^subframe,μ is a number of slots per subframe for the SCS configuration μ of the PUCCH transmission.

With reference to slots for PUCCH transmissions, if a UE receives a deactivation command [11, TS 38.321] for a secondary cell ending in slot n, the UE applies the corresponding actions in [11, TS 38.321] no later than the minimum requirement defined in [10, TS 38.133], except for the actions related to CSI reporting on an activated serving cell, which the UE applies in slot n+k.
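The activation timing k = k1 + 3·N_slot^subframe,μ + 1 can be sketched as follows, assuming the standard relation N_slot^subframe,μ = 2^μ slots per 1 ms subframe; the function name is illustrative.

```python
def activation_action_slot(n, k1, mu):
    """Earliest slot for applying SCell activation actions.

    k = k1 + 3 * N_slot_subframe(mu) + 1, where N_slot_subframe(mu) = 2**mu
    is the number of slots per subframe for SCS configuration mu (assumed
    here from the standard numerology relation). Returns slot n + k.
    """
    n_slot_subframe = 2 ** mu
    k = k1 + 3 * n_slot_subframe + 1
    return n + k
```

For example, with n = 10, k1 = 4, and μ = 1 (30 kHz SCS, 2 slots per subframe), k = 4 + 6 + 1 = 11, so the actions apply no earlier than slot 21.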
If the sCellDeactivationTimer associated with the secondary cell expires in slot n, the UE applies the corresponding actions in [11, TS 38.321] no later than the minimum requirement defined in [10, TS 38.133], except for the actions related to CSI reporting on an activated serving cell, which the UE applies in the first slot that is after slot n + 3·N_slot^subframe,μ, where μ is the SCS configuration for PDSCH reception on the secondary cell.

APPENDIX 5

A value of the counter downlink assignment indicator (DAI) field in DCI formats denotes the accumulative number of {serving cell, PDCCH monitoring occasion}-pair(s) in which PDSCH reception(s) or SPS PDSCH release associated with the DCI formats is present up to the current serving cell and current PDCCH monitoring occasion, first in ascending order of serving cell index and then in ascending order of PDCCH monitoring occasion index m, where 0≤m<M. The value of the total DAI, when present [5, TS 38.212], in a DCI format denotes the total number of {serving cell, PDCCH monitoring occasion}-pair(s) in which PDSCH reception(s) or SPS PDSCH release associated with DCI formats is present, up to the current PDCCH monitoring occasion m, and is updated from PDCCH monitoring occasion to PDCCH monitoring occasion.

Denote by N_C-DAI^DL the number of bits for the counter DAI and set T_D = 2^(N_C-DAI^DL). Denote by V_C-DAI,c,m^DL the value of the counter DAI in a DCI format scheduling PDSCH reception or SPS PDSCH release on serving cell c in PDCCH monitoring occasion m according to Table 9.1.3-1 or Table 9.1.3-1A. Denote by V_T-DAI,m^DL the value of the total DAI in a DCI format in PDCCH monitoring occasion m according to Table 9.1.3-1. The UE assumes a same value of total DAI in all DCI formats that include a total DAI field in PDCCH monitoring occasion m. If the UE transmits HARQ-ACK information in a PUCCH in slot n and for any PUCCH format, the UE determines the õ_0^ACK, õ_1^ACK, . . .
, õ_{O_ACK−1}^ACK, for a total number of O_ACK HARQ-ACK information bits, according to the following pseudo-code:

Set m = 0 – PDCCH with DCI format scheduling PDSCH reception or SPS PDSCH release monitoring occasion index: lower index corresponds to earlier PDCCH monitoring occasion
Set j = 0
Set V_temp = 0
Set V_temp2 = 0
Set V_s = Ø
Set N_cells^DL to the number of serving cells configured by higher layers for the UE
– if, for an active DL BWP of a serving cell, the UE is not provided CORESETPoolIndex or is provided CORESETPoolIndex with value 0 for one or more first CORESETs and is provided CORESETPoolIndex with value 1 for one or more second CORESETs, and is provided ACKNACKFeedbackMode = JointFeedback, the serving cell is counted two times, where the first time corresponds to the first CORESETs and the second time corresponds to the second CORESETs
Set M to the number of PDCCH monitoring occasion(s)
while m < M
    Set c = 0 – serving cell index: lower indexes correspond to lower RRC indexes of corresponding cell
    while c < N_cells^DL
        if PDCCH monitoring occasion m is before an active DL BWP change on serving cell c or an active UL BWP change on the PCell and an active DL BWP change is not triggered in PDCCH monitoring occasion m
            c = c + 1
        else
            if there is a PDSCH on serving cell c associated with PDCCH in PDCCH monitoring occasion m, or there is a PDCCH indicating SPS PDSCH release on serving cell c
                if V_C-DAI,c,m^DL ≤ V_temp
                    j = j + 1
                end if
                V_temp = V_C-DAI,c,m^DL
                if V_T-DAI,m^DL = Ø
                    V_temp2 = V_C-DAI,c,m^DL
                else
                    V_temp2 = V_T-DAI,m^DL
                end if
                if harq-ACK-SpatialBundlingPUCCH is not provided and the UE is configured by maxNrofCodeWordsScheduledByDCI with reception of two transport blocks for at least one configured DL BWP of at least one serving cell,
                    õ_{2·T_D·j + 2(V_C-DAI,c,m^DL − 1)}^ACK = HARQ-ACK information bit corresponding to the first transport block of this cell
                    õ_{2·T_D·j + 2(V_C-DAI,c,m^DL − 1) + 1}^ACK = HARQ-ACK information bit corresponding to the second transport block of this cell
                    V_s = V_s ∪ {2·T_D·j + 2(V_C-DAI,c,m^DL − 1), 2·T_D·j + 2(V_C-DAI,c,m^DL − 1) + 1}
                elseif harq-ACK-SpatialBundlingPUCCH is provided to the UE and m is a monitoring occasion for PDCCH with a DCI format that supports PDSCH reception with two transport blocks and the UE is configured by maxNrofCodeWordsScheduledByDCI with reception of two transport blocks in at least one configured DL BWP of a serving cell,
                    õ_{T_D·j + V_C-DAI,c,m^DL − 1}^ACK = binary AND operation of the HARQ-ACK information bits corresponding to the first and second transport blocks of this cell
                    V_s = V_s ∪ {T_D·j + V_C-DAI,c,m^DL − 1}
                else
                    õ_{T_D·j + V_C-DAI,c,m^DL − 1}^ACK = HARQ-ACK information bit of this cell
                    V_s = V_s ∪ {T_D·j + V_C-DAI,c,m^DL − 1}
                end if
            end if
            c = c + 1
        end if
    end while
    m = m + 1
end while
if V_temp2 < V_temp
    j = j + 1
end if
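A condensed executable rendering of the counter-DAI placement logic in the Appendix 5 pseudo-code is sketched below, restricted to the single-transport-block branch with no spatial bundling and no total-DAI handling; variable and function names are simplified and illustrative.

```python
def type2_codebook(occasions, t_d):
    """Condensed sketch of the Appendix 5 counter-DAI placement.

    occasions: list of per-occasion dicts {cell: c_dai}, with c_dai the
    1-based counter DAI received on that cell in that occasion, occasions
    in ascending order. t_d = 2**(number of C-DAI bits). Returns
    {codebook_position: cell}; j counts C-DAI wrap-arounds exactly as in
    the 'if V_C-DAI <= V_temp then j = j + 1' step of the pseudo-code.
    """
    j, v_temp = 0, 0
    placements = {}
    for occ in occasions:
        for cell in sorted(occ):          # ascending serving cell index
            v = occ[cell]
            if v <= v_temp:               # counter DAI wrapped around
                j += 1
            v_temp = v
            # single-TB case: position T_D * j + V_C-DAI - 1
            placements[t_d * j + v - 1] = cell
    return placements
```

With a 2-bit counter DAI (t_d = 4), a fifth assignment whose C-DAI wraps back to 1 lands at position 4, illustrating how j extends the codebook beyond one counter period.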
11943156
DETAILED DESCRIPTION

In some wireless communications systems, one or more UEs may communicate with each other, such as an anchor node (e.g., an anchor UE, which may communicate directly with a base station) communicating with one or more other UEs, which may be referred to as sidelink communication. To perform sidelink positioning, a UE may receive PRSs from one or more anchor nodes, which may indicate to the UE a position of another wireless device relative to the UE. In some cases, each anchor node may transmit one or more PRSs. The UE may determine the position of the anchor nodes by processing the PRSs. Further, the positions of the anchor nodes may be known values (e.g., known to the anchor node and a base station), and the UE may determine a position of the UE relative to the anchor nodes.

In some cases, resources of multiple PRSs may collide (e.g., at least partially overlap in time or frequency). That is, because anchor nodes select resources (e.g., time-frequency resources) for PRSs from a shared pool, an anchor node may select the same resources as another anchor node for transmitting PRSs. When multiple anchor nodes select the same resources for PRS transmission, the UE may receive multiple PRSs but may only have the capability to process a portion (e.g., a subset) of the multiple PRSs, and the UE may not know which of the multiple PRSs to select for processing in order to obtain accurate positioning information.

In some examples, a receiving UE may process one or more PRSs based on a respective priority of the PRSs. For example, the UE may receive PRSs from anchor nodes over a set of resources. The UE may use the PRSs to determine positioning information for the UE relative to the one or more anchor nodes, or positioning information of one or more neighboring UEs relative to the UE, the one or more anchor nodes, one or more surrounding wireless devices (e.g., UEs, base stations, etc.), or the like.
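The idea of processing only a capability-limited subset of colliding PRSs according to their priorities can be sketched as follows; the function name, the (prs_id, priority) representation, and the lower-number-is-higher-priority convention are illustrative assumptions.

```python
def select_prs_for_processing(received_prs, capacity):
    """Pick which received PRSs to process when more arrive than the UE
    can handle.

    received_prs: list of (prs_id, priority) tuples, where a lower
    priority number means higher priority (an assumed convention).
    Keeps the 'capacity' highest-priority entries; ties are broken by
    prs_id for determinism. Returns the selected prs_ids.
    """
    ranked = sorted(received_prs, key=lambda p: (p[1], p[0]))
    return [prs_id for prs_id, _ in ranked[:capacity]]
```

For instance, a UE able to process two PRSs out of four colliding ones would keep the two with the highest priority and skip the rest, reducing processing load while retaining the most useful measurements.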
In some cases, the UE may detect a priority of the PRSs based on receiving the PRSs (e.g., based on a scrambling sequence, a resource mapping pattern, an order of the PRSs in a list, or the like). The resource mapping pattern may include a comb-structure type, a comb-structure offset, or the like of the PRSs, where a comb-structure refers to a frequency pattern in which a PRS uses every Nth subcarrier. The comb value (e.g., N) may be configured, which may be referred to as a comb-structure type. The offset of a comb may refer to a number of frequency resources prior to the beginning of the comb pattern. In some other cases, the UE may receive an explicit indication of the priority of the PRSs, such as a dynamic indication in a control channel, radio resource control (RRC) signaling, or a medium access control-control element (MAC-CE).

The UE may decode one or more PRSs of the received PRSs (e.g., a subset of PRSs) based on applying the respective priority, such as according to a rule. For example, the UE may select the one or more PRSs based on a rule related to parameters of the anchor node (e.g., positioning reliability, transmission parameters, signal quality, mobility, and the like), a capability of the UE, or the like.

UEs supporting PRS priority in sidelink communications systems may utilize the techniques described herein to experience power savings and extended battery life while ensuring reliable and efficient communications in the group of UEs. For example, the UE may receive PRSs of different priorities from one or more wireless devices and may process a portion of the received PRSs (e.g., due to UE capability) according to respective priorities (e.g., may process PRSs with higher priority), which may reduce processing and signaling overhead, saving power at the UE and obtaining accurate positioning information.
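The comb structure described above (a PRS occupying every Nth subcarrier, shifted by a configured offset) can be sketched as a simple subcarrier mapping; the function name and parameter names are illustrative.

```python
def comb_subcarriers(comb_size, comb_offset, num_subcarriers):
    """Subcarrier indices occupied by a comb-structured PRS.

    The PRS uses every comb_size-th subcarrier (the comb-structure type),
    starting comb_offset subcarriers into the band (the comb-structure
    offset precedes the repeating pattern, as described above).
    """
    if not 0 <= comb_offset < comb_size:
        raise ValueError("offset must lie within one comb period")
    return list(range(comb_offset, num_subcarriers, comb_size))
```

For example, a comb-4 PRS with offset 1 over 12 subcarriers occupies subcarriers 1, 5, and 9; two PRSs with the same comb size but different offsets occupy disjoint subcarrier sets.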
Particular aspects of the subject matter described in this disclosure may be implemented to support high reliability through transmission of priority indications for PRSs, and processing PRSs according to priorities, among other examples. The described techniques may thus include features for reducing power consumption, improving spectral efficiency, and providing higher data rates.

Aspects of the disclosure are initially described in the context of wireless communications systems. Aspects of the disclosure are further described in the context of a resource diagram and a process flow. Aspects of the disclosure are further illustrated by and described with reference to apparatus diagrams, system diagrams, and flowcharts that relate to processing PRSs according to priority.

FIG.1 illustrates an example of a wireless communications system100that supports processing PRSs according to priority in accordance with one or more aspects of the present disclosure. The wireless communications system100may include one or more base stations105, one or more UEs115, and a core network130. In some examples, the wireless communications system100may be a Long Term Evolution (LTE) network, an LTE-Advanced (LTE-A) network, an LTE-A Pro network, or a New Radio (NR) network. In some examples, the wireless communications system100may support enhanced broadband communications, ultra-reliable communications, low latency communications, communications with low-cost and low-complexity devices, or any combination thereof.

The base stations105may be dispersed throughout a geographic area to form the wireless communications system100and may be devices in different forms or having different capabilities. The base stations105and the UEs115may wirelessly communicate via one or more communication links125. Each base station105may provide a coverage area110over which the UEs115and the base station105may establish one or more communication links125.
The coverage area110may be an example of a geographic area over which a base station105and a UE115may support the communication of signals according to one or more radio access technologies. The UEs115may be dispersed throughout a coverage area110of the wireless communications system100, and each UE115may be stationary, or mobile, or both at different times. The UEs115may be devices in different forms or having different capabilities. Some example UEs115are illustrated inFIG.1. The UEs115described herein may be able to communicate with various types of devices, such as other UEs115, the base stations105, or network equipment (e.g., core network nodes, relay devices, integrated access and backhaul (IAB) nodes, or other network equipment), as shown inFIG.1. The base stations105may communicate with the core network130, or with one another, or both. For example, the base stations105may interface with the core network130through one or more backhaul links120(e.g., via an S1, N2, N3, or other interface). The base stations105may communicate with one another over the backhaul links120(e.g., via an X2, Xn, or other interface) either directly (e.g., directly between base stations105), or indirectly (e.g., via core network130), or both. In some examples, the backhaul links120may be or include one or more wireless links. One or more of the base stations105described herein may include or may be referred to by a person having ordinary skill in the art as a base transceiver station, a radio base station, an access point, a radio transceiver, a NodeB, an eNodeB (eNB), a next-generation NodeB or a giga-NodeB (either of which may be referred to as a gNB), a Home NodeB, a Home eNodeB, or other suitable terminology. 
A UE115may include or may be referred to as a mobile device, a wireless device, a remote device, a handheld device, or a subscriber device, or some other suitable terminology, where the “device” may also be referred to as a unit, a station, a terminal, or a client, among other examples. A UE115may also include or may be referred to as a personal electronic device such as a cellular phone, a personal digital assistant (PDA), a tablet computer, a laptop computer, or a personal computer. In some examples, a UE115may include or be referred to as a wireless local loop (WLL) station, an Internet of Things (IoT) device, an Internet of Everything (IoE) device, or a machine type communications (MTC) device, among other examples, which may be implemented in various objects such as appliances, vehicles, or meters, among other examples. The UEs115described herein may be able to communicate with various types of devices, such as other UEs115that may sometimes act as relays as well as the base stations105and the network equipment including macro eNBs or gNBs, small cell eNBs or gNBs, or relay base stations, among other examples, as shown inFIG.1. The UEs115and the base stations105may wirelessly communicate with one another via one or more communication links125over one or more carriers. The term “carrier” may refer to a set of radio frequency spectrum resources having a defined physical layer structure for supporting the communication links125. For example, a carrier used for a communication link125may include a portion of a radio frequency spectrum band (e.g., a bandwidth part (BWP)) that is operated according to one or more physical layer channels for a given radio access technology (e.g., LTE, LTE-A, LTE-A Pro, NR). Each physical layer channel may carry acquisition signaling (e.g., synchronization signals, system information), control signaling that coordinates operation for the carrier, user data, or other signaling. 
The wireless communications system100may support communication with a UE115using carrier aggregation or multi-carrier operation. A UE115may be configured with multiple downlink component carriers and one or more uplink component carriers according to a carrier aggregation configuration. Carrier aggregation may be used with both frequency division duplexing (FDD) and time division duplexing (TDD) component carriers. In some examples (e.g., in a carrier aggregation configuration), a carrier may also have acquisition signaling or control signaling that coordinates operations for other carriers. A carrier may be associated with a frequency channel (e.g., an evolved universal mobile telecommunication system terrestrial radio access (E-UTRA) absolute radio frequency channel number (EARFCN)) and may be positioned according to a channel raster for discovery by the UEs115. A carrier may be operated in a standalone mode where initial acquisition and connection may be conducted by the UEs115via the carrier, or the carrier may be operated in a non-standalone mode where a connection is anchored using a different carrier (e.g., of the same or a different radio access technology). The communication links125shown in the wireless communications system100may include uplink transmissions from a UE115to a base station105, or downlink transmissions from a base station105to a UE115. Carriers may carry downlink or uplink communications (e.g., in an FDD mode) or may be configured to carry downlink and uplink communications (e.g., in a TDD mode). A carrier may be associated with a particular bandwidth of the radio frequency spectrum, and in some examples the carrier bandwidth may be referred to as a “system bandwidth” of the carrier or the wireless communications system100. For example, the carrier bandwidth may be one of a number of determined bandwidths for carriers of a particular radio access technology (e.g., 1.4, 3, 5, 10, 15, 20, 40, or 80 megahertz (MHz)). 
Devices of the wireless communications system100(e.g., the base stations105, the UEs115, or both) may have hardware configurations that support communications over a particular carrier bandwidth or may be configurable to support communications over one of a set of carrier bandwidths. In some examples, the wireless communications system100may include base stations105or UEs115that support simultaneous communications via carriers associated with multiple carrier bandwidths. In some examples, each served UE115may be configured for operating over portions (e.g., a sub-band, a BWP) or all of a carrier bandwidth. Signal waveforms transmitted over a carrier may be made up of multiple subcarriers (e.g., using multi-carrier modulation (MCM) techniques such as orthogonal frequency division multiplexing (OFDM) or discrete Fourier transform spread OFDM (DFT-S-OFDM)). In a system employing MCM techniques, a resource element may consist of one symbol period (e.g., a duration of one modulation symbol) and one subcarrier, where the symbol period and subcarrier spacing are inversely related. The number of bits carried by each resource element may depend on the modulation scheme (e.g., the order of the modulation scheme, the coding rate of the modulation scheme, or both). Thus, the more resource elements that a UE115receives and the higher the order of the modulation scheme, the higher the data rate may be for the UE115. A wireless communications resource may refer to a combination of a radio frequency spectrum resource, a time resource, and a spatial resource (e.g., spatial layers or beams), and the use of multiple spatial layers may further increase the data rate or data integrity for communications with a UE115. One or more numerologies for a carrier may be supported, where a numerology may include a subcarrier spacing (Δf) and a cyclic prefix. A carrier may be divided into one or more BWPs having the same or different numerologies. 
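The relationship sketched above between resource elements, modulation order, and data rate can be illustrated numerically. This is a deliberate simplification (the function names and the flat per-resource-element accounting are assumptions for illustration; actual NR peak-rate calculations include overhead and scaling terms omitted here):

```python
import math

def resource_element_bits(modulation_order, coding_rate):
    # One modulation symbol carries log2(order) coded bits; the coding
    # rate scales that down to information bits per resource element.
    return math.log2(modulation_order) * coding_rate

def illustrative_data_rate(res_elems_per_second, modulation_order, coding_rate):
    # More resource elements and a higher-order modulation scheme both
    # raise the achievable rate, as described in the text.
    return res_elems_per_second * resource_element_bits(modulation_order, coding_rate)

# 64-QAM at coding rate 3/4: 6 coded bits * 0.75 = 4.5 information bits per RE.
per_re = resource_element_bits(64, 0.75)
```

Doubling either the number of resource elements per second or the bits each one carries doubles the illustrative rate, matching the proportionality stated in the text.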
In some examples, a UE115may be configured with multiple BWPs. In some examples, a single BWP for a carrier may be active at a given time and communications for the UE115may be restricted to one or more active BWPs. The time intervals for the base stations105or the UEs115may be expressed in multiples of a basic time unit which may, for example, refer to a sampling period of Ts=1/(Δfmax·Nf) seconds, where Δfmaxmay represent the maximum supported subcarrier spacing, and Nfmay represent the maximum supported discrete Fourier transform (DFT) size. Time intervals of a communications resource may be organized according to radio frames each having a specified duration (e.g., 10 milliseconds (ms)). Each radio frame may be identified by a system frame number (SFN) (e.g., ranging from 0 to 1023). Each frame may include multiple consecutively numbered subframes or slots, and each subframe or slot may have the same duration. In some examples, a frame may be divided (e.g., in the time domain) into subframes, and each subframe may be further divided into a number of slots. Alternatively, each frame may include a variable number of slots, and the number of slots may depend on subcarrier spacing. Each slot may include a number of symbol periods (e.g., depending on the length of the cyclic prefix prepended to each symbol period). In some wireless communications systems100, a slot may further be divided into multiple mini-slots containing one or more symbols. Excluding the cyclic prefix, each symbol period may contain one or more (e.g., Nf) sampling periods. The duration of a symbol period may depend on the subcarrier spacing or frequency band of operation. A subframe, a slot, a mini-slot, or a symbol may be the smallest scheduling unit (e.g., in the time domain) of the wireless communications system100and may be referred to as a transmission time interval (TTI). In some examples, the TTI duration (e.g., the number of symbol periods in a TTI) may be variable. 
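The timing relationships above can be sketched as follows. The example values Δfmax= 480 kHz and Nf= 4096, and the 15 kHz · 2^μ subcarrier-spacing family, are assumptions drawn from common NR numerology rather than stated in this passage:

```python
import math

def basic_time_unit(delta_f_max_hz, n_f):
    # Ts = 1 / (delta_f_max * N_f), per the formula in the text.
    return 1.0 / (delta_f_max_hz * n_f)

def slots_per_frame(scs_khz, frame_ms=10):
    # With subcarrier spacings of 15 kHz * 2^mu, slot duration halves each
    # time the spacing doubles, so a 10 ms radio frame holds 10 * 2^mu slots.
    mu = int(math.log2(scs_khz / 15))
    return frame_ms * (2 ** mu)

ts = basic_time_unit(480_000, 4096)  # roughly 0.509 nanoseconds
```

For example, a 120 kHz subcarrier spacing (μ = 3) gives 80 slots per 10 ms frame, consistent with the slot count depending on subcarrier spacing as described.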
Additionally or alternatively, the smallest scheduling unit of the wireless communications system100may be dynamically selected (e.g., in bursts of shortened TTIs (sTTIs)). Physical channels may be multiplexed on a carrier according to various techniques. A physical control channel and a physical data channel may be multiplexed on a downlink carrier, for example, using one or more of time division multiplexing (TDM) techniques, frequency division multiplexing (FDM) techniques, or hybrid TDM-FDM techniques. A control region (e.g., a control resource set (CORESET)) for a physical control channel may be defined by a number of symbol periods and may extend across the system bandwidth or a subset of the system bandwidth of the carrier. One or more control regions (e.g., CORESETs) may be configured for a set of the UEs115. For example, one or more of the UEs115may monitor or search control regions for control information according to one or more search space sets, and each search space set may include one or multiple control channel candidates in one or more aggregation levels arranged in a cascaded manner. An aggregation level for a control channel candidate may refer to a number of control channel resources (e.g., control channel elements (CCEs)) associated with encoded information for a control information format having a given payload size. Search space sets may include common search space sets configured for sending control information to multiple UEs115and UE-specific search space sets for sending control information to a specific UE115. Each base station105may provide communication coverage via one or more cells, for example a macro cell, a small cell, a hot spot, or other types of cells, or any combination thereof. 
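The notion of control channel candidates at different aggregation levels can be sketched as below. This is a simplified illustration only: the contiguous placement is an assumption, since NR actually spreads candidates across CCEs with a hashing function that is omitted here:

```python
def candidate_cces(num_cces, aggregation_level, num_candidates):
    # Each candidate occupies `aggregation_level` consecutive CCEs within
    # the control region. Contiguous placement is a simplification; the
    # standardized hashing that scatters candidates is not modeled.
    candidates = []
    for m in range(num_candidates):
        start = (m * aggregation_level) % num_cces
        candidates.append(list(range(start, start + aggregation_level)))
    return candidates

# A UE monitoring 2 candidates at aggregation level 4 in a 54-CCE control region:
cands = candidate_cces(54, 4, 2)  # [[0, 1, 2, 3], [4, 5, 6, 7]]
```

A higher aggregation level ties more CCEs to one candidate, trading control capacity for a more robustly encoded payload of the same size.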
The term “cell” may refer to a logical communication entity used for communication with a base station105(e.g., over a carrier) and may be associated with an identifier for distinguishing neighboring cells (e.g., a physical cell identifier (PCID), a virtual cell identifier (VCID), or others). In some examples, a cell may also refer to a geographic coverage area110or a portion of a geographic coverage area110(e.g., a sector) over which the logical communication entity operates. Such cells may range from smaller areas (e.g., a structure, a subset of a structure) to larger areas depending on various factors such as the capabilities of the base station105. For example, a cell may be or include a building, a subset of a building, or exterior spaces between or overlapping with geographic coverage areas110, among other examples. A macro cell generally covers a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by the UEs115with service subscriptions with the network provider supporting the macro cell. A small cell may be associated with a lower-powered base station105, as compared with a macro cell, and a small cell may operate in the same or different (e.g., licensed, unlicensed) frequency bands as macro cells. Small cells may provide unrestricted access to the UEs115with service subscriptions with the network provider or may provide restricted access to the UEs115having an association with the small cell (e.g., the UEs115in a closed subscriber group (CSG), the UEs115associated with users in a home or office). A base station105may support one or multiple cells and may also support communications over the one or more cells using one or multiple component carriers. In some examples, a carrier may support multiple cells, and different cells may be configured according to different protocol types (e.g., MTC, narrowband IoT (NB-IoT), enhanced mobile broadband (eMBB)) that may provide access for different types of devices. 
In some examples, a base station105may be movable and therefore provide communication coverage for a moving geographic coverage area110. In some examples, different geographic coverage areas110associated with different technologies may overlap, but the different geographic coverage areas110may be supported by the same base station105. In other examples, the overlapping geographic coverage areas110associated with different technologies may be supported by different base stations105. The wireless communications system100may include, for example, a heterogeneous network in which different types of the base stations105provide coverage for various geographic coverage areas110using the same or different radio access technologies. The wireless communications system100may support synchronous or asynchronous operation. For synchronous operation, the base stations105may have similar frame timings, and transmissions from different base stations105may be approximately aligned in time. For asynchronous operation, the base stations105may have different frame timings, and transmissions from different base stations105may, in some examples, not be aligned in time. The techniques described herein may be used for either synchronous or asynchronous operations. Some UEs115, such as MTC or IoT devices, may be low cost or low complexity devices and may provide for automated communication between machines (e.g., via Machine-to-Machine (M2M) communication). M2M communication or MTC may refer to data communication technologies that allow devices to communicate with one another or a base station105without human intervention. In some examples, M2M communication or MTC may include communications from devices that integrate sensors or meters to measure or capture information and relay such information to a central server or application program that makes use of the information or presents the information to humans interacting with the application program. 
Some UEs115may be designed to collect information or enable automated behavior of machines or other devices. Examples of applications for MTC devices include smart metering, inventory monitoring, water level monitoring, equipment monitoring, healthcare monitoring, wildlife monitoring, weather and geological event monitoring, fleet management and tracking, remote security sensing, physical access control, and transaction-based business charging. Some UEs115may be configured to employ operating modes that reduce power consumption, such as half-duplex communications (e.g., a mode that supports one-way communication via transmission or reception, but not transmission and reception simultaneously). In some examples, half-duplex communications may be performed at a reduced peak rate. Other power conservation techniques for the UEs115include entering a power saving deep sleep mode when not engaging in active communications, operating over a limited bandwidth (e.g., according to narrowband communications), or a combination of these techniques. For example, some UEs115may be configured for operation using a narrowband protocol type that is associated with a defined portion or range (e.g., set of subcarriers or resource blocks (RBs)) within a carrier, within a guard-band of a carrier, or outside of a carrier. The wireless communications system100may be configured to support ultra-reliable communications or low-latency communications, or various combinations thereof. For example, the wireless communications system100may be configured to support ultra-reliable low-latency communications (URLLC). The UEs115may be designed to support ultra-reliable, low-latency, or critical functions. Ultra-reliable communications may include private communication or group communication and may be supported by one or more services such as push-to-talk, video, or data. 
Support for ultra-reliable, low-latency functions may include prioritization of services, and such services may be used for public safety or general commercial applications. The terms ultra-reliable, low-latency, and ultra-reliable low-latency may be used interchangeably herein. In some examples, a UE115may also be able to communicate directly with other UEs115over a device-to-device (D2D) communication link135(e.g., using a peer-to-peer (P2P) or D2D protocol). One or more UEs115utilizing D2D communications may be within the geographic coverage area110of a base station105. Other UEs115in such a group may be outside the geographic coverage area110of a base station105or be otherwise unable to receive transmissions from a base station105. In some examples, groups of the UEs115communicating via D2D communications may utilize a one-to-many (1:M) system in which each UE115transmits to every other UE115in the group. In some examples, a base station105facilitates the scheduling of resources for D2D communications. In other cases, D2D communications are carried out between the UEs115without the involvement of a base station105. In some systems, the D2D communication link135may be an example of a communication channel, such as a sidelink communication channel, between vehicles (e.g., UEs115). In some examples, vehicles may communicate using vehicle-to-everything (V2X) communications, vehicle-to-vehicle (V2V) communications, or some combination of these. A vehicle may signal information related to traffic conditions, signal scheduling, weather, safety, emergencies, or any other information relevant to a V2X system. In some examples, vehicles in a V2X system may communicate with roadside infrastructure, such as roadside units, or with the network via one or more network nodes (e.g., base stations105) using vehicle-to-network (V2N) communications, or with both. 
The core network130may provide user authentication, access authorization, tracking, Internet Protocol (IP) connectivity, and other access, routing, or mobility functions. The core network130may be an evolved packet core (EPC) or 5G core (5GC), which may include at least one control plane entity that manages access and mobility (e.g., a mobility management entity (MME), an access and mobility management function (AMF)) and at least one user plane entity that routes packets or interconnects to external networks (e.g., a serving gateway (S-GW), a Packet Data Network (PDN) gateway (P-GW), or a user plane function (UPF)). The control plane entity may manage non-access stratum (NAS) functions such as mobility, authentication, and bearer management for the UEs115served by the base stations105associated with the core network130. User IP packets may be transferred through the user plane entity, which may provide IP address allocation as well as other functions. The user plane entity may be connected to IP services150for one or more network operators. The IP services150may include access to the Internet, Intranet(s), an IP Multimedia Subsystem (IMS), or a Packet-Switched Streaming Service. Some of the network devices, such as a base station105, may include subcomponents such as an access network entity140, which may be an example of an access node controller (ANC). Each access network entity140may communicate with the UEs115through one or more other access network transmission entities145, which may be referred to as radio heads, smart radio heads, or transmission/reception points (TRPs). Each access network transmission entity145may include one or more antenna panels. In some configurations, various functions of each access network entity140or base station105may be distributed across various network devices (e.g., radio heads and ANCs) or consolidated into a single network device (e.g., a base station105). 
The wireless communications system100may operate using one or more frequency bands, such as in the range of 300 megahertz (MHz) to 300 gigahertz (GHz). Generally, the region from 300 MHz to 3 GHz is known as the ultra-high frequency (UHF) region or decimeter band because the wavelengths range from approximately one decimeter to one meter in length. The UHF waves may be blocked or redirected by buildings and environmental features, but the waves may penetrate structures sufficiently for a macro cell to provide service to the UEs115located indoors. The transmission of UHF waves may be associated with smaller antennas and shorter ranges (e.g., less than 100 kilometers) compared to transmission using the smaller frequencies and longer waves of the high frequency (HF) or very high frequency (VHF) portion of the spectrum below 300 MHz. The electromagnetic spectrum is often subdivided, based on frequency/wavelength, into various classes, bands, channels, etc. In 5G NR two initial operating bands have been identified as frequency range designations FR1 (410 MHz-7.125 GHz) and FR2 (24.25 GHz-52.6 GHz). It should be understood that although a portion of FR1 is greater than 6 GHz, FR1 is often referred to (interchangeably) as a “Sub-6 GHz” band in various documents and articles. A similar nomenclature issue sometimes occurs with regard to FR2, which is often referred to (interchangeably) as a “millimeter wave” band in documents and articles, despite being different from the extremely high frequency (EHF) band (30 GHz-300 GHz) which is identified by the International Telecommunications Union (ITU) as a “millimeter wave” band. The frequencies between FR1 and FR2 are often referred to as mid-band frequencies. Recent 5G NR studies have identified an operating band for these mid-band frequencies as frequency range designation FR3 (7.125 GHz-24.25 GHz). 
Frequency bands falling within FR3 may inherit FR1 characteristics and/or FR2 characteristics, and thus may effectively extend features of FR1 and/or FR2 into mid-band frequencies. In addition, higher frequency bands are currently being explored to extend 5G NR operation beyond 52.6 GHz. For example, three higher operating bands have been identified as frequency range designations FR4a or FR4-1 (52.6 GHz-71 GHz), FR4 (52.6 GHz-114.25 GHz), and FR5 (114.25 GHz-300 GHz). Each of these higher frequency bands falls within the EHF band. With the above aspects in mind, unless specifically stated otherwise, it should be understood that the term “sub-6 GHz” or the like if used herein may broadly represent frequencies that may be less than 6 GHz, may be within FR1, or may include mid-band frequencies. Further, unless specifically stated otherwise, it should be understood that the term “millimeter wave” or the like if used herein may broadly represent frequencies that may include mid-band frequencies, may be within FR2, FR4, FR4-a or FR4-1, and/or FR5, or may be within the EHF band. The wireless communications system100may also operate in a super high frequency (SHF) region using frequency bands from 3 GHz to 30 GHz, also known as the centimeter band, or in an extremely high frequency (EHF) region of the spectrum (e.g., from 30 GHz to 300 GHz), also known as the millimeter band. In some examples, the wireless communications system100may support millimeter wave (mmW) communications between the UEs115and the base stations105, and EHF antennas of the respective devices may be smaller and more closely spaced than UHF antennas. In some examples, this may facilitate use of antenna arrays within a device. The propagation of EHF transmissions, however, may be subject to even greater atmospheric attenuation and shorter range than SHF or UHF transmissions. 
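The frequency range designations enumerated above can be collected into a small lookup. The boundary conventions (which designation owns an exact edge, and checking the overlapping FR4-1 before FR4) are ordering assumptions made for this sketch:

```python
def frequency_range(freq_ghz):
    # Ranges as enumerated in the text. FR4-1 (52.6-71 GHz) overlaps
    # FR4 (52.6-114.25 GHz), so it is checked first by assumption.
    if 0.410 <= freq_ghz <= 7.125:
        return "FR1"
    if 7.125 < freq_ghz < 24.25:
        return "FR3"
    if 24.25 <= freq_ghz <= 52.6:
        return "FR2"
    if 52.6 < freq_ghz <= 71.0:
        return "FR4-1"
    if 52.6 < freq_ghz <= 114.25:
        return "FR4"
    if 114.25 < freq_ghz <= 300.0:
        return "FR5"
    return "outside listed ranges"

band = frequency_range(28.0)  # "FR2"
```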
The techniques disclosed herein may be employed across transmissions that use one or more different frequency regions, and designated use of bands across these frequency regions may differ by country or regulating body. The wireless communications system100may utilize both licensed and unlicensed radio frequency spectrum bands. For example, the wireless communications system100may employ License Assisted Access (LAA), LTE-Unlicensed (LTE-U) radio access technology, or NR technology in an unlicensed band such as the 5 GHz industrial, scientific, and medical (ISM) band. When operating in unlicensed radio frequency spectrum bands, devices such as the base stations105and the UEs115may employ carrier sensing for collision detection and avoidance. In some examples, operations in unlicensed bands may be based on a carrier aggregation configuration in conjunction with component carriers operating in a licensed band (e.g., LAA). Operations in unlicensed spectrum may include downlink transmissions, uplink transmissions, P2P transmissions, or D2D transmissions, among other examples. A base station105or a UE115may be equipped with multiple antennas, which may be used to employ techniques such as transmit diversity, receive diversity, multiple-input multiple-output (MIMO) communications, or beamforming. The antennas of a base station105or a UE115may be located within one or more antenna arrays or antenna panels, which may support MIMO operations or transmit or receive beamforming. For example, one or more base station antennas or antenna arrays may be co-located at an antenna assembly, such as an antenna tower. In some examples, antennas or antenna arrays associated with a base station105may be located in diverse geographic locations. A base station105may have an antenna array with a number of rows and columns of antenna ports that the base station105may use to support beamforming of communications with a UE115. 
Likewise, a UE115may have one or more antenna arrays that may support various MIMO or beamforming operations. Additionally or alternatively, an antenna panel may support radio frequency beamforming for a signal transmitted via an antenna port. The base stations105or the UEs115may use MIMO communications to exploit multipath signal propagation and increase the spectral efficiency by transmitting or receiving multiple signals via different spatial layers. Such techniques may be referred to as spatial multiplexing. The multiple signals may, for example, be transmitted by the transmitting device via different antennas or different combinations of antennas. Likewise, the multiple signals may be received by the receiving device via different antennas or different combinations of antennas. Each of the multiple signals may be referred to as a separate spatial stream and may carry bits associated with the same data stream (e.g., the same codeword) or different data streams (e.g., different codewords). Different spatial layers may be associated with different antenna ports used for channel measurement and reporting. MIMO techniques include single-user MIMO (SU-MIMO), where multiple spatial layers are transmitted to the same receiving device, and multiple-user MIMO (MU-MIMO), where multiple spatial layers are transmitted to multiple devices. Beamforming, which may also be referred to as spatial filtering, directional transmission, or directional reception, is a signal processing technique that may be used at a transmitting device or a receiving device (e.g., a base station105, a UE115) to shape or steer an antenna beam (e.g., a transmit beam, a receive beam) along a spatial path between the transmitting device and the receiving device. 
Beamforming may be achieved by combining the signals communicated via antenna elements of an antenna array such that some signals propagating at particular orientations with respect to an antenna array experience constructive interference while others experience destructive interference. The adjustment of signals communicated via the antenna elements may include a transmitting device or a receiving device applying amplitude offsets, phase offsets, or both to signals carried via the antenna elements associated with the device. The adjustments associated with each of the antenna elements may be defined by a beamforming weight set associated with a particular orientation (e.g., with respect to the antenna array of the transmitting device or receiving device, or with respect to some other orientation). A base station105or a UE115may use beam sweeping techniques as part of beamforming operations. For example, a base station105may use multiple antennas or antenna arrays (e.g., antenna panels) to conduct beamforming operations for directional communications with a UE115. Some signals (e.g., synchronization signals, reference signals, beam selection signals, or other control signals) may be transmitted by a base station105multiple times in different directions. For example, the base station105may transmit a signal according to different beamforming weight sets associated with different directions of transmission. Transmissions in different beam directions may be used to identify (e.g., by a transmitting device, such as a base station105, or by a receiving device, such as a UE115) a beam direction for later transmission or reception by the base station105. Some signals, such as data signals associated with a particular receiving device, may be transmitted by a base station105in a single beam direction (e.g., a direction associated with the receiving device, such as a UE115). 
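A minimal numerical sketch of a beamforming weight set for a uniform linear array follows. The half-wavelength spacing, unit amplitudes, and function names are illustrative assumptions; practical arrays may also apply amplitude offsets (tapers) as the text notes:

```python
import cmath
import math

def steering_weights(num_elements, spacing_wavelengths, steer_deg):
    # Per-element phase offsets (a "beamforming weight set") chosen so that
    # the element signals add constructively toward steer_deg.
    phase = 2 * math.pi * spacing_wavelengths * math.sin(math.radians(steer_deg))
    return [cmath.exp(-1j * n * phase) for n in range(num_elements)]

def array_gain(weights, spacing_wavelengths, angle_deg):
    # Magnitude of the coherent sum of element responses toward angle_deg:
    # large at the steered orientation (constructive interference), small
    # where the contributions cancel (destructive interference).
    phase = 2 * math.pi * spacing_wavelengths * math.sin(math.radians(angle_deg))
    return abs(sum(w * cmath.exp(1j * n * phase) for n, w in enumerate(weights)))

w = steering_weights(8, 0.5, 30.0)
# Toward the steered direction the gain equals the element count (8);
# off-axis directions see a much smaller coherent sum.
```

Sweeping `steer_deg` across a set of directions and transmitting with each resulting weight set mirrors the beam sweeping described above.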
In some examples, the beam direction associated with transmissions along a single beam direction may be determined based on a signal that was transmitted in one or more beam directions. For example, a UE115may receive one or more of the signals transmitted by the base station105in different directions and may report to the base station105an indication of the signal that the UE115received with a highest signal quality or an otherwise acceptable signal quality. In some examples, transmissions by a device (e.g., by a base station105or a UE115) may be performed using multiple beam directions, and the device may use a combination of digital precoding or radio frequency beamforming to generate a combined beam for transmission (e.g., from a base station105to a UE115). The UE115may report feedback that indicates precoding weights for one or more beam directions, and the feedback may correspond to a configured number of beams across a system bandwidth or one or more sub-bands. The base station105may transmit a reference signal (e.g., a cell-specific reference signal (CRS), a channel state information reference signal (CSI-RS)), which may be precoded or unprecoded. The UE115may provide feedback for beam selection, which may be a precoding matrix indicator (PMI) or codebook-based feedback (e.g., a multi-panel type codebook, a linear combination type codebook, a port selection type codebook). Although these techniques are described with reference to signals transmitted in one or more directions by a base station105, a UE115may employ similar techniques for transmitting signals multiple times in different directions (e.g., for identifying a beam direction for subsequent transmission or reception by the UE115) or for transmitting a signal in a single direction (e.g., for transmitting data to a receiving device). 
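The report-the-best-beam behavior described above reduces to a simple selection over measured qualities; the per-beam RSRP values below are hypothetical:

```python
def select_beam(beam_quality):
    # Return the index of the beam the UE received with the highest
    # signal quality, which it would then report to the base station.
    return max(beam_quality, key=beam_quality.get)

# Hypothetical RSRP measurements (dBm) for four downlink beam directions:
rsrp = {0: -95.0, 1: -88.5, 2: -101.2, 3: -90.0}
best = select_beam(rsrp)  # beam 1
```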
A receiving device (e.g., a UE115) may try multiple receive configurations (e.g., directional listening) when receiving various signals from the base station105, such as synchronization signals, reference signals, beam selection signals, or other control signals. For example, a receiving device may try multiple receive directions by receiving via different antenna subarrays, by processing received signals according to different antenna subarrays, by receiving according to different receive beamforming weight sets (e.g., different directional listening weight sets) applied to signals received at multiple antenna elements of an antenna array, or by processing received signals according to different receive beamforming weight sets applied to signals received at multiple antenna elements of an antenna array, any of which may be referred to as “listening” according to different receive configurations or receive directions. In some examples, a receiving device may use a single receive configuration to receive along a single beam direction (e.g., when receiving a data signal). The single receive configuration may be aligned in a beam direction determined based on listening according to different receive configuration directions (e.g., a beam direction determined to have a highest signal strength, highest signal-to-noise ratio (SNR), or otherwise acceptable signal quality based on listening according to multiple beam directions). The wireless communications system100may be a packet-based network that operates according to a layered protocol stack. In the user plane, communications at the bearer or Packet Data Convergence Protocol (PDCP) layer may be IP-based. A Radio Link Control (RLC) layer may perform packet segmentation and reassembly to communicate over logical channels. A Medium Access Control (MAC) layer may perform priority handling and multiplexing of logical channels into transport channels. 
The MAC layer may also use error detection techniques, error correction techniques, or both to support retransmissions at the MAC layer to improve link efficiency. In the control plane, the Radio Resource Control (RRC) protocol layer may provide establishment, configuration, and maintenance of an RRC connection between a UE115and a base station105or a core network130supporting radio bearers for user plane data. At the physical layer, transport channels may be mapped to physical channels. The UEs115and the base stations105may support retransmissions of data to increase the likelihood that data is received successfully. Hybrid automatic repeat request (HARQ) feedback is one technique for increasing the likelihood that data is received correctly over a communication link125. HARQ may include a combination of error detection (e.g., using a cyclic redundancy check (CRC)), forward error correction (FEC), and retransmission (e.g., automatic repeat request (ARQ)). HARQ may improve throughput at the MAC layer in poor radio conditions (e.g., low signal-to-noise conditions). In some examples, a device may support same-slot HARQ feedback, where the device may provide HARQ feedback in a specific slot for data received in a previous symbol in the slot. In other cases, the device may provide HARQ feedback in a subsequent slot, or according to some other time interval. Techniques described herein, in addition to or as an alternative to being carried out between UEs115and base stations105, may be implemented via additional or alternative wireless devices, including IAB nodes104, distributed units (DUs)165, centralized units (CUs)160, radio units (RUs)170, and the like. For example, in some implementations, aspects described herein may be implemented in the context of a disaggregated radio access network (RAN) architecture (e.g., open RAN architecture). 
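The HARQ error-detection step described above can be illustrated with a minimal sketch: the transmitter attaches a CRC to a transport block, and the receiver verifies the CRC to decide between ACK and NACK feedback. CRC-32 is used here purely for illustration (NR defines its own CRC polynomials and lengths), and all names are hypothetical.

```python
import zlib

def attach_crc(payload: bytes) -> bytes:
    """Transmitter side: append a 4-byte CRC-32 to the transport block."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def harq_feedback(received: bytes) -> str:
    """Receiver side: verify the CRC and produce ACK or NACK feedback."""
    payload, crc = received[:-4], int.from_bytes(received[-4:], "big")
    return "ACK" if zlib.crc32(payload) == crc else "NACK"

tb = attach_crc(b"transport block")
print(harq_feedback(tb))         # ACK -> transmitter sends the next block
corrupted = b"x" + tb[1:]
print(harq_feedback(corrupted))  # NACK -> transmitter retransmits
```

In a full HARQ scheme the NACK would also trigger soft combining of the retransmission with the previously received soft bits; that FEC stage is omitted here.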
In a disaggregated architecture, the RAN may be split into three areas of functionality corresponding to the CU160, the DU165, and the RU170. The split of functionality between the CU160, DU165, and RU170is flexible and as such gives rise to numerous permutations of different functionalities depending upon which functions (e.g., MAC functions, baseband functions, radio frequency functions, and any combinations thereof) are performed at the CU160, DU165, and RU170. For example, a functional split of the protocol stack may be employed between a DU165and an RU170such that the DU165may support one or more layers of the protocol stack and the RU170may support one or more different layers of the protocol stack. In some wireless communications systems (e.g., the wireless communications system100), infrastructure and spectral resources for NR access may additionally support wireless backhaul link capabilities in supplement to wireline backhaul connections, providing an IAB network architecture. One or more base stations105may include CUs160, DUs165, and RUs170and may be referred to as donor base stations105or IAB donors. One or more DUs165(e.g., and/or RUs170) associated with a donor base station105may be partially controlled by CUs160associated with the donor base station105. The one or more donor base stations105(e.g., IAB donors) may be in communication with one or more additional base stations105(e.g., IAB nodes104) via supported access and backhaul links. IAB nodes104may support mobile terminal (MT) functionality controlled and/or scheduled by DUs165of a coupled IAB donor. In addition, the IAB nodes104may include DUs165that support communication links with additional entities (e.g., IAB nodes104, UEs115, etc.) within the relay chain or configuration of the access network (e.g., downstream). 
In such cases, one or more components of the disaggregated RAN architecture (e.g., one or more IAB nodes104or components of IAB nodes104) may be configured to perform one or more of the prioritization of PRSs, decoding of PRSs, or corresponding signaling described herein. In some examples, the wireless communications system100may include a core network130(e.g., a next generation core network (NGC)), one or more IAB donors, IAB nodes104, and UEs115, where IAB nodes104may be partially controlled by each other and/or the IAB donor. The IAB donor and IAB nodes104may be examples of aspects of base stations105. The IAB donor and one or more IAB nodes104may be configured as (or in communication according to) some relay chain. For instance, an access network (AN) or RAN may refer to communications between access nodes (e.g., IAB donor), IAB nodes104, and one or more UEs115. The IAB donor may facilitate connection between the core network130and the AN (e.g., via a wireline or wireless connection to the core network130). That is, an IAB donor may refer to a RAN node with a wireline or wireless connection to core network130. The IAB donor may include a CU160and at least one DU165(e.g., and RU170), where the CU160may communicate with the core network130over an NG interface (e.g., some backhaul link). The CU160may host layer 3 (L3) (e.g., RRC, service data adaption protocol (SDAP), PDCP, etc.) functionality and signaling. The at least one DU165and/or RU170may host lower layers, such as layer 1 (L1) and layer 2 (L2) (e.g., RLC, MAC, physical (PHY), etc.) functionality and signaling, and may each be at least partially controlled by the CU160. The DU165may support one or multiple different cells. The IAB donor and IAB nodes104may communicate over an F1 interface according to some protocol that defines signaling messages (e.g., F1 AP protocol). 
Additionally, CU160may communicate with the core network over an NG interface (which may be an example of a portion of backhaul link), and may communicate with other CUs160(e.g., a CU160associated with an alternative IAB donor) over an Xn-C interface (which may be an example of a portion of a backhaul link). An IAB node104may refer to a RAN node that provides IAB functionality (e.g., access for UEs115, wireless self-backhauling capabilities, etc.). IAB nodes104may include a DU165and an MT. A DU165may act as a distributed scheduling node towards child nodes associated with the IAB node104, and the MT may act as a scheduled node towards parent nodes associated with the IAB node104. That is, an IAB donor may be referred to as a parent node in communication with one or more child nodes (e.g., an IAB donor may relay transmissions for UEs through one or more other IAB nodes104). Additionally, an IAB node104may also be referred to as a parent node or a child node to other IAB nodes104, depending on the relay chain or configuration of the AN. Therefore, the MT entity of IAB nodes104(e.g., MTs) may provide a Uu interface for a child node to receive signaling from a parent IAB node104, and the DU interface (e.g., DUs165) may provide a Uu interface for a parent node to signal to a child IAB node104or UE115. For example, IAB node104may be referred to as a parent node associated with IAB node, and a child node associated with IAB donor. The IAB donor may include a CU160with a wireline (e.g., optical fiber) or wireless connection to the core network, and may act as parent node to IAB nodes104. For example, the DU165of IAB donor may relay transmissions to UEs115through IAB nodes104, and may directly signal transmissions to a UE115. The CU160of IAB donor may signal communication link establishment via an F1 interface to IAB nodes104, and the IAB nodes104may schedule transmissions (e.g., transmissions to the UEs115relayed from the IAB donor) through the DUs165. 
That is, data may be relayed to and from IAB nodes104via signaling over an NR Uu interface to MT of the IAB node104. Communications with IAB node104may be scheduled by DU165of IAB donor and communications with IAB node104may be scheduled by DU165of IAB node104. In the case of the techniques described herein applied in the context of a disaggregated RAN architecture, one or more components of the disaggregated RAN architecture (e.g., one or more IAB nodes104or components of IAB nodes104) may be configured to perform one or more of the signaling and prioritization techniques for PRSs described herein. The wireless communications system100may be configured to support priority based PRS processing. For example, base stations105may include a communications manager101and UEs115may include a communications manager102that may support operating in accordance with a priority for PRS processing. For instance, a base station105may include a communications manager101, a first UE115may include a communications manager102-a, and a second UE115may include a communications manager102-b. The communications manager101may be examples of aspects of a communications manager as described inFIGS.9through12. The communications manager102-aand the communications manager102-bmay be examples of aspects of a communications manager as described inFIGS.5through8. By way of example, a first UE115(e.g., using the communications manager102-a) may communicate with a base station105(e.g., using the communications manager101) and with a second UE115(e.g., using the communications manager102-b). In some examples, a UE115, such as the first UE115, may decode one or more PRSs based on a priority of received PRSs. In some cases, the UE115may receive the PRSs from one or more anchor nodes, such as an anchor UE115(e.g., the second UE115) or an anchor base station105. 
In some cases, the UE115may determine a priority of the PRSs based on respective scrambling sequences of the PRSs, respective resource mapping patterns of the PRSs, or based on a direct indication of the priorities (e.g., in one or more control channels). The UE115may select PRSs to decode according to priority, such as selecting higher priority PRSs to decode. FIG.2illustrates an example of a wireless communications system200that supports processing PRSs according to priority in accordance with one or more aspects of the present disclosure. In some examples, the wireless communications system200may implement aspects of the wireless communications system100. For example, the wireless communications system200may include UE115-a, UE115-b, UE115-c, and a base station105-awith coverage area110-a, which may be examples of UEs115and a base station105with a coverage area110as described with reference toFIG.1. In some examples, UE115-amay communicate with one or more wireless devices via a communication link. For example, UE115-amay receive data or control signaling via a downlink communication link205. In some examples, UE115-amay receive one or more PRSs210, and may process or decode the PRSs210according to a priority. The wireless communications system200may support both access links and sidelinks for communications between one or more communication devices. An access link may refer to a communication link between a UE115(such as, UE115-a, UE115-band UE115-c) and a base station105-a. A sidelink may refer to any communication link between similar wireless devices (for example, a communication link between UEs115, or a backhaul communication link between base stations105). It is noted that while various examples provided herein are discussed for UE sidelink devices, such sidelink techniques may be used for any type of wireless devices that use sidelink communications. 
For example, a sidelink may support one or more of D2D communications, V2X or V2V communications, message relaying, discovery signaling, beacon signaling, or other signals transmitted over-the-air from one UE115to one or more other UEs115. In some examples, downlink communication links205may be examples of access links, sidelinks, or both. In some examples, a base station105may communicate with one or more UEs115in a coverage area110. For example, base station105-amay communicate with UE115-athrough UE115-cin coverage area110-a. Additionally or alternatively, UE115-a, UE115-b, and UE115-cmay communicate with each other (or with another group of UEs115) over sidelink communications, such as according to a P2P or D2D protocol. In some examples, the UE115-amay transmit sidelink transmissions to the UE115-bor the UE115-c. In some examples, the UE115-aor the UE115-bmay monitor resource pools for the sidelink communications or indications of the sidelink communications (for example, resource reservations, control channel transmissions, among other examples) from other UEs115in a group. Additionally or alternatively, the UEs115may have data to transmit to (or receive from) a UE115in the UE group and may use the sidelink communications to transmit the data transmission. That is, base station105-amay transmit data for UE115-ato relay to other UEs115, such as UE115-bor UE115-c, via a sidelink communication link. In some examples, a base station105or a UE115may be considered a wireless device or may act as an anchor node for one or more wireless devices. A wireless device may transmit a PRS210to another wireless device, such as a UE115. For example, base station105-amay transmit PRS210-ato UE115-aover downlink communication link205-a, UE115-bmay transmit PRS210-bto UE115-aover downlink communication link205-b, and UE115-cmay transmit PRS210-cto UE115-aover downlink communication link205-c. 
UE115-amay use the PRSs210to determine sidelink positioning of base station105-a, UE115-b, UE115-c, or a combination thereof. In some examples, when UE115-areceives a PRS210from a base station105, the PRS210may be referred to as a Uu-PRS. In some other examples, when UE115-areceives a PRS210from another UE115, such as a sidelink anchor node or a non-anchor UE115, the PRS210may be referred to as a sidelink-PRS (SL-PRS). In some cases, base station105-amay be an anchor node for UE115-a, and UE115-bmay be a sidelink anchor node for UE115-a. The sidelink anchor node may be another UE115with relatively accurate position information. In some deployment scenarios, one or more UEs115may experience relatively poor channel conditions, such as an out-of-coverage (OoC) condition, or may otherwise lack a line-of-sight (LOS) or other communication link with a network entity, and such UEs115may be unable to receive PRSs210that may be transmitted by a network entity. In some cases, for UEs115experiencing the relatively poor channel conditions (e.g., fewer LOS links with a base station105), communicating with a sidelink anchor node may add additional LOS links. For example, indoor factory (InF) channels (e.g., InF-dense high (DH)/downlink) may have reduced base station LOS links. Thus, a UE115using the InF channels may communicate with an anchor node (e.g., a UE115with sufficient channel conditions) to improve overall positioning accuracy, such as by additional measurements. Using an anchor node for communications may improve power efficiency for P2P positioning and ranging for public safety and other uses. Further, the anchor node may provide for a group of wireless devices in an out-of-coverage scenario to determine relative positions of each other. In some examples, sidelink sensing (e.g., positioning for a device-free object) may be performed in a joint framework with sidelink positioning. 
For sidelink positioning or sidelink-assisted positioning, a receiving UE115may observe PRSs210from one or more sidelink and base station105anchor nodes. For example, UE115-amay receive PRS210-afrom base station105-a, which may be a Uu anchor node, and PRS210-bfrom UE115-b, which may be a sidelink or UE115anchor node. UE115-amay also receive PRS210-cfrom UE115-c, which may or may not be an anchor node. In some cases, a sidelink anchor may be a sidelink node that obtains position and timing information from a global navigation satellite system (GNSS) or another base station105or sidelink anchor node. In some implementations, one or more aspects of sidelink communication may be associated with a distributed coordination of communications between UEs115, such as when UEs115are in an OoC condition or otherwise lack a communications link with a network entity, or when UEs115perform sidelink communications using resources212that are allocated to or are otherwise available for selection or reservation by the UEs115(e.g., without the sidelink communications themselves being scheduled by a network entity). For example, a pool of resources212, such as communication resources in the frequency domain, resources in the time domain, or a combination thereof, may be allocated by a network entity (e.g., during a communication link establishment or other configuration with the network), or allocated in accordance with a communications protocol or standard, and UEs115may contend for use of such resources212in accordance with various sensing and exclusion techniques. An example of such an allocation and reservation of resources212for sidelink communications may be referred to as a sidelink resource allocation Mode2. Under such circumstances, reference signal transmissions, among other transmissions, may be aperiodic or semi-persistently scheduled (e.g., sent in response to a request). In some cases, the resources212of PRSs210may collide (e.g., overlap in time and frequency). 
For example, PRS210-bmay overlap in time, frequency, or both with PRS210-c. To reduce or eliminate the resource collision, at220, UE115-amay select a subset of PRSs210to process. The selection may be based on a rule to increase (e.g., maximize) the positioning and sensing performance of UE115-a. In some examples, an anchor node may transmit a priority indication215to a sidelink device, the priority indication215including respective priorities for one or more PRSs210. In some cases, UE115-bmay transmit a priority indication215to UE115-a. The priority indication215may include a priority parameter for a PRS resource, a PRS resource set, or an anchor node. In some cases, the priority of a PRS210may be based on a reliability of the PRS210. For example, an anchor node that obtains position and timing information directly from a GNSS may have a higher priority than another anchor node that obtains position and timing information from a base station105(e.g., via NR Uu positioning), from other anchor nodes, or both. In some other examples, a base station anchor node may have a higher priority than a sidelink anchor node, which may be referred to as cross-Uu-sidelink PRS priority. In some cases, the priority of a PRS210may be based on defined rules, such as according to Table 1.

TABLE 1

Priority      Priority rule 1                            Priority rule 2
P0 (highest)  Anchor synced with GNSS                    Anchor synced with base station anchors
P1            Anchor synced with base station anchors    Anchor synced with GNSS
P2            Anchor synced with both base station       Anchor synced with both base station
              and sidelink anchors                       and sidelink anchors
P3            Anchor synced with sidelink anchors        Anchor synced with sidelink anchors

UE115-amay select a rule among the defined (e.g., predetermined) set of rules. For example, UE115-amay select priority rule 1 from Table 1 or priority rule 2 from Table 1 to follow. 
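The two rule columns of Table 1 can be expressed as lookup tables keyed by an anchor's synchronization source, with lower values denoting higher priority (P0 highest). The string labels below are illustrative names, not identifiers from the disclosure.

```python
# Table 1 expressed as sync-source -> priority level (0 = P0, the highest).
PRIORITY_RULE_1 = {
    "gnss": 0,
    "base_station": 1,
    "base_station_and_sidelink": 2,
    "sidelink": 3,
}
PRIORITY_RULE_2 = {
    "base_station": 0,
    "gnss": 1,
    "base_station_and_sidelink": 2,
    "sidelink": 3,
}

def prs_priority(sync_source: str, rule: dict) -> int:
    """Return the priority level for an anchor's sync source (lower = higher)."""
    return rule[sync_source]

# A UE configured to follow rule 1 versus rule 2:
print(prs_priority("gnss", PRIORITY_RULE_1))  # 0 (P0, highest)
print(prs_priority("gnss", PRIORITY_RULE_2))  # 1 (P1)
```

Representing each rule as its own table makes switching rules (e.g., when the network indicates a different rule via control signaling) a matter of passing a different dict.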
Additionally or alternatively, the priority may be a function of one or more parameters of the anchor node, such as positioning reliability (hardware capability, processing algorithm, etc.), PRS transmission parameters (error vector magnitude (EVM), transmit power, etc.), number and signal quality (e.g., reference signal received power (RSRP)) of observed anchors, speed and direction (e.g., for a mobile anchor node), type of the node (e.g., gNB, eNB, road side unit (RSU), vehicle, or handset), type of PRS210(e.g., periodic, semi-persistent, or aperiodic), or band for the PRS210transmission (e.g., licensed or unlicensed band; a licensed band may be prioritized due to less uncertainty of channel access). In some cases, the network may indicate, via control signaling, the rule for UE115-ato use. In some examples, since the priority of one or more PRSs210may change over time, the anchor node may indicate the priority each time there is a change. For example, an anchor node, such as UE115-b, may refine its positioning and timing information, or a mobile anchor node may change position. UE115-bmay send priority indication215-aeach time the position changes or the positioning and timing information is refined. UE115-bmay include priority indication215-ain a dynamic indication, such as in an explicit indication. Similarly, base station105-amay transmit priority indication215-bfor PRS210-a, UE115-cmay transmit priority indication215-cindicating a priority for PRS210-c, or both. In some cases, UE115-b, UE115-c, base station105-a, or a combination thereof may transmit respective control channels for a PRS210(e.g., PRS210-a, PRS210-b, PRS210-c, or a combination thereof), which may be an aperiodic or semi-persistent PRS210. The control channels may include a priority indication field in SCI (e.g., SCI format 1-A, which includes a 3 bit field for the priority indication). 
When SCI schedules a sidelink shared channel (e.g., a physical sidelink shared channel (PSSCH)), the priority field may indicate a quality of service (QoS) value for the scheduled data. In some examples, when the SCI triggers transmission of a PRS210, the priority field may indicate the priority of the PRS210, as well as a priority for resource reservation (e.g., rather than a QoS value when the SCI schedules a sidelink shared channel). Additionally or alternatively, a new field may be introduced in the SCI for the priority indication215. In some other cases, UE115-bmay send the PRS210according to a scrambling sequence, a resource pattern, or the like, which may implicitly indicate priority of the PRS210to UE115-a, which is described in further detail with respect toFIG.3. Base station105-a, UE115-b, UE115-c, or a combination thereof may include a priority indication215in a semi-static indication, such as in an explicit indication. For example, UE115-bmay transmit priority indication215-avia RRC signaling or a MAC-CE, such as for a periodic or semi-persistent PRS210. A coordinating hub or node (an anchor node, a base station105, location management function (LMF), etc.) may assign a priority value to a configured PRS210at an anchor node level, such as at UE115-b, positioning-frequency-layer, resource-set, or resource. The order in which UE115-amay process the PRSs210, or the priority of the PRSs210, may be implicit in the order of the list of PRSs210in the RRC information element (IE) or MAC-CE. In some examples, the anchor node, such as UE115-b, may broadcast a signal including a priority indication215(e.g., a position system information block (SIB), sidelink-synchronization signal block (S-SSB) with a sidelink-primary synchronization signal (S-PSS), sidelink-secondary synchronization signal (S-SSS), and physical sidelink broadcast channel (PSBCH)). The anchor node may also be used as a synchronization reference UE115for sidelink communications. 
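The 3-bit SCI priority field mentioned above can be illustrated with plain bit manipulation. The field's position within the SCI payload word is an arbitrary assumption here for illustration, not the layout defined for SCI format 1-A.

```python
PRIORITY_BITS = 3
PRIORITY_SHIFT = 0  # assumed (illustrative) position of the field in the SCI word

def set_priority(sci_word: int, priority: int) -> int:
    """Write a 3-bit priority value (0..7) into an SCI payload word."""
    if not 0 <= priority < (1 << PRIORITY_BITS):
        raise ValueError("priority must fit in 3 bits")
    mask = ((1 << PRIORITY_BITS) - 1) << PRIORITY_SHIFT
    return (sci_word & ~mask) | (priority << PRIORITY_SHIFT)

def get_priority(sci_word: int) -> int:
    """Read the 3-bit priority value back out of the SCI word."""
    return (sci_word >> PRIORITY_SHIFT) & ((1 << PRIORITY_BITS) - 1)

word = set_priority(0b10101000, 5)
print(get_priority(word))  # 5
```

Whether the 3 bits carry a QoS value (PSSCH scheduling) or a PRS priority (PRS triggering) is a matter of interpretation at the receiver; the packing itself is identical.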
The priority indication215may be implicitly included according to synchronization signal sequence (e.g., related to synchronization priority) or explicitly in a PSBCH payload. In some examples, a list of priority values may be semi-statically configured, and UE115-bmay select the priorities dynamically for priority indication215-a. In some cases, when a UE115observes PRSs210from multiple anchor nodes, such as at220, the UE115may select a subset of the PRSs210for positioning and sensing. For example, UE115-amay receive PRS210-athrough PRS210-cand may select one or more of PRS210-athrough PRS210-cto process. By processing a portion of the PRSs210(e.g., high priority PRSs210), the positioning accuracy may be improved. In some cases, such as for a unicast PRS210, the UE115may also request or report a priority value to the anchor node, or may override the priority value indicated by the anchor node. That is, UE115-amay request, report, or override the priority indication215from UE115-b. The priority may be based on one or more parameters of the anchor node, parameters of the receiving node (geometric dilution of precision (GDOP), LOS condition, RSRP etc.), or both. When multiple PRS resources212overlap in time or frequency, the UE115may prioritize the reception of PRS resources212with relatively higher priorities. For mmW operation (e.g., frequencies in the FR2 range), the UE115may determine a receive beam for a PRS210based on the priority indication215. FIG.3illustrates an example of a wireless communications system300that supports processing PRSs according to priority in accordance with one or more aspects of the present disclosure. In some examples, the wireless communications system300may implement aspects of the wireless communications system100and the wireless communications system200. 
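The prioritized reception described above — when multiple PRS resources overlap in time or frequency, keep the ones with relatively higher priorities — can be sketched as a greedy selection. The (start, length) resource model, the dict layout, and all names are illustrative assumptions, not structures from the disclosure.

```python
def overlaps(a, b):
    """True if two (start, length) intervals intersect."""
    return a[0] < b[0] + b[1] and b[0] < a[0] + a[1]

def select_prs(prs_list):
    """Keep a non-colliding subset, preferring lower priority values (P0 first).

    Each entry: dict with 'id', 'priority', and 'time'/'freq' as (start, length).
    """
    chosen = []
    for prs in sorted(prs_list, key=lambda p: p["priority"]):
        collision = any(
            overlaps(prs["time"], c["time"]) and overlaps(prs["freq"], c["freq"])
            for c in chosen
        )
        if not collision:
            chosen.append(prs)
    return [p["id"] for p in chosen]

# PRS-b and PRS-c collide in time and frequency; the higher-priority PRS-b survives.
prs = [
    {"id": "PRS-a", "priority": 0, "time": (0, 4), "freq": (0, 24)},
    {"id": "PRS-b", "priority": 1, "time": (4, 4), "freq": (0, 24)},
    {"id": "PRS-c", "priority": 3, "time": (6, 4), "freq": (12, 24)},
]
print(select_prs(prs))  # ['PRS-a', 'PRS-b']
```

A receiver-side refinement could fold in local parameters (GDOP, LOS condition, RSRP) by adjusting the sort key rather than using the indicated priority alone.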
For example, the wireless communications system300may include UE115-d, UE115-e, UE115-f, and a base station105-bwith coverage area110-bwhich may be examples of UEs115and a base station105with a coverage area110as described with reference toFIGS.1and2. In some examples, UE115-dmay communicate with one or more wireless devices via a communication link. For example, UE115-dmay receive data or control signaling via a downlink communication link305. In some examples, UE115-dmay receive one or more PRSs310, and may process the PRSs310according to a priority. In some examples, base station105-bmay communicate with UE115-dthrough UE115-fin coverage area110-b. Additionally or alternatively, UE115-d, UE115-e, and UE115-fmay communicate with each other (or with another group of UEs115) over sidelink communications, as described with reference toFIG.2. In some examples, a base station105, a UE115, or both may act as an anchor node for one or more wireless devices. A wireless device may transmit a PRS310to another wireless device, such as a UE115. For example, base station105-bmay transmit PRS310-ato UE115-dover downlink communication link305-a, UE115-emay transmit PRS310-bto UE115-dover sidelink communication link305-b, and UE115-fmay transmit PRS310-cto UE115-dover sidelink communication link305-c. The resources312of the PRSs310may overlap. In some cases, one or more anchor nodes may send PRSs310according to a priority configuration315(e.g., scrambling sequence, a resource pattern, or the like) to implicitly indicate a priority of the PRSs310to a UE115. For example, UE115-e, UE115-f, base station105-b, or a combination thereof may be acting as anchor nodes for UE115-d. UE115-e, UE115-f, base station105-b, or a combination may send the PRSs310according to a scrambling sequence, a resource pattern, or the like, to UE115-d. 
Base station105-bmay transmit PRS310-aaccording to priority configuration315-a, UE115-emay transmit PRS310-baccording to priority configuration315-b, UE115-fmay transmit PRS310-caccording to priority configuration315-c, or a combination thereof. Each priority configuration315may include a respective scrambling sequence of a PRS310, a resource pattern (comb type, comb offset, etc.) of the PRS310, or the like. In some examples, demodulation reference signal (DMRS) patterns may be configured or otherwise defined for use by a transmitter (e.g., 2, 3, 4, 9, and 12 symbol DMRS patterns). The transmitter may select a DMRS pattern, and signal the DMRS pattern in SCI (e.g., if more than one DMRS pattern is configured at a receiving device) according to channel conditions. In some examples, for a threshold number of symbols in a sidelink shared channel (e.g., a 12 symbol PSSCH), there may be a dedicated slot structure, which may define a resource configuration, for PRSs310. A slot may include a number of symbols, which may be time resources for transmission. Additionally or alternatively to the existing slot structures, the transmitting device may use a new slot structure. The slot, which may be a time period for a transmission, may include a PSCCH (e.g., carrying SCI format 1 (SCI-1)), SCI format 2 (SCI-2), a DMRS, and a PRS310(without a sidelink shared channel). Each symbol of the PRS may have a comb-structure in frequency, such that the PRS310may use one or more frequencies. In some cases, a PRS310may have a structure with a staggered comb pattern (e.g., skipping over one or more frequencies), which is described in further detail with respect toFIGS.4A and4B. A candidate slot and PRS pattern may be configured or otherwise defined and indicated by a control signal (e.g., SCI-2). In some cases, PRS slots may have different transmission power and timing from other sidelink transmissions. 
PRS power and timing may also be different within a slot compared to a PSCCH, SCI-2, DMRS, and the like. In some examples, an anchor node may include a gap symbol within a slot before and after a PRS burst. In some examples, at320, UE115-dmay select one or more PRSs310to process based on the priority indications of each PRS310. For example, UE115-dmay select one or more of PRS310-a, PRS310-b, and PRS310-cto process to avoid resource312collision. FIGS.4A and4Billustrate examples of resource diagrams400that support processing PRSs according to priority in accordance with one or more aspects of the present disclosure. The resource diagrams400-aand400-bmay implement or be implemented to realize aspects of the wireless communications system100, the wireless communications system200, or the wireless communications system300. For example, one or both of the resource diagrams400-aor400-bmay be an example of a slot format that is dedicated for SL-PRS transmission, and a UE115may transmit an SL-PRS430, or receive an SL-PRS430, in accordance with the resource diagram400-aor the resource diagram400-b. In some implementations, such UEs115may be examples of UEs115as described with reference toFIG.1throughFIG.3. The resource diagram400-amay represent a 12-symbol pattern and the resource diagram400-bmay represent a 9-symbol pattern. In some implementations, the resource diagrams400-aand400-bmay be introduced for (such as available for) sidelink communications in addition to other slot formats, but the resource diagrams400-aand400-bmay be dedicated for SL-PRS transmissions. The resource diagrams400-aand400-bmay include a first symbol portion405and a second symbol portion410, one or more DMRSs415, a PSCCH420(carrying SCI-1), SCI-2425, an SL-PRS burst445including the SL-PRS430, and one or more gap symbols440. 
In some implementations, a first symbol portion405may be a repetition of a PSCCH420, and a second symbol portion410may be a repetition of a DMRS415(for example, in a slot format associated with 14 total symbol durations). As shown inFIG.4AandFIG.4B, respectively, the resource diagram400-amay include an SL-PRS burst445-aspanning eight symbols and the resource diagram400-bmay include an SL-PRS burst445-bspanning four symbols. In some implementations, the resource diagrams400-aand400-bmay lack or be absent of a PSSCH. For example, instead of including a PSSCH portion, the resource diagrams400-aand400-bmay include a portion dedicated for an SL-PRS burst445. The SL-PRS430, which a UE115may transmit during an SL-PRS burst445(a set of consecutive symbols), may feature or otherwise be associated with a transmission pattern, such as a staggered comb pattern, where the SL-PRS430may be distributed on different subsets of frequency resources in successive symbol durations. In some implementations, the SL-PRS430may feature or be associated with a similar structure or pattern as a Uu-PRS. In some implementations, one or both of a candidate slot and the SL-PRS pattern may be configured (for example, pre-configured) at a UE115transmitting an SL-PRS430, or a UE115receiving an SL-PRS430may receive control signaling (such as SCI-2) indicating one or both of the candidate slot and the SL-PRS pattern. In some implementations, a UE115may use a different transmit power or different timing for SL-PRS slots (slots transmitted in accordance with the resource diagram400-aor the resource diagram400-b) as compared to other sidelink transmissions. For example, a UE115may use a different transmit power for signaling sent during an SL-PRS slot or an SL-PRS slot may span a different amount of time (such as a different quantity of symbols) than other slots, such as slots including a PSSCH portion. 
In some examples in which the UE115uses a different transmit power for signaling sent during an SL-PRS slot, the UE115may use a different transmit power control procedure (such as a transmit power control procedure that is dedicated for SL-PRS slots) than another transmit power control procedure used for other signaling (such as for a sidelink slot including a PSSCH). In other words, the UE115may use a first transmit power control procedure that is associated with a first target receive power and a first pathloss compensation factor for SL-PRS slots and the UE115may use a second transmit power control procedure that is associated with a second target receive power and a second pathloss compensation factor for slots including a PSSCH. In some examples in which the UE115uses different timing for SL-PRS slots than for slots including a PSSCH, the UE115may use different timing synchronization (associated with a synchronization source) for the different types of slots. For example, for a PSSCH transmission, the UE115may use a transmission timing that is relative to timing obtained from a network entity or another sidelink node (via a synchronization signal, such as an SSB), but, for an SL-PRS transmission, the UE115may use a transmission timing that is relative to an absolute time obtained from a GNSS. Additionally or alternatively, the first UE115may transmit the SL-PRS430using a different transmit power or different timing than other signaling sent within that SL-PRS slot. For example, a UE115may transmit the SL-PRS430using a different transmit power or different timing (for example, may define symbol durations differently for an SL-PRS burst445than other symbols within the SL-PRS slot) than the UE115uses for the first symbol portion405, the first symbol portion410, the one or more DMRSs415, the PSCCH420, the SCI-2425, or the PSFCH435, or any combination thereof. 
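The two transmit power control procedures described above, each associated with its own target receive power and pathloss compensation factor, can be sketched with a simplified open-loop rule. The parameter values, dictionary names, and the 23 dBm cap below are illustrative assumptions, not values taken from the disclosure.

```python
def sl_transmit_power_dbm(p0_dbm, alpha, pathloss_db, p_max_dbm=23.0):
    """Simplified open-loop power control: target receive power p0 plus a
    fraction alpha of the estimated pathloss, capped at the UE maximum power."""
    return min(p_max_dbm, p0_dbm + alpha * pathloss_db)

# Hypothetical per-slot-type parameters: a first procedure for SL-PRS slots and
# a second procedure for sidelink slots that include a PSSCH.
PRS_PROCEDURE = {"p0_dbm": -90.0, "alpha": 1.0}    # full pathloss compensation
PSSCH_PROCEDURE = {"p0_dbm": -85.0, "alpha": 0.8}  # fractional compensation

p_prs = sl_transmit_power_dbm(pathloss_db=100.0, **PRS_PROCEDURE)      # 10.0 dBm
p_pssch = sl_transmit_power_dbm(pathloss_db=100.0, **PSSCH_PROCEDURE)  # -5.0 dBm
```

The sketch shows why the same UE, at the same estimated pathloss, may transmit an SL-PRS slot at a different power than a PSSCH slot: the two slot types simply evaluate different (p0, alpha) pairs.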
In such examples, the UE115may use a slot format (such as the resource diagram400-b) that provides one or more gap symbols440during which the UE115may adjust one or both of a transmit power or a transmit timing. For example, the resource diagram400-bmay include a gap symbol440before an SL-PRS burst445and a gap symbol440after the SL-PRS burst445-band the UE115may adjust a transmit power of the UE115during those gap symbols440. To support the resource diagrams400-aand400-band efficient SL-PRS measurement and reporting, the SCI-1 (sent using the PSCCH420) or the SCI-2425, or both, may include information specific to SL-PRS transmission, such as one or more fields for positioning assistance data, in addition to other SCI fields. For example, the SCI-1 or the SCI-2425associated with the resource diagrams400-aand400-bmay have different formats that are dedicated or specific for SL-PRS slots. For example, a UE115may transmit, via one or both of SCI-1 or SCI-2425, an indication of its position such that SCI-1 or SCI-2425includes the position of the transmitter (which may be an example of a sidelink anchor node). In some implementations, instead of a rough position (such as a zone identifier), a UE115transmitting an SL-PRS may include a relatively more accurate geographic position (such as a position determined or inferred from GNSS signaling) in SCI. To reduce overhead, the UE115may signal the relatively more accurate position at the beginning (at transmission of a temporally first or initial SL-PRS430) and may signal time-variation (relative to the initially reported position) thereafter. Additionally or alternatively, a UE115may signal a transmission time stamp (such as an accurate transmission time stamp) via SCI for range or pseudo-range measurement. Additionally or alternatively, a UE115may signal a pattern of the SL-PRS430(if more than one pattern is configured, such as via previous SCI, MAC-CE, or RRC signaling) via SCI.
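The overhead-reduction scheme described above (a full position reported with the initial SL-PRS, then only time-variation relative to that initially reported position) can be sketched as a simple delta encoding. The function names and the (x, y) coordinate representation are hypothetical; the disclosure does not specify an SCI field layout.

```python
def encode_position_reports(positions):
    """Encode a sequence of (x, y) positions as one full report followed by
    deltas relative to the initially reported position, mirroring the
    'full position first, time-variation thereafter' scheme."""
    if not positions:
        return []
    x0, y0 = positions[0]
    reports = [("full", x0, y0)]
    reports += [("delta", x - x0, y - y0) for x, y in positions[1:]]
    return reports

def decode_position_reports(reports):
    """Receiver side: recover absolute positions from the full report plus deltas."""
    _, x0, y0 = reports[0]
    out = [(x0, y0)]
    out += [(x0 + dx, y0 + dy) for _, dx, dy in reports[1:]]
    return out
```

Because the deltas for a slowly moving UE are small, they could be carried in fewer SCI bits than repeating the full GNSS-derived position each time, which is the stated motivation for the scheme.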
In implementations in which a UE115signals the pattern of the SL-PRS430via SCI, the UE115may dynamically indicate a scrambling sequence associated with an SL-PRS pattern. Additionally or alternatively, a UE115may indicate changes in SL-PRS patterns in accordance with a configured (for example, pre-configured) pattern for resource reuse (for example, such that SL-PRS430uses a resource that also may be allocated to another, remote or relatively distant UE115) or in accordance with interference randomization. In some implementations, a UE115also may indicate a pattern of a responding or responsive SL-PRS430(for RTT-based sidelink positioning) or a transmit power of the SL-PRS430(for signal strength-based, such as RSRP-based, positioning or sensing). In some implementations, some of the positioning data associated with a UE115transmitting an SL-PRS may be configured (for example, preconfigured) at one or more receiving UEs115and, in such implementations, the transmitting UE115may refrain from dynamically indicating such positioning information. In some cases, one or more anchor nodes may send one or more SL-PRSs430according to a resource pattern to implicitly indicate a priority of the SL-PRSs430to a UE. For example, SL-PRS burst445-amay indicate a first priority, while SL-PRS burst445-bmay indicate a second priority different from the first. A priority, or priority parameter, may be associated with an SL-PRS resource450or an SL-PRS resource set (e.g., resources for SL-PRS burst445-a).

FIG.5illustrates an example of a process flow500that supports processing PRSs according to priority in accordance with one or more aspects of the present disclosure. In some examples, process flow500may implement aspects of wireless communications system100, wireless communications system200, wireless communications system300, and resource diagrams400.
The process flow500may illustrate an example of a UE115-gand an anchor node, such as UE115-h, which may be examples of UEs115as described with reference toFIG.1. In some examples, UE115-gmay process one or more PRSs from a set of PRSs based on respective priorities. Alternative examples of the following may be implemented, where some processes are performed in a different order than described or are not performed. In some cases, processes may include additional features not mentioned below, or further processes may be added. At505, UE115-h, which may be an anchor node, may transmit one or more PRSs to UE115-g. UE115-gmay receive PRSs from multiple anchor nodes (e.g., including UE115-h) over a set of resources. The resources may be time and frequency resources. In some cases, UE115-g, UE115-h, or both may be sidelink devices. In some examples, at510, UE115-gmay transmit a request for one or more priorities of the PRSs. The priorities may be respective priorities of each PRS (e.g., each PRS may have a priority). At515, UE115-hmay detect a change in position, which may cause UE115-hto adjust a priority of one or more PRSs. At520, UE115-gmay receive an indication of the respective priorities for one or more PRSs. In some cases, the indication may be dynamic, semi-static, or both. The indication may be explicit, such as included in one or more control channels with the PRSs. The control channels may include SCIs with a priority field (e.g., a new field or reusing an existing field) that indicates a priority of a PRS for each control channel. In some cases, UE115-gmay receive at least one control channel with a list of the one or more PRSs, the list ordered according to the respective priorities. In some examples, the indication may be implicit. For example, UE115-hmay transmit the PRSs by scrambling them according to respective scrambling sequences. The respective scrambling sequences may indicate the respective priorities for the one or more PRSs. 
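The implicit indication at520, in which the scrambling sequence applied to a PRS itself conveys its priority, can be sketched as follows. The seed construction and the use of Python's generic pseudo-random generator are illustrative assumptions; the actual 3GPP scrambling (Gold) sequence generation is not reproduced here.

```python
import random

def scrambling_sequence(priority, length, base_seed=0x5CAF):
    """Derive a pseudo-random bit sequence whose seed embeds the PRS priority,
    so a receiver that re-derives candidate sequences can infer the priority
    implicitly (illustrative only, not the 3GPP Gold sequence)."""
    rng = random.Random(base_seed ^ priority)
    return [rng.randint(0, 1) for _ in range(length)]

def infer_priority(rx_sequence, candidate_priorities, base_seed=0x5CAF):
    """Receiver side: match the observed scrambling against each candidate
    priority's sequence; the matching candidate reveals the priority."""
    for p in candidate_priorities:
        if scrambling_sequence(p, len(rx_sequence), base_seed) == rx_sequence:
            return p
    return None
```

A receiving UE configured with the candidate priority set can thus recover the priority without any explicit SCI field, at the cost of testing each candidate sequence during detection.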
Similarly, UE115-gmay receive the PRSs according to respective resource mapping patterns, which may indicate the respective priorities for the one or more PRSs. In some cases, UE115-gmay receive one or more broadcast signals from anchor nodes, such as UE115-h, indicating the respective priorities for the PRSs. For example, a payload or a sequence of the one or more broadcast signals may indicate the respective priorities. UE115-gmay receive a configuration message with a list of priorities, where the respective priorities for the PRSs are based on the list of priorities. UE115-gmay receive one or more priority parameters that indicate the respective priorities for the PRSs based on one or more resources, an anchor node of the one or more anchor nodes, a reliability of a positioning reference signal of the one or more positioning reference signals, or a combination thereof. At525, UE115-gmay determine to override the respective priority for the PRSs based on one or more parameters of UE115-g. At530, UE115-gmay select one or more PRSs to decode based on the priority indication. For example, UE115-gmay select PRSs with relatively higher priority to process. UE115-gmay select the PRSs based on a priority rule for obtaining the positioning information, a capability of UE115-g, a priority of the selected PRSs being greater than others (e.g., other PRSs with overlapping resources), or a combination thereof. The priority rules may be configured or otherwise defined at UE115-g, UE115-h, or both. UE115-gmay decode at least one PRS to obtain positioning information of UE115-g(e.g., relative to UE115-h). At535, UE115-gmay transmit a report including a priority of the selected PRSs.
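The selection at530, keeping the higher-priority PRS when resources overlap and respecting the UE's processing capability, can be sketched as a small resolution routine. The dictionary fields, the time-frequency tuples, and the convention that a lower number means higher priority are assumptions made for illustration.

```python
def select_prs(prs_list, max_processed):
    """Select PRSs to process: among PRSs whose resources overlap, keep the
    highest-priority one (lower value = higher priority here), then trim the
    result to the UE's processing capability."""
    survivors = []
    for prs in sorted(prs_list, key=lambda p: p["priority"]):
        # Admit a PRS only if it collides with no already-selected PRS.
        if all(prs["resources"].isdisjoint(s["resources"]) for s in survivors):
            survivors.append(prs)
    return survivors[:max_processed]

# Resources as sets of (symbol, subchannel) tuples; PRS b collides with PRS a.
prs_a = {"id": "a", "priority": 1, "resources": {(0, 5), (1, 5)}}
prs_b = {"id": "b", "priority": 3, "resources": {(1, 5)}}
prs_c = {"id": "c", "priority": 2, "resources": {(7, 2)}}
chosen = select_prs([prs_a, prs_b, prs_c], max_processed=2)
```

Here the collision between a and b is resolved in favor of the higher-priority a, c survives because its resources are disjoint, and the capability cap would drop any further survivors.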
In some cases, the respective priorities may be based on a type of an anchor node of the one or more anchor nodes, positioning reliability information of the anchor node, transmission parameters for the anchor node, a signal quality parameter for the anchor node, a mobility parameter of the anchor node (e.g., speed and direction), a type of a positioning reference signal, a band type for the positioning reference signal, or a combination thereof. The positioning reliability information may include a hardware capability, a processing algorithm, or the like. The type of the PRS may be periodic, semi-persistent, or aperiodic. The band type of the PRS may be licensed or unlicensed (e.g., due to uncertainty of channel access).

FIG.6shows a block diagram600of a device605that supports processing PRSs according to priority in accordance with one or more aspects of the present disclosure. The device605may be an example of aspects of a UE115as described herein. The device605may include a receiver610, a transmitter615, and a communications manager620. The device605may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver610may provide a means for receiving information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to processing PRSs according to priority). Information may be passed on to other components of the device605. The receiver610may utilize a single antenna or a set of multiple antennas. The transmitter615may provide a means for transmitting signals generated by other components of the device605.
For example, the transmitter615may transmit information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to processing PRSs according to priority). In some examples, the transmitter615may be co-located with a receiver610in a transceiver module. The transmitter615may utilize a single antenna or a set of multiple antennas. The communications manager620, the receiver610, the transmitter615, or various combinations thereof or various components thereof may be examples of means for performing various aspects of processing PRSs according to priority as described herein. For example, the communications manager620, the receiver610, the transmitter615, or various combinations or components thereof may support a method for performing one or more of the functions described herein. In some examples, the communications manager620, the receiver610, the transmitter615, or various combinations or components thereof may be implemented in hardware (e.g., in communications management circuitry). The hardware may include a processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic, discrete hardware components, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure. In some examples, a processor and memory coupled with the processor may be configured to perform one or more of the functions described herein (e.g., by executing, by the processor, instructions stored in the memory). 
Additionally or alternatively, in some examples, the communications manager620, the receiver610, the transmitter615, or various combinations or components thereof may be implemented in code (e.g., as communications management software or firmware) executed by a processor. If implemented in code executed by a processor, the functions of the communications manager620, the receiver610, the transmitter615, or various combinations or components thereof may be performed by a general-purpose processor, a DSP, a central processing unit (CPU), an ASIC, an FPGA, or any combination of these or other programmable logic devices (e.g., configured as or otherwise supporting a means for performing the functions described in the present disclosure). In some examples, the communications manager620may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the receiver610, the transmitter615, or both. For example, the communications manager620may receive information from the receiver610, send information to the transmitter615, or be integrated in combination with the receiver610, the transmitter615, or both to receive information, transmit information, or perform various other operations as described herein. The communications manager620may support wireless communication at a UE in accordance with examples as disclosed herein. For example, the communications manager620may be configured as or otherwise support a means for receiving, from one or more anchor nodes, a set of PRSs over a set of resources. The communications manager620may be configured as or otherwise support a means for receiving one or more indications that indicate a respective priority for each of one or more PRSs of the set of PRSs. 
The communications manager620may be configured as or otherwise support a means for decoding at least one PRS of the set of PRSs to obtain positioning information associated with the UE, the at least one PRS based on the respective priorities for the one or more PRSs. By including or configuring the communications manager620in accordance with examples as described herein, the device605(e.g., a processor controlling or otherwise coupled to the receiver610, the transmitter615, the communications manager620, or a combination thereof) may support techniques for a UE to process PRSs according to priority, which may reduce processing performed by the UE, thereby reducing power consumption, which may enable more efficient utilization of resources including network resources.

FIG.7shows a block diagram700of a device705that supports processing PRSs according to priority in accordance with one or more aspects of the present disclosure. The device705may be an example of aspects of a device605or a UE115as described herein. The device705may include a receiver710, a transmitter715, and a communications manager720. The device705may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver710may provide a means for receiving information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to processing PRSs according to priority). Information may be passed on to other components of the device705. The receiver710may utilize a single antenna or a set of multiple antennas. The transmitter715may provide a means for transmitting signals generated by other components of the device705.
For example, the transmitter715may transmit information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to processing PRSs according to priority). In some examples, the transmitter715may be co-located with a receiver710in a transceiver module. The transmitter715may utilize a single antenna or a set of multiple antennas. The device705, or various components thereof, may be an example of means for performing various aspects of processing PRSs according to priority as described herein. For example, the communications manager720may include a PRS component725, a priority component730, a positioning component735, or any combination thereof. The communications manager720may be an example of aspects of a communications manager620as described herein. In some examples, the communications manager720, or various components thereof, may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the receiver710, the transmitter715, or both. For example, the communications manager720may receive information from the receiver710, send information to the transmitter715, or be integrated in combination with the receiver710, the transmitter715, or both to receive information, transmit information, or perform various other operations as described herein. The communications manager720may support wireless communication at a UE in accordance with examples as disclosed herein. The PRS component725may be configured as or otherwise support a means for receiving, from one or more anchor nodes, a set of PRSs over a set of resources. The priority component730may be configured as or otherwise support a means for receiving one or more indications that indicate a respective priority for each of one or more PRSs of the set of PRSs.
The positioning component735may be configured as or otherwise support a means for decoding at least one PRS of the set of PRSs to obtain positioning information associated with the UE, the at least one PRS based on the respective priorities for the one or more PRSs.

FIG.8shows a block diagram800of a communications manager820that supports processing PRSs according to priority in accordance with one or more aspects of the present disclosure. The communications manager820may be an example of aspects of a communications manager620, a communications manager720, or both, as described herein. The communications manager820, or various components thereof, may be an example of means for performing various aspects of processing PRSs according to priority as described herein. For example, the communications manager820may include a PRS component825, a priority component830, a positioning component835, a capability component840, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses). The communications manager820may support wireless communication at a UE in accordance with examples as disclosed herein. The PRS component825may be configured as or otherwise support a means for receiving, from one or more anchor nodes, a set of PRSs over a set of resources. The priority component830may be configured as or otherwise support a means for receiving one or more indications that indicate a respective priority for each of one or more PRSs of the set of PRSs. The positioning component835may be configured as or otherwise support a means for decoding at least one PRS of the set of PRSs to obtain positioning information associated with the UE, the at least one PRS based on the respective priorities for the one or more PRSs.
In some examples, the PRS component825may be configured as or otherwise support a means for receiving the set of PRSs, the set of PRSs scrambled according to a respective scrambling sequence for each of the one or more PRSs, where the respective scrambling sequence indicates the respective priority. In some examples, the PRS component825may be configured as or otherwise support a means for receiving the set of PRSs according to respective resource mapping patterns, where the respective resource mapping patterns for the one or more PRSs indicate the respective priority for each of the one or more PRSs. In some examples, the priority component830may be configured as or otherwise support a means for receiving one or more control channels associated with the one or more PRSs, where the one or more control channels indicate the respective priority for each of the one or more PRSs. In some examples, a control channel of the one or more control channels includes sidelink control information having a priority field that indicates a priority of a PRS associated with the control channel. In some examples, the priority component830may be configured as or otherwise support a means for receiving at least one control channel including a list associated with the one or more PRSs, the list ordered according to the respective priorities. In some examples, the priority component830may be configured as or otherwise support a means for receiving, from the one or more anchor nodes, one or more broadcast signals indicating the respective priorities for the one or more PRSs, where a payload or a sequence of the one or more broadcast signals indicates the respective priorities. In some examples, the priority component830may be configured as or otherwise support a means for receiving a configuration message including a list of priorities, where the respective priorities for the one or more PRSs are based on the list of priorities. 
In some examples, the priority component830may be configured as or otherwise support a means for receiving one or more priority parameters that indicate the respective priority for each of the one or more PRSs, the respective priorities based on the set of resources, a resource of the set of resources, an anchor node of the one or more anchor nodes, a reliability of a PRS of the one or more PRSs, or a combination thereof. In some examples, the priority component830may be configured as or otherwise support a means for selecting the at least one PRS from the one or more PRSs based on a priority rule for obtaining the positioning information associated with the UE and the indication of the respective priorities for the one or more PRSs. In some examples, the capability component840may be configured as or otherwise support a means for selecting the at least one PRS from the one or more PRSs based on a capability of the UE. In some examples, the priority component830may be configured as or otherwise support a means for selecting the at least one PRS based on a priority of the at least one PRS being greater than a priority of a second PRS of the one or more PRSs, where one or more resources of the at least one PRS overlap with one or more resources of the second PRS. In some examples, the priority component830may be configured as or otherwise support a means for transmitting a report including a priority of the at least one PRS, where the priority is based on one or more parameters associated with the UE. In some examples, the priority component830may be configured as or otherwise support a means for transmitting a request for a priority of the at least one PRS. In some examples, the priority component830may be configured as or otherwise support a means for determining to override the respective priority for the at least one PRS based on one or more parameters of the UE. 
In some examples, the PRS component825may be configured as or otherwise support a means for selecting the at least one PRS based on determining to override the respective priority for the at least one PRS. In some examples, the respective priorities are based on a type of an anchor node of the one or more anchor nodes, positioning reliability information associated with the anchor node, transmission parameters associated with the anchor node, a signal quality parameter associated with the anchor node, a mobility parameter of the anchor node, a type of a PRS, a band type for the PRS, or a combination thereof.

FIG.9shows a diagram of a system900including a device905that supports processing PRSs according to priority in accordance with one or more aspects of the present disclosure. The device905may be an example of or include the components of a device605, a device705, or a UE115as described herein. The device905may communicate wirelessly with one or more base stations105, UEs115, or any combination thereof. The device905may include components for bi-directional voice and data communications including components for transmitting and receiving communications, such as a communications manager920, an input/output (I/O) controller910, a transceiver915, an antenna925, a memory930, code935, and a processor940. These components may be in electronic communication or otherwise coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more buses (e.g., a bus945). The I/O controller910may manage input and output signals for the device905. The I/O controller910may also manage peripherals not integrated into the device905. In some cases, the I/O controller910may represent a physical connection or port to an external peripheral. In some cases, the I/O controller910may utilize an operating system such as iOS®, ANDROID®, MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, LINUX®, or another known operating system.
Additionally or alternatively, the I/O controller910may represent or interact with a modem, a keyboard, a mouse, a touchscreen, or a similar device. In some cases, the I/O controller910may be implemented as part of a processor, such as the processor940. In some cases, a user may interact with the device905via the I/O controller910or via hardware components controlled by the I/O controller910. In some cases, the device905may include a single antenna925. However, in some other cases, the device905may have more than one antenna925, which may be capable of concurrently transmitting or receiving multiple wireless transmissions. The transceiver915may communicate bi-directionally, via the one or more antennas925, wired, or wireless links as described herein. For example, the transceiver915may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver. The transceiver915may also include a modem to modulate the packets, to provide the modulated packets to one or more antennas925for transmission, and to demodulate packets received from the one or more antennas925. The transceiver915, or the transceiver915and one or more antennas925, may be an example of a transmitter615, a transmitter715, a receiver610, a receiver710, or any combination thereof or component thereof, as described herein. The memory930may include random access memory (RAM) and read-only memory (ROM). The memory930may store computer-readable, computer-executable code935including instructions that, when executed by the processor940, cause the device905to perform various functions described herein. The code935may be stored in a non-transitory computer-readable medium such as system memory or another type of memory. In some cases, the code935may not be directly executable by the processor940but may cause a computer (e.g., when compiled and executed) to perform functions described herein. 
In some cases, the memory930may contain, among other things, a basic I/O system (BIOS) which may control basic hardware or software operation such as the interaction with peripheral components or devices. The processor940may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, the processor940may be configured to operate a memory array using a memory controller. In some other cases, a memory controller may be integrated into the processor940. The processor940may be configured to execute computer-readable instructions stored in a memory (e.g., the memory930) to cause the device905to perform various functions (e.g., functions or tasks supporting processing PRSs according to priority). For example, the device905or a component of the device905may include a processor940and memory930coupled to the processor940, the processor940and memory930configured to perform various functions described herein. The communications manager920may support wireless communication at a UE in accordance with examples as disclosed herein. For example, the communications manager920may be configured as or otherwise support a means for receiving, from one or more anchor nodes, a set of PRSs over a set of resources. The communications manager920may be configured as or otherwise support a means for receiving one or more indications that indicate a respective priority for each of one or more PRSs of the set of PRSs. The communications manager920may be configured as or otherwise support a means for decoding at least one PRS of the set of PRSs to obtain positioning information associated with the UE, the at least one PRS based on the respective priorities for the one or more PRSs. 
By including or configuring the communications manager920in accordance with examples as described herein, the device905may support techniques for a UE to process PRSs according to priority, which may cause improved communication reliability, reduced latency, improved user experience related to reduced processing, reduced power consumption, more efficient utilization of communication resources, improved coordination between devices, longer battery life, improved utilization of processing capability, and the like. In some examples, the communications manager920may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the transceiver915, the one or more antennas925, or any combination thereof. Although the communications manager920is illustrated as a separate component, in some examples, one or more functions described with reference to the communications manager920may be supported by or performed by the processor940, the memory930, the code935, or any combination thereof. For example, the code935may include instructions executable by the processor940to cause the device905to perform various aspects of processing PRSs according to priority as described herein, or the processor940and the memory930may be otherwise configured to perform or support such operations.

FIG.10shows a block diagram1000of a device1005that supports processing PRSs according to priority in accordance with one or more aspects of the present disclosure. The device1005may be an example of aspects of an anchor node as described herein. The device1005may include a receiver1010, a transmitter1015, and a communications manager1020. The device1005may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses).
The receiver1010may provide a means for receiving information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to processing PRSs according to priority). Information may be passed on to other components of the device1005. The receiver1010may utilize a single antenna or a set of multiple antennas. The transmitter1015may provide a means for transmitting signals generated by other components of the device1005. For example, the transmitter1015may transmit information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to processing PRSs according to priority). In some examples, the transmitter1015may be co-located with a receiver1010in a transceiver module. The transmitter1015may utilize a single antenna or a set of multiple antennas. The communications manager1020, the receiver1010, the transmitter1015, or various combinations thereof or various components thereof may be examples of means for performing various aspects of processing PRSs according to priority as described herein. For example, the communications manager1020, the receiver1010, the transmitter1015, or various combinations or components thereof may support a method for performing one or more of the functions described herein. In some examples, the communications manager1020, the receiver1010, the transmitter1015, or various combinations or components thereof may be implemented in hardware (e.g., in communications management circuitry). 
The hardware may include a processor, a DSP, an ASIC, an FPGA or other programmable logic device, a discrete gate or transistor logic, discrete hardware components, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure. In some examples, a processor and memory coupled with the processor may be configured to perform one or more of the functions described herein (e.g., by executing, by the processor, instructions stored in the memory). Additionally or alternatively, in some examples, the communications manager1020, the receiver1010, the transmitter1015, or various combinations or components thereof may be implemented in code (e.g., as communications management software or firmware) executed by a processor. If implemented in code executed by a processor, the functions of the communications manager1020, the receiver1010, the transmitter1015, or various combinations or components thereof may be performed by a general-purpose processor, a DSP, a CPU, an ASIC, an FPGA, or any combination of these or other programmable logic devices (e.g., configured as or otherwise supporting a means for performing the functions described in the present disclosure). In some examples, the communications manager1020may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the receiver1010, the transmitter1015, or both. For example, the communications manager1020may receive information from the receiver1010, send information to the transmitter1015, or be integrated in combination with the receiver1010, the transmitter1015, or both to receive information, transmit information, or perform various other operations as described herein. The communications manager1020may support wireless communication at an anchor node in accordance with examples as disclosed herein. 
For example, the communications manager1020may be configured as or otherwise support a means for transmitting, to a UE, a set of PRSs over a set of resources. The communications manager1020may be configured as or otherwise support a means for transmitting, to the UE, one or more indications that indicate a respective priority for each of one or more PRSs of the set of PRSs. By including or configuring the communications manager1020in accordance with examples as described herein, the device1005(e.g., a processor controlling or otherwise coupled to the receiver1010, the transmitter1015, the communications manager1020, or a combination thereof) may support techniques for a UE to process PRSs according to priority, which may cause reduced processing, reduced power consumption, more efficient utilization of communication resources, and the like. FIG.11shows a block diagram1100of a device1105that supports processing PRSs according to priority in accordance with one or more aspects of the present disclosure. The device1105may be an example of aspects of a device1005or an anchor node115as described herein. The device1105may include a receiver1110, a transmitter1115, and a communications manager1120. The device1105may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver1110may provide a means for receiving information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to processing PRSs according to priority). Information may be passed on to other components of the device1105. The receiver1110may utilize a single antenna or a set of multiple antennas. The transmitter1115may provide a means for transmitting signals generated by other components of the device1105.
For example, the transmitter1115may transmit information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to processing PRSs according to priority). In some examples, the transmitter1115may be co-located with a receiver1110in a transceiver module. The transmitter1115may utilize a single antenna or a set of multiple antennas. The device1105, or various components thereof, may be an example of means for performing various aspects of processing PRSs according to priority as described herein. For example, the communications manager1120may include a PRS manager1125, a priority manager1130, or any combination thereof. The communications manager1120may be an example of aspects of a communications manager920as described herein. In some examples, the communications manager1120, or various components thereof, may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the receiver1110, the transmitter1115, or both. For example, the communications manager1120may receive information from the receiver1110, send information to the transmitter1115, or be integrated in combination with the receiver1110, the transmitter1115, or both to receive information, transmit information, or perform various other operations as described herein. The communications manager1120may support wireless communication at an anchor node in accordance with examples as disclosed herein. The PRS manager1125may be configured as or otherwise support a means for transmitting, to a UE, a set of PRSs over a set of resources. The priority manager1130may be configured as or otherwise support a means for transmitting, to the UE, one or more indications that indicate a respective priority for each of one or more PRSs of the set of PRSs.
FIG.12shows a block diagram1200of a communications manager1220that supports processing PRSs according to priority in accordance with one or more aspects of the present disclosure. The communications manager1220may be an example of aspects of a communications manager1020, a communications manager1120, or both, as described herein. The communications manager1220, or various components thereof, may be an example of means for performing various aspects of processing PRSs according to priority as described herein. For example, the communications manager1220may include a PRS manager1225, a priority manager1230, a position manager1235, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses). The communications manager1220may support wireless communication at an anchor node in accordance with examples as disclosed herein. The PRS manager1225may be configured as or otherwise support a means for transmitting, to a UE, a set of PRSs over a set of resources. The priority manager1230may be configured as or otherwise support a means for transmitting, to the UE, one or more indications that indicate a respective priority for each of one or more PRSs of the set of PRSs. In some examples, the PRS manager1225may be configured as or otherwise support a means for transmitting the set of PRSs, the set of PRSs scrambled according to a respective scrambling sequence for each of the one or more PRSs, where the respective scrambling sequence indicates the respective priority. In some examples, the PRS manager1225may be configured as or otherwise support a means for transmitting the set of PRSs according to respective resource mapping patterns, where the respective resource mapping patterns for the one or more PRSs indicate the respective priority for each of the one or more PRSs.
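The idea that a scrambling sequence itself conveys the priority of a PRS can be sketched as follows. The sequence identifiers, the priority values, and the lookup helper are illustrative assumptions invented for this sketch; the disclosure only states that the respective scrambling sequence indicates the respective priority.

```python
# Illustrative sketch: inferring PRS priority from the scrambling sequence
# a receiver detects. The sequence IDs and priority values are hypothetical.

# Hypothetical mapping from scrambling-sequence ID to priority
# (lower number = higher priority).
SEQUENCE_TO_PRIORITY = {
    1001: 0,  # highest-priority PRS
    1002: 1,
    1003: 2,  # lowest-priority PRS
}

def priority_from_scrambling(sequence_id):
    """Return the priority implied by a detected scrambling-sequence ID."""
    return SEQUENCE_TO_PRIORITY.get(sequence_id)

# A receiver that identifies which sequence descrambles each PRS can then
# rank the detected PRSs by the priority each sequence implies.
detected = [1003, 1001, 1002]
ranked = sorted(detected, key=priority_from_scrambling)
```

Under these assumptions, `ranked` orders the detected PRSs from highest to lowest priority without any separate control signaling, which is the appeal of carrying priority implicitly in the scrambling sequence.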
In some examples, the priority manager1230may be configured as or otherwise support a means for transmitting one or more control channels associated with the one or more PRSs, where the one or more control channels indicate the respective priority for each of the one or more PRSs. In some examples, a control channel of the one or more control channels includes sidelink control information having a priority field that indicates a priority of a PRS associated with the control channel. In some examples, the priority manager1230may be configured as or otherwise support a means for transmitting at least one control channel including a list associated with the one or more PRSs, the list ordered according to the respective priorities. In some examples, the priority manager1230may be configured as or otherwise support a means for transmitting, to the UE, one or more broadcast signals indicating the respective priorities for the one or more PRSs, where a payload or a sequence of the one or more broadcast signals indicates the respective priorities. In some examples, the priority manager1230may be configured as or otherwise support a means for transmitting a configuration message including a list of priorities, where the respective priorities for the one or more PRSs are based on the list of priorities. In some examples, the priority manager1230may be configured as or otherwise support a means for transmitting one or more priority parameters that indicate the respective priority for each of the one or more PRSs, the respective priorities based on the set of resources, a resource of the set of resources, the anchor node, a reliability of a PRS of the one or more PRSs, or a combination thereof. In some examples, the priority manager1230may be configured as or otherwise support a means for receiving a report including a priority of at least one PRS of the set of PRSs, where the priority of the at least one PRS is based on one or more parameters associated with the anchor node. 
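The sidelink control information (SCI) priority field described above might be read as in the following sketch. The field width and bit position are invented for illustration; the disclosure states only that the SCI includes a priority field indicating the priority of the associated PRS.

```python
# Illustrative sketch: extracting a priority field from a sidelink control
# information (SCI) payload. The 3-bit width and least-significant-bit
# placement are hypothetical assumptions, not taken from the disclosure.

PRIORITY_FIELD_BITS = 3    # assumed field width
PRIORITY_FIELD_OFFSET = 0  # assumed: field occupies the least-significant bits

def sci_priority(sci_payload: int) -> int:
    """Extract the PRS priority carried in an SCI payload (assumed layout)."""
    mask = (1 << PRIORITY_FIELD_BITS) - 1
    return (sci_payload >> PRIORITY_FIELD_OFFSET) & mask

# Example: an SCI payload whose low three bits encode priority 5.
payload = 0b101101
```

With this assumed layout, `sci_priority(payload)` yields 5, letting the UE associate that priority with the PRS scheduled by the control channel.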
In some examples, the priority manager1230may be configured as or otherwise support a means for receiving a request for a priority of at least one PRS of the set of PRSs. In some examples, the position manager1235may be configured as or otherwise support a means for detecting a change in position at the anchor node. In some examples, the respective priorities are based on a type of the anchor node, positioning reliability information associated with the anchor node, transmission parameters associated with the anchor node, a signal quality parameter associated with the anchor node, a mobility parameter of the anchor node, a type of a PRS, a band type for the PRS, or a combination thereof. In some examples, the anchor node is a sidelink device. FIG.13shows a diagram of a system1300including a device1305that supports processing PRSs according to priority in accordance with one or more aspects of the present disclosure. The device1305may be an example of or include the components of a device1005, a device1105, or an anchor node as described herein. The device1305may include components for bi-directional voice and data communications including components for transmitting and receiving communications, such as a communications manager1320, a network communications manager1310, a transceiver1315, an antenna1325, a memory1330, code1335, a processor1340, and an inter-station communications manager1345. These components may be in electronic communication or otherwise coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more buses (e.g., a bus1350). The network communications manager1310may manage communications with a core network130(e.g., via one or more wired backhaul links). For example, the network communications manager1310may manage the transfer of data communications for client devices, such as one or more UEs115. In some cases, the device1305may include a single antenna1325. 
However, in some other cases the device1305may have more than one antenna1325, which may be capable of concurrently transmitting or receiving multiple wireless transmissions. The transceiver1315may communicate bi-directionally, via the one or more antennas1325, wired, or wireless links as described herein. For example, the transceiver1315may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver. The transceiver1315may also include a modem to modulate the packets, to provide the modulated packets to one or more antennas1325for transmission, and to demodulate packets received from the one or more antennas1325. The transceiver1315, or the transceiver1315and one or more antennas1325, may be an example of a transmitter915, a transmitter1015, a receiver910, a receiver1010, or any combination thereof or component thereof, as described herein. The memory1330may include RAM and ROM. The memory1330may store computer-readable, computer-executable code1335including instructions that, when executed by the processor1340, cause the device1305to perform various functions described herein. The code1335may be stored in a non-transitory computer-readable medium such as system memory or another type of memory. In some cases, the code1335may not be directly executable by the processor1340but may cause a computer (e.g., when compiled and executed) to perform functions described herein. In some cases, the memory1330may contain, among other things, a BIOS which may control basic hardware or software operation such as the interaction with peripheral components or devices. The processor1340may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). 
In some cases, the processor1340may be configured to operate a memory array using a memory controller. In some other cases, a memory controller may be integrated into the processor1340. The processor1340may be configured to execute computer-readable instructions stored in a memory (e.g., the memory1330) to cause the device1305to perform various functions (e.g., functions or tasks supporting processing PRSs according to priority). For example, the device1305or a component of the device1305may include a processor1340and memory1330coupled to the processor1340, the processor1340and memory1330configured to perform various functions described herein. The inter-station communications manager1345may manage communications with other base stations105, and may include a controller or scheduler for controlling communications with UEs115in cooperation with other base stations105. For example, the inter-station communications manager1345may coordinate scheduling for transmissions to UEs115for various interference mitigation techniques such as beamforming or joint transmission. In some examples, the inter-station communications manager1345may provide an X2 interface within an LTE/LTE-A wireless communications network technology to provide communication between base stations105. The communications manager1320may support wireless communication at an anchor node in accordance with examples as disclosed herein. For example, the communications manager1320may be configured as or otherwise support a means for transmitting, to a UE, a set of PRSs over a set of resources. The communications manager1320may be configured as or otherwise support a means for transmitting, to the UE, one or more indications that indicate a respective priority for each of one or more PRSs of the set of PRSs. 
By including or configuring the communications manager1320in accordance with examples as described herein, the device1305may support techniques for a UE to process PRSs according to priority, which may cause improved communication reliability, reduced latency, improved user experience related to reduced processing, reduced power consumption, more efficient utilization of communication resources, improved coordination between devices, longer battery life, improved utilization of processing capability, and the like. In some examples, the communications manager1320may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the transceiver1315, the one or more antennas1325, or any combination thereof. Although the communications manager1320is illustrated as a separate component, in some examples, one or more functions described with reference to the communications manager1320may be supported by or performed by the processor1340, the memory1330, the code1335, or any combination thereof. For example, the code1335may include instructions executable by the processor1340to cause the device1305to perform various aspects of processing PRSs according to priority as described herein, or the processor1340and the memory1330may be otherwise configured to perform or support such operations. FIG.14shows a flowchart illustrating a method1400that supports processing PRSs according to priority in accordance with one or more aspects of the present disclosure. The operations of the method1400may be implemented by a UE or its components as described herein. For example, the operations of the method1400may be performed by a UE115as described with reference toFIGS.1through9. In some examples, a UE may execute a set of instructions to control the functional elements of the UE to perform the described functions. 
Additionally or alternatively, the UE may perform aspects of the described functions using special-purpose hardware. At1405, the method may include receiving, from one or more anchor nodes, a set of PRSs over a set of resources. The operations of1405may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1405may be performed by a PRS component825as described with reference toFIG.8. At1410, the method may include receiving one or more indications that indicate a respective priority for each of one or more PRSs of the set of PRSs. The operations of1410may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1410may be performed by a priority component830as described with reference toFIG.8. At1415, the method may include decoding at least one PRS of the set of PRSs to obtain positioning information associated with the UE, the at least one PRS based on the respective priorities for the one or more PRSs. The operations of1415may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1415may be performed by a positioning component835as described with reference toFIG.8. FIG.15shows a flowchart illustrating a method1500that supports processing PRSs according to priority in accordance with one or more aspects of the present disclosure. The operations of the method1500may be implemented by a UE or its components as described herein. For example, the operations of the method1500may be performed by a UE115as described with reference toFIGS.1through9. In some examples, a UE may execute a set of instructions to control the functional elements of the UE to perform the described functions. Additionally or alternatively, the UE may perform aspects of the described functions using special-purpose hardware. At1505, the method may include receiving, from one or more anchor nodes, a set of PRSs over a set of resources.
The operations of1505may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1505may be performed by a PRS component825as described with reference toFIG.8. At1510, the method may include receiving the set of PRSs, the set of PRSs scrambled according to respective scrambling sequences. The operations of1510may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1510may be performed by a PRS component825as described with reference toFIG.8. At1515, the method may include receiving one or more indications that indicate a respective priority for each of one or more PRSs of the set of PRSs, where the respective scrambling sequences for the one or more PRSs indicate the respective priority for each of the one or more PRSs. The operations of1515may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1515may be performed by a priority component830as described with reference toFIG.8. At1520, the method may include decoding at least one PRS of the set of PRSs to obtain positioning information associated with the UE, the at least one PRS based on the respective priorities for the one or more PRSs. The operations of1520may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1520may be performed by a positioning component835as described with reference toFIG.8. FIG.16shows a flowchart illustrating a method1600that supports processing PRSs according to priority in accordance with one or more aspects of the present disclosure. The operations of the method1600may be implemented by a UE or its components as described herein. For example, the operations of the method1600may be performed by a UE115as described with reference toFIGS.1through9.
In some examples, a UE may execute a set of instructions to control the functional elements of the UE to perform the described functions. Additionally or alternatively, the UE may perform aspects of the described functions using special-purpose hardware. At1605, the method may include receiving, from one or more anchor nodes, a set of PRSs over a set of resources. The operations of1605may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1605may be performed by a PRS component825as described with reference toFIG.8. At1610, the method may include receiving one or more indications that indicate a respective priority for each of one or more PRSs of the set of PRSs. The operations of1610may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1610may be performed by a priority component830as described with reference toFIG.8. At1615, the method may include receiving one or more control channels associated with the one or more PRSs, where the one or more control channels indicate the respective priority for each of the one or more PRSs. The operations of1615may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1615may be performed by a priority component830as described with reference toFIG.8. At1620, the method may include decoding at least one PRS of the set of PRSs to obtain positioning information associated with the UE, the at least one PRS based on the respective priorities for the one or more PRSs. The operations of1620may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1620may be performed by a positioning component835as described with reference toFIG.8. FIG.17shows a flowchart illustrating a method1700that supports processing PRSs according to priority in accordance with one or more aspects of the present disclosure.
The operations of the method1700may be implemented by an anchor node or its components as described herein. For example, the operations of the method1700may be performed by an anchor node as described with reference toFIGS.1through4and10through13. In some examples, an anchor node may execute a set of instructions to control the functional elements of the anchor node to perform the described functions. Additionally or alternatively, the anchor node may perform aspects of the described functions using special-purpose hardware. At1705, the method may include transmitting, to a UE, a set of PRSs over a set of resources. The operations of1705may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1705may be performed by a PRS manager1225as described with reference toFIG.12. At1710, the method may include transmitting, to the UE, one or more indications that indicate a respective priority for each of one or more PRSs of the set of PRSs. The operations of1710may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1710may be performed by a priority manager1230as described with reference toFIG.12. The following provides an overview of aspects of the present disclosure: Aspect 1: A method for wireless communication at a UE, comprising: receiving, from one or more anchor nodes, a set of positioning reference signals over a set of resources; receiving one or more indications that indicate a respective priority for each of one or more positioning reference signals of the set of positioning reference signals; and decoding at least one positioning reference signal of the set of positioning reference signals to obtain positioning information associated with the UE, the at least one positioning reference signal based at least in part on the indication of the respective priority.
Aspect 2: The method of aspect 1, the receiving the set of positioning reference signals comprising: receiving the set of positioning reference signals, the set of positioning reference signals scrambled according to a respective scrambling sequence for each of the one or more positioning reference signals, wherein the respective scrambling sequence indicates the respective priority. Aspect 3: The method of any of aspects 1 through 2, the receiving the set of positioning reference signals comprising: receiving the set of positioning reference signals, the set of positioning reference signals received according to a respective resource mapping pattern for each of the one or more positioning reference signals, wherein the respective resource mapping pattern indicates the respective priority. Aspect 4: The method of aspect 1, the receiving the indication comprising: receiving one or more control channels associated with the one or more positioning reference signals, wherein the one or more control channels indicate the respective priority for each of the one or more positioning reference signals. Aspect 5: The method of aspect 4, wherein a control channel of the one or more control channels comprises sidelink control information having a priority field that indicates a priority of a positioning reference signal associated with the control channel. Aspect 6: The method of any of aspects 1 through 5, the receiving the indication comprising: receiving at least one control channel comprising a list associated with the one or more positioning reference signals, the list ordered according to the respective priority for each of the one or more positioning reference signals. 
Aspect 7: The method of any of aspects 1 through 6, the receiving the indication comprising: receiving, from the one or more anchor nodes, one or more broadcast signals indicating the respective priority for each of the one or more positioning reference signals, wherein a payload or a sequence of the one or more broadcast signals indicates the respective priority. Aspect 8: The method of any of aspects 1 through 7, the receiving the indication comprising: receiving a configuration message comprising a list of priorities, wherein the respective priority for each of the one or more positioning reference signals is based at least in part on the list of priorities. Aspect 9: The method of any of aspects 1 through 8, the receiving the indication comprising: receiving one or more priority parameters that indicate the respective priority for each of the one or more positioning reference signals, the respective priority based at least in part on the set of resources, a resource of the set of resources, an anchor node of the one or more anchor nodes, a reliability of a positioning reference signal of the one or more positioning reference signals, or a combination thereof. Aspect 10: The method of any of aspects 1 through 9, further comprising: selecting the at least one positioning reference signal from the one or more positioning reference signals based at least in part on a priority rule for obtaining the positioning information associated with the UE and the one or more indications. Aspect 11: The method of any of aspects 1 through 10, further comprising: selecting the at least one positioning reference signal from the one or more positioning reference signals based at least in part on a capability of the UE.
Aspect 12: The method of any of aspects 1 through 11, further comprising: selecting the at least one positioning reference signal based at least in part on a priority of the at least one positioning reference signal being greater than a priority of a second positioning reference signal of the one or more positioning reference signals, wherein one or more resources of the at least one positioning reference signal overlap with one or more resources of the second positioning reference signal. Aspect 13: The method of any of aspects 1 through 12, further comprising: transmitting a report comprising a priority of the at least one positioning reference signal, wherein the priority is based at least in part on one or more parameters associated with the UE. Aspect 14: The method of any of aspects 1 through 13, further comprising: transmitting a request for a priority of the at least one positioning reference signal. Aspect 15: The method of any of aspects 1 through 14, further comprising: determining to override the respective priority for the at least one positioning reference signal based at least in part on one or more parameters of the UE; and selecting the at least one positioning reference signal based at least in part on determining to override the respective priority for the at least one positioning reference signal. Aspect 16: The method of any of aspects 1 through 15 wherein the respective priority for each of the one or more positioning reference signals is based at least in part on a type of an anchor node of the one or more anchor nodes, positioning reliability information associated with the anchor node, transmission parameters associated with the anchor node, a signal quality parameter associated with the anchor node, a mobility parameter of the anchor node, a type of a positioning reference signal, a band type for the positioning reference signal, or a combination thereof.
Aspect 17: A method for wireless communication at a node, comprising: transmitting, to a UE, a set of positioning reference signals over a set of resources; and transmitting, to the UE, one or more indications that indicate a respective priority for each of one or more positioning reference signals of the set of positioning reference signals. Aspect 18: The method of aspect 17, the transmitting the set of positioning reference signals comprising: transmitting the set of positioning reference signals, the set of positioning reference signals scrambled according to a respective scrambling sequence for each of the one or more positioning reference signals, wherein the respective scrambling sequence indicates the respective priority. Aspect 19: The method of any of aspects 17 through 18, the transmitting the set of positioning reference signals comprising: transmitting the set of positioning reference signals according to a respective resource mapping pattern, wherein the respective resource mapping pattern indicates the respective priority. Aspect 20: The method of aspect 17, the transmitting the indication comprising: transmitting one or more control channels associated with the one or more positioning reference signals, wherein the one or more control channels indicate the respective priority for each of the one or more positioning reference signals. Aspect 21: The method of aspect 20, wherein a control channel of the one or more control channels comprises sidelink control information having a priority field that indicates a priority of a positioning reference signal associated with the control channel. Aspect 22: The method of any of aspects 17 through 21, the transmitting the indication comprising: transmitting at least one control channel comprising a list associated with the one or more positioning reference signals, the list ordered according to the respective priority for each of the one or more positioning reference signals. 
Aspect 23: The method of any of aspects 17 through 22, the transmitting the indication comprising: transmitting, to the UE, one or more broadcast signals indicating the respective priority for each of the one or more positioning reference signals, wherein a payload or a sequence of the one or more broadcast signals indicates the respective priority. Aspect 24: The method of any of aspects 17 through 23, the transmitting the indication comprising: transmitting a configuration message comprising a list of priorities, wherein the respective priority for each of the one or more positioning reference signals is based at least in part on the list of priorities. Aspect 25: The method of any of aspects 17 through 24, the transmitting the indication comprising: transmitting one or more priority parameters that indicate the respective priority for each of the one or more positioning reference signals, the respective priority based at least in part on the set of resources, a resource of the set of resources, the node, a reliability of a positioning reference signal of the one or more positioning reference signals, or a combination thereof. Aspect 26: The method of any of aspects 17 through 25, further comprising: receiving a report comprising a priority of at least one positioning reference signal of the set of positioning reference signals, wherein the priority of the at least one positioning reference signal is based at least in part on one or more parameters associated with the node. Aspect 27: The method of any of aspects 17 through 26, further comprising: receiving a request for a priority of at least one positioning reference signal of the set of positioning reference signals. Aspect 28: The method of any of aspects 17 through 27, the transmitting the indication of respective priorities comprising: detecting a change in position at the node.
Aspect 29: The method of any of aspects 17 through 28 wherein the respective priorities are based at least in part on a type of the node, positioning reliability information associated with the node, transmission parameters associated with the node, a signal quality parameter associated with the node, a mobility parameter of the node, a type of a positioning reference signal, a band type for the positioning reference signal, or a combination thereof. Aspect 30: The method of any of aspects 17 through 29 wherein the node is a sidelink device. Aspect 31: An apparatus for wireless communication at a UE, comprising a processor; and memory coupled to the processor, the processor and memory configured to perform a method of any of aspects 1 through 16. Aspect 32: An apparatus for wireless communication at a UE, comprising at least one means for performing a method of any of aspects 1 through 16. Aspect 33: A non-transitory computer-readable medium storing code for wireless communication at a UE, the code comprising instructions executable by a processor to perform a method of any of aspects 1 through 16. Aspect 34: An apparatus for wireless communication at a node, comprising a processor; and memory coupled to the processor, the processor and memory configured to perform a method of any of aspects 17 through 30. Aspect 35: An apparatus for wireless communication at a node, comprising at least one means for performing a method of any of aspects 17 through 30. Aspect 36: A non-transitory computer-readable medium storing code for wireless communication at a node, the code comprising instructions executable by a processor to perform a method of any of aspects 17 through 30.
Aspect 37: A method for wireless communication at a UE, comprising: receiving, from one or more anchor nodes, a set of positioning reference signals over a set of resources; receiving an indication of respective priorities for one or more positioning reference signals of the set of positioning reference signals; and decoding at least one positioning reference signal of the set of positioning reference signals to obtain positioning information associated with the UE, the at least one positioning reference signal based at least in part on the respective priorities for the one or more positioning reference signals. Aspect 38: The method of aspect 37, the receiving the set of positioning reference signals comprising: receiving the set of positioning reference signals, the set of positioning reference signals scrambled according to respective scrambling sequences, wherein the respective scrambling sequences for the one or more positioning reference signals indicate the respective priorities for the one or more positioning reference signals. Aspect 39: The method of any of aspects 37 through 38, the receiving the set of positioning reference signals comprising: receiving the set of positioning reference signals, the set of positioning reference signals received according to respective resource mapping patterns, wherein the respective resource mapping patterns for the one or more positioning reference signals indicate the respective priorities for the one or more positioning reference signals. Aspect 40: The method of any of aspects 37 through 39, the receiving the indication comprising: receiving one or more control channels associated with the one or more positioning reference signals, wherein the one or more control channels indicate the respective priorities for the one or more positioning reference signals.
Aspect 41: The method of aspect 40, wherein a control channel of the one or more control channels comprises sidelink control information having a priority field that indicates a priority of a positioning reference signal associated with the control channel. Aspect 42: The method of any of aspects 37 through 41, the receiving the indication comprising: receiving at least one control channel comprising a list associated with the one or more positioning reference signals, the list ordered according to the respective priorities. Aspect 43: The method of any of aspects 37 through 42, the receiving the indication comprising: receiving, from the one or more anchor nodes, one or more broadcast signals indicating the respective priorities for the one or more positioning reference signals, wherein a payload or a sequence of the one or more broadcast signals indicates the respective priorities. Aspect 44: The method of any of aspects 37 through 43, the receiving the indication comprising: receiving a configuration message comprising a list of priorities, wherein the respective priorities for the one or more positioning reference signals are based at least in part on the list of priorities. Aspect 45: The method of any of aspects 37 through 44, the receiving the indication comprising: receiving one or more priority parameters that indicate the respective priorities for the one or more positioning reference signals, the respective priorities based at least in part on the set of resources, a resource of the set of resources, an anchor node of the one or more anchor nodes, a reliability of a positioning reference signal of the one or more positioning reference signals, or a combination thereof. 
Aspect 46: The method of any of aspects 37 through 45, further comprising: selecting the at least one positioning reference signal from the one or more positioning reference signals based at least in part on a priority rule for obtaining the positioning information associated with the UE and the indication of the respective priorities for the one or more positioning reference signals. Aspect 47: The method of any of aspects 37 through 46, further comprising: selecting the at least one positioning reference signal from the one or more positioning reference signals based at least in part on a capability of the UE. Aspect 48: The method of any of aspects 37 through 47, further comprising: selecting the at least one positioning reference signal based at least in part on a priority of the at least one positioning reference signal being greater than a priority of a second positioning reference signal of the one or more positioning reference signals, wherein one or more resources of the at least one positioning reference signal overlap with one or more resources of the second positioning reference signal. Aspect 49: The method of any of aspects 37 through 48, further comprising: transmitting a report comprising a priority of the at least one positioning reference signal, wherein the priority is based at least in part on one or more parameters associated with the UE. Aspect 50: The method of any of aspects 37 through 49, further comprising: transmitting a request for a priority of the at least one positioning reference signal. Aspect 51: The method of any of aspects 37 through 50, further comprising: determining to override the respective priority for the at least one positioning reference signal based at least in part on one or more parameters of the UE; and selecting the at least one positioning reference signal based at least in part on determining to override the respective priority for the at least one positioning reference signal. 
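The selection behavior of Aspects 46 through 48 (prefer the higher-priority signal when resources overlap, subject to a UE constraint) can be sketched as follows. The `Prs` container, the lower-number-is-higher-priority convention, and the `capacity` limit standing in for a UE capability are all assumptions made only for illustration.

```python
from dataclasses import dataclass

@dataclass
class Prs:
    prs_id: int
    priority: int         # assumed convention: lower value = higher priority
    resources: frozenset  # abstract identifiers of the occupied resources

def select_prs(candidates, capacity):
    """Keep up to `capacity` signals, visiting candidates in priority order
    and skipping any whose resources overlap an already-chosen signal."""
    chosen = []
    for prs in sorted(candidates, key=lambda p: p.priority):
        if len(chosen) == capacity:
            break
        if all(prs.resources.isdisjoint(c.resources) for c in chosen):
            chosen.append(prs)
    return chosen
```

Visiting candidates in priority order guarantees that, of any overlapping pair, the higher-priority signal is the one retained, which mirrors the overlap rule of Aspect 48.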
Aspect 52: The method of any of aspects 37 through 51 wherein the respective priorities are based at least in part on a type of an anchor node of the one or more anchor nodes, positioning reliability information associated with the anchor node, transmission parameters associated with the anchor node, a signal quality parameter associated with the anchor node, a mobility parameter of the anchor node, a type of a positioning reference signal, a band type for the positioning reference signal, or a combination thereof. Aspect 53: A method for wireless communication at an anchor node, comprising: transmitting, to a UE, a set of positioning reference signals over a set of resources; and transmitting, to the UE, an indication of respective priorities for one or more positioning reference signals of the set of positioning reference signals. Aspect 54: The method of aspect 53, the transmitting the set of positioning reference signals comprising: transmitting the set of positioning reference signals, the set of positioning reference signals scrambled according to respective scrambling sequences, wherein the respective scrambling sequences for the one or more positioning reference signals indicate the respective priorities for the one or more positioning reference signals. Aspect 55: The method of any of aspects 53 through 54, the transmitting the set of positioning reference signals comprising: transmitting the set of positioning reference signals, the set of positioning reference signals transmitted according to respective resource mapping patterns, wherein the respective resource mapping patterns for the one or more positioning reference signals indicate the respective priorities for the one or more positioning reference signals.
Aspect 56: The method of any of aspects 53 through 55, the transmitting the indication comprising: transmitting one or more control channels associated with the one or more positioning reference signals, wherein the one or more control channels indicate the respective priorities for the one or more positioning reference signals. Aspect 57: The method of aspect 56, wherein a control channel of the one or more control channels comprises sidelink control information having a priority field that indicates a priority of a positioning reference signal associated with the control channel. Aspect 58: The method of any of aspects 53 through 57, the transmitting the indication comprising: transmitting at least one control channel comprising a list associated with the one or more positioning reference signals, the list ordered according to the respective priorities. Aspect 59: The method of any of aspects 53 through 58, the transmitting the indication comprising: transmitting, to the UE, one or more broadcast signals indicating the respective priorities for the one or more positioning reference signals, wherein a payload or a sequence of the one or more broadcast signals indicates the respective priorities. Aspect 60: The method of any of aspects 53 through 59, the transmitting the indication comprising: transmitting a configuration message comprising a list of priorities, wherein the respective priorities for the one or more positioning reference signals are based at least in part on the list of priorities. 
Aspect 61: The method of any of aspects 53 through 60, the transmitting the indication comprising: transmitting one or more priority parameters that indicate the respective priorities for the one or more positioning reference signals, the respective priorities based at least in part on the set of resources, a resource of the set of resources, the anchor node, a reliability of a positioning reference signal of the one or more positioning reference signals, or a combination thereof. Aspect 62: The method of any of aspects 53 through 61, further comprising: receiving a report comprising a priority of at least one positioning reference signal of the set of positioning reference signals, wherein the priority of the at least one positioning reference signal is based at least in part on one or more parameters associated with the anchor node. Aspect 63: The method of any of aspects 53 through 62, further comprising: receiving a request for a priority of at least one positioning reference signal of the set of positioning reference signals. Aspect 64: The method of any of aspects 53 through 63, the transmitting the indication of respective priorities comprising: detecting a change in position at the anchor node. Aspect 65: The method of any of aspects 53 through 64 wherein the respective priorities are based at least in part on a type of the anchor node, positioning reliability information associated with the anchor node, transmission parameters associated with the anchor node, a signal quality parameter associated with the anchor node, a mobility parameter of the anchor node, a type of a positioning reference signal, a band type for the positioning reference signal, or a combination thereof. Aspect 66: The method of any of aspects 53 through 65 wherein the anchor node is a sidelink device. 
Aspect 67: An apparatus for wireless communication at a UE, comprising a processor; and memory coupled to the processor, the processor and memory configured to perform a method of any of aspects 37 through 52. Aspect 68: An apparatus for wireless communication at a UE, comprising at least one means for performing a method of any of aspects 37 through 52. Aspect 69: A non-transitory computer-readable medium storing code for wireless communication at a UE, the code comprising instructions executable by a processor to perform a method of any of aspects 37 through 52. Aspect 70: An apparatus for wireless communication at an anchor node, comprising a processor; and memory coupled to the processor, the processor and memory configured to perform a method of any of aspects 53 through 66. Aspect 71: An apparatus for wireless communication at an anchor node, comprising at least one means for performing a method of any of aspects 53 through 66. Aspect 72: A non-transitory computer-readable medium storing code for wireless communication at an anchor node, the code comprising instructions executable by a processor to perform a method of any of aspects 53 through 66. It should be noted that the methods described herein describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Further, aspects from two or more of the methods may be combined. Although aspects of an LTE, LTE-A, LTE-A Pro, or NR system may be described for purposes of example, and LTE, LTE-A, LTE-A Pro, or NR terminology may be used in much of the description, the techniques described herein are applicable beyond LTE, LTE-A, LTE-A Pro, or NR networks. 
For example, the described techniques may be applicable to various other wireless communications systems such as Ultra Mobile Broadband (UMB), Institute of Electrical and Electronics Engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDM, as well as other systems and radio technologies not explicitly mentioned herein. Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. The various illustrative blocks and components described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, a CPU, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. 
For example, due to the nature of software, functions described herein may be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that may be accessed by a general-purpose or special-purpose computer. By way of example, and not limitation, non-transitory computer-readable media may include RAM, ROM, electrically erasable programmable ROM (EEPROM), flash memory, compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that may be used to carry or store desired program code means in the form of instructions or data structures and that may be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of computer-readable medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. 
Combinations of the above are also included within the scope of computer-readable media. As used herein, including in the claims, “or” as used in a list of items (e.g., a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an example step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” The term “determine” or “determining” encompasses a wide variety of actions and, therefore, “determining” can include calculating, computing, processing, deriving, investigating, looking up (such as via looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” can include receiving (such as receiving information), accessing (such as accessing data in a memory) and the like. Also, “determining” can include resolving, selecting, choosing, establishing and other such similar actions. In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label, or other subsequent reference label. 
The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “example” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details for the purpose of providing an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples. The description herein is provided to enable a person having ordinary skill in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to a person having ordinary skill in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
11943157
DESCRIPTION OF EMBODIMENTS
(TCI, Spatial Relation, and QCL)
In NR, it has been studied to control, in the UE, reception processing (for example, at least one of reception, demapping, demodulation, and decoding) and transmission processing (for example, at least one of transmission, mapping, precoding, modulation, and coding) of at least one of a signal and a channel (expressed as a signal/channel) based on a transmission configuration indication state (TCI state). The TCI state may represent what is applied to a downlink signal/channel. The counterpart of the TCI state, applied to an uplink signal/channel, may be expressed as a spatial relation. The TCI state is information regarding a quasi-co-location (QCL) of the signal/channel, and may also be referred to as, for example, a spatial Rx parameter, spatial relation information (SRI), or the like. The TCI state may be configured in the UE for each channel or each signal. The QCL is an indicator of a statistical property of a signal/channel. For example, a case where one signal/channel and another signal/channel have a QCL relation may mean that it is possible to assume that at least one of Doppler shift, Doppler spread, average delay, delay spread, or a spatial parameter (for example, a spatial Rx parameter) is identical (in QCL with respect to at least one of these) between the plurality of different signals/channels. Note that the spatial Rx parameter may correspond to a reception beam of the UE (for example, a reception analog beam), and the beam may be specified based on spatial QCL. The QCL (or at least one element of the QCL) in the present disclosure may be replaced with spatial QCL (sQCL). A plurality of types of QCL (QCL types) may be defined. For example, four QCL types A to D with different parameters (or parameter sets) that can be assumed to be identical may be provided.
These parameters are as follows:
QCL type A: Doppler shift, Doppler spread, average delay, and delay spread;
QCL type B: Doppler shift and Doppler spread;
QCL type C: Doppler shift and average delay; and
QCL type D: spatial Rx parameter.
It may be referred to as a QCL assumption for the UE to assume that a given control resource set (CORESET), channel, or reference signal has a specific QCL (for example, QCL type D) relation with another CORESET, channel, or reference signal. The UE may determine at least one of a transmission beam (Tx beam) and a reception beam (Rx beam) of a signal/channel based on a TCI state of the signal/channel or the QCL assumption. The TCI state may be, for example, information regarding QCL of a target channel (or a reference signal (RS) for the channel) and another signal (for example, another downlink reference signal (DL-RS)). The TCI state may be configured (indicated) by higher layer signaling, physical layer signaling, or a combination thereof. In the present disclosure, the higher layer signaling may be any of, for example, radio resource control (RRC) signaling, medium access control (MAC) signaling, broadcast information, and the like, or a combination thereof. For the MAC signaling, for example, a MAC Control Element (MAC CE), a MAC Protocol Data Unit (PDU), or the like may be used. The broadcast information may be, for example, a master information block (MIB), a system information block (SIB), remaining minimum system information (RMSI), other system information (OSI), or the like. The physical layer signaling may be, for example, downlink control information (DCI).
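The four QCL types listed above can be captured as a simple lookup table. The snippet below merely restates that list in code form; the parameter names are illustrative identifiers.

```python
# The large-scale channel properties a UE may assume to be shared between
# two QCL'd signals/channels, per QCL type (restating the list above).
QCL_TYPES = {
    "A": {"doppler_shift", "doppler_spread", "average_delay", "delay_spread"},
    "B": {"doppler_shift", "doppler_spread"},
    "C": {"doppler_shift", "average_delay"},
    "D": {"spatial_rx_parameter"},
}

def shared_parameters(qcl_type):
    """Return the parameter set a UE may assume identical for this QCL type."""
    return QCL_TYPES[qcl_type]
```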
A channel for which a TCI state is configured (designated) may be, for example, at least one of a downlink shared channel (physical downlink shared channel (PDSCH)), a downlink control channel (physical downlink control channel (PDCCH)), an uplink shared channel (physical uplink shared channel (PUSCH)), and an uplink control channel (physical uplink control channel (PUCCH)). Further, RS (DL-RS) that has a QCL relation with the channel may be, for example, at least one of a synchronization signal block (SSB), a channel state information reference signal (CSI-RS), and a measurement reference signal (sounding reference signal (SRS)). Alternatively, the DL-RS may be a CSI-RS (also referred to as a tracking reference signal (TRS)) used for tracking or a reference signal (also referred to as a QRS) used for QCL detection. The SSB is a signal block including at least one of a primary synchronization signal (PSS), a secondary synchronization signal (SSS), and a broadcast channel (physical broadcast channel (PBCH)). The SSB may be referred to as an SS/PBCH block. An information element in a TCI state configured by higher layer signaling (“TCI-state IE” of RRC) may include one or more pieces of QCL information (“QCL-Info”). The QCL information may include at least one of information about DL-RS having a QCL relation (DL-RS-related information) and information indicating a QCL type (QCL type information). The DL-RS-related information may include information such as an index of the DL-RS (for example, an SSB index or a non-zero-power (NZP) CSI-RS resource identifier (ID)), an index of a cell where the RS is located, or an index of a bandwidth part (BWP) where the RS is located.
<TCI State for PDCCH>
Information regarding QCL between PDCCH (or a demodulation reference signal (DMRS) antenna port related to the PDCCH) and a given DL-RS may be referred to as a TCI state for the PDCCH.
The UE may determine a TCI state for UE-specific PDCCH (CORESET) based on higher layer signaling. In one example, for the UE, one or a plurality of (K) TCI states can be configured for each CORESET by RRC signaling. The UE may cause one of the plurality of TCI states configured by RRC signaling to be activated by the MAC CE for each CORESET. The MAC CE may be referred to as a TCI state indication for UE-specific PDCCH MAC CE. The UE may monitor CORESET based on an active TCI state corresponding to the CORESET.
<TCI State for PDSCH>
Information regarding QCL between a PDSCH (or a DMRS antenna port associated with the PDSCH) and a given DL-RS may be referred to as a TCI state for the PDSCH. The UE may be notified (configured) of M (M≥1) TCI states for PDSCHs (QCL information for M PDSCHs) by higher layer signaling. Note that the number M of TCI states configured for the UE may be limited by at least one of UE capability and a QCL type. The DCI used for scheduling of the PDSCH may include a given field (which may be referred to as, for example, a TCI field, a TCI state field, or the like) indicating the TCI state for the PDSCH. The DCI may be used for scheduling the PDSCH of one cell, and may be referred to as, for example, DL DCI, DL assignment, DCI format 1_0, or DCI format 1_1. Whether or not the TCI field is included in the DCI may be controlled by information provided from a base station to the UE. The information may be information (for example, TCI presence information, in-DCI TCI presence information, a higher layer parameter TCI-PresentInDCI) indicating whether the TCI field is present or absent in the DCI. The information may be configured in the UE by, for example, higher layer signaling. When more than eight types of TCI states are configured for the UE, MAC CE may be used to activate (or designate) eight or fewer TCI states. The MAC CE may be referred to as a TCI states activation/deactivation for UE-specific PDSCH MAC CE.
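The configure/activate/indicate chain described above (RRC configures the TCI states, a MAC CE activates up to eight of them for the PDSCH, and the DCI TCI field selects one of the activated states) can be sketched as below. The class and method names are illustrative only; the at-most-eight limit follows the text.

```python
class PdschTciStates:
    """Sketch of the RRC-configure / MAC-CE-activate / DCI-select chain."""
    MAX_ACTIVE = 8  # at most eight TCI states may be activated by MAC CE

    def __init__(self, configured_ids):
        self.configured = set(configured_ids)  # TCI state IDs from RRC
        self.active = []                       # ordered subset set by MAC CE

    def mac_ce_activate(self, state_ids):
        """Activate a subset of the RRC-configured TCI states."""
        if len(state_ids) > self.MAX_ACTIVE:
            raise ValueError("MAC CE may activate at most 8 TCI states")
        if not set(state_ids) <= self.configured:
            raise ValueError("cannot activate an unconfigured TCI state")
        self.active = list(state_ids)

    def from_tci_field(self, tci_field):
        """The DCI TCI field indexes into the MAC-CE-activated list."""
        return self.active[tci_field]
```

The same shape applies to the PDCCH case above, except that the MAC CE activates a single TCI state per CORESET rather than a list.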
A value of the TCI field in the DCI may indicate one of the TCI states activated by MAC CE. When the UE is configured with the TCI presence information set to “enabled” for the CORESET for scheduling the PDSCH (CORESET used for PDCCH transmission for scheduling PDSCH), the UE may assume that the TCI field is present in DCI format 1_1 of the PDCCH transmitted on the CORESET. When the TCI presence information is not configured for the CORESET for scheduling the PDSCH or the PDSCH is scheduled by DCI format 1_0, if the time offset between the reception of the DL DCI (the DCI for scheduling the PDSCH) and the reception of the PDSCH corresponding to the DCI is equal to or greater than the threshold, the UE may assume that the TCI state or the QCL assumption for the PDSCH is the same as the TCI state or the QCL assumption applied to the CORESET used for the PDCCH transmission for scheduling the PDSCH in order to determine the QCL of the PDSCH antenna port. When the TCI presence information is set to “enabled”, the TCI field in the DCI in the component carrier (CC) for scheduling (the PDSCH) indicates the activated TCI state in the scheduled CC or DL BWP, and when the PDSCH is scheduled by DCI format 1_1, the UE may use the TCI state according to the value of the TCI field in the detected PDCCH in order to determine the QCL of the PDSCH antenna port. When the time offset between the reception of the DL DCI (scheduling the PDSCH) and the PDSCH corresponding to the DCI (PDSCH scheduled by the DCI) is equal to or greater than the threshold, the UE may assume that the DM-RS port of the PDSCH of the serving cell is in QCL with the RS in the TCI state with respect to the QCL type parameter given by the indicated TCI state. When the UE is configured with a single-slot PDSCH, the indicated TCI state may be based on the activated TCI state in the slot with the scheduled PDSCH.
When a multi-slot PDSCH is configured in the UE, the indicated TCI state may be based on the activated TCI state in the first slot having the scheduled PDSCH, and the UE may expect that it is the same across the slots having the scheduled PDSCH. When the UE is configured with a CORESET associated with a search space set for cross-carrier scheduling, when the TCI presence information is set to “enabled” for the CORESET for the UE and at least one of the TCI states configured for the serving cell scheduled by a search space set includes QCL type D, the UE may assume that the time offset between the detected PDCCH and the PDSCH corresponding to the PDCCH is equal to or greater than the threshold. In both a case where the TCI information in the DCI (higher layer parameter TCI-PresentInDCI) is set to “enabled” and a case where the TCI information in the DCI is not configured in the RRC connection mode, when the time offset between reception of DL DCI (DCI for scheduling the PDSCH) and the corresponding PDSCH (PDSCH scheduled by the DCI) is less than the threshold, the UE may assume that the DM-RS port of the PDSCH of the serving cell is in QCL with the RS related to a QCL parameter used for QCL indication of the PDCCH of the CORESET that is associated with a monitored search space and has the minimum (lowest) CORESET-ID in the newest (latest) slot in which one or more CORESETs in an active BWP of the serving cell are monitored by the UE. The time offset between the reception of the DL DCI and the reception of the PDSCH corresponding to the DCI may be referred to as a scheduling offset. Further, the above-mentioned threshold may be referred to as a QCL time length, “timeDurationForQCL”, “Threshold”, “Threshold for offset between a DCI indicating a TCI state and a PDSCH scheduled by the DCI”, “Threshold-Sched-Offset”, a schedule offset threshold, a scheduling offset threshold, or the like.
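The scheduling-offset rule above can be summarized in a short sketch (illustrative Python; the function name, the symbol-based offset, and the dict of monitored CORESETs are assumptions of mine): when the offset reaches the threshold (“timeDurationForQCL”) the UE follows the TCI field in the DCI, otherwise it falls back to the TCI state of the CORESET with the lowest CORESET-ID in the latest monitored slot.

```python
def pdsch_qcl_assumption(offset_symbols, threshold_symbols, tci_in_dci,
                         dci_tci_state, coresets_in_latest_slot):
    """Return the TCI state / QCL assumption the UE applies to a PDSCH.

    coresets_in_latest_slot: dict mapping CORESET-ID -> TCI state of that
    CORESET (CORESETs associated with a monitored search space in the
    latest slot of the active BWP of the serving cell).
    """
    if tci_in_dci and offset_symbols >= threshold_symbols:
        # Offset at or above timeDurationForQCL: follow the TCI field in the DCI.
        return dci_tci_state
    # Below the threshold (or no TCI field): default to the TCI state of the
    # CORESET with the lowest CORESET-ID in the latest monitored slot.
    lowest_id = min(coresets_in_latest_slot)
    return coresets_in_latest_slot[lowest_id]
```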
The scheduling offset threshold may be based on the UE capability, and may be based on, for example, the delay caused by decoding the PDCCH and beam switching. The information of the scheduling offset threshold may be configured from the base station using higher layer signaling, or may be transmitted from the UE to the base station. For example, the UE may assume that the DMRS ports of the PDSCH are QCL with the DL-RS based on the TCI state activated for the CORESET corresponding to the lowest CORESET-ID. The latest slot may be, for example, a slot that receives the DCI for scheduling the PDSCH. Note that the CORESET-ID may be an ID (ID for identifying CORESET) configured by RRC information element “ControlResourceSet”. <Spatial Relation for PUCCH> The UE may be configured with a parameter (PUCCH configuration information, PUCCH-Config) used for PUCCH transmission by higher layer signaling (for example, radio resource control (RRC) signaling). The PUCCH configuration information may be configured for each partial band (for example, an uplink bandwidth part (BWP)) in a carrier (also referred to as a cell, a component carrier, and the like). The PUCCH configuration information may include a list of PUCCH resource set information (for example, PUCCH-ResourceSet) and a list of PUCCH spatial relation information (for example, PUCCH-SpatialRelationInfo). The PUCCH resource set information may include a list (for example, resourceList) of a PUCCH resource index (ID, for example, PUCCH-ResourceId). Further, when the UE does not have dedicated PUCCH resource configuration information (for example, a dedicated PUCCH resource configuration) provided by the PUCCH resource set information in the PUCCH configuration information (before RRC setup), the UE may determine the PUCCH resource set based on a parameter (for example, pucch-ResourceCommon) in the system information (for example, system information block type 1 (SIB1) or remaining minimum system information (RMSI)).
The PUCCH resource set may include 16 PUCCH resources. On the other hand, when the UE has the dedicated PUCCH resource configuration information (UE-dedicated uplink control channel configuration, dedicated PUCCH resource configuration) (after RRC setup), the UE may determine the PUCCH resource set according to the number of UCI information bits. The UE may determine one PUCCH resource (index) in the PUCCH resource set (for example, a cell-specific PUCCH resource set or a PUCCH resource set determined for individual UE) based on at least one of a value of a given field (for example, a PUCCH resource indicator field) in Downlink Control Information (DCI) (for example, the DCI format 1_0 or 1_1 used for PDSCH scheduling), the number of CCEs (NCCE) in a control resource set (CORESET) for reception of a PDCCH that carries the DCI, or an index (nCCE,0) of a head (first) CCE of the reception of the PDCCH. The PUCCH spatial relation information (for example, the RRC information element “PUCCH-spatialRelationInfo”) may indicate a plurality of candidate beams (spatial domain filters) for PUCCH transmission. The PUCCH spatial relation information may indicate a spatial relation between a reference signal (RS) and a PUCCH. The list of PUCCH spatial relation information may include several elements (PUCCH spatial relation information IE (information element)). Each piece of the PUCCH spatial relation information may include, for example, at least one of an index (ID, for example, pucch-SpatialRelationInfoId) of the PUCCH spatial relation information, an index (ID, for example, servingCellId) of the serving cell, and information related to the RS (reference RS) that has a spatial relation with the PUCCH. For example, the information regarding the RS may be the SSB index, the CSI-RS index (for example, NZP-CSI-RS resource configuration ID), or the SRS resource ID and the ID of the BWP.
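For the case before dedicated PUCCH resource configuration, the index into the 16-entry common PUCCH resource set can be sketched from the quantities named above (the PUCCH resource indicator value, NCCE, and nCCE,0). The formula below follows the Rel-15-style common-resource-set rule of TS 38.213 and should be treated as an illustrative assumption here, not a statement of this disclosure:

```python
from math import floor

def common_pucch_resource_index(delta_pri, n_cce_0, n_cce_total):
    """r_PUCCH (0..15) into the 16-entry common PUCCH resource set.

    delta_pri:   value of the PUCCH resource indicator field in the DCI.
    n_cce_0:     index of the first (head) CCE of the PDCCH reception.
    n_cce_total: number of CCEs (NCCE) in the CORESET of that PDCCH reception.
    """
    return floor(2 * n_cce_0 / n_cce_total) + 2 * delta_pri
```

After RRC setup, the dedicated configuration replaces this implicit mapping, and the resource set is instead chosen by the number of UCI bits.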
The SSB index, the CSI-RS index, and the SRS resource ID may be associated with at least one of a beam, a resource, and a port selected by measurement of a corresponding RS. The UE may be indicated, by a medium access control (MAC) control element (CE), with one of the one or more pieces of PUCCH spatial relation information (for example, PUCCH-SpatialRelationInfo or a candidate beam) in the list of the PUCCH spatial relation information. The MAC CE may be a MAC CE that activates or deactivates the PUCCH spatial relation information (PUCCH spatial relation information activation/deactivation MAC CE, PUCCH spatial relation information indication MAC CE). After 3 ms from transmission of an acknowledgment (ACK) for the MAC CE activating the given PUCCH spatial relation information, the UE may activate the PUCCH spatial relation information indicated by the MAC CE. The UE may control the transmission of the PUCCH based on the PUCCH spatial relation information activated by the MAC CE. Note that when a single piece of PUCCH spatial relation information is included in the list of the PUCCH spatial relation information, the UE may control the transmission of the PUCCH based on the PUCCH spatial relation information. <Spatial Relation for SRS and PUSCH> The UE may receive information (SRS configuration information, for example, a parameter in the RRC control element “SRS-Config”) used for transmission of a measurement reference signal (for example, a sounding reference signal (SRS)). Specifically, the UE may receive at least one of information related to one or a plurality of SRS resource sets (SRS resource set information, for example, the RRC control element “SRS-ResourceSet”) and information related to one or a plurality of SRS resources (SRS resource information, for example, the RRC control element “SRS-Resource”). One SRS resource set may be associated with a given number of SRS resources (a given number of SRS resources may be grouped).
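The MAC CE activation behavior described above — activation taking effect 3 ms after the ACK for the MAC CE, and a single configured entry being used without any indication — might be modeled as follows. The class name and the millisecond-based clock are my own simplifications, not standardized notions:

```python
class PucchSpatialRelationState:
    ACTIVATION_DELAY_MS = 3.0  # applies 3 ms after the ACK for the MAC CE

    def __init__(self, configured_ids):
        self.configured_ids = list(configured_ids)  # RRC-configured list
        self._pending = None  # (spatial_relation_id, ack_time_ms)
        self._active = None

    def mac_ce_indication(self, spatial_relation_id, ack_time_ms):
        # The MAC CE indicates one entry from the RRC-configured list.
        if spatial_relation_id not in self.configured_ids:
            raise ValueError("not in the RRC-configured list")
        self._pending = (spatial_relation_id, ack_time_ms)

    def active_at(self, now_ms):
        if self._pending and now_ms >= self._pending[1] + self.ACTIVATION_DELAY_MS:
            self._active = self._pending[0]
            self._pending = None
        # A single configured entry is used directly, without a MAC CE.
        if self._active is None and len(self.configured_ids) == 1:
            return self.configured_ids[0]
        return self._active
```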
Each SRS resource may be specified by an SRS resource identifier (SRS resource indicator (SRI)) or an SRS resource identifier (ID). The SRS resource set information may include information of an SRS resource set ID (SRS-ResourceSetId), a list of SRS resource IDs (SRS-ResourceId) used in the resource set, an SRS resource type (for example, one of periodic SRS, semi-persistent SRS, and aperiodic SRS (A-SRS)), and SRS usage. Here, the SRS resource type may indicate any one of a periodic SRS (P-SRS), a semi-persistent SRS (SP-SRS), and an aperiodic SRS (A-SRS). Note that the UE may transmit a P-SRS and an SP-SRS periodically (or periodically after activated), and transmit an A-SRS based on an SRS request in the DCI. Furthermore, the usage (“usage” of an RRC parameter and “SRS-SetUse” of the L1 (Layer-1) parameter) may be, for example, beam management (beamManagement), codebook based transmission (codebook: CB), non-codebook based transmission (nonCodebook: NCB), antenna switching (antennaSwitching), or the like. An SRS used for the codebook based transmission or the non-codebook based transmission may be used to determine a precoder for codebook-based or non-codebook-based PUSCH transmission based on an SRI. For example, in the case of codebook based transmission, the UE may determine the precoder for PUSCH transmission based on SRI, transmitted rank indicator (TRI) and transmitted precoding matrix indicator (TPMI). For the non-codebook based transmission, the UE may determine a precoder for PUSCH transmission based on the SRI. The SRS resource information may include an SRS resource ID (SRS-ResourceId), the number of SRS ports, an SRS port number, transmission Comb, SRS resource mapping (for example, time and/or frequency resource position, resource offset, resource periodicity, the number of repetitions, the number of SRS symbols, and SRS bandwidth), hopping related information, an SRS resource type, a sequence ID, and SRS spatial relation information.
The SRS spatial relation information (for example, the RRC information element “spatialRelationInfo”) may indicate spatial relation information between a given reference signal and the SRS. The given reference signal may be at least one of a Synchronization Signal/Physical Broadcast Channel (SS/PBCH) block, a Channel State Information Reference Signal (CSI-RS), or an SRS (for example, another SRS). The SS/PBCH block may be referred to as a synchronization signal block (SSB). The SRS spatial relation information may include at least one of an SSB index, a CSI-RS resource ID, and an SRS resource ID as an index of the given reference signal. Note that, in the present disclosure, an SSB index, an SSB resource ID, and an SSB resource indicator (SSBRI) may be replaced with each other. Furthermore, a CSI-RS index, a CSI-RS resource ID, and a CSI-RS resource indicator (CRI) may be replaced with each other. Further, an SRS index, an SRS resource ID and an SRI may be replaced with each other. The SRS spatial relation information may include a serving cell index and a BWP index (BWP ID) corresponding to the above-described given reference signal. In NR, the transmission of the uplink signal may be controlled based on the presence or absence of beam correspondence (BC). The BC may be, for example, a capability of a certain node (for example, a base station or UE) to determine a beam (transmission beam, Tx beam) used for signal transmission based on a beam (reception beam, Rx beam) used for signal reception. Note that BC may be referred to as transmission/reception beam correspondence (Tx/Rx beam correspondence), beam reciprocity, beam calibration, calibrated/non-calibrated, reciprocity calibrated/non-calibrated, a correspondence degree, a matching degree, or the like. 
As illustrated in FIG. 1, in BC, a gNB performs transmission beam sweeping by using beams B21 to B24, and the UE performs reception beam sweeping by using beams b1 to b4, so that the gNB and the UE determine the beam B22 of the gNB as a DL transmission beam and determine the beam b2 of the UE as a DL reception beam based on a measurement result. The gNB also uses the determined beam B22 as a UL reception beam and the UE also uses the determined beam b2 as a UL transmission beam. For example, when there is no BC, the UE may transmit an uplink signal (for example, PUSCH, PUCCH, SRS, and the like) using the same beam (spatial domain transmission filter) as the SRS (or the SRS resource) indicated from the base station based on the measurement results of one or more SRSs (or the SRS resources). On the other hand, when there is BC, the UE may transmit an uplink signal (for example, PUSCH, PUCCH, SRS, and the like) using a beam (spatial domain transmission filter) identical or corresponding to the beam (spatial domain reception filter) used for reception of a given SSB or CSI-RS (or CSI-RS resource). When spatial relation information regarding the SSB or CSI-RS and the SRS is configured for a certain SRS resource (for example, when there is BC), the UE may transmit the SRS resource by using the same spatial domain filter (spatial domain transmission filter) as a spatial domain filter (spatial domain reception filter) for receiving the SSB or CSI-RS. In this case, the UE may assume that the UE reception beam of the SSB or CSI-RS is the same as the UE transmission beam of the SRS.
When spatial relation information regarding another SRS (reference SRS) and the SRS (target SRS) is configured for a certain SRS (target SRS) resource (for example, when there is no BC), the UE may transmit the target SRS resource by using the same spatial domain filter (spatial domain transmission filter) as the spatial domain filter (spatial domain transmission filter) for transmitting the reference SRS. That is, in this case, the UE may assume that the UE transmission beam of the reference SRS is the same as the UE transmission beam of the target SRS. The UE may determine the spatial relation of the PUSCH scheduled by the DCI based on a value of a given field (for example, SRS resource identifier (SRI) field) in the DCI (for example, DCI format0_1). Specifically, the UE may use the spatial relation information (for example, the RRC information element “spatialRelationInfo”) of the SRS resource determined based on the value (for example, SRI) of the given field for the PUSCH transmission. (Method for Determining Spatial Relation) As described above, for the PDCCH or the PDSCH, the UE may be configured with a plurality of TCI states by the RRC and indicated with one of the plurality of TCI states by the MAC CE or the DCI. Therefore, the beam can be quickly switched without performing RRC reconfiguration. The maximum number of TCI states configurable by the RRC (maxNrofTCI-States) is 128 and the maximum number of TCI states for the PDCCH (maxNrofTCI-StatesPDCCH) is 64. For the PUCCH, the UE may be configured with eight spatial relations for one PUCCH resource by the RRC and indicated with one spatial relation by the MAC CE. In order to use a spatial relation other than the eight spatial relations configured by the RRC, RRC reconfiguration is required. When using codebook based transmission for the PUSCH, the UE may be configured with two SRS resources by the RRC and indicated with one of the two SRS resources by the DCI (1-bit field). 
When using non-codebook based transmission for the PUSCH, the UE may be configured with four SRS resources by the RRC and indicated with one of the four SRS resources by the DCI (2-bit field). In order to use a spatial relation other than the two or four spatial relations configured by the RRC, RRC reconfiguration is required. The DL-RS can be configured for the spatial relation of the SRS resource used for the PUSCH. For the SP-SRS, the UE may be configured with a spatial relation of a plurality of (for example, up to 16) SRS resources by the RRC and indicated with one of the plurality of SRS resources by the MAC CE. For the A-SRS and the P-SRS, the UE cannot be indicated with the spatial relation of the SRS resource by the MAC CE. As described above, there is a possibility that it is necessary to configure many spatial relation candidates at a time as the spatial relation for the UL transmission (PUCCH, PUSCH, or SRS). For example, in a case where the DL-RS (TCI state of DL) is used as the spatial relation of the UL transmission by beam correspondence, there is a possibility that many DL-RSs (for example, 32 SSBs) are configured. However, as described above, the number of spatial relation candidates that can be configured at a time for the UL transmission is limited, and is smaller than the number of TCI state candidates that can be configured at a time for the DL transmission. In order to use a spatial relation that is not configured for the UL transmission, it is conceivable to configure another spatial relation by RRC reconfiguration. When the RRC reconfiguration is performed, there is a possibility that an uncommunicable time occurs, resources are consumed, and the performance of the system deteriorates. 
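Tying together the <Spatial Relation for SRS and PUSCH> rules above, the choice of spatial domain transmission filter can be sketched as follows (illustrative Python; the tuple encoding of the reference RS and the filter dictionaries are my own assumptions): a DL reference (SSB or CSI-RS) reuses the UE's reception filter, while a reference SRS reuses its transmission filter.

```python
def srs_spatial_filter(reference, dl_rx_filters, srs_tx_filters):
    """Pick the spatial domain transmission filter for an SRS resource.

    reference:      ("ssb" | "csi-rs" | "srs", resource_id), taken from the
                    configured spatialRelationInfo.
    dl_rx_filters:  spatial domain reception filters used for SSB/CSI-RS.
    srs_tx_filters: spatial domain transmission filters of earlier SRS.
    """
    kind, res_id = reference
    if kind in ("ssb", "csi-rs"):
        # Beam correspondence case: reuse the DL reception filter.
        return dl_rx_filters[(kind, res_id)]
    # Reference SRS case: reuse the transmission filter of the reference SRS.
    return srs_tx_filters[res_id]
```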
Accordingly, the present inventors have conceived a method in which the UE assumes that the spatial relation of specific uplink transmission is the same as a transmission configuration indication (TCI) state or quasi-co-location (QCL) assumption of a specific downlink channel. Hereinafter, embodiments according to the present disclosure will be described in detail with reference to the drawings. A radio communication method according to each embodiment may be applied independently, or may be applied in combination with others. In the present disclosure, the spatial relation may be replaced with spatial relation information, spatial relation assumption, spatial domain transmission filter, UE spatial domain transmission filter, spatial domain filter, UE transmission beam, UL transmission beam, DL-RS, QCL assumption, SRI, spatial relation based on SRI, or the like. The TCI state may be replaced with a TCI state or a QCL assumption, a QCL assumption, a spatial domain reception filter, a UE spatial domain reception filter, a spatial domain filter, a UE reception beam, a DL reception beam, a DL-RS, or the like. The RS of QCL type D, the DL-RS associated with QCL type D, the DL-RS with QCL type D, a source of the DL-RS, the SSB, and the CSI-RS may be replaced with each other. In the present disclosure, the TCI state may be information (for example, a DL-RS, a QCL type, a cell in which a DL-RS is transmitted, or the like) about a reception beam (spatial domain reception filter) indicated (configured) for the UE. The QCL assumption is based on transmission or reception of an associated signal (for example, PRACH), and may be information (for example, a DL-RS, a QCL type, a cell in which a DL-RS is transmitted, or the like) about a reception beam (spatial domain reception filter) assumed by the UE. In the present disclosure, the PCell, the primary secondary cell (PSCell), and the special cell (SpCell) may be replaced with each other.
In the present disclosure, x or more and more than x may be replaced with each other. In the present disclosure, less than x and x or less may be replaced with each other. (Radio Communication Method) (Embodiment 1) The UE may use a default spatial relation or a spatial relation of a reference UL transmission as a spatial relation of specific UL transmission. The UE may assume (may consider) that the spatial relation of the specific UL transmission is the same as the RS of the default spatial relation or the RS of the spatial relation of the reference UL transmission. The specific UL transmission may be replaced with a specific UL signal or a specific UL channel, or may be replaced with at least one of a PUSCH, a PUCCH, an SRS, an SRS resource set (with usage information (usage) indicating codebook based transmission or non-codebook based transmission) whose usage is codebook based transmission (codebook) or non-codebook based transmission (nonCodebook), or an SRS resource in the SRS resource set whose usage is codebook based transmission or non-codebook based transmission. The spatial relation of the specific UL transmission, the RS of the spatial relation of the specific UL transmission, the spatial relation of the SRS configuration information, the PUCCH spatial relation information, the spatial relation of the PUSCH, the spatial relation information of the specific UL transmission, the RS of the spatial relation of the specific UL transmission, and the spatial domain transmission filter of the specific UL transmission may be replaced with each other. When the specific UL transmission is the PUSCH, the spatial relation of the specific UL transmission may be replaced with the SRI, the spatial relation of the SRI, and the spatial domain transmission filter.
The default spatial relation, a specific RS, the TCI state or the QCL assumption of a specific DL transmission, the RS related to the QCL parameter (QCL parameter) given by the TCI state or the QCL assumption of the specific DL transmission, and the RS of QCL type D in the TCI state or the QCL assumption of the specific DL transmission may be replaced with each other. The specific DL transmission may be replaced with at least one of a specific DL channel, a specific RS, a specific DL RS, the PDCCH, or the PDSCH. The reference UL transmission may be the UL transmission satisfying a given condition, the latest PUSCH transmission, the latest PUCCH transmission, the latest PRACH transmission, the latest SRS transmission, the latest UL transmission, or the latest transmission of at least one of PUSCH, PUCCH, PRACH, or SRS. As the RS of the spatial relation of the specific UL transmission for determining the UL transmission beam (spatial domain transmission filter), it is preferable to use the RS of QCL type D in the TCI state or QCL assumption of the specific DL transmission for determining the UE reception beam (spatial domain reception filter). In particular, when the TCI state or the QCL assumption of the specific DL transmission has both the RS of QCL type A and the RS of QCL type D and the RS of QCL type A and the RS of QCL type D are different from each other, it is preferable to use the RS of QCL type D in the TCI state or the QCL assumption of the specific DL transmission as the RS of the spatial relation of the specific UL transmission. For example, as described above, when the TCI state indicates the RS of QCL type A, which is the TRS of the serving cell (for example, SCell) in which the TCI state is configured and the RS of QCL type D, which is the CSI-RS of another serving cell (for example, PCell) in which repetition is configured, the RS of QCL type A and the RS of QCL type D are different from each other. 
Since it is conceivable that the parameter of QCL type A varies depending on the cell, the RS of QCL type A is preferably transmitted in the cell in which the TCI state is configured. On the other hand, the RS of QCL type D may be transmitted in a serving cell other than the cell in which the TCI state is configured. Note that the serving cell in which the TCI state is configured may be the PCell, and the serving cell in which the RS of QCL type D is transmitted may be the SCell. As illustrated in FIG. 2, the UE may use the RS of QCL type D in the TCI state (for example, DL-RS, spatial domain reception filter, spatial domain filter, or UE reception beam) of the specific DL transmission as the RS of the spatial relation (for example, DL-RS, spatial domain transmission filter, spatial domain filter, or UE transmission beam) of the specific UL transmission. <<Conditions for Applying Default Spatial Relation>> When the UE is implicitly or explicitly configured to use the default spatial relation for the spatial relation of the specific UL transmission, the UE may use the default spatial relation for the spatial relation of the specific UL transmission (may assume that the spatial relation of the specific UL transmission is the same as the default spatial relation). A case where the UE is implicitly configured to use the default spatial relation for the spatial relation of the specific UL transmission may be, for example, a case where the UE is not configured for the spatial relation of the specific UL transmission (for example, spatialRelationInfo and PUCCH-SpatialRelationInfo). A case where the UE is explicitly configured to use the default spatial relation for the spatial relation of the specific UL transmission may be a case where the specific parameter is configured by a specific higher layer parameter.
In frequency range 1 (FR1, a frequency of 6 GHz or less), the UE may not use analog beam forming for UL transmission, or may not be configured with a spatial relation for UL transmission. In frequency range 2 (FR2, a frequency higher than 6 GHz (or a frequency higher than 24 GHz)), the UE may assume that the spatial relation of the specific UL transmission is the same as the default spatial relation (the RS of the spatial relation of the specific UL transmission is the same as the RS of QCL type D in the TCI state of the specific DL transmission). In FR2, when the UE is implicitly or explicitly configured to use the default spatial relation for the spatial relation of the specific UL transmission, the UE may assume that the spatial relation of the specific UL transmission is the same as the default spatial relation. When the RS of QCL type D in the TCI state of the specific DL transmission can be applied, the UE may assume that the RS of the spatial relation of the specific UL transmission is the same as the RS of QCL type D in the TCI state of the specific DL transmission. When the RS of QCL type D in the TCI state of the specific DL transmission can be applied and it is implicitly or explicitly configured to use the default spatial relation for the spatial relation of the specific UL transmission, the UE may assume that the spatial relation of the specific UL transmission is the same as the default spatial relation. In FR2, when the RS of QCL type D in the TCI state of the specific DL transmission can be applied, the UE may assume that the RS of the spatial relation of the specific UL transmission is the same as the RS of QCL type D in the TCI state of the specific DL transmission.
In FR2, when the RS of QCL type D in the TCI state of the specific DL transmission can be applied and it is implicitly or explicitly configured to use the default spatial relation for the spatial relation of the specific UL transmission, the UE may assume that the RS of the spatial relation of the specific UL transmission is the same as the RS of QCL type D in the TCI state of the specific DL transmission. The specific UL transmission may be an SRS using an SRS resource set (or an SRS resource in the SRS resource set) whose usage is not beam management (beamManagement) (is codebook based transmission (codebook) or non-codebook based transmission (nonCodebook) or antenna switching (antennaSwitching)). When the UE is implicitly or explicitly configured to use the default spatial relation for the spatial relation of the SRS, the UE may assume that the spatial relation of the SRS is the same as the default spatial relation. In FR2, when the UE is implicitly or explicitly configured to use the default spatial relation for the spatial relation of the SRS, the UE may assume that the spatial relation of the SRS is the same as the default spatial relation. If the usage of the SRS resource set is beam management and the UE uses the default spatial relation for the spatial relation of the SRS, the same beam (default spatial relation) is used in the symbols of all the SRSs, and the beam of the SRS cannot be swept. By using the default spatial relation for the spatial relation of the SRS only when the usage of the SRS resource set is not beam management, the UE can still sweep the beam when the usage of the SRS resource set is beam management. When a given function after Rel. 16 is configured, the UE may assume that the spatial relation of the specific UL transmission is the same as the default spatial relation (the RS of the spatial relation of the specific UL transmission is the same as the RS of QCL type D in the TCI state of the specific DL transmission).
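The applicability conditions discussed above might be collected into one predicate. This is a minimal sketch under my own naming: it assumes FR2-only operation, treats "implicit" as the absence of configured spatial relation information, and excludes SRS resource sets whose usage is beam management so that beam sweeping remains possible.

```python
def default_spatial_relation_applies(fr, spatial_relation_configured,
                                     explicit_default_param, srs_usage=None):
    """True when the UE may assume the default spatial relation.

    fr:                           "FR1" or "FR2".
    spatial_relation_configured:  whether spatialRelationInfo /
                                  PUCCH-SpatialRelationInfo is configured.
    explicit_default_param:       whether a specific higher layer parameter
                                  explicitly enables the default relation.
    srs_usage:                    SRS resource set usage, if the UL is an SRS.
    """
    if fr != "FR2":
        return False
    if srs_usage == "beamManagement":
        # Using one default beam on every SRS symbol would defeat sweeping.
        return False
    implicit = not spatial_relation_configured
    return implicit or explicit_default_param
```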
When the given function is configured and it is implicitly or explicitly configured to use the default spatial relation for the spatial relation of the specific UL transmission, the UE may assume that the spatial relation of the specific UL transmission is the same as the default spatial relation. The given function may be a beam-related function after Rel. 16. The given function may be configured in the UE by higher layer signaling. The beam-related function may be at least one of low latency beam selection, Layer 1 (L1)-signal to interference plus noise ratio (SINR) beam reporting (L1-SINR beam reporting), BFR on a secondary cell (SCell) (BFR on SCell). The low latency beam selection may be referred to as fast beam selection, beam selection w/o TCI state, beam selection type II, TCI state indication type 2, or the like. The L1-SINR beam reporting may be that the UE reports a measurement result (CSI, L1-SINR corresponding to beam) of the L1-SINR for beam management. The BFR on SCell may be at least one of detecting a beam failure (BF) in the SCell, transmitting a beam failure recovery request (BFRQ) to the SCell, and receiving a beam failure recovery (BFR) response from the SCell. The UE may report specific UE capability (capability) information. The specific UE capability information may indicate support for the assumption that the spatial relation of the specific UL transmission is the same as the default spatial relation (the RS of the spatial relation of the specific UL transmission is the same as the RS of QCL type D in the TCI state of the specific DL transmission), or may indicate support for the given function described above. The specific UE capability information may be a parameter indicating support for the default spatial relation, or may be a parameter having a name indicating either the default spatial relation or the default spatial relation information (default spatial relation info). 
When reporting the specific UE capability information, the UE may assume that the spatial relation of the specific UL transmission is the same as the default spatial relation. When reporting the specific UE capability information and being implicitly or explicitly configured to use the default spatial relation for the spatial relation of the specific UL transmission, the UE may assume that the spatial relation of the specific UL transmission is the same as the default spatial relation. When not reporting the specific UE capability information, the UE may assume that the spatial relation of the specific UL transmission is configured. The UE that supports the default spatial relation may report UE capability information indicating that the default spatial relation is supported. The UE that supports the default spatial relation may report the UE capability information indicating a channel type supporting the default spatial relation. The channel type may be at least one of PUCCH, SRS, or PUSCH. The UE that supports the default spatial relation may report the UE capability information indicating a QCL source type supporting the default spatial relation. The QCL source type may be at least one of CORESET, PDCCH, or PDSCH. The UE that does not support the default spatial relation (for example, a UE that does not report to support the default spatial relation, a UE that reports not to support the default spatial relation) may use the spatial relation of the reference UL transmission instead of the default spatial relation. In other words, a UE that does not support the default spatial relation may assume that the spatial relation of the specific UL transmission is the same as the spatial relation of the reference UL transmission. By reporting the specific UE capability information, an overhead of notification (at least one of setting or activation) related to spatial relation information can be reduced. 
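The capability gating in this passage reduces to a simple selection (hypothetical helper; the parameter names are mine): a UE reporting the default-spatial-relation capability may use the default spatial relation, while a UE that does not report it falls back to the spatial relation of the reference UL transmission.

```python
def spatial_relation_source(reports_default_capability,
                            default_spatial_relation,
                            reference_ul_spatial_relation):
    # UEs reporting the specific capability may assume the default spatial
    # relation; others reuse the spatial relation of the reference UL
    # transmission (e.g. the latest PUSCH/PUCCH/PRACH/SRS transmission).
    if reports_default_capability:
        return default_spatial_relation
    return reference_ul_spatial_relation
```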
<<TCI State, QCL Assumption, or RS Used as Default Spatial Relation>> The default spatial relation may be the TCI state of the specific DL transmission or may be the QCL assumption of the specific DL transmission. This TCI state or QCL assumption may be explicitly configured (activated or indicated) to the UE by at least one of RRC signaling, MAC CE, or DCI, or may be determined by the UE based on measurement of SSB or CSI-RS. This TCI state or QCL assumption may be the RS used for the reference UL transmission. The default spatial relation may be replaced with an active TCI state (activated TCI state), an active TCI state or QCL assumption, a default TCI state, a default QCL assumption, or the like. A plurality of TCI states may be active with respect to the specific DL transmission. In this case, the default spatial relation may be the default TCI state (default RS, default TCI state or QCL assumption). The default TCI state may be replaced with the RS related to a QCL parameter used for QCL indication of the PDCCH of the CORESET associated with the monitored search space having the lowest CORESET-ID in the latest slot in which one or more CORESETs in the active BWP of the serving cell are monitored by the UE, may be replaced with the TCI state or the QCL assumption of the CORESET having the lowest CORESET-ID in the latest slot and associated with the monitored search space, may be replaced with the TCI state or the QCL assumption of the CORESET having the lowest CORESET-ID in a specific slot and associated with the monitored search space, may be replaced with the TCI state or the QCL assumption of a specific CORESET, may be replaced with the TCI state or the QCL assumption (for example, RS of QCL type D in TCI state or QCL assumption) of the DL transmission (alternatively, the DL channel that triggers the specific UL transmission, the DL channel that schedules the specific UL transmission, and the DL channel that schedules the DL channel corresponding to the 
specific UL transmission) corresponding to the specific UL transmission, or may be replaced with the RS related to a QCL parameter of the specific DL transmission (RS that is QCL with the specific DL transmission (for example, RS of QCL type D)). The specific slot may be the latest slot in the PDSCH reception or the latest slot in the specific UL transmission. The specific CORESET may be a CORESET indicated by higher layer signaling (for example, the spatial relation information of the specific UL transmission). The CORESET used for the default TCI state may include CORESET0 or may not include CORESET0. The default spatial relation may be a spatial relation of the reference UL transmission. The default spatial relation may be the PRACH resource used for the latest PRACH transmission or the RS (RS resource index, SSB index, CSI-RS resource index) corresponding to a PRACH occasion. When the specific UL transmission is the PUSCH of a certain cell, the reference UL transmission may be a PUCCH resource having the lowest ID in the active UL BWP of the cell, or may be a PUCCH resource group having the lowest ID in the active UL BWP of the cell. When the specific UL transmission is the PUCCH, the specific DL transmission may be a PDCCH (PDCCH that schedules a PDSCH corresponding to HARQ-ACK carried by the PUCCH) corresponding to the PUCCH or a PDSCH corresponding to HARQ-ACK carried by the PUCCH. When the specific UL transmission is the PUSCH, the specific DL transmission may be a PDCCH that schedules the PUSCH, a PDCCH that schedules the PDSCH corresponding to HARQ-ACK carried by the PUSCH, or a PDSCH corresponding to HARQ-ACK carried by the PUSCH. When the specific UL transmission is the A-SRS, the specific DL transmission may be a PDCCH that triggers the A-SRS. When the specific UL transmission is the UL transmission triggered by the MAC CE, such as the SP-SRS, the specific DL transmission may be a PDCCH that schedules the MAC CE, or may be a PDSCH that carries the MAC CE. 
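As a minimal illustration of the lowest-CORESET-ID rule described above, the following Python sketch selects the default TCI state from the CORESETs monitored in the latest slot of the active BWP. The function name, the tuple layout, and the string TCI identifiers are illustrative assumptions, not part of the embodiments:

```python
def default_tci_state(monitored_coresets):
    """monitored_coresets: (slot, coreset_id, tci_state) tuples for CORESETs
    associated with a monitored search space in the active BWP (assumed model)."""
    if not monitored_coresets:
        return None
    # the latest slot in which one or more CORESETs are monitored by the UE
    latest_slot = max(slot for slot, _, _ in monitored_coresets)
    in_latest = [c for c in monitored_coresets if c[0] == latest_slot]
    # among those, the CORESET with the lowest CORESET-ID supplies the
    # default TCI state (QCL source) for the spatial relation
    _, _, tci_state = min(in_latest, key=lambda c: c[1])
    return tci_state
```

Under this model, a CORESET monitored in an older slot never overrides one in the latest slot, even if its CORESET-ID is lower.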
For example, when the specific UL transmission is a PUCCH (or PUSCH) carrying HARQ-ACK, the specific DL transmission may be a PDCCH (PDCCH that schedules a PDSCH corresponding to the HARQ-ACK) indicating a resource of the PUCCH, or may be a PDSCH (used to generate the HARQ-ACK) corresponding to the HARQ-ACK. The UE may determine the default spatial relation used for a certain slot. The specific DL transmission may be the latest PDSCH. The specific DL transmission may be configured in the UE by higher layer signaling, or may be specified by specifications. The specific DL transmission may be a DL RS for pathloss measurement (for example, pathlossReferenceRS in SRS-ResourceSet in SRS-Config, PUCCH-PathlossReferenceRS in PUCCH-PowerControl in PUCCH-Config, and PUSCH-PathlossReferenceRS in PUSCH-PowerControl in PUSCH-Config). The DL RS for pathloss measurement may be a CSI-RS or an SSB. When the DL RS for pathloss measurement is configured by higher layer signaling, the UE may use the configured DL RS for pathloss measurement as the default spatial relation. When the DL RS for pathloss measurement is not configured for the UE by higher layer signaling, the UE may determine the ID (RS resource index qd) of the DL RS for pathloss measurement for PUSCH transmission and use the determined DL RS for pathloss measurement as the default spatial relation. When the default spatial relation is the TCI state or the QCL assumption, the DL RS for the spatial relation of the specific UL transmission and the DL RS for pathloss measurement for power control of the specific UL transmission may be different. By making the DL RS for the spatial relation of the specific UL transmission and the DL RS for pathloss measurement for power control of the specific UL transmission common, it is possible to appropriately perform power control of the specific UL transmission. 
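The pathloss-RS fallback just described can be sketched as follows; `configured_pathloss_rs` and `determined_qd` are hypothetical names for the higher-layer-configured DL RS for pathloss measurement and the UE-determined RS resource index qd, respectively:

```python
def default_spatial_relation_from_pathloss(configured_pathloss_rs, determined_qd):
    # If a DL RS for pathloss measurement is configured by higher layer
    # signaling, reuse it as the default spatial relation; otherwise fall
    # back to the RS the UE determines for PUSCH pathloss measurement (q_d).
    if configured_pathloss_rs is not None:
        return configured_pathloss_rs
    return determined_qd
```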
<<Time Offsets of DL and UL>> When the UE is implicitly or explicitly configured to use the default spatial relation for the spatial relation of the specific UL transmission, when the time offset between reception of the DCI (for example, DCI for scheduling specific DL transmission) and reception of the specific DL transmission is equal to or greater than the threshold, the UE may assume that the spatial relation of the specific UL transmission (for example, RS of the spatial relation) is the same as the TCI state or QCL assumption (for example, RS of QCL type D in TCI state or QCL assumption) applied to the CORESET used for PDCCH transmission scheduling the specific DL transmission. When the UE is implicitly or explicitly configured to use the default spatial relation for the spatial relation of the specific UL transmission, or when the UE is configured with a specific parameter by a specific higher layer parameter, when the time offset between reception of the DCI (for example, DCI for scheduling specific DL transmission) and reception of the specific DL transmission is less than the threshold, the UE may assume that the spatial relation of the specific UL transmission (for example, RS of the spatial relation) is the same as the default spatial relation. 
When the TCI presence information (for example, higher layer parameter TCI-PresentInDCI) is not configured for the CORESET for scheduling the PDSCH or when the PDSCH is scheduled by the DCI format 1_0, when the time offset between reception of the DL DCI (for example, DCI for scheduling the PDSCH) and reception of the PDSCH corresponding to the DCI is equal to or greater than the threshold, the UE may assume that the spatial relation (for example, RS of spatial relation) of the PUCCH (or PUSCH) carrying the HARQ-ACK for the PDSCH is the same as the TCI state or the QCL assumption (for example, RS of QCL type D in TCI state or QCL assumption) applied to the CORESET used for the PDCCH transmission for scheduling the PDSCH. When the TCI presence information is set to “enabled”, the TCI field in the DCI in the component carrier (CC) for scheduling the PDSCH indicates the activated TCI state in the scheduled CC or DL BWP, and when the PDSCH is scheduled by the DCI format 1_1, the UE may use the TCI state according to the value of the TCI field in the detected PDCCH with the DCI to determine the spatial relation of the PUCCH (or PUSCH) carrying the HARQ-ACK for the PDSCH. When the time offset between the reception of the DL DCI (scheduling the PDSCH) and the PDSCH corresponding to the DCI is equal to or greater than the threshold, the UE may assume that the spatial relation (for example, RS of the spatial relation) of the PUCCH (or PUSCH) carrying the HARQ-ACK for the PDSCH is QCL with the RS (for example, RS of QCL type D) in the TCI state with respect to the QCL type parameter given by the indicated TCI state (for example,FIG.3A). 
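The branching above (TCI presence information, DCI format, and the offset-versus-threshold test, including the below-threshold fallback to the default spatial relation) can be summarized in a sketch like the one below; all argument names and the string RS identifiers are assumptions for illustration:

```python
def harq_ack_spatial_relation(tci_present_in_dci, dci_format, time_offset,
                              threshold, indicated_tci_rs, coreset_tci_rs,
                              default_rs):
    """Return the RS assumed for the spatial relation of the PUCCH (or PUSCH)
    carrying HARQ-ACK for a scheduled PDSCH (illustrative model only)."""
    if tci_present_in_dci and dci_format == "1_1":
        # TCI field in the DCI indicates the activated TCI state
        if time_offset >= threshold:
            return indicated_tci_rs
        return default_rs  # below threshold: default spatial relation
    # TCI presence information not configured, or DCI format 1_0
    if time_offset >= threshold:
        return coreset_tci_rs  # follow the scheduling CORESET's TCI state
    return default_rs
```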
In both a case where the TCI presence information is set to “enabled” and a case where the TCI information in the DCI is not configured in the RRC connection mode, when the time offset between reception of DL DCI (DCI for scheduling the PDSCH) and the corresponding PDSCH (PDSCH scheduled by the DCI) is less than the threshold, the UE may assume that the spatial relation (for example, RS of spatial relation) of the PUCCH (or PUSCH) carrying the HARQ-ACK for the PDSCH is QCL with the RS related to a QCL parameter used for QCL indication of the PDCCH of the CORESET associated with a monitored search space and having the lowest CORESET-ID in a specific slot (for example, latest slot) in which one or more CORESETs in an active BWP of the serving cell are monitored by the UE (for example,FIG.3B), or may assume that the spatial relation of the PUCCH (or PUSCH) carrying the HARQ-ACK for the PDSCH is QCL with the RS related to a QCL parameter of the PDSCH (RS (for example, RS of QCL type D) that is QCL with the PDSCH (DM-RS port of the PDSCH, the antenna port of the PDSCH)). The specific slot may be the latest slot in the PDSCH (for example, the PDSCH corresponding to the HARQ-ACK carried by the specific UL transmission) corresponding to the specific UL transmission. In this case, by using the RS related to the QCL parameter associated with the CORESET of the latest slot for the PDSCH for the spatial relation of the specific UL transmission, the UE can make the beam (spatial domain reception filter) of the PDSCH and the beam (spatial domain transmission filter) of the specific UL transmission the same, avoid beam change processing, and suppress the processing load. The specific slot may be the latest slot in the specific UL transmission. 
In this case, by using the RS related to the QCL parameter associated with the CORESET of the latest slot for the specific UL transmission for the spatial relation of the specific UL transmission, the UE can make the beam (spatial domain reception filter) of the latest PDCCH and the beam (spatial domain transmission filter) of the specific UL transmission the same, avoid beam change processing, and suppress the processing load. <<Specific Example of Implicit or Explicit Configuration>> A case where the UE is implicitly or explicitly configured to use the default spatial relation for the spatial relation of the specific UL transmission may be at least one of the following cases 1 to 6. <<Case 1>> Case 1 may be a case where there is no specific field in the specific higher layer parameter (for example, the RRC information element) (information of the specific field is not configured in the specific higher layer parameter). The specific higher layer parameter may be SRS configuration information (SRS-Config), PUCCH configuration information (PUCCH-Config), or the like. When there is no specific field in the SRS resource information (SRS-Resource) in the SRS configuration information (SRS-Config), the UE may assume that the spatial relation of the specific UL transmission is the same as the default spatial relation. The specific field may be spatial relation information (spatialRelationInfo) that is a configuration of the spatial relation between a reference RS (for example, SSB, CSI-RS, or SRS) and a target SRS. In FR2, the UE may assume that when an SRS resource in the SRS resource set whose usage is codebook based transmission or non-codebook based transmission does not include spatial relation information, the spatial relation with respect to the SRS resource is the same as the default spatial relation. 
When it is indicated that the SRS resource set information (SRS-ResourceSet) in the SRS configuration information (SRS-Config) is used for codebook based transmission or non-codebook based transmission (the usage in the SRS resource set information indicates codebook based transmission (codebook) or non-codebook based transmission (nonCodebook)), and there is no specific field in the SRS resource information (SRS-Resource) indicating the SRS resource in the SRS resource set, the UE may assume that the RS of the spatial relation of the PUSCH is the same as the RS of QCL type D in the active TCI state of the specific DL transmission. The specific field may be spatial relation information (spatialRelationInfo). If the usage in the SRS resource set information indicates codebook based transmission or non-codebook based transmission, and there is no specific field in SRS resource information (SRS-Resource) indicating the SRS resource in the SRS resource set, the UE may assume that the RS of the spatial relation of a PUSCH is the same as the RS of the QCL type D in the active TCI state of the specific DL transmission. The specific field may be spatial relation information (spatialRelationInfo). When there is no specific field in the PUCCH configuration information (PUCCH-Config), the UE may assume that the RS of the spatial relation of the PUCCH is the same as the RS of QCL type D in the active TCI state of the specific DL transmission. The specific field may be an element of a list (spatialRelationInfoToAddModList). The element may be PUCCH spatial relation information (PUCCH-SpatialRelationInfo) used to configure a spatial setting for PUCCH transmission. <<Case 2>> Case 2 may be a case where the specific higher layer parameter is not configured. The specific higher layer parameter may be a specific RRC information element or a higher layer parameter (for example, spatialRelationInfo, PUCCH-SpatialRelationInfo) of the spatial relation information. 
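The absence checks of Case 1 (and, analogously, the later case where the spatial relation field exists but carries no reference RS) amount to a fallback of the following shape; the dictionary layout mirrors the RRC field names but is an illustrative assumption:

```python
def srs_spatial_relation_rs(srs_resource, default_rs):
    # Case 1: no spatialRelationInfo field in the SRS resource configuration
    info = srs_resource.get("spatialRelationInfo")
    if info is None:
        return default_rs
    # Field present but providing no reference RS: same fallback
    reference_rs = info.get("referenceSignal")
    if reference_rs is None:
        return default_rs
    # Otherwise the configured reference RS governs the spatial relation
    return reference_rs
```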
The SRS parameter (a higher layer parameter (spatialRelationInfo) of the spatial relation information that is a configuration of the spatial relation between the reference RS and the target SRS) may be semi-statically configurable by a higher layer parameter (SRS-Resource) of the SRS resource. In a case where the higher layer parameter spatialRelationInfo is configured, it may include the ID of the reference RS. The reference RS may be an SS/PBCH block, a CSI-RS, or an SRS. When there is a higher layer parameter (servingCellId) of the serving cell ID, the CSI-RS may be configured on the serving cell indicated thereby. The SRS may be configured on the UL BWP indicated by the higher layer parameter (uplinkBWP) of the UL BWP, may be configured on the serving cell indicated by the higher layer parameter (servingCellId) of the serving cell ID when such exists, or may be configured on the same serving cell as the target SRS when such does not exist. When the higher layer parameter spatialRelationInfo is not configured, the UE may assume that the RS of the spatial relation is the same as the RS of QCL type D in the active TCI state of the specific DL transmission. When the higher layer parameter spatialRelationInfo is not configured, the UE may assume that the RS of the spatial relation is the same as the RS of QCL type D in the active TCI state of the specific DL transmission or the RS of QCL type D in the TCI state or QCL assumption of the CORESET having the lowest CORESET-ID in the latest slot and associated with the monitored search space. <<Case 3>> The case 3 may be a case where the specific RS is not configured in the specific higher layer parameter (the specific higher layer parameter does not include the specific RS, and the specific higher layer parameter does not provide the specific RS). 
The specific higher layer parameter may be SRS configuration information (SRS-Config), spatial relation information (spatialRelationInfo), PUCCH configuration information (PUCCH-Config), PUCCH spatial relation information (PUCCH-SpatialRelationInfo), or the like. The specific RS may be any of an SRS, an SSB, and a CSI-RS. When the specific RS is not configured in the specific higher layer parameter, none of the SRS, the SSB, and the CSI-RS may be configured in the specific higher layer parameter. When a specific RS is not configured in the SRS resource information (SRS-Resource) in the SRS configuration information (SRS-Config), the UE may assume that the RS of the spatial relation of the specific UL transmission is the same as the RS of QCL type D in the active TCI state of the specific DL transmission. The specific RS may be an RS (referenceSignal) in the spatial relation information (spatialRelationInfo). In FR2, when an SRS resource set (or the SRS resource in the SRS resource set) whose usage is codebook based transmission or non-codebook based transmission does not include the specific RS, the UE may assume that the spatial relation with respect to the SRS resource set (or the SRS resource in the SRS resource set) is the same as the default spatial relation. When it is indicated that the SRS resource set information (SRS-ResourceSet) in the SRS configuration information (SRS-Config) is used for codebook based transmission or non-codebook based transmission (the usage in the SRS resource set information indicates codebook based transmission or non-codebook based transmission), and the specific RS is not configured in the SRS resource information (SRS-Resource) indicating the SRS resource in the SRS resource set, the UE may assume that the RS of the spatial relation of the PUSCH is the same as the RS of QCL type D in the active TCI state of the specific DL transmission. 
The specific RS may be an RS (referenceSignal) in the spatial relation information (spatialRelationInfo). When the specific RS is not configured in the PUCCH configuration information (PUCCH-Config), the UE may assume that the RS of the spatial relation of the PUCCH is the same as the RS of QCL type D in the active TCI state of the specific DL transmission. The specific RS may be an RS (referenceSignal) in the PUCCH spatial relation information (PUCCH-SpatialRelationInfo). In a case where the PUCCH spatial relation information does not include the specific RS but includes information for PUCCH power control (for example, pucch-PathlossReferenceRS-Id, p0-PUCCH-Id, closedLoopIndex), the UE can perform PUCCH power control based on the PUCCH spatial relation information. <<Case 4>> Case 4 may be a case where the specific higher layer parameter for a specific type is not configured. The specific type may be at least one of the P-SRS, the SP-SRS, and the A-SRS, or may be specified by a higher layer parameter (resourceType) of the resource type in the SRS resource information. <<<P-SRS>>> A case where the SRS resource information (SRS-Resource) indicates the P-SRS to the UE in which one or more SRS resource configurations are configured (a case where the higher layer parameter (resourceType) of the resource type in the SRS resource information indicates “periodic”) will be described. In a case where the UE is configured with the higher layer parameter spatialRelationInfo including the ID (ssb-Index) of a reference SS/PBCH block, the UE may transmit the target SRS resource having the same spatial domain transmission filter as that used to receive the reference SS/PBCH block. 
In a case where the UE is configured with the higher layer parameter spatialRelationInfo including the ID (csi-RS-Index) of a reference CSI-RS, the UE may transmit the target SRS resource having the same spatial domain transmission filter as that used to receive a reference periodic CSI-RS or a reference semi-persistent CSI-RS. In a case where the UE is configured with the higher layer parameter spatialRelationInfo including the ID (srs) of a reference SRS, the UE may transmit the target SRS resource having the same spatial domain transmission filter as that used to transmit a reference P-SRS. When the higher layer parameter spatialRelationInfo is not configured, the UE may assume that the RS of the spatial relation of the specific UL transmission is the same as the RS of QCL type D in the active TCI state of the specific DL transmission. When the higher layer parameter spatialRelationInfo is not configured, the UE may assume that the RS of the spatial relation of the specific UL transmission is the same as the RS of QCL type D in the TCI state or QCL assumption of the CORESET having the lowest CORESET-ID in the latest slot and associated with the monitored search space. <<<SP-SRS>>> A case where the SRS resource information (SRS-Resource) indicates the SP-SRS to the UE in which one or more SRS resource configurations are configured (a case where the higher layer parameter (resourceType) of the resource type in the SRS resource information indicates “semi-persistent”) will be described. When the UE receives an activation command for the SRS resource and when the HARQ-ACK corresponding to the PDSCH carrying a selection command is transmitted in a slot n, the corresponding operation and the UE's assumption on the SRS transmission corresponding to the configured SRS resource set may be applied from a slot n+3N+1 (where N is the number of slots in the subframe). 
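The activation timing just described (application from slot n+3N+1, with N the SCS-dependent number of slots per subframe) can be computed as below; treating slot indices as plain integers is a simplifying assumption:

```python
def sp_srs_application_slot(harq_ack_slot, slots_per_subframe):
    # HARQ-ACK for the PDSCH carrying the command is transmitted in slot n;
    # the activated SP-SRS configuration applies from slot n + 3N + 1.
    return harq_ack_slot + 3 * slots_per_subframe + 1
```

For example, with 15 kHz SCS (N = 1) an activation acknowledged in slot 10 applies from slot 14; with 120 kHz SCS (N = 8) it applies from slot 35.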
The activation command may include a spatial relation assumption provided by a list of references to one reference signal ID per element of the activated SRS resource set. Each ID in the list may refer to a reference SS/PBCH block, NZP CSI-RS resource, or SRS resource. The reference NZP CSI-RS resource may be an NZP CSI-RS resource configured on the serving cell indicated by a resource serving cell ID field if such exists in the activation command, or may be an NZP CSI-RS resource configured on the same serving cell as the SRS resource set if such does not exist. The reference SRS resource may be an SRS resource configured on the serving cell and the UL BWP indicated by a resource serving cell ID and a resource BWP ID if such exist in the activation command, or may be an SRS resource configured on the same serving cell and the BWP as the SRS resource set if such do not exist. In a case where the UE is configured with the higher layer parameter spatialRelationInfo including the ID (ssb-Index) of a reference SS/PBCH block, the UE may transmit the target SRS resource having the same spatial domain transmission filter as that used to receive the reference SS/PBCH block. In a case where the UE is configured with the higher layer parameter spatialRelationInfo including the ID (csi-RS-Index) of a reference CSI-RS, the UE may transmit the target SRS resource having the same spatial domain transmission filter as that used to receive a reference periodic CSI-RS or a reference semi-persistent CSI-RS. In a case where the UE is configured with the higher layer parameter spatialRelationInfo including the ID (srs) of a reference SRS, the UE may transmit the target SRS resource having the same spatial domain transmission filter as that used to transmit a reference P-SRS or a reference SP-SRS. 
When no higher layer parameter spatialRelationInfo is configured or when no higher layer parameter spatialRelationInfo is activated, the UE may assume that the RS of the spatial relation of the specific UL transmission is the same as the RS of QCL type D in the active TCI state of the specific DL transmission. When no higher layer parameter spatialRelationInfo is configured or no higher layer parameter spatialRelationInfo is activated, the UE may assume that the RS of the spatial relation of the specific UL transmission is the same as the RS of QCL type D in the TCI state or QCL assumption of the CORESET having the lowest CORESET-ID in the latest slot and associated with the monitored search space. <<<A-SRS>>> A case where the SRS resource information (SRS-Resource) indicates the A-SRS to the UE in which one or more SRS resource configurations are configured (a case where the higher layer parameter (resourceType) of the resource type in the SRS resource information indicates “aperiodic”) will be described. In a case where the UE is configured with the higher layer parameter spatialRelationInfo including the ID (ssb-Index) of a reference SS/PBCH block, the UE may transmit the target SRS resource having the same spatial domain transmission filter as that used to receive the reference SS/PBCH block. In a case where the UE is configured with the higher layer parameter spatialRelationInfo including the ID (csi-RS-Index) of the reference CSI-RS, the UE may transmit the target SRS resource having the same spatial domain transmission filter as that used to receive a reference periodic (P)-CSI-RS, a reference semi-persistent (SP)-CSI-RS, or the latest reference aperiodic (A)-CSI-RS. 
In a case where the UE is configured with the higher layer parameter spatialRelationInfo including the ID (srs) of a reference SRS, the UE may transmit the target SRS resource having the same spatial domain transmission filter as that used to transmit a reference P-SRS, a reference SP-SRS, or a reference A-SRS. When the higher layer parameter spatialRelationInfo is not configured, the UE may assume that the RS of the spatial relation of the specific UL transmission is the same as the RS of QCL type D in the active TCI state of the specific DL transmission. When the higher layer parameter spatialRelationInfo is not configured, the UE may assume that the RS of the spatial relation of the specific UL transmission is the same as the RS of QCL type D in the TCI state or QCL assumption of the CORESET having the lowest CORESET-ID in the latest slot and associated with the monitored search space. When the higher layer parameter spatialRelationInfo is not configured, the UE may assume that the RS of the spatial relation of the specific UL transmission is the same as the RS of QCL type D in the TCI state or QCL assumption of the PDCCH that triggers the A-SRS. <<Case 5>> Case 5 may be a case where the SRS resource or the SRS resource set for the PUSCH or the SRS does not provide the RS of the spatial relation. This SRS resource set may be an SRS resource set with usage that is not beam management (beamManagement) (is codebook based transmission (codebook), non-codebook based transmission (nonCodebook), or antenna switching (antennaSwitching)). Case 5 may be a case where the SRS resource indicated by the SRI field in the DCI format 0_1 for scheduling the PUSCH does not provide the RS of the spatial relation. 
The case where the SRS resource does not provide the RS of the spatial relation may be a case where the spatial relation information (for example, spatialRelationInfo and SRS-SpatialRelationInfo) is not provided by the SRS resource (for example, SRS-Resource), a case where the reference signal (for example, referenceSignal, ssb-Index, csi-RS-Index, srs) is not provided by the spatial relation information in the SRS resource, or a case where it is configured by the SRS resource that the spatial relation is the default spatial relation (RS of the default spatial relation). For example, it is assumed that the SRS resource set includes SRS resources #0 and #1, the SRS resource #0 does not include the spatial relation information, and the SRS resource #1 includes the spatial relation information. When the SRS resource #0 is indicated by the SRI field in the DCI format 0_1 for scheduling the PUSCH, the UE may use the default spatial relation for the spatial relation of the PUSCH. When the SRS resource #1 is indicated by the SRI field in the DCI format 0_1 for scheduling the PUSCH, the UE may use the spatial relation information of the SRS resource #1 for the spatial relation of the PUSCH. For example, it is assumed that the SRS resource set includes one SRS resource #0 and the SRS resource #0 does not include the spatial relation information. The UE may use the default spatial relation for the spatial relation of the PUSCH. In this case, the DCI (DCI format 0_1 or 0_0) for scheduling the PUSCH may not include the SRI field (may be a DCI format 0_0, or may be a DCI format 0_1 in which the size of the SRI field is 0 bits). Case 5 may be a case where at least one SRS resource in the SRS resource set does not provide the RS of the spatial relation. 
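The SRS resource #0/#1 example above corresponds to a selection of the following shape, where `spatialRelationRS` is a hypothetical field holding the RS of the spatial relation (None when the resource does not provide one):

```python
def pusch_spatial_relation_from_sri(srs_resources, sri, default_rs):
    # The SRI field in the DCI scheduling the PUSCH indexes an SRS
    # resource in the SRS resource set.
    resource = srs_resources[sri]
    rs = resource.get("spatialRelationRS")
    # A resource without a spatial-relation RS falls back to the default.
    return rs if rs is not None else default_rs
```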
When the SRS resource or the SRS resource set for the PUSCH or the SRS does not provide the RS of the spatial relation, the UE may assume that the spatial relation with respect to the SRS resource or the SRS resource set is the same as the default spatial relation. In FR2, when the SRS resource or the SRS resource set for the PUSCH or the SRS does not provide the RS of the spatial relation, the UE may assume that the spatial relation with respect to the SRS resource or the SRS resource set is the same as the default spatial relation. Case 5 may be a case where an indicated SRS resource in an SRS resource set with usage that is not beam management (beamManagement) (is codebook based transmission (codebook), non-codebook based transmission (nonCodebook), or antenna switching (antennaSwitching)) does not provide the RS of the spatial relation, or may be a case where at least one SRS resource in an SRS resource set with usage that is not beam management does not provide the RS of the spatial relation. In this case, the UE may assume that the spatial relation of all the SRS resources in the SRS resource set is the same as the default spatial relation. In this case, the size of the SRI field in the DCI format 0_1 for scheduling the PUSCH may be log2(the number of SRS resources in the SRS resource set) bits, 0 bits, or log2(the number of SRS resources in which the RS of the spatial relation is configured in the SRS resource set) bits. <<Case 6>> Case 6 may be a case where a specific parameter (information regarding the TCI state or the QCL assumption) is configured by the specific higher layer parameter (case where the specific higher layer parameter indicates the specific parameter, or the specific higher layer parameter includes a field of the specific parameter). 
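Reading log2(·) above as the usual ceil(log2(·)) bit-width (an interpretive assumption), the SRI field sizes mentioned can be computed as:

```python
import math

def sri_field_bits(num_srs_resources):
    # 0 bits when there is at most one selectable SRS resource;
    # otherwise ceil(log2(N)) bits are needed to index N resources.
    if num_srs_resources <= 1:
        return 0
    return math.ceil(math.log2(num_srs_resources))
```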
The specific higher layer parameter may be SRS configuration information (SRS-Config), PUCCH configuration information (PUCCH-Config), spatial relation information (for example, spatialRelationInfo, PUCCH-SpatialRelationInfo), reference signal information (referenceSignal) in the spatial relation information, a type in the spatial relation information, or the like. In addition, the specific parameter may be one of options of the reference signal information or type. The specific parameter may be a parameter (for example, a TCI state) indicating that the TCI state of the specific DL transmission is used for the spatial relation of the specific UL transmission, a parameter (for example, default) indicating that the RS of the spatial relation of the specific UL transmission is the default spatial relation, a parameter (for example, CORESET) indicating that the spatial relation of the specific UL transmission is the same as the TCI state of the CORESET, or a parameter (for example, ControlRS) indicating that the RS of the spatial relation of the specific UL transmission is the same as the RS of QCL type D in the TCI state of the specific DL transmission. For example, when a CORESET is configured by the spatial relation information (when the spatial relation information indicates the CORESET, when the spatial relation information includes a field of the CORESET), the UE may assume that the RS of the spatial relation of the specific UL transmission is the same as the RS of QCL type D in the TCI state of the specific DL transmission. When a specific parameter is configured by the SRS resource information (SRS-Resource) in the SRS configuration information (SRS-Config), the UE may assume that the RS of the spatial relation of the specific UL transmission is the same as the RS of QCL type D in the active TCI state of the specific DL transmission. 
In FR2, when an SRS resource set (or an SRS resource in the SRS resource set) whose usage is codebook based transmission or non-codebook based transmission includes the specific parameter, the UE may assume that the spatial relation with respect to the SRS resource set (or the SRS resource in the SRS resource set) is the same as the default spatial relation. When it is indicated that the SRS resource set information (SRS-ResourceSet) in the SRS configuration information (SRS-Config) is used for codebook based transmission or non-codebook based transmission (the usage in the SRS resource set information indicates codebook based transmission or non-codebook based transmission), and a specific parameter is configured by the SRS resource information (SRS-Resource) (or spatial relation information (spatialRelationInfo)) indicating the SRS resource in the SRS resource set, the UE may assume that the RS of the spatial relation of the PUSCH is the same as the RS of QCL type D in the active TCI state of the specific DL transmission. When a specific parameter is configured by the PUCCH configuration information (PUCCH-Config), the UE may assume that the RS of the spatial relation of the PUCCH is the same as the RS of QCL type D in the active TCI state of the specific DL transmission. The specific parameter may be in an element of the list (spatialRelationInfoToAddModList). The element may be PUCCH spatial relation information (PUCCH-SpatialRelationInfo) used to configure a spatial setting for PUCCH transmission. When a CORESET is configured by the PUCCH configuration information (PUCCH-Config), the UE may assume that the RS of the spatial relation of the PUCCH is the same as the RS of QCL type D in the TCI state of the CORESET. <<Effects>> According to Embodiment 1 described above, when the active TCI state of the specific DL transmission is updated by the MAC CE or the DCI, the spatial relation of the specific UL transmission can be updated. 
Since there is no need to perform RRC reconfiguration and the spatial relation of the specific UL transmission can be quickly controlled, the communication characteristics of the specific UL transmission can be enhanced. In addition, since the base station does not require configuration and activation of the spatial relation information, signaling overhead for the spatial relation and communication interruption can be avoided. In the UE capability information, it has been studied that the maximum number, per CC and per BWP, of the sum of active spatial relations for indicating a spatial domain transmission filter for the SRS, the PUCCH, and the PUSCH, which are (aperiodic NZP CSI-RS) unique DL-RSs, SRSs without spatial relation configuration, and TCI states available for DCI triggering of aperiodic NZP CSI-RS, is at least one. Furthermore, it has been studied to support one additional active spatial relation for the PUCCH when the maximum number of active spatial relations is 1. According to Embodiment 1, the sum of active spatial relations can be kept at 1, and the UE can operate in line with this UE capability information.

Embodiment 2

The UE may compare the time offset related to the specific UL transmission with a threshold. The UE may determine the spatial relation of the specific UL transmission based on a result of the comparison. The time offset may be a time between a timing related to the specific UL transmission and the specific UL transmission. The threshold may be replaced with a time length for QCL (timeDurationForQCL), a time offset threshold, a threshold for an offset between DCI indicating a TCI state and a PDSCH scheduled by the DCI, a scheduling offset threshold (Threshold-Sched-Offset), or the like.
The threshold may be configured (notified) for the UE by higher layer signaling, may be reported from the UE in the UE capability information, may be the same as a threshold for a time offset for the TCI state of the PDSCH or the A-CSI-RS, or may be specified by the specifications. Different thresholds may be set, notified, reported, or defined depending on the subcarrier spacing (SCS). The threshold may be represented by the number of symbols or by time (for example, milliseconds (ms)). The UE may apply the default spatial relation to the specific UL transmission based on a result of comparison between the time offset and the threshold. The default spatial relation may be the default spatial relation in Embodiment 1. The conditions for applying the default spatial relation may include a condition for applying the default spatial relation in Embodiment 1. The UE may use one of the following time offsets 1 to 3 for the specific UL transmission.

<<Time Offset 1>>

The time offset may be a time between reception of DCI (PDCCH) related to the specific UL transmission and the specific UL transmission. When the specific UL transmission is the A-SRS, the time offset may be the time between reception of the DCI triggering the A-SRS and transmission of the A-SRS (FIG. 4A). When the specific UL transmission is the PUSCH, the time offset may be a time between reception of the DCI for scheduling the PUSCH and transmission of the PUSCH (FIG. 4B). The UE may use this time offset when referring to the spatial relation of the A-SRS indicated by the SRI in the DCI. When the specific UL transmission is the PUSCH, the time offset may be a time between reception of the DCI that triggers the A-SRS and transmission of the PUSCH in which the spatial relation of the A-SRS is specified by the SRI in the DCI for scheduling the PUSCH (FIG. 4B).
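Because the threshold may be represented in symbols while the SCS differs per carrier, the same symbol count corresponds to different absolute times. The sketch below illustrates this dependency, assuming normal cyclic prefix and ignoring the slightly longer first symbol per half-subframe; the function names are illustrative assumptions.

```python
# Illustrative sketch: an OFDM symbol lasts roughly 1/(14 * 2^mu) ms for
# SCS = 15 * 2^mu kHz (14 symbols per slot, 2^mu slots per ms), so a
# threshold comparison in absolute time depends on the SCS.

def symbol_duration_ms(scs_khz):
    """Approximate OFDM symbol duration in ms for the given SCS."""
    mu = {15: 0, 30: 1, 60: 2, 120: 3}[scs_khz]  # numerology index
    return 1.0 / (14 * (2 ** mu))

def offset_meets_threshold(offset_symbols, threshold_ms, scs_khz):
    """True if a time offset given in symbols is >= a threshold given in ms."""
    return offset_symbols * symbol_duration_ms(scs_khz) >= threshold_ms
```

For example, 28 symbols span 2 ms at 15 kHz SCS but only 0.25 ms at 120 kHz SCS, so the same symbol-count offset can fall on either side of a fixed millisecond threshold.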
When the specific UL transmission is the PUCCH, the time offset may be a time between reception of DCI (for example, DCI indicating a resource of the PUCCH or DCI for scheduling a PDSCH corresponding to HARQ-ACK carried by the PUCCH) corresponding to the PUCCH and transmission of the PUCCH.

<<Time Offset 2>>

The time offset may be a time between a MAC CE (activation MAC CE, activation/deactivation MAC CE) for activation of the spatial relation information and the specific UL transmission. The spatial relation information of the SRS may be updated (activated) or the spatial relation information of the PUCCH may be updated (activated) by the activation MAC CE. The SRS may be at least one of the A-SRS, the SP-SRS, and the P-SRS. The time offset may be a time between the activation MAC CE and the specific UL transmission, or may be a time between the activation MAC CE and the specific UL transmission using the spatial relation information switched thereby. The threshold may be a time required from the activation MAC CE of the spatial relation information to switching of the spatial relation information (beam). When the specific UL transmission is the SRS, the time offset may be a time between reception of the activation MAC CE of the spatial relation information of the SRS and transmission of the SRS (FIG. 5A). When the specific UL transmission is the PUSCH, the time offset may be a time between reception of the activation MAC CE of the spatial relation information of the SRS and transmission of the PUSCH (FIG. 5B). When the spatial relation information of the SRS indicated by the SRI in the DCI for scheduling the PUSCH is activated by the activation MAC CE, the UE may use this time offset. When the specific UL transmission is the PUCCH, the time offset may be a time between reception of the activation MAC CE of the spatial relation information of the PUCCH and transmission of the PUCCH.
<<Time Offset 3>>

The time offset may be a time between the update of the TCI state used for the default spatial relation and the specific UL transmission. For example, the TCI state used for the default spatial relation may be the TCI state for the PDCCH, or may be the TCI state for the QCL assumption of the PDSCH. The update of the TCI state may be the reception of the DCI (TCI field) indicating the TCI state of the PDCCH, may be the reception of a MAC CE (activation MAC CE) indicating the TCI state of the PDCCH or activating the TCI state of the PDSCH, or may be the update of the TCI state corresponding to the default spatial relation. For example, when the default spatial relation is the TCI state or QCL assumption of the CORESET having the lowest CORESET-ID in the most recent slot and associated with the monitored search space, the default spatial relation may be updated in response to a change in the most recent slot. The specific UL transmission may be at least one of the SRS, the PUSCH, and the PUCCH. The SRS may be at least one of the A-SRS, the SP-SRS, and the P-SRS. When the specific UL transmission is the SRS, the time offset may be a time between the update of the TCI state (for example, reception of an update indication or a slot boundary where the update has occurred) and the transmission of the SRS (FIG. 6A). When the specific UL transmission is the PUSCH, the time offset may be a time between the update of the TCI state (for example, reception of an update indication or a slot boundary where the update has occurred) and the transmission of the PUSCH (FIG. 6B). When the TCI state used for the default spatial relation of the SRS indicated by the SRI in the DCI for scheduling the PUSCH is updated, the UE may compare this time offset with the threshold.
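The "lowest CORESET-ID in the most recent slot" rule mentioned above can be sketched as a small search. The data layout (a per-slot map of monitored CORESET-IDs to TCI states) is an illustrative assumption, not a spec-defined structure.

```python
# Hedged sketch of the default-spatial-relation rule: use the TCI state (or
# QCL assumption) of the CORESET with the lowest CORESET-ID among those
# associated with a monitored search space in the most recent slot.

def default_spatial_relation(monitored_coresets_by_slot, current_slot):
    """monitored_coresets_by_slot: dict slot -> {coreset_id: tci_state}.

    Returns the TCI state of the lowest-ID CORESET in the most recent slot
    that contains any monitored CORESET, or None if there is none.
    """
    # Walk backwards from the current slot to find the most recent slot in
    # which any CORESET associated with a monitored search space exists.
    for slot in range(current_slot, -1, -1):
        coresets = monitored_coresets_by_slot.get(slot)
        if coresets:
            lowest_id = min(coresets)        # lowest CORESET-ID wins
            return coresets[lowest_id]       # its TCI state / QCL assumption
    return None
```

As the text notes, the result of this rule changes whenever the most recent slot (or its CORESET configuration) changes, which is why Time Offset 3 measures from that update.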
When the specific UL transmission is the PUCCH, the time offset may be a time between the update of the TCI state (for example, reception of an update indication or a slot boundary where the update has occurred) and the transmission of the PUCCH.

<<Effects>>

According to Embodiment 2 described above, the spatial relation of the specific UL transmission can be appropriately determined by using the time offset related to the specific UL transmission.

Embodiment 3

The spatial relation (spatial domain transmission filter) of the specific UL transmission may differ depending on whether the time offset of Embodiment 2 is less than the threshold or equal to or greater than the threshold. The UE may assume that the spatial relation (spatial domain transmission filter) of the specific UL transmission varies depending on whether the time offset is less than the threshold or equal to or greater than the threshold. Whether the time offset is less than the threshold or equal to or greater than the threshold may be replaced with whether the time offset is equal to or less than the threshold or greater than the threshold. When the time offset is less than the threshold, the UE may apply the default spatial relation to the specific UL transmission. When the time offset is equal to or greater than the threshold, the UE may apply the indicated spatial relation information to the specific UL transmission. This spatial relation information may be indicated (activated, configured) by at least one of DCI, MAC CE, and RRC signaling. The UE may apply the default spatial relation to the specific UL transmission based on a result of comparison between the time offset and the threshold. The default spatial relation may be the default spatial relation in Embodiment 1. The conditions for applying the default spatial relation may include a condition for applying the default spatial relation in Embodiment 1.
When the usage of the SRS resource set used for the spatial relation of the specific UL transmission is a specific usage and the time offset is less than the threshold, the UE may apply the default spatial relation to the specific UL transmission. Otherwise, the UE may apply the indicated spatial relation information to the specific UL transmission. The specific usage may be at least one of beam management (beamManagement), codebook based transmission (codebook), non-codebook based transmission (nonCodebook), and antenna switching (antennaSwitching), may be beam management, codebook based transmission, and non-codebook based transmission, may be antenna switching, or may be codebook based transmission and non-codebook based transmission. As illustrated in FIG. 7A, when the time offset is a time between the reception of the DCI and the specific UL transmission (for example, SRS or PUSCH) triggered or scheduled by the DCI, and the time offset is less than the threshold, the UE may apply the default spatial relation to the specific UL transmission. As illustrated in FIG. 7B, when the time offset is a time between the reception of the DCI and the specific UL transmission (for example, SRS or PUSCH) triggered or scheduled by the DCI, and the time offset is equal to or greater than the threshold, the UE may apply the indicated spatial relation information to the specific UL transmission.

<<Effects>>

According to Embodiment 3, for example, the indicated spatial relation information is applied to the specific UL transmission when it is applicable, and the default spatial relation is applied when the indicated spatial relation information is not applicable, so that a spatial relation appropriate for the specific UL transmission can be used.
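The Embodiment 3 selection rule, including the usage condition above, reduces to a short decision function. This is a minimal sketch under assumed names; the set of "specific usages" is a parameter because, as the text notes, several combinations are possible.

```python
# Minimal sketch of the Embodiment 3 rule: fall back to the default spatial
# relation when the SRS resource set has a specific usage and the time offset
# is below the threshold; otherwise apply the spatial relation information
# indicated by DCI / MAC CE / RRC signaling. Names are illustrative.

def select_spatial_relation(time_offset, threshold, usage, specific_usages,
                            indicated, default):
    """Return the spatial relation to apply to the specific UL transmission."""
    if usage in specific_usages and time_offset < threshold:
        # The indicated information may not yet be applicable in time.
        return default
    return indicated
```

The threshold comparison is written as "less than" here; per the text, it may equally be defined as "equal to or less than" with the complementary condition on the other branch.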
Embodiment 4

The UE may assume that a spatial relation of an SRS resource set having a specific usage (associated with the specific usage) is an exception. The specific usage may be beam management (beamManagement (BM)). The UE may apply at least one of Embodiments 2 and 3 when using an SRS resource set with a usage other than beam management, and apply this Embodiment 4 when using an SRS resource set with the usage of beam management. The SRS resource set having the usage of beam management is used for SRS-based UL beam management (UL beam sweep), and thus the spatial relation information does not need to be configured in the SRS resources in the SRS resource set. The UE may determine (may sweep) UL beams for SRS-based UL beam management without using the spatial relation information. The UE may apply the default spatial relation to the specific UL transmission based on the SRS resource in the SRS resource set with the usage of beam management under specific conditions. The default spatial relation may be the default spatial relation in Embodiment 1. The conditions for applying the default spatial relation may include a condition for applying the default spatial relation in Embodiment 1. When the specific UL transmission is the A-SRS, the UE may determine the spatial relation of the A-SRS by using any one of the time offsets 1, 2, and 3 in Embodiment 2. When the specific UL transmission is an SRS (SP-SRS or P-SRS) other than the A-SRS, the UE may determine the spatial relation of the SRS using one of the time offsets 1 and 2 of Embodiment 2. The UE may determine the spatial relation according to any one of the following methods 1 and 2 for determining the spatial relation.
<<Method 1 for Determining Spatial Relation>>

In FR2, when the spatial relation information is not configured on the SRS resource in the SRS resource set with the usage of beam management (the RS of the spatial relation information is not configured), the UE may determine the spatial relation information of the specific UL transmission depending on the UE implementation when the time offset is less than the threshold, or may apply the default spatial relation to the spatial relation of the specific UL transmission when the time offset is equal to or greater than the threshold. In FR2, when the SRS resource in the SRS resource set with the usage of beam management is used for the A-SRS and the spatial relation information is not configured on the SRS resource (the RS of the spatial relation information is not configured), the UE may determine the spatial relation information of the specific UL transmission depending on the UE implementation when the time offset is less than the threshold (FIG. 8A), or may apply the default spatial relation to the spatial relation of the specific UL transmission when the time offset is equal to or greater than the threshold (FIG. 8B). The time offset may be a time between reception of the DCI (indicating beam switching) that triggers the A-SRS and transmission of the A-SRS.
In FR2, when the SRS resource in the SRS resource set with the usage of beam management is used for the A-SRS and the spatial relation information is not configured in the SRS resource (the RS of the spatial relation information is not configured), the UE may determine the spatial relation information of the specific UL transmission depending on the UE implementation when the time offset is less than the threshold; when the time offset is equal to or greater than the threshold, the UE may apply the default spatial relation to the spatial relation of the symbol (for example, the first symbol or the last symbol) that satisfies a given condition among the plurality of SRS symbols, and determine the spatial relation of the remaining symbols depending on the UE implementation. Consequently, the UE can sweep a beam corresponding to beam correspondence and a beam using another spatial relation in the plurality of SRS symbols.

<<Method 2 for Determining Spatial Relation>>

In FR2, when the spatial relation information is not configured on the SRS resource in the SRS resource set with the usage of beam management (the RS of the spatial relation information is not configured), the UE may apply the default spatial relation to the spatial relation of the specific UL transmission when the time offset is less than the threshold, or may determine the spatial relation information of the specific UL transmission depending on the UE implementation when the time offset is equal to or greater than the threshold.
In FR2, when the SRS resource in the SRS resource set with the usage of beam management is used for the A-SRS and the spatial relation information is not configured on the SRS resource (the RS of the spatial relation information is not configured), the UE may apply the default spatial relation to the spatial relation of the specific UL transmission when the time offset is less than the threshold (FIG. 9A), or may determine the spatial relation information of the specific UL transmission depending on the UE implementation when the time offset is equal to or greater than the threshold (FIG. 9B). The time offset may be a time between reception of the DCI (indicating beam switching) that triggers the A-SRS and transmission of the A-SRS. In FR2, when the SRS resource in the SRS resource set with the usage of beam management is used for the A-SRS and the spatial relation information is not configured in the SRS resource (the RS of the spatial relation information is not configured), the UE may apply the default spatial relation to the spatial relation of the specific UL transmission when the time offset is less than the threshold; when the time offset is equal to or greater than the threshold, the UE may apply the default spatial relation to the spatial relation of the symbol (for example, the first symbol or the last symbol) that satisfies a given condition among the plurality of SRS symbols, and determine the spatial relation of the remaining symbols depending on the UE implementation. Consequently, the UE can sweep a beam corresponding to beam correspondence and a beam using another spatial relation in the plurality of SRS symbols.
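Methods 1 and 2 above are mirror images of each other: they differ only in which side of the threshold applies the default spatial relation, with the other side left to UE implementation. A compact sketch, under assumed names:

```python
# Sketch of Methods 1 and 2 for an SRS resource set with usage beamManagement
# in FR2 and no configured spatial relation information. The sentinel value
# and function name are illustrative assumptions.

UE_IMPLEMENTATION = "ue-implementation-defined"

def beam_management_spatial_relation(method, time_offset, threshold, default):
    """Return the spatial relation decision for the beam-management SRS."""
    if method == 1:
        # Method 1: default applies when the offset is equal to or greater
        # than the threshold; below it, the choice is UE-implementation.
        return default if time_offset >= threshold else UE_IMPLEMENTATION
    if method == 2:
        # Method 2: default applies when the offset is less than the
        # threshold; at or above it, the choice is UE-implementation.
        return default if time_offset < threshold else UE_IMPLEMENTATION
    raise ValueError("method must be 1 or 2")
```

In the multi-symbol variant described in the text, the default would apply only to the symbol satisfying the given condition (for example, the first or last SRS symbol), with the remaining symbols left to UE implementation so that both kinds of beams can be swept.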
<<Effects>>

According to Embodiment 4 described above, by applying the default spatial relation to the SRS resource in the SRS resource set having the usage of beam management under specific conditions, the accuracy of SRS-based beam management and beam sweeping can be improved as compared with a case where the spatial relation depends on UE implementation.

(Radio Communication System)

Hereinafter, a configuration of a radio communication system according to one embodiment of the present disclosure will be described. In this radio communication system, communication is performed using any one of the radio communication methods according to the embodiments of the present disclosure or a combination thereof.

FIG. 10 is a diagram illustrating an example of a schematic configuration of the radio communication system according to one embodiment. A radio communication system 1 may be a system that implements communication using long term evolution (LTE), the 5th generation mobile communication system New Radio (5G NR), and the like whose specifications have been drafted by the third generation partnership project (3GPP). Further, the radio communication system 1 may support dual connectivity (multi-RAT dual connectivity (MR-DC)) between a plurality of radio access technologies (RATs). The MR-DC may include dual connectivity between LTE (evolved universal terrestrial radio access (E-UTRA)) and NR (E-UTRA-NR dual connectivity (EN-DC)), dual connectivity between NR and LTE (NR-E-UTRA dual connectivity (NE-DC)), and the like. In EN-DC, an LTE (E-UTRA) base station (eNB) is a master node (MN), and an NR base station (gNB) is a secondary node (SN). In NE-DC, an NR base station (gNB) is the MN, and an LTE (E-UTRA) base station (eNB) is the SN. The radio communication system 1 may support dual connectivity between a plurality of base stations in the same RAT (for example, dual connectivity in which both MN and SN are NR base stations (gNBs) (NR-NR dual connectivity (NN-DC))).
The radio communication system 1 may include a base station 11 that forms a macro cell C1 with a relatively wide coverage, and base stations 12 (12a to 12c) that are arranged in the macro cell C1 and that form small cells C2 narrower than the macro cell C1. A user terminal 20 may be positioned in at least one cell. The arrangement, number, and the like of the cells and the user terminals 20 are not limited to the aspects illustrated in the drawings. Hereinafter, the base stations 11 and 12 will be collectively referred to as base stations 10 unless specified otherwise. The user terminal 20 may be connected to at least one of the plurality of base stations 10. The user terminal 20 may use at least one of carrier aggregation (CA) using a plurality of component carriers (CCs) and dual connectivity (DC). Each CC may be included in at least one of a first frequency range (FR1) and a second frequency range (FR2). The macro cell C1 may be included in FR1, and the small cell C2 may be included in FR2. For example, FR1 may be a frequency range of 6 GHz or less (sub-6 GHz), and FR2 may be a frequency range higher than 24 GHz (above-24 GHz). Note that the frequency ranges, definitions, and the like of FR1 and FR2 are not limited to these, and for example, FR1 may be a frequency range higher than FR2. Further, the user terminal 20 may perform communication on each CC using at least one of time division duplex (TDD) and frequency division duplex (FDD). The plurality of base stations 10 may be connected by wire (for example, an optical fiber or an X2 interface in compliance with the common public radio interface (CPRI)) or by radio (for example, NR communication). For example, when NR communication is used as a backhaul between the base stations 11 and 12, the base station 11 corresponding to a higher-level station may be referred to as an integrated access backhaul (IAB) donor, and the base station 12 corresponding to a relay station (relay) may be referred to as an IAB node.
A base station 10 may be connected to a core network 30 via another base station 10 or directly. The core network 30 may include, for example, at least one of an evolved packet core (EPC), a 5G core network (5GCN), a next generation core (NGC), and the like. The user terminal 20 may be a terminal supporting at least one of communication methods such as LTE, LTE-A, and 5G. In the radio communication system 1, a radio access method based on orthogonal frequency division multiplexing (OFDM) may be used. For example, in at least one of the downlink (DL) and the uplink (UL), cyclic prefix OFDM (CP-OFDM), discrete Fourier transform spread OFDM (DFT-s-OFDM), orthogonal frequency division multiple access (OFDMA), single carrier frequency division multiple access (SC-FDMA), and the like may be used. The radio access method may be referred to as a waveform. Note that, in the radio communication system 1, another radio access method (for example, another single carrier transmission method or another multi-carrier transmission method) may be used as the UL and DL radio access methods. In the radio communication system 1, as downlink channels, a physical downlink shared channel (PDSCH) shared by the user terminals 20, a physical broadcast channel (PBCH), a physical downlink control channel (PDCCH), and the like may be used. In the radio communication system 1, an uplink shared channel (physical uplink shared channel (PUSCH)) shared by the user terminals 20, an uplink control channel (physical uplink control channel (PUCCH)), a random access channel (physical random access channel (PRACH)), and the like may be used as uplink channels. User data, higher layer control information, a system information block (SIB), and the like are transmitted by the PDSCH. The PUSCH may transmit user data, higher layer control information, and the like. Further, the PBCH may transmit a master information block (MIB). Lower layer control information may be transmitted by the PDCCH.
The lower layer control information may include, for example, downlink control information (DCI) including scheduling information of at least one of the PDSCH and the PUSCH. Note that the DCI for scheduling the PDSCH may be referred to as DL assignment, DL DCI, and the like, and the DCI for scheduling the PUSCH may be referred to as UL grant, UL DCI, and the like. Note that the PDSCH may be replaced with DL data, and the PUSCH may be replaced with UL data. A control resource set (CORESET) and a search space may be used to detect the PDCCH. The CORESET corresponds to a resource in which DCI is searched for. The search space corresponds to a search area and a search method for PDCCH candidates. One CORESET may be associated with one or a plurality of search spaces. The UE may monitor a CORESET associated with a certain search space based on the search space configuration. One search space may correspond to PDCCH candidates corresponding to one or more aggregation levels. One or a plurality of search spaces may be referred to as a search space set. Note that "search space", "search space set", "search space configuration", "search space set configuration", "CORESET", "CORESET configuration", and the like in the present disclosure may be replaced with each other. Uplink control information (UCI) including at least one of channel state information (CSI), delivery confirmation information (which may be referred to as, for example, hybrid automatic repeat request acknowledgement (HARQ-ACK), ACK/NACK, or the like), and scheduling request (SR) may be transmitted on the PUCCH. A random access preamble for establishing a connection with a cell may be transmitted on the PRACH. Note that in the present disclosure, downlink, uplink, and the like may be expressed without "link". Furthermore, various channels may be expressed without "physical" at the beginning thereof.
In the radio communication system 1, a synchronization signal (SS), a downlink reference signal (DL-RS), and the like may be transmitted. In the radio communication system 1, a cell-specific reference signal (CRS), a channel state information reference signal (CSI-RS), a demodulation reference signal (DMRS), a positioning reference signal (PRS), a phase tracking reference signal (PTRS), and the like may be transmitted as the DL-RS. The synchronization signal may be at least one of, for example, a primary synchronization signal (PSS) and a secondary synchronization signal (SSS). A signal block including the SS (PSS or SSS) and the PBCH (and the DMRS for the PBCH) may be referred to as an SS/PBCH block, an SS block (SSB), and the like. Note that the SS, the SSB, or the like may also be referred to as a reference signal. In the radio communication system 1, a sounding reference signal (SRS), a demodulation reference signal (DMRS), and the like may be transmitted as an uplink reference signal (UL-RS). Note that DMRSs may be referred to as user terminal-specific reference signals (UE-specific reference signals).

(Base Station)

FIG. 11 is a diagram illustrating an example of a configuration of a base station according to one embodiment. The base station 10 includes a control unit 110, a transmission/reception unit 120, a transmission/reception antenna 130, and a transmission line interface 140. Note that one or more control units 110, one or more transmission/reception units 120, one or more transmission/reception antennas 130, and one or more transmission line interfaces 140 may be provided. Note that, although this example mainly illustrates functional blocks that pertain to characteristic parts of the present embodiment, it may be assumed that the base station 10 also has other functional blocks that are necessary for radio communication. A part of the processing of each unit described below may be omitted. The control unit 110 controls the entire base station 10.
The control unit 110 can be constituted by a controller, a control circuit, or the like, which is described based on common recognition in the technical field to which the present disclosure relates. The control unit 110 may control signal generation, scheduling (for example, resource allocation or mapping), and the like. The control unit 110 may control transmission/reception, measurement, and the like using the transmission/reception unit 120, the transmission/reception antenna 130, and the transmission line interface 140. The control unit 110 may generate data to be forwarded as a signal, control information, a sequence, and the like, and may transfer the data, the control information, the sequence, and the like to the transmission/reception unit 120. The control unit 110 may perform call processing (such as configuration or release) of a communication channel, management of the state of the base station 10, and management of a radio resource. The transmission/reception unit 120 may include a base band unit 121, a radio frequency (RF) unit 122, and a measurement unit 123. The base band unit 121 may include a transmission processing unit 1211 and a reception processing unit 1212. The transmission/reception unit 120 can be implemented by a transmitter/receiver, an RF circuit, a base band circuit, a filter, a phase shifter, a measurement circuit, a transmission/reception circuit, and the like, which are described based on common recognition in the technical field related to the present disclosure. The transmission/reception unit 120 may be constituted as an integrated transmission/reception unit, or may be constituted by a transmitting unit and a reception unit. The transmitting unit may be constituted by the transmission processing unit 1211 and the RF unit 122. The reception unit may be constituted by the reception processing unit 1212, the RF unit 122, and the measurement unit 123.
The transmission/reception antenna 130 can be implemented by an antenna described based on common recognition in the technical field related to the present disclosure, for example, an array antenna. The transmission/reception unit 120 may transmit the above-described downlink channel, synchronization signal, downlink reference signal, and the like. The transmission/reception unit 120 may receive the above-described uplink channel, uplink reference signal, and the like. The transmission/reception unit 120 may form at least one of a transmission beam and a reception beam by using digital beam forming (for example, precoding), analog beam forming (for example, phase rotation), and the like. The transmission/reception unit 120 (transmission processing unit 1211) may perform packet data convergence protocol (PDCP) layer processing, radio link control (RLC) layer processing (for example, RLC retransmission control), medium access control (MAC) layer processing (for example, HARQ retransmission control), and the like, for example, on data or control information acquired from the control unit 110 to generate a bit string to be transmitted. The transmission/reception unit 120 (transmission processing unit 1211) may perform transmission processing such as channel encoding (which may include error correction encoding), modulation, mapping, filtering processing, discrete Fourier transform (DFT) processing (if necessary), inverse fast Fourier transform (IFFT) processing, precoding, or digital-analog transform on the bit string to be transmitted, and may output a base band signal. The transmission/reception unit 120 (RF unit 122) may perform modulation to a radio frequency range, filtering processing, amplification, and the like on the base band signal, to transmit a signal in the radio frequency range via the transmission/reception antenna 130.
Meanwhile, the transmission/reception unit 120 (RF unit 122) may perform amplification, filtering processing, demodulation to a base band signal, and the like on the signal in the radio frequency band received by the transmission/reception antenna 130. The transmission/reception unit 120 (reception processing unit 1212) may apply reception processing such as analog-digital transform, fast Fourier transform (FFT) processing, inverse discrete Fourier transform (IDFT) processing (if necessary), filtering processing, demapping, demodulation, decoding (which may include error correction decoding), MAC layer processing, RLC layer processing, and PDCP layer processing to the acquired base band signal to acquire user data and the like. The transmission/reception unit 120 (measurement unit 123) may perform measurement on the received signal. For example, the measurement unit 123 may perform radio resource management (RRM) measurement, channel state information (CSI) measurement, and the like based on the received signal. The measurement unit 123 may measure received power (for example, reference signal received power (RSRP)), received quality (for example, reference signal received quality (RSRQ), a signal to interference plus noise ratio (SINR), or a signal to noise ratio (SNR)), signal strength (for example, received signal strength indicator (RSSI)), propagation path information (for example, CSI), and the like. The measurement result may be output to the control unit 110. The transmission line interface 140 may transmit/receive a signal (backhaul signaling) to and from an apparatus included in the core network 30, other base stations 10, and the like, and may acquire and transmit user data (user plane data), control plane data, and the like for the user terminal 20.
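The measurement quantities listed above relate to each other in simple ways: RSRP is a linear average of reference-signal resource element powers, RSRQ relates RSRP to the total received power (RSSI) over N resource blocks via RSRQ = N × RSRP / RSSI, and SINR divides signal power by interference plus noise. The sketch below is a simplification (exact averaging and measurement bandwidths are omitted); values are linear (mW), with a dB helper for readability.

```python
# Simplified sketch of the measurement quantities computed by a measurement
# unit such as the one described above. Not an implementation of any
# spec-defined measurement procedure.
import math

def rsrp(re_powers_mw):
    """Linear average power of the measured reference signal REs (mW)."""
    return sum(re_powers_mw) / len(re_powers_mw)

def rsrq(rsrp_mw, rssi_mw, n_rb):
    """RSRQ = N * RSRP / RSSI, as a linear ratio over n_rb resource blocks."""
    return n_rb * rsrp_mw / rssi_mw

def sinr(signal_mw, interference_mw, noise_mw):
    """Signal power over interference plus noise, as a linear ratio."""
    return signal_mw / (interference_mw + noise_mw)

def to_db(ratio):
    """Convert a linear power ratio to decibels."""
    return 10 * math.log10(ratio)
```

Such linear-domain results would typically be converted to dB (or dBm) before being output to the control unit 110 for RRM decisions.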
Note that the transmitting unit and the reception unit of the base station10in the present disclosure may include at least one of the transmission/reception unit120, the transmission/reception antenna130, and the transmission line interface140. Note that the transmission/reception unit120may transmit a reference signal (for example, SSB, CSI-RS, or the like). The transmission/reception unit120may transmit information (MAC CE or DCI) indicating the TCI state for the specific DL transmission. The TCI state may indicate at least one of a reference signal (for example, SSB, CSI-RS, or the like), a QCL type, and a cell that transmits the reference signal. The TCI state may indicate one or more reference signals. The one or more reference signals may include a reference signal of QCL type A or a reference signal of QCL type D. The control unit110may assume that a first reference signal of the spatial relation of the specific uplink transmission (for example, SRS, PUCCH, PUSCH, or the like) is a second reference signal (for example, SSB, CSI-RS) of QCL type D in the transmission configuration indication (TCI) state or the quasi-co-location (QCL) assumption of the specific downlink channel (for example, PDCCH, PDSCH, or the like). (User Terminal) FIG.12is a diagram illustrating an example of a configuration of a user terminal according to one embodiment. The user terminal20includes a control unit210, a transmission/reception unit220, and a transmission/reception antenna230. Note that one or more of the control units210, one or more of the transmission/reception units220, and one or more of the transmission/reception antennas230may be included. Note that, although this example mainly describes functional blocks of a characteristic part of the present embodiment, it may be assumed that the user terminal20includes other functional blocks that are necessary for radio communication as well. A part of processing of each unit described below may be omitted. 
The control unit210controls the entire user terminal20. The control unit210can be constituted by a controller and a control circuit that are described based on common recognition in the technical field related to the present disclosure. The control unit210may control signal generation, mapping, and the like. The control unit210may control transmission/reception, measurement, and the like using the transmission/reception unit220and the transmission/reception antenna230. The control unit210may generate data to be transmitted as a signal, control information, a sequence, and the like, and may transfer the data, the control information, the sequence, and the like to the transmission/reception unit220. The transmission/reception unit220may include a base band unit221, an RF unit222, and a measurement unit223. The base band unit221may include a transmission processing unit2211and a reception processing unit2212. The transmission/reception unit220can include a transmitter/receiver, an RF circuit, a base band circuit, a filter, a phase shifter, a measurement circuit, a transmission/reception circuit, and the like that are described based on common recognition in the technical field related to the present disclosure. The transmission/reception unit220may be configured as an integrated transmission/reception unit, or may be configured by a transmitting unit and a reception unit. The transmitting unit may be configured by the transmission processing unit2211and the RF unit222. The reception unit may be constituted by the reception processing unit2212, the RF unit222, and the measurement unit223. The transmission/reception antenna230can be constituted by an antenna described based on common recognition in the technical field to which the present disclosure relates, for example, an array antenna. The transmission/reception unit220may receive the above-described downlink channel, synchronization signal, downlink reference signal, and the like. 
The transmission/reception unit220may transmit the above-described uplink channel, uplink reference signal, and the like. The transmission/reception unit220may form at least one of a transmission beam and a reception beam by using digital beam forming (for example, precoding), analog beam forming (for example, phase rotation), and the like. The transmission/reception unit220(transmission processing unit2211) may perform PDCP layer processing, RLC layer processing (for example, RLC retransmission control), MAC layer processing (for example, HARQ retransmission control), and the like, for example, on data acquired from the control unit210or control information to generate a bit string to be transmitted. The transmission/reception unit220(transmission processing unit2211) may perform transmission processing such as channel encoding (which may include error correction encoding), modulation, mapping, filtering processing, DFT processing (if necessary), IFFT processing, precoding, or digital-analog transform on a bit string to be transmitted, and may output a base band signal. Note that whether or not to apply DFT processing may be determined based on configuration of transform precoding. When transform precoding is enabled for a channel (for example, PUSCH), the transmission/reception unit220(transmission processing unit2211) may perform DFT processing as the above-described transmission processing in order to transmit the channel by using a DFT-s-OFDM waveform, and if not, the DFT processing does not have to be performed as the transmission processing. The transmission/reception unit220(RF unit222) may perform modulation to a radio frequency band, filtering processing, amplification, and the like on the base band signal, and may transmit a signal in the radio frequency band via the transmission/reception antenna230. 
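The conditional DFT processing described above can be illustrated with a sketch: when transform precoding is enabled, the symbols are DFT-spread before subcarrier mapping (a DFT-s-OFDM waveform); when it is disabled, the mapping is plain CP-OFDM. The function and parameter names are illustrative. Note that with full-band mapping the DFT and IFFT cancel, which is exactly the single-carrier property of the waveform:

```python
import numpy as np

def map_uplink(symbols, n_fft, transform_precoding):
    """Subcarrier mapping with optional DFT spreading (DFT-s-OFDM)."""
    m = len(symbols)
    x = np.fft.fft(symbols) / np.sqrt(m) if transform_precoding else np.asarray(symbols)
    grid = np.zeros(n_fft, dtype=complex)
    grid[:m] = x                                # toy contiguous mapping
    return np.fft.ifft(grid) * np.sqrt(n_fft)   # time-domain baseband signal

syms = np.exp(1j * np.pi / 4) * np.ones(8)      # constant-envelope test symbols
out = map_uplink(syms, n_fft=8, transform_precoding=True)
print(np.allclose(out, syms))                   # True: full-band DFT-s-OFDM is single-carrier
```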
Meanwhile, the transmission/reception unit220(RF unit222) may perform amplification, filtering processing, demodulation to a base band signal, and the like on the signal in the radio frequency band received by the transmission/reception antenna230. The transmission/reception unit220(reception processing unit2212) may acquire user data and the like by applying reception processing such as analog-digital transform, FFT processing, IDFT processing (if necessary), filtering processing, demapping, demodulation, decoding (which may include error correction decoding), MAC layer processing, RLC layer processing, or PDCP layer processing on the acquired base band signal. The transmission/reception unit220(measurement unit223) may perform measurement on the received signal. For example, the measurement unit223may perform RRM measurement, CSI measurement, and the like based on the received signal. The measurement unit223may measure received power (for example, RSRP), received quality (for example, RSRQ, SINR, or SNR), signal strength (for example, RSSI), propagation path information (for example, CSI), and the like. A measurement result may be output to the control unit210. Note that the transmitting unit and the reception unit of the user terminal20in the present disclosure may include at least one of the transmission/reception unit220and the transmission/reception antenna230. Note that the transmission/reception unit220may receive a reference signal (for example, SSB, CSI-RS, or the like). 
The control unit210may determine, as the spatial relation, either a transmission configuration indication (TCI) state or a quasi-co-location (QCL) assumption for downlink transmission (for example, a default spatial relation) or indicated spatial relation information (for example, spatial relation information based on at least one of RRC signaling, a MAC CE, and DCI), based on at least one of a comparison, against a threshold, of a time offset between a timing related to a spatial relation of the uplink transmission and the uplink transmission itself, and whether usage of the sounding reference signal (SRS) resource set for the uplink transmission is a specific usage. The transmission/reception unit220may perform the specific uplink transmission using the spatial relation. The timing may be one of reception (for example, time offset1) of downlink control information (DCI) related to the uplink transmission, reception (for example, time offset2) of a medium access control (MAC) control element (CE) for activation of the spatial relation, and update (for example, time offset3) of the TCI state or the QCL assumption. The threshold may be one of a value configured for the user terminal by higher layer signaling, a value reported by the user terminal as capability information, a threshold of time used to determine a TCI state of a downlink shared channel or an aperiodic channel state information reference signal (A-CSI-RS), a value specified by the specifications, and a value that differs according to the subcarrier spacing (Embodiment 2). In a case where the time offset is less than the threshold, the control unit210may determine the TCI state or the QCL assumption as the spatial relation. When the time offset is equal to or greater than the threshold, the control unit210may determine the indicated spatial relation information as the spatial relation (Embodiment 3). 
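The selection rule of Embodiment 3 reduces to a single comparison. The sketch below uses hypothetical labels for the two candidate spatial relations and a hypothetical threshold value; the actual threshold source (higher layer signaling, UE capability, and so on) is as enumerated above:

```python
def choose_spatial_relation(time_offset, threshold, indicated_info, default_tci_or_qcl):
    """Embodiment 3 rule: fall back to the DL TCI state / QCL assumption
    (default spatial relation) when the time offset from the triggering
    timing to the uplink transmission is below the threshold; otherwise
    apply the indicated spatial relation information."""
    if time_offset < threshold:
        return default_tci_or_qcl
    return indicated_info

# Hypothetical example with a threshold of 14 symbols:
print(choose_spatial_relation(6, 14, "indicated-SRI", "default-TCI"))   # default-TCI
print(choose_spatial_relation(28, 14, "indicated-SRI", "default-TCI"))  # indicated-SRI
```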
When usage of the SRS resource set is beam management, the control unit210may determine the TCI state or the QCL assumption as the spatial relation based on at least one of whether or not the time offset is less than the threshold or whether or not the time offset is a specific location within a period of the uplink transmission (Embodiment 4). (Hardware Configuration) Note that the block diagrams that have been used to describe the above embodiments illustrate blocks in functional units. These functional blocks (configuration units) may be implemented in arbitrary combinations of at least one of hardware or software. Further, the method for implementing each functional block is not particularly limited. That is, each functional block may be implemented by a single apparatus physically or logically aggregated, or may be implemented by directly or indirectly connecting two or more physically or logically separate apparatuses (using wire, radio, or the like, for example) and using these plural apparatuses. The functional blocks may be implemented by combining software with the above-described single apparatus or the above-described plurality of apparatuses. Here, the function includes, but is not limited to, deciding, determining, judging, calculating, computing, processing, deriving, investigating, searching, ascertaining, receiving, transmitting, outputting, accessing, solving, selecting, choosing, establishing, comparing, assuming, expecting, regarding, broadcasting, notifying, communicating, forwarding, configuring, reconfiguring, allocating, mapping, assigning, and the like. For example, a functional block (configuration unit) that causes transmission to function may be referred to as a transmitting unit, a transmitter, and the like. In any case, as described above, the implementation method is not particularly limited. 
For example, the base station, the user terminal, or the like according to one embodiment of the present disclosure may function as a computer that executes processing of a radio communication method in the present disclosure.FIG.13is a diagram illustrating an example of a hardware configuration of the base station and the user terminal according to one embodiment. Physically, the above-described base station10and user terminal20may be formed as a computer apparatus that includes a processor1001, a memory1002, a storage1003, a communication apparatus1004, an input apparatus1005, an output apparatus1006, a bus1007, and the like. Note that in the present disclosure, the terms such as an apparatus, a circuit, a device, a section, or a unit can be replaced with each other. The hardware configuration of the base station10and the user terminal20may be configured to include one or a plurality of apparatuses illustrated in the drawings, or may be configured without including some apparatuses. For example, although only one processor1001is illustrated, a plurality of processors may be provided. Furthermore, the processing may be executed by one processor, or may be executed by two or more processors simultaneously, sequentially, or using other different methods. Note that the processor1001may be implemented with one or more chips. Each function of the base station10and the user terminal20is implemented by, for example, controlling communication via the communication apparatus1004by causing given software (program) to be read on hardware such as the processor1001and the memory1002and thereby causing the processor1001to perform operation, or by controlling at least one of reading and writing of data in the memory1002and the storage1003. The processor1001may control the whole computer by, for example, running an operating system. 
The processor1001may be configured by a central processing unit (CPU) including an interface with peripheral equipment, a control device, an operation device, a register, and the like. For example, at least a part of the above-described control unit110(210), transmission/reception unit120(220), and the like may be implemented by the processor1001. Furthermore, the processor1001reads programs (program codes), software modules, data, and so on from at least one of the storage1003or the communication apparatus1004into the memory1002, and executes various processing according to these. As the program, a program to cause a computer to execute at least a part of the operation described in the above-described embodiment is used. For example, the control unit110(210) may be implemented by control programs that are stored in the memory1002and that operate on the processor1001, and other functional blocks may be implemented likewise. The memory1002is a computer-readable recording medium, and may be implemented by, for example, at least one of a read only memory (ROM), an erasable programmable ROM (EPROM), an electrically EPROM (EEPROM), a random access memory (RAM), and/or other appropriate storage media. The memory1002may be referred to as a register, a cache, a main memory (primary storage apparatus), and the like. The memory1002can store a program (program code), a software module, and the like, which are executable for implementing the radio communication method according to one embodiment of the present disclosure. 
The storage1003is a computer-readable recording medium, and may be constituted by, for example, at least one of a flexible disk, a floppy (registered trademark) disk, a magneto-optical disk, a compact disc ROM (CD-ROM), a digital versatile disc, a Blu-ray (registered trademark) disc, a removable disk, a hard disk drive, a smart card, a flash memory device (for example, a card, a stick, or a key drive), a magnetic stripe, a database, a server, and other appropriate storage media. The storage1003may be referred to as “secondary storage apparatus”. The communication apparatus1004is hardware (transmitting/receiving device) for performing inter-computer communication via at least one of a wired network or a wireless network, and is referred to as, for example, “network device”, “network controller”, “network card”, “communication module”, and the like. The communication apparatus1004may include a high frequency switch, a duplexer, a filter, a frequency synthesizer, and the like in order to implement, for example, at least one of frequency division duplex (FDD) and time division duplex (TDD). For example, the transmission/reception unit120(220), the transmission/reception antenna130(230), and the like described above may be implemented by the communication apparatus1004. The transmission/reception unit120(220) may be mounted in a physically or logically separated manner as the transmitting unit120a(220a) and the reception unit120b(220b). The input apparatus1005is an input device for receiving input from the outside (for example, a keyboard, a mouse, a microphone, a switch, a button, a sensor, and so on). The output apparatus1006is an output device that performs output to the outside (for example, a display, a speaker, a light emitting diode (LED) lamp, and the like). Note that the input apparatus1005and the output apparatus1006may be provided in an integrated structure (for example, a touch panel). 
Furthermore, these pieces of apparatus, including the processor1001, the memory1002and so on are connected by the bus1007so as to communicate information. The bus1007may be formed with a single bus, or may be formed with buses that vary between pieces of apparatus. Furthermore, the base station10and user terminal20may include hardware such as a microprocessor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), or a Field Programmable Gate Array (FPGA), and some or all of the functional blocks may be implemented by using the hardware. For example, the processor1001may be implemented with at least one of these pieces of hardware. (Modifications) Note that terms described in the present disclosure and terms necessary for understanding the present disclosure may be replaced with terms that have the same or similar meanings. For example, a channel, a symbol, and a signal (signal or signaling) may be replaced interchangeably. Further, the signal may be a message. The reference signal can be abbreviated as an RS, and may be referred to as a pilot, a pilot signal, and the like, depending on which standard applies. Further, a component carrier (CC) may be referred to as a cell, a frequency carrier, a carrier frequency, and the like. A radio frame may include one or a plurality of durations (frames) in the time domain. Each of the one or plurality of periods (frames) included in the radio frame may be referred to as a subframe. Further, the subframe may include one or more slots in the time domain. A subframe may be a fixed time duration (for example, 1 ms) that is not dependent on numerology. Here, the numerology may be a communication parameter used for at least one of transmission or reception of a certain signal or channel. 
For example, the numerology may indicate at least one of subcarrier spacing (SCS), a bandwidth, a symbol length, a cyclic prefix length, a transmission time interval (TTI), the number of symbols per TTI, a radio frame configuration, specific filtering processing performed by a transceiver in a frequency domain, specific windowing processing performed by a transceiver in the time domain, and the like. The slot may include one or a plurality of symbols (for example, orthogonal frequency division multiplexing (OFDM) symbol and single carrier frequency division multiple access (SC-FDMA) symbol) in the time domain. Also, a slot may be a time unit based on numerology. A slot may include a plurality of mini slots. Each mini slot may include one or a plurality of symbols in the time domain. Further, the mini slot may be referred to as a sub slot. Each mini slot may include fewer symbols than a slot. PDSCH (or PUSCH) transmitted in a time unit larger than a mini slot may be referred to as PDSCH (PUSCH) mapping type A. A PDSCH (or PUSCH) transmitted using a mini slot may be referred to as “PDSCH (PUSCH) mapping type B”. A radio frame, a subframe, a slot, a mini slot and a symbol all represent the time unit in signal communication. The radio frame, the subframe, the slot, the mini slot, and the symbol may be called by other applicable names, respectively. Note that time units such as a frame, a subframe, a slot, a mini slot, and a symbol in the present disclosure may be replaced with each other. For example, one subframe may be referred to as a TTI, a plurality of consecutive subframes may be referred to as a TTI, or one slot or one mini slot may be referred to as a TTI. That is, at least one of the subframe and TTI may be a subframe (1 ms) in the existing LTE, may be a period shorter than 1 ms (for example, one to thirteen symbols), or may be a period longer than 1 ms. 
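As a concrete instance of the numerology described above, NR defines the subcarrier spacing as 15 x 2^mu kHz for numerology index mu, which gives 2^mu slots per 1-ms subframe. These values follow common understanding of the NR specifications (TS 38.211) and are not stated in this description; a normal cyclic prefix is assumed:

```python
def nr_numerology(mu):
    """Subcarrier spacing (kHz), slots per 1-ms subframe, and slot
    duration (ms) for numerology index mu, assuming normal cyclic prefix."""
    scs_khz = 15 * 2 ** mu
    slots_per_subframe = 2 ** mu
    return scs_khz, slots_per_subframe, 1.0 / slots_per_subframe

for mu in range(3):
    print(mu, nr_numerology(mu))
# 0 (15, 1, 1.0)
# 1 (30, 2, 0.5)
# 2 (60, 4, 0.25)
```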
Note that the unit to represent the TTI may be referred to as a “slot”, a “mini slot” and so on, instead of a “subframe”. Here, a TTI refers to the minimum time unit of scheduling in radio communication, for example. For example, in the LTE system, a base station performs scheduling to allocate radio resources (a frequency bandwidth and transmission power that can be used in each user terminal and the like) to each user terminal in TTI units. Note that the definition of TTIs is not limited to this. The TTI may be the transmission time unit of channel-encoded data packets (transport blocks), code blocks, or codewords, or may be the unit of processing in scheduling, link adaptation, or the like. Note that when TTI is given, a time interval (for example, the number of symbols) in which the transport blocks, the code blocks, the codewords, and the like are actually mapped may be shorter than TTI. Note that, when one slot or one mini slot is referred to as a “TTI”, one or more TTIs (that is, one or more slots or one or more mini slots) may be the minimum time unit of scheduling. Also, the number of slots (the number of mini slots) to constitute this minimum time unit of scheduling may be controlled. A TTI having a period of 1 ms may be referred to as usual TTI (TTI in 3GPP Rel. 8 to 12), normal TTI, long TTI, a usual subframe, a normal subframe, a long subframe, a slot, or the like. TTI shorter than normal TTI may also be referred to as shortened TTI, short TTI, partial TTI (or fractional TTI), a shortened subframe, a short subframe, a mini slot, a subslot, a slot, or the like. Note that a long TTI (for example, a normal TTI, a subframe, or the like) may be replaced with a TTI having a time duration exceeding 1 ms, and a short TTI (for example, a shortened TTI) may be replaced with a TTI having a TTI duration less than the TTI duration of a long TTI and not less than 1 ms. 
A resource block (RB) is the unit of resource allocation in the time domain and the frequency domain, and may include one or a plurality of contiguous subcarriers in the frequency domain. The number of subcarriers included in the RB may be the same regardless of the numerology, and may be twelve, for example. The number of subcarriers included in the RB may be determined based on numerology. Also, an RB may include one or more symbols in the time domain, and may be one slot, one mini slot, one subframe or one TTI in length. One TTI, one subframe, and the like each may be composed of one or more resource blocks. Note that one or a plurality of RBs may be referred to as a physical resource block (PRB), a sub-carrier group (SCG), a resource element group (REG), a PRB pair, an RB pair, and the like. A resource block may include one or a plurality of resource elements (REs). For example, one RE may be a radio resource field of one subcarrier and one symbol. A bandwidth part (BWP) (which may be referred to as a partial bandwidth or the like) may represent a subset of contiguous common resource blocks (RBs) for a certain numerology in a certain carrier. Here, the common RB may be specified by the index of the RB based on a common reference point of the carrier. The PRB may be defined in a certain BWP and be numbered within the BWP. BWP may include BWP for UL (UL BWP) and BWP for DL (DL BWP). For the UE, one or a plurality of BWPs may be configured within one carrier. At least one of the configured BWPs may be active, and the UE does not need to assume to transmit or receive a given signal/channel outside the active BWP. Note that “cell”, “carrier”, and the like in the present disclosure may be replaced with “BWP”. Note that the structures of radio frames, subframes, slots, mini slots, symbols and so on described above are merely examples. 
For example, configurations such as the number of subframes included in a radio frame, the number of slots per subframe or radio frame, the number of mini slots included in a slot, the number of symbols and RBs included in a slot or a mini slot, the number of subcarriers included in an RB, the number of symbols in a TTI, the symbol duration, the length of cyclic prefix (CP), and the like can be variously changed. Furthermore, information, a parameter, or the like described in the present disclosure may be represented in absolute values, represented in relative values with respect to given values, or represented by using another corresponding information. For example, a radio resource may be indicated by a given index. The names used for parameters and the like in the present disclosure are in no respect limiting. Further, any mathematical expression or the like that uses these parameters may differ from those explicitly disclosed in the present disclosure. Since various channels (PUCCH, PDCCH, and the like) and information elements can be identified by any suitable names, various names assigned to these various channels and information elements are not restrictive names in any respect. The information, signals, and the like described in the present disclosure may be represented by using a variety of different technologies. For example, data, instructions, commands, information, signals, bits, symbols and chips, all of which may be referenced throughout the herein-contained description, may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or photons, or any combination of these. Further, information, signals and the like can be output in at least one of a direction from higher layers to lower layers and a direction from lower layers to higher layers. Information, signals and so on may be input and output via a plurality of network nodes. 
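For instance, under the common defaults mentioned above (twelve subcarriers per RB, and assuming a fourteen-symbol slot with normal cyclic prefix), resource-element counting follows directly from the definitions of an RB and an RE; the 52-RB example below is an illustrative bandwidth-part size, not a value taken from the description:

```python
def res_elements(n_rb, symbols=14, subcarriers_per_rb=12):
    """Number of resource elements (REs) in n_rb resource blocks over one
    slot, given that an RE is one subcarrier by one symbol."""
    return n_rb * subcarriers_per_rb * symbols

print(res_elements(1))    # 168 REs in one RB over a 14-symbol slot
print(res_elements(52))   # 8736 REs for an illustrative 52-RB BWP
```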
The information, signals and so on that are input and/or output may be stored in a specific location (for example, in a memory), or may be managed in a management table. The information, signal, and the like to be input and output can be overwritten, updated or appended. The output information, signal, and the like may be deleted. The information, signals and so on that are input may be transmitted to other pieces of apparatus. Notification of information may be performed not only by using the aspects/embodiments described in the present disclosure but also using another method. For example, notification of information in the present disclosure may be performed by using physical layer signaling (for example, downlink control information (DCI), uplink control information (UCI)), higher layer signaling (for example, radio resource control (RRC) signaling, broadcast information (master information block (MIB), system information block (SIB), or the like), medium access control (MAC) signaling), another signal, or a combination thereof. Note that the physical layer signaling may be referred to as Layer 1/Layer 2 (L1/L2) control information (L1/L2 control signal), L1 control information (L1 control signal), and the like. Further, the RRC signaling may be referred to as an RRC message, and may be, for example, an RRC connection setup message, an RRC connection reconfiguration message, and the like. Further, notification of MAC signaling may be performed using, for example, a MAC control element (MAC CE). Further, notification of given information (for example, notification of “being X”) is not limited to explicit notification but may be performed implicitly (for example, by not performing notification of the given information or by performing notification of another piece of information). 
Decisions may be made in values represented by one bit (0 or 1), may be made in Boolean values that represent true or false, or may be made by comparing numerical values (for example, comparison against a given value). Software, whether referred to as “software”, “firmware”, “middleware”, “microcode” or “hardware description language”, or called by other names, should be interpreted broadly, to mean instructions, instruction sets, code, code segments, program codes, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executable files, execution threads, procedures, functions and so on. Also, software, commands, information and so on may be transmitted and received via communication media. For example, when software is transmitted from a website, a server, or another remote source by using at least one of a wired technology (coaxial cable, optical fiber cable, twisted pair, digital subscriber line (DSL), or the like) and a wireless technology (infrared rays, microwaves, and the like), at least one of the wired technology and the wireless technology is included within the definition of a transmission medium. The terms “system” and “network” used in the present disclosure can be used interchangeably. The “network” may mean an apparatus (for example, a base station) included in the network. In the present disclosure, terms such as “precoding”, “precoder”, “weight (precoding weight)”, “quasi-co-location (QCL)”, “transmission configuration indication state (TCI state)”, “spatial relation”, “spatial domain filter”, “transmission power”, “phase rotation”, “antenna port”, “antenna port group”, “layer”, “number of layers”, “rank”, “resource”, “resource set”, “resource group”, “beam”, “beam width”, “beam angle”, “antenna”, “antenna element”, and “panel” can be used interchangeably. 
In the present disclosure, terms such as “base station (BS)”, “radio base station”, “fixed station”, “NodeB”, “eNodeB (eNB)”, “gNodeB (gNB)”, “access point”, “transmission point (TP)”, “reception point (RP)”, “transmission/reception point (TRP)”, “panel”, “cell”, “sector”, “cell group”, “carrier”, and “component carrier” can be used interchangeably. A base station may be referred to as a term such as a macro cell, a small cell, a femto cell, a pico cell, and the like. The base station can accommodate one or more (for example, three) cells. In a case where the base station accommodates a plurality of cells, the entire coverage area of the base station can be partitioned into a plurality of smaller areas, and each smaller area can provide communication services through a base station subsystem (for example, a small remote radio head (RRH) for indoors). The term “cell” or “sector” refers to a part or the whole of a coverage area of at least one of a base station and a base station subsystem that perform a communication service in this coverage. In the present disclosure, the terms such as “mobile station (MS)”, “user terminal”, “user equipment (UE)”, and “terminal” can be used interchangeably. A mobile station may be referred to as a subscriber station, a mobile unit, a subscriber unit, a wireless unit, a remote unit, a mobile device, a wireless device, a wireless communication device, a remote device, a mobile subscriber station, an access terminal, a mobile terminal, a wireless terminal, a remote terminal, a handset, a user agent, a mobile client, a client, or by some other appropriate terms. At least one of the base station and the mobile station may be referred to as a transmitting apparatus, a receiving apparatus, a radio communication apparatus, and the like. Note that at least one of the base station and the mobile station may be a device mounted on a moving body, a moving body itself, and the like. 
The moving body may be a transportation (for example, a car, an airplane, and the like), an unmanned moving body (for example, a drone, an autonomous car, and the like), or a (manned or unmanned) robot. Note that at least one of the base station and the mobile station also includes an apparatus that does not necessarily move during a communication operation. For example, at least one of the base station and the mobile station may be an Internet of Things (IoT) device such as a sensor. Furthermore, a base station in the present disclosure may be interpreted as a user terminal. For example, each aspect/embodiment of the present disclosure may be applied to a configuration in which communication between the base station and the user terminal is replaced with communication among a plurality of user terminals (which may be referred to as, for example, device-to-device (D2D), vehicle-to-everything (V2X), and the like). In this case, the user terminal20may have the function of the above-described base station10. Further, terms such as “uplink” and “downlink” may be replaced with terms corresponding to communication between terminals (for example, “side”). For example, the uplink channel, the downlink channel, and the like may be replaced with a side channel. Similarly, the user terminal in the present disclosure may be replaced with a base station. In this case, the base station10may be configured to have the above-described functions of the user terminal20. In the present disclosure, an operation performed by a base station may be performed by an upper node thereof in some cases. 
In a network including one or a plurality of network nodes including the base station, it is clear that various operations performed to communicate with terminals may be performed by the base station, one or more network nodes other than the base station (for example, mobility management entity (MME), serving-gateway (S-GW), and the like are conceivable, but there is no limitation), or a combination thereof. The aspects/embodiments illustrated in the present disclosure may be used individually or in combinations, which may be switched depending on the mode of implementation. Further, the order of processing procedures, sequences, flowcharts, and the like of the aspects/embodiments described in the present disclosure may be re-ordered as long as there is no inconsistency. For example, regarding the methods described in the present disclosure, elements of various steps are presented using an illustrative order, and are not limited to the presented specific order. Each aspect/embodiment described in the present disclosure may be applied to a system using long term evolution (LTE), LTE-advanced (LTE-A), LTE-beyond (LTE-B), SUPER 3G, IMT-Advanced, 4th generation mobile communication system (4G), 5th generation mobile communication system (5G), future radio access (FRA), new radio access technology (RAT), new radio (NR), new radio access (NX), future generation radio access (FX), global system for mobile communications (GSM (registered trademark)), CDMA 2000, ultra mobile broadband (UMB), IEEE 802.11 (Wi-Fi (registered trademark)), IEEE 802.16 (WiMAX (registered trademark)), IEEE 802.20, Ultra-WideBand (UWB), Bluetooth (registered trademark), or another appropriate radio communication method, a next generation system expanded based on these, and the like. Further, a plurality of systems may be combined and applied (for example, a combination of LTE or LTE-A and 5G, and the like). 
The phrase “based on” as used in the present disclosure does not mean “based only on”, unless otherwise specified. In other words, the phrase “based on” means both “based only on” and “based at least on”. Any reference to an element using designations such as “first” and “second” used in the present disclosure does not generally limit the amount or order of these elements. These designations can be used in the present disclosure as a convenient way of distinguishing between two or more elements. In this way, reference to the first and second elements does not imply that only two elements may be employed, or that the first element must precede the second element in some way. The term “determining” as used in the present disclosure may include a wide variety of operations. For example, judging, calculating, computing, processing, deriving, investigating, looking up, searching, inquiring (for example, looking up in a table, a database, or another data structure), and ascertaining may be regarded as “determining (deciding)”. Furthermore, to “judge” and “determine” as used herein may be interpreted to mean making judgements and determinations related to receiving (for example, receiving information), transmitting (for example, transmitting information), inputting, outputting, accessing (for example, accessing data in a memory), and so on. In addition, to “judge” and “determine” as used herein may be interpreted to mean making judgements and determinations related to resolving, selecting, choosing, establishing, comparing, and so on. In other words, to “judge” and “determine” as used herein may be interpreted to mean making judgements and determinations related to some action. Furthermore, “determining” may be replaced with “assuming”, “expecting”, “considering”, and the like. 
As used in the present disclosure, the terms “connected” and “coupled”, or any variation of these terms, mean all direct or indirect connections or coupling between two or more elements, and may include the presence of one or more intermediate elements between two elements that are “connected” or “coupled” to each other. The coupling or connection between the elements may be physical, logical, or a combination of these. For example, “connection” may be replaced with “access”. As used in the present disclosure, when two elements are connected, these elements may be considered to be “connected” or “coupled” to each other by using one or more electrical wires, cables, printed electrical connections, and the like, and, as some non-limiting and non-exhaustive examples, by using electromagnetic energy and the like having wavelengths in the radio frequency, microwave, and optical (both visible and invisible) domains. In the present disclosure, the phrase “A and B are different” may mean “A and B are different from each other”. Note that the phrase may also mean that “A and B are each different from C”. Terms such as “separate”, “coupled”, and the like may be interpreted in the same manner as “different”. When the terms “include”, “including”, and variations of these are used in the present disclosure, these terms are intended to be inclusive, in a manner similar to the way the term “comprising” is used. Furthermore, the term “or” as used in the present disclosure is intended to mean an inclusive “or” rather than an exclusive-OR. In the present disclosure, when articles such as “a”, “an”, and “the” are added in the English translation, the present disclosure may include the plural forms of the nouns that follow these articles. Now, although the invention according to the present disclosure has been described above in detail, it is obvious to those skilled in the art that the invention according to the present disclosure is by no means limited to the embodiments described in the present disclosure. 
The invention according to the present disclosure can be embodied with various corrections and in various modified aspects, without departing from the spirit and scope of the invention defined based on the description of the claims. Consequently, the description of the present disclosure is provided only for the purpose of explaining examples, and should by no means be construed to limit the invention according to the present disclosure in any way. This application is based on Japanese Patent Application No. 2019-094118 filed on May 17, 2019, the entire contents of which are incorporated herein by reference.
DETAILED DESCRIPTION OF THE INVENTION 1. Data Communication Apparatus According to FIG.17 FIG.17shows a block schematic diagram of a data communication apparatus1700, according to an embodiment of the present invention. The data communication apparatus1700receives one or more blocks of data1710and provides, on the basis thereof, a transmission signal or modulation signal1720. The data communication apparatus is configured to transmit the one or more blocks of data within a frame comprising a two-dimensional grid of transmission symbol positions, as illustrated at reference numeral1730. The data communication apparatus1700is configured to select a group of transmission symbol positions1740(a length of which may be a so-called “short transmission time interval” sTTI), which is a subset of the two-dimensional grid of transmission symbol positions (shown at reference numeral1730), for transmission of a data portion (for example, a block of data). The data communication apparatus is configured to select one or more reference symbol positions1750,1752,1754associated to the selected group of transmission symbol positions (for example, lying within the selected group1740of transmission symbol positions), out of a plurality of possibilities, based on an information describing a desired relative position of the reference symbol position with respect to (or with reference to, or relative to) the selected group of transmission symbol positions. An example is illustrated inFIG.17. For example, if the selected group1740of transmission symbol positions comprises a given number of transmission symbol positions (for example, three transmission symbol positions), there are, for example, (at least) four possibilities for the allocation of reference symbol positions, wherein the data communication apparatus1700may make a choice between at least two of these possibilities, or even between all four of these possibilities. 
For example, the data communication apparatus may be configured to select one or more reference symbol positions (for example, the reference symbol position1750at the beginning of the selected group of transmission symbol positions (as shown at reference numeral1740a), the reference symbol position1752at the end of the selected group of transmission symbol positions, as shown at reference numeral1740b, the reference symbol position1754in an inner part (or at the middle) of the selected group of transmission symbol positions, as shown at reference numeral1740c, or two reference symbol positions1756,1758, both at the beginning and at the end of the selected group of transmission symbol positions, as shown at reference numeral1740d). In other words, the data communication apparatus may be configured to variably select one or more reference symbol positions within the selected group of transmission symbol positions, wherein the data communication apparatus may use an information describing a relative position of the reference symbol position(s) with respect to the selected group of transmission symbol positions. This information describing the relative positions may, for example, represent two or more of the relative positions “at the beginning”, “at the end”, “in an inner part”, “in the middle”, “both at the beginning and at the end”. Thus, the reference symbol position can be encoded efficiently and selected by the data communication apparatus1700. Moreover, it should be noted that the data communication apparatus1700uses the selected group of transmission symbol positions in order to obtain a transmission signal or a modulated signal1720. In particular, a data block to be transmitted may be provided with a channel coding (which, for example, allows for a detection and/or correction of transmission errors), and the resulting channel-coded data may then be represented in the form of modulation symbols (e.g. 
complex-valued OFDM modulation symbols) which are included in a transmission signal, or into a modulation signal, at transmission symbol positions within the selected group of transmission symbol positions. In this case, the reference symbol position, which typically lies within the selected group of transmission symbol positions, is occupied with a modulation symbol, which is independent from a data content of the block of data to be transmitted, to be used (or usable) for a channel estimation. In other words, the useful data of the block of data to be transmitted are reflected by modulation symbols placed at non-reference-symbol positions within the selected group of transmission symbol positions. Thus, the data communication apparatus1700has a flexible scheme of selecting a group of transmission symbol positions for the transmission of a data block, and also has a flexible concept for selecting which symbol position (or which symbol positions) within the selected group of transmission symbol positions should be used for the transmission of one or more reference symbols. By flexibly assigning the reference symbol positions at different (relative) locations within the selected group of transmission symbol positions, it is possible to adapt the data communication to different scenarios, for example to scenarios in which a sharing of reference symbol positions between different data communication apparatuses is desired and to scenarios in which there is no sharing of reference symbol positions between data communication apparatuses. Further details regarding the data communication apparatus1700, and regarding the allocation of groups of transmission symbol positions will be described below. In other words, the apparatus1700according toFIG.17can be supplemented by any of the features and functionalities described herein, either individually or in combination. 2. 
Data Communication Apparatus According to FIG.18 FIG.18shows a block schematic diagram of a data communication apparatus1800, according to an embodiment of the present invention. The data communication apparatus1800is configured to receive a block of data1810and to provide, on the basis thereof, a transmission signal or a modulated signal1820. The data communication apparatus1800is configured to transmit one or more blocks of data within a frame comprising a two-dimensional grid of transmission symbol positions, which is represented at reference numeral1830. The data communication apparatus is configured to select a group of transmission symbol positions (for example, a group1840, a group1841, a group1842, a group1843, a group1844, a group1845and/or a group1846), which is a subset of the two-dimensional grid of transmission symbol positions, for a transmission of a data portion (for example, of the block of data). The selected group of transmission symbol positions may have a length which is designated as “short transmission time interval”. The data communication apparatus is configured to determine which one or more symbol positions (for example, within one or more of the groups1840to1846) are used as one or more reference symbol positions based on the selection of the group of transmission symbol positions or together with the selection of the group of transmission symbol positions. The reference symbol positions are variable with respect to the one or more borders of time slots of the frame or with respect to one or more borders of subframes of the frame. Alternatively or in addition, the one or more reference symbol positions may be variable with respect to the selected group of transmission symbol positions (to which the reference symbol positions are associated). 
In other words, the data communication apparatus may be very flexible in selecting the groups1840to1846of transmission symbol positions, and the data communication apparatus1800is also very flexible in selecting the reference symbol positions. Taking reference now to the example ofFIG.18, it can be seen that the selected groups1840,1841of transmission symbol positions have the same length (in terms of a number of transmission symbol positions) but a shift in time with respect to each other. It should be noted that it is not necessary that the selected groups1840,1841are associated with different frequencies or frequency bins or (sub-)carrier frequencies. Rather, the selected groups1840,1841may both be arranged at the same frequency or frequency bin or frequency carrier and may be used alternatively in dependence on a configuration information. As can be seen, a reference symbol position1840awithin the group1840has a distance of two symbol positions from a (left-sided) time border of a subframe1850. In contrast, a reference symbol position1841ahas a distance of three symbol positions from said left-sided border of the subframe1850. Thus, it can be seen that the data communication apparatus1800is configured to select groups of transmission symbol positions (for example, groups1840,1841) (either simultaneously or successively), wherein the reference symbol positions1840a,1841aassociated with these selected groups1840,1841have different relative positions with respect to the closest border of a respective subframe in which the selected groups1840,1841are located. Similarly, it can be seen that the reference symbol position1842aand the reference symbol position1843ahave different distances (or relative positions) with respect to a time border of a slot1860in which the groups1842,1843are located. 
The (relative) position of the reference symbol position1842awith respect to the time border of the slot1860can be defined as being separated from the border by one reference symbol position. In contrast, the (relative) position of the reference symbol position1843arelative to the time border of the slot1860can be defined as having a different number of symbol positions (for example, two symbol positions) in between. Accordingly, it is apparent that the data communication apparatus1800is adapted to choose the reference symbol positions such that the reference symbol positions are variable with respect to borders of time slots of the frame or with respect to borders of subframes of the frame. Taking reference now to groups1844,1845and1846, it can be seen that the associated reference symbol positions1844a,1845aand1846avary with respect to the respective groups1844,1845,1846. As can be seen, the reference symbol position1844ais at the beginning of the corresponding group1844, the reference symbol position1845ais in an inner part of the corresponding group1845, and the reference symbol position1846ais at an end of the corresponding group1846. Thus, it can be said that the reference symbol positions are variable with respect to the corresponding selected group of transmission symbol positions (and with respect to the selected group of transmission symbol positions to which the reference symbol position is associated). To conclude, the data communication apparatus1800may very flexibly choose both the groups of transmission symbol positions (which are associated to the transmission of a block of data) and the reference symbol positions within these selected groups of transmission symbol positions. In particular, the reference symbol positions are not bound to a fixed grid within a subframe or within a slot. Rather, the reference symbol positions can vary even for groups of transmission symbol positions having the same lengths or sizes. 
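The placement rules described above can be illustrated by a minimal Python sketch (function names and the encoding of the relative-position information are invented for illustration and are not part of the embodiments): a signaled relative position is mapped to reference symbol offsets inside a group, and equally sized but time-shifted groups then yield different distances from a subframe border, as with groups 1840 and 1841.

```python
def reference_symbol_offsets(group_length, relative_position):
    """Offsets (0-based, within the group) that carry reference symbols;
    all remaining offsets carry data symbols."""
    placements = {
        "beginning": [0],
        "middle": [group_length // 2],
        "end": [group_length - 1],
        "both": [0, group_length - 1],
    }
    return placements[relative_position]


def border_distance(group_start, group_length, relative_position):
    """Distance (in symbol positions) of the first reference symbol from
    the left subframe border, assuming the subframe starts at index 0."""
    return group_start + reference_symbol_offsets(group_length, relative_position)[0]


# Equally sized groups shifted by one symbol, same in-group placement:
d_1840 = border_distance(group_start=2, group_length=3, relative_position="beginning")  # 2
d_1841 = border_distance(group_start=3, group_length=3, relative_position="beginning")  # 3
```

The start offsets 2 and 3 are chosen here only to reproduce the two- and three-symbol distances mentioned for the reference symbol positions 1840a and 1841a.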
The variation of the reference symbol positions can be both relative with respect to (adjacent) borders of time slots of the respective frame or with respect to (adjacent) borders of subframes of the respective frame or with respect to borders of the corresponding group of transmission symbol positions (i.e. with respect to the selected group of transmission symbol positions itself). Moreover, it should be noted that the selected groups of transmission symbol positions and the selected reference symbol positions may be used for the generation of the transmission signal/modulated signal1820in the same manner as described above with respect to the apparatus1700. Moreover, it should be noted that the data communication apparatus1800can be supplemented by any of the features and functionalities described herein, either individually or in combination. 3. Data Communication Apparatus According to FIG.19 FIG.19shows a block schematic diagram of a data communication apparatus1900, according to an embodiment of the present invention. The data communication apparatus1900is similar to the data communication apparatuses1700,1800as described herein. It should be noted that the data communication apparatus can therefore be supplemented by any of the features and functionalities described with respect to the data communication apparatuses1700and1800. The data communication apparatus1900is configured to receive a block of data1910and to provide, on the basis thereof, a transmission signal or a modulated signal1920. The data communication apparatus1900is configured to transmit one or more blocks of data within a frame comprising a two-dimensional grid of transmission symbol positions. The data communication apparatus is configured to select a group of transmission symbol positions, which is a subset of the two-dimensional grid of transmission symbol positions, for a transmission of a data portion. 
However, the data communication apparatus is configured to allocate, within the selected group of transmission symbol positions, one or more reference symbol positions. For example, a selected group of transmission symbol positions is designated with1940and an associated reference symbol position is designated with1940a. Another possible group of transmission symbol positions is designated with1942, and the associated reference symbol position is designated with1942a. To allow for a sharing of a reference symbol position with another data communication apparatus, the data communication apparatus1900is configured to apply a multiplexing scheme to one or more reference symbols transmitted on a reference symbol position. The data communication apparatus1900comprises one or more of the mechanisms for the selection of the multiplexing characteristic which will be discussed in the following. According to an aspect, the data communication apparatus is configured to select a multiplexing characteristic of one or more reference symbols, which are associated to the selected group of transmission symbol positions, in dependence on the selection of the group of transmission symbol positions. For example, the data communication apparatus1900may choose a first multiplexing characteristic for one or more reference symbols transmitted at reference symbol position1940a, which is associated with the group1940of transmission symbol positions. On the other hand, the data communication apparatus1900may select a different multiplexing characteristic for one or more reference symbols transmitted at reference symbol position1942a, which is associated to the group1942. Even if the reference symbol positions1940aand1942awere at the same position within the two-dimensional time-frequency-grid, different multiplexing characteristics would be chosen by the data communication apparatus1900in dependence on whether the data communication apparatus has selected the group1940or the group1942. 
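One hypothetical way to realize this dependence, sketched below in Python, is to derive the multiplexing characteristic deterministically from where the reference symbol position lies relative to the selected group; the rule and the returned cyclic-shift indices are invented for illustration, not taken from the embodiments. If every data communication apparatus applies the same rule, no additional signaling is needed.

```python
def characteristic_for_group(group_start, group_length, ref_symbol_index):
    """Return an (invented) cyclic-shift index depending on whether the
    reference symbol sits at the beginning, the end, or an inner part of
    the selected group of transmission symbol positions."""
    if ref_symbol_index == group_start:
        return 0  # reference symbol at the group start (cf. group 1940)
    if ref_symbol_index == group_start + group_length - 1:
        return 6  # reference symbol at the group end (cf. group 1942)
    return 3      # reference symbol in an inner part of the group
```

Note that the same absolute reference symbol index yields different characteristics for differently placed groups, which is exactly the behavior described for the groups 1940 and 1942.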
In other words, the data communication apparatus1900may, for example, use a different multiplexing characteristic for a reference symbol which is located at an end of a selected group of transmission symbol positions when compared to a reference symbol which is located at a beginning of a selected group of transmission symbol positions. By using such a mechanism (which may be implemented in the same manner in a plurality of data communication apparatuses), it can be ensured that there is no significant distortion between reference symbols transmitted at the same reference symbol position by different data communication apparatuses. In particular, by using such a concept, a large signaling overhead can be avoided. According to another aspect, the data communication apparatus may be configured to select a multiplexing characteristic of one or more reference symbols, which are associated to the selected group of transmission symbol positions, in dependence on a multiplexing characteristic group selection information defining a group of multiplexing characteristics, and in dependence on an (individual) multiplexing characteristic selection information defining which multiplexing characteristic out of a plurality of multiplexing characteristics contained in the group of multiplexing characteristics defined by the multiplexing characteristic group selection information should be used. As shown at reference numeral1960, the data communication apparatus1900may, for example, have a table of multiplexing characteristics, wherein this table comprises at least two groups of multiplexing characteristics. Thus, a multiplexing characteristic group selection information may be used to select a group of multiplexing characteristics for use by the data communication apparatus1900. 
On the other hand, a multiplexing characteristic selection information1964may be used to select which characteristic out of the selected group of multiplexing characteristics (as defined by the multiplexing characteristic group selection information1962) is to be used. Thus, there can be a two-step selection of a multiplexing characteristic. The multiplexing characteristic group selection information1962may, for example, be derived in a different manner when compared to the multiplexing characteristic selection information1964. For example, the multiplexing characteristic group selection information1962may be derived (or changed) only “rarely”, for example when handing over from one communication cell to another communication cell. Also, an information item representing the multiplexing characteristic group selection information1962may, for example, only be included in a control information at comparatively large time intervals. In contrast, an information item representing the multiplexing characteristic selection information1964may be determined (or changed, or updated) more frequently than the multiplexing characteristic group selection information1962. For example, an information item representing the multiplexing characteristic selection information1964may be included in a control information more frequently than an information item defining the multiplexing characteristic group selection information1962. Moreover, the multiplexing characteristics within the first multiplexing characteristic group and the multiplexing characteristics within the second multiplexing characteristic group may be chosen such that any multiplexing characteristic within the first multiplexing characteristic group does not substantially interfere with any of the multiplexing characteristics contained in the second multiplexing characteristic group. 
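The two-step selection can be sketched as a simple table lookup in Python; the table contents and names are assumptions for illustration only. The group selection information 1962 picks a row (rarely updated), and the characteristic selection information 1964 picks an entry within that row (frequently updated); characteristics in different rows are meant to be mutually non-interfering, for example orthogonal cover codes or distinct cyclic shifts.

```python
MULTIPLEXING_TABLE = [
    ["A0", "A1", "A2"],  # multiplexing characteristic group 0 (invented entries)
    ["B0", "B1", "B2"],  # multiplexing characteristic group 1 (invented entries)
]


def select_characteristic(group_selection_info, characteristic_selection_info):
    """Two-step selection: first the group (row), then the individual
    characteristic (entry within the row)."""
    group = MULTIPLEXING_TABLE[group_selection_info]
    return group[characteristic_selection_info]
```

For example, two apparatuses configured with different rows may freely pick any entry of their own row without coordinating with each other.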
Thus, if a data communication apparatus transmits a reference symbol using any of the multiplexing characteristics of the first multiplexing characteristic group and, at the same time, another data communication apparatus transmits a reference symbol using any of the multiplexing characteristics defined by the second multiplexing characteristic group, there will be only little or no interference between said transmissions. Thus, if it is ensured that two data communication apparatuses use multiplexing characteristics of different multiplexing characteristic groups, it is not necessary to strictly coordinate which data communication apparatus uses which multiplexing characteristic of its multiplexing characteristic group at which time. Thus, the definition of multiplexing characteristic groups, and the usage of such multiplexing characteristic groups on the side of the data communication apparatuses, helps to avoid interference between different data communication apparatuses sharing a reference symbol position. Moreover, it should be noted that the multiplexing characteristics may, for example, be a code division scheme, a frequency division scheme and/or a space division scheme, which has the effect that reference symbols transmitted using different multiplexing characteristics exhibit little or no interference (even if transmitted at the same reference symbol position or groups of reference symbol positions). According to yet another aspect, the data communication apparatus is configured to select a multiplexing characteristic out of at least a first multiplexing pattern describing transmission symbol positions and a second multiplexing pattern describing transmission symbol positions. The first multiplexing pattern comprises at least one shared transmission symbol position, which is also used by the second multiplexing pattern, and at least one exclusive transmission symbol position which is not used by the second multiplexing pattern. 
Similarly, the second multiplexing pattern comprises the shared transmission symbol position and at least one exclusive transmission symbol position which is not used by the first multiplexing pattern. Examples for the first multiplexing pattern (which can be understood as a first multiplexing characteristic) and for the second multiplexing pattern (which can be understood as a second multiplexing characteristic) are shown at reference numerals1980and1990. For example, the first multiplexing pattern and the second multiplexing pattern1980,1990comprise, at a first transmission symbol position, a shared transmission symbol position1982,1992. However, a second transmission symbol position1984is an “exclusive” transmission symbol position, such that there is only a transmission at the second transmission symbol position1984if the first multiplexing pattern1980is chosen. In contrast, if the second multiplexing pattern1990is chosen, there is no transmission at the transmission symbol position1984(as shown by a blank rectangle in the multiplexing pattern1990). Similarly, the first multiplexing pattern1980does not comprise a transmission at a third transmission symbol position1986, which is indicated by a blank rectangle of the first multiplexing pattern1980. On the other hand, the transmission symbol position1986is associated with the second multiplexing pattern1990as an “exclusive” transmission symbol position. To conclude, the transmission symbol position1982is shared between the first multiplexing pattern1980and the second multiplexing pattern1990, and can therefore be considered as a shared transmission symbol position. The second transmission symbol position1984is only associated with the first multiplexing pattern1980but not with the second multiplexing pattern1990, and is therefore an exclusive transmission symbol position for the first multiplexing pattern1980. 
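The shared/exclusive structure of the two multiplexing patterns can be sketched with set operations in Python; the indices are invented frequency positions at one symbol time, standing in for the positions 1982/1992, 1984, and 1986.

```python
PATTERN_1 = {0, 1}  # first multiplexing pattern 1980: shared position 0, exclusive position 1
PATTERN_2 = {0, 2}  # second multiplexing pattern 1990: shared position 0, exclusive position 2

shared = PATTERN_1 & PATTERN_2       # positions used by both patterns
exclusive_1 = PATTERN_1 - PATTERN_2  # positions used only by the first pattern
exclusive_2 = PATTERN_2 - PATTERN_1  # positions used only by the second pattern
```

Here `shared` corresponds to the shared transmission symbol position 1982/1992, while `exclusive_1` and `exclusive_2` correspond to the exclusive positions 1984 and 1986, respectively.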
The third transmission symbol position1986is exclusively associated to the second multiplexing pattern1990but not associated to the first multiplexing pattern1980. It should be noted that, for example, the transmission symbol positions1982,1984,1986may be associated with a same time but different frequencies. Moreover, it should be noted that usage of such multiplexing patterns may, on the one hand, reduce interference and, on the other hand, allow for a good estimation of channel characteristics (for example, even at an upper frequency bound or at a lower frequency bound). It should be noted here that the different aspects regarding the selection of the multiplexing characteristic can be used individually, or can be combined. Moreover, it should be noted that the data communication apparatus1900can be supplemented by any of the features and functionalities described herein, either individually or in combination. 4. Data Communication Apparatus According to FIG.20 FIG.20shows a block schematic diagram of a data communication apparatus2000. The data communication apparatus2000may, for example, be configured to receive a reception signal (for example, for one or more other data communication apparatuses, like, for example, user equipments) and provide, on the basis thereof, one or more data blocks. The reception signal is designated with2010and the data blocks are designated with2020. For example, the data communication apparatus2000may comprise a demodulation and/or data block extraction2030, which is configured to receive the reception signal2010and to provide the one or more data blocks2020on the basis thereof. The demodulation/data block extraction2030also uses information on a communication resource allocation for the demodulation/data block extraction. In other words, the data communication apparatus2000is configured for receiving a plurality of data blocks2020from other data communication apparatuses, for example via the reception signal2010. 
The data blocks are represented by transmission symbols of multiple groups of transmission symbol positions within a frame comprising a two-dimensional grid of transmission symbol positions. The data communication apparatus is configured to provide a signaling information2040, and may be configured to signal to the other data communication apparatuses (for example, to user equipments) which group of transmission symbol positions should be used by which of the other data communication apparatuses. In order to perform such a signaling, the data communication apparatus2000may, for example, comprise a communication resource allocation2050which may, for example, receive a communication state information2052. The communication state information2052may, for example, describe a number of other data communication apparatuses (for example, a number of user equipments) which are linked to the data communication apparatus2000(which may serve as a base station or as a coordinating entity). The communication state information2052may also comprise an information about latency requirements of the other data communication apparatuses. Furthermore, the communication state information2052may also comprise an information about an amount of data to be transmitted by the other data communication apparatuses and/or an information about data rates that may be used by the other data communication apparatuses. The communication resource allocation2050(or, generally speaking, the data communication apparatus2000) may use one or more concepts for determination and signaling of a resource allocation to the other data communication apparatuses. For example, the data communication apparatus2000may be configured to provide an information describing a group of transmission symbol positions to be used by a given one of the other data communication apparatuses (for example, by a given user equipment). 
Moreover, the data communication apparatus2000may also be configured to provide an information describing a desired relative position of reference symbol positions with respect to the group of transmission symbol positions to be used by the given one of the other data communication apparatuses. Accordingly, a communication resource information2040, which is transmitted to other data communication apparatuses (for example, to user equipments) for the signaling of groups of transmission symbol positions may carry both the information describing the group of transmission symbol positions to be used and the information describing a desired relative position of reference symbol positions with respect to the signaled group of transmission symbol positions. Thus, the data communication apparatus2000may efficiently signal both a group of transmission symbol positions allocated to a given one of the other data communication devices and an allocation of the reference symbol positions within the signaled group of transmission symbol positions. According to another aspect, the communication resource information may represent an allocation of transmission symbol positions and describe which group of transmission symbol positions should be used by which of the other data communication apparatuses and which of the transmission symbol positions should be used for a transmission of reference symbols by which of the other data communication apparatuses. The data communication apparatus2000may be configured to signal different allocations of transmission symbol positions in dependence on a current communication state (for example, in dependence on the communication state information2052). 
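The communication resource information described above — a group of transmission symbol positions plus a desired relative position of the reference symbol within that group — can be sketched as a small data structure. This is a hypothetical illustration only; the names (`ResourceAllocation`, `DmrsPlacement`, `dmrs_symbol`) and the three-way placement encoding are assumptions for the sketch, not part of any signaling standard.

```python
from dataclasses import dataclass
from enum import Enum

class DmrsPlacement(Enum):
    """Relative position of the reference symbol within the group."""
    BEGINNING = "beginning"
    INNER = "inner"
    END = "end"

@dataclass
class ResourceAllocation:
    """Hypothetical communication resource information for one UE."""
    ue_id: int
    start_symbol: int   # first OFDM symbol time interval of the group
    num_symbols: int    # temporal extension of the group
    dmrs_placement: DmrsPlacement

    def dmrs_symbol(self) -> int:
        """Absolute OFDM symbol index carrying the DMRS."""
        if self.dmrs_placement is DmrsPlacement.BEGINNING:
            return self.start_symbol
        if self.dmrs_placement is DmrsPlacement.END:
            return self.start_symbol + self.num_symbols - 1
        return self.start_symbol + self.num_symbols // 2

alloc = ResourceAllocation(ue_id=1, start_symbol=0, num_symbols=3,
                           dmrs_placement=DmrsPlacement.END)
print(alloc.dmrs_symbol())  # → 2
```

Under this encoding, the base station can change the reference symbol position between allocations simply by signaling a different `dmrs_placement` for the same group extent.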
The transmission symbol positions to be used for the transmission of reference symbols may be variable with respect to borders of time slots of the frame or with respect to borders of subframes of the frame between different allocations of transmission symbol positions or even for a single allocation of transmission symbol positions. Alternatively or in addition, the transmission symbol positions to be used for the transmission of reference symbols may be variable with respect to a corresponding group of transmission symbol positions between different allocations of transmission symbol positions or even for a single allocation of transmission symbol positions. In other words, the data communication apparatus2000may be configured to allocate the resources such that the transmission symbol positions to be used for the transmission of reference symbols vary between different resource allocation states or even between different groups of reference symbol positions in a single resource allocation state. Also, different reference symbol positions (for example, relative to one or more adjacent borders of a respective time slot or relative to one or more adjacent borders of a respective subframe, and/or relative to one or more borders of a selected group of transmission symbol positions) may vary. Also, even in a single communication state, these relative positions of the transmission symbol positions to be used for the transmission of reference symbols may vary among reference symbol positions associated with different groups of transmission symbol positions. For example, even in a single communication state, the data communication apparatus2000may make the resource allocation such that a first group of a given number of transmission symbol positions may have its associated reference symbol at the beginning, and another group of the same given number of transmission symbol positions may have its associated reference symbol in an inner part or at the end of the group. 
Furthermore, the data communication apparatus2000may have the flexibility to vary said (relative) positions of the reference symbol positions in the case that the current communication state changes. Accordingly, the data communication apparatus2000may provide the signaling information2040such that the signaling information instructs other data communication apparatuses (for example, the data communication apparatuses1700,1800,1900) to use different allocations of reference symbol positions, as described above with respect toFIGS.17,18and19. To further conclude, the data communication apparatus2000is configured to provide flexible communication resource information, which makes it possible to flexibly bring the data communication apparatuses1700,1800,1900into a plurality of different states as described above. According to an aspect of the invention, the data communication apparatus2000is configured to provide the communication resource information to at least two other data communication apparatuses (for example, user equipments), to cause a sharing between the at least two other data communication apparatuses of a transmission symbol position for a transmission of a reference symbol. In other words, the data communication apparatus2000may be configured to provide the signaling information or communication resource information2040such that the signaling information or communication resource information2040instructs at least two other data communication devices (user equipments) to transmit at least one reference symbol in a shared (identical) transmission symbol position (for example, at the same time). Thus, the data communication apparatus2000may coordinate a sharing of a transmission symbol position by at least two other data communication apparatuses (for example, user equipments), which results in a substantial saving of physical resources.
In this regard, the data communication apparatus2000may be configured to recognize under which situations a sharing of a transmission symbol position for the transmission of a reference symbol makes sense (for example, by providing a substantial saving of physical resources without excessively degrading a quality of a channel estimation). For example, different approaches may be used by the data communication apparatus to decide when such a sharing should be used. For example, the data communication apparatus2000may cause a sharing of a transmission symbol position when it is found that it is acceptable, in terms of an obtainable channel estimation quality, to place a reference symbol position at the end of a first group of transmission symbol positions associated with a first other data communication apparatus and to place a reference symbol position at a beginning of an immediately subsequent (but overlapping) group of transmission symbol positions associated with a second other data communication apparatus. Moreover, different strategies for deciding how such a sharing of transmission symbol positions should be made can also be used. Moreover, it should be noted that the data communication apparatus2000may also be configured to signal an appropriate selection of multiplexing characteristics in case that a sharing of a transmission symbol position is instructed by the data communication apparatus2000via the communication resource information2040. For example, the data communication apparatus2000may use the communication resource information2040to instruct two other data communication apparatuses sharing a transmission symbol position for the transmission of a reference symbol to use different multiplexing characteristics. 
For example, the data communication apparatus2000may provide the communication resource information2040such that a multiplexing characteristic selection information signaled to a first of the other data communication apparatuses indicates a different multiplexing characteristic when compared to a multiplexing characteristic selection information signaled to another one of the other data communication apparatuses. Consequently, the data communication apparatus2000may control the other data communication apparatuses to use such multiplexing characteristics which cause small or negligible mutual interference when a transmission symbol position is shared. Alternatively or in addition, the data communication apparatus2000may be configured to instruct two other data communication apparatuses to use the multiplexing patterns1980,1990(wherein one of the other data communication apparatuses is instructed to use the multiplexing pattern1980, and wherein another data communication apparatus is instructed to use the multiplexing pattern1990). To conclude, the data communication apparatus2000may not only instruct the other data communication apparatuses to share a transmission symbol position, but may also instruct the other data communication apparatuses to use multiplexing characteristics which reduce or avoid mutual distortions. Finally, it should be noted that the data communication apparatus2000may provide the communication resource information2040, which may, for example, comprise the information describing the relative position of a reference symbol position with respect to a selected group of transmission symbol positions, as described with respect to the data communication apparatus1700. Alternatively or in addition, the communication resource information2040may be used to signal the variable allocation of reference symbol positions as described with respect to the data communication apparatus1800.
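The coordination just described — giving each apparatus that shares a DMRS transmission symbol position a different multiplexing characteristic — can be sketched as follows. The helper name `assign_multiplexing` and the use of LTE-style cyclic shift indices 0..11 as the code-division characteristic are assumptions made for illustration; the document leaves the concrete multiplexing characteristic open (code, frequency, or space division).

```python
def assign_multiplexing(allocs):
    """Assign distinct DMRS cyclic shifts to allocations that share a
    DMRS transmission symbol position (hypothetical helper; cyclic
    shift indices 0..11 assumed as the code-division characteristic)."""
    shifts = {}
    by_symbol = {}  # DMRS symbol index -> cyclic shifts already used there
    for a in allocs:
        used = by_symbol.setdefault(a["dmrs_symbol"], [])
        # pick the lowest cyclic shift not already used at this symbol,
        # so apparatuses sharing a symbol get mutually different shifts
        shift = next(s for s in range(12) if s not in used)
        used.append(shift)
        shifts[a["ue"]] = shift
    return shifts

# Two UEs sharing the DMRS position at OFDM symbol 2 receive
# different cyclic shifts; a UE with its own symbol keeps shift 0.
allocs = [{"ue": 1, "dmrs_symbol": 2}, {"ue": 2, "dmrs_symbol": 2}]
print(assign_multiplexing(allocs))  # → {1: 0, 2: 1}
```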
Moreover, the communication resource information2040may, for example, comprise any of the information that may be used by the data communication apparatus1900performing the allocation of transmission symbol positions and the decision about the multiplexing characteristics to be used. Thus, the communication resource information2040may comprise one or several of the information items used by data communication apparatuses1700,1800,1900. In addition, it should be noted that the data communication apparatus2000may be supplemented or amended to perform any of the functionalities described herein, either individually or in combination. Also, the data communication apparatus2000may be adapted to control any of the functionalities of the data communication apparatuses1700,1800,1900by providing an appropriate communication resource information2040. 5. Overview Over the Proposed Technical Solutions In the following, the proposed technical solution will be discussed in some more detail. In particular, an overview of important points will be provided, and different important aspects of the invention will be described. According to an aspect of the invention, novel short-transmission-time-interval patterns (sTTI patterns) for the uplink are created. Details will be described below taking reference toFIG.3. According to another aspect, sTTI designs and an overlap of DMRS symbols are proposed. In particular, it is proposed to multiplex them in a suitable manner (for example, using code-division, and/or frequency division, and/or space division). Moreover, according to an aspect of the invention, patterns for a simplified scheduling are defined. For example, a pattern from a predefined codebook may be signaled. Alternatively, or in addition, it is possible to semi-statically signal a pattern. According to an aspect, they can be dynamically scheduled with an uplink (UL) grant. For example, they can be dynamically scheduled from a subset depending on the sTTI length.
According to another aspect, the patterns can be matched with SRS scheduling settings. According to another aspect, control information is provided to the user equipment to schedule DMRS positions to use in a given grant (or in several given grants). The control information can be before or after PUSCH (1a-1b;2a-2c;3a-3d;4a-4e). An sTTI length may be defined otherwise. According to another aspect of the invention, a control information for DMRS multiplexing mode may be provided and/or used. The control information may, for example, define a multiplexing mode to use (for example, code, frequency, and/or spatial, and/or non-orthogonal). Optionally, the control information may comprise additional parameters for a mode: for example, DMRS cyclic shift or frequency pattern. According to another aspect of the invention, predefined patterns, as defined above (as sTTI designs and overlap of DMRS symbols) may be used for statically assigning the positions and lengths of data and DMRS symbols. These predefined patterns may, for example, depend on sTTI lengths, assigned PRBs (physical resource blocks), subcarriers, and/or a signaled mode. According to another aspect, control information is provided or used to signal a subset of possible sTTI structures for a user equipment (UE) to use in a given UL grant or in several given UL grants. This concept may be used while defining the sTTI length otherwise. The concept may also be used while (semi)-statically defining the sTTI length. Reference is made, for example, to the cases2d,3a,3dor3a,3b,3c,3d,3e, as described herein. Novel sTTI Patterns for the Uplink In the following, some novel sTTI patterns will be described, which can be used for an uplink from a user equipment (for example, designated as a data communication apparatus for transmitting one or more blocks of data) to a base station or coordinating station (for example, designated as a data communication apparatus for receiving one or more blocks of data).
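The control information for the DMRS multiplexing mode described above — a mode selector plus optional mode-specific parameters such as a cyclic shift or a frequency pattern — can be sketched as a small record. The type names and field layout here are hypothetical; the document only specifies which pieces of information the control information may carry, not how they are encoded.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class DmrsMultiplexingMode(Enum):
    """Hypothetical encoding of the signaled DMRS multiplexing mode."""
    CODE = "code"                      # e.g. via a DMRS cyclic shift
    FREQUENCY = "frequency"            # e.g. via a frequency pattern
    SPATIAL = "spatial"
    NON_ORTHOGONAL = "non_orthogonal"

@dataclass
class DmrsControlInfo:
    """Control information selecting a mode plus optional parameters."""
    mode: DmrsMultiplexingMode
    cyclic_shift: Optional[int] = None       # parameter for CODE mode
    frequency_pattern: Optional[int] = None  # parameter for FREQUENCY mode

info = DmrsControlInfo(DmrsMultiplexingMode.CODE, cyclic_shift=4)
print(info.mode.value)  # → code
```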
The sTTI patterns described here may, for example, be used by the user equipment to determine the physical resources to be used for transmitting a data block and for transmitting the demodulation reference signal. Similarly, the novel sTTI patterns may be used by the base station or coordinating device to properly extract data blocks from a reception signal. In other words, the knowledge of the novel sTTI patterns can be used by a base station to decide which of the received symbols are reference symbols (and can therefore be used for a channel estimation) and which of the received symbols are data symbols and can be evaluated for an extraction of data blocks after the channel estimation. Coming from legacy LTE, each TTI (transmission time interval) contains its own demodulation reference symbols (DMRS reference symbols). In contrast, possible positions for the reference symbols according to the invention are shown inFIG.3(FIGS.3aand3b). It is proposed to support various patterns of PUSCH and DMRS symbols and combinations thereof. Reference is now made toFIGS.3aand3b, wherein each line (from the line designated with1ato the line designated with6p) shows a group of transmission symbol positions to be associated to a user equipment or, generally, to a data communication apparatus. It should be noted that each column (labeled from0to6) designates a (temporal) symbol position for an OFDM symbol. However, even though the different alternatives are shown in one line only, each of the alternatives (from1ato6g) can be expanded to a plurality of frequency bins or frequency subcarriers. In each of the lines inFIGS.3aand3b, the transmission symbol positions “PUSCH” and “DMRS” together may represent a “selected group of transmission symbol positions”, and the transmission symbol position “DMRS” may represent a reference symbol position. The data communication apparatuses described herein may choose any of the cases shown in the lines ofFIGS.3aand3b, i.e.
any of the combinations of a selected group of transmission symbol positions together with an associated reference symbol position. Cases1a,1band1cdescribe groups of transmission symbol positions each of which comprises one PUSCH (physical uplink shared channel) OFDM symbol time interval. For example, a first OFDM symbol time interval312comprises one or more reference symbol positions (in the case of multiple reference symbol positions, extending in the frequency direction). A second OFDM symbol time interval314comprises one or more symbol positions for uplink data (in the case of multiple transmission symbol positions, extending in the frequency direction). Thus, in the case1a, a first OFDM symbol time interval312is associated with one or more reference symbols, and a second OFDM symbol time interval314is associated with one or more “useful data” symbols. In a case1b, the order of the useful data and of the reference symbol or reference symbol(s) is reversed. Thus, it can be flexibly decided whether the reference symbol or reference symbols should be arranged at the beginning or at the end of the respective group of transmission symbol positions (each having a temporal extension of two of the OFDM symbol time intervals). In the case1c, one OFDM symbol time interval is allocated to “useful data” and two of the time intervals (of the group of symbol positions having a temporal extension of three OFDM symbol time intervals) are allocated to reference symbols. In the cases2ato2e, two of the OFDM symbol time intervals are associated to “useful data”. As can be seen in cases2a,2band2c, it can be chosen whether a symbol position at the beginning of the group (case2a), a symbol position in an inner part of the group (case2b) or a symbol position at the end of the group (case2c) should be used for the one or more reference symbols.
As can be seen in case2d, it is also possible to choose a case in which there is a reference symbol position both at the beginning of the group and at the end of a group. Case2eis another special case, wherein there is a reference symbol at the beginning of the group and at the end of the group and in an inner part of the group. This causes a relatively large overhead, but may be advantageous in the case of a strongly varying channel. Cases3ato3kshow situations in which there are three OFDM symbol time intervals used for “useful data”. The reference symbols can be arranged at the beginning of a group (case3a) or at the end of a group (case3d) or within an inner part of the group (cases3band3c) or both at the beginning and at the end of the group (case3e). In addition, some additional special cases are shown. The reference symbol position can be both at the beginning of the group and in an inner part of the group (cases3fand3g) or both in the inner part of a group and at the end of the group (cases3hand3i). Also, reference symbol positions can be at the beginning of a group, in an inner part of the group and at an end of the group (cases3jand3k). Similar resource allocations can also be made for the case of four OFDM symbol intervals being used for “useful data”. The reference symbol positions can be at the beginning (case4a) or at the end (case4e) or in an inner part (cases4b-4d) of the selected group. The reference symbol positions can also be both at the beginning and at the end of the selected group (case4f). Also, there can be several reference symbol positions within the inner part (4g). The cases of reference symbol positions at the beginning and in an inner part of the group (cases4h-4j), reference symbols in an inner part and at the end of the group (cases4k-4m) and reference symbols at the beginning and in an inner part and at the end of the group (cases4n-4p) are also possible.
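A small subset of this case catalogue can be sketched as a pattern codebook, one string per case, from which the relative DMRS positions follow directly. The string encoding ('R' for a DMRS reference symbol, 'D' for a PUSCH data symbol) and the helper `dmrs_offsets` are assumptions for illustration; only cases whose placement is stated unambiguously in the text are included.

```python
# Hypothetical subset of the FIG. 3a pattern codebook
# ('R' = DMRS reference symbol, 'D' = PUSCH data symbol).
CODEBOOK = {
    "1a": "RD", "1b": "DR",
    "2a": "RDD", "2b": "DRD", "2c": "DDR",
    "2d": "RDDR",
    "3a": "RDDD", "3d": "DDDR", "3e": "RDDDR",
}

def dmrs_offsets(case: str):
    """Relative DMRS positions within the group for a codebook case."""
    pattern = CODEBOOK[case]
    return [i for i, sym in enumerate(pattern) if sym == "R"]

print(dmrs_offsets("2d"))  # → [0, 3]
```

Signaling a short case label instead of explicit symbol positions is one way the "pattern from a predefined codebook" scheduling mentioned in the overview could be realized.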
Resource allocations for 5 OFDM symbol time intervals with useful data are designated as cases5a-5g, and resource allocations with six OFDM symbol time intervals being used for useful data are designated with6a-6g. It should be noted here that in the above examples, it was assumed that there should usually be one OFDM symbol time interval associated with useful data between two OFDM symbol time intervals associated with reference symbols, in order to obtain a reasonable resource efficiency. Also, it was assumed that, for realistic group lengths, it is typically sufficient to have only one OFDM symbol time interval within an inner part of the group used for a reference symbol position, provided that there is a reference symbol position at the beginning or at the end of the group (cf. the special case4g). To conclude, both the user equipment and the base station mentioned herein should be capable of handling all of the cases1ato6gas shown inFIGS.3aand3b, or at least a reasonable subset of these cases, in order to provide a sufficient variation possibility for the assignment of reference symbol positions. It should be noted that, without multiplexing (i.e., without sharing a reference symbol position between multiple data communication apparatuses), the variants with the DMRS in the middle are advantageous (for example, cases2b,3b,3c,4c). When multiplexing (sharing) the DMRS for more than one transmission time interval (or for more than one user equipment, or for more than one group of transmission symbol positions), it is beneficial to place the DMRS symbol more to the (common) center of the two (subsequent) multiplexed (shared) TTIs. For example, this means that the DMRS symbol should be placed at an end of a first group of transmission symbol positions and at the beginning of a subsequent (overlapping) second group of transmission symbol positions. An example is shown inFIG.4.
A first line (designated with a) shows a sequence of two groups of transmission symbol positions of type2b. In a second line (in the example b), UE1(for example, a first user equipment) is assigned pattern2c(as shown inFIG.3a), and UE2(for example, a second user equipment) is assigned pattern2a(for example, as shown inFIG.3a). Accordingly, the DMRS for both user equipments is multiplexed “in OFDM symbol2” (for example, in a third OFDM symbol time interval of a sub-frame). Thus, the base station may instruct the first user equipment to use a group of transmission symbol positions having a temporal extension of three OFDM symbol time intervals and having a reference symbol position at its end. Furthermore, the base station may instruct a second user equipment to use a group of transmission symbol positions covering three OFDM symbol time intervals and having the reference symbol position at the beginning. Also, it can be signaled that the group of transmission symbol positions used by the second user equipment should start two OFDM symbol time intervals later than the group of transmission symbol positions associated with the first user equipment. The multiplexing of the DMRS symbols can be done in any suitable manner. For example, a code division and/or frequency division and/or space division can be used. This can be done at the beginning and/or at the end of a transmission. In the following, some further examples will be described. FIG.6shows UE1and UE3(user equipment1and user equipment3) sending a short transmission with one shared DMRS. UE2(user equipment2) having a longer transmission shares two DMRS symbols at the beginning and at the end of the transmission. This can also be done for other reasons, e.g. a bad signal-to-noise ratio (SNR) or fast changing channel conditions. Worded differently, it actually makes sense to have groups of transmission symbol positions having a reference symbol position both at the beginning and at the end.
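The FIG.4arrangement — a first group with the DMRS at its end overlapping a second group, starting two symbol intervals later, with the DMRS at its beginning — can be checked with a short sketch. The helper name `shared_dmrs_symbol` and the 'R'/'D' string encoding of patterns are assumptions for illustration.

```python
def shared_dmrs_symbol(start1, pattern1, start2, pattern2):
    """Return the OFDM symbol index at which two patterns overlap on a
    DMRS position, or None (hypothetical helper; 'R' marks the DMRS,
    'D' marks a PUSCH data symbol)."""
    dmrs1 = {start1 + i for i, s in enumerate(pattern1) if s == "R"}
    dmrs2 = {start2 + i for i, s in enumerate(pattern2) if s == "R"}
    common = dmrs1 & dmrs2
    return min(common) if common else None

# UE1 uses pattern 2c (DMRS at the end), UE2 uses pattern 2a (DMRS at
# the beginning) starting two symbol intervals later, as in FIG. 4:
# both DMRS fall on OFDM symbol 2, so the position is shared.
print(shared_dmrs_symbol(0, "DDR", 2, "RDD"))  # → 2
```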
On the other hand, it also makes sense to have groups of transmission symbol positions only having a reference symbol position at the end or only having a reference symbol position at the beginning. For example, a group of transmission symbol positions according to example1binFIG.3amay be associated to user equipment1, a group of transmission symbol positions according to example3einFIG.3amay be associated with user equipment2, and a group of transmission symbol positions according to example1ainFIG.3amay be associated to user equipment3. To conclude,FIG.6shows a flexible positioning of shared DMRS symbols with different transmission lengths. FIG.5shows a multiplexing of user equipment with different bandwidth allocations. It can be seen that one user equipment can be multiplexed with more than one other user equipment when the frequency allocations are different. Taking reference now toFIG.5, a group of transmission symbol positions comprising a temporal length of (for example) three OFDM symbol time intervals and having a frequency extension of (for example) two frequency bins or frequency subcarriers may be associated with user equipment1. This may correspond to a frequency-spread version of the example2cshown inFIG.3a(adjusted to a frequency extension of two frequency bins or frequency subcarriers). Similarly, a group of transmission symbol positions comprising a temporal extension of, for example, three OFDM symbol time intervals and having a frequency extension of two frequency bins or frequency subcarriers may be associated with user equipment2. It also corresponds to configuration2cas shown inFIG.3a, extended to a frequency extension of two frequency bins or frequency subcarriers. The groups of transmission symbol positions associated with user equipment1and2both comprise the reference symbols at the end.
User equipment3has associated a group of transmission symbol positions having a temporal extension of, for example, three OFDM symbol time intervals and a frequency extension of three frequency bins or subcarriers. It may correspond to example2aas shown inFIG.3a, with the frequency extension adjusted to three frequency bins or subcarriers. User equipment4may have associated a group of transmission symbol positions having a temporal extension of, for example, 3 OFDM symbol time intervals and a frequency extension of one frequency bin or frequency subcarrier. Both user equipment3and user equipment4may be configured such that the reference symbol positions are at the beginning of the respective groups of transmission symbol positions. Accordingly, the group of transmission symbol positions associated with user equipment1only overlaps with the group of transmission symbol positions associated with user equipment3. On the other hand, the group of transmission symbol positions associated with user equipment2overlaps with the group of transmission symbol positions associated with user equipment3and with the group of transmission symbol positions associated with user equipment4. Again, it can be seen that the base station may provide communication resource information which allows for a very flexible allocation of groups of transmission symbol positions to the user equipment. The combination of different transmission lengths and bandwidth allocations is shown inFIG.7. Here, user equipment1(UE1) and user equipment3(UE3) multiplex their DMRS, as well as user equipment2,3and4(UE2, UE3and UE4). Taking reference toFIG.7a, it can be seen that, in a first frequency range, a first group of transmission symbol positions is allocated to a first user equipment UE1, and that a second group of transmission symbol positions, which overlaps with the previous group of transmission symbol positions, is allocated to a user equipment UE3.
Thus, there is a sharing of one or more reference symbol positions between user equipment1and user equipment3. In another, second frequency range, a group of transmission symbol positions is associated to a second user equipment2. Moreover, a temporal end of the group of transmission symbol positions associated to the user equipment3is identical to a temporal end of a group of transmission symbol positions associated to the user equipment2. However, it should be noted that the group of transmission symbol positions associated to user equipment1comprises reference symbol positions at the end. The group of transmission symbol positions associated with user equipment3comprises reference symbol positions both at the beginning and at the end. The group of transmission symbol positions associated with user equipment2comprises reference symbol positions only at the end. Moreover, a group of transmission symbol positions covering both the first frequency range and the second frequency range is associated to a fourth user equipment UE4. The group of transmission symbol positions associated to the fourth user device only comprises reference symbol positions at the beginning. Thus, the reference symbol positions at the end of the group of transmission symbol positions associated to the third user equipment overlap with the reference symbol positions at the beginning of the group of transmission symbol positions associated with the fourth user equipment in the first frequency range. Similarly, the reference symbol positions at the end of the group of transmission symbol positions associated with the second user equipment overlap the reference symbol positions associated with the fourth user equipment in the second frequency range. Accordingly, it can be seen that there is a possibility for a flexible resource allocation. 
In particular, a different number of groups of transmission symbol positions may be arranged in a first frequency range when compared to a second frequency range, while still sharing reference symbol positions with a group of transmission symbol positions covering both frequency ranges and having the reference symbol positions at the beginning. However, the allocation scheme shown inFIG.7acan naturally be mirrored with respect to time and/or with respect to frequency. A scenario shown inFIG.7bdiffers from a scenario shown inFIG.7ain that a group of transmission symbol positions lying between the group of transmission symbol positions associated with the first user equipment and the group of transmission symbol positions associated with the fourth user equipment only overlaps with the group of transmission symbol positions associated with the first user equipment, but does not overlap with the group of transmission symbol positions associated with the fourth user equipment. Accordingly, the reference symbol positions associated with the fourth user equipment and lying in the first frequency range are not shared, but only used by the fourth user equipment. However, the fourth user equipment still shares the reference symbol positions in the second frequency range with the second user equipment. Thus, it is apparent that it is not necessary to use each and every possibility for sharing reference symbol positions. Rather, in some cases, it may be unnecessary to share reference symbol positions, even if there would be a possibility to do so without creating additional overhead. To conclude, both the base station and the user equipment can use a very flexible scheme for the allocation of groups of transmission symbol positions, wherein, at some points, there may be a sharing of transmission symbol positions and wherein, at other points, there may be no sharing of reference symbol positions between neighboring groups of transmission symbol positions.
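Whether two groups of transmission symbol positions can share a reference symbol position in the two-dimensional grid depends on whether they overlap in both time and frequency. A minimal sketch of that check follows; the tuple representation `(t_start, t_end, f_start, f_end)` and the example coordinates are assumptions chosen to mirror the FIG.7asituation, not values taken from the figure.

```python
def overlaps(a, b):
    """True if two allocations overlap in both time and frequency.
    Each allocation is (t_start, t_end, f_start, f_end), inclusive
    (hypothetical representation of a group of transmission symbol
    positions in the two-dimensional grid)."""
    return (a[0] <= b[1] and b[0] <= a[1]
            and a[2] <= b[3] and b[2] <= a[3])

# FIG. 7a-like situation: UE1 in the first frequency range, UE2 in the
# second, UE4 covering both; UE1 and UE2 never overlap, but UE4's
# group can overlap (and thus share a DMRS position with) UE2's.
ue1 = (0, 2, 0, 1)   # first frequency range only
ue2 = (0, 4, 2, 3)   # second frequency range only
ue4 = (4, 6, 0, 3)   # covers both frequency ranges
print(overlaps(ue1, ue2))  # → False
print(overlaps(ue2, ue4))  # → True
```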
It should also be noted that the frame structure shown inFIG.7, as well as any other frame structures described herein, can be assigned semi-statically to certain subcarriers or signaled dynamically with an uplink resource grant. sTTI Design and Overlap of DMRS Symbols Embodiments according to the invention create or use sub-frame designs that allow multiplexing of DMRS symbols between multiple sTTIs with a length of 1, 2, 3 and 4 OFDM PUSCH symbols. Some of the possible variations are shown inFIG.8. However, it should be noted thatFIG.8shows some samples only; while it may be desirable in some embodiments that each and every possible variation is contained, this is naturally not necessary. Rather, it may, in some embodiments, be sufficient to implement some or all of the configurations shown inFIG.8, or even different configurations. Taking reference toFIG.8, it should be noted that the individual lines (numbered1to8) describe different allocations of transmission symbol positions to multiple user equipment. However, it should be noted that, as an example, each line comprises (or represents) 14 OFDM symbol time intervals (labeled from 0-13), such that each line represents a duration of an LTE sub-frame. As can be seen in a first line, a group of transmission symbol positions of type3d, having a reference symbol position at an end, and a group of transmission symbol positions of type3a, having a reference symbol position at a beginning, are allocated to the first seven OFDM symbol time intervals, wherein there is an overlap of the reference symbol positions. The same structure is repeated for the next seven OFDM symbol time intervals, wherein there is no overlap between the second group of transmission symbol positions and the third group of transmission symbol positions.
Accordingly, the reference symbol positions are in the fourth OFDM symbol time interval (interval “3”) and in the 11th OFDM symbol time interval (interval “10”), with a sharing in both cases. In a second line, yet another sequence is shown. A first group of transmission symbol positions is of type 4e, having a temporal extension of five OFDM symbol time intervals with the reference symbol position at the end. A second group of transmission symbol positions is of type 3e, comprises a total temporal extension of five OFDM symbol time intervals, and comprises a reference symbol position at the beginning and a reference symbol position at the end. A third group of transmission symbol positions is of type 4a, having a total temporal extension of five OFDM symbol time intervals. There is an overlap between the first and the second group of transmission symbol positions, and there is also an overlap between the second and the third group of transmission symbol positions. Moreover, there is an SRS symbol at the end (OFDM symbol index 13). A similar scenario is also shown in the third row, with some variation in the length of the groups of transmission symbol positions. In the fourth row, a scenario is shown in which there are three groups of transmission symbol positions, each comprising four useful symbols. A first group of transmission symbol positions comprises a reference symbol position at the end, a second group of transmission symbol positions (type 4f) comprises reference symbol positions both at the beginning and at the end, and a third group of transmission symbol positions (type 4a) comprises a reference symbol position only at the beginning. Accordingly, a particularly efficient channel estimation can be made, with a comparatively small overhead in terms of reference symbol positions. A similar case is also shown in a sixth row, but for temporally shorter groups of transmission symbol positions. 
A seventh row and an eighth row show cases which are similar to the example shown in the fourth row, but with unequal lengths of groups of transmission symbol positions having a shared reference symbol position. In other words, in the examples shown in the first row, in the seventh row and in the eighth row, a first and a second group of transmission symbol positions have a shared reference symbol position, and a third and a fourth group of transmission symbol positions also have a shared reference symbol position. However, there is no sharing of a reference symbol position between the second group of transmission symbol positions and the third group of transmission symbol positions. Also, there is no reference symbol at all at a transition from the second group of transmission symbol positions to the third group of transmission symbol positions in the examples of the first, seventh and eighth row. Taking reference now to the example in the fifth row, there is a sharing of a reference symbol position between the first group of transmission symbol positions and the second group of transmission symbol positions. There is no sharing of a reference symbol position between the second group of transmission symbol positions and the third group of transmission symbol positions. On the other hand, there is a sharing between the third group of transmission symbol positions and the fourth group of transmission symbol positions, and there is also a sharing of a reference symbol between the fourth group of transmission symbol positions and the fifth group of transmission symbol positions. Thus, it can be seen that the pattern of which neighboring groups of transmission symbol positions share reference symbol positions can also be irregular. Moreover, it should be noted that, in the above example of FIG. 8, the groups of transmission symbol positions have been numbered in accordance with their temporal position (first group, second group, third group, etc.). 
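The sharing relations described above can be checked mechanically. The following sketch is purely illustrative (the tuple layout, function names and the "begin"/"end" labels are assumptions, not part of the disclosure): each group of transmission symbol positions is modeled by its start symbol, its length and the placement of its reference symbol positions, and overlapping groups are tested for a shared DMRS position.

```python
# Hypothetical model: a group of transmission symbol positions is a
# (start, length, dmrs_at) tuple; dmrs_at is any combination of 'begin'/'end'.

def dmrs_positions(start, length, dmrs_at):
    """Absolute OFDM symbol indices carrying DMRS for one group."""
    pos = set()
    if 'begin' in dmrs_at:
        pos.add(start)
    if 'end' in dmrs_at:
        pos.add(start + length - 1)
    return pos

def shared_dmrs(group_a, group_b):
    """DMRS symbol positions shared by two (possibly overlapping) groups."""
    return dmrs_positions(*group_a) & dmrs_positions(*group_b)

# First line of FIG. 8: a type-3d-like group (DMRS at the end) followed by a
# type-3a-like group (DMRS at the beginning), overlapping in symbol 3.
g1 = (0, 4, {'end'})    # symbols 0..3, DMRS in symbol 3
g2 = (3, 4, {'begin'})  # symbols 3..6, DMRS in symbol 3
print(shared_dmrs(g1, g2))  # {3}
```

A non-overlapping pair, such as the transition between the second and third group in the first row, simply yields an empty intersection.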
Moreover, it should be noted that, in some embodiments, different ones of the configurations shown in the different lines of FIG. 8 can be used in a common physical resource block (PRB). In some embodiments, it is sufficient that there is the possibility to switch between the usage of the different configurations shown in the different lines of FIG. 8 (or between at least some of the different configurations shown in the lines of FIG. 8). For example, the base station may signal which of the configurations should be used (or which combination of configurations should be used). On the other hand, the user equipment should be able to react to the signaling from the base station, and should be able to adapt to the different allocations signaled by the base station. Accordingly, the user equipment should effectively be able to handle at least some of the configurations shown in the lines of FIG. 8, and the base station should be able to control the usage of different configurations. Control Information to User Equipment to Signal DMRS Position to Use in a Given Grant or in Several Given Grants To dynamically design the position of DMRS symbols, control information can be added signaling a user equipment where the DMRS symbol is to be transmitted. In a simple case, this can be implemented as a 1-bit Boolean signaling the DMRS position to be in the beginning or at the end of the transmission (for example, in the beginning of a selected group of transmission symbol positions or at the end of a selected group of transmission symbol positions). An example for such signaling is shown in the table of FIG. 13. FIG. 13 shows, in the form of a table, a DCI message (downlink control information). Some of the fields of the downlink control information may have a meaning as defined in a current LTE standard, for example in a version of the LTE standard effective at the date of filing of the present application. 
However, a 1-bit DMRS position information is added to the downlink control information. This 1-bit DMRS position information indicates, for example, a DMRS symbol position at a beginning or at an end of a transmission (for example, at the beginning or at the end of a group of transmission symbol positions associated with a user equipment). Alternatively, a scheme with an enumerated field allowing the definition of several DMRS positions (for example, more than two DMRS positions) may be used. This can be implemented by choosing a subset of transmission designs as shown in FIG. 4. An example of such a concept is shown in the table of FIG. 14. FIG. 14 shows, in the form of a table, a downlink control information message (DCI message) with an added 1-bit (or 2-bit) DMRS position field. As can be seen in FIG. 14, a field titled “DMRS position” is added to the downlink control information message (for example, to a downlink control information message as defined by a version of the LTE standard effective at the date of filing of the present application). Other contents of the downlink control information message can be seen from FIG. 14. As can be seen in FIG. 14, the DMRS position information may comprise one bit in the case of an sTTI having two symbols (or having a length of two symbol positions). In this case, the DMRS position information may distinguish between a case in which the DMRS position is at the “front” and a case in which the DMRS position is at the “back” (for example, of a selected group of transmission symbol positions). However, in the case that an sTTI comprises three or more symbols (or comprises a length of three or more symbols), two bits may be used for the DMRS position information. For example, a DMRS position at the front, at the back, in the middle or both at the front and at the back may be signaled by a 2-bit DMRS position information. 
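The length-dependent field width described above can be sketched as follows. All names and the ordering of the enumerated values are assumptions for illustration only; the disclosure does not fix a concrete bit-to-position mapping.

```python
# Illustrative sketch (names not taken from any standard): the DMRS position
# field of the DCI message is 1 bit for a 2-symbol sTTI and 2 bits otherwise.

POSITIONS_1BIT = ('front', 'back')
POSITIONS_2BIT = ('front', 'back', 'middle', 'front_and_back')

def dmrs_field_width(stti_len):
    """Assumed rule: one bit for a 2-symbol sTTI, two bits for longer sTTIs."""
    return 1 if stti_len == 2 else 2

def decode_dmrs_position(bits, stti_len):
    """Map the received field value onto a DMRS position."""
    table = POSITIONS_1BIT if dmrs_field_width(stti_len) == 1 else POSITIONS_2BIT
    return table[bits]

print(decode_dmrs_position(0b1, 2))   # 'back'
print(decode_dmrs_position(0b11, 4))  # 'front_and_back'
```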
For example, the DMRS position information may switch between cases 6a, 6g, 6d and 5g as described above (for example, as shown in FIG. 3). It should be noted here that the “DMRS position” information may correspond to the “information describing a relative position of a reference symbol position with respect to a selected group”, as mentioned above. However, it should be noted that different signaling options for the DMRS position information are also possible. Control Information for DMRS Multiplexing Mode When multiplexing the DMRS between two users, it is beneficial for a base station (or for an “eNB”) to transmit additional parameters such as which orthogonal code or frequency pattern should be used by each user equipment. When keeping the so-called “Zadoff-Chu” sequences, a cyclic shift can be used to distinguish the users. In other words, a cyclic shift information, which may be contained in the DCI message, may be used to determine an appropriate multiplexing characteristic for the transmission of reference symbols in a shared reference symbol position. For other multiplexing schemes, the DMRS cyclic shift field can be reused to signal the multiplexing pattern to use. This becomes especially relevant when one or more other users are using MIMO. Then, the sequences from each antenna need to be separated. An example in FIG. 9 shows two different modes (or multiplexing characteristics) (“mode A and mode B”) that can be signaled by one bit. The resources between users can also be shared, so that the DMRS is multiplexed on the same resources. Thus, interpolation performance is increased, especially at the edge positions of the allocated subband. It is sufficient to share the resources, so that the superposition of the signal is exploited at the receiver. That means, a simple addition of the DMRS symbols is sufficient. 
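The reason a cyclic shift suffices to distinguish users on a shared DMRS position is the ideal periodic autocorrelation of Zadoff-Chu sequences: differently shifted copies of the same base sequence are mutually orthogonal. The following sketch (sequence length and shift values chosen arbitrarily for illustration) demonstrates this numerically with the standard Zadoff-Chu definition.

```python
import cmath

def zadoff_chu(root, n_zc):
    """Zadoff-Chu base sequence of odd length n_zc with the given root."""
    return [cmath.exp(-1j * cmath.pi * root * n * (n + 1) / n_zc)
            for n in range(n_zc)]

def cyclic_shift(seq, alpha):
    """Cyclic shift, as used to separate users sharing a DMRS position."""
    return seq[alpha:] + seq[:alpha]

base = zadoff_chu(root=1, n_zc=11)
u1 = cyclic_shift(base, 0)   # user 1
u2 = cyclic_shift(base, 3)   # user 2, assumed shift of 3

# Inner product between differently shifted copies is (numerically) zero,
# so a receiver can separate the two users on the shared reference symbols:
xcorr = abs(sum(a * b.conjugate() for a, b in zip(u1, u2)))
print(round(xcorr, 9))  # 0.0
```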
In the following, a frequency multiplexing of DMRS, as shown in FIG. 9, will be briefly explained. FIG. 9 shows a frequency multiplexing of DMRS, with shared symbols at the edges for better interpolation. A first representation “a” with reference numeral 910 describes which OFDM symbols over frequency (for a given instant in time) are used as a first alternative when multiplexing a plurality of reference symbols. A second alternative is shown at reference numeral 920 (“b”). As can be seen at reference numeral 910, an OFDM symbol 912 having a highest frequency (within a frequency range used for transmission of a set of DMRS symbols) is a shared OFDM symbol, which is also used in the second alternative 920. Similarly, an OFDM symbol 914 having a lowest frequency (within the frequency range used for the provision of a plurality of DMRS symbols) is also a shared OFDM symbol, which is used in the second alternative as well. Between the OFDM symbols 914 and 912, there is an alternation of unused OFDM symbols 916a-916d and of “exclusive” OFDM symbols 918a-918d, which are only used in the first alternative 910, but not in the second alternative 920. The second alternative also comprises shared OFDM symbols 914′, 912′ at the lowest and at the highest frequency. These shared OFDM symbols 914′, 912′ correspond to the shared OFDM symbols 914, 912 (in that they have the same frequency). Moreover, the second alternative also comprises, between the OFDM symbols 914′ and 912′, an alternation of exclusive OFDM symbols 926a-926d, which are only used in the second alternative but not in the first alternative, and of unused OFDM symbols 928a-928d. It should be noted that the OFDM symbols 926a-926d correspond in frequency to the OFDM symbols 916a-916d. Similarly, the OFDM symbols 928a-928d correspond in frequency to the OFDM symbols 918a-918d. Accordingly, it should be noted that OFDM symbols 914, 914′ and 912, 912′ are shared between the first alternative “a” and the second alternative “b” of a frequency multiplexing. 
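The comb-like structure of FIG. 9 can be sketched as two interleaved subcarrier sets that coincide only at the band edges. The subcarrier count and index convention below are assumptions made for the example (index 0 is the lowest subcarrier); only the edge-sharing and alternation properties come from the figure description.

```python
# Hedged sketch of the FIG. 9 comb structure: two alternatives of a
# frequency-multiplexed DMRS share the lowest and highest subcarriers and
# alternate on the subcarriers in between.

def comb_pattern(alternative, n_subcarriers):
    """Subcarrier indices used by alternative 'a' or 'b'."""
    edges = {0, n_subcarriers - 1}            # shared edge symbols
    offset = 1 if alternative == 'a' else 2   # interleaved inner combs
    inner = set(range(offset, n_subcarriers - 1, 2))
    return edges | inner

a = comb_pattern('a', 10)
b = comb_pattern('b', 10)
print(sorted(a & b))  # shared only at the edges: [0, 9]
```

Sharing the edge subcarriers is what improves interpolation at the borders of the allocated subband: both users' channel estimates are anchored at the outermost frequencies.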
Between the shared OFDM symbols 914, 914′, 912, 912′, there are exclusively used OFDM symbols, which are either used in the first alternative “a” or in the second alternative “b”. To conclude, FIG. 9 shows a frequency multiplexing of a sequence of DMRS symbols, which can be used in case that a selected group of transmission symbol positions comprises a reasonably large number of frequency bins or subcarriers (i.e., a sufficiently large frequency range such that a frequency spreading or multiplexing as shown in FIG. 9 is possible). The frequency multiplexing as shown in FIG. 9 can then be applied to include reference symbols at a selected reference symbol position (for example, at the end of the group of transmission symbol positions and at the beginning of a subsequent and overlapping group of transmission symbol positions, with one alternative being used by a first user equipment and another alternative being used by a second user equipment). Inter-Cell Interference Coordination (ICIC) for DMRS Multiplexing When a DMRS symbol is transmitted, it can cause interference in neighboring cells. This can be minimized by transmitting known and different patterns at the same time. It is proposed to do this by specifying DMRS groups. Each group contains a subset of possible multiplexing patterns. The group can either be explicitly signaled or implicitly derived from other sets of parameters such as a cell identifier (cell ID). FIG. 10 shows a concept for inter-cell interference coordination for DMRS sequences by introducing orthogonal groups. FIG. 10 shows two neighboring eNBs 1010, 1020 (eNB1 and eNB2), each sending a UL assignment to one or more UEs (UE1 and UE2 in this case). The eNBs can be considered as base stations or as data communication devices. The user equipments 1030, 1040 can also be considered as data communication devices. Thus, base station 1010 sends a first uplink assignment 1012 to user equipment 1030, and base station 1020 sends a second uplink assignment 1022 to user equipment 1040. 
Both uplink assignments (UL assignments) specify the DMRS pattern (or DMRS multiplexing characteristic) as #1 (no. 1). At the user equipment side, pattern #1 is taken from the group subset. In this case, for user equipment 1 (UE1), this is pattern “a” (#1) from group g1. In other words, group 1 is assigned to user equipment 1, wherein this assignment can be made in different manners, as will be described below. Moreover, it should be noted that the user equipment 1030 transmits uplink data and DMRS data to the first base station 1010, wherein a multiplexing pattern “a” defined by group 1, pattern #1 is used for the transmission of the DMRS. Similarly, the user equipment 1040 transmits uplink data and DMRS, wherein a multiplexing pattern defined by group 2, pattern #1 is used for the DMRS. Taking reference now to FIG. 15, a grouping of DMRS multiplexing patterns is shown. In other words, table 3 of FIG. 15 shows a possible grouping (groups 1-3) of patterns (patterns “a” to “l”). Patterns “a” to “l” can be chosen to keep interference reasonably small. In particular, the patterns may be chosen such that the interference between patterns of different groups is particularly small. It should be noted that this concept reduces the signaling used by the base station eNB, as only a pattern number (pattern #1-4) (2-bit) has to be signaled while maintaining inter-cell orthogonality of the DMRS sequences. An example implementation of two such patterns is shown in FIG. 11. In the example implementation of two patterns, “A” to “D” are different users transmitting. This equally spreads the four users sharing DMRS (A-D) on a time frequency resource to three simultaneously transmitting users. Taking reference now to FIG. 11, two multiplexing patterns are shown. A first multiplexing pattern 1110 defines, for example, at which transmission symbol position which user equipment (data communication apparatus) should transmit. 
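The grouping idea of FIG. 15 can be sketched as a two-level lookup: the group is derived implicitly (here, as an assumption, from the cell ID modulo the number of groups; the disclosure leaves the derivation rule open), and the UL grant carries only the 2-bit pattern number. The concrete group contents below are made up for the example.

```python
# Illustrative grouping in the spirit of table 3 of FIG. 15 (groups 1-3 of
# patterns "a" to "l"); the mapping and derivation rule are assumptions.

GROUPS = {
    1: ['a', 'b', 'c', 'd'],
    2: ['e', 'f', 'g', 'h'],
    3: ['i', 'j', 'k', 'l'],
}

def group_from_cell_id(cell_id):
    """Assumed implicit derivation of the DMRS group from the cell ID."""
    return (cell_id % len(GROUPS)) + 1

def pattern_for_grant(cell_id, pattern_number):
    """pattern_number is the 2-bit field (#1..#4) signaled in the UL grant."""
    return GROUPS[group_from_cell_id(cell_id)][pattern_number - 1]

# Two neighboring cells signal the same pattern number #1, yet their UEs end
# up with patterns from different (mutually low-interference) groups:
print(pattern_for_grant(cell_id=0, pattern_number=1))  # 'a'
print(pattern_for_grant(cell_id=1, pattern_number=1))  # 'e'
```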
It should be noted that the transmission symbol positions defined by the first multiplexing pattern 1110 are typically temporally simultaneous, but at different frequencies (different frequency bins or different subcarriers). For example, it can be seen that a user equipment “A” should transmit (e.g. an OFDM modulation symbol) at a second frequency 1114, a third frequency 1116 and a fourth frequency 1118. In contrast, user equipment “B” should transmit at a first frequency 1112, at the second frequency 1114 and at the fourth frequency 1118. The multiplexing pattern for device “A” (second frequency, third frequency, fourth frequency) may, for example, correspond to multiplexing pattern “a”. Similarly, the multiplexing pattern for user equipment “B” (first frequency, second frequency, fourth frequency) may correspond to multiplexing pattern “b”. A second multiplexing pattern 1120 defines that user equipment “C” transmits at the first frequency 1112′, at the third frequency 1116′ and at the fourth frequency 1118′. Moreover, the second multiplexing pattern 1120 defines that the user equipment “D” should transmit at the first frequency 1112′, at the second frequency 1114′ and at the third frequency 1116′. Thus, the multiplexing pattern of device “C” (first frequency, third frequency, fourth frequency) may correspond to multiplexing pattern “c” (or, alternatively, to multiplexing pattern “e”). The multiplexing pattern for device “D” (first frequency, second frequency, third frequency) may correspond to multiplexing pattern “d” (or, alternatively, to multiplexing pattern “f”). Thus, the multiplexing patterns 1110, 1120 avoid, even under worst case conditions, that all four user equipments transmit at the same time on the same resource. However, it should be noted that different multiplexing patterns naturally can also be used. 
Predefined Patterns Predefined patterns as defined in the section “sTTI design and overlap of DMRS symbols” for statically assigning the positions and lengths of data and DMRS symbols will be described in the following. To reduce the signaling overhead, configured or predefined patterns can be used. These can be defined depending on e.g. the transmission length (sTTI), assigned frequency (PRBs), component carriers or subcarriers used, transmission time or mode signaled to the transmitter (user equipment UE). The user equipment can then use the transmission pattern according to the assigned resources. The example in FIG. 12 shows such an assignment. Depending on in which TTI (transmission time interval) and at which frequency a user is scheduled, the user will use the defined transmission time interval length and DMRS position. In other words, a base station may signal which overall resource allocation scheme out of a plurality of possible overall resource allocation schemes should be used. This selected resource allocation scheme then forms the basis for the resource allocation to the individual user equipments. For example, individual information describing the resources to be used by an individual user equipment may define, in some manner, which of the groups of transmission symbol positions defined by the overall resource allocation scheme should be used by the given user equipment. On the basis of this information, a user equipment may obtain additional information (for example, the position of the reference symbols) from the overall resource allocation scheme. 
Thus, it is no longer necessary to signal each and every detail of a resource allocation, as long as a user equipment can determine these features (for example, the reference symbol positions, the temporal extension of its group of transmission symbol positions and/or the frequency extension of its allocated group of transmission symbol positions) on the basis of its knowledge of the overall resource allocation scheme. An example assignment of TTI length and DMRS position in frequency and time on an LTE resource grid is shown in FIG. 12. In other words, FIG. 12 shows a representation of an overall resource allocation in a two-dimensional grid of transmission symbol positions. It can be seen that in a first time portion 1210 there is a plurality of different groups of transmission symbol positions having different temporal extensions. In a first frequency range 1220, there are four groups, wherein a first pair of groups 1230a, 1230b shares reference symbol positions, and wherein a second pair of groups 1230c, 1230d also shares reference symbol positions. In a second frequency range 1222, there are two groups of transmission symbol positions, also sharing reference symbol positions. In a third frequency range 1224, there is only one group of transmission symbol positions. It can be seen that, even in the first time portion 1210, the reference symbol positions vary over frequency, i.e., do not follow a fixed grid (for example, even within a physical resource block). In a second time portion 1212, there is a similar resource allocation. However, there are two groups of transmission symbol positions in the first frequency range, there is one group of transmission symbol positions in the second frequency range, and there are four groups of transmission symbol positions in the third frequency range. Thus, for example, one may only assign which of the predefined groups of transmission symbol positions should be used by which of the user equipments. 
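The derivation of parameters from a predefined overall allocation can be sketched as a shared lookup table. Everything below is a hypothetical example (the table entries, lengths and position labels are invented; only the mechanism, both sides consulting the same predefined scheme, reflects the description of FIG. 12).

```python
# Hypothetical predefined overall resource allocation: the network need only
# indicate a UE's region; the UE derives sTTI length and DMRS position from a
# table known to both sides.

ALLOCATION = {
    # (time_portion, frequency_range): (stti_length, dmrs_position)
    (1, 1): (2, 'shared_end'),
    (1, 2): (4, 'shared_end'),
    (1, 3): (7, 'begin'),
    (2, 1): (4, 'shared_end'),
    (2, 2): (7, 'begin'),
    (2, 3): (2, 'shared_end'),
}

def derive_parameters(time_portion, frequency_range):
    """What a UE can infer once it knows its assigned region."""
    return ALLOCATION[(time_portion, frequency_range)]

print(derive_parameters(1, 3))  # (7, 'begin')
```

Only the region assignment needs to be signaled; sTTI length and DMRS position follow from the table, which is the overhead reduction argued above.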
On the basis of its knowledge of the overall resource allocation scheme, the user equipment can then derive the other parameters that may be used (temporal extension, frequency extension, position of the reference symbols). General Remarks Regarding sTTI Concept For FDD and TDD systems, a novel radio frame structure is currently under discussion to better support traffic for ultra-reliable low latency communications (URLLC). By introducing the short TTI (sTTI) concept in future LTE releases, see e.g. the 3GPP working item on latency reduction, the restriction to subframe sizes can be overcome. The current working assumption for future LTE Release 14 is to allow the sTTI concept with the following configurations:
FDD systems:
- Downlink (PDSCH), working assumption: sTTI with 2, 3-4, 7 OFDM symbols (OS)
- Uplink (PUSCH), working assumption: sTTI with 2, 3-4 OFDM symbols (OS)
TDD systems, working assumption:
- 1-slot (=7 OFDM symbols) sTTI for sPDSCH/sPDCCH/sPUSCH/sPUCCH
In future mobile communication standards referred to as New Radio (NR) or 5G, the length of a TTI may be reduced to support a shortened version of only 1 OFDM symbol or at least the configurations described above, which are proposed for URLLC in LTE Rel. 14. General remarks regarding waveform generation/waveform analysis in the data communication apparatuses In embodiments, the transceiver (or the data communication apparatus) can be a base station (or a user equipment) in the wireless communication system, and the data signal (or transmission signal, or modulated signal) is an IFFT based signal, the IFFT based signal having a plurality of frames, the frame including a plurality of sub-frames. For example, the IFFT (inverse fast Fourier transform) based signal may include OFDM with CP or DFT-s-OFDM with CP and IFFT-based waveforms without CP. For example, OFDM with CP may be used for downlink transmission. For example, DFT-s-OFDM with CP may be used for uplink transmission. 
Conclusions To conclude, embodiments according to the invention create a flexible DMRS mapping for sTTI in the uplink. Aspects according to the present invention can be summarized as follows:
E1) Novel sTTI patterns for the uplink (see FIG. 3)
E2) sTTI designs and overlap of DMRS symbols
- multiplexing them in a suitable manner (e.g. code division, frequency division, space division)
- define patterns for simplified scheduling
- (signal a) pattern from a predefined codebook
- semi-statically signal the pattern
- dynamically schedule them with the UL grant, from a subset depending on the sTTI length
- match the patterns with SRS scheduling settings
E3) control information to UE to signal DMRS position to use in a (or several) given grant
- before or after PUSCH (1a-1b; 2a-2c; 3a-3d; 4a-4e)
- sTTI length is defined otherwise
E4) control information for DMRS multiplexing mode
- e.g. multiplexing mode to use, e.g. code, frequency, spatial or non-orthogonal
- additional parameters for mode: e.g. DMRS cyclic shift or frequency pattern
E5) predefined patterns as defined in E2 for statically assigning the positions and lengths of data and DMRS symbols
- depending on e.g. sTTI length, assigned PRBs, subcarriers, signaled mode
E6) control information to signal a subset of possible sTTI structures for a UE to use in a (or several) given UL grant
- while defining the sTTI length otherwise
- while (semi)-statically defining the sTTI length
- e.g. 2d, 3a, 3d or 3a, 3b, 3c, 3d, 3e
Embodiments according to the invention allow for a flexible positioning of sTTIs of same or different lengths within a sub-frame. It is possible to reduce overhead by multiplexing DMRS of neighboring sTTIs. These can be from the same user equipment or different user equipments. Some embodiments according to the invention create an improved signaling. Embodiments according to the invention may, for example, be used in latency-constrained (mission-critical) communication services. 6. 
Methods FIGS. 21 to 25 show flowcharts of methods for transmitting one or more blocks of data, according to embodiments of the present invention. FIGS. 26 to 28 show flowcharts of methods for receiving one or more blocks of data, according to embodiments of the present invention. These methods are based on the same considerations as the apparatuses described herein. The methods can be supplemented by any of the features and functionalities described herein. 7. Implementation Alternatives Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus. Some or all of the method steps may be executed by (or using) a hardware apparatus, like, for example, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus. Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable. Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed. 
Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier. Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier. In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer. A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier, the digital storage medium or the recorded medium are typically tangible and/or non-transitory. A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet. A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein. A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein. A further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. 
The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver. In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are advantageously performed by any hardware apparatus. The apparatus described herein may be implemented using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer. The apparatus described herein, or any components of the apparatus described herein, may be implemented at least partially in hardware and/or in software. The methods described herein may be performed using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer. The methods described herein, or any components of the apparatus described herein, may be performed at least partially by hardware and/or by software. While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations and equivalents as fall within the true spirit and scope of the present invention. 8. References [1] 3GPP R1-163723, Ericsson, Qualcomm, “WF on DMRS for sPUSCH”. [2] 3GPP TS 36.321 V13.1.0 (2016 March), p. 42ff. [3] 3GPP TS 36.331 V13.1.0 (2016 March), p. 354.
11943159
DETAILED DESCRIPTION The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well known structures and components are shown in block diagram form in order to avoid obscuring such concepts. Generally, a demodulation reference signal (DMRS) may be specified for particular user equipment (UE), and used to estimate a radio channel used by the UE and a base station for communication. The DMRS may be used by a receiver for radio channel estimation for demodulation of a physical channel that carried the DMRS, for example, physical broadcast channel (PBCH), physical downlink control channel (PDCCH), physical downlink shared channel (PDSCH), physical uplink shared channel (PUSCH), etc. The 3GPP specifications currently provide that if at least one resource element (RE) of a PDCCH candidate for a UE on a serving cell overlaps with at least one RE of lte-CRS-ToMatchAround, or of LTE-CRS-PatternList, the UE is not required to monitor the PDCCH candidate. In other words, the UE does not need to monitor the PDCCH if at least one of the PDCCH REs overlaps with a common (cell-specific) reference signal (CRS). In some examples, the CRS pattern (e.g., or a set of CRS REs) may be configured by higher-layer signaling (e.g., radio resource control (RRC) signaling or medium access control (MAC) control element (CE) provides CRS pattern) or by physical (PHY) layer signaling (e.g., downlink control information (DCI)). 
Here, based on the higher-layer signaling, the UE may assume that the CRS is present in REs according to the signaling, although the CRS may not always be present in the REs (e.g., the base station may not always transmit CRS). Thus, in an example where the CRS is not transmitted with a downlink transmission, the downlink transmission may still be punctured by the CRS pattern. In an example where the CRS is transmitted with a downlink transmission, the downlink transmission may be punctured by, or rate-matched around, the CRS pattern. Typically a PDCCH candidate may span 1-3 orthogonal frequency division multiplexing (OFDM) symbols. However, if an RE carries a CRS in one or more of those symbols, then those symbols are not expected to be a PDCCH candidate, and the UE will not monitor them. More specifically, the UE will not monitor those symbols for DMRS. In an example where a PDCCH candidate spans 3 symbols and the first two symbols carry CRS, the UE will only monitor the last symbol for PDCCH and DMRS. Here, because the PDCCH includes DMRS REs, channel estimation performance will be degraded because the PDCCH is only one symbol, resulting in low signal-to-noise ratio (SNR) of DMRS signals. This degradation can arise even in a scenario where the PDCCH candidate is rate matched around the CRS REs of the first two symbols (e.g., the PDCCH candidate occupies symbols 1-3, and is rate matched around the CRS REs of the first two symbols). In this scenario, the CRS pattern may interfere with the DMRS pattern of the PDCCH (e.g., the CRS pattern may puncture the DMRS pattern). In this example, because the PDCCH includes DMRS REs, and because the PDCCH is punctured around the CRS REs, channel estimation performance will also be degraded because the puncturing may eliminate DMRS REs. 
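The monitoring rule described above reduces to a set-intersection check over RE positions. A minimal sketch, under assumptions: the (symbol, subcarrier) coordinate representation, the function name, and the CRS pattern below are illustrative, not taken from the specification.

```python
# Hypothetical sketch of the PDCCH-candidate monitoring rule described above:
# if any RE of a candidate overlaps a configured CRS RE, the UE skips the candidate.
# RE coordinates are (symbol, subcarrier) tuples; all names here are illustrative.

def monitors_candidate(candidate_res, crs_res):
    """Return True if the UE monitors this PDCCH candidate (no CRS overlap)."""
    return not (set(candidate_res) & set(crs_res))

# Illustrative CRS pattern on symbols 0 and 1 at every 3rd subcarrier
crs = {(sym, sc) for sym in (0, 1) for sc in range(0, 12, 3)}

# A 3-symbol candidate covering subcarriers 0-11 overlaps the CRS -> not monitored
candidate = [(sym, sc) for sym in (0, 1, 2) for sc in range(12)]
assert monitors_candidate(candidate, crs) is False

# A candidate confined to symbol 2 has no CRS overlap -> monitored
candidate_sym2 = [(2, sc) for sc in range(12)]
assert monitors_candidate(candidate_sym2, crs) is True
```

This also illustrates the degradation discussed above: only the symbol-2 candidate survives the check, leaving a single symbol's worth of DMRS for channel estimation.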
Thus, in certain aspects, the disclosure is directed to techniques for transmitting reference signals over REG bundles that are part of the same superset as another REG bundle that is used to carry a downlink transmission. In this manner, the UE may monitor a broader spectrum of reference signals in order to improve channel estimation. Several aspects of telecommunication systems will now be presented with reference to various apparatus and methods. These apparatus and methods will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, components, circuits, processes, algorithms, etc. (collectively referred to as “elements”). These elements may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. By way of example, an element, or any portion of an element, or any combination of elements may be implemented as a “processing system” that includes one or more processors. Examples of processors include microprocessors, microcontrollers, graphics processing units (GPUs), central processing units (CPUs), application processors, digital signal processors (DSPs), reduced instruction set computing (RISC) processors, systems on a chip (SoC), baseband processors, field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. One or more processors in the processing system may execute software. 
Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software components, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Throughout the disclosure, a “network node” may be used to refer to a base station (e.g., an aggregated base station as illustrated inFIG.4), or as a component of a base station such as one or more of a central unit (CU), a distributed unit (DU), a radio unit (RU), a near-real time (near-RT) radio access network (RAN) intelligent controller (RIC), or a non-real time (non-RT) RIC, illustrated in the disaggregated base station ofFIG.5. Accordingly, in one or more example embodiments, the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise a random-access memory (RAM), a read-only memory (ROM), an electrically erasable programmable ROM (EEPROM), optical disk storage, magnetic disk storage, other magnetic storage devices, combinations of the aforementioned types of computer-readable media, or any other medium that can be used to store computer executable code in the form of instructions or data structures that can be accessed by a computer. FIG.1is a diagram illustrating an example of a wireless communications system and an access network100. 
The wireless communications system (also referred to as a wireless wide area network (WWAN)) includes base stations102, user equipment(s) (UE)104, an Evolved Packet Core (EPC)160, and another core network190(e.g., a 5G Core (5GC)). The base stations102may include macrocells (high power cellular base station) and/or small cells (low power cellular base station). The macrocells include base stations. The small cells include femtocells, picocells, and microcells. The base stations102configured for 4G Long Term Evolution (LTE) (collectively referred to as Evolved Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access Network (E-UTRAN)) may interface with the EPC160through first backhaul links132(e.g., S1 interface). The base stations102configured for 5G New Radio (NR) (collectively referred to as Next Generation RAN (NG-RAN)) may interface with core network190through second backhaul links184. In addition to other functions, the base stations102may perform one or more of the following functions: transfer of user data, radio channel ciphering and deciphering, integrity protection, header compression, mobility control functions (e.g., handover, dual connectivity), inter-cell interference coordination, connection setup and release, load balancing, distribution for non-access stratum (NAS) messages, NAS node selection, synchronization, radio access network (RAN) sharing, Multimedia Broadcast Multicast Service (MBMS), subscriber and equipment trace, RAN information management (RIM), paging, positioning, and delivery of warning messages. The base stations102may communicate directly or indirectly (e.g., through the EPC160or core network190) with each other over third backhaul links134(e.g., X2 interface). The first backhaul links132, the second backhaul links184, and the third backhaul links134may be wired or wireless. The base stations102may wirelessly communicate with the UEs104. 
Each of the base stations102may provide communication coverage for a respective geographic coverage area110. There may be overlapping geographic coverage areas110. For example, the small cell102′ may have a coverage area110′ that overlaps the coverage area110of one or more macro base stations102. A network that includes both small cell and macrocells may be known as a heterogeneous network. A heterogeneous network may also include Home Evolved Node Bs (eNBs) (HeNBs), which may provide service to a restricted group known as a closed subscriber group (CSG). The communication links120between the base stations102and the UEs104may include uplink (UL) (also referred to as reverse link) transmissions from a UE104to a base station102and/or downlink (DL) (also referred to as forward link) transmissions from a base station102to a UE104. The communication links120may use multiple-input and multiple-output (MIMO) antenna technology, including spatial multiplexing, beamforming, and/or transmit diversity. The communication links may be through one or more carriers. The base stations102/UEs104may use spectrum up to Y megahertz (MHz) (e.g., 5, 10, 15, 20, 100, 400, etc. MHz) bandwidth per carrier allocated in a carrier aggregation of up to a total of Yx MHz (x component carriers) used for transmission in each direction. The carriers may or may not be adjacent to each other. Allocation of carriers may be asymmetric with respect to DL and UL (e.g., more or fewer carriers may be allocated for DL than for UL). The component carriers may include a primary component carrier and one or more secondary component carriers. A primary component carrier may be referred to as a primary cell (PCell) and a secondary component carrier may be referred to as a secondary cell (SCell). Certain UEs104may communicate with each other using device-to-device (D2D) communication link158. The D2D communication link158may use the DL/UL WWAN spectrum. 
The D2D communication link158may use one or more sidelink channels, such as a physical sidelink broadcast channel (PSBCH), a physical sidelink discovery channel (PSDCH), a physical sidelink shared channel (PSSCH), and a physical sidelink control channel (PSCCH). D2D communication may be through a variety of wireless D2D communications systems, such as for example, WiMedia, Bluetooth, ZigBee, Wi-Fi based on the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard, LTE, or NR. The wireless communications system may further include a Wi-Fi access point (AP)150in communication with Wi-Fi stations (STAs)152via communication links154, e.g., in a 5 gigahertz (GHz) unlicensed frequency spectrum or the like. When communicating in an unlicensed frequency spectrum, the STAs152/AP150may perform a clear channel assessment (CCA) prior to communicating in order to determine whether the channel is available. The small cell102′ may operate in a licensed and/or an unlicensed frequency spectrum. When operating in an unlicensed frequency spectrum, the small cell102′ may employ NR and use the same unlicensed frequency spectrum (e.g., 5 GHz, or the like) as used by the Wi-Fi AP150. The small cell102′, employing NR in an unlicensed frequency spectrum, may boost coverage to and/or increase capacity of the access network. The electromagnetic spectrum is often subdivided, based on frequency/wavelength, into various classes, bands, channels, etc. In 5G NR, two initial operating bands have been identified as frequency range designations FR1 (410 MHz-7.125 GHz) and FR2 (24.25 GHz-52.6 GHz). The frequencies between FR1 and FR2 are often referred to as mid-band frequencies. Although a portion of FR1 is greater than 6 GHz, FR1 is often referred to (interchangeably) as a “sub-6 GHz” band in various documents and articles. 
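The clear channel assessment mentioned above can be sketched as a simple energy-detection check. This is a hedged illustration only: the function name and the −62 dBm threshold are assumptions for the sketch, not values taken from the text or from any regulation.

```python
# Illustrative listen-before-talk check: compare measured energy samples to a
# threshold before transmitting in unlicensed spectrum. The -62 dBm default is
# an assumed value for this sketch, not sourced from the passage above.

def channel_clear(sample_powers_dbm, threshold_dbm=-62.0):
    """CCA passes only if every measured sample is below the threshold."""
    return all(p < threshold_dbm for p in sample_powers_dbm)

assert channel_clear([-75.0, -80.2, -90.1]) is True   # idle channel
assert channel_clear([-75.0, -50.3]) is False          # ongoing transmission detected
```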
A similar nomenclature issue sometimes occurs with regard to FR2, which is often referred to (interchangeably) as a “millimeter wave” band in documents and articles, despite being different from the extremely high frequency (EHF) band (30 GHz-300 GHz) which is identified by the International Telecommunications Union (ITU) as a “millimeter wave” band. With the above aspects in mind, unless specifically stated otherwise, it should be understood that the term “sub-6 GHz” or the like if used herein may broadly represent frequencies that may be less than 6 GHz, may be within FR1, or may include mid-band frequencies. Further, unless specifically stated otherwise, it should be understood that the term “millimeter wave” or the like if used herein may broadly represent frequencies that may include mid-band frequencies, may be within FR2, or may be within the EHF band. A base station102, whether a small cell102′ or a large cell (e.g., macro base station), may include and/or be referred to as an eNB, gNodeB (gNB), or another type of base station. Some base stations, such as gNB180may operate in a traditional sub 6 GHz spectrum, in millimeter wave frequencies, and/or near millimeter wave frequencies in communication with the UE104. When the gNB180operates in millimeter wave or near millimeter wave frequencies, the gNB180may be referred to as a millimeter wave base station. The millimeter wave base station180may utilize beamforming182with the UE104to compensate for the path loss and short range. The base station180and the UE104may each include a plurality of antennas, such as antenna elements, antenna panels, and/or antenna arrays to facilitate the beamforming. The base station180may transmit a beamformed signal to the UE104in one or more transmit directions182′. The UE104may receive the beamformed signal from the base station180in one or more receive directions182″. The UE104may also transmit a beamformed signal to the base station180in one or more transmit directions. 
The base station180may receive the beamformed signal from the UE104in one or more receive directions. The base station180/UE104may perform beam training to determine the best receive and transmit directions for each of the base station180/UE104. The transmit and receive directions for the base station180may or may not be the same. The transmit and receive directions for the UE104may or may not be the same. The EPC160may include a Mobility Management Entity (MME)162, other MMEs164, a Serving Gateway166, an MBMS Gateway168, a Broadcast Multicast Service Center (BM-SC)170, and a Packet Data Network (PDN) Gateway172. The MME162may be in communication with a Home Subscriber Server (HSS)174. The MME162is the control node that processes the signaling between the UEs104and the EPC160. Generally, the MME162provides bearer and connection management. All user Internet protocol (IP) packets are transferred through the Serving Gateway166, which itself is connected to the PDN Gateway172. The PDN Gateway172provides UE IP address allocation as well as other functions. The PDN Gateway172and the BM-SC170are connected to the IP Services176. The IP Services176may include the Internet, an intranet, an IP Multimedia Subsystem (IMS), a PS Streaming Service, and/or other IP services. The BM-SC170may provide functions for MBMS user service provisioning and delivery. The BM-SC170may serve as an entry point for content provider MBMS transmission, may be used to authorize and initiate MBMS Bearer Services within a public land mobile network (PLMN), and may be used to schedule MBMS transmissions. The MBMS Gateway168may be used to distribute MBMS traffic to the base stations102belonging to a Multicast Broadcast Single Frequency Network (MBSFN) area broadcasting a particular service, and may be responsible for session management (start/stop) and for collecting eMBMS related charging information. 
The core network190may include an Access and Mobility Management Function (AMF)192, other AMFs193, a Session Management Function (SMF)194, and a User Plane Function (UPF)195. The AMF192may be in communication with a Unified Data Management (UDM)196. The AMF192is the control node that processes the signaling between the UEs104and the core network190. Generally, the AMF192provides Quality of Service (QoS) flow and session management. All user IP packets are transferred through the UPF195. The UPF195provides UE IP address allocation as well as other functions. The UPF195is connected to the IP Services197. The IP Services197may include the Internet, an intranet, an IMS, a Packet Switch (PS) Streaming Service, and/or other IP services. The base station may include and/or be referred to as a gNB, Node B, eNB, an access point, a base transceiver station, a radio base station, a radio transceiver, a transceiver function, a basic service set (BSS), an extended service set (ESS), a transmit reception point (TRP), or some other suitable terminology. The base station102provides an access point to the EPC160or core network190for a UE104. Examples of UEs104include a cellular phone, a smart phone, a session initiation protocol (SIP) phone, a laptop, a personal digital assistant (PDA), a satellite radio, a global positioning system, a multimedia device, a video device, a digital audio player (e.g., MP3 player), a camera, a game console, a tablet, a smart device, a wearable device, a vehicle, an electric meter, a gas pump, a large or small kitchen appliance, a healthcare device, an implant, a sensor/actuator, a display, or any other similar functioning device. Some of the UEs104may be referred to as IoT devices (e.g., parking meter, gas pump, toaster, vehicles, heart monitor, etc.). 
The UE104may also be referred to as a station, a mobile station, a subscriber station, a mobile unit, a subscriber unit, a wireless unit, a remote unit, a mobile device, a wireless device, a wireless communications device, a remote device, a mobile subscriber station, an access terminal, a mobile terminal, a wireless terminal, a remote terminal, a handset, a user agent, a mobile client, a client, or some other suitable terminology. Referring again toFIG.1, in certain aspects, the UE104may include a channel estimating module198configured to receive, from a network node, communication parameters for a downlink transmission, the communication parameters indicative of a first resource element group (REG) bundle and a second REG bundle; receive the downlink transmission via the first REG bundle; and perform channel estimation of the downlink transmission based on reference signals transmitted over the first REG bundle and the second REG bundle. Referring again toFIG.1, in certain aspects, the base station102/180may include a reference signal module199configured to determine a first superset of a plurality of supersets, wherein the first superset is uniquely mapped to a first resource element group (REG) bundle and a second REG bundle; transmit communication parameters for a downlink transmission, the communication parameters indicative of the first REG bundle and the second REG bundle; transmit the downlink transmission via the first REG bundle; and transmit a reference signal for demodulating the first REG bundle via the first REG bundle and the second REG bundle. FIG.2Ais a diagram200illustrating an example of a first subframe within a 5G NR frame structure.FIG.2Bis a diagram230illustrating an example of DL channels within a 5G NR subframe.FIG.2Cis a diagram250illustrating an example of a second subframe within a 5G NR frame structure.FIG.2Dis a diagram280illustrating an example of UL channels within a 5G NR subframe. 
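One hedged way to see why estimating over the reference signals of a second REG bundle helps, as the modules above are configured to do: under an assumed flat-channel, additive-noise model (a sketch, not the patent's actual estimator), a least-squares channel estimate improves as more DMRS REs are averaged.

```python
# Minimal sketch (assumed model): with a flat channel h and per-RE noise, the
# least-squares estimate averages the pilot observations, so adding a second
# REG bundle's DMRS REs (more pilots) reduces the estimator's error variance.
import random

random.seed(0)
h = 0.8  # true (real-valued, flat) channel gain for this toy model

def ls_estimate(num_pilots, noise_std=0.1):
    # each pilot transmits x = 1; receive y = h + noise; LS estimate is the mean
    obs = [h + random.gauss(0.0, noise_std) for _ in range(num_pilots)]
    return sum(obs) / len(obs)

def mse(num_pilots, trials=2000):
    # empirical mean squared error of the estimator over many trials
    return sum((ls_estimate(num_pilots) - h) ** 2 for _ in range(trials)) / trials

# Pilots from two REG bundles beat pilots from one (12 vs. 6 DMRS REs, illustrative)
assert mse(12) < mse(6)
```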
The 5G NR frame structure may be frequency division duplexed (FDD) in which for a particular set of subcarriers (carrier system bandwidth), subframes within the set of subcarriers are dedicated for either DL or UL, or may be time division duplexed (TDD) in which for a particular set of subcarriers (carrier system bandwidth), subframes within the set of subcarriers are dedicated for both DL and UL. In the examples provided byFIGS.2A,2C, the 5G NR frame structure is assumed to be TDD, with subframe4being configured with slot format28(with mostly DL), where D is DL, U is UL, and F is flexible for use between DL/UL, and subframe3being configured with slot format34(with mostly UL). While subframes3,4are shown with slot formats34,28, respectively, any particular subframe may be configured with any of the various available slot formats0-61. Slot formats0,1are all DL, UL, respectively. Other slot formats2-61include a mix of DL, UL, and flexible symbols. UEs are configured with the slot format (dynamically through DL control information (DCI), or semi-statically/statically through radio resource control (RRC) signaling) through a received slot format indicator (SFI). Note that the description infra applies also to a 5G NR frame structure that is TDD. Other wireless communication technologies may have a different frame structure and/or different channels. A frame, e.g., of 10 milliseconds (ms), may be divided into 10 equally sized subframes (1 ms). Each subframe may include one or more time slots. Subframes may also include mini-slots, which may include 7, 4, or 2 symbols. Each slot may include 7 or 14 symbols, depending on the slot configuration. For slot configuration0, each slot may include 14 symbols, and for slot configuration1, each slot may include 7 symbols. The symbols on DL may be cyclic prefix (CP) orthogonal frequency-division multiplexing (OFDM) (CP-OFDM) symbols. 
The symbols on UL may be CP-OFDM symbols (for high throughput scenarios) or discrete Fourier transform (DFT) spread OFDM (DFT-s-OFDM) symbols (also referred to as single carrier frequency-division multiple access (SC-FDMA) symbols) (for power limited scenarios; limited to a single stream transmission). The number of slots within a subframe is based on the slot configuration and the numerology. For slot configuration0, different numerologies μ=0 to 4 allow for 1, 2, 4, 8, and 16 slots, respectively, per subframe. For slot configuration1, different numerologies μ=0 to 2 allow for 2, 4, and 8 slots, respectively, per subframe. Accordingly, for slot configuration0and numerology μ, there are 14 symbols/slot and 2^μ slots/subframe. The subcarrier spacing and symbol length/duration are a function of the numerology. The subcarrier spacing may be equal to 2^μ*15 kilohertz (kHz), where μ is the numerology 0 to 4. As such, the numerology μ=0 has a subcarrier spacing of 15 kHz and the numerology μ=4 has a subcarrier spacing of 240 kHz. The symbol length/duration is inversely related to the subcarrier spacing.FIGS.2A-2Dprovide an example of slot configuration0with 14 symbols per slot and numerology μ=2 with 4 slots per subframe. The slot duration is 0.25 ms, the subcarrier spacing is 60 kHz, and the symbol duration is approximately 16.67 μs. Within a set of frames, there may be one or more different bandwidth parts (BWPs) (seeFIG.2B) that are frequency division multiplexed. Each BWP may have a particular numerology. A resource grid may be used to represent the frame structure. Each time slot includes a resource block (RB) (also referred to as physical RBs (PRBs)) that extends 12 consecutive subcarriers. The resource grid is divided into multiple resource elements (REs). The number of bits carried by each RE depends on the modulation scheme. As illustrated inFIG.2A, some of the REs carry one or more reference (pilot) signals (RS) for the UE. 
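The numerology arithmetic above can be restated directly in code; a small sketch of the stated relationships (subcarrier spacing of 2^μ × 15 kHz and, for slot configuration 0, 2^μ slots per 1 ms subframe):

```python
# Numerology arithmetic from the paragraph above (slot configuration 0).

def scs_khz(mu):
    """Subcarrier spacing in kHz for numerology mu: 2^mu * 15 kHz."""
    return (2 ** mu) * 15

def slots_per_subframe(mu):
    """Slots per 1 ms subframe for slot configuration 0: 2^mu."""
    return 2 ** mu

assert scs_khz(0) == 15       # mu = 0 -> 15 kHz
assert scs_khz(4) == 240      # mu = 4 -> 240 kHz
# The FIG. 2A-2D example: mu = 2 gives 60 kHz spacing and 4 slots/subframe,
# i.e., a slot duration of 1 ms / 4 = 0.25 ms
assert scs_khz(2) == 60 and slots_per_subframe(2) == 4
assert 1.0 / slots_per_subframe(2) == 0.25
```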
As illustrated, REs in symbols0,1,4,7,8, and11carry a cell reference signal (CRS) for channel estimation at the UE. The REs may also carry a demodulation RS (DM-RS) (indicated as Rx for one particular configuration, where 100x is the port number, but other DM-RS configurations are possible). In some examples, one or more REs may carry channel state information reference signals (CSI-RS) instead of CRS for channel estimation at the UE. The RS may also include beam measurement RS (BRS), beam refinement RS (BRRS), and phase tracking RS (PT-RS). In the case of a downlink slot, one or more of the first three symbols (e.g., symbols0-2) may be used for PDCCH, according to parameters configured for information elements (IEs) SearchSpace and/or ControlResourceSet. In a downlink communication where CRS is transmitted with 4 ports, the CRS REs may be carried on OFDM symbols0,1,4,7,8, and11. As noted above, if at least one RE of a PDCCH candidate for a UE on the serving cell overlaps with at least one RE configured to carry a CRS, the UE is not required to monitor the PDCCH. In a scenario where the PDCCH is monitored within symbols0-2, the UE may only monitor symbol2for the PDCCH. However, PDCCH only on one symbol of a slot may be too restrictive in terms of PDCCH capacity. FIG.2Billustrates an example of various DL channels within a subframe of a frame. The physical downlink control channel (PDCCH) carries DCI within one or more control channel elements (CCEs), each CCE including nine RE groups (REGs), each REG including four consecutive REs in an OFDM symbol. A PDCCH within one BWP may be referred to as a control resource set (CORESET). Additional BWPs may be located at greater and/or lower frequencies across the channel bandwidth. A primary synchronization signal (PSS) may be within symbol2of particular subframes of a frame. The PSS is used by a UE104to determine subframe/symbol timing and a physical layer identity. 
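The REG/CCE bookkeeping above fixes the RE count per CCE; a small sketch of that arithmetic (the four-CCE aggregation example is illustrative, not from the text):

```python
# Per the PDCCH description above: each CCE holds nine REGs, each REG holds
# four consecutive REs, so one CCE spans 36 REs.
REGS_PER_CCE = 9
RES_PER_REG = 4

def res_per_pdcch(num_cces):
    """Total REs spanned by a PDCCH occupying num_cces CCEs."""
    return num_cces * REGS_PER_CCE * RES_PER_REG

assert res_per_pdcch(1) == 36
assert res_per_pdcch(4) == 144  # e.g., an aggregation of four CCEs (illustrative)
```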
A secondary synchronization signal (SSS) may be within symbol4of particular subframes of a frame. The SSS is used by a UE to determine a physical layer cell identity group number and radio frame timing. Based on the physical layer identity and the physical layer cell identity group number, the UE can determine a physical cell identifier (PCI). Based on the PCI, the UE can determine the locations of the aforementioned DM-RS. The physical broadcast channel (PBCH), which carries a master information block (MIB), may be logically grouped with the PSS and SSS to form a synchronization signal (SS)/PBCH block (also referred to as SS block (SSB)). The MIB provides a number of RBs in the system bandwidth and a system frame number (SFN). The physical downlink shared channel (PDSCH) carries user data, broadcast system information not transmitted through the PBCH such as system information blocks (SIBs), and paging messages. As illustrated inFIG.2C, some of the REs carry DM-RS (indicated as R for one particular configuration, but other DM-RS configurations are possible) for channel estimation at the base station. The UE may transmit DM-RS for the physical uplink control channel (PUCCH) and DM-RS for the physical uplink shared channel (PUSCH). The PUSCH DM-RS may be transmitted in the first one or two symbols of the PUSCH. The PUCCH DM-RS may be transmitted in different configurations depending on whether short or long PUCCHs are transmitted and depending on the particular PUCCH format used. The UE may transmit sounding reference signals (SRS). The SRS may be transmitted in the last symbol of a subframe. The SRS may have a comb structure, and a UE may transmit SRS on one of the combs. The SRS may be used by a base station for channel quality estimation to enable frequency-dependent scheduling on the UL. FIG.2Dillustrates an example of various UL channels within a subframe of a frame. The PUCCH may be located as indicated in one configuration. 
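The PCI derivation described above combines the physical layer identity (from the PSS) with the cell identity group number (from the SSS); in NR this follows the standard relationship PCI = 3·N_ID(1) + N_ID(2). A minimal sketch (the function name and bounds check are illustrative):

```python
# Standard NR relationship: PCI = 3 * N_ID(1) + N_ID(2), where N_ID(1)
# (0..335) comes from the SSS and N_ID(2) (0..2) from the PSS.

def physical_cell_id(n_id_1, n_id_2):
    """Combine SSS group number and PSS identity into the PCI."""
    assert 0 <= n_id_1 <= 335 and 0 <= n_id_2 <= 2
    return 3 * n_id_1 + n_id_2

assert physical_cell_id(0, 0) == 0
assert physical_cell_id(335, 2) == 1007  # 1008 distinct PCIs in NR
```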
The PUCCH carries uplink control information (UCI), such as scheduling requests, a channel quality indicator (CQI), a precoding matrix indicator (PMI), a rank indicator (RI), and hybrid automatic repeat request (HARQ) acknowledgement (ACK)/non-acknowledgement (NACK) feedback. The PUSCH carries data, and may additionally be used to carry a buffer status report (BSR), a power headroom report (PHR), and/or UCI. FIG.3is a block diagram of a base station310(e.g., base station102/180ofFIG.1) in communication with a UE350(e.g., UE104ofFIG.1) in an access network. In the DL, IP packets from the EPC160may be provided to a controller/processor375. The controller/processor375implements layer3and layer2functionality. Layer3includes a radio resource control (RRC) layer, and layer2includes a service data adaptation protocol (SDAP) layer, a packet data convergence protocol (PDCP) layer, a radio link control (RLC) layer, and a medium access control (MAC) layer. The controller/processor375provides RRC layer functionality associated with broadcasting of system information (e.g., MIB, SIBs), RRC connection control (e.g., RRC connection paging, RRC connection establishment, RRC connection modification, and RRC connection release), inter radio access technology (RAT) mobility, and measurement configuration for UE measurement reporting; PDCP layer functionality associated with header compression/decompression, security (ciphering, deciphering, integrity protection, integrity verification), and handover support functions; RLC layer functionality associated with the transfer of upper layer packet data units (PDUs), error correction through ARQ, concatenation, segmentation, and reassembly of RLC service data units (SDUs), re-segmentation of RLC data PDUs, and reordering of RLC data PDUs; and MAC layer functionality associated with mapping between logical channels and transport channels, multiplexing of MAC SDUs onto transport blocks (TBs), demultiplexing of MAC SDUs from TBs, scheduling 
information reporting, error correction through HARQ, priority handling, and logical channel prioritization. The transmit (TX) processor316and the receive (RX) processor370implement layer1functionality associated with various signal processing functions. Layer1, which includes a physical (PHY) layer, may include error detection on the transport channels, forward error correction (FEC) coding/decoding of the transport channels, interleaving, rate matching, mapping onto physical channels, modulation/demodulation of physical channels, and MIMO antenna processing. The TX processor316handles mapping to signal constellations based on various modulation schemes (e.g., binary phase-shift keying (BPSK), quadrature phase-shift keying (QPSK), M-phase-shift keying (M-PSK), M-quadrature amplitude modulation (M-QAM)). The coded and modulated symbols may then be split into parallel streams. Each stream may then be mapped to an OFDM subcarrier, multiplexed with a reference signal (e.g., pilot) in the time and/or frequency domain, and then combined together using an Inverse Fast Fourier Transform (IFFT) to produce a physical channel carrying a time domain OFDM symbol stream. The OFDM stream is spatially precoded to produce multiple spatial streams. Channel estimates from a channel estimator374may be used to determine the coding and modulation scheme, as well as for spatial processing. The channel estimate may be derived from a reference signal and/or channel condition feedback transmitted by the UE350. Each spatial stream may then be provided to a different antenna320via a separate transmitter318TX. Each transmitter318TX may modulate an RF carrier with a respective spatial stream for transmission. At the UE350, each receiver354RX receives a signal through its respective antenna352. Each receiver354RX recovers information modulated onto an RF carrier and provides the information to the receive (RX) processor356. 
The TX processor368and the RX processor356implement layer1functionality associated with various signal processing functions. The RX processor356may perform spatial processing on the information to recover any spatial streams destined for the UE350. If multiple spatial streams are destined for the UE350, they may be combined by the RX processor356into a single OFDM symbol stream. The RX processor356then converts the OFDM symbol stream from the time-domain to the frequency domain using a Fast Fourier Transform (FFT). The frequency domain signal comprises a separate OFDM symbol stream for each subcarrier of the OFDM signal. The symbols on each subcarrier, and the reference signal, are recovered and demodulated by determining the most likely signal constellation points transmitted by the base station310. These soft decisions may be based on channel estimates computed by the channel estimator358. The soft decisions are then decoded and deinterleaved to recover the data and control signals that were originally transmitted by the base station310on the physical channel. The data and control signals are then provided to the controller/processor359, which implements layer3and layer2functionality. The controller/processor359can be associated with a memory360that stores program codes and data. The memory360may be referred to as a computer-readable medium. In the UL, the controller/processor359provides demultiplexing between transport and logical channels, packet reassembly, deciphering, header decompression, and control signal processing to recover IP packets from the EPC160. The controller/processor359is also responsible for error detection using an ACK and/or NACK protocol to support HARQ operations. 
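The IFFT at the transmitter and FFT at the receiver described above can be illustrated with a toy round trip. This sketch is heavily simplified: it omits the cyclic prefix, spatial precoding, the channel, and noise, and uses a naive DFT in place of an FFT for brevity.

```python
# Toy round trip for the OFDM steps described above: map QPSK symbols onto
# subcarriers, IDFT to a time-domain symbol at the transmitter, DFT back at
# the receiver. Purely illustrative; no CP, channel, or noise is modeled.
import cmath

def idft(freq):
    n = len(freq)
    return [sum(freq[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def dft(time):
    n = len(time)
    return [sum(time[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j] * 2   # 8 subcarriers of QPSK symbols
time_symbol = idft(qpsk)                         # transmitter IFFT stage
recovered = dft(time_symbol)                     # receiver FFT stage
assert all(abs(a - b) < 1e-9 for a, b in zip(recovered, qpsk))
```

In a real chain the soft decisions at the receiver are taken against the channel estimate, as described for the channel estimator358above; here the channel is ideal, so the symbols come back exactly.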
Similar to the functionality described in connection with the DL transmission by the base station310, the controller/processor359provides RRC layer functionality associated with system information (e.g., MIB, SIBs) acquisition, RRC connections, and measurement reporting; PDCP layer functionality associated with header compression/decompression, and security (ciphering, deciphering, integrity protection, integrity verification); RLC layer functionality associated with the transfer of upper layer PDUs, error correction through ARQ, concatenation, segmentation, and reassembly of RLC SDUs, re-segmentation of RLC data PDUs, and reordering of RLC data PDUs; and MAC layer functionality associated with mapping between logical channels and transport channels, multiplexing of MAC SDUs onto TBs, demultiplexing of MAC SDUs from TBs, scheduling information reporting, error correction through HARQ, priority handling, and logical channel prioritization. Channel estimates derived by a channel estimator358from a reference signal or feedback transmitted by the base station310may be used by the TX processor368to select the appropriate coding and modulation schemes, and to facilitate spatial processing. The spatial streams generated by the TX processor368may be provided to different antenna352via separate transmitters354TX. Each transmitter354TX may modulate an RF carrier with a respective spatial stream for transmission. The UL transmission is processed at the base station310in a manner similar to that described in connection with the receiver function at the UE350. Each receiver318RX receives a signal through its respective antenna320. Each receiver318RX recovers information modulated onto an RF carrier and provides the information to an RX processor370. The controller/processor375can be associated with a memory376that stores program codes and data. The memory376may be referred to as a computer-readable medium.
In the UL, the controller/processor375provides demultiplexing between transport and logical channels, packet reassembly, deciphering, header decompression, and control signal processing to recover IP packets from the UE350. IP packets from the controller/processor375may be provided to the EPC160. The controller/processor375is also responsible for error detection using an ACK and/or NACK protocol to support HARQ operations. At least one of the TX processor368, the RX processor356, and the controller/processor359may be configured to perform aspects in connection with198ofFIG.1. At least one of the TX processor316, the RX processor370, and the controller/processor375may be configured to perform aspects in connection with199ofFIG.1. FIG.4illustrates an example disaggregated architecture of a distributed RAN400, which may be implemented in the wireless communications system and an access network100illustrated inFIG.1. As illustrated, the distributed RAN400includes core network (CN)402and a base station426. The CN402may host core network functions. CN402may be centrally deployed. CN402functionality may be offloaded (e.g., to advanced wireless services (AWS)), in an effort to handle peak capacity. The CN402may include an AMF404and a UPF406. The AMF404and UPF406may perform one or more of the core network functions. The base station426may communicate with the CN402(e.g., via a backhaul interface). The base station426may communicate with the AMF404via an N2 (e.g., NG-C) interface. The base station426may communicate with the UPF406via an N3 (e.g., NG-U) interface. The base station426may include a central unit-control plane (CU-CP)410, one or more central unit-user planes (CU-UPs)412, one or more distributed units (DUs)414-418, and one or more radio units (RUs)420-424. The CU-CP410may be connected to one or more of the DUs414-418. The CU-CP410and DUs414-418may be connected via an F1-C interface.
As shown inFIG.4, the CU-CP410may be connected to multiple DUs, but the DUs may be connected to only one CU-CP. AlthoughFIG.4only illustrates one CU-UP412, the base station426may include multiple CU-UPs. The CU-CP410selects the appropriate CU-UP(s) for requested services (e.g., for a UE). The CU-UP(s)412may be connected to the CU-CP410. For example, the CU-UP(s)412and the CU-CP410may be connected via an E1 interface. The CU-UP(s)412may be connected to one or more of the DUs414-418. The CU-UP(s)412and DUs414-418may be connected via an F1-U interface. As shown inFIG.4, the CU-CP410may be connected to multiple CU-UPs, but the CU-UPs may be connected to only one CU-CP410. A DU, such as DUs414,416, and/or418, may host one or more TRP(s) (transmit/receive points, which may include an edge node (EN), an edge unit (EU), a radio head (RH), a smart radio head (SRH), or the like). A DU may be located at edges of the network with radio frequency (RF) functionality. A DU may be connected to multiple CU-UPs that are connected to (e.g., under the control of) the same CU-CP (e.g., for RAN sharing, radio as a service (RaaS), and service specific deployments). DUs may be configured to individually (e.g., dynamic selection) or jointly (e.g., joint transmission) serve traffic to a UE. Each DU414-416may be connected with one of RUs420/422/424. The CU-CP410may be connected to multiple DU(s) that are connected to (e.g., under control of) the same CU-UP412. Connectivity between a CU-UP412and a DU may be established by the CU-CP410. For example, the connectivity between the CU-UP412and a DU may be established using bearer context management functions. Data forwarding between CU-UP(s)412may be via an Xn-U interface. The distributed RAN400may support fronthauling solutions across different deployment types. For example, the RAN400architecture may be based on transport network capabilities (e.g., bandwidth, latency, and/or jitter).
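The connectivity rules above (a CU-CP serving multiple DUs and CU-UPs, while each DU and each CU-UP attaches to a single CU-CP) can be sketched as a small data model; the class and method names here are illustrative only, not part of any 3GPP or O-RAN API.

```python
# Illustrative model of the topology constraints described above:
# one CU-CP may attach many DUs (F1-C) and many CU-UPs (E1), but
# each DU and each CU-UP may be connected to only one CU-CP.

class Du:
    def __init__(self, name):
        self.name = name
        self.cu_cp = None  # the single CU-CP this DU attaches to

class CuUp:
    def __init__(self, name):
        self.name = name
        self.cu_cp = None  # the single CU-CP this CU-UP attaches to

class CuCp:
    def __init__(self, name):
        self.name = name
        self.dus = []      # DUs under this CU-CP (F1-C interface)
        self.cu_ups = []   # CU-UPs under this CU-CP (E1 interface)

    def attach_du(self, du):
        if du.cu_cp is not None:
            raise ValueError("a DU may be connected to only one CU-CP")
        du.cu_cp = self
        self.dus.append(du)

    def attach_cu_up(self, cu_up):
        if cu_up.cu_cp is not None:
            raise ValueError("a CU-UP may be connected to only one CU-CP")
        cu_up.cu_cp = self
        self.cu_ups.append(cu_up)
```

Attempting to attach an already-attached DU to a second CU-CP violates the one-CU-CP rule and is rejected, mirroring the constraint stated for FIG.4.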
The distributed RAN400may share features and/or components with LTE. For example, the base station426may support dual connectivity with NR and may share a common fronthaul for LTE and NR. The distributed RAN400may enable cooperation between and among DUs414-418, for example, via the CU-CP410. An inter-DU interface may not be used. Logical functions may be dynamically distributed in the distributed RAN400. FIG.5is a block diagram illustrating an example disaggregated base station500architecture. The disaggregated base station500architecture may include one or more CUs510that can communicate directly with a core network520via a backhaul link, or indirectly with the core network520through one or more disaggregated base station units (such as a near real-time (RT) RIC525via an E2 link, or a non-RT RIC515associated with a service management and orchestration (SMO) Framework505, or both). A CU510may communicate with one or more DUs530via respective midhaul links, such as an F1 interface. The DUs530may communicate with one or more RUs540via respective fronthaul links. The RUs540may communicate with respective UEs104via one or more radio frequency (RF) access links. In some implementations, the UE104may be simultaneously served by multiple RUs540. Each of the units, i.e., the CUs510, the DUs530, the RUs540, as well as the near-RT RICs525, the non-RT RICs515and the SMO framework505, may include one or more interfaces or be coupled to one or more interfaces configured to receive or transmit signals, data, or information (collectively, signals) via a wired or wireless transmission medium. Each of the units, or an associated processor or controller providing instructions to the communication interfaces of the units, can be configured to communicate with one or more of the other units via the transmission medium. For example, the units can include a wired interface configured to receive or transmit signals over a wired transmission medium to one or more of the other units.
Additionally, the units can include a wireless interface, which may include a receiver, a transmitter or transceiver (such as a radio frequency (RF) transceiver), configured to receive or transmit signals, or both, over a wireless transmission medium to one or more of the other units. In some aspects, the CU510may host higher layer control functions. Such control functions can include radio resource control (RRC), packet data convergence protocol (PDCP), service data adaptation protocol (SDAP), or the like. Each control function can be implemented with an interface configured to communicate signals with other control functions hosted by the CU510. The CU510may be configured to handle user plane functionality (i.e., central unit—user plane (CU-UP)), control plane functionality (i.e., central unit—control plane (CU-CP)), or a combination thereof. In some implementations, the CU510can be logically split into one or more CU-UP units and one or more CU-CP units. The CU-UP unit can communicate bidirectionally with the CU-CP unit via an interface, such as the E1 interface when implemented in an O-RAN configuration. The CU510can be implemented to communicate with the DU530, as necessary, for network control and signaling. The DU530may correspond to a logical unit that includes one or more base station functions to control the operation of one or more RUs540. In some aspects, the DU530may host one or more of a radio link control (RLC) layer, a medium access control (MAC) layer, and one or more high physical (PHY) layers (such as modules for forward error correction (FEC) encoding and decoding, scrambling, modulation and demodulation, or the like) depending, at least in part, on a functional split, such as those defined by the 3rd Generation Partnership Project (3GPP). In some aspects, the DU530may further host one or more low PHY layers.
Each layer (or module) can be implemented with an interface configured to communicate signals with other layers (and modules) hosted by the DU530, or with the control functions hosted by the CU510. Lower-layer functionality can be implemented by one or more RUs540. In some deployments, an RU540, controlled by a DU530, may correspond to a logical node that hosts RF processing functions, or low-PHY layer functions (such as performing fast Fourier transform (FFT), inverse FFT (iFFT), digital beamforming, physical random access channel (PRACH) extraction and filtering, or the like), or both, based at least in part on the functional split, such as a lower layer functional split. In such an architecture, the RU(s)540can be implemented to handle over the air (OTA) communication with one or more UEs104. In some implementations, real-time and non-real-time aspects of control and user plane communication with the RU(s)540can be controlled by the corresponding DU530. In some scenarios, this configuration can enable the DU(s)530and the CU510to be implemented in a cloud-based RAN architecture, such as a virtual RAN (vRAN) architecture. The SMO Framework505may be configured to support RAN deployment and provisioning of non-virtualized and virtualized network elements. For non-virtualized network elements, the SMO framework505may be configured to support the deployment of dedicated physical resources for RAN coverage requirements, which may be managed via an operations and maintenance interface (such as an O1 interface). For virtualized network elements, the SMO framework505may be configured to interact with a cloud computing platform (such as an open cloud (O-cloud)590) to perform network element life cycle management (such as to instantiate virtualized network elements) via a cloud computing platform interface (such as an O2 interface). Such virtualized network elements can include, but are not limited to, CUs510, DUs530, RUs540and near-RT RICs525.
In some implementations, the SMO framework505can communicate with a hardware aspect of a 4G RAN, such as an open eNB (O-eNB)511, via an O1 interface. Additionally, in some implementations, the SMO Framework505can communicate directly with one or more RUs540via an O1 interface. The SMO framework505also may include the non-RT RIC515configured to support functionality of the SMO Framework505. The non-RT RIC515may be configured to include a logical function that enables non-real-time control and optimization of RAN elements and resources, artificial intelligence/machine learning (AI/ML) workflows including model training and updates, or policy-based guidance of applications/features in the near-RT RIC525. The non-RT RIC515may be coupled to or communicate with (such as via an A1 interface) the near-RT RIC525. The near-RT RIC525may be configured to include a logical function that enables near-real-time control and optimization of RAN elements and resources via data collection and actions over an interface (such as via an E2 interface) connecting one or more CUs510, one or more DUs530, or both, as well as an O-eNB, with the near-RT RIC525. In some implementations, to generate AI/ML models to be deployed in the near-RT RIC525, the non-RT RIC515may receive parameters or external enrichment information from external servers. Such information may be utilized by the near-RT RIC525and may be received at the SMO Framework505or the non-RT RIC515from non-network data sources or from network functions. In some examples, the non-RT RIC515or the near-RT RIC525may be configured to tune RAN behavior or performance. For example, the non-RT RIC515may monitor long-term trends and patterns for performance and employ AI/ML models to perform corrective actions through the SMO Framework505(such as reconfiguration via O1) or via creation of RAN management policies (such as A1 policies).
Example Techniques for Channel Estimation Using Supersets
FIG.6is a block diagram illustrating an example of a PDCCH candidate within a control-resource set (CORESET)602. The CORESET602is a set of physical resources (e.g., time and frequency resources) used to carry PDCCH and/or DCI. Here, the CORESET602spans two symbols (e.g., symbols1and2) and96resource blocks (RBs). The CORESET602includes a PDCCH candidate604with aggregation level4(e.g., consisting of four control channel elements (CCEs) where each CCE consists of six REGs) that also spans the two symbols, and includes twenty-four resource element groups (REGs) (e.g., twelve REGs on each symbol) in two discontinuous resources in the frequency domain within the CORESET. The PDCCH candidate604also includes twelve REG bundles606a-606l(also referred to as REG bundles606) wherein each of the REG bundles606is made up of two symbol-adjacent REGs. As illustrated, each REG bundle606is distinguished by a hatch pattern. It should be noted that a base station (e.g., base station102/180ofFIG.1) may configure a UE (e.g., UE104ofFIG.1) with the CORESET602parameters (e.g., ControlResourceSet), and the PDCCH candidate604parameters (e.g., cce-REG-MappingType: REG bundle size, aggregation level (e.g., AL-4), non-interleaved or interleaved CCE-to-REG mapping, shift index, etc.) via radio resource control (RRC) messaging. As discussed above, a UE may refrain from monitoring one or more symbols of the PDCCH candidate604if the UE is configured with LTE-CRS patterns indicating that those symbols carry CRS. In the illustrated example, if symbols0and1are considered to carry CRS, then the UE may not monitor PDCCH candidate604. However, as described herein, the base station may puncture one or more REs of the candidate PDCCH with a CRS RE or rate-match the candidate PDCCH around the CRS RE. Conventionally, a precoder granularity is determined based on the size of the REG bundle.
That is, the DMRS for the REG bundle is mapped in the REG bundle for all OFDM symbols for the PDCCH. However, as discussed above, such precoder granularity may result in degraded channel estimation performance (e.g., a DMRS signal of PDCCH candidate RE may be eliminated if the PDCCH candidate is punctured by CRS). In one example, a UE may not even monitor a PDCCH candidate if even one symbol of the PDCCH candidate includes a CRS RE. Even if the UE were to monitor only the one or more symbols (e.g., symbol2ofFIG.2A) of the PDCCH candidate (e.g., symbols0-2) that do not include a CRS RE, channel estimation performance may be degraded because of the relatively small number of DMRS signals (e.g., DMRS signals received on only one symbol instead of three). Thus, in certain aspects, the precoder granularity may be expanded to include one or more additional REG bundles that are consecutive (e.g., contiguous) in the frequency domain, forming a “superset” of REG bundles. That is, the UE may assume that the precoder is the same over the superset. As such, the DMRS for a REG bundle may be mapped in a superset of a particular OFDM symbol of the PDCCH candidate604for enhancing the performance of channel estimation. FIGS.7A-7Care block diagrams illustrating example implementations of superset REG bundles of a PDCCH candidate, using the same PDCCH candidate size and hatch pattern associated with each REG bundle as illustrated inFIG.6. Further, it is appreciated that the disclosure is not limited to the examples provided inFIGS.7A-7C, and that differently sized supersets may be used with differently sized PDCCH candidates, or one or more supersets that each include resources of multiple PDCCH candidates. A base station (e.g., base station102/180ofFIG.1) may configure a UE (e.g., UE104ofFIG.1) with superset parameters via RRC signaling.
For example, the existing parameter for precoder granularity (e.g., precoderGranularity of ControlResourceSet) may be repurposed to provide the UE with an indication of the size of one or more supersets. In another example, a new parameter may be added to the ControlResourceSet IE (or another suitable IE) to provide the UE with an indication of the size of one or more supersets. In some examples, the base station may configure the UE with a table of multiple superset sizes and/or configurations, with each size and/or configuration identified by an index value. Thus, the indication of the size of one or more supersets may be provided via an index value. Superset with index i may cover RBs [(i)(L)+n], [(i)(L)+n+1], . . . , [(i)(L)+n+L−1], where L is the superset size (e.g., in terms of REGs or REG bundles) and n is a cyclic shift or offset. The value of n may be fixed to0or can be configured by RRC. FIG.7Aillustrates a first superset implementation700wherein each superset includes 2 REG bundles. Here, 6 REG bundles (e.g.,706a-706f, collectively referred to as REG bundles706) are illustrated. Instead of setting the precoder granularity to equal the size of the REG bundle (which occupies one RB over each of two symbols), the base station may configure the UE with a precoder granularity equal to the first superset implementation where the superset contains two REG bundles (e.g., 4 REGs, 2 REG bundles). Here, a first superset702includes the first REG bundle706aand the second REG bundle706b; a second superset704includes the third REG bundle706cand the fourth REG bundle706d; and a third superset708includes the fifth REG bundle706eand the sixth REG bundle706f. It should be noted that, depending on an interleaver configuration, the 6 REG bundles706may correspond to the same PDCCH candidate, or may be spread across multiple PDCCH candidates (e.g., different PDCCH candidates for the same UE, or PDCCH candidates for different UEs).
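The indexing rule above can be sketched as follows (an illustrative reading of the formula, with L and n as defined in the text; the function name is an assumption for illustration):

```python
# Sketch of the superset coverage rule: superset i covers RBs
# (i)(L)+n through (i)(L)+n+L-1, where L is the superset size and
# n is an offset fixed to 0 or configured by RRC.

def superset_rbs(i, L, n=0):
    """Return the RB (or REG-bundle) indices covered by superset i."""
    return [i * L + n + k for k in range(L)]
```

For example, with L = 3 and n = 0, superset 0 covers indices 0 through 2 and superset 1 covers indices 3 through 5, matching the grouping shown inFIG.7B.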
FIG.7Billustrates a second superset implementation710wherein the superset includes 3 REG bundles. Here, 6 REG bundles (e.g.,716a-716f, collectively referred to as716) are illustrated. The base station may configure the UE with a precoder granularity equal to the second superset implementation where the superset contains three REG bundles (e.g., 6 REGs, 3 REG bundles). Here, a first superset712includes the first REG bundle716a, the second REG bundle716b, and the third REG bundle716c; and a second superset714includes the fourth REG bundle716d, the fifth REG bundle716e, and the sixth REG bundle716f. It should be noted that, depending on an interleaver configuration, the 6 REG bundles716may correspond to the same PDCCH candidate, or may be spread across multiple PDCCH candidates (e.g., different PDCCH candidates for the UE, or PDCCH candidates for different UEs). FIG.7Cillustrates a third superset implementation720wherein a first superset722includes 4 REG bundles and a second superset724includes two REG bundles. Here, 6 REG bundles (e.g.,726a-726f, collectively referred to as726) are illustrated. Here, the first superset722includes the first REG bundle726a, the second REG bundle726b, the third REG bundle726c, and the fourth REG bundle726d; and the second superset724includes the fifth REG bundle726eand the sixth REG bundle726f. Still referring toFIG.7C, in some examples, the base station may configure the UE with two separate precoder granularities, wherein a first precoder granularity is equal to the first superset722(e.g., 8 REGs, 4 REG bundles) and a second precoder granularity is equal to the second superset724(e.g., four REGs, two REG bundles). In another example, the base station may configure the UE to use one superset (e.g., index i) and the UE may determine to bound the first superset and/or the last superset for a PDCCH candidate if the PDCCH candidate has a smaller number of RBs than L.
For example, inFIG.7C, the base station may configure the UE to use a superset corresponding to index i that includes four REG bundles. However, because the size of the PDCCH candidate is only 6 REG bundles, the UE may automatically reduce the first/last superset to two REG bundles. However, in some examples, the base station may transmit DMRS signals in REG bundles outside of a superset. That is, if the superset is sized as illustrated inFIG.7C, wherein the second superset724only corresponds to two REG bundles despite being sized for four REG bundles, the UE may monitor adjacent REG bundles728to receive DMRS signals transmitted outside of the second superset724. Thus, the combination of the second superset724and the adjacent REG bundles728may provide the UE with the correct precoder granularity (e.g., where the UE is only configured with a superset size equal to the first precoder granularity). It should be noted that, depending on an interleaver configuration, the 6 REG bundles726may correspond to the same PDCCH candidate, or may be spread across multiple PDCCH candidates (e.g., different PDCCH candidates for the UE, or PDCCH candidates for different UEs). FIG.8is a call-flow diagram800illustrating example communications between a base station802(e.g., base station102/180ofFIG.1) and a UE804(e.g., UE104ofFIG.1). The communications may include one or more downlink transmissions using the PDCCH candidates within CORESET602ofFIG.6. At a first communication806, the base station802may transmit, via RRC signaling, a CORESET configuration, a search space configuration, and optionally, a superset configuration that includes one or more parameters not provided in the CORESET configuration. In other words, the UE804may receive communication parameters for downlink transmissions (e.g., candidate PDCCH), wherein the communication parameters are indicative of at least a first REG bundle and a second REG bundle. 
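The first/last-superset bounding described forFIG.7C can be sketched as follows (an illustrative interpretation; the function name and list representation are assumptions, not the claimed procedure):

```python
# Sketch of superset bounding: consecutive REG bundles of a PDCCH
# candidate are grouped into supersets of configured size L; if fewer
# than L bundles remain, the first/last superset is automatically
# reduced rather than extended past the candidate.

def bound_supersets(num_reg_bundles, L):
    """Group REG-bundle indices 0..num_reg_bundles-1 into supersets
    of size L, truncating the final superset if needed."""
    supersets = []
    start = 0
    while start < num_reg_bundles:
        end = min(start + L, num_reg_bundles)
        supersets.append(list(range(start, end)))
        start = end
    return supersets
```

With 6 REG bundles and a configured superset size of four bundles, this yields one superset of four bundles and one of two, as inFIG.7C.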
For example, based on the RRC signaling, the UE804may identify a CORESET location, including one or more of an REG bundle grid, a superset grid, and a mapping of CRS REs. Referring toFIG.7Bas an example, the UE804may determine to monitor the fifth REG bundle716eand the sixth REG bundle716ffor downlink transmissions. In some examples, the communication parameters may include a control channel element (CCE)-to-REG mapping, indicating one or more REG bundles for the UE804to monitor for downlink transmissions. In one example, the communication parameters include an indication of a superset configuration mapping the first REG bundle and the second REG bundle to a first superset wherein the first REG bundle and the second REG bundle share the same precoder. The communication parameters may also include an indication of a second superset mapped to at least a third REG bundle. That is, the first superset and the second superset may be uniquely mapped to REG bundles not shared by the supersets. In some examples, the first superset and the second superset may correspond to the same precoder, or each of the first superset and the second superset may correspond to a unique precoder. That is, one superset may map to more REG bundles than the second superset. At a first process808, the UE804may identify a candidate PDCCH including one or more REG bundles to monitor for control signaling. In some examples, the UE804may identify the candidate PDCCH by identifying corresponding CCEs and one or more REG bundles associated with each CCE based on the communication parameters received from the first communication806. In some examples, the communication parameters may include a PDCCH search space that indicates one or more CCE locations where the UE804may find its PDCCHs. In some examples, each CCE may include one or more REG bundles. 
At a second communication810, the base station802may transmit a PDCCH to the UE804via one or more REG bundles the UE804is monitoring for control signaling. Referring again to the example ofFIG.7B, the UE804may determine to monitor the fifth REG bundle716eand the sixth REG bundle716ffor the downlink transmission based on the communication parameters (e.g., based on CCE-to-REG mapping). Once the UE804receives the downlink transmission via one or both of the fifth REG bundle716eand/or the sixth REG bundle716f, the UE804may determine that the second superset714corresponds to the fifth and sixth REG bundles716e/716fin response to receiving the downlink transmission (e.g., the PDCCH may be received over only one or both of the REG bundles). Alternatively, the UE804may have already determined the corresponding superset(s) prior to receiving the downlink transmission. It should be noted that the PDCCH transmitted at the second communication810may be punctured by a CRS pattern that is also transmitted via the fifth REG bundle716eand the sixth REG bundle716f. At a second process812, the UE804may perform channel estimation of the downlink transmission of the second communication810based on reference signals (e.g., DMRS signals) transmitted over one or both of the fifth REG bundle716eand/or the sixth REG bundle716f. For example, if the UE804receives the downlink transmission via the fifth REG bundle716e, then the UE804may assume that the reference signals for channel estimation are present on all the REG-bundles within the second superset714, and the same precoder is applied to all of them. That is, the UE804may receive DMRS from the fourth REG bundle716d, the fifth REG bundle716e, and the sixth REG bundle716fwhile also receiving the downlink transmission from the fifth REG bundle716e. The UE804may then perform channel estimation for the downlink transmission using the reference signals received from all the REG bundles of the second superset714. 
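The superset-wide estimation just described can be sketched as a least-squares average (an illustrative simplification under a flat-channel assumption, not the claimed receiver design; the function names are hypothetical):

```python
# Sketch of superset-wide channel estimation: because one precoder is
# assumed across the superset, DMRS REs from every REG bundle in the
# superset can feed a single least-squares estimate, which then
# equalizes the PDCCH REs of the bundle that carried data.

def estimate_channel(dmrs_pairs):
    """dmrs_pairs: (received, transmitted) DMRS symbols collected from
    all REG bundles of one superset. Returns the averaged estimate."""
    ls_estimates = [rx / tx for rx, tx in dmrs_pairs]
    return sum(ls_estimates) / len(ls_estimates)

def equalize(rx_symbol, h_est):
    """Equalize one received data RE with the superset-wide estimate."""
    return rx_symbol / h_est
```

Pooling DMRS pairs from the whole superset increases the number of samples entering the average, which is the channel-estimation gain the superset is intended to provide.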
That is, the UE804may perform channel estimation using reference signals from REG bundles that did not include the downlink transmission but are associated with the same superset. The UE may then demodulate the downlink transmission received via the fifth REG bundle716ebased on the channel estimation of the reference signals received over each REG bundle of the second superset714. It should be noted that if the UE804later receives another downlink transmission from the base station802via the sixth REG bundle716f or the fourth REG bundle716d, the UE804may determine not to perform the channel estimation again. In other words, the UE804may determine not to perform multiple channel estimations for different REG bundles within the same superset, and may instead demodulate the other downlink transmission based on the previous channel estimation. FIG.9is a block diagram illustrating an example of a PDCCH candidate within a CORESET902and a virtual CORESET912. Here, the CORESET902spans two symbols (e.g., symbols1and2) and96resource blocks (RBs), and the virtual CORESET912is only one symbol (symbol2). The CORESET902includes a PDCCH candidate904that occupies only symbol2, and includes twenty-four REGs. The PDCCH candidate904also includes twelve REG bundles906a-906l(collectively referred to as REG bundles906) wherein each of the REG bundles906is made up of two frequency-adjacent REGs. As illustrated, each REG bundle906is distinguished by a hatch pattern. A superset may include two or more REG bundles906within symbol2and may also extend into adjacent frequency resources in symbol1. FIG.10is a call-flow diagram1000illustrating example communications between a base station1002(e.g., base station102/180ofFIG.1) and a UE1004(e.g., UE104ofFIG.1). The communications may include one or more downlink transmissions using the PDCCH candidates within CORESET902and virtual CORESET912ofFIG.9.
At a first communication1006, the base station1002may transmit, via RRC signaling, a CORESET configuration, a search space configuration, and one or more parameters indicating a virtual CORESET. At a first process1008, the UE1004may identify one or more supersets based on the virtual CORESET. For example, the virtual CORESET may be configured such that it bounds the resources that may be used as REG bundles for receiving downlink transmissions. The UE1004may determine the supersets based on one or more rules. For example, a superset may include 2 RBs or 6 RBs in a single OFDM symbol as if it is a REG bundle of a 1-symbol CORESET. In some examples, existing RRC parameters may be repurposed to identify the superset(s). For example, the UE1004may use the one or more parameters indicating the virtual CORESET to identify virtual CCE(s) associated with the virtual CORESET. Using the virtual CCE(s), the UE1004may identify virtual REG-bundle(s) to process as superset(s). Once the supersets are identified, the UE1004may determine the REG bundles within the supersets at a second process1010. That is, the UE1004may assume that the same precoder is applied to reference signals (e.g., DMRS) transmitted by the base station1002in all REGs of an identified superset. At a second communication1012, the UE1004may receive a downlink transmission (e.g., PDCCH) from the base station1002. For example, the downlink transmission may be transmitted over one or more REG bundles of an identified superset. At a third process1014, the UE1004may perform channel estimation based on the reference signals (e.g., DMRS) received by the UE1004from the base station1002via all the REGs in the superset. The UE1004may demodulate the signal based on the channel estimation. FIG.11is a flowchart1100of a method of wireless communication. The method may be performed by a UE (e.g., the UE104ofFIG.1; the apparatus1202ofFIG.12).
At1102, the UE may receive, from a network node, communication parameters for a downlink transmission, the communication parameters indicative of a first resource element group (REG) bundle and a second REG bundle. For example,1102may be performed by a receiving component1240. Here, the UE may receive the CORESET/superset configuration of the first communication806/1006ofFIGS.8and10. At1104, the UE may optionally identify a superset based on the indication of the virtual CORESET. For example,1104may be performed by an identifying component1242. Here, information about the virtual CORESET may be included in the first communication1006ofFIG.10. It should be noted that the downlink transmission may be received within a control resource set (CORESET), wherein the communication parameters indicative of the first REG bundle and the second REG bundle comprises an indication of a virtual CORESET within the CORESET. At1106, the UE may optionally identify the first REG bundle and the second REG bundle associated with the superset. For example,1106may be performed by the identifying component1242. Here, the UE may use the information about the virtual CORESET to determine the REG bundles as described in the second process1010ofFIG.10. At1108, the UE may receive the downlink transmission via the first REG bundle. For example,1108may be performed by the receiving component1240. Here, the UE may receive the downlink transmission (e.g., PDCCH) via a particular REG bundle that the UE is monitoring. However, because the downlink transmission may be punctured by CRS configured to be transmitted in the same REG bundle, the UE may monitor multiple REG bundles (including the first REG bundle) in a superset in order to properly perform channel estimation. Receiving the downlink transmission is described, for example in the second transmission810/1010ofFIGS.8and10. At1110, the UE may optionally receive another downlink transmission via the third REG bundle.
For example,1110may be performed by the receiving component1240. Here, the UE may receive multiple downlink transmissions over REG bundles it is configured to monitor. At1112, the UE may optionally receive reference signals transmitted over a fourth REG bundle, wherein the fourth REG bundle is contiguous in frequency with the third REG bundle and outside of the first superset and the second superset. For example,1112may be performed by the receiving component1240. Here, the other REG bundle of1110may be part of another superset that includes the third REG bundle. At1114, the UE may optionally perform channel estimation of the other downlink transmission based on reference signals transmitted over the third REG bundle and the fourth REG bundle. For example,1114may be performed by the channel estimating component1244. Here, the UE may perform channel estimation of all the reference signals (e.g., DMRS) received over the REG bundles of the superset in order to demodulate the other downlink transmission. At1116, the UE may perform channel estimation of the downlink transmission based on reference signals transmitted over the first REG bundle and the second REG bundle. For example,1116may be performed by the channel estimating component1244. Here, the UE may perform channel estimation using reference signals (e.g., DMRS) from multiple REG bundles in order to properly demodulate the downlink transmission received via one REG bundle. At1118, the UE may optionally determine a superset associated with the first REG bundle based on receiving the downlink transmission via the first REG bundle. For example,1118may be performed by a determining component1246. 
For example, while the UE may determine a superset associated with a particular REG bundle in response to receiving a downlink transmission via that particular REG bundle, it should be noted that the UE may determine the associated supersets for all monitored REG bundles prior to receiving any transmissions from the base station. At1120, the UE may optionally perform channel estimation based on reference signals transmitted over each REG bundle of the first superset. For example,1120may be performed by the channel estimating component1244. Here, the UE may perform channel estimation using reference signals received from REG bundles that are in the same superset as the REG bundle that carried the downlink transmission even though the other REG bundles did not carry the downlink transmission. At1122, the UE may optionally demodulate the downlink transmission received via the first REG bundle based on the channel estimation of the reference signals received over each REG bundle of the first superset. For example,1122may be performed by a demodulating component1248. In certain aspects, the downlink transmission is received within a control resource set (CORESET), and wherein the communication parameters for the downlink transmission further comprise an indication of a cell-specific reference signal (CRS) pattern transmitted within the CORESET. In certain aspects, the downlink transmission is rate-matched around the CRS pattern. In certain aspects, the communication parameters comprise an indication of a superset configuration mapping the first REG bundle and the second REG bundle to a first superset. In certain aspects, the first REG bundle and the second REG bundle share a same precoder. In certain aspects, the superset configuration comprises a plurality of supersets including the first superset and a second superset, wherein each of the first superset and the second superset are uniquely mapped to at least one REG bundle. 
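The superset-wide channel estimation and demodulation described above can be sketched numerically. This is a minimal sketch assuming a flat, noiseless channel and unit-power DMRS; function and variable names are illustrative. Because every REG bundle of a superset is assumed to use the same precoder, the UE can average least-squares estimates over all DMRS resource elements of the superset, then equalize the bundle that carried the PDCCH.

```python
import numpy as np

def superset_channel_estimate(rx_dmrs, tx_dmrs):
    """Average the least-squares estimate over all DMRS REs of the
    superset; valid only because one precoder spans the superset."""
    return np.mean(np.asarray(rx_dmrs) / np.asarray(tx_dmrs))

h = 0.8 * np.exp(1j * 0.3)                      # true (flat) channel
tx = np.exp(1j * np.linspace(0.0, np.pi, 12))   # DMRS REs over two REG bundles
rx = h * tx                                     # noiseless reception
h_hat = superset_channel_estimate(rx, tx)

qpsk_symbol = h * (1 + 1j) / np.sqrt(2)         # received data RE
demodulated = qpsk_symbol / h_hat               # equalize with the estimate
```

With noise, averaging over the additional REG bundles of the superset reduces estimator variance relative to estimating from the PDCCH-carrying bundle alone, which is the motivation for transmitting DMRS in every bundle of the superset.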
In certain aspects, the UE is further configured to demodulate the downlink transmission received via the first REG bundle based on the channel estimation of the reference signals received over each REG bundle of the first superset. In certain aspects, the indication of the virtual CORESET comprises an indication of a size of the superset. In certain aspects, the communication parameters configure the UE to monitor: (i) the first REG bundle and the second REG bundle for the downlink transmission, and (ii) a third REG bundle for another downlink transmission, wherein the first REG bundle and the second REG bundle are mapped to a first superset, wherein the third REG bundle is mapped to a second superset, and wherein the first REG bundle, the second REG bundle, and the third REG bundle are contiguous in frequency. In certain aspects, the first superset comprises more REG bundles than the second superset. In certain aspects, the first REG bundle is defined by a first frequency channel, and wherein the second REG bundle is defined by a second frequency channel. In certain aspects, the communication parameters are received via a radio resource control (RRC) message. In certain aspects, the reference signals transmitted over the first REG bundle and the second REG bundle comprises a demodulation reference signal (DMRS). In certain aspects, the downlink transmission is a physical downlink control channel (PDCCH). FIG.12is a diagram1200illustrating an example of a hardware implementation for an apparatus1202. The apparatus1202is a UE and includes a cellular baseband processor1204(also referred to as a modem) coupled to a cellular RF transceiver1222and one or more subscriber identity modules (SIM) cards1220, an application processor1206coupled to a secure digital (SD) card1208and a screen1210, a Bluetooth module1212, a wireless local area network (WLAN) module1214, a Global Positioning System (GPS) module1216, and a power supply1218. 
The cellular baseband processor1204communicates through the cellular RF transceiver1222with the UE104and/or BS102/180. The cellular baseband processor1204may include a computer-readable medium/memory. The computer-readable medium/memory may be non-transitory. The cellular baseband processor1204is responsible for general processing, including the execution of software stored on the computer-readable medium/memory. The software, when executed by the cellular baseband processor1204, causes the cellular baseband processor1204to perform the various functions described supra. The computer-readable medium/memory may also be used for storing data that is manipulated by the cellular baseband processor1204when executing software. The cellular baseband processor1204further includes a reception component1230, a communication manager1232, and a transmission component1234. The communication manager1232includes the one or more illustrated components. The components within the communication manager1232may be stored in the computer-readable medium/memory and/or configured as hardware within the cellular baseband processor1204. The cellular baseband processor1204may be a component of the UE350and may include the memory360and/or at least one of the TX processor368, the RX processor356, and the controller/processor359. In one configuration, the apparatus1202may be a modem chip and include just the baseband processor1204, and in another configuration, the apparatus1202may be the entire UE (e.g., see350ofFIG.3) and include the aforediscussed additional modules of the apparatus1202. 
The communication manager1232includes a receiving component1240that is configured to receive, from a network node, communication parameters for a downlink transmission, the communication parameters indicative of a first resource element group (REG) bundle and a second REG bundle; receive the downlink transmission via the first REG bundle; receive the other downlink transmission via the third REG bundle; and receive reference signals transmitted over a fourth REG bundle, wherein the fourth REG bundle is contiguous in frequency with the third REG bundle and outside of the first superset and the second superset; e.g., as described in connection with the first communication806/1006and the second communication810/1010ofFIGS.8and10. The communication manager1232further includes an identifying component1242that is configured to identify a superset based on the indication of the virtual CORESET; and identify the first REG bundle and the second REG bundle associated with the superset, wherein the downlink transmission is transmitted within a control resource set (CORESET), wherein the communication parameters indicative of the first REG bundle and the second REG bundle comprise an indication of a virtual CORESET within the CORESET; e.g., as described in connection with the first process808/1008ofFIGS.8and10. The communication manager1232further includes a channel estimating component1244that is configured to perform channel estimation of the downlink transmission based on reference signals transmitted over the first REG bundle and the second REG bundle; perform channel estimation based on reference signals transmitted over each REG bundle of the first superset; and perform channel estimation of the other downlink transmission based on reference signals transmitted over the third REG bundle and the fourth REG bundle; e.g., as described in the second process812ofFIG.8and the third process1014ofFIG.10. 
The communication manager1232further includes a determining component1246that is configured to determine the first superset associated with the first REG bundle based on receiving the downlink transmission via the first REG bundle, e.g., as described in connection with the first process1008ofFIG.10. It should be noted that the determining may be performed at either of the first process808, second process812, or the second communication810ofFIG.8. The communication manager1232further includes a demodulating component1248that is configured to demodulate the downlink transmission received via the first REG bundle based on the channel estimation of the reference signals received over each REG bundle of the first superset, e.g., as described in connection with the second process812ofFIG.8and the third process1014ofFIG.10. The apparatus may include additional components that perform each of the blocks of the algorithm in the aforementioned flowcharts ofFIGS.8and10. As such, each block in the aforementioned flowcharts ofFIGS.8and10may be performed by a component and the apparatus may include one or more of those components. The components may be one or more hardware components specifically configured to carry out the stated processes/algorithm, implemented by a processor configured to perform the stated processes/algorithm, stored within a computer-readable medium for implementation by a processor, or some combination thereof. 
In one configuration, the apparatus1202, and in particular the cellular baseband processor1204, includes means for receiving, from a network node, communication parameters for a downlink transmission, the communication parameters indicative of a first resource element group (REG) bundle and a second REG bundle; means for identifying a superset based on the indication of the virtual CORESET; means for identifying the first REG bundle and the second REG bundle associated with the superset; means for receiving the downlink transmission via the first REG bundle; means for receiving the other downlink transmission via the third REG bundle; means for receiving reference signals transmitted over a fourth REG bundle, wherein the fourth REG bundle is contiguous in frequency with the third REG bundle and outside of the first superset and the second superset; means for performing channel estimation of the other downlink transmission based on reference signals transmitted over the third REG bundle and the fourth REG bundle; means for performing channel estimation of the downlink transmission based on reference signals transmitted over the first REG bundle and the second REG bundle; means for determining the first superset associated with the first REG bundle based on receiving the downlink transmission via the first REG bundle; means for performing the channel estimation based on reference signaling transmitted over at least the first REG bundle and the second REG bundle of the first superset; and means for demodulating the downlink transmission received via the first REG bundle based on the channel estimation of the reference signals received over each REG bundle of the first superset. The aforementioned means may be one or more of the aforementioned components of the apparatus1202configured to perform the functions recited by the aforementioned means. As described supra, the apparatus1202may include the TX Processor368, the RX Processor356, and the controller/processor359. 
As such, in one configuration, the aforementioned means may be the TX Processor368, the RX Processor356, and the controller/processor359configured to perform the functions recited by the aforementioned means. FIG.13is a flowchart1300of a method of wireless communication. The method may be performed by a base station (e.g., the base station102/180ofFIG.1; the apparatus1402ofFIG.14). At1302, the base station may determine a first superset of a plurality of supersets, wherein the first superset is uniquely mapped to a first resource element group (REG) bundle and a second REG bundle. For example,1302may be performed by a determining component1440ofFIG.14. Here, the base station may generate a mapping between a superset of a plurality of supersets, and one or more REG bundles. The base station may then transmit the mapping to the UE as illustrated in the first communication806/1006ofFIGS.8and10. At1304, the base station may transmit communication parameters for a downlink transmission, the communication parameters indicative of the first REG bundle and the second REG bundle. For example,1304may be performed by a transmitting component1442ofFIG.14. Here,1304may be performed by the base station in the first communication806/1006ofFIGS.8and10. At1306, the base station may transmit the downlink transmission via the first REG bundle. For example,1306may be performed by the transmitting component1442ofFIG.14. Here, the base station transmits a downlink transmission to the UE, as illustrated in the second communication810/1012ofFIGS.8and10. At1308, the base station may transmit a reference signal for demodulating the first REG bundle via the first REG bundle and the second REG bundle. For example,1308may be performed by the transmitting component1442ofFIG.14. Here, the base station may transmit a DMRS in multiple REG bundles associated with the superset that is used to transmit the downlink transmission. 
For example, while the base station may only transmit the downlink transmission and DMRS via a single REG bundle, the base station may also transmit DMRS in all other REG bundles in the same superset so that the UE may perform channel estimation and demodulation. The base station may transmit the DMRS in all the REG bundles of the superset in, for example, the second communication810/1012ofFIGS.8and10. In certain aspects, the downlink transmission is transmitted within a control resource set (CORESET), and wherein the communication parameters for the downlink transmission further comprise an indication of a cell-specific reference signal (CRS) pattern transmitted within the CORESET. In certain aspects, the downlink transmission is rate-matched around the CRS pattern. In certain aspects, the communication parameters comprise an indication of a superset configuration mapping the first REG bundle and the second REG bundle to a first superset. In certain aspects, the first REG bundle and the second REG bundle share a same precoder. In certain aspects, the superset configuration comprises a plurality of supersets including the first superset and a second superset, wherein each of the first superset and the second superset are uniquely mapped to at least one REG bundle. In certain aspects, the downlink transmission is transmitted within a control resource set (CORESET), wherein the communication parameters indicative of the first REG bundle and the second REG bundle comprises an indication of a virtual CORESET within the CORESET. In certain aspects, the indication of the virtual CORESET comprises an indication of a size of the superset. In certain aspects, the first REG bundle is defined by a first frequency channel, and wherein the second REG bundle is defined by a second frequency channel. In certain aspects, the communication parameters are received via a radio resource control (RRC) message. 
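The base-station behavior described above, where a superset configuration uniquely maps REG bundles to supersets and DMRS is placed in every REG bundle of the superset carrying the PDCCH, can be sketched as follows. All names and the dictionary layout are hypothetical illustrations, not a configuration format defined by the disclosure.

```python
# Hypothetical superset configuration: each REG bundle is uniquely
# mapped to one superset, and bundles of a superset share one precoder.
superset_config = {
    "superset_1": ["reg_bundle_1", "reg_bundle_2"],
    "superset_2": ["reg_bundle_3"],
}

def plan_downlink(config, pdcch_bundle):
    """Return {REG bundle: signals} for the superset holding the PDCCH:
    DMRS goes in every bundle, the PDCCH payload in only one."""
    for bundles in config.values():
        if pdcch_bundle in bundles:
            return {b: (["PDCCH", "DMRS"] if b == pdcch_bundle else ["DMRS"])
                    for b in bundles}
    raise KeyError(pdcch_bundle + " is not mapped to any superset")

plan = plan_downlink(superset_config, "reg_bundle_1")
```

Note that bundles outside the PDCCH's superset (here, reg_bundle_3) receive no extra DMRS from this transmission, matching the unique superset-to-bundle mapping recited above.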
In certain aspects, the reference signals transmitted over the first REG bundle and the second REG bundle comprises a demodulation reference signal (DMRS). FIG.14is a diagram1400illustrating an example of a hardware implementation for an apparatus1402. The apparatus1402is a BS and includes a baseband unit1404. The baseband unit1404may communicate through a cellular RF transceiver with the UE104. The baseband unit1404may include a computer-readable medium/memory. The baseband unit1404is responsible for general processing, including the execution of software stored on the computer-readable medium/memory. The software, when executed by the baseband unit1404, causes the baseband unit1404to perform the various functions described supra. The computer-readable medium/memory may also be used for storing data that is manipulated by the baseband unit1404when executing software. The baseband unit1404further includes a reception component1430, a communication manager1432, and a transmission component1434. The communication manager1432includes the one or more illustrated components. The components within the communication manager1432may be stored in the computer-readable medium/memory and/or configured as hardware within the baseband unit1404. The baseband unit1404may be a component of the BS310and may include the memory376and/or at least one of the TX processor316, the RX processor370, and the controller/processor375. The communication manager1432includes a determining component1440for determining a first superset of a plurality of supersets, wherein the first superset is uniquely mapped to a first resource element group (REG) bundle and a second REG bundle, e.g., as described in connection with1302ofFIG.13. 
The communication manager1432further includes a transmitting component1442for transmitting communication parameters for a downlink transmission, the communication parameters indicative of the first REG bundle and the second REG bundle; transmitting the downlink transmission via the first REG bundle; and transmitting a reference signal for demodulating the first REG bundle via the first REG bundle and the second REG bundle; e.g., as described in connection with1304,1306, and1308ofFIG.13. The apparatus may include additional components that perform each of the blocks of the algorithm in the aforementioned flowcharts ofFIGS.8and10. As such, each block in the aforementioned flowcharts ofFIGS.8and10may be performed by a component and the apparatus may include one or more of those components. The components may be one or more hardware components specifically configured to carry out the stated processes/algorithm, implemented by a processor configured to perform the stated processes/algorithm, stored within a computer-readable medium for implementation by a processor, or some combination thereof. In one configuration, the apparatus1402, and in particular the baseband unit1404, includes means for determining a first superset of a plurality of supersets, wherein the first superset is uniquely mapped to a first resource element group (REG) bundle and a second REG bundle; means for transmitting communication parameters for a downlink transmission, the communication parameters indicative of the first REG bundle and the second REG bundle; means for transmitting the downlink transmission via the first REG bundle; and means for transmitting a reference signal for demodulating the first REG bundle via the first REG bundle and the second REG bundle. 
The aforementioned means may be one or more of the aforementioned components of the apparatus1402configured to perform the functions recited by the aforementioned means. As described supra, the apparatus1402may include the TX Processor316, the RX Processor370, and the controller/processor375. As such, in one configuration, the aforementioned means may be the TX Processor316, the RX Processor370, and the controller/processor375configured to perform the functions recited by the aforementioned means. Additional Considerations It is understood that the specific order or hierarchy of blocks in the processes/flowcharts disclosed is an illustration of example approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes/flowcharts may be rearranged. Further, some blocks may be combined or omitted. The accompanying method claims present elements of the various blocks in a sample order, and are not meant to be limited to the specific order or hierarchy presented. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Terms such as “if,” “when,” and “while” should be interpreted to mean “under the condition that” rather than imply an immediate temporal relationship or reaction. 
That is, these phrases, e.g., “when,” do not imply an immediate action in response to or during the occurrence of an action, but simply imply that if a condition is met then an action will occur, but without requiring a specific or immediate time constraint for the action to occur. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects. Unless specifically stated otherwise, the term “some” refers to one or more. Combinations such as “at least one of A, B, or C,” “one or more of A, B, or C,” “at least one of A, B, and C,” “one or more of A, B, and C,” and “A, B, C, or any combination thereof” include any combination of A, B, and/or C, and may include multiples of A, multiples of B, or multiples of C. Specifically, combinations such as “at least one of A, B, or C,” “one or more of A, B, or C,” “at least one of A, B, and C,” “one or more of A, B, and C,” and “A, B, C, or any combination thereof” may be A only, B only, C only, A and B, A and C, B and C, or A and B and C, where any such combinations may contain one or more member or members of A, B, or C. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. 
The words “module,” “mechanism,” “element,” “device,” and the like may not be a substitute for the word “means.” As such, no claim element is to be construed as a means plus function unless the element is expressly recited using the phrase “means for.” Example Aspects The following examples are illustrative only and may be combined with aspects of other embodiments or teachings described herein, without limitation. Example 1 is a method for wireless communication by a user equipment (UE), comprising: receiving, from a network node, communication parameters for a downlink transmission, the communication parameters indicative of a first resource element group (REG) bundle and a second REG bundle; receiving the downlink transmission via the first REG bundle; and performing channel estimation of the downlink transmission based on reference signals transmitted over the first REG bundle and the second REG bundle. Example 2 is the method of example 1, wherein the downlink transmission is received within a control resource set (CORESET), and wherein the communication parameters for the downlink transmission further comprise an indication of a cell-specific reference signal (CRS) pattern transmitted within the CORESET. Example 3 is the method of example 2, wherein the downlink transmission is punctured by the CRS pattern, or wherein the downlink transmission is rate-matched around the CRS pattern. Example 4 is the method of any of examples 1-3, wherein the communication parameters comprise an indication of a superset configuration mapping the first REG bundle and the second REG bundle to a first superset. Example 5 is the method of any of examples 1-4, wherein the first REG bundle and the second REG bundle share a same precoder. 
Example 6 is the method of any of examples 1-5, wherein the superset configuration comprises a plurality of supersets including the first superset and a second superset, wherein each of the first superset and the second superset are uniquely mapped to at least one REG bundle. Example 7 is the method of any of examples 1-6, wherein performing channel estimation of the downlink transmission further comprises: determining the first superset associated with the first REG bundle based on receiving the downlink transmission via the first REG bundle; and performing the channel estimation based on reference signaling transmitted over at least the first REG bundle and the second REG bundle of the first superset. Example 8 is the method of example 7, further comprising demodulating the downlink transmission received via the first REG bundle based on the channel estimation of the reference signals received over each REG bundle of the first superset. Example 9 is the method of any of examples 1-8, wherein the downlink transmission is transmitted within a control resource set (CORESET), wherein the communication parameters indicative of the first REG bundle and the second REG bundle comprise an indication of a virtual CORESET within the CORESET, and wherein the method further comprises: identifying a superset based on the indication of the virtual CORESET; and identifying the first REG bundle and the second REG bundle associated with the superset. Example 10 is the method of example 9, wherein the indication of the virtual CORESET comprises an indication of a size of the superset. 
Example 11 is the method of any of examples 1-10, wherein the communication parameters configure the UE to monitor: (i) the first REG bundle and the second REG bundle for the downlink transmission, and (ii) a third REG bundle for another downlink transmission, wherein the first REG bundle and the second REG bundle are mapped to a first superset, wherein the third REG bundle is mapped to a second superset, and wherein the first REG bundle, the second REG bundle, and the third REG bundle are contiguous in frequency. Example 12 is the method of example 11, further comprising: receiving the other downlink transmission via the third REG bundle; receiving reference signals transmitted over a fourth REG bundle, wherein the fourth REG bundle is contiguous in frequency with the third REG bundle and outside of the first superset and the second superset; and performing channel estimation of the other downlink transmission based on reference signals transmitted over the third REG bundle and the fourth REG bundle. Example 13 is the method of any of examples 11 and 12, wherein the first superset comprises more REG bundles than the second superset. Example 14 is the method of any of examples 1-13, wherein the first REG bundle is defined by a first frequency channel, and wherein the second REG bundle is defined by a second frequency channel. Example 15 is the method of any of examples 1-14, wherein the communication parameters are received via a radio resource control (RRC) message. Example 16 is the method of any of examples 1-15, wherein the reference signals transmitted over the first REG bundle and the second REG bundle comprises a demodulation reference signal (DMRS). Example 17 is the method of any of examples 1-16, wherein the downlink transmission is a physical downlink control channel (PDCCH). 
Example 18 is a method for wireless communication by a network node, comprising: determining a first superset of a plurality of supersets, wherein the first superset is uniquely mapped to a first resource element group (REG) bundle and a second REG bundle; transmitting communication parameters for a downlink transmission, the communication parameters indicative of the first REG bundle and the second REG bundle; transmitting the downlink transmission via the first REG bundle; and transmitting a reference signal for demodulating the first REG bundle via the first REG bundle and the second REG bundle. Example 19 is the method of example 18, wherein the downlink transmission is transmitted within a control resource set (CORESET), and wherein the communication parameters for the downlink transmission further comprise an indication of a cell-specific reference signal (CRS) pattern transmitted within the CORESET. Example 20 is the method of any of examples 18 and 19, wherein the downlink transmission is punctured by the CRS pattern, or wherein the downlink transmission is rate-matched around the CRS pattern. Example 21 is the method of any of examples 18-20, wherein the communication parameters comprise an indication of a superset configuration mapping the first REG bundle and the second REG bundle to a first superset. Example 22 is the method of example 21, wherein the first REG bundle and the second REG bundle share a same precoder. Example 23 is the method of any of examples 21 and 22, wherein the superset configuration comprises a plurality of supersets including the first superset and a second superset, wherein each of the first superset and the second superset are uniquely mapped to at least one REG bundle. 
Example 24 is the method of any of examples 18-23, wherein the downlink transmission is transmitted within a control resource set (CORESET), wherein the communication parameters indicative of the first REG bundle and the second REG bundle comprises an indication of a virtual CORESET within the CORESET. Example 25 is the method of example 24, wherein the indication of the virtual CORESET comprises an indication of a size of the superset. Example 26 is the method of any of examples 18-25, wherein the first REG bundle is defined by a first frequency channel, and wherein the second REG bundle is defined by a second frequency channel. Example 27 is the method of any of examples 18-26, wherein the communication parameters are received via a radio resource control (RRC) message. Example 28 is the method of any of examples 18-27, wherein the reference signals transmitted over the first REG bundle and the second REG bundle comprises a demodulation reference signal (DMRS). Example 29 is a user equipment (UE) comprising: a memory; and a processor coupled to the memory, the processor and memory being configured to perform the method of any of claims1-17. Example 30 is a network node comprising: a memory; and a processor coupled to the memory, the processor and memory being configured to perform the method of any of claims18-28. Example 31 is a user equipment (UE) comprising: one or more means for performing the method of any of claims1-17. Example 32 is a network node comprising: one or more means for performing the method of any of claims18-28. Example 33 is a non-transitory computer-readable storage medium having instructions stored thereon for performing the method of any of claims1-17for wireless communication by a user equipment (UE). Example 34 is a non-transitory computer-readable storage medium having instructions stored thereon for performing the method of any of claims18-28for wireless communication by a network node.
11943160
Like reference numerals refer to like elements throughout. DETAILED DESCRIPTION Disclosed herein are systems and methods for rapid low-complexity downlink messaging with high reliability in 5G and 6G networks. Systems and methods disclosed herein (the “systems” and “methods”, also occasionally termed “embodiments”, “versions”, or “arrangements”, generally according to present principles) can provide urgently needed wireless communication protocols to reduce computational demands, energy consumption, message loss, retransmission delays, and uncertainty in downlink messaging in 5G and 6G networks, according to some embodiments. The protocols may also be applicable to uplink and sidelink and other types of messages. The protocols may include demarking each downlink message, and particularly each downlink control message, by prepending or appending a predetermined pattern to the message, thereby indicating the position and size of the message. Demarcation, according to some embodiments, may reduce the number of calculations and tests that the recipient must perform, thereby reducing the chance of message faults, simplifying receiver software, and saving substantial computational power, while avoiding unnecessary delays such as retransmission delays. Terms herein generally follow 3GPP (third generation partnership project) standards, with additional terms defined as needed. A wireless network generally includes a base station (or gNB, generation Node B, eNB, evolved Node B, AP, access point, or the like) in signal communication with a plurality of user devices (or UE, user equipment, terminals, user nodes, or the like). 
5G and 6G technologies are designed for eMBB (enhanced Mobile Broadband communications), URLLC (ultra reliable low latency communications), and mMTC (massive machine-type communication) generally involving large numbers of user devices such as vehicles, mobile phones, self-propelled and robotic machines, portable and stationary computers, and many other advanced wireless instruments. “5G” represents fifth-generation wireless technology and “6G” sixth-generation wireless technology. “NB-IoT” (narrow-band Internet-of-things) and “5G-Light” are versions that provide slightly reduced complexity and bandwidth requirements. A network (or cell or LAN Local Area Network or RAN Radio Access Network or the like) may include a base station in signal communication with a plurality of user devices and operationally connected to a core network (CN) which handles non-radio tasks, such as administration, and is usually connected to a larger network such as the Internet. The time-frequency space is generally configured as a “resource grid” including a number of “resource elements”, each resource element being a specific unit of time termed a “symbol period”, and a specific frequency and bandwidth termed a “subcarrier” (or “subchannel” in some references). Symbol periods may be termed “OFDM symbols” (Orthogonal Frequency-Division Multiplexing) in references. The time domain may be divided into ten-millisecond frames, one-millisecond subframes, and some number of slots, each slot including 14 symbol periods. The number of slots per subframe ranges from 1 to 8 depending on the “numerology” selected. The frequency axis is divided into “resource blocks” (also “resource element groups” or “REG” or “channels” in references) including 12 subcarriers. The “numerology” of a resource grid corresponds to the subcarrier spacing in the frequency domain. Each subcarrier is at a slightly different frequency. Subcarrier spacings of 15, 30, 60, 120, and 240 kHz are defined in various numerologies. 
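The resource-grid arithmetic described above can be made concrete. The sketch below assumes the standard 3GPP scaling rule, in which the subcarrier spacing is 15 kHz scaled by a power of two per numerology, and covers numerologies 0-3 (the 1-to-8 slots-per-subframe range stated in the text); the function name is illustrative.

```python
# Sketch of the resource-grid arithmetic described in the text, assuming
# the standard 3GPP scaling: subcarrier spacing = 15 kHz * 2**mu.

SYMBOLS_PER_SLOT = 14            # each slot includes 14 symbol periods
SUBCARRIERS_PER_RESOURCE_BLOCK = 12

def grid_parameters(mu):
    """Return (subcarrier spacing in kHz, slots per 1-ms subframe,
    symbol periods per subframe) for numerology mu = 0..3."""
    scs_khz = 15 * (2 ** mu)
    slots_per_subframe = 2 ** mu          # ranges 1..8 for mu = 0..3
    symbols_per_subframe = slots_per_subframe * SYMBOLS_PER_SLOT
    return scs_khz, slots_per_subframe, symbols_per_subframe
```

For example, numerology 3 gives 120 kHz spacing, 8 slots, and 112 symbol periods per subframe.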
Each subcarrier can be independently modulated to convey message information. Thus a resource element, spanning a single symbol period in time and a single subcarrier or subchannel in frequency, is the smallest unit of a message. A message is “unicast” if it is addressed to a specific recipient, and “broadcast” if it includes no recipient address. Communication in 5G generally takes place on abstract message “channels” (not to be confused with frequency channels) representing different types of messages, embodied as a PDCCH and PUCCH (physical downlink and uplink control channels) for transmitting control information, PDSCH and PUSCH (physical downlink and uplink shared channels) for transmitting data and other non-control information, PBCH (physical broadcast channel) for transmitting information to multiple user nodes, among other channels that may be in use. In addition, one or more random access channels, termed “RACH” herein, also called PRACH in references, represents both abstract and physical random access channels, including potentially multiple random access channels in a single cell, and configured for uplink and/or downlink, as detailed below. “CRC” (cyclic redundancy code) is an error-detection code. “RNTI” (radio network temporary identity) such as “C-RNTI” (cellular RNTI) are network-assigned user identification codes. “MAC” (media access control) is a global identification code. “DMRS” (demodulation reference signal) is a standard demodulation reference. A “DCI” (downlink control information) message is a message transmitted from a base station to a user device indicating a scheduled uplink grant or downlink transmission, among other tasks. In addition, the following terms are defined herein. Each modulated resource element of a message is referred to as a “modulated message resource element”, or more concisely as a “message element” in examples below, thereby avoiding confusion with ambiguous terms such as “symbol”. 
Likewise, each resource element of a demodulation reference is a “reference element” herein. A “start” of a message refers to the first or starting resource element of the message, and the “end” of a message refers to the last or ending resource element of the message. A message may be configured “time-spanning” if it occupies multiple sequential symbol periods at a single frequency, or “frequency-spanning” if it occupies multiple subcarriers at a single symbol period (thereby avoiding terms “TDD” time-division duplexing and “FDD” frequency-division duplexing, which properly refer to duplexing). A message may be arranged “frequency-first” if the message elements proceed across multiple subcarriers until reaching a limit, such as a resource block boundary, and then proceeding to the next symbol period. A device “knows” something if it has the relevant information. A message is “faulted” or “corrupted” if one or more bits of the message are altered relative to the original message. “Receiver” is to be interpreted broadly, as including connected processors and other electronics and related software/firmware configured to receive and process incoming wireless messages. “Random” and “pseudorandom” may be used interchangeably. “Low-complexity” refers to devices and procedures necessary for wireless communication, exclusive of devices and procedures that provide high-performance communication. 5G/6G specifications include many procedures and requirements that greatly exceed those necessary for wireless communication, in order to provide high-performance communications at low latency and high reliability for users that demand it. Compared to high-performance 5G/6G communications, low-complexity procedures generally require less computation and less signal processing, while providing defaults that involve fewer computation steps and/or smaller memory spaces. 
“Reduced-capability” refers to wireless devices that cannot comply with 5G/6G protocols, absent the systems and methods disclosed herein. Many future IoT (internet of things) use cases are expected to involve simple, low-cost, reduced-capability MTC (machine-type communication) wireless devices. In addition, task-specific low-cost wireless applications typically have minimal wireless communication needs. Because both high-performance and reduced-capability devices share the same limited electromagnetic spectrum, it would be tragic if the simpler machine-type applications are forced to develop a separate wireless technology, competing with 5G and 6G for bandwidth and locations. A much more efficient solution is to include simpler protocols and defaults appropriate to the low-cost, low-demand MTC devices. For economic reasons as well as commercial feasibility, future IoT application developers will demand ways to transmit messages using protocols appropriate to the simpler devices. The following examples illustrate low-complexity options including fixed-pattern “demarcations” appended and/or prepended to downlink messages. FIG. 1A is a schematic showing an exemplary embodiment of a low-complexity downlink message with demarcations, according to some embodiments. As depicted in this non-limiting example, a low-complexity downlink control or data message may include a predetermined “demarcation” 101, which is one or more resource elements having a predetermined modulation, as opposed to data. Each demarcation 101 may be prepended to the message 102 and configured to indicate the start of the message 102, and/or an appended demarcation 103 may be configured to indicate the end of the message 102. The prepended and appended demarcations may be the same or different. The demarcations 101, 103 may be modulated as plain-text, that is, without scrambling, encoding, bit interleaving, or other manipulations that may impede the receiver in recognizing the demarcations 101, 103. 
The receiver may then monitor a downlink channel, such as a PDCCH or PDSCH channel, and may detect the prepended demarcation 101, and may thereby determine the start of the message 102. The receiver may continue monitoring the channel and detect the appended demarcation 103, and thereby determine the end of the message 102. The receiver may thereby avoid wasting time and energy on a “blind search” of candidate control messages in the downlink stream, according to some embodiments. The receiver can also recognize and receive a downlink data message demarked as shown, without requiring an unnecessary transmission first. A user device may request that the transmitting entity (such as a base station) use either the prepended or appended demarcations, or both, for downlink control or data messages, or both. For example, during the initial access, or at a later time, the user device may transmit a message including an indication that the receiver is a reduced-capability device, or that the receiver is not capable of finding downlink messages reliably using the blind search methods, or explicitly requesting that the base station prepend and append the demarcations to downlink messages. The receiver may also explicitly or implicitly accept that, as a consequence of receiving the additional demarcations, the receiver may not achieve the lowest latency that high-performance user devices expect. For example, the demarcations necessarily occupy additional resource elements, therefore taking additional time or frequency resources to transmit in addition to the message itself, and therefore may be more difficult for the base station to schedule in busy traffic. For many applications, however, such compromises may be of little or no consequence. 
In addition, the extra resource elements may be partially offset by use of a regular 16-bit CRC, instead of the expanded 24-bit CRC that was introduced due to the large number of accidental agreements encountered in massive blind search operations to find downlink messages. In addition, the extra resource elements may be further compensated by avoiding a DCI transmission before each data message. Although the message is displayed horizontally in the figure, the message may be transmitted either frequency-spanning or time-spanning, or in another configuration such as occupying multiple subcarriers and symbol times according to the resources allocated to the downlink message. Although the example is described in terms of downlink from a base station, the message may be transmitted in sidelink (between user devices), with demarcations indicating where each sidelink message begins and ends. As a further option, the message may be transmitted uplink, from the user device to the base station, with demarcations indicating where the ends of the message occur. FIG. 1B is a schematic showing an exemplary embodiment of a low-complexity downlink message including a predetermined start-code and a length indicator, according to some embodiments. As depicted in this non-limiting example, a downlink message 112 includes, or is transmitted contiguous with, a prepended demarcation which, in this case, is a start-code 111. The message 112 includes a leading length field 113 indicating the length of the message 112 to follow. As used herein, a “start-code” is a predetermined signal pattern that, when prepended to a message, indicates the start of the message. Likewise, an “end-code” is another predetermined signal pattern that, when appended to a message, indicates the end of the message. The start-code and end-code may be different. 
A user device, monitoring a downlink channel, may recognize the start-code 111, thereby determining the start of the message 112 including the length field 113, and may determine the end of the message 112 by adding the length 113 to the start position. In some embodiments, the length of the message may be provided explicitly in the length field, in resource elements for example. In other embodiments, the length field may include a code that indicates the length in ranges, such as the size code of a BSR (buffer status report) message for example. FIG. 1C is a schematic showing an exemplary embodiment of a low-complexity downlink message including an appended end-code, according to some embodiments. As depicted in this non-limiting example, a message 122 has an appended end-code 123 indicating the end of the message. FIG. 1D is a schematic showing an exemplary embodiment of a low-complexity downlink message including predetermined start-codes and end-codes, according to some embodiments. As depicted in this non-limiting example, a start-code 131 and an end-code 133 are provided to demark a message 132. The start-code 131 may include an identifier 134 such as one of the RNTI codes of the intended recipient. Alternatively, the start-code 131 may include a demodulation reference 135 such as a DMRS (demodulation reference signal) or a custom short-form demodulation reference as described below, for example. Alternatively, the start-code 131 may include a “gap” 136, which is a resource element having no signal therein. Alternatively, the start-code 131 may include a label 137 specifying a message starting point, among other possibilities for the start-code 131. Also shown, the end-code 133 may include a recipient identifier 138, a demodulation reference 139, a gap with no signal 140, or an endpoint label 141. 
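The FIG. 1B reception procedure, scanning for a predetermined start-code and then reading a leading length field to locate the end of the message, can be sketched as follows. This is a minimal sketch: resource-element values are represented as abstract tokens rather than modulation states, and the start-code tokens are hypothetical.

```python
# Minimal sketch of FIG. 1B reception: scan a stream of resource-element
# values for a predetermined start-code, then read a length field giving
# the number of message elements that follow. Tokens here stand in for
# measured modulation states; "SC0"/"SC1" are hypothetical values.

START_CODE = ["SC0", "SC1"]          # hypothetical two-element start-code

def extract_message(stream):
    """Return (message_elements, end_index) for the first demarked
    message in the stream, or None if no start-code is found."""
    n = len(START_CODE)
    for i in range(len(stream) - n):
        if stream[i:i + n] == START_CODE:
            length = stream[i + n]        # leading length field
            start = i + n + 1
            return stream[start:start + length], start + length
    return None
```

No blind search over candidate positions and sizes is needed: the receiver tests each position against one fixed pattern and then knows the message extent directly.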
The receiver, monitoring and interpreting signals on a downlink channel, may detect messages intended for that receiver by detecting a start-code and interpreting the subsequent resource elements as the message content, or it may detect an end-code and interpret the foregoing resource elements as the message, or it may detect both a start-code and an end-code and thereby recognize that the intervening resource elements are the message. The receiver can then perform signal processing as necessary, such as descrambling and decoding, on the message elements, interpret an error-detection code (such as a CRC) therein, and determine whether the message is intended for that receiver when the error-detection code agrees with the bit-level content of the message. In other embodiments, the message may be transmitted in plain text, without scrambling for example, and the recipient may determine that the message is intended for it according to a start-code or end-code that includes the identification of the receiver, such as its C-RNTI code. In each case, the amount of computation required of the receiver in locating and interpreting the message may be less (usually very much less) than would be required, absent the demarcations. In addition, when applied to a downlink data message, the demarcations may enable the recipient to receive the message without a preceding DCI message, a further savings. FIG. 2 is a schematic showing an exemplary embodiment of a resource grid including low-complexity downlink messages demarked with the recipient address, according to some embodiments. As depicted in this non-limiting example, a resource grid 201 includes subcarriers 202 indicated by horizontal lines, and symbol times 203 indicated by vertical lines. Each resource element, such as 204, is defined by one of the subcarriers 202 and one of the symbol times 203. 
The resource grid 201 includes two frequency-spanning downlink messages that include demarcations including the identification code of the recipient of the message. A first message includes four elements marked “ID” 205 holding the RNTI of the recipient, for example, followed by a number of message elements 206 marked “M”, and terminated by an error-detection code marked “CRC” 207. The identification code 205 thereby serves as a start-code indicating the start of the message 206, and also identifying plainly the identity of the intended recipient. The receiver may readily detect the identification code in plain-text (that is, not scrambled or punctured or encoded or transformed or otherwise obscured). The receiver may thereby determine that the message is intended for itself, without a blind search or a preceding DCI. This may greatly reduce the amount of computation required to determine the message's starting point, and the receiver may also ignore messages that are prepended by the identification codes of other users, thereby saving further computation. A second message 208 includes a CRC field 209 and an appended end-code 210, which in this case is the identification code of the receiver. The receiver may record the various message elements 208, 209, 210 as-received as a series of measured modulation states or the corresponding bit sequences of each message element, and upon detecting the end-code 210 may recognize its own identification code. The receiver can thereby determine the content of the message 208. The receiver has thus obtained the message as intended. FIG. 3 is a schematic showing an exemplary embodiment of a user request message specifying downlink identification demarcations, according to some embodiments. As depicted in this non-limiting example, a user device may transmit a message to a base station requesting that certain demarcations be applied to each downlink message to that user. 
The request message may include an identification code 301 of the requesting user device, a “message type” field 302 indicating that the message is an accommodation request, a “pre-ID” field 303 indicating whether the base station should include a start-code exhibiting the receiver's identification code, a “post-ID” field 304 indicating the same for the end-code, a field 305 labeled “when” that instructs the base station whether to include the requested demarcations on control or data messages, and flags 306 which may provide additional options. All fields are optional. More specifically, the user ID field 301 may include the C-RNTI identification code 311 of the receiver so that the base station knows which user device is lodging the request, or it may be another identification code 312 such as the full MAC address, or a compact self-selected code that minimizes resource usage, among other identification formats. The message type 302 may be configured to indicate that the request is to provide downlink demarcations 313 in the form of receiver identification codes. The pre-ID field 303 may be a flag indicating whether the base station should use the receiver's C-RNTI 314 or another code 315 as the start-code demarcations, and likewise the post-ID field 304 may indicate whether an end-code of the RNTI 318 or other 319 identification code is desired. The “when” field 305 may indicate whether the demarcation is expected on downlink control messages only 316 or on all downlink control and data messages 317. The flags 306 may indicate further requested details 320 as described in examples below. The base station, upon receiving the accommodation request message, may then record the details of the request including the type and position of the demarcations requested, and may then include the requested demarcations with subsequent downlink messages. 
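The fields of the FIG. 3 accommodation request can be sketched as a simple record. The field names and default encodings below are illustrative assumptions, not a defined message format; only the field roles come from the figure description.

```python
# Sketch of the FIG. 3 accommodation-request fields as a record.
# Field names and string encodings are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class DemarcationRequest:
    user_id: str                 # C-RNTI, MAC address, or self-selected code
    message_type: str = "demarcation-request"
    pre_id: bool = True          # prepend the receiver's ID as a start-code
    post_id: bool = False        # append the receiver's ID as an end-code
    when: str = "control-only"   # "control-only" or "control-and-data"
    flags: dict = field(default_factory=dict)   # further requested details
```

The base station would store one such record per user and consult it before each downlink transmission to that user.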
FIG. 4 is a flowchart showing an exemplary embodiment of a procedure for transmitting a downlink message including an identification demarcation, according to some embodiments. As depicted in this non-limiting example, at 401 a user device transmits a request message, such as a downlink demarcation request including placement of the user device's identification code, to the base station. Understanding that such accommodation may cause additional delays, the user device may find that compromise acceptable. At 402, the base station receives the request and records, in its memory, the details of the request, such as whether demarcations are to be prepended or appended, and the type of demarcations desired. At 403, the user device begins monitoring downlink channels such as the PDCCH and PDSCH for the user device's identification code. In some embodiments, the identification code may be scrambled according to the user device's identification code (that is, the identification code is itself used as the scrambling code), and the receiver can descramble it upon receipt. Alternatively, the identification code may be in plain-text, not scrambled. In either case, the receiver may determine the expected bit pattern of its identification code, with or without scrambling, and may scan for that bit pattern to recognize its message. At 404, the base station has a message to send to the user device, so the base station checks its memory to see if the user device has requested any demarcations or other instructions. At 405, the base station transmits the message, with the receiver's identification code prepended as requested. At 406, the receiver detects its own identification code among the incoming resource elements, determines the message elements following that identification code, and transmits an acknowledgement back to the base station. 
Thus, by detecting the identification code, the user device has found the starting point of the message and determined that the message is intended for it, and can then readily determine the extent of the message according to an appended demarcation or by testing various lengths of the message against the error-detection code. The receiver has then received and interpreted the message with a small fraction of the computation that would be required for detection of messages without the demarcations. FIGS. 5A-5F are schematics showing exemplary embodiments of downlink messages including start-codes and end-codes, according to some embodiments. As depicted in these non-limiting examples, a variety of start-code and end-code types may be provided to assist receivers in locating their downlink messages. FIG. 5A shows a message 502 with a start-code 501 prepended and an end-code 503 appended. The start-code 501 and end-code 503 may be predetermined bit sequences that the user device can recognize. FIG. 5B shows a message 512 with a start-code configured as a gap 511. A gap is a resource element with no signal therein. Alternatively, the gap 511 may include signal with an amplitude below the lowest amplitude modulation level of the modulation scheme, or unmodulated carrier at the subcarrier frequency, or other signal not resembling a modulated resource element. Appended to the message 512 is another gap 513. Thus the gaps 511, 513 demark the start and end of the message. A receiver can then scan a downlink channel for message regions demarked at both ends by gaps, and may thereby localize the message 512. Then, by checking an embedded RNTI or a scrambled CRC code, the receiver can determine whether the message belongs to that receiver, without a blind search and without a DCI. FIG. 5C shows another version in which a message 522 is prepended by a demodulation reference “demod-1” 521 and appended by another demodulation reference “demod-2” 523. 
The demodulation references 521, 523 may be conventional DMRS, or they may be a short-form demodulation reference of a custom type. For example, the short-form demodulation references may be configured to exhibit the maximum and minimum amplitude levels of the modulation scheme, and the maximum and minimum phase levels, so that the receiver can readily calculate any intermediate levels by interpolation. The receiver can then employ those demodulation references 521, 523 in demodulating the message 522 with high fidelity, since the demodulation references 521, 523 are in close proximity to the message 522. For example, the short-form demodulation reference may have just two reference elements, such as a first reference element modulated as the maximum amplitude and phase of the modulation scheme, and a second reference element modulated as the minimum amplitude and phase. Alternatively, one or both of the short-form demodulation references 521, 523 may be in the reverse order, minimum amplitude and phase followed by the maximum amplitude and phase, or other combination exhibiting the maximum and minimum values. The demarcations configured as reference elements 521, 523 may thereby indicate the start and end of the message 522, and may also provide demodulation levels in very close proximity to the message they are intended to demodulate. The message 522 additionally includes a length field 524 indicating either the length of the remaining message portion 522, or the length of the message 522 including the length field 524, depending on convention. The receiver can then determine the location of the final demodulation reference 523 according to the length 524, further simplifying reception. FIG. 5D shows an alternative embodiment of a message 532 preceded by a demodulation reference 531 and followed by a gap 533. The receiver may then recognize the start of the message 532 by the demodulation reference 531, and the end by the gap 533. 
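The interpolation step described for the short-form demodulation reference, recovering every intermediate modulation level from just the exhibited maximum and minimum, can be sketched as follows. The 4-level scheme in the usage note is purely illustrative; the function name is an assumption.

```python
# Sketch of deriving intermediate modulation levels from a two-element
# short-form demodulation reference that exhibits only the maximum and
# minimum levels ("11" then "00"), as described for FIG. 5C.

def interpolate_levels(max_val, min_val, n_levels):
    """Return n_levels evenly spaced modulation levels from min_val to
    max_val, computed from just the two measured reference values."""
    step = (max_val - min_val) / (n_levels - 1)
    return [min_val + k * step for k in range(n_levels)]
```

For a hypothetical 4-level amplitude scheme with measured reference amplitudes 3.0 and 0.0, the receiver would recover the full level set {0.0, 1.0, 2.0, 3.0}; the same calculation applies to phase levels.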
FIG. 5E shows yet another embodiment in which a message 542 is preceded by a gap 541 and followed by a demodulation reference 543. The receiver can find the start of the message 542 according to the gap 541, and the end of the message 542 by the demodulation reference 543. FIG. 5F shows an embodiment of a message 553 preceded by a start-code including a gap 551 and a demodulation reference 552, and followed by an end-code including another demodulation reference 554 and a final gap 555. The receiver can then recognize the extent of the message according to the gap positions 551, 555 and can use the demodulation references 552, 554 in demodulating the message 553. FIG. 5G shows an embodiment of a message 564 having a start-code including a gap 561, a demodulation reference 562, and another gap 563. Appended is a gap 565, another demodulation reference 566, and a final gap 567. The user device can recognize the demodulation references 562, 566 according to the gaps around them, and the intervening message 564, without searching, other than to monitor the downlink channel for these characteristic patterns. Advantageously, a second message (not shown) may follow, and may share the first message's end-code 565, 566, 567 as the second message's start-code. Any number of messages may be sequenced in this way, addressed to various users but demarked by the intervening gap-demod-gap combination. FIG. 6 is a schematic showing an exemplary embodiment of a downlink message including demarcations, address field, and error-detection field, according to some embodiments. As depicted in this non-limiting example, the message includes a prepended start-code 601, an address field 602, a data field 603, an error-detection code 604, and an end-code 605. All fields are optional. The start-code 601 may be configured to indicate the start of the message according to a predetermined pattern of signals. 
For example, the start-code 601 may include a gap 616 and a demodulation reference 617 such as a short-form demodulation reference 618 that exhibits, in two reference elements, the maximum amplitude and phase levels of the modulation scheme, followed by the minimum amplitude and phase levels (or other suitable arrangement of the maximum and minimum amplitude and phase levels). The maximum levels may be symbolized as “11” in the first reference element, and the minimum by “00” in the second reference element. The representation indicates, in the first numeral, “1” for the maximum amplitude and “0” for the minimum amplitude. The second numeral represents phase, with again “1” being the maximum phase and “0” the minimum phase. In each case, the receiver can readily calculate any intervening amplitude and phase levels of the modulation scheme by interpolating between the maximum and minimum values exhibited by the short-form demodulation reference. The address field 602 may indicate the identification of the intended recipient of the message, such as its MAC address 620, its C-RNTI 621, its self-selected code 622, or other identifying code. The data field 603 is the main part of the message and contains whatever information the base station has to send to the user device. The error-detection field 604 may be a CRC code 623, a parity construct 624, or other code configured to expose and possibly correct errors in the message. In some embodiments, the error-detection code 604 may be scrambled with the identification code of the intended recipient. In that case, the address field 602 may not be necessary. The end-code 605 may be configured to indicate the end of the message, such as a demodulation reference 626 and/or a gap 627. The demodulation reference 626 may again be a short-form demodulation reference 628 exhibiting the maximum and minimum amplitude and phase levels of the modulation scheme. 
In the depicted case, the short-form demodulation reference 628 has an opposite code from the start-code demodulation reference 618. Specifically, the end-code demodulation reference 628 has the minimum amplitude and phase in the first reference element, and the maximum amplitude and phase in the second reference element. The same comments and features apply equally to a phase-modulated scheme such as QPSK. The figure, and the other examples herein, refer to the modulation scheme in which a standard amplitude modulation is multiplexed with phase modulation, since this provides the best SNR (signal-to-noise ratio) under most conditions. However, in other embodiments, a different modulation scheme may be used, termed “pulse-amplitude modulation” or PAM, in which a first amplitude-modulated signal and a second amplitude-modulated signal are added with a 90-degree phase difference. The two signals are often called “real” and “imaginary” or “I” (in-phase) and “Q” (quad-phase), among other terms. For the purposes of the present disclosure, the two schemes are equivalent, as are other modulation schemes involving amplitude and/or phase modulation. It is immaterial whether the message elements are modulated according to amplitude and phase separately, or by PAM, or otherwise, so long as the receiver can calculate all of the modulation levels from those exhibited in the two modulated resource elements shown. For consistency and clarity, the examples show the standard amplitude and phase modulation version, but artisans with skill in the art will recognize how the principles disclosed herein can apply equivalently to PAM and the other modulation schemes. FIG. 7A is a schematic showing an exemplary embodiment of multiple messages demarked by gaps, according to some embodiments. As depicted in this non-limiting example, the messages are separated by a gap, configured to assist the receiver in determining where each message begins and ends. 
Specifically, the sequence shows an initial gap 701, a first message 702, another gap 703, a second message 704, a gap 705, a third message 706, and a final gap 707. This example shows that a single gap may be “shared” by a preceding message and a succeeding message, thereby enabling the receiver to separate each message from the others. FIG. 7B is a schematic showing another exemplary embodiment of multiple messages demarked by gaps, according to some embodiments. As depicted in this non-limiting example, multiple messages are demarked at both ends by a separator, which is the same for the start and end of each message. In this case, an initial separator 721 is a gap, a short-format demodulation reference (“11-00” indicating the maximum amplitude and phase followed by the minimum amplitude and phase), and another gap. The first message 722 follows, and is separated from a second message 724 by another identical separator 723. A final separator 725 then follows. Since the ends of each message are unambiguously demarked by the characteristic separator pattern of gap-demod-gap, there is no need to indicate which one represents the start and end of the message. If a two-element short-format demodulation reference is used, as shown, then each separator occupies just four resource elements while providing a readily detected start and end of each message as well as a demodulation reference applicable to both of the adjacent messages. FIG. 8 is a schematic showing an exemplary embodiment of a resource grid including low-complexity downlink messages demarked by start-codes and end-codes, according to some embodiments. As depicted in this non-limiting example, a resource grid 801 includes subcarriers 802 and symbol times 803 and three messages. A first message 805 is frequency-spanning, demarked by a gap at the start and end. The receiver can readily find the message 805 by monitoring the gaps in incoming data. 
The message includes a length field “L”804indicating the length of the message, to further assist the receiver in determining the ends of the message. The second message806is also frequency-spanning, and rolls over into the next symbol time due to its length being larger than the allocated range of subcarriers. The second message806is preceded by a start-code consisting of a gap and a short-form demodulation reference showing the maximum phase level of the modulation scheme, followed by the minimum phase level, assuming QPSK modulation in this case. The second message806is then followed by an end-code including a short-form demodulation reference in which the minimum phase level is followed by the maximum phase level, and then a final gap. The receiver can locate the second message806according to the distinctive start-code and end-code. A third message807is time-spanning, demarked by a gap at the start and end, and also isolated by blank subcarriers808,809on both sides of each message element, including the gaps. The blank subcarriers808,809may assist the receiver in determining the message elements by removing the closest potential interference signals. Reduced-capability receivers that may struggle to separate the subcarriers may benefit from the additional isolation thereby provided. In some embodiments, a time-spanning message that exceeds the size of a slot may continue, in the next slot on the same subcarrier, continuing at the next symbol time scheduled for downlink or unknown (that is, not scheduled for uplink or DMRS). In this way, by continuing across multiple slots or subframes, arbitrarily long time-spanning messages may be accommodated, a benefit to low-complexity receivers with minimal communication needs. FIG.9is a schematic showing an exemplary embodiment of demarcation types according to message types, according to some embodiments.
As depicted in this non-limiting example900, certain configurations of start-codes902and end-codes905may be adapted to the message type901. Also shown is an optional address field903. For a downlink message906, the start code may be a gap-11-00, that is, a resource element with little or no signal therein, followed by a short-form demodulation reference exhibiting the maximum amplitude and phase followed by the minimum amplitude and phase. If the downlink message includes an address, it is the RNTI or other identification of the recipient user device. For the end-code, a pattern distinct from the start-code may be used, such as the minimum amplitude and phase, then the maximum amplitude and phase, and finally a gap. In the depicted example, the same end-code is used for three of the message types906,907,908. For an uplink message907, a start-code may be used with a pattern different from that of the downlink906. For example, the uplink pattern may be a gap followed by an element with the minimum amplitude and maximum phase levels, followed by the maximum amplitude and minimum phase, as shown. If an address is included, it is the identification of the transmitting user device, thereby enabling the base station to determine which user device is transmitting the message. The end-code is again shown as the minimum amplitude and phase, then a demodulation reference with the maximum amplitude and phase, and a gap. For a sidelink message908, the start code may be yet another pattern, such as a gap, maximum amplitude with minimum phase, then minimum amplitude with maximum phase as shown. The end-code is the same as the others. If an address is provided, it is the intended recipient (target vehicle) of the sidelink message. In addition, the address of the transmitting vehicle may be included in the message itself. An advantage of the message versions906,907,908may be that the codes function with modulation schemes that do not have amplitude modulation, such as QPSK. 
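One way to model the FIG.9 demarcation chart is as a lookup table. The two-character strings below are a purely illustrative textual encoding ("11" meaning maximum amplitude and phase, "00" minimum, "--" a gap); the disclosure does not prescribe this representation:

```python
GAP = "--"  # a resource element with little or no signal

# Start-codes per message type, following the FIG.9 chart.
START_CODES = {
    "downlink": (GAP, "11", "00"),
    "uplink":   (GAP, "01", "10"),
    "sidelink": (GAP, "10", "01"),
}
END_CODE = ("00", "11", GAP)  # shared by the three message types shown

def classify_start(elements):
    """Return the message type whose start-code matches the first three
    resource elements, or None if no pattern matches."""
    head = tuple(elements[:3])
    for msg_type, code in START_CODES.items():
        if head == code:
            return msg_type
    return None

def qpsk_view(code):
    """Project an amplitude-phase code onto QPSK by discarding the
    amplitude numeral (the first character) of each non-gap element."""
    return tuple(el if el == GAP else el[1] for el in code)
```

Under the QPSK projection, the downlink and uplink start-codes collapse to the same pattern, while the sidelink start-code and end-code share a demodulation reference and differ only in gap position, matching the behavior described for QPSK operation.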
For example, with QPSK, the first numeral in each amplitude-phase code in the chart may be ignored. Then the start-code for the uplink and downlink becomes a gap followed by the maximum phase, followed by the minimum phase, while the end-code becomes the minimum phase, followed by the maximum phase, and then finally a gap. Thus the start-codes and end-codes are distinct with either modulation scheme, as desired. The codes for QPSK are the same for uplink907and downlink906, but that should be no problem because the base station surely knows whether it is transmitting or receiving the message. For sidelink908, the start-code and end-code have the same effective demodulation reference in QPSK, which is the minimum phase followed by maximum phase, but the position of the gap is opposite for the start-code and end-code in sidelink, and therefore the vehicles can determine message boundaries by that difference. Also shown is a downlink message909with identification codes as the demarcation instead of gaps and demodulation references. The base station may include the identification code, such as one of the RNTI codes, prepended to the message, to assist the user device in finding and localizing its downlink messages. Alternatively, or in addition, the base station may include the identification code at the end of the message, and the receiver may thereby determine the end of the message and, working backwards, determine each of the message elements according to a scrambled error-detection code, for example. FIG.10is a schematic showing an exemplary embodiment of user request message specifying start-code and end-code demarcations, according to some embodiments. As depicted in this non-limiting example, a user device can transmit a message to a base station requesting that downlink messages intended for that user device include prepended and/or appended demarcations of various types. 
The depicted message includes a user identification field1021, a message type field1022, a selection1023for placing a gap at the start and/or end of each downlink message, another selection1024for placing a demodulation reference at the start and/or end of each message, yet another selection1025for placing the identification code of the intended recipient in the start and/or end of each message, a selection1026for indicating the length of the message at the start, a DCI field1027, and various flags1028. The user identification field1021may include the C-RNTI1031of the user device or its MAC address or a self-selected code or other identification code1032. The message type field1022may be configured to indicate1033that the message is an accommodation request for downlink demarcations of the specified type, and implicitly accepts the reduced quality of service necessitated by the requested accommodations. The gaps selection1023indicates whether the user device requests a gap at the start1034of the message, or the end1035, or both. In some embodiments, a selection field includes two bits, and therefore can encode one of four options, such as: no gaps, a gap only at start, a gap only at end, or gaps at both ends of the message. The demodulation selection1024indicates whether the user device requests placement of a demodulation reference at the start1036or end1037of the message, or both. The ID code selection1025indicates whether the user device requests that its identification code be placed at the start1038or end1039of the message, or both. The length field1026indicates whether the receiver requests that the length of the message1041be included in the leading resource elements of the message. The DCI field1027indicates whether the user device requests that the base station transmit a DCI message1042before each data message. 
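The two-bit selection fields described above can be illustrated with a simple bit-packing sketch; the field layout, ordering, and widths here are assumptions for illustration only, not the disclosed message format:

```python
# Each 2-bit selection encodes one of four options:
# 0 = neither end, 1 = start only, 2 = end only, 3 = both ends.
NONE, START_ONLY, END_ONLY, BOTH = range(4)

def pack_request(gaps, demod, id_code, length, dci):
    """Pack the demarcation selections into one integer: three 2-bit
    fields (gaps, demod reference, ID code) and two 1-bit flags
    (length field requested, DCI requested)."""
    return (gaps | (demod << 2) | (id_code << 4)
            | (int(length) << 6) | (int(dci) << 7))

def unpack_request(word):
    """Recover the individual selections from the packed byte."""
    return (word & 3, (word >> 2) & 3, (word >> 4) & 3,
            bool((word >> 6) & 1), bool((word >> 7) & 1))
```

For example, a user device requesting gaps at both ends, a demodulation reference at the start only, its ID code at the end only, a length field, and no DCI messages would pack those five selections into a single byte.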
In some embodiments, when a user device has a small search space for both control and data messages, and has requested demarcations including identification, then the receiver can readily detect its downlink data messages without the need for a downlink control message. The scheduling message is then redundant. Eliminating the unnecessary DCI message reduces latency and avoids generating radio background in busy traffic. (However, if the base station plans to transmit a downlink data message using an unexpected format or modulation or other parameter, different from the standard values expected by the user device, the base station may transmit a DCI message warning the user device of that fact even when the user device has declined DCI messages in the accommodation request.) The flags1028field indicates further options1040besides those listed, such as whether the gaps contain reduced signal or none at all, whether the demodulation references are short-form demodulation references, and what type of identification codes are to be used. FIG.11is a flowchart showing an exemplary embodiment of a procedure for transmitting a downlink message including start-code and end-code demarcations, according to some embodiments. As depicted in this non-limiting example, at1101a user device transmits a request message to a base station specifying which types of demarcations are desired for its downlink messages, such as the presence or absence of a gap, presence or absence of a demodulation reference, or presence or absence of an identification code at the start or end of each downlink message, and whether a DCI message is requested before each downlink data message, for example. At1102, the base station receives the accommodation request message and stores the selections in its memory or equivalent media. At1103, the user device monitors the downlink channels to detect any messages intended for that user device, according to the requested start-code or end-code or both.
At a later time, at1104, the base station has a message to transmit to the user device, and checks the stored parameters to determine which demarcation types were requested. Then at1105, the base station transmits the message, with the start-code prepended, the length field included, and the end-code appended, as requested. At1106, the user device detects the start-code, reads the length field, and locates the end-code. The user device demodulates and decodes and unscrambles the intervening message, verifies that the message is intended for that user device, and then sends an acknowledgement to the base station. Thus the user device, in cooperation with the base station, has detected and localized and extracted its message from the stream of downlink information, without performing a blind search and other time-consuming steps that would be required without the demarcations. To summarize, a receiver can request that communication to it be configured in a low-complexity format that indicates the beginning and ending of each message by a particular start-code and end-code, and/or displays the intended recipient's address (optionally in plain-text), thereby allowing the receiver to process messages that are intended for it and to ignore all other messages, without performing complex and energy-consuming blind searches to find and identify each of its downlink messages. Further examples and formats, for both time-spanning and frequency-spanning messages, in downlink, uplink, and sidelink communications, are also provided. The wireless embodiments of this disclosure may be aptly suited for cloud backup protection, according to some embodiments. Furthermore, the cloud backup can be provided with cyber-security, such as blockchain, to lock or protect data, thereby preventing malevolent actors from making changes.
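Returning to the FIG.11 procedure, the detection at step1106 (find the start-code, read the length field, locate the message) might be sketched as follows, using a hypothetical encoding in which "--" denotes a gap and "11"/"00" denote maximum/minimum amplitude-phase elements:

```python
GAP = "--"
START_CODE = (GAP, "11", "00")  # gap, then short-form demod reference

def extract_message(stream):
    """Scan a downlink stream for the start-code, read the length field
    that follows it, and return the message elements it delimits."""
    n = len(START_CODE)
    for i in range(len(stream) - n):
        if tuple(stream[i:i + n]) == START_CODE:
            length = stream[i + n]        # length field "L"
            start = i + n + 1
            return stream[start:start + length]
    return None  # no start-code found in this stream

# start-code, length field of 3, three message elements, end-code
stream = ["x", GAP, "11", "00", 3, "m1", "m2", "m3", "00", "11", GAP]
```

The real receiver would additionally demodulate, descramble, and verify the intervening elements; this sketch shows only the localization step that replaces a blind search.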
The cyber-security may thereby avoid changes that, in some applications, could result in hazards including lethal hazards, such as in applications related to traffic safety, electric grid management, law enforcement, or national security. In some embodiments, non-transitory computer-readable media may include instructions that, when executed by a computing environment, cause a method to be performed, the method according to the principles disclosed herein. In some embodiments, the instructions (such as software or firmware) may be upgradable or updatable, to provide additional capabilities and/or to fix errors and/or to remove security vulnerabilities, among many other reasons for updating software. In some embodiments, the updates may be provided monthly, quarterly, annually, every 2 or 3 or 4 years, or upon other interval, or at the convenience of the owner, for example. In some embodiments, the updates (especially updates providing added capabilities) may be provided on a fee basis. The intent of the updates may be to cause the updated software to perform better than previously, and to thereby provide additional user satisfaction. The systems and methods may be fully implemented in any number of computing devices. Typically, instructions are laid out on computer readable media, generally non-transitory, and these instructions are sufficient to allow a processor in the computing device to implement the method of the invention. The computer readable medium may be a hard drive or solid state storage having instructions that, when run, or sooner, are loaded into random access memory. Inputs to the application, e.g., from the plurality of users or from any one user, may be by any number of appropriate computer input devices. For example, users may employ vehicular controls, as well as a keyboard, mouse, touchscreen, joystick, trackpad, other pointing device, or any other such computer input device to input data relevant to the calculations. 
Data may also be input by way of one or more sensors on the robot, an inserted memory chip, hard drive, flash drives, flash memory, optical media, magnetic media, or any other type of file-storing medium. The outputs may be delivered to a user by way of signals transmitted to robot steering and throttle controls, a video graphics card or integrated graphics chipset coupled to a display that may be seen by a user. Given this teaching, any number of other tangible outputs will also be understood to be contemplated by the invention. For example, outputs may be stored on a memory chip, hard drive, flash drives, flash memory, optical media, magnetic media, or any other type of output. It should also be noted that the invention may be implemented on any number of different types of computing devices, e.g., embedded systems and processors, personal computers, laptop computers, notebook computers, net book computers, handheld computers, personal digital assistants, mobile phones, smart phones, tablet computers, and also on devices specifically designed for this purpose. In one implementation, a user of a smart phone or WiFi-connected device downloads a copy of the application to their device from a server using a wireless Internet connection. An appropriate authentication procedure and secure transaction process may provide for payment to be made to the seller. The application may download over the mobile connection, or over the WiFi or other wireless network connection. The application may then be run by the user. Such a networked system may provide a suitable computing environment for an implementation in which a plurality of users provide separate inputs to the system and method. It is to be understood that the foregoing description is not a definition of the invention but is a description of one or more preferred exemplary embodiments of the invention.
The invention is not limited to the particular embodiment(s) disclosed herein, but rather is defined solely by the claims below. Furthermore, the statements contained in the foregoing description relate to particular embodiments and are not to be construed as limitations on the scope of the invention or on the definition of terms used in the claims, except where a term or phrase is expressly defined above. Various other embodiments and various changes and modifications to the disclosed embodiment(s) will become apparent to those skilled in the art. For example, the specific combination and order of steps is just one possibility, as the present method may include a combination of steps that has fewer, greater, or different steps than that shown here. All such other embodiments, changes, and modifications are intended to come within the scope of the appended claims. As used in this specification and claims, the terms “for example”, “e.g.”, “for instance”, “such as”, and “like” and the terms “comprising”, “having”, “including”, and their other verb forms, when used in conjunction with a listing of one or more components or other items, are each to be construed as open-ended, meaning that the listing is not to be considered as excluding other additional components or items. Other terms are to be construed using their broadest reasonable meaning unless they are used in a context that requires a different interpretation.
DESCRIPTION OF THE EMBODIMENTS The technical scheme of the present disclosure is described below in further detail in conjunction with the drawings. It should be noted that the embodiments in the present disclosure and the characteristics of the embodiments may be arbitrarily combined if there is no conflict. Embodiment 1 Embodiment 1 illustrates an example of a flowchart of the transmission of first information and a first signaling according to one embodiment of the present disclosure, as shown inFIG.1. InFIG.1, each box represents one step. In Embodiment 1, the UE in the present disclosure first receives first information and then monitors a first signaling in M RE sets, wherein the first information is used for determining the M RE sets, a monitor of the first signaling assumes that P modulation symbol groups are one-to-one mapped to P RE sets, the P RE sets are P RE sets of the M RE sets, and one blind detection for the first signaling is performed on the P RE sets; modulation symbols obtained by modulating the bits in a first bit sequence constitute a first modulation symbol set sequentially, wherein the bits in the first bit sequence are sequentially arranged according to an output of a channel encoder; the first modulation symbol set is divided into the P modulation symbol groups successively from the front to the rear, and each modulation symbol group of the P modulation symbol groups consists of X modulation symbols; and relative positions of the P RE sets in frequency domain are related to at least one of {M, X}, wherein M is a positive integer, P is a positive integer not greater than M, and X is a positive integer. In one embodiment, the first information is high-layer information. In one embodiment, the first information is carried through a Radio Resource Control (RRC) signaling. In one embodiment, the first information is one Information Element (IE) of the RRC signaling. In one embodiment, the first information is physical layer information. 
In one embodiment, the first information is transmitted through Downlink Control Information (DCI). In one embodiment, the first signaling is a physical layer signaling. In one embodiment, the first signaling is a DCI. In one embodiment, the first signaling is one field of the DCI. In one embodiment, the first signaling is transmitted through a PDCCH. In one embodiment, the first signaling is transmitted through a New Radio Physical Downlink Control Channel (NR-PDCCH). In one embodiment, each RE set of the M RE sets includes X REs. In one embodiment, X is equal to one of {18, 27, 54, 20, 30, 60}. In one embodiment, X is related to the number of multicarrier symbols occupied by the M RE sets in time domain. In one embodiment, none of the M RE sets includes a Reference Signal (RS) RE. In one embodiment, each RE set of the M RE sets includes the same number of RS REs. In one embodiment, any two RE sets of the M RE sets are orthogonal, wherein two RE sets being orthogonal means that there is no RE belonging to both of the two RE sets. In one embodiment, each RE set of the M RE sets consists of one of {2, 3, 6} REGs, wherein each REG occupies one multicarrier symbol in time domain and occupies one PRB in frequency domain. In one embodiment, each RE in the M RE sets occupies one multicarrier symbol in time domain and occupies one subcarrier in frequency domain. In one embodiment, the M RE sets occupy one of {1, 2, 3} multicarrier symbols in time domain. In one embodiment, the M RE sets are discrete in frequency domain. In one embodiment, the M RE sets are consecutive in frequency domain. In one embodiment, the first information is used by the UE to determine the M RE sets. In one embodiment, the first information indicates the M RE sets. In one embodiment, the blind detection for the first signaling on the P RE sets is realized through the validation of a Radio Network Temporary Identity (RNTI). In one embodiment, M is greater than 1 and P is greater than 1.
In one embodiment, all modulation symbols in the first modulation symbol set are based on the same modulation scheme. In one embodiment, each modulation symbol in the first modulation symbol set is modulated based on Quadrature Phase Shift Keying (QPSK). In one embodiment, each modulation symbol in the first modulation symbol set is modulated based on 16 Quadrature Amplitude Modulation (QAM). In one embodiment, the channel encoder is based on polar coding. In one embodiment, the channel encoder is based on convolutional coding. In one embodiment, the channel encoder is based on a code rate of ⅓ tail-biting convolutional coding. In one embodiment, the bits in the first bit sequence are an output of rate matching of the output of the channel encoder. In one subembodiment, the rate matching includes one of {Repetition, Puncturing, Shortening}. In one embodiment, the relative positions of the P RE sets in frequency domain refer to a pattern in frequency domain of the P RE sets sequentially corresponding to the P modulation symbol groups. In one embodiment, the relative positions of the P RE sets in frequency domain refer to distances between the positions in frequency domain of any two adjacent RE sets of the P RE sets sequentially corresponding to the P modulation symbol groups. In one embodiment, the relative positions of the P RE sets in frequency domain refer to a minimum value of the distances between the positions in frequency domain of any two adjacent RE sets of the P RE sets sequentially corresponding to the P modulation symbol groups. In one embodiment, the relative positions of the P RE sets in frequency domain refer to the interleaving depth of an interleaver, wherein the interleaver will change the index sequence of the M RE sets which are indexed by frequencies, and the interleaving depth refers to a minimum distance of two adjacent elements after the interleaving. In one embodiment, the P RE sets are consecutive in frequency domain. 
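A sketch of the symbol grouping and RE-set selection of Embodiment 1: the first modulation symbol set is divided front-to-rear into P groups of X symbols each, and P of the M frequency-indexed RE sets are chosen to carry them. The even spacing below is only one assumed interleaving; the disclosure states only that the relative positions relate to at least one of {M, X}:

```python
def split_into_groups(symbols, X):
    """Divide the modulation symbol sequence successively from front to
    rear into P groups of X symbols each (P = len(symbols) // X)."""
    assert len(symbols) % X == 0, "sequence length must be a multiple of X"
    return [symbols[i:i + X] for i in range(0, len(symbols), X)]

def select_re_sets(M, P):
    """Pick P of the M frequency-indexed RE sets so that adjacent groups
    are spaced roughly M // P sets apart (one assumed interleaving)."""
    step = M // P
    return [k * step for k in range(P)]

groups = split_into_groups(list(range(12)), X=4)  # P = 3 groups of X = 4
re_sets = select_re_sets(M=6, P=3)                # frequency indices of the P sets
```

Here the minimum frequency-domain distance between adjacent selected RE sets (the interleaving depth, in the sense described above) is M // P, which depends on M, consistent with the relative positions being related to at least one of {M, X}.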
In one embodiment, the P RE sets are discrete in frequency domain. Embodiment 2 Embodiment 2 illustrates an example of a diagram of a network architecture according to the present disclosure, as shown inFIG.2.FIG.2is a diagram illustrating a system network architecture200of NR 5G, Long-Term Evolution (LTE), Long-Term Evolution Advanced (LTE-A). The NR 5G or LTE network architecture200may be called an Evolved Packet System (EPS)200. The EPS200may include one or more UEs201, an NG-RAN202, an Evolved Packet Core/5G-Core Network (EPC/5G-CN)210, a Home Subscriber Server (HSS)220and an Internet Service230. Herein, the EPS may be interconnected with other access networks. For simplicity of description, these entities/interfaces are not shown. As shown inFIG.2, the EPS provides packet switching services. Those skilled in the art will readily understand that various concepts presented throughout the present disclosure can be extended to networks providing circuit switching services or other cellular networks. The NG-RAN includes an NR node B (gNB)203and other evolved nodes B (gNBs)204. The gNB203provides UE201oriented user plane and control plane protocol terminations. The gNB203may be connected to other gNBs204via an Xn interface (for example, backhaul). The gNB203may be called a base station, a base transceiver station, a radio base station, a radio transceiver, a transceiver function, a Basic Service Set (BSS), an Extended Service Set (ESS), a TRP or other appropriate terms. The gNB203provides an access point of the EPC/5G-CN210for the UE201.
Examples of UE201include cellular phones, smart phones, Session Initiation Protocol (SIP) phones, laptop computers, Personal Digital Assistants (PDAs), Satellite Radios, Global Positioning Systems (GPSs), multimedia devices, video devices, digital audio players (for example, MP3 players), cameras, game consoles, unmanned aerial vehicles, air vehicles, narrow-band physical network equipment, machine-type communication equipment, land vehicles, automobiles, wearable equipment, or any other devices having similar functions. Those skilled in the art may also call the UE201a mobile station, a subscriber station, a mobile unit, a subscriber unit, a wireless unit, a remote unit, a mobile device, a wireless device, a radio communication device, a remote device, a mobile subscriber station, an access terminal, a mobile terminal, a wireless terminal, a remote terminal, a handset, a user proxy, a mobile client, a client or other appropriate terms. The gNB203is connected to the EPC/5G-CN210via an S1/NG interface. The EPC/5G-CN210includes an MME/AMF/UPF211, other MMEs/AMFs/UPFs214, a Service Gateway (S-GW)212and a Packet Data Network Gateway (P-GW)213. The MME/AMF/UPF211is a control node for processing a signaling between the UE201and the EPC/5G-CN210. Generally, the MME/AMF/UPF211provides bearer and connection management. All user Internet Protocol (IP) packets are transmitted through the S-GW212. The S-GW212is connected to the P-GW213. The P-GW213provides UE IP address allocation and other functions. The P-GW213is connected to the Internet service230. The Internet service230includes IP services corresponding to operators, specifically including Internet, Intranet, IP Multimedia Subsystems (IP IMSs) and Packet switching Streaming Services (PSSs). In one embodiment, the UE201corresponds to the UE of the present disclosure. In one embodiment, the gNB203corresponds to the base station of the present disclosure.
In one embodiment, the UE201supports blind detections of downlink control information. In one embodiment, the UE201supports an interleaving transmission of a PDCCH. In one embodiment, the gNB203supports an interleaving transmission of a PDCCH. Embodiment 3 Embodiment 3 is a diagram illustrating an embodiment of a radio protocol architecture of a user plane and a control plane according to the present disclosure, as shown inFIG.3.FIG.3is a diagram illustrating an embodiment of a radio protocol architecture of a user plane and a control plane. InFIG.3, the radio protocol architecture of a UE and a gNB is represented by three layers, which are a layer 1, a layer 2 and a layer 3 respectively. The layer 1 (L1)301is the lowest layer and performs signal processing functions of each PHY layer. The layer 1 is called PHY301in this paper. The layer 2 (L2)305is above the PHY301, and is in charge of the link between the UE and the gNB via the PHY301. In the user plane, the L2305includes a Medium Access Control (MAC) sublayer302, a Radio Link Control (RLC) sublayer303, and a Packet Data Convergence Protocol (PDCP) sublayer304. All the three sublayers terminate at the gNB of the network side. Although not described inFIG.3, the UE may include several higher layers above the L2305, such as a network layer (i.e. IP layer) terminated at a P-GW of the network side and an application layer terminated at the other side of the connection (i.e. a peer UE, a server, etc.). The PDCP sublayer304provides multiplexing among variable radio bearers and logical channels. The PDCP sublayer304also provides a header compression for a higher-layer packet so as to reduce a radio transmission overhead. The PDCP sublayer304provides security by encrypting a packet and provides support for UE handover between gNBs. 
The RLC sublayer303provides segmentation and reassembly of higher-layer packets, retransmission of lost packets, and reordering of packets so as to compensate for the out-of-order reception caused by Hybrid Automatic Repeat Request (HARQ). The MAC sublayer302provides multiplexing between logical channels and transport channels. The MAC sublayer302is also responsible for allocating between UEs various radio resources (i.e., resource blocks) in a cell. The MAC sublayer302is also in charge of HARQ operation. In the control plane, the radio protocol architecture of the UE and the gNB is almost the same as the radio protocol architecture in the user plane on the PHY301and the L2305, but there is no header compression for the control plane. The control plane also includes a Radio Resource Control (RRC) sublayer306in the layer 3 (L3). The RRC sublayer306is responsible for acquiring radio resources (i.e. radio bearer) and configuring the lower layers using an RRC signaling between the gNB and the UE. In one embodiment, the radio protocol architecture inFIG.3is applicable to the UE in the present disclosure. In one embodiment, the radio protocol architecture inFIG.3is applicable to the base station device in the present disclosure. In one embodiment, the first information in the present disclosure is generated by the RRC306. In one embodiment, the first signaling in the present disclosure is generated by the PHY301. In one embodiment, the second information in the present disclosure is generated by the RRC306. In one embodiment, the interleaving in the present disclosure is performed by the PHY301. Embodiment 4 Embodiment 4 illustrates a diagram of an evolved node B and a given UE according to the present disclosure, as shown inFIG.4.FIG.4is a block diagram of a gNB410in communication with a UE450in an access network. In downlink transmission, a higher-layer packet coming from the core network is provided to a controller/processor440.
The controller/processor440provides functions of a layer 2. In downlink transmission, the controller/processor440provides header compression, encrypting, packet segmentation and reordering, multiplexing between a logical channel and a transport channel, and radio resource allocation for the UE450based on various priorities. The controller/processor440is also in charge of HARQ operation, retransmission of a lost packet, and a signaling to the UE450. A transmitting processor415performs signal processing functions used for the layer 1 (that is, PHY). The signal processing functions include encoding and interleaving, so as to provide Forward Error Correction (FEC), and mapping to the signal constellations corresponding to each modulation scheme (i.e., BPSK, QPSK) for the UE450side. The encoded and modulated signals are divided into parallel streams. Each of the parallel streams is mapped into multi-carrier subcarriers and/or multi-carrier symbols; after the interleaving method of the present disclosure is applied, the modulated symbols are multiplexed with a reference signal (i.e., pilot frequency) in time domain and/or frequency domain, and then they are assembled through Inverse Fast Fourier Transform (IFFT) to generate a physical channel carrying time-domain multi-carrier symbol streams. The parallel streams are subjected to spatial precoding to generate multiple spatial streams. Each spatial stream is provided to different antennas420via the transmitter416. Every transmitter416modulates a corresponding spatial stream onto an RF (Radio Frequency) carrier for transmission. At the UE450side, every receiver456receives a signal via a corresponding antenna460. Every receiver456recovers information modulated onto the RF carrier and provides the information to a receiving processor452. The receiving processor452performs signal receiving processing functions of the layer 1.
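The IFFT/FFT round trip at the heart of this transmit/receive chain can be sketched with a naive DFT (standing in for the FFT, which computes the same transform more efficiently); the four-subcarrier QPSK example is illustrative only:

```python
import cmath

def idft(freq_bins):
    """Inverse DFT: map per-subcarrier modulation symbols to one
    time-domain multi-carrier symbol (the transmitter's IFFT step)."""
    n = len(freq_bins)
    return [sum(freq_bins[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]

def dft(samples):
    """Forward DFT: the receiver-side FFT recovering the modulation
    symbol carried on each subcarrier."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# Four QPSK constellation points, one per subcarrier.
qpsk = [(1 + 1j), (-1 + 1j), (-1 - 1j), (1 - 1j)]
recovered = dft(idft(qpsk))  # should match qpsk up to rounding error
```

In a real chain the recovered symbols would be equalized against the reference signals and mapped to soft decisions for the decoder; this sketch shows only that the FFT at the receiver inverts the IFFT at the transmitter.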
The receiving processor452performs spatial processing on the information to recover any spatial stream targeting the UE450. The receiving processor452then converts the multi-carrier symbol stream from time domain into frequency domain using FFT. A frequency domain signal includes a single multi-carrier symbol stream of each subcarrier used for multi-carrier signals. The symbol and reference signal on each subcarrier are recovered and demodulated by determining the most likely constellation point transmitted by the gNB410, and a soft decision is generated. The soft decisions are then de-interleaved and decoded so as to recover the original data and control signal transmitted by the gNB410on the physical channel. The data and control signal are then provided to a controller/processor490. The controller/processor490performs functions of the layer 2. The controller/processor can be connected to a memory480that stores program code and data. The memory480is a computer readable medium. In one embodiment, the UE450includes at least one processor and at least one memory. The at least one memory includes computer program codes. The at least one memory and the computer program codes are configured to be used in collaboration with the at least one processor.
The UE450at least receives first information and monitors a first signaling in M RE sets, wherein the first information is used for determining the M RE sets, a monitor of the first signaling assumes that P modulation symbol groups are one-to-one mapped to P RE sets, the P RE sets are P RE sets of the M RE sets, and one blind detection for the first signaling is performed on the P RE sets; modulation symbols obtained by modulating the bits in a first bit sequence constitute a first modulation symbol set sequentially, wherein the bits in the first bit sequence are sequentially arranged according to an output of a channel encoder; the first modulation symbol set is divided into the P modulation symbol groups successively from the front to the rear, and each modulation symbol group of the P modulation symbol groups consists of X modulation symbols; and relative positions of the P RE sets in frequency domain are related to at least one of {M, X}, wherein M is a positive integer, P is a positive integer not greater than M, and X is a positive integer. In one embodiment, the UE450includes a memory that stores a computer readable instruction program. The computer readable instruction program generates an action when executed by at least one processor. 
The action includes: receiving first information and monitoring a first signaling in M RE sets; wherein the first information is used for determining the M RE sets, a monitor of the first signaling assumes that P modulation symbol groups are one-to-one mapped to P RE sets, the P RE sets are P RE sets of the M RE sets, and one blind detection for the first signaling is performed on the P RE sets; modulation symbols obtained by modulating the bits in a first bit sequence constitute a first modulation symbol set sequentially, wherein the bits in the first bit sequence are sequentially arranged according to an output of a channel encoder; the first modulation symbol set is divided into the P modulation symbol groups successively from the front to the rear, and each modulation symbol group of the P modulation symbol groups consists of X modulation symbols; and relative positions of the P RE sets in frequency domain are related to at least one of {M, X}, wherein M is a positive integer, P is a positive integer not greater than M, and X is a positive integer. In one embodiment, the gNB410includes at least one processor and at least one memory. The at least one memory includes computer program codes. The at least one memory and the computer program codes are configured to be used in collaboration with the at least one processor. 
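The UE-side processing above divides the first modulation symbol set, successively from the front to the rear, into P groups of X modulation symbols each. The following is a minimal sketch of that grouping; the values of M, X and P are assumptions chosen only for illustration (the disclosure requires P ≤ M).

```python
# Hypothetical sketch of the symbol grouping: the first modulation symbol set
# (modulated in the order output by the channel encoder) is split front-to-rear
# into consecutive groups of X modulation symbols.
def divide_into_groups(symbol_set, X):
    """Split a modulation symbol sequence into consecutive groups of X."""
    assert len(symbol_set) % X == 0
    return [symbol_set[i:i + X] for i in range(0, len(symbol_set), X)]

M = 8            # number of configured RE sets (assumed value)
X = 4            # modulation symbols per group (assumed value)
P = 6            # number of groups carrying the first signaling (P <= M)
first_modulation_symbol_set = list(range(P * X))  # placeholder symbols

groups = divide_into_groups(first_modulation_symbol_set, X)
# The monitor assumes these P groups are one-to-one mapped to P RE sets
assert len(groups) == P and all(len(g) == X for g in groups)
```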
The gNB410at least transmits first information and transmits a first signaling in M RE sets, wherein the first information is used for determining the M RE sets, a monitor of the first signaling assumes that P modulation symbol groups are one-to-one mapped to P RE sets, the P RE sets are P RE sets of the M RE sets, and one blind detection for the first signaling is performed on the P RE sets; modulation symbols obtained by modulating the bits in a first bit sequence constitute a first modulation symbol set sequentially, wherein the bits in the first bit sequence are sequentially arranged according to an output of a channel encoder; the first modulation symbol set is divided into the P modulation symbol groups successively from the front to the rear, and each modulation symbol group of the P modulation symbol groups consists of X modulation symbols; and relative positions of the P RE sets in frequency domain are related to at least one of {M, X}, wherein M is a positive integer, P is a positive integer not greater than M, and X is a positive integer. In one embodiment, the gNB410includes a memory that stores a computer readable instruction program. The computer readable instruction program generates an action when executed by at least one processor. 
The action includes: transmitting first information and transmitting a first signaling in M RE sets, wherein the first information is used for determining the M RE sets, a monitor of the first signaling assumes that P modulation symbol groups are one-to-one mapped to P RE sets, the P RE sets are P RE sets of the M RE sets, and one blind detection for the first signaling is performed on the P RE sets; modulation symbols obtained by modulating the bits in a first bit sequence constitute a first modulation symbol set sequentially, wherein the bits in the first bit sequence are sequentially arranged according to an output of a channel encoder; the first modulation symbol set is divided into the P modulation symbol groups successively from the front to the rear, and each modulation symbol group of the P modulation symbol groups consists of X modulation symbols; and relative positions of the P RE sets in frequency domain are related to at least one of {M, X}, wherein M is a positive integer, P is a positive integer not greater than M, and X is a positive integer. In one embodiment, the UE450corresponds to the UE in the present disclosure. In one embodiment, the gNB410corresponds to the base station in the present disclosure. In one embodiment, at least one of the receiving processor452and the controller/processor490is used for monitoring the first signaling in the present disclosure. In one embodiment, the controller/processor490is used for receiving the first information in the present disclosure. In one embodiment, the controller/processor490is used for receiving the second information in the present disclosure. Embodiment 5 Embodiment 5 illustrates an example of a flowchart of the transmission of a radio signal according to one embodiment of the present disclosure, as shown inFIG.5. InFIG.5, the base station N1is a maintenance base station for a serving cell of the UE U2. 
The base station N1transmits first information in S11, transmits second information in S12, and transmits a first signaling in M RE sets in S13. The UE U2receives the first information in S21, receives the second information in S22, and receives the first signaling in the M RE sets in S23. In embodiment 5, the first information is used for determining the M RE sets, a monitor of the first signaling assumes that P modulation symbol groups are one-to-one mapped to P RE sets, the P RE sets are P RE sets of the M RE sets, and one blind detection for the first signaling is performed on the P RE sets; modulation symbols obtained by modulating the bits in a first bit sequence constitute a first modulation symbol set sequentially, wherein the bits in the first bit sequence are sequentially arranged according to an output of a channel encoder; the first modulation symbol set is divided into the P modulation symbol groups successively from the front to the rear, and each modulation symbol group of the P modulation symbol groups consists of X modulation symbols; relative positions of the P RE sets in frequency domain are related to at least one of {M, X}, wherein M is a positive integer, P is a positive integer not greater than M, and X is a positive integer; and the second information is used for determining X. In one embodiment, the second information is high-layer information. In one embodiment, the second information is carried through an RRC signaling. In one embodiment, the second information is one IE of the RRC signaling. In one embodiment, the second information is physical layer information. In one embodiment, the second information is transmitted through a DCI. In one embodiment, the second information and the first information are transmitted as two IEs of one RRC signaling, respectively. In one embodiment, the second information and the first information are transmitted as two fields of one RRC signaling, respectively. 
In one embodiment, the second information is used by the UE to determine X. In one embodiment, the second information indicates X. Embodiment 6 Embodiment 6 illustrates an example of a diagram of an interleaving operation according to one embodiment of the present disclosure, as shown inFIG.6. InFIG.6, each small rectangle represents one element in a target interleaving matrix; each element holds one modulation symbol group, and the modulation symbol group can be one of M modulation symbol groups or one of J blank modulation symbol groups; the direction of the solid arrow represents the order of writing modulation symbol groups into the target interleaving matrix, and the direction of the dashed arrow represents the order of reading out M modulation symbol groups from the target interleaving matrix. In Embodiment 6, the monitor of the first signaling assumes that the P modulation symbol groups are P consecutive groups of M modulation symbol groups sequentially arranged, the M modulation symbol groups sequentially arranged are interleaved and subsequently constitute a first symbol group sequence sequentially, and the modulation symbol groups in the first symbol group sequence are mapped, starting from a target RE set, to the M RE sets which are arranged in the order of frequency, wherein the target RE set is one of the M RE sets, and the interleaving includes the following: the M modulation symbol groups sequentially arranged are mapped into a target interleaving matrix sequentially according to the order of rows first and then columns, as M matrix elements; and the elements in the target interleaving matrix are rearranged and then read out sequentially according to the order of columns first and then rows to obtain the first symbol group sequence. 
Herein, each modulation symbol group of the P modulation symbol groups consists of X modulation symbols, at least one of {M, X} is used for determining the number of columns of the target interleaving matrix, the monitor of the first signaling assumes that the P modulation symbol groups are one-to-one mapped to P RE sets, and relative positions of the P RE sets in frequency domain are related to the number of columns of the target interleaving matrix. In one embodiment, the number of columns of the target interleaving matrix is equal to M. In one embodiment, the number of columns of the target interleaving matrix is equal to the quotient obtained when the product of M and X is divided by 72. In one embodiment, the number of columns of the target interleaving matrix is equal to the quotient obtained when 72 is divided by X. In one embodiment, the number of columns of the target interleaving matrix is equal to the smallest positive integral power of 2 that is not greater than M. In one embodiment, the monitor of the first signaling assumes that each modulation symbol in the M modulation symbol groups and the modulation symbols in the P modulation symbol groups use the same modulation scheme. In one embodiment, the target interleaving matrix is the transposition of an initial interleaving matrix; in this case, the interleaving can be regarded as a process in which the M modulation symbol groups which are sequentially arranged are arranged into the initial interleaving matrix according to the order of columns first and then rows and then read out according to the order of rows first and then columns to obtain the first symbol group sequence. In one embodiment, at least one of {M, X} determines the number of columns of the target interleaving matrix through a given corresponding relationship. In one embodiment, at least one of {M, X} determines the number of columns of the target interleaving matrix by way of listing. 
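The row-write/column-read interleaving of Embodiment 6 can be sketched as follows. This is a minimal illustration that assumes no blank groups (J = 0) and an assumed column count C = 3; the "rearranged" step of the target interleaving matrix is simplified to a plain column-wise readout.

```python
# Sketch of the Embodiment 6 interleaver: write the M modulation symbol groups
# into the target interleaving matrix in the order of rows first and then
# columns, then read them out in the order of columns first and then rows.
def interleave(groups, num_columns):
    assert len(groups) % num_columns == 0  # J = 0 assumed here
    num_rows = len(groups) // num_columns
    # Write row by row (rows first, then columns)
    matrix = [groups[r * num_columns:(r + 1) * num_columns]
              for r in range(num_rows)]
    # Read column by column (columns first, then rows)
    return [matrix[r][c] for c in range(num_columns) for r in range(num_rows)]

groups = ['G0', 'G1', 'G2', 'G3', 'G4', 'G5']   # M = 6 symbol groups
sequence = interleave(groups, num_columns=3)    # C = 3 (assumed)
# Matrix rows are [G0 G1 G2] and [G3 G4 G5]; column-wise readout gives:
assert sequence == ['G0', 'G3', 'G1', 'G4', 'G2', 'G5']
```

The first symbol group sequence produced this way is then mapped, starting from the target RE set, onto the M RE sets arranged in the order of frequency.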
In one embodiment, at least one of {M, X} is used by the UE to determine the number of columns of the target interleaving matrix. In one embodiment, the position of the target RE set in the M RE sets is fixed. In one embodiment, the position of the target RE set in the M RE sets is changeable. In one embodiment, the position of the target RE set in the M RE sets is predetermined. In one embodiment, the position of the target RE set in the M RE sets is configurable. In one embodiment, the position of the target RE set in the M RE sets is configured through the same signaling as the first information. Embodiment 7 Embodiment 7 illustrates an example of a diagram of a relationship between M modulation symbol groups and J blank modulation symbol groups according to one embodiment of the present disclosure, as shown inFIG.7. InFIG.7, each blank rectangle represents one blank modulation symbol group of J blank modulation symbol groups, and each rectangle filled by cross lines represents one modulation symbol group of M modulation symbol groups. In Embodiment 7, a target interleaving matrix includes R rows and C columns, M modulation symbol groups and J blank modulation symbol groups together are arranged into the target interleaving matrix, the sum of M and J is equal to the product of R and C, wherein R is a positive integer, C is a positive integer, and J is a non-negative integer. In one embodiment, for a given value of C, R is the smallest positive integer that makes the product of R and C not smaller than M. In one embodiment, for a given value of R, C is the smallest positive integer that makes the product of R and C not smaller than M. In one embodiment, each modulation symbol in the J blank modulation symbol groups is NULL. In one embodiment, the J blank modulation symbol groups are concatenated to the front of the M modulation symbol groups. In one embodiment, the J blank modulation symbol groups are concatenated to the rear of the M modulation symbol groups. 
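The padding relationship of Embodiment 7 (M + J = R × C, with J a non-negative integer) can be sketched as follows, assuming C is given, R is the smallest positive integer with R × C ≥ M, and the blank (NULL) groups are concatenated to the rear.

```python
import math

# Sketch of Embodiment 7: pad the M modulation symbol groups with J blank
# modulation symbol groups so that M + J exactly fills an R-row, C-column
# target interleaving matrix.
def pad_to_matrix(groups, C):
    M = len(groups)
    R = math.ceil(M / C)       # smallest positive integer with R * C >= M
    J = R * C - M              # number of blank modulation symbol groups
    return groups + [None] * J, R, J   # None stands in for NULL symbols

padded, R, J = pad_to_matrix(['G0', 'G1', 'G2', 'G3', 'G4'], C=3)  # M = 5
assert (R, J) == (2, 1)        # 2 rows x 3 columns = 6 = 5 groups + 1 blank
assert padded[-1] is None      # blank group concatenated to the rear
```

The symmetric case, where the blanks are concatenated to the front (or placed between groups), differs only in where the `None` entries are inserted.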
In one embodiment, one blank modulation symbol group of the J blank modulation symbol groups is located between two modulation symbol groups of the M modulation symbol groups. In one embodiment, the UE in the present disclosure assumes that the modulation symbols in the J blank modulation symbol groups are invalid. In one embodiment, the UE in the present disclosure cannot make any assumption about the modulation symbols in the J blank modulation symbol groups. Embodiment 8 Embodiment 8 illustrates an example of a diagram of a target RE set according to one embodiment of the present disclosure, as shown inFIG.8.FIG.8shows four distributions (#1, #2, #3, #4) of P RE sets. The vertical axis represents frequency, each small rectangle filled by cross lines represents one RE set of P RE sets, each blank small rectangle represents one RE set other than the P RE sets in M RE sets, and each small rectangle filled by cross lines at the lowest frequency in each distribution represents a target RE set. In Embodiment 8, the target RE set is related to at least one of {a feature ID of a transmitter of a first signaling, a feature ID of a monitor of the first signaling, a time-frequency position of an RE in the M RE sets}. In one embodiment, the feature ID of the transmitter of the first signaling is a Physical Cell ID (PCID) of a cell transmitting the first signaling. In one embodiment, the feature ID of the transmitter of the first signaling is an ID of a Transmit Reception Point (TRP) transmitting the first signaling. In one embodiment, the feature ID of the transmitter of the first signaling is an ID of a base station transmitting the first signaling. In one embodiment, the feature ID of the monitor of the first signaling is an RNTI of the UE. In one embodiment, the time-frequency position of the RE in the M RE sets refers to an index of a multicarrier symbol occupied by one RE in the M RE sets. 
In one embodiment, the time-frequency position of the RE in the M RE sets refers to an index of a subcarrier occupied by one RE in the M RE sets. In one embodiment, the time-frequency position of the RE in the M RE sets refers to an index of a multicarrier symbol occupied by the RE of the lowest frequency in the M RE sets. In one embodiment, the time-frequency position of the RE in the M RE sets refers to an index of a Physical Resource Block (PRB) occupied by the RE of the lowest frequency in the M RE sets. In one embodiment, the time-frequency position of the RE in the M RE sets refers to an index of a Physical Resource Block (PRB) group occupied by the RE of the lowest frequency in the M RE sets, and the PRB group includes a positive integer number of PRBs which are consecutive in frequency domain. In one embodiment, the time-frequency position of the RE in the M RE sets refers to an index of a multicarrier symbol occupied by the RE of the highest frequency in the M RE sets. In one embodiment, the time-frequency position of the RE in the M RE sets refers to an index of a PRB occupied by the RE of the highest frequency in the M RE sets. In one embodiment, the time-frequency position of the RE in the M RE sets refers to an index of a PRB group occupied by the RE of the highest frequency in the M RE sets, and the PRB group includes a positive integer number of PRBs which are consecutive in frequency domain. In one embodiment, at least one of {the feature ID of the transmitter of the first signaling, the feature ID of the monitor of the first signaling, the time-frequency position of the RE in the M RE sets} is used to determine the target RE set. 
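The determination of the target RE set from these identifiers (Embodiment 8) can be illustrated with a toy sketch. The modulo-M combination below is purely a hypothetical formula for illustration; the disclosure only states that the target RE set is related to these quantities, not how they are combined.

```python
# Hypothetical illustration: derive the index of the target RE set among the
# M RE sets from the transmitter's feature ID (e.g., a Physical Cell ID), the
# monitor's feature ID (e.g., an RNTI), and a time-frequency position (e.g.,
# a multicarrier symbol index). The additive modulo-M rule is an assumption.
def target_re_set_index(pcid, rnti, symbol_index, M):
    return (pcid + rnti + symbol_index) % M

idx = target_re_set_index(pcid=101, rnti=0x4601, symbol_index=2, M=8)
assert 0 <= idx < 8   # always a valid index into the M RE sets
```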
In one embodiment, the target RE set is cyclically shifted in the M RE sets, and at least one of {the feature ID of the transmitter of the first signaling, the feature ID of the monitor of the first signaling, the time-frequency position of the RE in the M RE sets} is used to determine the position of the cyclic shift. Embodiment 9 Embodiment 9 illustrates an example of a diagram of a first RE set according to one embodiment of the present disclosure, as shown inFIG.9. InFIG.9, the horizontal axis represents frequency, each small rectangle filled by cross lines represents one RE in a first RE set, and each blank small rectangle represents one RE other than the first RE set. In Embodiment 9, the first RE set is one of M RE sets, the monitor of the first signaling assumes that all REs in the first RE set adopt the same precoding, and the monitor of the first signaling cannot assume that the REs in the first RE set and the REs other than the first RE set both adopt the same precoding. In one embodiment, the first RE set can be any one RE set of the M RE sets. In one embodiment, all REs in the first RE set are transmitted through one same antenna port. In one embodiment, two REs in the first RE set are transmitted through different antenna ports. In one embodiment, the REs in the first RE set and a reference signal of the antenna port corresponding to the first RE set adopt the same precoding. In one embodiment, the precoding is realized in digital domain. In one embodiment, the precoding is realized in analog domain. In one embodiment, the monitor of the first signaling assumes that one same channel estimation is applicable to all the REs in the first RE set, and the monitor of the first signaling cannot assume that one same channel estimation is applicable to both the REs in the first RE set and the REs other than the first RE set. 
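The precoding assumption of Embodiment 9 has a practical consequence for channel estimation: the monitor may apply one channel estimate to every RE inside the first RE set, but must estimate each RE set independently. A toy numerical sketch (all values illustrative, ideal noise-free channel assumed):

```python
import numpy as np

# Toy model: each RE set has its own (precoded) channel coefficient, shared
# by every RE within that set, per the Embodiment 9 assumption.
rng = np.random.default_rng(1)
num_re_sets, res_per_set = 4, 6   # assumed sizes for illustration
h = rng.standard_normal(num_re_sets) + 1j * rng.standard_normal(num_re_sets)

# Every RE in a set sees the same channel coefficient
received = np.repeat(h, res_per_set).reshape(num_re_sets, res_per_set)

# Estimate once per set (e.g., from a reference-signal RE) and equalize all
# REs of that set with that single estimate
estimates = received[:, 0]
equalized = received / estimates[:, None]
assert np.allclose(equalized, 1.0)   # one estimate suffices within a set
```

Applying the estimate of one RE set to a different RE set would fail here, mirroring the statement that the monitor cannot assume the same precoding across RE-set boundaries.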
Embodiment 10 Embodiment 10 illustrates an example of a diagram of a blind detection on P RE sets according to one embodiment of the present disclosure, as shown inFIG.10. InFIG.10, the vertical axis represents frequency, each small rectangle with a bold border represents an RE set, each small rectangle filled by cross lines with a bold border represents one RE set of P RE sets, each rectangle with a dashed border represents one blind detection of a first signaling, the rectangle filled by cross lines with a dashed border represents one blind detection of the first signaling on P RE sets. In Embodiment 10, a maximum of Y detections are performed for the first signaling in the M RE sets, at least one detection of the Y detections is performed on the P RE sets, Y is a positive integer, and the first signaling is a physical layer signaling. In one embodiment, each detection of the Y detections is realized through RNTI matching. In one embodiment, any two detections of the Y detections are performed on different RE sets. In one embodiment, two detections of the Y detections are performed on the same RE set. In one embodiment, Y is related to the processing capability of the UE. In one embodiment, only one detection of the Y detections is performed on the P RE sets. Embodiment 11 Embodiment 11 illustrates an example of a structure block diagram of a processing device in a UE, as shown inFIG.11. InFIG.11, the UE processing device1100is mainly composed of a first receiver module1101and a second receiver module1102. The first receiver module1101includes the transmitter/receiver456(including antenna460), the receiving processor452and the controller/processor490shown inFIG.4. The second receiver module1102includes the transmitter/receiver456(including antenna460), the receiving processor452and the controller/processor490shown inFIG.4. 
In Embodiment 11, the first receiver module1101receives first information, and the second receiver module1102monitors a first signaling in M RE sets, wherein the first information is used for determining the M RE sets, a monitor of the first signaling assumes that P modulation symbol groups are one-to-one mapped to P RE sets, the P RE sets are P RE sets of the M RE sets, and one blind detection for the first signaling is performed on the P RE sets; modulation symbols obtained by modulating the bits in a first bit sequence constitute a first modulation symbol set sequentially, wherein the bits in the first bit sequence are sequentially arranged according to an output of a channel encoder; the first modulation symbol set is divided into the P modulation symbol groups successively from the front to the rear, and each modulation symbol group of the P modulation symbol groups consists of X modulation symbols; and relative positions of the P RE sets in frequency domain are related to at least one of {M, X}, wherein M is a positive integer, P is a positive integer not greater than M, and X is a positive integer. 
In one embodiment, the monitor of the first signaling assumes that the P modulation symbol groups are P consecutive groups of M modulation symbol groups sequentially arranged, the M modulation symbol groups sequentially arranged are interleaved and subsequently constitute a first symbol group sequence sequentially, and the modulation symbol groups in the first symbol group sequence are mapped, starting from a target RE set, to the M RE sets which are arranged in the order of frequency, wherein the target RE set is one of the M RE sets, and the interleaving includes the following: the M modulation symbol groups sequentially arranged are mapped into a target interleaving matrix sequentially according to the order of rows first and then columns, as M matrix elements; and the elements in the target interleaving matrix are rearranged and then read out sequentially according to the order of columns first and then rows to obtain the first symbol group sequence. Herein, at least one of {M, X} is used for determining the number of columns of the target interleaving matrix, and relative positions of the P RE sets in frequency domain are related to the number of columns of the target interleaving matrix. 
In one embodiment, the monitor of the first signaling assumes that the P modulation symbol groups are P consecutive groups of M modulation symbol groups which are sequentially arranged, the M modulation symbol groups which are sequentially arranged are interleaved to sequentially constitute a first symbol group sequence, and the modulation symbol groups in the first symbol group sequence are mapped, starting from a target RE set, to the M RE sets which are arranged in the order from high frequency to low frequency, wherein the target RE set is one of the M RE sets, and the interleaving includes the following processes: the M modulation symbol groups which are sequentially arranged are arranged into a target interleaving matrix sequentially according to the order of rows first and then columns, as M matrix elements; and the elements in the target interleaving matrix are rearranged and then read out sequentially according to the order of columns first and then rows to obtain the first symbol group sequence. Herein, at least one of {M, X} is used for determining the number of columns of the target interleaving matrix, and relative positions of the P RE sets in frequency domain are related to the number of columns of the target interleaving matrix; the target interleaving matrix includes R rows and C columns, the M modulation symbol groups and J blank modulation symbol groups together are arranged into the target interleaving matrix, the sum of M and J is equal to the product of R and C, wherein R is a positive integer, C is a positive integer, and J is a non-negative integer. 
In one embodiment, the monitor of the first signaling assumes that the P modulation symbol groups are P consecutive groups of M modulation symbol groups sequentially arranged, the M modulation symbol groups sequentially arranged are interleaved and subsequently constitute a first symbol group sequence sequentially, and the modulation symbol groups in the first symbol group sequence are mapped, starting from a target RE set, to the M RE sets which are arranged in the order of frequency, wherein the target RE set is one of the M RE sets, and the interleaving includes the following:the M modulation symbol groups which are sequentially arranged are arranged into a target interleaving matrix sequentially according to the order of rows first and then columns, as M matrix elements; andthe elements in the target interleaving matrix are rearranged and then read out sequentially according to the order of columns first and then rows to obtain the first symbol group sequence. Herein, at least one of {M, X} is used for determining the number of columns of the target interleaving matrix, and relative positions of the P RE sets in frequency domain are related to the number of columns of the target interleaving matrix; the target RE set is related to at least one of {a feature ID of a transmitter of the first signaling, a feature ID of the monitor of the first signaling, a time-frequency position of an RE in the M RE sets}. 
In one embodiment, the monitor of the first signaling assumes that the P modulation symbol groups are P consecutive groups of M modulation symbol groups sequentially arranged, the M modulation symbol groups sequentially arranged are interleaved and subsequently constitute a first symbol group sequence sequentially, and the modulation symbol groups in the first symbol group sequence are mapped, starting from a target RE set, to the M RE sets which are arranged in the order of frequency, wherein the target RE set is one of the M RE sets, and the interleaving includes the following:the M modulation symbol groups which are sequentially arranged are arranged into a target interleaving matrix sequentially according to the order of rows first and then columns, as M matrix elements; andthe elements in the target interleaving matrix are rearranged and then read out sequentially according to the order of columns first and then rows to obtain the first symbol group sequence. Herein, at least one of {M, X} is used for determining the number of columns of the target interleaving matrix, and relative positions of the P RE sets in frequency domain are related to the number of columns of the target interleaving matrix; the target interleaving matrix includes R rows and C columns, the M modulation symbol groups and J blank modulation symbol groups together are arranged into the target interleaving matrix, the sum of M and J is equal to the product of R and C, wherein R is a positive integer, C is a positive integer, and J is a non-negative integer; the target RE set is related to at least one of {a feature ID of a transmitter of the first signaling, a feature ID of the monitor of the first signaling, a time-frequency position of an RE in the M RE sets}. 
In one embodiment, a first RE set is one of the M RE sets, the monitor of the first signaling assumes that all REs in the first RE set adopt the same precoding, and the monitor of the first signaling cannot assume that the REs in the first RE set and the REs other than the first RE set both adopt the same precoding. In one embodiment, a maximum of Y detections are performed for the first signaling in the M RE sets, at least one detection of the Y detections is performed on the P RE sets, Y is a positive integer, and the first signaling is a physical layer signaling. In one embodiment, the first receiver module1101further receives second information, and the second information is used for determining X. Embodiment 12 Embodiment 12 illustrates an example of a structure block diagram of a processing device in a base station, as shown inFIG.12. InFIG.12, the processing device1200of the base station is mainly composed of a first transmitter module1201and a second transmitter module1202. The first transmitter module1201includes the transmitter/receiver416(including antenna420), the transmitting processor415and the controller/processor440shown inFIG.4. The second transmitter module1202includes the transmitter/receiver416(including antenna420), the transmitting processor415and the controller/processor440shown inFIG.4. 
In Embodiment 12, the first transmitter module1201transmits first information, and the second transmitter module1202transmits a first signaling in M RE sets, wherein the first information is used for determining the M RE sets, a monitor of the first signaling assumes that P modulation symbol groups are one-to-one mapped to P RE sets, the P RE sets are P RE sets of the M RE sets, and one blind detection for the first signaling is performed on the P RE sets; modulation symbols obtained by modulating the bits in a first bit sequence constitute a first modulation symbol set sequentially, wherein the bits in the first bit sequence are sequentially arranged according to an output of a channel encoder; the first modulation symbol set is divided into the P modulation symbol groups successively from the front to the rear, and each modulation symbol group of the P modulation symbol groups consists of X modulation symbols; and relative positions of the P RE sets in frequency domain are related to at least one of {M, X}, wherein M is a positive integer, P is a positive integer not greater than M, and X is a positive integer. 
In one embodiment, the monitor of the first signaling assumes that the P modulation symbol groups are P consecutive groups of M modulation symbol groups sequentially arranged, the M modulation symbol groups sequentially arranged are interleaved and subsequently constitute a first symbol group sequence sequentially, and the modulation symbol groups in the first symbol group sequence are mapped, starting from a target RE set, to the M RE sets which are arranged in the order of frequency, wherein the target RE set is one of the M RE sets, and the interleaving includes the following:the M modulation symbol groups sequentially arranged are mapped into a target interleaving matrix sequentially according to the order of rows first and then columns, as M matrix elements; andthe elements in the target interleaving matrix are rearranged and then read out sequentially according to the order of columns first and then rows to obtain the first symbol group sequence. Herein, at least one of {M, X} is used for determining the number of columns of the target interleaving matrix, and relative positions of the P RE sets in frequency domain are related to the number of columns of the target interleaving matrix. 
In one embodiment, the monitor of the first signaling assumes that the P modulation symbol groups are P consecutive groups of M modulation symbol groups sequentially arranged, the M modulation symbol groups sequentially arranged are interleaved and subsequently constitute a first symbol group sequence, and the modulation symbol groups in the first symbol group sequence are mapped, starting from a target RE set, to the M RE sets which are arranged in the order of frequency, wherein the target RE set is one of the M RE sets, and the interleaving includes the following:the M modulation symbol groups sequentially arranged are mapped into a target interleaving matrix sequentially according to the order of rows first and then columns, as M matrix elements; andthe elements in the target interleaving matrix are rearranged and then read out sequentially according to the order of columns first and then rows to obtain the first symbol group sequence. Herein, at least one of {M, X} is used for determining the number of columns of the target interleaving matrix, and relative positions of the P RE sets in frequency domain are related to the number of columns of the target interleaving matrix; the target interleaving matrix includes R rows and C columns, the M modulation symbol groups and J blank modulation symbol groups together are arranged into the target interleaving matrix, the sum of M and J is equal to the product of R and C, wherein R is a positive integer, C is a positive integer, and J is a non-negative integer. 
In one embodiment, the monitor of the first signaling assumes that the P modulation symbol groups are P consecutive groups of M modulation symbol groups sequentially arranged, the M modulation symbol groups sequentially arranged are interleaved and subsequently constitute a first symbol group sequence sequentially, and the modulation symbol groups in the first symbol group sequence are mapped, starting from a target RE set, to the M RE sets which are arranged in the order of frequency, wherein the target RE set is one of the M RE sets, and the interleaving includes the following:the M modulation symbol groups which are sequentially arranged are arranged into a target interleaving matrix sequentially according to the order of rows first and then columns, as M matrix elements; andthe elements in the target interleaving matrix are rearranged and then read out sequentially according to the order of columns first and then rows to obtain the first symbol group sequence. Herein, at least one of {M, X} is used for determining the number of columns of the target interleaving matrix, and relative positions of the P RE sets in frequency domain are related to the number of columns of the target interleaving matrix; the target RE set is related to at least one of {a feature ID of a transmitter of the first signaling, a feature ID of the monitor of the first signaling, a time-frequency position of an RE in the M RE sets}. 
In one embodiment, the monitor of the first signaling assumes that the P modulation symbol groups are P consecutive groups of M modulation symbol groups sequentially arranged, the M modulation symbol groups sequentially arranged are interleaved and subsequently constitute a first symbol group sequence sequentially, and the modulation symbol groups in the first symbol group sequence are mapped, starting from a target RE set, to the M RE sets which are arranged in the order of frequency, wherein the target RE set is one of the M RE sets, and the interleaving includes the following:the M modulation symbol groups which are sequentially arranged are arranged into a target interleaving matrix sequentially according to the order of rows first and then columns, as M matrix elements; andthe elements in the target interleaving matrix are rearranged and then read out sequentially according to the order of columns first and then rows to obtain the first symbol group sequence. Herein, at least one of {M, X} is used for determining the number of columns of the target interleaving matrix, and relative positions of the P RE sets in frequency domain are related to the number of columns of the target interleaving matrix; the target interleaving matrix includes R rows and C columns, the M modulation symbol groups and J blank modulation symbol groups together are arranged into the target interleaving matrix, the sum of M and J is equal to the product of R and C, wherein R is a positive integer, C is a positive integer, and J is a non-negative integer; the target RE set is related to at least one of {a feature ID of a transmitter of the first signaling, a feature ID of the monitor of the first signaling, a time-frequency position of an RE in the M RE sets}. 
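The row-first write / column-first read interleaving recited in these embodiments can be sketched in a few lines. This is an illustrative model only, not the claimed procedure: the element rearrangement step is taken as the identity, the number of columns is a free parameter, and blank padding groups are simply skipped on read-out.

```python
import math

def block_interleave(groups, num_cols):
    """Write M modulation symbol groups into an R x C matrix in the order of
    rows first and then columns (padding with blanks so M + J == R * C), then
    read the matrix out in the order of columns first and then rows to obtain
    the first symbol group sequence.  Blanks are skipped on read-out."""
    M = len(groups)
    R = math.ceil(M / num_cols)                       # smallest R with R*C >= M
    padded = groups + [None] * (R * num_cols - M)     # J blank symbol groups
    matrix = [padded[r * num_cols:(r + 1) * num_cols] for r in range(R)]
    return [matrix[r][c] for c in range(num_cols) for r in range(R)
            if matrix[r][c] is not None]
```

For example, six groups written into a two-column matrix come out as columns: `block_interleave(list(range(6)), 2)` yields `[0, 2, 4, 1, 3, 5]`, which illustrates how the column count controls the relative frequency-domain positions of consecutive groups.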
In one embodiment, a first RE set is one of the M RE sets, the monitor of the first signaling assumes that all REs in the first RE set adopt the same precoding, and the monitor of the first signaling cannot assume that the REs in the first RE set and the REs outside the first RE set adopt the same precoding. In one embodiment, a maximum of Y detections are performed for the first signaling in the M RE sets, at least one detection of the Y detections is performed on the P RE sets, Y is a positive integer, and the first signaling is a physical layer signaling. In one embodiment, the first transmitter module1201further transmits second information, and the second information is used for determining X. Those of ordinary skill in the art may understand that all or part of the steps in the above method may be implemented by instructing related hardware through a program. The program may be stored in a computer readable storage medium, for example, a Read-Only Memory (ROM), hard disk or compact disc. Optionally, all or part of the steps in the above embodiments may also be implemented by one or more integrated circuits. Correspondingly, each module unit in the above embodiments may be realized in the form of hardware or in the form of software function modules. The present disclosure is not limited to any specific combination of hardware and software. The UE or terminal in the present disclosure includes, but is not limited to, mobile phones, tablet computers, notebooks, network cards, low-power equipment, enhanced MTC (eMTC) equipment, NB-IoT equipment, vehicle-mounted communication equipment, and other wireless communication equipment. The base station or network side equipment in the present disclosure includes, but is not limited to, macro-cellular base stations, micro-cellular base stations, home base stations, relay base stations, eNBs, gNBs, TRPs, and other wireless communication equipment.
The above are merely the preferred embodiments of the present disclosure and are not intended to limit the scope of protection of the present disclosure. Any modification, equivalent substitution and improvement made within the spirit and principle of the present disclosure are intended to be included within the scope of protection of the present disclosure.
11943162
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION Both the SRI (schedule request indicator) and the SRS (sounding reference signal) allocations are configured semi-statically by the eNB, and occur periodically. The typical period for the SRI is 10 ms so as to provide a low-latency procedure whenever the UE needs to transmit new data. The SRS period typically depends on the type of traffic and the UE velocity. As a result, SRS and SRI periods may not be integer multiples of each other, in which case it may happen that both are occasionally scheduled in the same transmission instance. Disclosed herein are various embodiments of solutions for retaining the SC (single carrier) property of the transmission for a particular allocation in which the SRS and SRI overlap in one transmission instance. FIG.1shows an exemplary wireless telecommunications network100. The illustrative telecommunications network includes representative base stations101,102, and103; however, a telecommunications network necessarily includes many more base stations. Each of base stations101,102, and103is operable over a corresponding coverage area104,105, and106. Each base station's coverage area is further divided into cells. In the illustrated network, each base station's coverage area is divided into three cells. Handset or other UE109is shown in Cell A108, which is within coverage area104of base station (eNB)101. Base station101is transmitting to and receiving transmissions from UE109via downlink110and uplink111. A UE in a cell may be stationary such as within a home or office, or may be moving while a user is walking or riding in a vehicle. UE109moves within cell108with a velocity112relative to base station101. As UE109moves out of Cell A108, and into Cell B107, UE109may be handed over to base station102. Because UE109is synchronized with base station101, UE109must employ non-synchronized random access to initiate handover to base station102.
As long as UE109remains within cell108and remains synchronized to eNB101, it may request allocation of resources using the scheduling request procedure. Occasional conflicts between the semi-static SRS allocations and SRI allocations for UE109are resolved using an embodiment as described in more detail below. FIG.2is a ladder diagram illustrating a scheduling request procedure for UL synchronized UEs. For example, a UE, such as UE109inFIG.1, is semi-statically allocated an SRI channel on a set of periodic transmission instances. When UE109determines that it needs to transmit data or information to eNB101(i.e. the UE has a pending scheduling request), it first transmits a positive (or ON) SRI202at its next assigned SRI transmission opportunity. Here, an SRI transmission opportunity refers to an allocated SRI channel on a transmission instance. The eNB receives SRI202and then issues an uplink scheduling grant204to UE109. UE109then transmits a scheduling request (SR)206along with data defining what resources are required using the just-allocated resource indicated in scheduling grant204. In the particular case that an SRS is also scheduled for transmission at the exact same transmission instance as SRI202, then the conflict is resolved as described in more detail below. FIGS.3A and3Billustrate coherent orthogonal structures that support transmission of SRI by multiple users within the same frequency and time resource. A similar structure is specified in E-UTRA specifications for ACK/NACK transmission on PUCCH.FIG.3Aillustrates one slot of a transmission frame in which a normal cyclic prefix (CP) is used, where c0-c11represent the cyclic shifts of a root CAZAC-like sequence, and s0-s6represent seven OFDM symbols per slot (0.5 ms). Without loss of generality, the middle three OFDM symbols312-314carry PUCCH DM RS, while the other four OFDM symbols310,311,315and316carry SRI data information.
Orthogonal covering317and318is applied to the RS OFDM symbols and the data bearing OFDM symbols, respectively. In case a UE has a pending scheduling request and is transmitting a positive (or ON) SRI, then the CAZAC-like sequences in OFDM symbols310,311,315and316are modulated/multiplied by 1. In case a UE does not have a pending scheduling request, it does not transmit any signal on its assigned SRI channel, including the RS symbols312-314and the data symbols310,311,315and316, which is equivalent to transmitting a negative (or OFF) SRI. For the SRI illustrated inFIG.3A, in each slot of a two slot sub-frame, a seven symbol length sequence is split into two orthogonal sequences, length three and length four, as illustrated. In 3GPP LTE, the defined length-3 orthogonal sequences are DFT sequences, while the length-4 orthogonal sequences are Hadamard sequences. A third length-2 orthogonal covering sequence can be applied onto the length-3 and length-4 orthogonal covering sequences, which allows multiplexing up to six UEs per cyclic shift. Using up to six cyclic shifts out of twelve available per 180 kHz frequency resource block (RB), this SRI channel can multiplex 36 UEs per RB and per sub-frame (1 ms). In another embodiment, c0-c11represent 12 different amounts of phase ramp applied to a root CAZAC-like sequence. A cyclic shifted sequence is obtained by a cyclic shift operation on the root sequence, which is typically defined in the time domain. A phase ramped sequence is obtained by a phase ramp operation on the root sequence, which is typically defined in the frequency domain. The proposed method in this disclosure applies to both cyclic shifted sequences and phase ramped sequences. Similarly,FIG.3Billustrates one slot of a transmission frame in which an extended cyclic prefix (CP) is used and therefore only six symbols320-325are available per slot (0.5 ms).
The middle two OFDM symbols322-323carry PUCCH DM RS, while the other four OFDM symbols320,321,324and325carry SRI data information. Orthogonal covering326and327is applied to the RS OFDM symbols and the data bearing OFDM symbols, respectively. In case a UE has a pending scheduling request and is transmitting a positive (or ON) SRI, then the CAZAC-like sequences in OFDM symbols320,321,324and325are modulated/multiplied by 1. In case a UE does not have a pending scheduling request, it does not transmit any signal on its assigned SRI channel, including the RS symbols322-323and the data symbols320,321,324and325, which is equivalent to transmitting a negative (or OFF) SRI. In 3GPP LTE, structures similar toFIG.3AandFIG.3Bare used for the transmission of ACK/NAK on PUCCH, for normal and extended CP, respectively. For ACK/NAK transmission, the four data OFDM symbols carry the ACK/NAK BPSK or QPSK symbol. In other words, the CAZAC-like sequence in a data OFDM symbol is modulated/multiplied by the ACK/NAK BPSK or QPSK symbol. In each OFDM symbol, a cyclically shifted or phase ramped CAZAC-like sequence is transmitted. The CAZAC-like sequence in a PUCCH DM RS OFDM symbol is un-modulated, or equivalently modulated/multiplied by 1. The CAZAC-like sequence in a data OFDM symbol is modulated by the data symbol. In case of a positive SRI transmission, the CAZAC-like sequence in a data OFDM symbol is modulated/multiplied by 1. In this disclosure, a CAZAC-like sequence generally refers to any sequence that has the property of constant amplitude zero auto correlation. Examples of CAZAC-like sequences include, but are not limited to, Chu Sequences, Frank-Zadoff Sequences, Zadoff-Chu (ZC) Sequences, Generalized Chirp-Like (GCL) Sequences, or any computer generated CAZAC sequences. One example of a CAZAC-like sequence ru,v(n) is given by ru,v(n)=e^(jφ(n)π/4), 0≤n≤MscRS−1, where MscRS=12 and φ(n) is defined in Table 1.
In this disclosure, the cyclically shifted or phase ramped CAZAC-like sequence is sometimes denoted as cyclic shifted base sequence, cyclic shifted root sequence, phase ramped base sequence, phase ramped root sequence, or any other equivalent term.

Table 1. Definition of φ(n)
u: φ(0), . . . , φ(11)
0: −1, 1, 3, −3, 3, 3, 1, 1, 3, 1, −3, 3
1: 1, 1, 3, 3, 3, −1, 1, −3, −3, 1, −3, 3
2: 1, 1, −3, −3, −3, −1, −3, −3, 1, −3, 1, −1
3: −1, 1, 1, 1, 1, −1, −3, −3, 1, −3, 3, −1
4: −1, 3, 1, −1, 1, −1, −3, −1, 1, −1, 1, 3
5: 1, −3, 3, −1, −1, 1, 1, −1, −1, 3, −3, 1
6: −1, 3, −3, −3, −3, 3, 1, −1, 3, 3, −3, 1
7: −3, −1, −1, −1, 1, −3, 3, −1, 1, −3, 3, 1
8: 1, −3, 3, 1, −1, −1, −1, 1, 1, 3, −1, 1
9: 1, −3, −1, 3, 3, −1, −3, 1, 1, 1, 1, 1
10: −1, 3, −1, 1, 1, −3, −3, −1, −3, −3, 3, −1
11: 3, 1, −1, −1, 3, 3, −3, 1, 3, 1, 3, 3
12: 1, −3, 1, 1, −3, 1, 1, 1, −3, −3, −3, 1
13: 3, 3, −3, 3, −3, 1, 1, 3, −1, −3, 3, 3
14: −3, 1, −1, −3, −1, 3, 1, 3, 3, 3, −1, 1
15: 3, −1, 1, −3, −1, −1, 1, 1, 3, 1, −1, −3
16: 1, 3, 1, −1, 1, 3, 3, 3, −1, −1, 3, −1
17: −3, 1, 1, 3, −3, 3, −3, −3, 3, 1, 3, −1
18: −3, 3, 1, 1, −3, 1, −3, −3, −1, −1, 1, −3
19: −1, 3, 1, 3, 1, −1, −1, 3, −3, −1, −3, −1
20: −1, −3, 1, 1, 1, 1, 3, 1, −1, 1, −3, −1
21: −1, 3, −1, 1, −3, −3, −3, −3, −3, 1, −1, −3
22: 1, 1, −3, −3, −3, −3, −1, 3, −3, 1, −3, 3
23: 1, 1, −1, −3, −1, −3, 1, −1, 1, 3, −1, 1
24: 1, 1, 3, 1, 3, 3, −1, 1, −1, −3, −3, 1
25: 1, −3, 3, 3, 1, 3, 3, 1, −3, −1, −1, 3
26: 1, 3, −3, −3, 3, −3, 1, −1, −1, 3, −1, −3
27: −3, −1, −3, −1, −3, 3, 1, −1, 1, 3, −3, −3
28: −1, 3, −3, 3, −1, 3, 3, −3, 3, 3, −1, −1
29: 3, −3, −3, −1, −1, −3, −1, 3, −3, 3, 1, −1

FIG.4is a frequency vs. time plot illustrating PUCCH404,405and PUSCH402, with Scheduling Request Indicators transmitted in the PUCCH. In this patent application, without loss of generality, an SRI is sent on the PUCCH. As mentioned earlier, SRI is continuously allocated on one RB of the physical uplink control channel (PUCCH) such that thirty-six UEs can be multiplexed in one RB subframe, as indicated generally at407. The next sequential subframe is indicated at408and can likewise support up to thirty-six UEs. Within a sub-frame, the SRI hops at both edges of the system bandwidth on a slot basis. Each slot represents one-half of a subframe. For example, an SRI in slot406-1of subframe407is in the higher frequency edge404and the SRI is repeated in slot406-2of subframe407which is in the lower frequency edge405of the PUCCH. Similarly, slots408-1,408-2carry SRI for the next set of thirty-six UEs in subframe409. In general, the first and second slot SRI sequences are the same, but they may be different in some embodiments.
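As an illustration of the CAZAC property and of the cyclic shift / phase ramp equivalence noted with respect to FIG. 3A, the following sketch generates a Zadoff-Chu sequence and checks both properties numerically. The prime length 13, root index 1, and shift value are arbitrary illustrative choices, not parameters from the disclosure.

```python
import numpy as np

N, u, shift = 13, 1, 3          # illustrative prime length, root index, shift
n = np.arange(N)
zc = np.exp(-1j * np.pi * u * n * (n + 1) / N)   # Zadoff-Chu sequence, odd N

# Constant amplitude
assert np.allclose(np.abs(zc), 1.0)

# Zero cyclic autocorrelation at every nonzero lag
for lag in range(1, N):
    assert abs(np.vdot(zc, np.roll(zc, lag))) < 1e-9

# A cyclic shift in the time domain equals a phase ramp in the frequency
# domain: DFT(x[(n - s) mod N]) = X[k] * exp(-2j*pi*k*s/N)
ramp = np.exp(-2j * np.pi * shift * n / N)
assert np.allclose(np.fft.fft(np.roll(zc, shift)), np.fft.fft(zc) * ramp)
```

The last check is the reason the disclosure can treat cyclic shifted sequences and phase ramped sequences interchangeably: the two operations are DFT duals of each other.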
A Sounding reference signal (SRS) is transmitted in one SC-OFDM symbol within a transmission instance (e.g. a 1 ms subframe consisting of 14 or 12 SC-OFDM symbols). SRS is typically wideband in support of frequency-dependent scheduling, link adaptation, power control, and UL synchronization maintenance, for example. Unlike “pure” OFDMA systems, Single Carrier (SC) systems are more restrictive regarding how different UL transmissions can be frequency-multiplexed; the restriction can be stated as follows: for a given UE, only one transmission per SC-OFDM symbol can be frequency-mapped to the system bandwidth, so the UE cannot frequency multiplex different transmissions in the same SC-OFDM symbol. This prevents a spike in the peak to average power ratio (PAPR) that is undesirable. Both SRI and SRS allocations are configured semi-statically by the eNB, and occur periodically. The typical period for the SRI is 10 ms so as to provide a low-latency procedure whenever the UE needs to transmit new data. The SRS period typically depends on the type of traffic and the UE speed. As a result, SRS and SRI periods may not be integer multiples of each other, in which case it may happen that both are configured in the same transmission instance. Several solutions will be described in the following paragraphs. Solution 1 It is possible that SRS and ACK/NAK (or CQI) may need to be transmitted in the same transmission instance (e.g. a 1 ms subframe). Due to the restriction imposed by the single carrier property, the transmission of ACK/NACK (or CQI) is prioritized over SRS and SRS is always dropped. As a result, one could consider extending the same rule to SRI and SRS. However, in the case of simultaneous SRI and SRS allocation in the same transmission instance, the “SRS dropping” rule does not need to be so restrictive, because the SRI transmission only occurs when a UE actually has a pending scheduling request.
In other words, the transmission of SRI is On-Off based, and most of the time, the UE does not send anything (i.e. no pending scheduling request). As a result, in order not to unnecessarily drop SRS, the following transmission method is applicable in case of simultaneous SRI and SRS allocation: whenever a UE has both a PUCCH-SRI and an SRS allocation in the same transmission instance, if the UE has a pending scheduling request, it transmits a positive (or ON) SRI and does not transmit SRS; otherwise, it transmits SRS. Solution 2 FIG.5is a simplified version ofFIG.4further illustrating an embodiment of a concurrent transmission scheme of SRS and SRI in the same transmission instance. Here, each transmission instance consists of two slots, where SRS is transmitted in one slot502and SRI is transmitted in the other slot503.FIG.5shows an example where SRS504is transmitted in the first slot502on the PUSCH resources and SRI is transmitted in the second slot503on the PUCCH resource. The resource for SRI in the first slot of the PUCCH can be left unused. It is not precluded that UL SRS is transmitted in the second slot and SRI is transmitted in the first slot. The position of SRS inFIG.5is exemplary. Solution 3 A third approach is to puncture one SRI symbol within the transmission instance (comprising a plurality of symbols) to accommodate the transmission of SRS.FIGS.6A-6Eshow a few examples where the first SRI symbol is punctured for SRS. The position of SRS is exemplary. The SRS may be placed in the last OFDM symbol of a subframe, for example. InFIG.6A, SRS604is transmitted in the first OFDM symbol612in slot 0602. Note that SRI is not transmitted in symbol612, thereby maintaining the single carrier property. The punctured slot 0 SRI contains only six symbols, while the SRI in slot 1603contains the normal seven symbols.
It should be noted that when any of the examples of the third solution are used, all UEs in the cell should transmit the SRI with the punctured format, even when SRS is not being transmitted. FIG.6Billustrates a punctured SRI in which the first three symbols (S1, S2, S3) after empty symbol S0612contain the first SRI sequence and the last three symbols (S4, S5, S6) of slot 0 contain the second SRI sequence. The slot 1 SRI contains the standard SRI with the first sequence in the middle three symbols and the second sequence in the four remaining outer symbols. Each of the OFDM symbols denoted by the first SRI sequence can be used for the transmission of SRI DM RS, while each of the OFDM symbols denoted by the second SRI sequence can be used for the transmission of SRI data. FIG.6Cillustrates a punctured SRI in which the first SRI sequence in slot 0 occupies only two symbols (S3, S4) and the second SRI sequence in slot 0 occupies the four remaining symbols (S1, S2, S5, S6). The slot 1 SRI contains the standard SRI with the first sequence in the middle three symbols and the second sequence in the four remaining outer symbols. Each of the OFDM symbols denoted by the first SRI sequence can be used for the transmission of SRI DM RS, while each of the OFDM symbols denoted by the second SRI sequence can be used for the transmission of SRI data. FIG.6Dillustrates a punctured SRI in which the first SRI sequence in slot 0 occupies four symbols (S2, S3, S4, S5) and the second SRI sequence in slot 0 occupies the two remaining symbols (S1, S6). The slot 1 SRI contains the standard SRI with the first sequence in the middle three symbols and the second sequence in the four remaining outer symbols. Each of the OFDM symbols denoted by the first SRI sequence can be used for the transmission of SRI DM RS, while each of the OFDM symbols denoted by the second SRI sequence can be used for the transmission of SRI data. 
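The normal-CP punctured slot-0 layouts of FIGS. 6B-6D can be summarized as symbol-role maps. The dictionary form below is purely illustrative: S0 is the symbol punctured to accommodate SRS, the first SRI sequence carries DM RS, and the second carries data.

```python
# Slot-0 symbol roles (normal CP, symbols S0..S6) for the punctured SRI
# examples.  "srs" marks the punctured symbol S0 left for SRS, "dm_rs" the
# first SRI sequence, and "data" the second SRI sequence.
PUNCTURED_SLOT0 = {
    "fig_6b": {"srs": [0], "dm_rs": [1, 2, 3],    "data": [4, 5, 6]},
    "fig_6c": {"srs": [0], "dm_rs": [3, 4],       "data": [1, 2, 5, 6]},
    "fig_6d": {"srs": [0], "dm_rs": [2, 3, 4, 5], "data": [1, 6]},
}

# Every variant uses each of the seven symbols exactly once, so a UE never
# frequency-multiplexes two transmissions in one symbol (single-carrier
# property).
for roles in PUNCTURED_SLOT0.values():
    assert sorted(roles["srs"] + roles["dm_rs"] + roles["data"]) == list(range(7))
```

The extended-CP variant of FIG. 6E follows the same pattern with only six symbols per slot.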
FIG.6Eillustrates a punctured SRI based on the extended CP format ofFIG.3Bin which the first SRI sequence in slot 0 occupies two symbols (S2, S3) and the second SRI sequence in slot 0 occupies the three remaining symbols (S1, S4, S5). The slot 1 SRI contains the standard SRI with the first sequence in the middle two symbols of the extended CP format and the second sequence in the four remaining outer symbols. Each of the OFDM symbols denoted by the first SRI sequence can be used for the transmission of SRI DM RS, while each of the OFDM symbols denoted by the second SRI sequence can be used for the transmission of SRI data. In other embodiments, the format of the SRI in slot 1 may also be varied to more closely match the punctured SRI in slot 0. Configuration of SRS+SRI Operation Overall, solution 1 and solution 3 appear to be the reasonable methods for handling concurrent allocation/transmission of SRS and SRI in the same transmission instance. Therefore, the following two Options may apply. Option 1: Drop SRS in case UE needs to transmit a positive (or ON) SRI as described in solution 1. Option 2: Puncture one SRI symbol to allow simultaneous SRS and SRI transmission in one transmission instance as described in solution 3. It is possible for Node-Bs or cells to configure the SRS+SRI operation. For example, a 1-bit control signaling can be included in a broadcast channel (for example, dynamic BCH) to indicate whether Option 1 or Option 2 is adopted for a given cell. This configuration can be cell specific, NodeB specific or common to all cells/NodeBs in the system. Similarly, for SRS+ACK/NAK, two options are available: Option 1: Drop SRS in case of collision with ACK/NAK Option 2: Puncture one ACK/NAK symbol to allow simultaneous SRS and ACK/NAK transmission. Thus, two control signaling bits can be included in D-BCH, one for the configuration of SRS+SRI, and the other for the configuration of SRS+ACK/NAK. 
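The two configuration bits described above can be modeled as a simple decode. The bit ordering, bit values, and field names here are hypothetical illustrations; the disclosure only specifies that one bit selects the SRS+SRI option and the other the SRS+ACK/NAK option.

```python
def decode_config_bits(sri_srs_bit: int, srs_acknak_bit: int) -> dict:
    """Decode the two broadcast (D-BCH) configuration bits: for each
    collision type, bit value 0 selects Option 1 (drop SRS) and bit value 1
    selects Option 2 (puncture one symbol and transmit both).  The bit
    ordering and the value-to-option mapping are illustrative choices."""
    option = {0: "drop_srs", 1: "puncture_and_transmit_both"}
    return {"sri_srs": option[sri_srs_bit],
            "srs_acknak": option[srs_acknak_bit]}

assert decode_config_bits(0, 1) == {
    "sri_srs": "drop_srs",
    "srs_acknak": "puncture_and_transmit_both",
}
```

This configuration could be cell specific, NodeB specific, or common to all cells, exactly as stated for the 1-bit case.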
Alternatively, it is not precluded that a common 1-bit control signaling bit is used to configure the operations of both SRS+SRI and SRS+ACK/NAK. For example, the 1-bit control signaling indicates: Drop SRS in case of collision with a positive SRI or ACK/NAK, or Puncture one SRI or ACK/NAK symbol to allow simultaneous SRS and SRI or SRS and ACK/NAK transmission. Using a common 1-bit signaling for the configurations of concurrent allocation/transmission of SRS+SRI and SRS+ACK/NAK is preferable, due to lower control overhead. FIG.7is a flow diagram illustrating allocation and transmission of SRS and SRI according to an embodiment of the present invention. As described above, orthogonal block spreading codes can be applied to multiple users for simultaneous transmission within the same frequency-time resource. This scheme is used for transmission of SRI. When a UE enters a cell, it receives702from the NodeB serving the cell an allocation of a set of periodic transmission instances for SRI. It also receives702an allocation of a set of periodic transmission instances for SRS. It also receives configuration information to instruct it as to which channel resources it is to use for transmission. In some embodiments, it also receives an indication of a mode of operation to use when an SRS transmission and an SRI transmission are both allocated in the same transmission instance. During a normal course of operation, a given UE transmits720an SRS according to its periodic SRS allocation. Whenever it has a scheduling request to transmit, it sends720a positive (or ON) SRI according to its periodic SRI allocation and receives further resource allocations using the three step procedure described with respect toFIG.2. Generally, a particular SRS allocation and a particular SRI allocation will be allocated different transmission instances. Since the SRI and SRS allocations have different periods, occasionally the same transmission instance will be allocated704for both SRS and SRI.
Frequently, when this happens, there will not be any scheduling request706for transmission so there will not be a pending SRI. Hence, the SRS is transmitted708without transmitting an SRI in a transmission instance allocated for both SRI and SRS when the SRI indicates no pending scheduling request. In a small percentage of occurrences, there is a pending scheduling request and consequently an SRI will be pending706for transmission. In this case, if the UE has been instructed to use a first mode of operation when a transmission instance is allocated for both SRS and SRI, the SRI is transmitted710without transmitting the SRS in the transmission instance. In a second mode of operation, both the SRI and the SRS are transmitted712in the transmission instance. In various embodiments, this is accomplished using one of the structures described in more detail inFIGS.6A-6Ewhere a transmission instance comprises a plurality of SC-OFDMA symbols. In the second mode of operation, at least one SC-OFDMA symbol is used for the transmission of SRS and at least another SC-OFDMA symbol is used for the transmission of SRI. For embodiments in which a control message702indicating a mode of operation is not used, the UE will follow a default procedure. The default may be to transmit710an SRI without transmitting an SRS in a transmission instance allocated for both SRI and SRS when the SRI indicates a pending scheduling request, and transmit708an SRS without transmitting an SRI in a transmission instance allocated for both SRI and SRS when the SRI indicates no pending scheduling request. The control message indicating a mode of operation is sent702to all user equipments within a cell of the wireless network. The mode of operation is common to all user equipments within a cell of the wireless network.
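The two modes of operation described above, together with the no-pending-request case, can be sketched as a small decision function. The function and mode names are illustrative, not terms from the disclosure.

```python
def resolve_collision(pending_request: bool, mode: int = 1) -> set:
    """What a UE transmits when one transmission instance is allocated for
    both SRI and SRS.  Mode 1: a positive SRI preempts SRS (SRS dropped).
    Mode 2: one SRI symbol is punctured so both SRI and SRS are sent.
    With no pending scheduling request, only the SRS is transmitted."""
    if not pending_request:
        return {"SRS"}
    return {"SRI"} if mode == 1 else {"SRI", "SRS"}

assert resolve_collision(False) == {"SRS"}
assert resolve_collision(True, mode=1) == {"SRI"}
assert resolve_collision(True, mode=2) == {"SRI", "SRS"}
```

The default of mode 1 mirrors the default procedure the text describes for UEs that receive no mode-of-operation control message.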
In some embodiments, the control message702further indicates a mode of operation in a transmission instance allocated for both ACK/NAK and SRS, wherein in a third mode of operation, the ACK/NAK is transmitted without transmitting the SRS in the transmission instance; and in a fourth mode of operation, both the ACK/NAK and the SRS are transmitted in the transmission instance. In various embodiments, this is accomplished using one of the structures described in more detail inFIGS.6A-6Efor a transmission instance containing a plurality of SC-OFDMA symbols. In the fourth mode of operation, at least one SC-OFDMA symbol is used for the transmission of SRS and at least another SC-OFDMA symbol is used for the transmission of ACK/NAK. In some embodiments, the control message702is binary, indicating either the first and third modes of operation, or the second and fourth modes of operation. FIG.8Ais a block diagram of a transmitter structure850for transmitting the coherent structures ofFIGS.3A-6E. Elements of transmitter850may be implemented as components in a fixed or programmable processor. In some embodiments, the inverse Fast Fourier Transform (IFFT) block856may be implemented using an Inverse Discrete Fourier Transform (IDFT). Similarly, the Discrete Fourier Transform (DFT)852may be implemented as a Fast Fourier Transform. The SRI and SRS sequences860are transformed to the frequency domain using DFT852and mapped onto a designated set of tones (sub-carriers) using Tone Map854. Additional signals or zero-padding may or may not be present. The UE next performs an IFFT of the mapped signal using the IFFT856to transform the OFDM signal back to the time domain. The CP is then formed using a portion of the OFDM signal output from IFFT856and appended to the OFDM signal to form the complete SC-OFDM symbol which is output to the transmitter (not shown). Formation of the SC-OFDM symbol is controlled as described above so that both an SRS and an SRI are not formed in the same symbol.
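The DFT, tone-map, IFFT, and CP steps of FIG. 8A can be sketched as follows. The FFT size, tone offset, and CP length are illustrative choices, not values taken from the disclosure.

```python
import numpy as np

def sc_ofdm_symbol(seq, n_fft=128, tone_offset=24, cp_len=9):
    """Form one SC-OFDM symbol: DFT-precode the input sequence, map it onto
    a contiguous set of sub-carriers (zeros elsewhere), IFFT back to the
    time domain, and attach a cyclic prefix taken from the symbol tail."""
    freq = np.fft.fft(seq)                         # DFT precoding (block 852)
    grid = np.zeros(n_fft, dtype=complex)          # tone map (block 854)
    grid[tone_offset:tone_offset + len(seq)] = freq
    time = np.fft.ifft(grid)                       # IFFT (block 856)
    return np.concatenate([time[-cp_len:], time])  # CP from the symbol tail

sym = sc_ofdm_symbol(np.exp(2j * np.pi * np.arange(12) / 12))
assert len(sym) == 128 + 9
assert np.allclose(sym[:9], sym[-9:])              # CP repeats the tail
```

Because one sequence occupies one contiguous set of tones per symbol, the single-carrier property is preserved; keeping SRS and SRI in different symbols, as the text requires, then follows from calling this once per transmission.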
In other embodiments, the order of DFT852, tone map854and IFFT856may be arranged in various combinations. For example, in one embodiment a DFT operation is performed on a selected root sequence, tone mapping is then performed, an IDFT is performed on the mapped tones and then a cyclic shift may be performed. In another embodiment, tone mapping is performed on the root sequence and then an IDFT is performed on the mapped tones and then a cyclic shift is performed. In this disclosure, the cyclically shifted or phase ramped CAZAC-like sequence is sometimes denoted as cyclic shifted base sequence, cyclic shifted root sequence, phase ramped base sequence, phase ramped root sequence, or any other equivalent term. FIG.8Bis a more detailed block diagram of the illustrative transmitter ofFIG.8A. Elements of the transmitter may be implemented as components in a fixed or programmable processor by executing instructions stored in memory. A pre-defined set of sequences is defined. The UE generates a CAZAC-like (e.g. ZC or extended ZC or zero-autocorrelation QPSK computer-generated) sequence using base sequence generator802. A cyclic shift value is selected for each symbol based on the SRI resource index, the OFDM symbol number and the slot number in cyclic shift selecting module804. The base sequence is then shifted by cyclic shifter806on a symbol by symbol basis using shift values provided by cyclic shift selection module804. The UE generates both orthogonal covering sequences317and318, for example, using orthogonal sequence generator808. Orthogonal sequence generator808generates one sequence out of the set of orthogonal sequences based on the SRI resource index. The orthogonal covering sequence sample selection810selects and issues the appropriate sequence sample from the covering sequence based on the index of the OFDM symbol being currently generated. 
The cyclic shifted base sequence vector is element-wise complex-multiplied by the selected orthogonal covering complex sample in complex multiplier812. The result of the element-wise complex multiplication is processed as described above to form a final fully formed SC-OFDM uplink signal836. System Examples FIG.9is a block diagram illustrating operation of an eNB and a mobile UE in the network system ofFIG.1. As shown inFIG.9, wireless networking system900comprises a mobile UE device901in communication with an eNB902. The mobile UE device901may represent any of a variety of devices such as a server, a desktop computer, a laptop computer, a cellular phone, a Personal Digital Assistant (PDA), a smart phone or other electronic devices. In some embodiments, the electronic mobile UE device901communicates with the eNB902based on a LTE or E-UTRAN protocol. Alternatively, another communication protocol now known or later developed can be used. As shown, the mobile UE device901comprises a processor903coupled to a memory907and a Transceiver904. The memory907stores (software) applications905for execution by the processor903. The applications905could comprise any known or future application useful for individuals or organizations. As an example, such applications905could be categorized as operating systems (OS), device drivers, databases, multimedia tools, presentation tools, Internet browsers, e-mailers, Voice-Over-Internet Protocol (VOIP) tools, file browsers, firewalls, instant messaging, finance tools, games, word processors or other categories. Regardless of the exact nature of the applications905, at least some of the applications905may direct the mobile UE device901to transmit UL signals to the eNB (base-station)902periodically or continuously via the transceiver904. In at least some embodiments, the mobile UE device901identifies a Quality of Service (QoS) requirement when requesting an uplink resource from the eNB902. 
In some cases, the QoS requirement may be implicitly derived by the eNB902from the type of traffic supported by the mobile UE device901. As an example, VOIP and gaming applications often involve low-latency uplink (UL) transmissions while High Throughput (HTP)/Hypertext Transmission Protocol (HTTP) traffic can involve high-latency uplink transmissions. As shown inFIG.9, the transceiver904comprises uplink logic. The uplink logic executes instructions that control the operation of the transceiver. Some of these instructions may be stored in memory907and executed when needed. As would be understood by one of skill in the art, the components of the Uplink Logic may involve the physical (PHY) layer and/or the Media Access Control (MAC) layer of the transceiver904. Transceiver904includes one or more receivers and one or more transmitters. The transmitter(s) may be embodied as described with respect toFIGS.8A,8Bfor transmission of SC-OFDM SRI and SRS symbols. In particular, as described above, formation of the SRI+SRS SC-OFDM symbols is controlled so that both an SRS and an SRI are not formed in the same symbol. Buffer logic921coupled to transmitter922stores any pending scheduling request. Receiver920is operable to receive and store in memory907an allocation comprising a plurality of periodic transmission instances for a scheduling request indicator (SRI) and an allocation comprising a plurality of periodic transmission instances for a sounding reference signal (SRS). Buffer logic921is controlled by processor903and is operable to store a pending scheduling request. Transmitter922is responsive to the buffer logic and is operable to produce and transmit an SRI without transmitting an SRS in a transmission instance allocated for both SRI and SRS when the buffer logic indicates the pending scheduling request. 
It is operable to transmit an SRS without transmitting an SRI in a transmission instance allocated for both SRI and SRS, when the buffer logic indicates no pending scheduling request. In some embodiments, receiver920is further operable to receive and store in memory907a control message from NodeB902indicating a mode of operation in a transmission instance allocated for both SRI and SRS with a pending scheduling request in the transmission instance. In a first mode of operation, the transmitter is operable to transmit the SRI without transmitting the SRS in the transmission instance, wherein in a second mode of operation, both the SRI and the SRS are transmitted in the transmission instance using a structure such as illustrated inFIGS.6A-6E. In some embodiments, the control message further indicates a mode of operation in a transmission instance allocated for both ACK/NAK and SRS. In a third mode of operation, the transmitter logic is further operable to produce and transmit an ACK/NAK without transmitting the SRS in the transmission instance, wherein in a fourth mode of operation, both the ACK/NAK and the SRS are transmitted in the transmission instance using a structure similar to that illustrated inFIGS.6A-6E. In some embodiments, the control message is binary, indicating either the first and third modes of operation, or the second and fourth modes of operation. As shown inFIG.9, the eNB902comprises a Processor909coupled to a memory913and a transceiver910. The memory913stores applications908for execution by the processor909. The applications908could comprise any known or future application useful for managing wireless communications. At least some of the applications908may direct the base-station to manage transmissions to or from the user device901. Transceiver910comprises an uplink Resource Manager912, which enables the eNB902to selectively allocate uplink PUSCH resources to the user device901.
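The overlap-resolution behavior described above can be summarized in a small decision function. This is a sketch of the logic only; the function name and boolean flags are invented for illustration and do not correspond to any standardized API.

```python
def select_uplink_transmission(sri_allocated, srs_allocated,
                               pending_sr, combined_mode=False):
    # Resolve a transmission instance allocated for both SRI and SRS:
    # first mode (combined_mode=False): a pending request sends SRI only;
    # second mode (combined_mode=True): SRI and SRS share the instance.
    if sri_allocated and srs_allocated:
        if pending_sr:
            return ("SRI", "SRS") if combined_mode else ("SRI",)
        return ("SRS",)
    if sri_allocated:
        return ("SRI",) if pending_sr else ()
    return ("SRS",) if srs_allocated else ()
```

Note that the SRI is on-off keyed (see below), so with no pending request an SRI-only instance simply carries no transmission.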
As would be understood by one of skill in the art, the components of the uplink resource manager912may involve the physical (PHY) layer and/or the Media Access Control (MAC) layer of the transceiver910. Transceiver910includes a Receiver911for receiving transmissions from various UE within range of the eNB. Uplink resource manager912executes instructions that control the operation of transceiver910. Some of these instructions may be located in memory913and executed when needed. Resource manager912controls the transmission resources allocated to each UE that is being served by eNB902and broadcasts control information via the physical downlink control channel PDCCH. In particular, for the transmission of SRS and SRI, eNB902allocates in a semi-static manner periodic allocations for SRS and SRI and also indicates which mode of operation is to be used to resolve overlap conflicts, as described in more detail above. FIG.10is a block diagram of mobile cellular phone1000for use in the network ofFIG.1. Digital baseband (DBB) unit1002can include a digital signal processor (DSP) system that includes embedded memory and security features. Stimulus Processing (SP) unit1004receives a voice data stream from handset microphone1013aand sends a voice data stream to handset mono speaker1013b. SP unit1004also receives a voice data stream from microphone1014aand sends a voice data stream to mono headset1014b. Usually, SP and DBB are separate ICs. In most embodiments, SP does not embed a programmable processor core, but performs processing based on configuration of audio paths, filters, gains, etc., being set up by software running on the DBB. In an alternate embodiment, SP processing is performed on the same processor that performs DBB processing. In another embodiment, a separate DSP or other type of processor performs SP processing.
RF transceiver1006includes a receiver for receiving a stream of coded data frames and commands from a cellular base station via antenna1007and a transmitter for transmitting a stream of coded data frames to the cellular base station via antenna1007. Transmission of the PUSCH data is performed by the transceiver using the PUSCH resources designated by the serving eNB. In some embodiments, frequency hopping may be implied by using two or more bands as commanded by the serving eNB. In this embodiment, a single transceiver can support multi-standard operation (such as EUTRA and other standards) but other embodiments may use multiple transceivers for different transmission standards. Other embodiments may have transceivers for a later developed transmission standard with appropriate configuration. RF transceiver1006is connected to DBB1002which provides processing of the frames of encoded data being received and transmitted by the mobile UE unit1000. The EUTRA defines SC-FDMA (via DFT-spread OFDMA) as the uplink modulation. The basic SC-FDMA DSP radio can include discrete Fourier transform (DFT), resource (i.e. tone) mapping, and IFFT (fast implementation of IDFT) to form a data stream for transmission. To receive the data stream from the received signal, the SC-FDMA radio can include DFT, resource de-mapping and IFFT. The operations of DFT, IFFT and resource mapping/de-mapping may be performed by instructions stored in memory1012and executed by DBB1002in response to signals received by transceiver1006. For SRS and SRI transmission, a transmitter(s) within transceiver1006may be embodied as described with respect toFIG.8A/8B. In particular, as described above, formation of the SRI+SRS SC-OFDM symbols is controlled so that both an SRS and an SRI are not formed in the same symbol.
However, in some embodiments, a receiver in transceiver1006is operable to receive and store in memory a control message from a NodeB indicating a mode of operation in a transmission instance allocated for both SRI and SRS with a pending scheduling request in the transmission instance. In a first mode of operation, the transmitter is operable to transmit the SRI without transmitting the SRS in the transmission instance, wherein in a second mode of operation, both the SRI and the SRS are transmitted in the transmission instance using a structure such as illustrated inFIGS.6A-6E. In 3GPP LTE UL, a similar structure is defined for the transmission of scheduling request indicator (SRI) and ACK/NAK. The difference between the transmission of ACK/NAK and SRI is that ACK/NAK is BPSK/QPSK modulated, depending on the number of ACK/NAK bits, while SRI is ON-OFF keying modulated. In some embodiments, the control message further indicates a mode of operation in a transmission instance allocated for both ACK/NAK and SRS. In a third mode of operation, the transmitter logic is further operable to produce and transmit an ACK/NAK without transmitting the SRS in the transmission instance, wherein in a fourth mode of operation, both the ACK/NAK and the SRS are transmitted in the transmission instance using a structure similar to that illustrated inFIGS.6A-6E. In some embodiments, the control message is binary, indicating either the first and third modes of operation, or the second and fourth modes of operation. DBB unit1002may send or receive data to various devices connected to universal serial bus (USB) port1026. DBB1002can be connected to subscriber identity module (SIM) card1010, which stores and retrieves information used for making calls via the cellular system. DBB1002can also be connected to memory1012that augments the onboard memory and is used for various processing needs.
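The modulation difference noted above — BPSK/QPSK for ACK/NAK versus on-off keying for SRI — can be sketched as follows. The Gray-style QPSK mapping here is illustrative only and is not claimed to match the exact constellation mapping of the specification.

```python
import numpy as np

def modulate_ack_nak(bits):
    # 1 bit -> BPSK, 2 bits -> QPSK (illustrative Gray-style mapping)
    if len(bits) == 1:
        return complex(1 - 2 * bits[0])
    b0, b1 = bits
    return ((1 - 2 * b0) + 1j * (1 - 2 * b1)) / np.sqrt(2)

def modulate_sri(pending):
    # on-off keying: energy is transmitted only when a request is pending
    return 1.0 + 0j if pending else 0j
```

The receiver therefore decides SRI by energy detection, while ACK/NAK requires demodulating a phase (and, for 2 bits, quadrature) decision.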
DBB1002can be connected to Bluetooth baseband unit1030for wireless connection to a microphone1032aand headset1032bfor sending and receiving voice data. DBB1002can also be connected to display1020and can send information to it for interaction with a user of the mobile UE1000during a call process. Display1020may also display pictures received from the network, from a local camera1026, or from other sources such as USB1026. DBB1002may also send a video stream to display1020that is received from various sources such as the cellular network via RF transceiver1006or camera1026. DBB1002may also send a video stream to an external video display unit via encoder1022over composite output terminal1024. Encoder unit1022can provide encoding according to PAL/SECAM/NTSC video standards. Other Embodiments While the invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various other embodiments of the invention will be apparent to persons skilled in the art upon reference to this description. For example, a larger or smaller number of symbols than described herein may be used in a slot. In some embodiments, a transmission instance may refer to a subframe that contains two slots as described herein. In another embodiment, a transmission instance may refer to a single slot. In yet other embodiments, a transmission instance may refer to another agreed upon logical time duration that may be allocated for transmission resources. As used herein, the terms “applied,” “coupled,” “connected,” and “connection” mean electrically connected, including where additional elements may be in the electrical connection path. “Associated” means a controlling relationship, such as a memory resource that is controlled by an associated port. It is therefore contemplated that the appended claims will cover any such modifications of the embodiments as fall within the true scope and spirit of the invention.
DETAILED DESCRIPTION OF EMBODIMENTS Before a detailed description of the embodiments under reference ofFIG.1is given, general explanations are made. As mentioned in the outset, in general, several generations of mobile telecommunications systems are known, e.g. the third generation (“3G”), which is based on the International Mobile Telecommunications-2000 (IMT-2000) specifications, the fourth generation (“4G”), which provides capabilities as defined in the International Mobile Telecommunications-Advanced Standard (IMT-Advanced Standard), and the current fifth generation (“5G”), which is under development and which might be put into practice in the year 2020. A candidate for providing the requirements of 5G is the so-called Long Term Evolution (“LTE”), which is a wireless communications technology allowing high-speed data communications for mobile phones and data terminals and which is already used for 4G mobile telecommunications systems. Other candidates for meeting the 5G requirements are termed New Radio (NR) Access Technology Systems (NR). An NR can be based on LTE technology, just as LTE was based on previous generations of mobile communications technology. LTE is based on the GSM/EDGE (“Global System for Mobile Communications”/“Enhanced Data rates for GSM Evolution” also called EGPRS) of the second generation (“2G”) and UMTS/HSPA (“Universal Mobile Telecommunications System”/“High Speed Packet Access”) of the third generation (“3G”) network technologies. LTE is standardized under the control of 3GPP (“3rd Generation Partnership Project”) and there exists a successor LTE-A (LTE Advanced) allowing higher data rates than the basic LTE and which is also standardized under the control of 3GPP. For the future, 3GPP plans to further develop LTE-A such that it will be able to fulfill the technical requirements of 5G. 
As the 5G system will be based on LTE or LTE-A, respectively, it is assumed that specific requirements of the 5G technologies will, basically, be dealt with by features and methods which are already defined in the LTE and LTE-A standard documentation. In LTE, as mentioned above, a Physical Broadcast Channel is known which is used for broadcasting system information, such as the master information block (MIB). The PBCH broadcasts a limited number of parameters, which are essential for an initial access to a cell. These parameters pertain, for example, to the downlink system bandwidth, the Physical Hybrid ARQ Indicator Channel structure, and the most significant eight bits of the System Frame Number. Typically, as mentioned above, in LTE, the UE which wants to get access to a cell performs a cell synchronization and acquires a physical cell ID, time slot and frame synchronization, on the basis of which the UE will be able to read the system information blocks. When the UE tunes to a specific channel it typically finds the primary synchronization signal (PSS). In LTE, the PSS is located in the last OFDM symbol of the first time slot of the first subframe (subframe 0) of a radio frame. In a next step, typically, the UE finds the secondary synchronization signal (SSS), wherein the symbols of the SSS are located in the same subframe as the symbols of the PSS. Based on the SSS, the UE is able to obtain a physical layer cell identity group number (e.g. in a range from 0 to 167). On the basis of this information, the UE is able to get the location of reference signals of the cell, wherein reference signals may be used for channel estimation, cell selection and reselection, handover procedures and the like. In 3GPP a Study Item (SI) on New Radio Access Technology (NR) has been agreed, as can exemplarily be taken from 3GPP document RP-160671, “New SID Proposal: Study on New Radio Access Technology,” NTT DOCOMO, RAN #71.
On the basis of this SI a new Radio Access Technology (RAT) for the next generation wireless communications system, i.e. 5G, is studied and developed. The new RAT is expected to operate in a wide range of frequencies, in some embodiments, for example, from hundreds of MHz to one hundred GHz and it is expected to cover a broad range of use cases in some embodiments. Typical use cases that may be considered, e.g. under this SI, are Enhanced Mobile Broadband (eMBB), Massive Machine Type Communications (mMTC), and Ultra Reliable & Low Latency Communications (URLLC). Typically, for any waking user equipment (UE) accessing a network (e.g. LTE and/or NR network), detecting the primary and secondary synchronization signals (PSS and SSS) provides the UE, for example, with time, frequency and frame synchronization as well as sector, group and cell identity information. Following this, the UE can then find and decode the physical broadcast channel (PBCH), which in turn provides the UE with critical system and configuration information of the particular component carrier. This critical system and configuration information is sometimes referred to as a “Master Information Block”, MIB. Whilst the detection of the PSS and SSS entails the processing of signals to detect the random sequences used in their construction, decoding of the PBCH actually entails the channel estimation and equalization, demodulation and forward error correction code decoding of critical system information bits carried by the PBCH. The PSS and SSS are therefore, in some embodiments, designed to be detectable non-coherently, i.e. in the presence of unknown timing and frequency offsets, whether the UE is stationary or moving. Often, in practice, the PSS is detected first and its processing may provide enough frequency and timing offset correction to allow the subsequent detection of the SSS to be done coherently. The PBCH on the other hand must be decoded coherently. 
This may mean that apart from the fact that this is done after frequency and timing offset correction, the channel may, or even must, be estimated and equalized before the set of resource elements carrying the critical system information can be demodulated and decoded. Mobile systems may suffer from time and frequency selectivity. Channel estimation therefore may have to contend with these degradations. The time selectivity of the radio channel is due to time-varying fading that arises from relative movement between the UE and the base station, e.g. eNodeB. This time selectivity can be quantified as the coherence time of the transmission channel or Doppler spread which depends on the relative speed of movement between the UE and the base station, e.g. eNodeB. For these reasons, the OFDM symbols carrying the PBCH must be resilient to Doppler spread in some embodiments. OFDM symbol resilience to Doppler spread may be assured by wider subcarrier spacing (SCS) between the subcarriers of the OFDM symbol. It is known that at higher operating frequencies, a given speed of travel for the UE results in a higher Doppler spread than would be the case for operation at a lower frequency band. The RAN1 3GPP group has decided that the SCS to be used for OFDM symbols carrying the PSS and SSS will be determined according to the operating frequency range for this reason. This may also be implemented in some embodiments. Nevertheless, for networks operating at higher frequency bands, it may be desirable to use a narrower SCS for the PSS and SSS, as this may make time domain detection of the PSS easier at the UE. In such an operating scenario, it may be desirable in some embodiments, for the PBCH to be carried on OFDM symbols with wider subcarrier spacing in order to ensure its robustness to Doppler spread. 
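The dependence of Doppler spread on carrier frequency and UE speed discussed above follows the standard relation f_d = v·f_c/c, which is why a given speed produces a larger Doppler spread at higher operating bands. The speeds and carrier frequencies below are arbitrary examples for illustration.

```python
def max_doppler_shift_hz(speed_mps, carrier_hz):
    # f_d = v * f_c / c: at the same UE speed, a higher carrier frequency
    # produces a proportionally larger Doppler shift, motivating wider SCS
    c = 299_792_458.0
    return speed_mps * carrier_hz / c

low_band = max_doppler_shift_hz(30.0, 2e9)    # roughly 200 Hz at 2 GHz
high_band = max_doppler_shift_hz(30.0, 28e9)  # roughly 2.8 kHz at 28 GHz
```

The fourteen-fold increase between these two bands at the same speed illustrates why the SCS for the synchronization symbols is chosen per operating frequency range.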
Hence, some embodiments provide an efficient means of conveying to the receiver the SCS of the PBCH OFDM symbols, pertain to the issue of how the wider subcarrier spacing of the PBCH symbols can be derived from the SCS of the synchronization symbols in a manner to make its signaling easier, and provide alternative means of carrying the critical system information of the PBCH OFDM symbols even if they use the SCS of the synchronization symbols but in a manner more robust to Doppler spread. Consequently, some embodiments pertain to a base station, a user equipment and a mobile telecommunications system method, as will also be discussed in the following. The base station and the user equipment each have a circuitry which is configured to perform a method and the mobile telecommunications system method may include the methods as performed by the circuitry of the user equipment and of the base station. The base station may be based on the principles of LTE (LTE-A) and/or it may be based on NR RAT, as also discussed above. The base station may be based on the known eNodeB of LTE, as one example, or it may be based on the discussed NR eNodeB. The user equipment may be, for example, a mobile phone, smartphone, a computer, tablet, tablet personal computer, or the like, including a mobile communication interface, or any other device which is able to perform a mobile telecommunication via, for example, LTE or NR, such as a hot spot device with a mobile communication interface, etc. Some embodiments pertain to a base station for a mobile telecommunications system comprising circuitry configured to communicate with at least one user equipment, wherein the circuitry is further configured to set a first subcarrier spacing for transmission of at least one synchronization signal; and set a second subcarrier spacing for transmission on a physical broadcast channel, wherein the first subcarrier spacing differs from the second subcarrier spacing. 
The synchronization signal may be a primary synchronization signal or a secondary synchronization signal. As mentioned, the transmission may be based, as discussed, on the LTE standard or on a successor. Hence, in some embodiments, a different SCS can be used for synchronization signals (PSS & SSS) and PBCH transmission. Corresponding embodiments pertain to a user equipment for a mobile telecommunications system comprising circuitry configured to communicate with at least one base station, wherein the circuitry is further configured to receive at least one synchronization signal and receive a transmission over a physical broadcast channel, wherein the synchronization signal is transmitted with a first subcarrier spacing and the transmission over the physical broadcast channel is transmitted with a second subcarrier spacing, wherein the first subcarrier spacing differs from the second subcarrier spacing. Some embodiments pertain to a base station for a mobile telecommunications system comprising circuitry configured to communicate with at least one user equipment, wherein the circuitry is further configured to indicate, based on a primary synchronization signal, a specific subcarrier spacing for a physical broadcast channel. Hence, some embodiments allow using PSS to signal which SCS is used for PBCH in general and, for example, provide a method of encoding and transmitting that signaling by toggling signs of PSS sequence on both halves of its spectrum. The primary synchronization signal may include a symbol sequence, as it is generally known, and it may be based on transmission of OFDM symbols. The symbol sequence may be divided in at least two parts and the specific subcarrier spacing may be indicated, based on the division of the symbol sequence. 
For example, for each part of the divided symbol sequence, a positive or negative sign may be assigned and the specific subcarrier spacing may be indicated, based on the signs assigned to the parts, as will also be discussed further below. Each part of the divided symbol sequence may be multiplied by the sign assigned to the part. The amount of the specific subcarrier spacing may depend on the number of parts into which the symbol sequence, i.e. the primary synchronization signal, is divided and/or the relationship between the amount of the specific subcarrier spacing used on the physical broadcast channel and the number of parts into which the symbol sequence is divided may be predefined. Corresponding embodiments pertain to a user equipment for a mobile telecommunications system comprising circuitry configured to communicate with at least one base station, wherein the circuitry is further configured to receive at least one primary synchronization signal; and to determine, based on the received at least one primary synchronization signal, a subcarrier spacing of a physical broadcast channel. Hence, some embodiments allow reducing complexity at a receiver side to allow detection of the PBCH SCS signaling on the PSS. As mentioned, the primary synchronization signal may include a symbol sequence, and the symbol sequence may be divided in at least two parts, and the specific subcarrier spacing is determined based on the division of the symbol sequence. For each part of the divided symbol sequence, a positive or negative sign may be assigned and the subcarrier spacing may be determined, based on the signs assigned to the parts. As also discussed, the amount of the subcarrier spacing may depend on the number of parts and/or the amount of the subcarrier spacing may be predefined. The circuitry may be further configured to compute a correlation between parts of the received primary synchronization signal.
Thereby, in particular, when a positive/negative sign is assigned to the parts of the primary synchronization signal, the search for subcarrier spacing is reduced, basically, to a sign search. Some embodiments pertain to a base station for a mobile telecommunications system comprising circuitry configured to communicate with at least one user equipment, wherein the circuitry is further configured to transmit symbols on a physical broadcast channel, wherein symbols are transmitted with (at least) a (first) subcarrier spacing and modulation symbols are repeated in consecutive resource elements in the frequency domain. Although, in some embodiments, PBCH symbols are transmitted at one subcarrier spacing in this special repeated pattern, which may have the effect of providing the resilience of a wider subcarrier spacing, in some embodiments, the symbols may be transmitted with different subcarrier spacings. Hence, in some embodiments, an increase of the SCS of the PBCH is provided by repetition of QAM symbols over REs (resource elements). For example, if the base SCS is 15 kHz, repeating each symbol four times results in an effective SCS of 60 kHz. Thereby, a guard band, which may be needed in known systems, can be made obsolete. The symbols may include quadrature amplitude modulation symbols. Corresponding embodiments pertain to a user equipment for a mobile telecommunications system comprising circuitry configured to communicate with at least one base station, wherein the circuitry is further configured to receive symbols on a physical broadcast channel, wherein symbols are transmitted with (at least) a (first) subcarrier spacing and modulation symbols are repeated in consecutive resource elements in the frequency domain. As mentioned, the symbols may include quadrature amplitude modulation symbols.
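The repetition idea above can be sketched directly: each modulation symbol is copied onto adjacent resource elements, and the receiver can fold the copies back together before demodulation. Function names are illustrative, and a plain average stands in for the maximal-ratio combining discussed below.

```python
import numpy as np

def map_with_repetition(qam_symbols, reps):
    # copy each QAM symbol onto `reps` adjacent resource elements
    return np.repeat(qam_symbols, reps)

def combine_repetitions(equalized_res, reps):
    # receiver option: equalize at the base SCS, then combine the copies
    # (a plain average here; maximal-ratio combining would weight by SNR)
    return equalized_res.reshape(-1, reps).mean(axis=1)

qam = np.array([1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
res = map_with_repetition(qam, 4)  # 4 copies: 15 kHz base -> 60 kHz effective
```

Alternatively, as noted below, the receiver may treat each group of identical copies as a single wider subcarrier and skip the combining step.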
The receiving of symbols may include decoding the repeatedly transmitted symbol based on the first subcarrier spacing and/or the receiving of the symbols may include decoding the repeatedly transmitted symbol based on a second subcarrier spacing. Hence, in some embodiments, the receiver approach in decoding such repeated QAM symbols either uses the base SCS and then combines the result prior to demodulation (where the process of demodulation includes the generation of log-likelihood ratios in some exemplary embodiments) or it uses decoding with the composite and larger SCS. Some embodiments pertain to a base station for a mobile telecommunications system comprising circuitry configured to communicate with at least one user equipment, wherein the circuitry is further configured to transmit symbols in a physical broadcast channel by applying a phase ramp, as will also be discussed further below. Hence, some embodiments allow carrying PBCH in a more robust manner by exploiting processing gain instead of FEC coding; this may reduce the number of bits to be transmitted, allowing more compact transmission. Each burst of the physical broadcast channel may include a reference symbol. The reference symbol may be based on a Zadoff-Chu sequence. A root of the Zadoff-Chu sequence may differ from a root of a Zadoff-Chu sequence used for a primary synchronization signal. The phase ramp may be based on a phase shift applied to a previous symbol. The phase shift may be spread over 2π. Corresponding embodiments pertain to a user equipment for a mobile telecommunications system comprising circuitry configured to communicate with at least one base station, wherein the circuitry is further configured to receive symbols over a physical broadcast channel, wherein a phase ramp is applied to at least one symbol and the bursts of the physical broadcast channel are repeated once.
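A minimal sketch of carrying information in a phase ramp, assuming the message index sets a constant symbol-to-symbol phase increment spread over 2π and the receiver recovers it by locating a peak in a transform across the burst's symbols (a forward DFT is used here; the IDFT mentioned below only flips the sign convention of the ramp). All sizes are placeholders.

```python
import numpy as np

def ramp_burst(message, n_messages, n_symbols):
    # each successive symbol advances by 2*pi*message/n_messages radians
    k = np.arange(n_symbols)
    return np.exp(1j * 2 * np.pi * message * k / n_messages)

def detect_ramp(burst, n_messages):
    # a constant phase increment appears as a single peak when the burst
    # is transformed across its symbols (zero-padded to n_messages bins)
    spectrum = np.fft.fft(burst, n_messages)
    return int(np.argmax(np.abs(spectrum)))
```

The peak-detection step is where the processing gain arises: all symbols of the burst contribute coherently to one bin, so the message survives noise that would corrupt individual symbols.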
Hence, some embodiments provide a receiving side including decoding the phase shifts, CRC checking, potentially exiting after the first two bursts if the CRC passes, and, if the CRC fails, storing and combining results prior to IFFT and peak detection and CRC checking a second time. The phase ramp may be estimated on at least one symbol of a burst of the physical broadcast channel and a burst reference sequence. The phase ramp may be detected based on an inverse discrete Fourier transformation. All embodiments disclosed herein can be combined, such that, for example also embodiments exist where a base station implements all or any sub-combination of base station features as described herein and a user equipment implements all or any sub-combination of user equipment features as described herein. Returning toFIG.1, there is illustrated a RAN1which has a macro cell2, which is established by a LTE eNodeB3, and a NR cell4, which is established by a NR eNodeB5. A UE6can communicate with the LTE eNodeB3and, as long as it is within the NR cell4, it can also communicate with the NR eNodeB5. As mentioned above, for example, when the UE6wakes up, it may search for the primary and secondary synchronization signals. Although the communication according to LTE is generally known,FIG.2schematically and exemplarily illustrates an FDD (Frequency Division Duplexing) frame10which has a length of 10 ms. Each vertical line corresponds to one slot11and two slots form a subframe12. One slot includes in this example six resource blocks, wherein one resource block13is depicted on the right side of the frame10. Each resource block is divided into twelve subcarriers14, wherein each of the subcarriers14is carried on seven OFDM symbols15. The PBCH may be located in a different slot of the same subframe, e.g. in the next slot following slot11.
As mentioned above, in LTE, the PSS is located in the last OFDM (Orthogonal Frequency-Division Multiplexing) symbol of the first time slot of the first subframe (subframe 0) of a radio frame, such as radio frame10. In frame10, the first time slot is time slot11in the first subframe12, i.e. the leftmost vertical structure of six resource blocks. In a first step, the UE decodes the primary synchronization signal (PSS). In a next step, typically, the UE finds the secondary synchronization signal (SSS), wherein the symbols of the SSS are located in the same subframe as the symbols of the PSS. In LTE-A, the PSS is composed of one of three sequences. Detecting any one of these sequences at the UE, e.g. UE6, indicates that the component carrier is transmitted from one of three possible sectors of the (NR) eNodeB, e.g. eNodeB5. The SSS on the other hand carries two 31 element sequences which can be arranged in one of 168 possible ways. Between the PSS and SSS can therefore be signaled 504 different cell identities ranging from 0 to 3*167+2=503. In the following, an embodiment pertaining to signaling the SCS by use of the PSS is discussed under reference ofFIG.3, which is a flowchart of a mobile telecommunications method30which can be performed by a (NR) base station, such as eNodeB5and a user equipment, such as UE6. Moreover, the SCS of the PSS and of the PBCH may differ from each other, as will be discussed below. In this embodiment, the PSS is structured to also convey information of the SCS that will be used for the PBCH for indicating which amount of SCS is used. Here, the used PSS sequence is exemplarily divided at31, e.g. by the eNodeB, into two halves (in other embodiments it could be divided into more parts). Let the basic sequence used for the PSS be designated as Z̈_i(n), n=0, 1, . . .
, N and i∈{0, 1, 2}, then the modified PSS sequence Z_i(n) would be:

$$Z_i(n)=\begin{cases}S_L\cdot \ddot{Z}_i(n) & \text{for } n \le \frac{N}{2}\\ 0 & \text{for } n = \left\lceil\frac{N+1}{2}\right\rceil\\ S_H\cdot \ddot{Z}_i(n) & \text{for } n > \left\lceil\frac{N+1}{2}\right\rceil\end{cases}$$

where (N+1) (often odd) is the length of the sequence. In LTE, N=62. With the binary variables S_L and S_H, at least 4 different SCS for the PBCH symbols can be signaled as in Table 1, whereby at32the SCS may be indicated, e.g. by the eNodeB5. Hence, for each half and the associated SCS used in the PBCH, a sign is assigned at32for indicating the SCS.

TABLE 1 — Example settings for signaling of PBCH subcarrier spacing

  S_L    S_H    PBCH SCS, a
   1      1     0
   1     −1     1
  −1      1     2
  −1     −1     3

The entry a in the PBCH SCS column of Table 1 may be a scale factor such that the SCS of the PBCH OFDM symbols is 2^a times the SCS of the OFDM symbol carrying the PSS. Therefore, when a=0, the SCS of the PBCH OFDM symbols is the same as that of the OFDM symbol carrying the PSS. In another embodiment, a could be the index to a table of SCS values, such that thereby the SCS used can be indicated at32. The table may directly include SCS values or it may even point to another table, or it may include a formula for calculating SCS values, etc. This approach might be seen as increasing the number of sequences to search through at the UE by a factor of four, thereby quadrupling the amount of processing for PSS detection. However, because the SCS signaling is only carried in the sign of halves of the original PSS sequence, the processing complexity can be considerably reduced by computing the correlation with the original sequence Z_i(n) in two halves at the UE receiving side at33. Assuming the received sequence is R(n), the half-correlations for a given index k are:

$$X_L(k)=\sum_{n=0}^{N/2} Z_i(n)\,R^*(n+k)\qquad X_H(k)=\sum_{n=\lceil (N+1)/2\rceil}^{N} Z_i(n)\,R^*(n+k)$$

Then, to find the correlation for the various entries in Table 1, the different correlation results X(k) are derived as illustrated in Table 2.
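A sketch of this scheme, assuming N=62 as in LTE: the two halves of a base sequence are multiplied by the signs (S_L, S_H), and the receiver recovers the pair — and hence the scale factor a of Table 1 — from the two half-correlations. For simplicity a noise-free, phase-aligned channel is assumed, so the signs can be read directly from the real parts of the half-correlations, which is equivalent to selecting the largest-magnitude sign combination of the half-correlation sums. The sequence and lag are illustrative placeholders.

```python
import numpy as np

# Table 1 mapping from the sign pair (S_L, S_H) to the scale factor a
SIGN_TO_A = {(1, 1): 0, (1, -1): 1, (-1, 1): 2, (-1, -1): 3}

def signed_pss(base, s_l, s_h):
    # base has N+1 elements (N even, e.g. N=62): sign the two halves
    # and zero the middle element n = ceil((N+1)/2)
    N = len(base) - 1
    mid = (N + 2) // 2
    z = base.astype(complex).copy()
    z[:N // 2 + 1] *= s_l        # n <= N/2
    z[mid] = 0                   # n == ceil((N+1)/2)
    z[mid + 1:] *= s_h           # n >  ceil((N+1)/2)
    return z

def detect_signs(received, base):
    # half-correlations X_L, X_H at lag k = 0; with an ideal channel the
    # signs are read off the real parts (the sign search of Table 2)
    N = len(base) - 1
    mid = (N + 2) // 2
    x_l = np.vdot(received[:N // 2 + 1], base[:N // 2 + 1])  # sum Z(n) R*(n)
    x_h = np.vdot(received[mid:], base[mid:])
    return int(np.sign(x_l.real)), int(np.sign(x_h.real))

n = np.arange(63)
base = np.exp(-1j * np.pi * 25 * n * (n + 1) / 63)  # illustrative CAZAC root
rx = signed_pss(base, -1, 1)                         # noise-free "reception"
```

Each half-correlation reuses the same partial sums a plain PSS search already computes, which is why the extra cost over ordinary PSS detection is only the sign toggling.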
TABLE 2
Computation of correlation results related to PBCH SCS signaling

  S_L    S_H    X(k)
   1      1     |X_L(k) + X_H(k)|
   1     −1     |X_L(k) − X_H(k)|
  −1      1     |−X_L(k) + X_H(k)|
  −1     −1     |−X_L(k) − X_H(k)|

The sign search therefore entails only toggling the signs of each half-correlation systematically before summing up the results, which may be a minuscule change in complexity in some embodiments. Based on the signs found in the sign search, the UE 6 determines the SCS at 34. As mentioned, the disclosure is not restricted to signaling only four SCS; any other number of SCS can also be signaled using this method, e.g. by dividing the PSS sequence into more than two parts. In the following, an embodiment pertaining to the implicit scaling of the PBCH SCS is discussed under reference of FIGS. 4 and 5. FIG. 4 is a flowchart of a mobile telecommunications method 40 which can be performed by a base station, such as eNodeB 5, and a user equipment, such as UE 6. FIG. 5 illustrates subcarrier scaling by QAM symbol repetition. The SCS used for the payload data (i.e. data transmitted on the PDSCH, which is transmitted, for example, in the first subframe in one or both slots) can differ between subframes or slots of the same component carrier or between base stations of the same PLMN (Public Land Mobile Network). In an exemplary embodiment, the subcarrier spacings that can be used are power-of-two multiples of each other, while in other embodiments, as discussed above, the OFDM symbols are repeated by a specific (integer) number, e.g. three times, which may result in a 45 kHz SCS if the base SCS is 15 kHz, or five times, which may result in a 75 kHz SCS if the base SCS is 15 kHz (without limiting the present disclosure to these examples). The PBCH is not expected to use all the resource elements (REs) in each of the OFDM symbols it is carried on. Therefore, the PBCH-carrying OFDM symbols have to carry payload (PDSCH) or other signals on the same OFDM symbol. If the PBCH is to use, for data transmission at 41 by e.g.
eNodeB 5, a wider SCS than the rest of the data that shares its OFDM symbols, and the PBCH REs use a different SCS than the other REs of the same OFDM symbol, then there may be a need for guard bands separating the zones of differing SCS in order to avoid interference arising from the differences in SCS. The present embodiment provides a solution where a guard band may not be needed and may therefore be obsolete, by repeating adjacent resource elements in the PBCH-carrying REs, performed at 42, e.g. by eNodeB 5. Therefore, for a PBCH SCS entry a in Table 1, each PBCH QAM symbol would be mapped to 2^a adjacent REs. This is illustrated for a=0, 1, 2, 3 in FIG. 5. In each vertical grid, the grey lines indicate repetition boundaries while the black lines indicate a change of QAM symbol, such that for a=0 no repetition is present, for a=1 two repetitions, for a=2 four repetitions and for a=3 eight repetitions. Transmitted with this arrangement, each PBCH-carrying OFDM symbol can be equalized as a signal with SCS corresponding to a=0, in which case, after equalization, the REs carrying repetitions of the same QAM symbol should be maximal-ratio combined prior to QAM de-mapping. The PBCH-carrying OFDM symbol can also be equalized as a signal with SCS scaled by 2^a to avoid the maximal-ratio combining and increase resilience to Doppler spread. For a fixed number of bits in the PBCH, this repetition reduces the PBCH carrying capacity of each OFDM symbol by a factor of 2^a (i.e. to 2^-a of the original). It therefore follows that for the same bandwidth, the number of OFDM symbols required to carry the PBCH would increase by a factor of 2^a. On the other hand, since it is possible to think of the REs as having a bandwidth of 2a (i.e.
2^a) times the basic SCS, assigning the same number of these wider REs per OFDM symbol to carry the PBCH as before requires that the raw bandwidth allocated for the PBCH increase to 2^a times what it was with the basic SCS. In this embodiment, the number of OFDM symbols necessary to carry one PBCH remains the same as before repetition. When this repetition approach is taken, the reference symbols used for demodulating the PBCH at 43 have to be similarly repeated or distributed at 44, taking into account the 2^a scaling of the SCS, wherein this is performed e.g. by UE 6. Alternatively, the PBCH can be decoded using the PSS and/or SSS as reference symbols, in which case there is no need for reference symbols within the PBCH structure. Despite the above analysis comparing repetition versus non-repetition, it should be understood that in Rel-13 of LTE, the PBCH transmits 40 bits (24 bits + 16 CRC bits). These bits are coded with a combination of an actual 1/3-rate code and repetition coding to an aggregate code rate of 1/48, giving a total of 1920 bits for the output block size. This block is then broken up into 4 bursts, each of 480 bits, each to be modulated by QPSK modulation and transmitted in a subframe using 240 REs. Using this disclosure to increase the SCS by 2^a for the PBCH OFDM symbols, as discussed above, the repetition code rate can be proportionally increased so that the aggregate code rate becomes 2^a/48 in some embodiments. For example, let a=3 so that 2^a=8. Then the effective aggregate code rate can be increased to 8/48=1/6, i.e. after the 1/3-rate code, a repetition code of rate 1/2 (each bit repeated twice) is applied, instead of rate 1/16 as before. The PBCH output coded block size is now 40*6=240 bits. As in Rel-14 of LTE, this is divided into 4 equal bursts, each of 60 bits, modulated into 30 QPSK symbols. An SCS scaled by 2^a=8 entails repeating each QPSK symbol over 8 adjacent REs.
Thus the 30 QPSK symbols of each burst would still occupy 30*8=240 REs in the subframe, as before. Therefore, there is no overall increase in required REs. However, in some embodiments, there is some loss of coding gain because of the higher aggregate code rate. This is more than made up for by the diversity gain arising from repeating each QPSK symbol 8 times. Thus, the PBCH still occupies 480 physical bits/subframe (and still occupies 1.08 MHz with SCS=15 kHz, and the time taken to transmit 4 subframes). In the following, an embodiment pertaining to a more robust method to carry the PBCH in OFDM symbols of smaller SCS is discussed under reference of FIGS. 6 and 7. FIG. 6 is a flowchart of a mobile telecommunications method 50 which can be performed by a base station, such as eNodeB 5, and a user equipment, such as UE 6. FIG. 7 schematically illustrates a receiver of a UE, such as UE 6. The repetition of QAM symbols on adjacent REs, as discussed above, may have, in some embodiments, the deleterious effect of either requiring an increase in the number of OFDM symbols carrying the PBCH or increasing the raw bandwidth of the PBCH. Increasing the raw bandwidth may not be feasible if the expected raw bandwidth exceeds the component carrier bandwidth. The increased bandwidth may also result in an increase in power consumption for the UE because of the wider bandwidth to be tuned. On the other hand, if the raw bandwidth is kept constant and the number of OFDM symbols is increased instead, this increase in the number of symbols results in a delay in decoding of the PBCH. It is therefore desirable, in some embodiments, to minimize as much as possible the factor 2^a, as this minimizes either the required increase in bandwidth or the number of OFDM symbols. It may still be desirable to have 2^a>1 (wider SCS for the PBCH), but if the PBCH information can be carried more robustly, the factor 2^a may be set to 2 or 4 instead of 8.
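The burst and RE arithmetic above can be checked with a short helper. This is an illustrative sketch (the function and parameter names are invented for this example); it reproduces the quoted Rel-13 numbers for a=0 and shows that the RE count per subframe is unchanged for any scale factor a, because the 2^a-fold increase in aggregate code rate exactly cancels the 2^a-fold RE repetition.

```python
def pbch_res_per_subframe(a, pbch_bits=40, base_rate_denom=48,
                          bits_per_qpsk=2, bursts=4):
    """REs per subframe when the aggregate PBCH code rate is scaled by 2**a."""
    coded_bits = pbch_bits * base_rate_denom // (2 ** a)  # 1920 for a = 0
    bits_per_burst = coded_bits // bursts                 # 480 for a = 0
    qpsk_symbols = bits_per_burst // bits_per_qpsk        # 240 for a = 0
    return qpsk_symbols * (2 ** a)   # each QPSK symbol spans 2**a adjacent REs

# Rel-13 baseline and the a = 3 example from the text both need 240 REs:
assert pbch_res_per_subframe(0) == 240
assert pbch_res_per_subframe(3) == 240   # 30 QPSK symbols x 8 adjacent REs
```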
This would reduce the necessary bandwidth or number of PBCH-carrying symbols significantly in some embodiments. In LTE-A, the PBCH after CRC protection is made up of 40 bits before error control coding is applied to it. By substituting processing gain for coding gain, the FEC coding of the PBCH can be dispensed with. The 40-bit PBCH is split into eight fields, each five bits long. Each burst of PBCH-bearing OFDM symbols is comprised of five OFDM symbols, the first being a burst reference symbol, whose REs align with those of the following four burst PBCH symbols. The reference symbol carries a burst reference sequence in the REs reserved for the PBCH. In an embodiment, the burst reference sequence is a particular Zadoff-Chu (ZC) sequence, which is generally known, with a different root than the sequences used for the PSS. In another embodiment, the chosen ZC sequence is also multiplied by an N-chip m-sequence to improve its autocorrelation properties (in LTE Rel-14, N=62). Each of the following four PBCH symbols of the burst carries one of the PBCH fields by applying a calculated phase ramp at 51, e.g. by the eNodeB 5, to each of the REs of a copy of the burst reference symbol. The phase ramp to apply is calculated from the sum of the decimal value of the five-bit PBCH field and the phase shift applied to the previous symbol. Since each burst carries only four PBCH fields, it therefore takes two PBCH bursts at 52, e.g. performed by the eNodeB 5, to carry all eight fields of the PBCH. In LTE-A, the PBCH is carried over four bursts (one burst is transmitted per radio frame over a period of 40 ms). Equivalently, in an embodiment of this case, the two bursts carrying all the fields of the PBCH are repeated at 53, e.g. performed by the eNodeB 5, once per 40 ms period (e.g. the first burst is transmitted in the first and third radio frames of a 40 ms period and the second burst is transmitted in the second and fourth radio frames of the 40 ms period).
In one embodiment, the relative phase shift $M_0$ applied to the reference symbol is substantially zero, but it can be some other fixed and known value. Then the sequence for the REs of PBCH burst symbol n will be:

$$S_n(k) = Z(k) \cdot e^{j 2\pi k (M_{n-1} + f(D_n)) / N_{FFT}}$$
$$M_n = (M_{n-1} + f(D_n)) \bmod N_{FFT}$$

for n=1, 2, 3, 4, where $D_n$ is the decimal value of the 5-bit field allocated for transmission in symbol n, Z(k) is the burst reference symbol sequence and $N_{FFT}$ is the size of the DFT used for converting the PBCH time domain symbol into the frequency domain. With $0 \le D_n \le 31$, there are therefore only 32 possible valid phase shifts that can be applied to the reference sequence. However, the reference sequence has length $N_{FFT}$, which is more than 32. This means the quantization of the phase shifts can be significantly coarser. This is done by use of the function $f(\cdot)$ ("$\cdot$" being a placeholder) on the decimal value of the PBCH field. The effect of this function is to spread the possible phase shifts uniformly over $2\pi$. The function $f(\cdot)$ therefore converts a number $0 \le D_n \le 31$ to a number $0 \le f(D_n) \le N_{FFT}$, where in general $D_n \le N_{FFT}$. At the receiver, e.g. performed by the UE 6, the phase ramps can be estimated at 54 without explicit channel estimation and correction, since the UE knows the symbols of the PBCH burst and the burst reference sequence Z(k). Thus, for the sequence of the REs of the PBCH burst symbol n, assuming the received resource elements are $R_n(k)$ and that the channel H(k) is relatively stationary between symbols n−1 and n, the UE can compute (ignoring the index k; "$\cdot$" denotes a product):

$$R_{n-1} R_n^* = \left(H \cdot Z \cdot e^{-j 2\pi M_{n-1}/N_{FFT}} + N_{n-1}\right) \cdot \left(H \cdot Z \cdot e^{-j 2\pi (M_{n-1}+f(D_n))/N_{FFT}} + N_n\right)^*$$

where $N_n$ is the noise on symbol n. This can be expanded as follows:

$$R_{n-1} R_n^* = |H|^2 |Z|^2 e^{j 2\pi f(D_n)/N_{FFT}} + H \cdot Z \cdot e^{-j\alpha} \cdot N_n^* + H^* \cdot Z^* \cdot e^{j\beta} \cdot N_{n-1} + N_{n-1} \cdot N_n^*$$

where $\beta = 2\pi (M_{n-1}+f(D_n))/N_{FFT}$ and $\alpha = 2\pi M_{n-1}/N_{FFT}$. The second and third terms on the right-hand side are modulated noise, whilst the last term is just plain white noise.
As all the noise is additive, the combined power of these terms depends on the SNR of the received signal. It can therefore be expected, in some embodiments, that at reasonable levels of SNR, the argument or phase trajectory of the result would be dominated by the first term on the right. Thus, by detecting the phase slope of the result, $f(D_n)$ can be detected, i.e. the relative phase ramp between the two symbols. Further, this phase ramp can also be detected by carrying out an IDFT at 55 on the result and taking the sample location of the peak amplitude, since:

$$\mathrm{IFFT}\left(e^{-j 2\pi n k / N_{FFT}}\right) = \mathrm{FFT}\left(e^{j 2\pi n k / N_{FFT}}\right) / N_{FFT}$$

where, for example (as, e.g., also in LTE), $N_{FFT} = 64$. The actual decimal number $D_n$ can then be found by passing the detected sample location through the inverse function $f^{-1}(\cdot)$. Taking LTE as an example, in which $N_{FFT} = 64$, an example function $f(\cdot)$ for spreading all the possible phase shifts uniformly from 0 to $2\pi$ could be:

$$f(n) = 2n \quad \rightarrow \quad f^{-1}(n) = n/2$$

In other embodiments, $f(n)$ can also take into account the known Gray coding, such that any two possible phase shifts that are close in value to each other are derived from values of n that differ by only one bit. A functional receiver algorithm 60, as may be implemented in UE 6, which may perform the receiving steps discussed above, is illustrated in FIG. 7. The algorithm 60 has a "Delay Nu" part 61, a first FFT (Fast Fourier Transform) part 62 and a second FFT part 63, a conjugate part 64, an IFFT (inverse FFT) part 65, and a peak detection function part 66, where $N_u$ in the "Delay Nu" part 61 is the number of time-domain samples in one OFDM symbol (excluding the cyclic prefix) and the input $r_n(i)$ is assumed to be already stripped of the cyclic prefixes. The input $r_n(i)$ is split into a first part and a second part. The first part is delayed by part 61, while the second part is input into the second FFT 63, and the conjugate part 64 calculates the conjugate of the output of FFT 63.
After delaying the first part, it is input into FFT 62, and the output of FFT 62 is multiplied at 67 with the output of 64. The multiplied output is fed to the IFFT part 65 (more generally an IDFT), which calculates the IFFT, as discussed above. The output of 65 is fed to part 66, which performs peak detection on the output of the IFFT and calculates the inverse function of the input, whereby $D_n$ can be derived. In an embodiment where the two bursts carrying all the fields of the PBCH are repeated once, each PBCH field is transmitted twice. This means the CRC can be checked after decoding all fields at the end of the second PBCH burst. If the CRC passes, PBCH decoding can be said to have succeeded and the remaining two bursts can then be ignored. Otherwise, the third and fourth bursts are also received and decoded, and the CRC is checked again. It is expected, in some embodiments, that one of the CRCs passes. In another embodiment of the receiver (for the case where the two bursts carrying all the fields of the PBCH are repeated once and hence each PBCH field is transmitted twice), the outputs of the multiplier 67 in FIG. 7 for each of the PBCH-field-carrying symbols can be stored separately during reception of the first two bursts. Then, during reception of the third and fourth bursts, the output of the multiplier 67 can be combined with that of the equivalent symbols previously stored from the first two bursts, and the result of this combining passed through the IFFT and peak detection. The diversity combining improves the reliability of the detected phase slopes. Carrying information in a pre-processing of a known sequence carried on an OFDM symbol is known and is used, for example, for the ATSC 3.0 bootstrap signal. In that case, the values are carried in a time-domain cyclic shift of a whole OFDM symbol. In this case, the information is carried in a phase ramp restricted to the resource elements allocated for the PBCH. All other resource elements of the symbol are left unchanged.
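The phase-ramp carriage of the PBCH fields can be illustrated end-to-end with a small pure-Python sketch. This is a hypothetical toy, not the patent's implementation: a random unit-modulus sequence stands in for the Zadoff-Chu burst reference sequence Z(k), the channel is ideal and noise-free, f(n)=2n is taken from the example in the text, and the receiver recovers each field with a brute-force DFT peak search over candidate ramp slopes instead of the FFT/IFFT pipeline of FIG. 7.

```python
import cmath
import random

N_FFT = 64                  # DFT size, as in the example in the text

def f(n):                   # spreads the 5-bit values 0..31 uniformly over 2*pi
    return 2 * n

def f_inv(m):
    return m // 2

# Hypothetical stand-in for the burst reference sequence Z(k)
# (the patent uses a Zadoff-Chu sequence with a PSS-distinct root):
random.seed(0)
Z = [cmath.exp(2j * cmath.pi * random.random()) for _ in range(N_FFT)]

def encode_burst(fields):
    """Map four 5-bit PBCH fields onto phase-ramped copies of the reference."""
    syms, m = [Z[:]], 0      # symbol 0 is the burst reference symbol (M_0 = 0)
    for d in fields:
        m = (m + f(d)) % N_FFT
        syms.append([Z[k] * cmath.exp(2j * cmath.pi * k * m / N_FFT)
                     for k in range(N_FFT)])
    return syms

def decode_burst(syms):
    """Recover each field from the relative phase ramp between symbols."""
    fields = []
    for n in range(1, len(syms)):
        # product carries the pure ramp exp(j*2*pi*k*f(D_n)/N_FFT)
        prod = [syms[n][k] * syms[n - 1][k].conjugate() for k in range(N_FFT)]
        # brute-force DFT: the slope with maximum correlation is f(D_n)
        mags = [abs(sum(prod[k] * cmath.exp(-2j * cmath.pi * k * m / N_FFT)
                        for k in range(N_FFT))) for m in range(N_FFT)]
        fields.append(f_inv(mags.index(max(mags))))
    return fields

fields = [5, 17, 0, 31]
assert decode_burst(encode_burst(fields)) == fields
```

Because each symbol's absolute ramp accumulates the previous shifts, only the symbol-to-symbol difference carries information, which is why no explicit channel estimate is needed in this noise-free case.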
As mentioned above, embodiments may be combined. For example, in some embodiments, the embodiments of FIGS. 1 to 7 are combined. Also, the embodiments of FIGS. 3 to 7 may be combined, or the embodiments of FIGS. 3 and 6 to 7, or the embodiments of FIGS. 3 and 4 to 5, etc. Some embodiments may pertain to a combination of the embodiments of FIGS. 4 to 7. In particular, in some embodiments, the methods 30, 40 and 50 are combined into one method, or the methods 30 and 40 are combined into one method, or the methods 30 and 50 are combined into one method. Some embodiments may pertain to a combination of methods 40 and 50. In the following, an embodiment of a general-purpose computer 130 is described under reference of FIG. 8. The computer 130 can be implemented such that it can basically function as any type of base station or new radio base station, transmission and reception point, or user equipment as described herein. The computer has components 131 to 140, which can form a circuitry, such as any one of the circuitries of the base stations and user equipments as described herein. Embodiments which use software, firmware, programs or the like for performing the methods as described herein can be installed on computer 130, which is then configured to be suitable for the concrete embodiment. The computer 130 has a CPU 131 (Central Processing Unit), which can execute various types of procedures and methods as described herein, for example in accordance with programs stored in a read-only memory (ROM) 132, stored in a storage 137 and loaded into a random access memory (RAM) 133, stored on a medium 140 which can be inserted in a respective drive 139, etc. The CPU 131, the ROM 132 and the RAM 133 are connected with a bus 141, which in turn is connected to an input/output interface 134.
The number of CPUs, memories and storages is only exemplary, and the skilled person will appreciate that the computer 130 can be adapted and configured accordingly for meeting the specific requirements which arise when it functions as a base station or user equipment. Several components are connected to the input/output interface 134: an input 135, an output 136, the storage 137, a communication interface 138 and the drive 139, into which a medium 140 (compact disc, digital video disc, compact flash memory, or the like) can be inserted. The input 135 can be a pointer device (mouse, graphics tablet, or the like), a keyboard, a microphone, a camera, a touchscreen, etc. The output 136 can have a display (liquid crystal display, cathode ray tube display, light-emitting diode display, etc.), loudspeakers, etc. The storage 137 can have a hard disk, a solid state drive and the like. The communication interface 138 can be adapted to communicate, for example, via a local area network (LAN), wireless local area network (WLAN), mobile telecommunications system (GSM, UMTS, LTE, NR, etc.), Bluetooth, infrared, etc. It should be noted that the description above only pertains to an example configuration of computer 130. Alternative configurations may be implemented with additional or other sensors, storage devices, interfaces or the like. For example, the communication interface 138 may support other radio access technologies than the mentioned UMTS, LTE and NR. When the computer 130 functions as a base station, the communication interface 138 can further have a respective air interface (providing e.g. E-UTRA protocols OFDMA (downlink) and SC-FDMA (uplink)) and network interfaces (implementing for example protocols such as S1-AP, GTP-U, S1-MME, X2-AP, or the like). Moreover, the computer 130 may have one or more antennas and/or an antenna array. The present disclosure is not limited to any particularities of such protocols.
The methods as described herein are also implemented in some embodiments as a computer program causing a computer and/or a processor and/or circuitry to perform the method, when being carried out on the computer and/or processor and/or circuitry. In some embodiments, also a non-transitory computer-readable recording medium is provided that stores therein a computer program product, which, when executed by a processor and/or circuitry, such as the processor and/or circuitry described above, causes the methods described herein to be performed. It should be recognized that the embodiments describe methods with an exemplary order of method steps. The specific order of method steps is, however, given for illustrative purposes only and should not be construed as binding. All units and entities described in this specification and claimed in the appended claims can, if not stated otherwise, be implemented as integrated circuit logic, for example on a chip, and functionality provided by such units and entities can, if not stated otherwise, be implemented by software. In so far as the embodiments of the disclosure described above are implemented, at least in part, using a software-controlled data processing apparatus, it will be appreciated that a computer program providing such software control and a transmission, storage or other medium by which such a computer program is provided are envisaged as aspects of the present disclosure. Note that the present technology can also be configured as described below. (1) A base station for a mobile telecommunications system comprising circuitry configured to communicate with at least one user equipment, wherein the circuitry is further configured to:set a first subcarrier spacing for transmission of at least one synchronization signal; andset a second subcarrier spacing for transmission on a physical broadcast channel, wherein the first subcarrier spacing differs from the second subcarrier spacing. 
(2) The base station of (1), wherein the synchronization signal is a primary synchronization signal or a secondary synchronization signal. (3) A user equipment for a mobile telecommunications system comprising circuitry configured to communicate with at least one base station, wherein the circuitry is further configured to: receive at least one synchronization signal and receive a transmission over a physical broadcast channel, wherein the synchronization signal is transmitted with a first subcarrier spacing and the transmission over the physical broadcast channel is transmitted with a second subcarrier spacing, wherein the first subcarrier spacing differs from the second subcarrier spacing. (4) A base station for a mobile telecommunications system comprising circuitry configured to communicate with at least one user equipment, wherein the circuitry is further configured to: indicate, based on a primary synchronization signal, a specific subcarrier spacing for a physical broadcast channel. (5) The base station of (4), wherein the primary synchronization signal includes a symbol sequence. (6) The base station of (5), wherein the symbol sequence is divided into at least two parts and the specific subcarrier spacing is indicated based on the division of the symbol sequence. (7) The base station of (6), wherein for each part of the divided symbol sequence, a positive or negative sign is assigned. (8) The base station of (7), wherein the specific subcarrier spacing is indicated based on the signs assigned to the parts. (9) The base station of any one of (6) to (8), wherein the amount of the specific subcarrier spacing depends on the number of parts. (10) The base station of (9), wherein the amount of the specific subcarrier spacing of the physical broadcast channel and the number of parts into which the symbol sequence is divided are predefined.
(11) A user equipment for a mobile telecommunications system comprising circuitry configured to communicate with at least one base station, wherein the circuitry is further configured to: receive at least one primary synchronization signal; and determine, based on the received at least one primary synchronization signal, a subcarrier spacing of a physical broadcast channel. (12) The user equipment of (11), wherein the primary synchronization signal includes a symbol sequence. (13) The user equipment of (12), wherein the symbol sequence is divided into at least two parts and the specific subcarrier spacing is determined based on the division of the symbol sequence. (14) The user equipment of (13), wherein for each part of the divided symbol sequence, a positive or negative sign is assigned. (15) The user equipment of (14), wherein the specific subcarrier spacing is determined based on the signs assigned to the parts. (16) The user equipment of any one of (13) to (15), wherein the amount of the specific subcarrier spacing depends on the number of parts. (17) The user equipment of (16), wherein the amount of the specific subcarrier spacing of the physical broadcast channel and the number of parts into which the symbol sequence is divided are predefined. (18) The user equipment of any one of (13) to (17), further comprising computing a correlation between parts of the received primary synchronization signal. (19) A base station for a mobile telecommunications system comprising circuitry configured to communicate with at least one user equipment, wherein the circuitry is further configured to: transmit symbols on a physical broadcast channel, wherein the symbols are transmitted with a subcarrier spacing and modulation symbols are repeated in consecutive resource elements in the frequency domain. (20) The base station of (19), wherein the symbols include quadrature amplitude modulation symbols.
(21) A user equipment for a mobile telecommunications system comprising circuitry configured to communicate with at least one base station, wherein the circuitry is further configured to: receive symbols on a physical broadcast channel, wherein the symbols are transmitted with a subcarrier spacing and modulation symbols are repeated in consecutive resource elements in the frequency domain. (22) The user equipment of (21), wherein the symbols include quadrature amplitude modulation symbols. (23) The user equipment of (21) or (22), wherein the receiving of the symbols includes decoding the repeatedly transmitted symbol based on the subcarrier spacing. (24) The user equipment of any one of (21) to (23), wherein the receiving of the symbols includes decoding the repeatedly transmitted symbol based on a second subcarrier spacing. (25) A base station for a mobile telecommunications system comprising circuitry configured to communicate with at least one user equipment, wherein the circuitry is further configured to: transmit symbols in a physical broadcast channel by applying a phase ramp. (26) The base station of (25), wherein each burst of the physical broadcast channel includes a reference symbol. (27) The base station of (26), wherein the reference symbol is based on a Zadoff-Chu sequence. (28) The base station of (27), wherein a root of the Zadoff-Chu sequence differs from a root of a Zadoff-Chu sequence used for a primary synchronization signal. (29) The base station of any one of (25) to (28), wherein the phase ramp is based on a phase shift applied to a previous symbol. (30) The base station of any one of (25) to (29), wherein a phase shift of the phase ramp is spread over 2π. (31) A user equipment for a mobile telecommunications system comprising circuitry configured to communicate with at least one base station, wherein the circuitry is further configured to: receive symbols over a physical broadcast channel, wherein a phase ramp is applied to at least one symbol.
(32) The user equipment of (31), wherein the phase ramp is estimated on at least one symbol of a burst of the physical broadcast channel and a burst reference sequence. (33) The user equipment of (31) or (32), wherein the phase ramp is detected based on an inverse discrete Fourier transformation.
11943164
DETAILED DESCRIPTION OF THE EMBODIMENTS The present disclosure will be described in detail below using some embodiments with reference to the accompanying drawings. It should be noted that, without conflict, the embodiments and the features of the embodiments of the present disclosure can be combined. As a continuation of the development of the fourth generation terrestrial-based 4G LTE cellular mobile communication system, the fifth generation terrestrial-based 5G cellular mobile communication system includes two major subsystems: a next generation core network 5GC (5G Core) and a Next Generation Radio Access Network (NG-RAN). The 5GC includes network nodes such as the Access and Mobility Management Function (AMF), the Session Management Function (SMF) and the User Plane Function (UPF). The NG-RAN includes at least two types of base stations with different Radio Access Technologies (RATs), i.e. the next generation eNB (ng-eNB, whose air interface still supports the Evolved Universal Terrestrial Radio Access (E-UTRA) RAT and which is provided as a smooth evolution of the 4G eNB) and the gNB (whose air interface supports the New Radio (NR) RAT and which has a new physical-layer air interface design), together with the interfaces between the related network elements. An NG-RAN Node (a gNB or an ng-eNB) is connected to the 5GC by means of a standardized NG interface (including an NG-C control plane connection, used for signaling transmission, and an NG-U user plane connection, used for user data transmission), and NG-RAN Nodes are connected to one another by means of an Xn interface (including an Xn-C control plane connection and an Xn-U user plane connection). FIG. 1 is an architecture diagram illustrating a situation where an aggregated NG-RAN Node adopts an air interface protocol stack with CU/DU split. FIG. 2 is an architecture diagram illustrating a situation where a disaggregated NG-RAN Node adopts an air interface protocol stack with CU/DU split.
Taking the gNB-CU/DU split supported by the 3rd Generation Partnership Project (3GPP) protocol as an example, a single gNB is split into a single gNB-CU and a plurality of gNB-DU network node entities, which are connected with each other through a standardized F1 interface; such a connection includes an F1-C control plane connection and an F1-U user plane connection. The gNB with CU/DU split and the gNB/ng-eNB without CU/DU split still adopt the NG interface and the Xn interface. The control plane connection of a network interface is used for transmitting control signaling messages between network nodes, and the user plane connection is used for transmitting user service data (packets). The NG Application Protocol (NGAP), XnAP and F1AP are respectively the logical network application layer protocols of the NG-C, Xn-C and F1-C control plane Radio Network Layer (RNL), and transmit the control signaling of the corresponding interface on the basis of a Transport Network Layer (TNL) transmission bearer (a Stream Control Transmission Protocol (SCTP) connection). For the user service data frames of the NG-U, Xn-U and F1-U user plane interfaces, the user service data (packets) of the corresponding interface are transmitted on the basis of the TNL transmission bearer of the lower network layer (a General Packet Radio Service Tunneling Protocol (GTP)-U tunnel). In a future large heterogeneous network adopting hybrid deployment of micro cells and macro cells, there may be various types of RAN Nodes having different Radio Access Technologies (RATs), different frequency bandwidths and different wireless coverage capabilities, for example network nodes such as legacy eNB, gNB, ng-eNB, and Wireless Local Area Network (WLAN) AP.
As shown in FIG. 3, in a certain physical area, a Master Node (MN) is usually used for providing wireless macro coverage, and in a local hotspot or a weak coverage area, a plurality of Secondary Nodes (SNs) are used for providing enhanced micro coverage, so as to improve network capacity and performance. These different types of RAN Nodes can provide radio access functions of different capabilities and different performances for UEs. The RAN Nodes and the UE are connected with each other by the 3GPP-standardized Uu air interface (i.e., the radio air interface), and the connection includes an air interface control plane connection bearer, the SRB (Signaling Radio Bearer, used for transmitting air interface control signaling), and an air interface user plane connection bearer, the DRB (Data Radio Bearer, used for transmitting air interface user service data packets). The RAN Node may simultaneously configure a plurality of different SRBs and DRBs for the UE according to the requirements of different services. In order to reduce the cost and complexity brought to a mobile network operator when assessing cellular network performance through manual drive tests with dedicated equipment and personnel, a series of Minimization of Drive Tests (MDT) functions were introduced from the Rel-10 version of the 4G LTE-A system. The MDT technology uses a plurality of ordinary users' UEs in LTE mode to sense, measure, collect, record and store various UE-side network performance indexes, such as MDT Logs, under the guidance of network configuration. Then, upon receiving a request message from a RAN eNB, the UE first reports the MDT Logs to the RAN eNB over SRB2 at the air interface, and the eNB then further reports the MDT Logs to a Trace Collection Entity (TCE) of the network Operation And Maintenance (OAM) system by an upstream terrestrial network interface of the eNB.
In the past, the TCE is usually located in an MME core network entity of 4G LTE, but in the future it may also be deployed independently of a 5GC core network entity. The MDT Logs reported by numerous UEs may be used to assist in determining the performance reflecting the comprehensive deployment effect of the network and in guiding the adjustment and optimization of various function parameters of the network. For example, problems such as weak coverage and blind spots in some areas of a network can be found, and cases where the network cannot satisfy the requirements at some hotspot areas with high-capacity communication demand or cannot guarantee the Quality of Service (QoS) required by some user services can also be found. The MDT can be classified into a Management based MDT and a Signaling based MDT based on the manner in which the network side provides the MDT-related parameter configuration. In the activation process of the Management based MDT, the OAM directly sends, to the target eNB, a Trace session activation message containing the MDT parameter configuration, and the eNB selects appropriate target UE in an MDT valid area specified by the Trace session activation message and sends the MDT parameter configuration information to the selected target UE. In the activation process of the Signaling based MDT, the OAM first sends a Trace session activation message including the MDT parameter configuration to a Home Subscriber Server (HSS, which is also referred to as a location register) to activate MDT measurement of specified target UE; then the HSS sends the MDT parameter configuration information of the UE to a core network, the core network then sends the MDT configuration information of the UE to the target eNB, and the eNB finally sends the MDT parameter configuration information to the target UE to perform the MDT. 
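The two activation paths described above differ only in how many hops the MDT configuration traverses before reaching the target UE. The sketch below is a minimal illustration of that difference; the entity names in the hop lists are taken from the text, but representing the paths as simple lists is an assumption for illustration only.

```python
# Illustrative sketch of the two MDT activation paths described above.
def activation_path(mdt_type: str) -> list:
    """Return the ordered hops a Trace session activation message (with the
    MDT parameter configuration) traverses before reaching the target UE."""
    if mdt_type == "management":
        # OAM sends directly to the eNB; the eNB selects suitable target UE
        # inside the MDT valid area specified by the activation message.
        return ["OAM", "eNB", "UE"]
    if mdt_type == "signaling":
        # OAM -> HSS -> core network -> eNB -> the specified target UE.
        return ["OAM", "HSS", "core network", "eNB", "UE"]
    raise ValueError("unknown MDT type: " + mdt_type)

print(activation_path("signaling"))
```

The Management based path lets the eNB pick the UEs; the Signaling based path pins the configuration to one specified UE via its subscriber record in the HSS.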
The MDT can be classified into a Logged MDT (corresponding to a UE in an idle state (RRC_IDLE) or in an inactive state (RRC_INACTIVE)) and an Immediate MDT (corresponding to a UE in a connected state (RRC_CONNECTED)) according to the RRC state in which the UE executes MDT tasks. Logged MDT refers to the following: in the RRC_IDLE or RRC_INACTIVE state, the UE periodically executes various types of Logged MDT measurement tasks according to the previous network MDT-related parameter configuration (e.g., the specific measurement object, various measurement quantities, the measurement effective physical range, the measurement effective time and period, etc. of the MDT), and records and stores MDT Logs. When the UE later transits back to the RRC_CONNECTED state, the UE sends an "MDT Logs available indicator" to the RAN Node via an appropriate RRC uplink message, so as to indicate that the UE has measured, recorded and stored some MDT Logs, and to request the RAN Node to retrieve the MDT Logs. If necessary, the RAN Node retrieves, at an appropriate time, the UE MDT Logs by a UE Information procedure at the air interface. Specifically, as shown in FIG. 4, the RAN Node sends a UE INFORMATION REQUEST message to the UE to request the UE to report the stored MDT Logs, and then the UE reports all or a part of the MDT Logs by one or more UE INFORMATION RESPONSE messages (depending on the size of the MDT Logs table). According to the current LTE protocol, the above UE Information procedure at the air interface is limited to being performed only on the SRB2 bearer at the master base station. After receiving the MDT Logs, the RAN Node aggregates and further forwards the MDT Logs to an upstream TCE for analysis and processing. The MDT can be substantively divided into three types according to different protocol layer attributes of the measurement. 
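The UE Information procedure above allows one request to be answered by several responses when the stored MDT Logs table is large. A minimal sketch of that segmentation follows; the record format, segment size and message dictionaries are illustrative assumptions, not protocol-defined encodings.

```python
# Hedged sketch: one UE INFORMATION REQUEST may be answered by several
# UE INFORMATION RESPONSE messages when the stored MDT Logs table exceeds
# what one message carries. Sizes and payload layout are assumptions.
def report_mdt_logs(mdt_logs, max_records_per_response=3):
    """Yield successive RESPONSE payloads until the whole table is sent."""
    for i in range(0, len(mdt_logs), max_records_per_response):
        yield {"msg": "UE INFORMATION RESPONSE",
               "logs": mdt_logs[i:i + max_records_per_response]}

stored = [f"log-{n}" for n in range(7)]   # 7 recorded MDT entries
responses = list(report_mdt_logs(stored))
print(len(responses))                     # 3 responses carry the whole table
```

The number of responses simply grows with the size of the MDT Logs table, which is the "one or more UE INFORMATION RESPONSE messages" behavior the paragraph describes.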
A first type is L1 signal measurement, for example, statistical measurement of the strength RSRP (Reference Signal Received Power) and the quality RSRQ (Reference Signal Received Quality) of the LTE downlink pilot signal (Common Reference Signal (CRS), or Channel State Information-Reference Signal (CSI-RS)). A second type is L2 performance measurement, for example, statistical measurement of the delay/packet loss ratio/packet loss amount of Protocol Data Unit (PDU) data packets of layers such as the LTE MAC (Medium Access Control)/RLC (Radio Link Control)/PDCP (Packet Data Convergence Protocol). A third type is L3 performance measurement, for example, establishment and maintenance of an LTE RRC connection of a UE, and statistical measurement of indexes such as the data throughput rate (Throughput) of a specific Data Radio Bearer (DRB) and other mobility-related performance (handover, call drop, etc.). Based on the basic framework principle and operation mode of the above LTE MDT technology, the MDT technology corresponding to the 5G NG-RAN is basically similar, but is still under study in the 3GPP standard specification. The 5G NG-RAN at least also supports a Logged MDT function, and the basic operating principle of the Logged MDT function supported by the 5G NG-RAN is basically similar to that of the LTE Logged MDT. However, the 5G NG-RAN and the LTE have the following important differences in terms of the technical background and the system capability condition in which the MDT-related functions are respectively introduced. 1) When the LTE Logged MDT function was introduced in Rel-10, there was no concept of UE (dual) multi-connectivity, and therefore only the single base station currently serving the UE can retrieve the UE MDT Logs via a UE Information procedure based on the SRB2. 
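The three-way classification above can be written down as a small lookup. The measurement names below are shorthand labels I have introduced for illustration; only the layer grouping itself comes from the text.

```python
# Minimal sketch (assumed shorthand labels) classifying an MDT measurement
# into the three protocol-layer types named above.
L1 = {"RSRP", "RSRQ"}                              # downlink pilot signal measurements
L2 = {"MAC_PDU_LOSS", "RLC_DELAY", "PDCP_DELAY"}   # per-layer PDU statistics
L3 = {"RRC_SETUP", "DRB_THROUGHPUT", "HANDOVER"}   # connection/mobility performance

def mdt_layer(measurement: str) -> str:
    """Return the layer type (L1/L2/L3) of a measurement label."""
    for layer, members in (("L1", L1), ("L2", L2), ("L3", L3)):
        if measurement in members:
            return layer
    raise KeyError(measurement)

print(mdt_layer("RSRP"))            # a signal-strength measurement -> L1
print(mdt_layer("DRB_THROUGHPUT"))  # a bearer throughput index -> L3
```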
In contrast, when a 5G NG-RAN Logged MDT function is introduced in the future, there is already the concept of UE (dual) multi-connectivity, i.e., the Multi-RAT Dual Connectivity (MR-DC) function of 5G Rel-15, and therefore it can be considered to use a plurality of master and secondary nodes currently serving the UE to simultaneously or cooperatively retrieve the UE MDT Logs, so as to realize isolation between the serving nodes retrieving the MDT Logs and load balancing of air interface signaling. For example, as shown in FIG. 5, in addition to an SRB2 between the UE and the single MN, more signaling radio bearers such as an SRB3/SRBx may be configured between the UE in a multi-connectivity operation mode and one or more SNs at the same time, and these signaling radio bearers can also be used for retrieving the MDT Logs. 2) The 4G LTE system is of a single E-UTRA RAT, and the MDT Logs generated by various MDT measurement tasks based on an eNB-related configuration can only be finally reported to eNBs of the same RAT, but cannot be reported to base stations of different RATs, such as a 2G Base Station Controller (BSC) or a 3G Radio Network Controller (RNC). In contrast, the 5G NG-RAN is formed by a mixture of multiple RATs (including at least an E-UTRA and an NR at present), and therefore it can also be considered that the MDT Logs generated by an MDT measurement task performed based on the configuration related to a base station of a certain RAT can finally be flexibly reported to base stations of different RATs, without being limited to always reporting the MDT Logs to base stations of the same RAT. 3) The LTE Logged MDT-related configuration cannot take effect and be executed simultaneously with the related configuration of RAN Nodes of other RATs; for example, the UE cannot perform the Logged MDT operation simultaneously on the basis of the respective MDT configurations of a 3G RNC and a 4G eNB. 
In contrast, in the MR-DC multi-connectivity operation of the 5G NG-RAN, the RAN Nodes of different RATs can independently configure and simultaneously activate the MDT parameter configuration. That is, the UE can perform the operations of the Logged MDT simultaneously on the basis of the MDT-related configurations of the MN and the SN; the UE can independently perform an MDT operation according to the related configurations of RAN Nodes of different RATs, and generate a plurality of decoupled Per Node/RAT MDT Logs (tables). In view of the differences in technical backgrounds and the capabilities of the UE in the multi-connectivity operation mode, the following embodiments of the present disclosure provide a technical solution for controlling the collection and retrieval mode of the UE MDT Logs with regard to a UE in a multi-connectivity operation mode. It should be noted that the terms "first", "second", etc., in the following embodiments of the present disclosure are used to distinguish similar objects, and are not necessarily used to describe a specific sequence or order. The present embodiment provides an MDT Logs reporting method. FIG. 6 is a flowchart according to some embodiments of the present disclosure. As shown in FIG. 6, the flow includes operations S602 to S606 which are described in detail as follows. In operation S602, UE receives first Logged MDT configuration information configured by a master base station for the UE, and receives second Logged MDT configuration information configured by one or more secondary base stations for the UE, wherein the UE is in a multi-connectivity operation mode in which the UE is connected to the master base station and the one or more secondary base stations at the same time. 
In operation S604, the UE exits the multi-connectivity operation mode, executes a first Logged MDT operation according to the first Logged MDT configuration information to acquire first MDT Logs, and executes a second Logged MDT operation according to the second Logged MDT configuration information to acquire second MDT Logs. In operation S606, the UE re-establishes the multi-connectivity operation mode with a current master base station and one or more current secondary base stations, and reports the first MDT Logs and/or the second MDT Logs to the current master base station and/or the one or more current secondary base stations. In the embodiments, the capability and features of the UE in a multi-connectivity operation mode are fully utilized, so that the UE can simultaneously execute related MDT operations independently configured by a plurality of master and secondary base stations, and flexibly report the UE MDT Logs by utilizing a plurality of independent signaling radio bearers. In operation S606 of the embodiments, the current master base station and the current secondary base station that re-establish the multi-connectivity operation mode with the UE may be the same master base station and secondary base station as those in operation S604, or may be different from those in operation S604. The MDT Logs reporting method according to the embodiments of the present disclosure will be described below with reference to an exemplary embodiment. As shown in FIG. 7, the embodiment includes operations S701 to S705 which are described in detail as follows. In operation S701, UE is in a multi-connectivity operation mode (RRC Connected state), i.e., the UE is connected to an MN and several SNs at the same time. 
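The UE-side flow of operations S602 to S606 can be sketched as a small state object. The class and method names below are illustrative assumptions, not protocol-defined APIs; only the sequence (receive per-node configurations, run Logged MDT per configuration while out of multi-connectivity, report per-node tables after re-establishment) comes from the operations above.

```python
# Sketch of the UE-side flow in operations S602–S606 (names are assumed).
class LoggedMdtUe:
    def __init__(self):
        self.configs = {}   # node id -> Logged MDT configuration
        self.logs = {}      # node id -> MDT Logs table (decoupled per node)

    def receive_config(self, node_id, config):   # S602: per-node configs
        self.configs[node_id] = config

    def run_logged_mdt(self):                    # S604: UE left multi-connectivity
        for node_id, cfg in self.configs.items():
            # each configuration yields its own independent logs table
            self.logs[node_id] = [f"measurement per {cfg}"]

    def report(self, serving_nodes):             # S606: after re-establishment
        return {n: self.logs[n] for n in serving_nodes if n in self.logs}

ue = LoggedMdtUe()
ue.receive_config("MN", "cfg-mn")     # first Logged MDT configuration
ue.receive_config("SN1", "cfg-sn1")   # second Logged MDT configuration
ue.run_logged_mdt()
print(ue.report(["MN", "SN1"]))       # reported over the re-established bearers
```

Note that `report` takes the *current* serving nodes as an argument, mirroring the remark that the re-established master and secondary base stations need not be the original ones.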
The MN and the SNs respectively perform, for the UE, decoupled Per Node/RAT MDT-related configuration independently (based on the different RRC states in which an MDT task is executed, the configuration may include a Logged MDT-related configuration or an Immediate MDT-related configuration). After successful configuration by the network side, the UE obtains the Per Node/RAT MDT-related drive test configuration information independently configured by the master and secondary base stations, and prepares for subsequent operations of the Logged MDT and the Immediate MDT. In operation S702, after leaving the connected state, the UE stores the previous related independent configuration parameters of the respective Logged MDT of the MN and the SNs. The UE in the RRC Idle or Inactive state will perform the task of locally recording the MDT Logs corresponding to the respective Per Node/RAT MDT configurations of the master and secondary base stations. The MDT Logs (tables) generated based on the related Per Node/RAT MDT configurations of the MN and the SNs can be respectively stored in different MDT Logs tables and distinguished by identifiers (id). In operation S703, when the UE transits back to the connected state or transits to the multi-connectivity operation mode, the UE reports, to the current serving MN by means of an SRB1 bearer, the logs available indicator information related to the Per Node/RAT MDT configurations of the master and secondary base stations, namely Per Node/RAT Logs Available Indicator (For MN and/or SN(s)), so that the MN can learn the condition of the MDT Logs tables which have been stored by the current UE. The Per Node/RAT MDT configuration parameter for each of the master and secondary base stations may correspond to an independent MDT Logs table. 
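The "Per Node/RAT Logs Available Indicator" the UE reports on SRB1 can be sketched as a per-node summary of the locally stored tables. The dictionary layout and function name are illustrative assumptions; the text does not define an encoding.

```python
# Hedged sketch of building the Per Node/RAT Logs Available Indicator
# from the UE's locally stored, per-node MDT Logs tables.
def logs_available_indicator(log_tables: dict) -> dict:
    """Report, per node id, whether its MDT Logs table holds recorded entries."""
    return {node_id: bool(table) for node_id, table in log_tables.items()}

# One independent table per configuring node, distinguished by identifier.
tables = {"MN": ["entry-1", "entry-2"], "SN1": ["entry-1"], "SN2": []}
print(logs_available_indicator(tables))  # SN2 recorded nothing this time
```

This mirrors the statement that each master/secondary base station's configuration corresponds to an independent MDT Logs table, and that the indicator tells the MN which tables are waiting to be retrieved.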
If the UE is already in the multi-connectivity operation mode, the UE can directly report, to the current serving SN(s) by means of an SRB3/x bearer, the logs available indicator information related to the Per Node/RAT MDT configuration of the corresponding secondary base station, namely Per Node/RAT Logs Available Indicator (For SN(s)), so that the SN(s) can learn the condition of the SN-related MDT Logs table which has been stored by the current UE. In operation S704, the MN decides which SRB(s) to use to retrieve the MDT Logs (tables) related to the independent configuration of each RAN Node, and uses a network interface procedure message between the MN and the SN(s) to cooperate with each RAN Node to perform the retrieval of the UE MDT Logs. The MN also needs to determine whether a network interface connection by which the SN(s) can directly report the MDT Logs is configured between the SN(s) and the TCE. The MN also needs to decide whether the SN(s) needs to forward the MDT Logs retrieved from the UE to the MN node for aggregation before the MDT Logs are reported to the TCE. In the above embodiment, if the UE is already in the multi-connectivity operation mode, the retrieval operations of the master and secondary base stations (e.g., MN/SN(s)) are as follows. 1) If the MN decides to use its own SRB2 to retrieve all the UE MDT Logs, the MN will initiate a UE Information procedure only on the SRB2, retrieve the MDT Logs information related to the independent configurations of all the RAN Nodes of the MN and the SN(s) from the UE, and further report the retrieved MDT Logs information to the TCE. In some embodiments, if the SN(s) and the MN are nodes of different RATs, a UE INFORMATION RESPONSE message sent via the SRB2 will carry the MDT Logs related to the configuration of the SN(s) in the form of an information element of a Radio Resource Control Container (RRC Container). 
2) If the MN decides to retrieve the UE MDT Logs only via the SRB3 of the SN1, the MN instructs the SN1 to initiate a UE Information procedure on the SRB3, and to retrieve the MDT Logs information related to the independent configurations of all the RAN Nodes of the MN and the SN(s) from the UE. The MN will not initiate a UE Information procedure on its own SRB2. In some embodiments, the MN needs to notify, via a network interface procedure message between the MN and the SN1, the SN1 of the logs available indicator information related to the independent configurations of all the RAN Nodes, namely Per Node/RAT Logs Available Indicator (For MN and/or SN(s)), and instruct the SN1 to initiate a UE Information procedure on the SRB3. In some embodiments, the MN needs to notify, by a network interface procedure message between the MN and the SN1, the SN1 of whether to forward to the MN all the MDT Logs retrieved from the UE, or whether to prepare to directly report the MDT Logs to the upstream TCE. In some embodiments, if the SN(s) and the MN are base stations of different RATs, a UE INFORMATION RESPONSE message sent via the SRB3 will carry the MDT Logs related to the configuration of the MN in the form of an information element of an RRC container. 3) If the MN decides to use the SRB2 and the SRB3 to cooperatively retrieve the UE MDT Logs at the same time, the MN will initiate a UE Information procedure on the SRB2 and retrieve, from the UE, the MDT Logs only related to the independent configuration of the MN; at the same time, the MN will also instruct the SN1 to initiate a UE Information procedure on the SRB3 and retrieve, from the UE, the MDT Logs only related to the independent configuration of the SN1. 
By the same reasoning, the MN needs to notify, via a network interface procedure message between the MN and the SN1, the SN1 of the logs available indicator information related to the independent configuration of the SN1, namely Per Node/RAT Logs Available Indicator (for SN1), and instruct the SN1 to initiate a UE Information procedure on the SRB3. The MN needs to notify, by a network interface procedure message between the MN and the SN1, the SN1 of whether to forward to the MN the MDT Logs retrieved from the UE, or to prepare to directly report the MDT Logs to the upstream TCE. 4) If there are a plurality of secondary base stations (e.g., SNx/SNy), similarly, if the MN decides to retrieve the UE MDT Logs via the SRBx and the SRBy corresponding to the SNx/SNy at the same time, the MN will also instruct the SNx and the SNy to initiate a UE Information procedure on each of the corresponding SRBx and SRBy, and respectively retrieve the MDT Logs related to the independent configurations of the SNx and the SNy. By the same reasoning, the MN needs to notify, by a respective network interface procedure message between the MN and the SNx/SNy, the SNx/SNy of the related logs available indicator information, namely Per Node/RAT Logs Available Indicator (for SNx/SNy), and instruct the SNx/SNy to initiate a UE Information procedure on the SRBx and the SRBy respectively. The MN needs to notify, by a network interface procedure message between the MN and the SNx/SNy, the SNx/SNy of whether to forward to the MN the MDT Logs retrieved from the UE, or to prepare to directly report the MDT Logs to the upstream TCE. In operation S705, the MN or the SN(s) directly reports, to different or same TCE entities connected to the MN or the SN(s), the retrieved MDT Logs related to the Per Node/RAT MDT configurations, or performs local processing in each base station. 
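The four retrieval modes enumerated above reduce to a planning decision at the MN: which signaling radio bearer carries the UE Information procedure, and whose per-node logs each bearer fetches. The following sketch models only that decision; the mode names, bearer numbering beyond SRB3, and dictionary layout are illustrative assumptions.

```python
# Illustrative dispatch of the retrieval modes 1)–4) described above.
def plan_retrieval(mode: str, secondaries=("SN1",)):
    """Return {bearer: list of nodes whose per-node MDT Logs it retrieves}."""
    if mode == "mn_only":        # 1) the MN's SRB2 fetches every node's logs
        return {"SRB2": ["MN", *secondaries]}
    if mode == "sn_only":        # 2) the SN1's SRB3 fetches every node's logs
        return {"SRB3": ["MN", *secondaries]}
    if mode == "cooperative":    # 3) each bearer fetches only its own node's logs
        plan = {"SRB2": ["MN"], "SRB3": [secondaries[0]]}
        # 4) further secondaries use their own bearers (SRBx, SRBy, ...);
        # the numbering here is an assumption for illustration.
        for i, sn in enumerate(secondaries[1:], start=4):
            plan[f"SRB{i}"] = [sn]
        return plan
    raise ValueError(mode)

print(plan_retrieval("cooperative", ("SN1", "SNx", "SNy")))
```

Orthogonal to this plan, the MN separately instructs each SN whether to forward the retrieved logs back to the MN for aggregation or to upload them directly to the TCE.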
In some embodiments, according to the instruction of the MN, the SN(s) may also forward, by a network interface procedure message between the MN and the SN(s), all or part of the MDT Logs retrieved from the UE to the MN for further processing. In the embodiments of the present disclosure, the configuration capability and features of the UE in a multi-connectivity operation mode are fully utilized, so that the UE can simultaneously execute the MDT operations related to the Per Node/RAT independent configurations of a plurality of master and secondary base stations, and flexibly report the UE MDT Logs by utilizing a plurality of signaling radio bearers. The plurality of master and secondary base stations can cooperate with each other via network interface procedure messages, and flexibly exchange their respective Per Node/RAT MDT configurations and MDT Logs. Different retrieval modes of the MDT Logs adopted by the MN/SN(s) will be described in detail hereinafter with reference to exemplary embodiments. Embodiment 1 In this embodiment, the UE is in an E-UTRA NR Dual Connectivity (EN-DC) operation mode, i.e., the UE is connected to a master base station (e.g., MeNB) and a secondary base station (e.g., SgNB1) at the same time. The MeNB and the SgNB1 at the network side respectively configure the UE with Logged MDT-related configuration information (e.g., the target E-UTRA and NR frequency points to be detected, the effective physical and temporal ranges, etc.) independently according to the requirements of their respective RAT deployment and network management, and then the UE stores the Per Node/RAT Logged MDT-related configuration information of the MeNB and the SgNB1. As shown in FIG. 8, the embodiment includes operations S801 to S805 which are described in detail as follows. 
In operation S801, when transmission of user plane data ends, the UE exits the EN-DC operation mode, transits to the RRC Idle state, starts to execute the Logged MDT operations respectively corresponding to the Per Node/RAT related configurations of the MeNB and the SgNB1, and respectively records and stores the MDT Logs in a local table 1 and a local table 2 in a manner of being decoupled from one another. In operation S802, transmission of user plane data restarts, the UE transits back to the RRC Connected state and quickly transits to the EN-DC operation mode, and the UE reports the Per Node/RAT Logs Available Indicator (for the MeNB and the SgNB1) to the MeNB via an SRB1: RRC Connection Setup Complete message, indicating that the local tables 1 and 2 of the UE respectively record and store the MDT Logs, which are respectively generated based on the configurations of the MeNB and the SgNB1, to be retrieved by the network side. In operation S803, the MeNB decides to use the SRB3 of the SgNB1 to retrieve all the UE MDT Logs, and thus sends an X2AP: UE INFORMATION REQUEST message to the SgNB1 by a UE Information procedure newly introduced on the X2AP. The message includes the Per Node/RAT Logs Available Indicator (for MeNB and SgNB1), which is used for requesting the SgNB1 to retrieve all the UE MDT Logs related to the configurations of the MeNB and the SgNB1. The X2AP: UE INFORMATION REQUEST message sent by the MeNB also instructs the SgNB1 to directly upload all the retrieved MDT Logs to the TCE by an appropriate procedure, without returning the MDT Logs to the MeNB. In operation S804, according to the instruction of the X2AP message of the MeNB, the SgNB1 retrieves all the UE MDT Logs related to the configurations of the MeNB and the SgNB1. After successful retrieval, the SgNB1 sends to the MeNB an X2AP: UE INFORMATION RESPONSE message including a Per Node/RAT Logs Retrieval Success Indicator (for the MeNB and the SgNB1), notifying the MeNB that all the UE MDT Logs have been successfully retrieved. 
In operation S805, the SgNB1 aggregates all the retrieved UE MDT Logs and then directly reports the aggregated MDT Logs to an upstream TCE directly connected to the SgNB1 for processing. In the process in which the MeNB and the SgNB1 cooperate to retrieve the MDT Logs, there is no longer any need for the MeNB to retrieve or report to the TCE any UE MDT Logs via its own SRB2. Embodiment 2 In this embodiment, the UE is in an EN-DC operation mode, that is, the UE is connected to both a master base station (e.g., MeNB) and a secondary base station (e.g., SgNB1) at the same time. The MeNB and the SgNB1 at the network side respectively configure the UE with Logged MDT-related configuration information (e.g., the target E-UTRA and NR frequency points to be detected, the effective physical and temporal ranges, etc.) independently according to the requirements of their respective RAT deployment and network management, and then the UE stores the Per Node/RAT Logged MDT-related configuration information of the MeNB and the SgNB1. As shown in FIG. 9, the embodiment includes operations S901 to S905 which are described in detail as follows. In operation S901, when transmission of user plane data ends, the UE exits the EN-DC operation mode, transits to the RRC Idle state, starts to execute the Logged MDT operations respectively corresponding to the Per Node/RAT related configurations of the MeNB and the SgNB1, and respectively records and stores the MDT Logs in a local table 1 and a local table 2 in a manner of being decoupled from one another. 
In operation S902, transmission of user plane data restarts, the UE transits back to the RRC Connected state and quickly transits to the EN-DC operation mode, and the UE reports the Per Node/RAT Logs Available Indicator (for the MeNB and the SgNB1) to the MeNB via an SRB1: RRC Connection Setup Complete message, indicating that the local tables 1 and 2 of the UE respectively record and store the MDT Logs, which are respectively generated based on the configurations of the MeNB and the SgNB1, to be retrieved by the network side. In operation S903, the MeNB decides to respectively use the SRB2 of the MeNB and the SRB3 of the SgNB1 to cooperatively retrieve the UE MDT Logs, and thus sends an X2AP: UE INFORMATION REQUEST message to the SgNB1 by a UE Information procedure newly introduced on the X2AP. The message includes a Per Node/RAT Logs Available Indicator (for SgNB1), which is used for requesting the SgNB1 to retrieve the UE MDT Logs related to the configuration of the SgNB1. The X2AP: UE INFORMATION REQUEST message sent by the MeNB also instructs the SgNB1 to subsequently upload the retrieved MDT Logs directly to the TCE by an appropriate procedure, without returning the MDT Logs to the MeNB. Meanwhile, the MeNB retrieves the UE MDT Logs related to the configuration of the MeNB via the SRB2 of the MeNB, and subsequently directly uploads the MDT Logs to the TCE by an appropriate procedure. In operation S904, according to the instruction of the X2AP message of the MeNB, the SgNB1 retrieves the UE MDT Logs related to the configuration of the SgNB1 by a UE Information procedure at the air interface on the SRB3 of the SgNB1. After successful retrieval, the SgNB1 sends to the MeNB an X2AP: UE INFORMATION RESPONSE message including a Per Node/RAT Logs Retrieval Success Indicator (for SgNB1), notifying the MeNB that the UE MDT Logs related to the configuration of the SgNB1 have been successfully retrieved. 
In operation S905, the MeNB and the SgNB1 aggregate the retrieved UE MDT Logs and directly report the MDT Logs to the upstream TCE directly connected to the MeNB and the SgNB1 for processing. In the process in which the MeNB and the SgNB1 cooperate to retrieve the MDT Logs, the MeNB still needs to retrieve the UE MDT Logs via its own SRB2 and report them to the TCE. Embodiment 3 In this embodiment, the UE is in an NR-DC operation mode, that is, the UE is connected to both a master base station (e.g., MgNB) (at a low frequency) and a secondary base station (e.g., SgNB1) (at a high frequency). The MgNB and the SgNB1 at the network side respectively configure the UE with Logged MDT-related configuration information (such as the target NR high/low frequency points to be detected, the effective physical and temporal ranges, etc.) independently according to the requirements of their respective RAT deployment and network management, and then the UE stores the Per Node/RAT Logged MDT-related configuration information of the MgNB and the SgNB1. As shown in FIG. 10, the embodiment includes operations S1001 to S1005 which are described in detail as follows. In operation S1001, when transmission of user plane data ends, the UE exits the NR-DC operation mode, transits to the RRC Inactive state, starts to execute the Logged MDT operations respectively corresponding to the Per Node/RAT related configurations of the MgNB and the SgNB1, and respectively records and stores the MDT Logs in a local table 1 and a local table 2 in a manner of being decoupled from one another. 
In operation S1002, transmission of user plane data restarts, the UE transits back to the RRC Connected state and quickly transits to the NR-DC operation mode, and the UE reports the Per Node/RAT Logs Available Indicator (for the MgNB and the SgNB1) to the MgNB via an SRB1: RRC Resume Complete message, indicating that the local tables 1 and 2 of the UE respectively record and store the MDT Logs, which are respectively generated based on the configurations of the MgNB and the SgNB1, to be retrieved by the network side. In operation S1003, the MgNB decides to use the SRB3 of the SgNB1 to retrieve all the UE MDT Logs, and thus sends an XnAP: UE INFORMATION REQUEST message to the SgNB1 by a UE Information procedure newly introduced on the XnAP. The message includes the Per Node/RAT Logs Available Indicator (for MgNB and SgNB1), which is used for requesting the SgNB1 to retrieve all the UE MDT Logs related to the configurations of the MgNB and the SgNB1. The XnAP: UE INFORMATION REQUEST message sent by the MgNB also instructs the SgNB1 to directly upload all the retrieved MDT Logs to the TCE by an appropriate procedure, without returning the MDT Logs to the MgNB. In operation S1004, according to the instruction of the XnAP message of the MgNB, the SgNB1 retrieves all the UE MDT Logs related to the configurations of the MgNB and the SgNB1. After successful retrieval, the SgNB1 sends to the MgNB an XnAP: UE INFORMATION RESPONSE message including a Per Node/RAT Logs Retrieval Success Indicator (for the MgNB and the SgNB1), notifying the MgNB that all the UE MDT Logs have been successfully retrieved. In operation S1005, the SgNB1 aggregates all the retrieved UE MDT Logs and then directly reports the aggregated MDT Logs to an upstream TCE directly connected to the SgNB1 for processing. In the process in which the MgNB and the SgNB1 cooperate to retrieve the MDT Logs, there is no longer any need for the MgNB to retrieve or report to the TCE any UE MDT Logs via its own SRB2. 
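The inter-node exchange shared by the embodiments above follows one pattern: the master node sends a UE INFORMATION REQUEST over X2AP or XnAP naming the logs the secondary should retrieve and where they should go, and the secondary answers with a success indicator. A minimal sketch of that pattern follows; the dictionary message layout and function names are illustrative assumptions, since the patent defines the messages only by name.

```python
# Hedged sketch of the X2AP/XnAP cooperation exchange in the embodiments above.
def ue_information_request(ap: str, logs_for, upload_to_tce: bool):
    """Master node builds the UE INFORMATION REQUEST for the secondary node."""
    return {"ap": ap, "msg": "UE INFORMATION REQUEST",
            "logs_available_for": list(logs_for),
            "upload_directly_to_tce": upload_to_tce}

def handle_request(req, ue_tables: dict):
    """Secondary node retrieves the requested per-node tables over its SRB3,
    then answers with a retrieval success indicator; the destination of the
    logs follows the master node's instruction."""
    retrieved = {n: ue_tables[n] for n in req["logs_available_for"]}
    response = {"ap": req["ap"], "msg": "UE INFORMATION RESPONSE",
                "retrieval_success_for": list(retrieved)}
    destination = "TCE" if req["upload_directly_to_tce"] else "MN"
    return response, destination

# E.g. the NR-DC case: the SgNB1 retrieves all logs and uploads them to the TCE.
req = ue_information_request("XnAP", ["MgNB", "SgNB1"], upload_to_tce=True)
resp, dest = handle_request(req, {"MgNB": ["log-a"], "SgNB1": ["log-b"]})
print(resp["retrieval_success_for"], dest)
```

Passing `upload_to_tce=False` instead models the forwarding variant, where the secondary returns the retrieved logs to the master node for aggregation before reporting to the TCE.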
Embodiment 4 In this embodiment, the UE is in an NE-DC operation mode, that is, the UE is connected to both a master base station (e.g., MgNB) and a secondary base station (e.g., SeNB1) at the same time. The MgNB and the SeNB1 at the network side respectively configure the UE with Logged MDT-related configuration information (e.g., the target E-UTRA and NR frequency points to be detected, the effective physical and temporal ranges, etc.) independently according to the requirements of their respective RAT deployment and network management, and then the UE stores the Per Node/RAT Logged MDT-related configuration information of the MgNB and the SeNB1. As shown in FIG. 11, the embodiment includes operations S1101 to S1105 which are described in detail as follows. In operation S1101, when transmission of user plane data ends, the UE exits the NE-DC operation mode, transits to the RRC Inactive state, starts to execute the Logged MDT operations respectively corresponding to the Per Node/RAT related configurations of the MgNB and the SeNB1, and respectively records and stores the MDT Logs in a local table 1 and a local table 2 in a manner of being decoupled from one another. In operation S1102, transmission of user plane data restarts, the UE transits back to the RRC Connected state and quickly transits to the NE-DC operation mode, and the UE reports the Per Node/RAT Logs Available Indicator (for the MgNB and the SeNB1) to the MgNB via an SRB1: RRC Resume Complete message, indicating that the local tables 1 and 2 of the UE respectively record and store the MDT Logs, which are respectively generated based on the configurations of the MgNB and the SeNB1, to be retrieved by the network side. In operation S1103, the MgNB decides to respectively use the SRB2 at the MgNB side and the SRB3 at the SeNB1 side to cooperatively retrieve the UE MDT Logs, and thus sends an XnAP: UE INFORMATION REQUEST message to the SeNB1 by a UE Information procedure newly introduced on the XnAP. 
The message includes a Per Node/RAT Logs Available Indicator (for SeNB1), which is used for requesting the SeNB1 to retrieve the UE MDT Logs related to the configuration of the SeNB1. The XnAP: UE INFORMATION REQUEST message sent by the MgNB also indicates that the SeNB1 needs to return the retrieved MDT Logs, and the SeNB1 subsequently returns the MDT Logs to the MgNB by an XnAP: UE Information procedure. Meanwhile, the MgNB retrieves the UE MDT Logs related to the configuration of the MgNB via its own SRB2, and subsequently directly uploads the MDT Logs to the TCE by an appropriate procedure. In operation S1104, according to the instruction of the XnAP message of the MgNB, the SeNB1 retrieves the UE MDT Logs related to the configuration of the SeNB1 by the UE Information procedure at the air interface on its own SRB3. After successful retrieval, the SeNB1 sends to the MgNB an XnAP: UE INFORMATION RESPONSE message. The message includes a Per Node/RAT Logs Retrieval Success Indicator (for SeNB1), so as to notify the MgNB that the UE MDT Logs related to the configuration of the SeNB1 have been successfully retrieved. The message may further include the UE MDT Logs related to the configuration of the SeNB1, so as to collectively report the UE MDT Logs to the MgNB. In operation S1105, the MgNB aggregates the UE MDT Logs retrieved via its own SRB2 at the air interface and the UE MDT Logs retrieved by the SeNB1 via the SRB3, and then directly reports the aggregated MDT Logs to the upstream TCE connected to the MgNB for processing. In the process in which the MgNB and the SeNB1 cooperate to retrieve the MDT Logs, the MgNB still needs to aggregate all the UE MDT Logs and report them to the TCE. Embodiment 5 In this embodiment, the UE is in an Intra-gNB NR-DC operation mode, that is, the UE is connected to both a master base station (e.g., MgNB-CU/DU) (at a low frequency) and a secondary base station (e.g., gNB-DU1) (at a high frequency, which is controlled by the same MgNB). 
The MgNB-CU/DU and the gNB-DU1 at the network side each independently configure the UE with Logged MDT-related configuration information (such as target NR high/low frequency points to be detected, effective physical and temporal ranges, etc.) according to the requirements of their respective RAT deployment and network management, and the UE then stores the Per Node/RAT Logged MDT-related configuration information of the MgNB-CU/DU and the gNB-DU1. As shown in FIG. 12, the embodiment includes operations S1201 to S1205, which are described in detail as follows.

In operation S1201, when transmission of user plane data ends, the UE exits the Intra-gNB NR-DC operation mode, transits to an RRC Idle state, starts to execute Logged MDT operations respectively corresponding to the Per Node/RAT related configurations of the MgNB-CU/DU and the gNB-DU1, and respectively records and stores MDT Logs in a local table 1 and a local table 2 in a manner of being decoupled from one another.

In operation S1202, transmission of user plane data restarts, the UE transits back to the RRC Connected state and quickly transits to the Intra-gNB NR-DC operation mode, and the UE reports the Per Node/RAT Logs Available Indicator (for MgNB-CU/DU and gNB-DU1) to the MgNB-CU/DU via an SRB1: RRC Connection Complete message, indicating that the local tables 1 and 2 of the UE respectively record and store MDT Logs, generated based on the configurations of the MgNB-CU/DU and the gNB-DU1 respectively, to be retrieved by the network side.

In operation S1203, the MgNB decides to use the SRB3 of the gNB-DU1 to retrieve all the UE MDT Logs, so the MgNB-CU/DU sends an F1AP: DL RRC INFORMATION TRANSFER message to the gNB-DU1 by means of the downlink (DL) RRC Information Transfer procedure of the F1AP. The message includes an RRC: UE INFORMATION REQUEST message, which is used for requesting the gNB-DU1 to retrieve all the UE MDT Logs related to the MgNB.
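The decoupled per-node log storage of operation S1201 and the availability indicator of operation S1202 can be sketched in code. This is a minimal illustrative model, not the patent's implementation; the class name, dictionary layout, and log-entry fields are all assumptions.

```python
# Hypothetical sketch of a UE keeping Per Node/RAT MDT Logs in decoupled
# local tables (operation S1201) and building the Per Node/RAT Logs
# Available Indicator reported to the network (operation S1202).
class LoggedMdtUe:
    def __init__(self):
        # One local table per configuring node, kept decoupled from each other.
        self.local_tables = {}  # node_id -> list of log entries

    def record(self, node_id, entry):
        """Store an MDT log entry in the table of the node that configured it."""
        self.local_tables.setdefault(node_id, []).append(entry)

    def logs_available_indicator(self):
        """Which nodes currently have stored logs awaiting retrieval."""
        return sorted(n for n, logs in self.local_tables.items() if logs)

    def retrieve(self, node_ids):
        """Hand over (and clear) the logs for the requested nodes, e.g. when
        the network retrieves them via a UE Information procedure."""
        return {n: self.local_tables.pop(n, []) for n in node_ids}

ue = LoggedMdtUe()
ue.record("MgNB-CU/DU", {"rsrp": -95})   # log per the master's configuration
ue.record("gNB-DU1", {"rsrp": -80})      # log per the secondary's configuration
print(ue.logs_available_indicator())     # ['MgNB-CU/DU', 'gNB-DU1']
```

After a `retrieve(["gNB-DU1"])`, only the MgNB-CU/DU table would still be flagged as available, mirroring per-node retrieval over separate bearers.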
In operation S1204, according to the instruction of the F1AP message of the MgNB, the gNB-DU1 retrieves all the UE MDT Logs related to the MgNB by a UE Information procedure at the air interface on its own SRB3. In the retrieval process, the gNB-DU1 sends an F1AP: UL RRC INFORMATION TRANSFER message to the MgNB-CU/DU by an uplink (UL) RRC Information Transfer procedure of the F1AP. The message includes an RRC: UE INFORMATION RESPONSE message, which includes all the UE MDT Logs related to the MgNB.

In operation S1205, the MgNB-CU/DU aggregates all the retrieved UE MDT Logs and then directly reports the aggregated MDT Logs to an upstream TCE for processing. In the process in which the MgNB-CU/DU and the gNB-DU1 cooperate to retrieve MDT Logs, there is no need for the MgNB-CU/DU to collect any UE MDT Logs via its own SRB2.

Through the above description of the embodiments, it should be clearly appreciated by those skilled in the art that the methods according to the embodiments may be implemented by means of software in combination with the required universal hardware platform, and of course can also be implemented by hardware, but in many cases the former is the preferred implementation. Based on this understanding, the essence of the technical solutions of the present disclosure, or the part contributing to current technologies, may be embodied in the form of a software product. Such a software product may be stored in one or more computer storage media (such as a ROM, a RAM, a magnetic disk, or an optical disc), and includes several instructions to cause a terminal device (such as a cell phone, a personal computer, a server, or a network device) to perform the methods of the embodiments.

In some embodiments of the present disclosure, provided is an MDT Logs reporting device. The device is used for implementing the above embodiments and exemplary embodiments, and what has already been described will not be repeated here.
As used below, the term “module” may implement a combination of software and/or hardware of a predetermined function. Although the device described in the following embodiments is preferably implemented by software, implementation by hardware or by a combination of software and hardware is also possible and conceivable.

FIG. 13 is a structural block diagram of an MDT Logs reporting device according to some embodiments of the present disclosure. The device is located in UE. As shown in FIG. 13, the device includes a receiving module 10, an executing module 20 and a reporting module 30.

The receiving module 10 is configured to receive first Logged MDT configuration information configured by a master base station for the UE, and receive second Logged MDT configuration information configured by one or more secondary base stations for the UE, wherein the UE is in a multi-connection operation mode in which the UE is connected to the master base station and the one or more secondary base stations at the same time. The executing module 20 is configured to execute a first Logged MDT operation according to the first Logged MDT configuration information to acquire first MDT Logs, and execute a second Logged MDT operation according to the second Logged MDT configuration information to acquire second MDT Logs. The reporting module 30 is configured to report the first MDT Logs and/or the second MDT Logs to the master base station and/or the one or more secondary base stations according to an instruction of the master base station.

FIG. 14 is a structural block diagram of an MDT Logs reporting device according to some other embodiments of the present disclosure.
As shown in FIG. 14, in addition to the receiving module 10, the executing module 20 and the reporting module 30 shown in FIG. 13, the reporting module 30 is further configured to report a first MDT Logs available indicator and/or a second MDT Logs available indicator to the master base station and/or the one or more secondary base stations, so as to wait for the master base station and/or the one or more secondary base stations to retrieve the first MDT Logs and/or the second MDT Logs.

In the embodiment, the reporting module 30 further includes one of the following:
a first reporting unit 31, configured to report the first MDT Logs and the second MDT Logs to the master base station, to enable the first MDT Logs and the second MDT Logs to be processed by the master base station;
a second reporting unit 32, configured to report the first MDT Logs to the master base station, and report the second MDT Logs to the one or more secondary base stations, to enable the first MDT Logs and the second MDT Logs to be further processed by the master base station and the one or more secondary base stations respectively;
a third reporting unit 33, configured to report the first MDT Logs and the second MDT Logs to the one or more secondary base stations, to enable the first MDT Logs and the second MDT Logs to be further processed by the one or more secondary base stations.

Some embodiments of the present disclosure further provide an MDT Logs reporting device. The device is located at a network side. As shown in FIG. 15, the device includes a configuring module 40 and a receiving module 50. The configuring module 40 is adapted to configure first Logged MDT configuration information for UE through a master base station, and configure second Logged MDT configuration information for the UE through one or more secondary base stations, wherein the UE is in a multi-connectivity operation mode in which the UE is connected to the master base station and the one or more secondary base stations at the same time.
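The three reporting alternatives of the reporting module above can be sketched as a single routing function. This is an illustrative sketch only; the strategy names and the dictionary layout are assumptions, not identifiers from the patent.

```python
# Hypothetical sketch of routing first/second MDT Logs according to the
# master base station's instruction, mirroring reporting units 31-33.
def report_mdt_logs(first_logs, second_logs, strategy):
    """Return which logs go to the master and which to the secondary side."""
    if strategy == "all_to_master":        # first reporting unit (31)
        return {"master": first_logs + second_logs, "secondary": []}
    if strategy == "split":                # second reporting unit (32)
        return {"master": first_logs, "secondary": second_logs}
    if strategy == "all_to_secondary":     # third reporting unit (33)
        return {"master": [], "secondary": first_logs + second_logs}
    raise ValueError(f"unknown strategy: {strategy}")

print(report_mdt_logs(["log1"], ["log2"], "split"))
```

Exactly one of the three strategies is active at a time, matching the "one of the following" wording above.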
The receiving module 50 is adapted to receive, through the master base station and/or the one or more secondary base stations, first MDT Logs and/or second MDT Logs reported by the UE, wherein the first MDT Logs are MDT Logs acquired by the UE by executing a first Logged MDT operation, and the second MDT Logs are MDT Logs acquired by the UE by executing a second Logged MDT operation.

FIG. 16 is a structure diagram of an MDT Logs reporting device according to some other embodiments of the present disclosure. As shown in FIG. 16, in addition to the configuring module 40 and the receiving module 50 shown in FIG. 15, the receiving module is further configured to receive, through the master base station and/or the one or more secondary base stations, a first MDT Logs available indicator and/or a second MDT Logs available indicator sent by the UE.

In this embodiment, the receiving module 60 further includes one of the following:
a first receiving unit 61, configured to initiate a UE Information procedure on a signaling radio bearer of the master base station, retrieve the first MDT Logs and the second MDT Logs from the UE, and jointly report the first MDT Logs and the second MDT Logs to the TCE;
a second receiving unit 62, configured to initiate a UE Information procedure on a signaling radio bearer of the one or more secondary base stations, retrieve the first MDT Logs and the second MDT Logs from the UE, and jointly report the first MDT Logs and the second MDT Logs to the TCE;
a third receiving unit 63, configured to initiate a UE Information procedure on a signaling radio bearer of the master base station to retrieve the first MDT Logs from the UE, and initiate a UE Information procedure on a signaling radio bearer of the one or more secondary base stations to retrieve the second MDT Logs from the UE.

It should be noted that each module may be implemented by software or hardware. The latter may be implemented in the following manner, but is not limited thereto.
All the modules are located in a same processor; or, the modules are located in different processors in an arbitrary combination.

Some embodiments of the present disclosure further provide a storage medium. The storage medium stores a computer program, wherein the computer program is configured to perform any one of the above method embodiments when run. In some exemplary embodiments, the storage medium may include, but is not limited to, any medium that can store a computer program, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.

Some embodiments of the present disclosure also provide an electronic device, including a memory and a processor, wherein the memory stores a computer program, and the processor is configured to run the computer program so as to perform any one of the above method embodiments.

Those having ordinary skill in the art should understand that the modules or operations in some embodiments of the present disclosure can be implemented by using a general computing device; they can be integrated in a single computing device or distributed over a network consisting of a plurality of computing devices. Optionally, they may be implemented by using executable program codes of the computing devices, so that they can be stored in a storage device and executed by the computing devices. Furthermore, in some cases, the shown or described operations may be executed in an order different from that described here, or they may be respectively implemented as individual integrated circuit modules, or a plurality of the modules or operations may be implemented as a single integrated circuit module. Thus, the present disclosure is not limited to any particular combination of hardware and software.

The foregoing descriptions are merely exemplary embodiments of the present disclosure, but are not intended to limit the present disclosure.
Those skilled in the art may make various modifications and variations to the present disclosure. Any modifications, equivalent replacements, improvements and the like made within the principle of the present disclosure shall belong to the scope of protection of the present disclosure.
DETAILED DESCRIPTION

Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. The accompanying drawings illustrate exemplary embodiments of the present invention and provide a more detailed description of the present invention. However, the scope of the present invention should not be limited thereto. In some cases, to prevent the concept of the present invention from being ambiguous, structures and apparatuses of the known art will be omitted, or will be shown in the form of a block diagram based on main functions of each structure and apparatus. Also, wherever possible, the same reference numbers will be used throughout the drawings and the specification to refer to the same or like parts.

In the present invention, a user equipment (UE) is fixed or mobile. The UE is a device that transmits and receives user data and/or control information by communicating with a base station (BS). The term ‘UE’ may be replaced with ‘terminal equipment’, ‘Mobile Station (MS)’, ‘Mobile Terminal (MT)’, ‘User Terminal (UT)’, ‘Subscriber Station (SS)’, ‘wireless device’, ‘Personal Digital Assistant (PDA)’, ‘wireless modem’, ‘handheld device’, etc.

A BS is typically a fixed station that communicates with a UE and/or another BS. The BS exchanges data and control information with a UE and another BS. The term ‘BS’ may be replaced with ‘Advanced Base Station (ABS)’, ‘Node B’, ‘evolved-Node B (eNB)’, ‘Base Transceiver System (BTS)’, ‘Access Point (AP)’, ‘Processing Server (PS)’, etc. In the following description, BS is commonly called eNB.

In the present invention, a node refers to a fixed point capable of transmitting/receiving a radio signal to/from a UE by communication with the UE. Various eNBs can be used as nodes. For example, a node can be a BS, NB, eNB, pico-cell eNB (PeNB), home eNB (HeNB), relay, repeater, etc. Furthermore, a node may not be an eNB.
For example, a node can be a radio remote head (RRH) or a radio remote unit (RRU). The RRH and RRU have power levels lower than that of the eNB. Since the RRH or RRU (referred to as RRH/RRU hereinafter) is generally connected to an eNB through a dedicated line such as an optical cable, cooperative communication between an RRH/RRU and an eNB can be performed more smoothly than cooperative communication between eNBs connected through a wireless link. At least one antenna is installed per node. An antenna may refer to an antenna port, a virtual antenna or an antenna group. A node may also be called a point.

Unlike a conventional centralized antenna system (CAS) (i.e. single node system) in which antennas are concentrated in an eNB and controlled by an eNB controller, plural nodes are spaced apart at a predetermined distance or longer in a multi-node system. The plural nodes can be managed by one or more eNBs or eNB controllers that control operations of the nodes or schedule data to be transmitted/received through the nodes. Each node may be connected to an eNB or eNB controller managing the corresponding node via a cable or a dedicated line.

In the multi-node system, the same cell identity (ID) or different cell IDs may be used for signal transmission/reception through plural nodes. When plural nodes have the same cell ID, each of the plural nodes operates as an antenna group of a cell. If nodes have different cell IDs in the multi-node system, the multi-node system can be regarded as a multi-cell (e.g., macro-cell/femto-cell/pico-cell) system. When multiple cells respectively configured by plural nodes are overlaid according to coverage, a network configured by the multiple cells is called a multi-tier network. The cell ID of the RRH/RRU may be identical to or different from the cell ID of an eNB. When the RRH/RRU and eNB use different cell IDs, both the RRH/RRU and eNB operate as independent eNBs.
In a multi-node system according to the present invention, which will be described below, one or more eNBs or eNB controllers connected to plural nodes can control the plural nodes such that signals are simultaneously transmitted to or received from a UE through some or all nodes. While there is a difference between multi-node systems according to the nature and implementation form of each node, multi-node systems are distinguished from single node systems (e.g. CAS, conventional MIMO systems, conventional relay systems, conventional repeater systems, etc.) since a plurality of nodes provides communication services to a UE in a predetermined time-frequency resource. Accordingly, embodiments of the present invention with respect to a method of performing coordinated data transmission using some or all nodes can be applied to various types of multi-node systems. For example, a node generally refers to an antenna group spaced apart from another node by a predetermined distance or more. However, embodiments of the present invention, which will be described below, can even be applied to a case in which a node refers to an arbitrary antenna group irrespective of node interval. In the case of an eNB including an X-pole (cross polarized) antenna, for example, the embodiments of the present invention are applicable on the assumption that the eNB controls a node composed of an H-pole antenna and a V-pole antenna.

A communication scheme through which signals are transmitted/received via plural transmit (Tx)/receive (Rx) nodes, signals are transmitted/received via at least one node selected from plural Tx/Rx nodes, or a node transmitting a downlink signal is discriminated from a node transmitting an uplink signal is called multi-eNB MIMO or CoMP (Coordinated Multi-Point Tx/Rx). Coordinated transmission schemes from among CoMP communication schemes can be categorized into JP (Joint Processing) and scheduling coordination.
The former may be divided into JT (Joint Transmission)/JR (Joint Reception) and DPS (Dynamic Point Selection), and the latter may be divided into CS (Coordinated Scheduling) and CB (Coordinated Beamforming). DPS may be called DCS (Dynamic Cell Selection). When JP is performed, a wider variety of communication environments can be created than with other CoMP schemes. JT refers to a communication scheme by which plural nodes transmit the same stream to a UE, and JR refers to a communication scheme by which plural nodes receive the same stream from the UE. The UE/eNB combines the signals received from the plural nodes to restore the stream. In the case of JT/JR, signal transmission reliability can be improved through transmit diversity since the same stream is transmitted from/to plural nodes. DPS refers to a communication scheme by which a signal is transmitted/received through a node selected from plural nodes according to a specific rule. In the case of DPS, signal transmission reliability can be improved because a node having a good channel state between the node and a UE is selected as the communication node.

In the present invention, a cell refers to a specific geographical area in which one or more nodes provide communication services. Accordingly, communication with a specific cell may mean communication with an eNB or a node providing communication services to the specific cell. A downlink/uplink signal of a specific cell refers to a downlink/uplink signal from/to an eNB or a node providing communication services to the specific cell. A cell providing uplink/downlink communication services to a UE is called a serving cell. Furthermore, channel status/quality of a specific cell refers to channel status/quality of a channel or a communication link generated between an eNB or a node providing communication services to the specific cell and a UE.
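The DPS behavior described above, selecting the node with the best channel state toward the UE as the communication node, can be sketched in a few lines. This is a hedged illustration; the quality metric (e.g. SINR in dB) and the node identifiers are assumptions, not values from the specification.

```python
# Hypothetical Dynamic Point Selection (DPS) sketch: among candidate nodes,
# pick the one with the best reported channel quality toward the UE.
def select_dps_node(channel_quality):
    """channel_quality: dict mapping node_id -> quality (e.g. SINR in dB)."""
    return max(channel_quality, key=channel_quality.get)

# Illustrative measurements: the RRH with the strongest channel is chosen.
print(select_dps_node({"macro": 3.2, "rrh1": 11.5, "rrh2": 7.0}))
```

In a real system the "specific rule" may also weigh load and backhaul constraints; only the channel-quality criterion from the text is modeled here.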
In 3GPP LTE-A systems, a UE can measure the downlink channel state from a specific node using one or more CSI-RSs (Channel State Information Reference Signals) transmitted through antenna port(s) of the specific node on a CSI-RS resource allocated to the specific node. In general, neighboring nodes transmit their CSI-RSs on orthogonal CSI-RS resources. Here, orthogonality of CSI-RS resources means that the CSI-RS resources differ in the CSI-RS resource configuration (which specifies the symbols and subcarriers carrying the CSI-RSs), in the subframe offset and transmission period (which specify the subframes to which the CSI-RSs are allocated), and/or in the CSI-RS sequence.

In the present invention, PDCCH (Physical Downlink Control Channel)/PCFICH (Physical Control Format Indicator Channel)/PHICH (Physical Hybrid automatic repeat request Indicator Channel)/PDSCH (Physical Downlink Shared Channel) refer to sets of time-frequency resources or resource elements respectively carrying DCI (Downlink Control Information)/CFI (Control Format Indicator)/downlink ACK/NACK (Acknowledgement/Negative ACK)/downlink data. In addition, PUCCH (Physical Uplink Control Channel)/PUSCH (Physical Uplink Shared Channel)/PRACH (Physical Random Access Channel) refer to sets of time-frequency resources or resource elements respectively carrying UCI (Uplink Control Information)/uplink data/random access signals. In the present invention, a time-frequency resource or a resource element (RE), which is allocated to or belongs to PDCCH/PCFICH/PHICH/PDSCH/PUCCH/PUSCH/PRACH, is referred to as a PDCCH/PCFICH/PHICH/PDSCH/PUCCH/PUSCH/PRACH RE or PDCCH/PCFICH/PHICH/PDSCH/PUCCH/PUSCH/PRACH resource. In the following description, transmission of PUCCH/PUSCH/PRACH by a UE is equivalent to transmission of uplink control information/uplink data/a random access signal through or on PUCCH/PUSCH/PRACH.
Furthermore, transmission of PDCCH/PCFICH/PHICH/PDSCH by an eNB is equivalent to transmission of downlink data/control information through or on PDCCH/PCFICH/PHICH/PDSCH.

FIG. 1 illustrates an exemplary radio frame structure used in a wireless communication system. FIG. 1(a) illustrates a frame structure for frequency division duplex (FDD) used in 3GPP LTE/LTE-A, and FIG. 1(b) illustrates a frame structure for time division duplex (TDD) used in 3GPP LTE/LTE-A. Referring to FIG. 1, a radio frame used in 3GPP LTE/LTE-A has a length of 10 ms (307200 Ts) and includes 10 subframes of equal size. The 10 subframes in the radio frame may be numbered. Here, Ts denotes sampling time and is represented as Ts=1/(2048*15 kHz). Each subframe has a length of 1 ms and includes two slots. The 20 slots in the radio frame can be sequentially numbered from 0 to 19. Each slot has a length of 0.5 ms. The time for transmitting a subframe is defined as a transmission time interval (TTI). Time resources can be discriminated by a radio frame number (or radio frame index), a subframe number (or subframe index) and a slot number (or slot index).

The radio frame can be configured differently according to duplex mode. Downlink transmission is discriminated from uplink transmission by frequency in FDD mode, and thus the radio frame includes only one of a downlink subframe and an uplink subframe in a specific frequency band. In TDD mode, downlink transmission is discriminated from uplink transmission by time, and thus the radio frame includes both a downlink subframe and an uplink subframe in a specific frequency band.

Table 1 shows DL-UL configurations of subframes in a radio frame in the TDD mode.

TABLE 1

DL-UL          Downlink-to-Uplink        Subframe number
configuration  Switch-point periodicity  0  1  2  3  4  5  6  7  8  9
0              5 ms                      D  S  U  U  U  D  S  U  U  U
1              5 ms                      D  S  U  U  D  D  S  U  U  D
2              5 ms                      D  S  U  D  D  D  S  U  D  D
3              10 ms                     D  S  U  U  U  D  D  D  D  D
4              10 ms                     D  S  U  U  D  D  D  D  D  D
5              10 ms                     D  S  U  D  D  D  D  D  D  D
6              5 ms                      D  S  U  U  U  D  S  U  U  D

In Table 1, D denotes a downlink subframe, U denotes an uplink subframe and S denotes a special subframe.
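Table 1 is easy to use as a direct lookup. The sketch below encodes each configuration's subframe pattern as a string of D/S/U; only the function and variable names are assumptions.

```python
# Table 1 as data: the subframe pattern for each TDD DL-UL configuration
# (D = downlink subframe, S = special subframe, U = uplink subframe).
TDD_UL_DL_CONFIG = {
    0: "DSUUUDSUUU", 1: "DSUUDDSUUD", 2: "DSUDDDSUDD",
    3: "DSUUUDDDDD", 4: "DSUUDDDDDD", 5: "DSUDDDDDDD",
    6: "DSUUUDSUUD",
}

def subframe_type(config, subframe):
    """Return 'D', 'S' or 'U' for subframe 0..9 under the given configuration."""
    return TDD_UL_DL_CONFIG[config][subframe]

# Configurations with 5 ms switch-point periodicity have a second special
# subframe at position 6; 10 ms configurations have only the one at position 1.
print(subframe_type(1, 6))  # 'S'
print(subframe_type(3, 6))  # 'D'
```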
The special subframe includes the three fields of DwPTS (Downlink Pilot TimeSlot), GP (Guard Period), and UpPTS (Uplink Pilot TimeSlot). DwPTS is a period reserved for downlink transmission and UpPTS is a period reserved for uplink transmission. Table 2 shows the special subframe configurations.

TABLE 2

Special      Normal cyclic prefix in downlink       Extended cyclic prefix in downlink
subframe                UpPTS                                  UpPTS
config-                 Normal CP    Extended CP               Normal CP    Extended CP
uration      DwPTS      in uplink    in uplink     DwPTS       in uplink    in uplink
0            6592*Ts    2192*Ts      2560*Ts       7680*Ts     2192*Ts      2560*Ts
1            19760*Ts   2192*Ts      2560*Ts       20480*Ts    2192*Ts      2560*Ts
2            21952*Ts   2192*Ts      2560*Ts       23040*Ts    2192*Ts      2560*Ts
3            24144*Ts   2192*Ts      2560*Ts       25600*Ts    2192*Ts      2560*Ts
4            26336*Ts   2192*Ts      2560*Ts       7680*Ts     4384*Ts      5120*Ts
5            6592*Ts    4384*Ts      5120*Ts       20480*Ts    4384*Ts      5120*Ts
6            19760*Ts   4384*Ts      5120*Ts       23040*Ts    4384*Ts      5120*Ts
7            21952*Ts   4384*Ts      5120*Ts       12800*Ts    4384*Ts      5120*Ts
8            24144*Ts   4384*Ts      5120*Ts       —           —            —
9            13168*Ts   4384*Ts      5120*Ts       —           —            —

FIG. 2 illustrates an exemplary downlink/uplink slot structure in a wireless communication system. Particularly, FIG. 2 illustrates a resource grid structure in 3GPP LTE/LTE-A. A resource grid is present per antenna port. Referring to FIG. 2, a slot includes a plurality of OFDM (Orthogonal Frequency Division Multiplexing) symbols in the time domain and a plurality of resource blocks (RBs) in the frequency domain. An OFDM symbol may refer to a symbol period. A signal transmitted in each slot may be represented by a resource grid composed of NRB^DL/UL * Nsc^RB subcarriers and Nsymb^DL/UL OFDM symbols. Here, NRB^DL denotes the number of RBs in a downlink slot and NRB^UL denotes the number of RBs in an uplink slot. NRB^DL and NRB^UL respectively depend on a DL transmission bandwidth and a UL transmission bandwidth. Nsymb^DL denotes the number of OFDM symbols in the downlink slot and Nsymb^UL denotes the number of OFDM symbols in the uplink slot. In addition, Nsc^RB denotes the number of subcarriers constituting one RB. An OFDM symbol may be called an SC-FDM (Single Carrier Frequency Division Multiplexing) symbol according to the multiple access scheme.
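The Ts unit used in Table 2 and in the frame length above lends itself to a quick arithmetic check: Ts = 1/(2048*15 kHz), so a 10 ms frame spans 307200 Ts, and the DwPTS lengths convert directly to microseconds. Function names are illustrative.

```python
# Sanity arithmetic for the timing units above: Ts = 1/(2048 * 15 kHz).
TS_SECONDS = 1.0 / (2048 * 15000)      # basic time unit Ts in seconds

def ts_to_us(n_ts):
    """Convert a duration expressed in Ts units to microseconds."""
    return n_ts * TS_SECONDS * 1e6

frame_ts = round(10e-3 / TS_SECONDS)   # Ts units in one 10 ms radio frame
print(frame_ts)                        # 307200, matching the text above
print(round(ts_to_us(6592), 1))        # DwPTS of config 0 (normal CP) in us
```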
The number of OFDM symbols included in a slot may depend on the channel bandwidth and the length of the cyclic prefix (CP). For example, a slot includes 7 OFDM symbols in the case of normal CP and 6 OFDM symbols in the case of extended CP. While FIG. 2 illustrates a subframe in which a slot includes 7 OFDM symbols for convenience, embodiments of the present invention can be equally applied to subframes having different numbers of OFDM symbols. Referring to FIG. 2, each OFDM symbol includes NRB^DL/UL * Nsc^RB subcarriers in the frequency domain. Subcarrier types can be classified into a data subcarrier for data transmission, a reference signal subcarrier for reference signal transmission, and null subcarriers for a guard band and a direct current (DC) component. The null subcarrier for the DC component is a subcarrier remaining unused and is mapped to the carrier frequency (f0) during OFDM signal generation or frequency up-conversion. The carrier frequency is also called the center frequency.

An RB is defined by Nsymb^DL/UL (e.g., 7) consecutive OFDM symbols in the time domain and Nsc^RB (e.g., 12) consecutive subcarriers in the frequency domain. For reference, a resource composed of one OFDM symbol and one subcarrier is called a resource element (RE) or a tone. Accordingly, an RB is composed of Nsymb^DL/UL * Nsc^RB REs. Each RE in a resource grid can be uniquely defined by an index pair (k, l) in a slot. Here, k is an index in the range of 0 to NRB^DL/UL * Nsc^RB − 1 in the frequency domain, and l is an index in the range of 0 to Nsymb^DL/UL − 1 in the time domain.

Two RBs that occupy Nsc^RB consecutive subcarriers in a subframe and are respectively disposed in the two slots of the subframe are called a physical resource block (PRB) pair. The two RBs constituting a PRB pair have the same PRB number (or PRB index). A virtual resource block (VRB) is a logical resource allocation unit for resource allocation. The VRB has the same size as that of the PRB.
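The grid dimensions above yield simple worked numbers: an RB spans Nsymb * Nsc REs, so a normal-CP RB holds 7 * 12 = 84 REs per slot. A short sketch, with illustrative function names:

```python
# Worked RE counts for the resource grid described above.
N_SC_RB = 12                       # Nsc^RB: subcarriers per resource block

def res_per_rb(normal_cp=True):
    """REs per RB per slot: 7 symbols (normal CP) or 6 (extended CP) x 12."""
    n_symb = 7 if normal_cp else 6
    return n_symb * N_SC_RB

def grid_res(n_rb, normal_cp=True):
    """Total REs in one slot of a grid with n_rb resource blocks."""
    return n_rb * res_per_rb(normal_cp)

print(res_per_rb())      # 84 REs per RB with normal CP
print(grid_res(100))     # 8400 REs per slot for a 100-RB downlink grid
```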
The VRB may be divided into a localized VRB and a distributed VRB depending on the mapping scheme of VRBs into PRBs. The localized VRBs are mapped into the PRBs such that the VRB number (VRB index) corresponds to the PRB number. That is, nPRB = nVRB is obtained. Numbers are given to the localized VRBs from 0 to NVRB^DL − 1, where NVRB^DL = NRB^DL. Accordingly, under the localized mapping scheme, the VRBs having the same VRB number are mapped into the PRBs having the same PRB number in the first slot and the second slot. On the other hand, the distributed VRBs are mapped into the PRBs through interleaving. Accordingly, the VRBs having the same VRB number may be mapped into PRBs having different PRB numbers in the first slot and the second slot. Two PRBs, which are respectively located in the two slots of the subframe and have the same VRB number, will be referred to as a pair of VRBs.

FIG. 3 illustrates a downlink (DL) subframe structure used in 3GPP LTE/LTE-A. Referring to FIG. 3, a DL subframe is divided into a control region and a data region. A maximum of three (four) OFDM symbols located in the front portion of the first slot within a subframe correspond to the control region to which a control channel is allocated. A resource region available for PDCCH transmission in the DL subframe is referred to as a PDCCH region hereinafter. The remaining OFDM symbols correspond to the data region to which a physical downlink shared channel (PDSCH) is allocated. A resource region available for PDSCH transmission in the DL subframe is referred to as a PDSCH region hereinafter. Examples of downlink control channels used in 3GPP LTE include a physical control format indicator channel (PCFICH), a physical downlink control channel (PDCCH), a physical hybrid ARQ indicator channel (PHICH), etc. The PCFICH is transmitted on the first OFDM symbol of a subframe and carries information regarding the number of OFDM symbols used for transmission of control channels within the subframe.
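The localized VRB-to-PRB mapping described above reduces to the identity nPRB = nVRB in both slots of the subframe. A minimal sketch (the distributed mapping, which interleaves VRB numbers across slots, is deliberately omitted here; function names are assumptions):

```python
# Localized VRB-to-PRB mapping: the same PRB number is used in the first
# and second slot, i.e. nPRB = nVRB for every VRB.
def localized_vrb_to_prb(n_vrb):
    """Return the (slot-0 PRB, slot-1 PRB) pair for a localized VRB number."""
    return (n_vrb, n_vrb)

print(localized_vrb_to_prb(5))   # (5, 5): same PRB in both slots
```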
The PHICH carries a HARQ acknowledgment (ACK)/negative acknowledgment (NACK) signal in response to uplink transmission. Control information carried on the PDCCH is called downlink control information (DCI). The DCI contains resource allocation information and other control information for a UE or a UE group. For example, the DCI includes a transport format and resource allocation information of a downlink shared channel (DL-SCH), a transport format and resource allocation information of an uplink shared channel (UL-SCH), paging information of a paging channel (PCH), system information on the DL-SCH, information about resource allocation of an upper layer control message such as a random access response transmitted on the PDSCH, a transmit control command set with respect to individual UEs in a UE group, a transmit power control command, information on activation of voice over IP (VoIP), a downlink assignment index (DAI), etc. The transport format and resource allocation information of the DL-SCH are also called DL scheduling information or a DL grant, and the transport format and resource allocation information of the UL-SCH are also called UL scheduling information or a UL grant. The size and purpose of DCI carried on a PDCCH depend on the DCI format, and the size thereof may vary according to coding rate. Various formats, for example, formats 0 and 4 for uplink and formats 1, 1A, 1B, 1C, 1D, 2, 2A, 2B, 2C, 3 and 3A for downlink, have been defined in 3GPP LTE. Control information such as a hopping flag, information on RB allocation, modulation coding scheme (MCS), redundancy version (RV), new data indicator (NDI), information on transmit power control (TPC), cyclic shift demodulation reference signal (DMRS), UL index, channel quality information (CQI) request, DL assignment index, HARQ process number, transmitted precoding matrix indicator (TPMI), precoding matrix indicator (PMI), etc. is selected and combined based on the DCI format and transmitted to a UE as DCI.
In general, a DCI format for a UE depends on the transmission mode (TM) set for the UE. In other words, only a DCI format corresponding to a specific TM can be used for a UE configured in the specific TM.

A PDCCH is transmitted on an aggregation of one or several consecutive control channel elements (CCEs). The CCE is a logical allocation unit used to provide the PDCCH with a coding rate based on the state of a radio channel. The CCE corresponds to a plurality of resource element groups (REGs). For example, a CCE corresponds to 9 REGs and an REG corresponds to 4 REs. 3GPP LTE defines a CCE set in which a PDCCH can be located for each UE. A CCE set from which a UE can detect a PDCCH thereof is called a PDCCH search space, or simply a search space. An individual resource through which the PDCCH can be transmitted within the search space is called a PDCCH candidate. A set of PDCCH candidates to be monitored by the UE is defined as the search space. In 3GPP LTE/LTE-A, search spaces for DCI formats may have different sizes, and a dedicated search space and a common search space are defined. The dedicated search space is a UE-specific search space and is configured for each UE. The common search space is configured for a plurality of UEs. The aggregation levels defining the search spaces are as follows.

TABLE 3

Search Space    Aggregation    Search Space      Number of PDCCH
Type            Level L        Size [in CCEs]    candidates M(L)
UE-specific     1              6                 6
                2              12                6
                4              8                 2
                8              16                2
Common          4              16                4
                8              16                2

A PDCCH candidate corresponds to 1, 2, 4 or 8 CCEs according to the CCE aggregation level. An eNB transmits a PDCCH (DCI) on an arbitrary PDCCH candidate within a search space, and a UE monitors the search space to detect the PDCCH (DCI). Here, monitoring refers to attempting to decode each PDCCH in the corresponding search space according to all monitored DCI formats. The UE can detect its PDCCH by monitoring plural PDCCHs.
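Table 3 can be checked and used programmatically: each candidate spans L consecutive CCEs, so the search-space size equals M(L) * L, and summing M(L) over the levels gives the candidates to monitor per DCI format size. The function and dictionary names below are illustrative.

```python
# Table 3 as data: (search-space type, aggregation level L) ->
# (search-space size in CCEs, number of PDCCH candidates M(L)).
SEARCH_SPACE = {
    ("ue", 1): (6, 6), ("ue", 2): (12, 6),
    ("ue", 4): (8, 2), ("ue", 8): (16, 2),
    ("common", 4): (16, 4), ("common", 8): (16, 2),
}

def candidates(space):
    """PDCCH candidates the UE monitors in the given space, per DCI format size."""
    return sum(m for (s, _), (_, m) in SEARCH_SPACE.items() if s == space)

# Consistency check: each candidate spans L CCEs, so size == M(L) * L.
assert all(size == m * lvl for (_, lvl), (size, m) in SEARCH_SPACE.items())

print(candidates("ue"))      # 16 UE-specific candidates
print(candidates("common"))  # 6 common candidates
```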
Since the UE does not know the position in which the PDCCH thereof is transmitted, the UE attempts to decode all PDCCHs of the corresponding DCI format for each subframe until a PDCCH having the ID thereof is detected. This process is called blind detection (or blind decoding (BD)). The eNB can transmit data for a UE or a UE group through the data region. Data transmitted through the data region may be called user data. For transmission of the user data, a physical downlink shared channel (PDSCH) may be allocated to the data region. A paging channel (PCH) and a downlink shared channel (DL-SCH) are transmitted through the PDSCH. The UE can read data transmitted through the PDSCH by decoding control information transmitted through a PDCCH. Information representing a UE or a UE group to which data on the PDSCH is transmitted, how the UE or UE group receives and decodes the PDSCH data, etc. is included in the PDCCH and transmitted. For example, if a specific PDCCH is CRC (cyclic redundancy check)-masked with a radio network temporary identifier (RNTI) of “A”, and information about data transmitted using a radio resource (e.g., frequency position) of “B” and transmission format information (e.g., transport block size, modulation scheme, coding information, etc.) of “C” is transmitted through a specific DL subframe, the UE monitors PDCCHs using RNTI information, and a UE having the RNTI of “A” detects a PDCCH and receives a PDSCH indicated by “B” and “C” using information about the PDCCH. A reference signal (RS) to be compared with a data signal is necessary for the UE to demodulate a signal received from the eNB. A reference signal refers to a predetermined signal having a specific waveform, which is transmitted from the eNB to the UE or from the UE to the eNB and known to both the eNB and the UE. The reference signal is also called a pilot.
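The blind-detection and RNTI-masking behaviour described above can be illustrated with a toy model: each PDCCH candidate carries a CRC masked (XORed) with the target UE's RNTI, and a UE "detects" a candidate only when unmasking with its own RNTI yields a valid CRC. The helper names and the use of CRC-32 here are illustrative assumptions, not the actual LTE CRC or coding chain.

```python
# Toy illustration of blind detection (not the real LTE CRC/coding chain):
# a candidate's CRC is XOR-masked with the addressee's RNTI, and a UE keeps
# only candidates whose CRC verifies after unmasking with its own RNTI.

import zlib

def make_candidate(payload: bytes, rnti: int):
    crc = zlib.crc32(payload) ^ rnti       # mask the CRC with the RNTI
    return payload, crc

def blind_decode(candidates, my_rnti: int):
    """Return payloads of candidates addressed to my_rnti."""
    found = []
    for payload, masked_crc in candidates:
        if (masked_crc ^ my_rnti) == zlib.crc32(payload):
            found.append(payload)
    return found

cands = [make_candidate(b"dl-grant", 0x00A), make_candidate(b"ul-grant", 0x00B)]
print(blind_decode(cands, 0x00A))  # [b'dl-grant']
```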
Reference signals are categorized into a cell-specific RS shared by all UEs in a cell and a demodulation RS (DM RS) dedicated to a specific UE. A DM RS transmitted by the eNB for demodulation of downlink data for a specific UE is called a UE-specific RS. Both or one of the DM RS and the CRS may be transmitted on downlink. When only the DM RS is transmitted without the CRS, an RS for channel measurement needs to be additionally provided because the DM RS transmitted using the same precoder as used for data can be used for demodulation only. For example, in 3GPP LTE(-A), a CSI-RS corresponding to an additional RS for measurement is transmitted to the UE such that the UE can measure channel state information. The CSI-RS is transmitted in each transmission period corresponding to a plurality of subframes, based on the fact that channel state variation with time is not large, unlike the CRS transmitted per subframe. FIG.4 illustrates an exemplary uplink subframe structure used in 3GPP LTE/LTE-A. Referring to FIG.4, a UL subframe can be divided into a control region and a data region in the frequency domain. One or more PUCCHs (physical uplink control channels) can be allocated to the control region to carry uplink control information (UCI). One or more PUSCHs (physical uplink shared channels) may be allocated to the data region of the UL subframe to carry user data. In the UL subframe, subcarriers spaced apart from a DC subcarrier are used as the control region. In other words, subcarriers corresponding to both ends of a UL transmission bandwidth are assigned to UCI transmission. The DC subcarrier is a component remaining unused for signal transmission and is mapped to the carrier frequency f0 during frequency up-conversion. A PUCCH for a UE is allocated to an RB pair belonging to resources operating at a carrier frequency, and the RBs belonging to the RB pair occupy different subcarriers in the two slots.
Assignment of the PUCCH in this manner is represented as frequency hopping of the RB pair allocated to the PUCCH at a slot boundary. When frequency hopping is not applied, the RB pair occupies the same subcarriers. The PUCCH can be used to transmit the following control information.

Scheduling Request (SR): This is information used to request a UL-SCH resource and is transmitted using an On-Off Keying (OOK) scheme.

HARQ ACK/NACK: This is a response signal to a downlink data packet on a PDSCH and indicates whether the downlink data packet has been successfully received. A 1-bit ACK/NACK signal is transmitted as a response to a single downlink codeword and a 2-bit ACK/NACK signal is transmitted as a response to two downlink codewords. HARQ-ACK responses include positive ACK (ACK), negative ACK (NACK), discontinuous transmission (DTX) and NACK/DTX. Here, the term HARQ-ACK is used interchangeably with the terms HARQ ACK/NACK and ACK/NACK.

Channel State Indicator (CSI): This is feedback information about a downlink channel. Feedback information regarding MIMO includes a rank indicator (RI) and a precoding matrix indicator (PMI).

The quantity of control information (UCI) that a UE can transmit through a subframe depends on the number of SC-FDMA symbols available for control information transmission. The SC-FDMA symbols available for control information transmission correspond to the SC-FDMA symbols of the subframe other than those used for reference signal transmission. In the case of a subframe in which a sounding reference signal (SRS) is configured, the last SC-FDMA symbol of the subframe is excluded from the SC-FDMA symbols available for control information transmission. A reference signal is used for coherent detection of the PUCCH. The PUCCH supports various formats according to the information transmitted thereon. Table 4 shows the mapping relationship between PUCCH formats and UCI in LTE/LTE-A.
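The symbol-counting rule above can be sketched as follows. The count of three RS symbols per slot is purely an illustrative assumption (the actual RS symbol count depends on the PUCCH format and CP length); the rule being shown is only "total symbols, minus RS symbols, minus the last symbol when an SRS is configured".

```python
# Rough sketch of the rule described above: SC-FDMA symbols available for UCI
# are those of the subframe not used for reference signals, and the last
# symbol is additionally excluded when an SRS is configured in the subframe.
# The default of 3 RS symbols per slot is an illustrative assumption only.

def uci_symbols(symbols_per_slot=7, rs_symbols_per_slot=3, srs_configured=False):
    total = 2 * symbols_per_slot             # two slots per subframe
    available = total - 2 * rs_symbols_per_slot
    if srs_configured:
        available -= 1                       # last SC-FDMA symbol reserved for SRS
    return available

print(uci_symbols())                     # 8
print(uci_symbols(srs_configured=True))  # 7
```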
TABLE 4

PUCCH    Modulation     Number of bits per
format   scheme         subframe, Mbit       Usage                        Etc.
1        N/A            N/A                  SR (Scheduling Request)
1a       BPSK           1                    ACK/NACK or SR + ACK/NACK    One codeword
1b       QPSK           2                    ACK/NACK or SR + ACK/NACK    Two codewords
2        QPSK           20                   CQI/PMI/RI                   Joint coding with ACK/NACK (extended CP)
2a       QPSK + BPSK    21                   CQI/PMI/RI + ACK/NACK        Normal CP only
2b       QPSK + QPSK    22                   CQI/PMI/RI + ACK/NACK        Normal CP only
3        QPSK           48                   ACK/NACK or SR + ACK/NACK
                                             or CQI/PMI/RI + ACK/NACK

Referring to Table 4, PUCCH formats 1/1a/1b are used to transmit ACK/NACK information, PUCCH formats 2/2a/2b are used to carry CSI such as CQI/PMI/RI, and PUCCH format 3 is used to transmit ACK/NACK information.

Reference Signal (RS)

When a packet is transmitted in a wireless communication system, signal distortion may occur during transmission since the packet is transmitted through a radio channel. To correctly receive a distorted signal at a receiver, the distorted signal needs to be corrected using channel information. To detect the channel information, a signal known to both the transmitter and the receiver is transmitted, and the channel information is detected from the degree of distortion of the signal when the signal is received through the channel. This signal is called a pilot signal or a reference signal. When data is transmitted/received using multiple antennas, the receiver can receive a correct signal only when the receiver is aware of the channel state between each transmit antenna and each receive antenna. Accordingly, a reference signal needs to be provided per transmit antenna, more specifically, per antenna port. Reference signals can be classified into an uplink reference signal and a downlink reference signal. In LTE, the uplink reference signal includes: i) a demodulation reference signal (DMRS) for channel estimation for coherent demodulation of information transmitted through a PUSCH and a PUCCH; and ii) a sounding reference signal (SRS) used for an eNB to measure uplink channel quality over a different frequency band.
The downlink reference signal includes:
i) a cell-specific reference signal (CRS) shared by all UEs in a cell;
ii) a UE-specific reference signal for a specific UE only;
iii) a DMRS transmitted for coherent demodulation when a PDSCH is transmitted;
iv) a channel state information reference signal (CSI-RS) for delivering channel state information (CSI) when a downlink DMRS is transmitted;
v) a multimedia broadcast single frequency network (MBSFN) reference signal transmitted for coherent demodulation of a signal transmitted in MBSFN mode; and
vi) a positioning reference signal used to estimate geographic position information of a UE.

Reference signals can be classified into a reference signal for channel information acquisition and a reference signal for data demodulation. The former needs to be transmitted in a wide band as it is used for a UE to acquire channel information on downlink transmission, and it is received by a UE even if the UE does not receive downlink data in a specific subframe. This reference signal is used even in a handover situation. The latter is transmitted along with a corresponding resource by an eNB when the eNB transmits a downlink signal and is used for a UE to demodulate data through channel measurement. This reference signal needs to be transmitted in a region in which data is transmitted.

NR (New Radio Technology)

Although the structure, operation and functions of the 3GPP LTE(-A) system have been described above, the corresponding structure, operation and functions are slightly modified in NR. Some of these modifications are briefly described below. In NR, various numerologies are supported. For example, subcarrier spacings of not only 15 kHz but also 2^n times 15 kHz (n=1, 2, 3, 4) are supported. The number of OFDM symbols per slot (hereinafter simply referred to as “symbols”) is fixed to 14, but the number of slots in one subframe is 2^k (k=0, 1, 2, 3, 4, 5).
However, as in the existing LTE system, a radio frame is composed of 10 subframes. In the case of the extended CP, the number of symbols per slot is fixed to 12, and one subframe consists of 4 slots. In addition, as in the existing LTE system, one resource block is defined as 12 consecutive subcarriers in the frequency domain. Also, the purpose (for example, downlink, uplink, or flexible) of each symbol in a slot is defined according to the slot format, and both downlink symbols and uplink symbols can be set in one slot; this case is referred to as a self-contained subframe (or slot) structure. In FD-MIMO of LTE and MIMO of New RAT, discussion on an aperiodic CSI-RS (A-CSIRS) is in progress. The A-CSIRS corresponds to a CSI-RS transmitted at specific timing (e.g., a subframe, a slot, etc.). A UE is informed of the timing at which the A-CSIRS is transmitted via DCI so that the UE uses the corresponding RS for measuring CSI. In particular, when the A-CSIRS is transmitted, it is necessary to consider a method of handling a data symbol which would be transmitted at the time/position at which the corresponding RS is transmitted. A scheme used in LTE corresponds to a scheme of using rate matching (RM). In particular, when rate matching is performed on a data symbol in an A-CSIRS RE, operations of a base station (BS) and a user equipment (UE) are described in the following. When the BS performs mapping on an RE of the PDSCH, the BS sets a zero-power (ZP) CSI-RS (ZP-CSI-RS) including an RE pattern of an RS transmitting an A-CSIRS to the UE. The BS performs RE mapping under the assumption that the PDSCH is not transmitted in a ZP-CSI-RS RE and may then transmit the PDSCH. And, the BS transmits the A-CSIRS in an A-CSIRS RE. The UE performs decoding on the PDSCH by assuming the transmission operation of the BS. In particular, the UE performs decoding under the assumption that the PDSCH is not mapped to a PDSCH muting RE to which a ZP-CSI-RS is set from the beginning.
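The rate-matching behaviour just described can be sketched abstractly: when mapping PDSCH symbols to resource elements, both the BS and the UE skip the REs covered by the configured ZP-CSI-RS pattern, so the data is "rate-matched around" the reserved REs. RE indices here are abstract integers, an assumption for illustration.

```python
# Minimal sketch of rate matching around a ZP-CSI-RS: PDSCH modulation symbols
# are mapped only to allocated REs not covered by the ZP-CSI-RS pattern, and
# the UE decodes under the same assumption. RE numbering is illustrative.

def map_pdsch(symbols, allocated_res, zp_csi_rs_res):
    """Map PDSCH symbols to allocated REs, skipping ZP-CSI-RS REs."""
    zp = set(zp_csi_rs_res)
    usable = [re for re in allocated_res if re not in zp]
    return dict(zip(usable, symbols))      # RE -> modulation symbol

mapping = map_pdsch(["s0", "s1", "s2"], [0, 1, 2, 3, 4], zp_csi_rs_res=[1, 3])
print(mapping)  # {0: 's0', 2: 's1', 4: 's2'}
```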
A semi-persistent (SP) CSI-RS (SP-CSI-RS) is also considered in the FD-MIMO of LTE and the MIMO of New RAT. Similar to the A-CSIRS, the SP-CSIRS uses a method of transmitting a CSI-RS in a prescribed time period via enable/disable signaling and has the characteristic that whether or not a CSI-RS is transmitted varies according to timing. In order to use the scheme above, it is necessary for a base station and a UE to have signaling and configuration for using rate matching. In particular, since it is able to dynamically transmit the A-CSIRS in every subframe, it is necessary to have dynamic signaling (e.g., signaling such as DCI via PDCCH) corresponding to the A-CSIRS and a ZP-CSI-RS configuration via higher layer signaling for the A-CSIRS. In the following, ‘rate matching’ can be simply referred to as ‘RM’. And, a ZP CSI-RS or an NZP CSI-RS may correspond to a resource in which a ‘CSI-RS’ is transmitted or may correspond to both a CSI-RS and a resource in which the CSI-RS is transmitted.

ZP-CSI-RS Configuration Method for Rate-Matching

For the aforementioned rate matching signaling, it may be able to define a configuration described in the following, and the configuration can be set to a base station and a user equipment.

1. Configuration of ‘Rate Matching Setting’

Option 1: A rate matching setting corresponds to a set of one or more ‘links’ designating a ZP-CSI-RS (or a resource group) to be used for performing rate matching in a ‘resource setting’ shared with a measurement setting for the CSI acquisition and/or beam management framework. FIG.5 illustrates a rate matching setting having a resource setting shared with a ZP-CSI-RS resource. In FIG.5, each link corresponds to a ZP-CSI-RS resource group. In particular, a plurality of resource settings can be set to each link as a rate matching pattern (refer to link 2 of the ‘rate matching setting’ of FIG.5).
In this case, an actually applied ZP-CSI-RS RE pattern corresponds to a union of a plurality of configured ZP-CSI-RS resource RE patterns. A resource setting corresponds to a set of RS RE pattern candidates for the ZP-CSI-RS. Each resource setting can include a different type of RS (e.g., DMRS, SRS, etc.). For the resource setting, it may reuse an RS RE pattern for an NZP-CSI-RS defined for the CSI acquisition and/or beam management framework. In this case, although the NZP-CSI-RS is used, if a resource is linked in the rate matching setting, a base station and a UE automatically interpret the resource as a ZP-CSI-RS.

Option 2: A rate matching setting corresponds to a set of one or more ‘links’ designating a ZP-CSI-RS (or a resource group) to be used for performing rate matching in a ‘resource setting’ configured irrespective of a measurement setting for the CSI acquisition and/or beam management framework. FIG.6 illustrates a rate matching setting having a resource setting independent of a ZP-CSI-RS resource configuration. In FIG.6, each link corresponds to a ZP-CSI-RS resource group. In particular, a plurality of resource settings can be set to each link as a rate matching pattern (refer to link 2 of the ‘rate matching setting’ of FIG.6). In this case, an actually applied ZP-CSI-RS RE pattern corresponds to a union of a plurality of configured ZP-CSI-RS resource RE patterns. A resource setting corresponds to a set of RS RE pattern candidates for the ZP-CSI-RS. Each resource setting can include a different type of RS (e.g., DMRS, SRS, etc.). The resource setting includes M (M>=1) candidate ZP-CSI-RS patterns. In particular, for clarity of configuration and signaling (e.g., in order to reduce overhead), it may be able to define a ZP-CSI-RS pattern for performing RM using only a part of the available CSI-RS RE pattern candidates. In particular, a resource setting for rate matching can include all or a part of the available CSI-RS RE patterns.
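The union rule stated above ("the actually applied ZP-CSI-RS RE pattern corresponds to a union of the configured ZP-CSI-RS resource RE patterns") can be sketched directly; the (symbol, subcarrier) tuple representation of an RE is an assumption for illustration.

```python
# Sketch of the union rule for a link carrying several ZP-CSI-RS resource
# RE patterns. Each RE is represented as an illustrative (symbol, subcarrier)
# tuple; overlapping REs are rate-matched only once.

def applied_zp_pattern(linked_resource_patterns):
    """Union of all RE patterns linked in one rate matching setting."""
    union = set()
    for pattern in linked_resource_patterns:
        union |= set(pattern)
    return union

link2 = [
    [(5, 0), (5, 1)],          # ZP-CSI-RS resource pattern A
    [(5, 1), (6, 0)],          # ZP-CSI-RS resource pattern B (overlaps A)
]
print(sorted(applied_zp_pattern(link2)))  # [(5, 0), (5, 1), (6, 0)]
```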
For example, a ZP-CSI-RS RE pattern can include a pattern assuming only a specific number of antenna ports (e.g., 4 ports) among the CSI-RS patterns. The resource setting can be forwarded to the UE via higher layer signaling such as RRC.

Other Configurations:

A frequency granularity configuration (i.e., wideband/partial band/subband) can be included in each link. In the present specification, the frequency granularity corresponds to a unit of frequency allocation. For example, if the frequency granularity is configured as a wideband, frequency allocation corresponds to x resource blocks. If the frequency granularity is configured as a partial band, frequency allocation corresponds to y resource blocks. If the frequency granularity is configured as a subband, frequency allocation may correspond to z resource blocks. In this case, x>y>z, and x, y and z correspond to integers. In general, the frequency granularity may correspond to a unit of frequency allocation for a single UE in a serving cell. Data, signals, and the like can be transmitted by a base station or a serving cell within the aforementioned number of resource blocks. Moreover, the frequency granularity can be comprehended as a unit of frequency allocation different from the aforementioned frequency allocation or a unit of the frequency domain. In particular, it may be able to configure resources having a plurality of frequency configurations different from each other. For example, it may be able to configure a wideband ZP-CSI-RS resource and a partial band ZP-CSI-RS resource. If an additional frequency granularity-related configuration is not provided, a base station and a UE follow a frequency granularity included in a designated ZP-CSI-RS RE pattern.
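The three frequency granularities above can be illustrated as a mapping to RB counts under the stated constraint x > y > z. The concrete RB values below are assumptions chosen purely for illustration, not specification values.

```python
# Illustrative mapping of frequency granularity to a number of resource
# blocks, under the constraint x > y > z stated above. The concrete values
# are assumed examples only.

GRANULARITY_RBS = {"wideband": 100, "partial_band": 24, "subband": 4}  # x > y > z

def allocation_rbs(granularity: str) -> int:
    """Return the assumed RB count for a configured frequency granularity."""
    return GRANULARITY_RBS[granularity]

print(allocation_rbs("partial_band"))  # 24
```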
If a frequency granularity is not included in the ZP-CSI-RS RE pattern, the UE can perform data transmission and reception under the assumption that rate matching is performed on all scheduled bands. A time configuration (i.e., aperiodic/semi-persistent/periodic) can be included in each link. More specifically, an aperiodic/semi-persistent/periodic ZP-CSI-RS can be comprehended as follows.

Aperiodic ZP-CSI-RS: An aperiodic ZP-CSI-RS is indicated to a UE via L1 signaling such as DCI. Rate matching can be performed on a corresponding resource pattern only in a slot in which the L1 signaling is transmitted or in a specific slot(s) designated by the L1 signaling. In this case, aperiodic ZP-CSI-RS signaling via DCI can designate a semi-persistent ZP-CSI-RS resource or a periodic ZP-CSI-RS resource (i.e., a configuration or a setting to which a period/offset is set). In this case, it may ignore the configured period/offset.

Semi-persistent ZP-CSI-RS: A semi-persistent ZP-CSI-RS enables/disables a rate matching operation on a resource(s) designated via L1 and/or L2 signaling among ZP-CSI-RS resource(s) designated via L2 and/or L3 signaling. In this case, it may perform rate matching on a corresponding resource with a designated period/offset while the rate matching operation is enabled.

Periodic ZP-CSI-RS: A periodic ZP-CSI-RS is similar to the semi-persistent ZP-CSI-RS. However, separate enable/disable signaling does not exist for the periodic ZP-CSI-RS. The periodic ZP-CSI-RS operates as if a rate matching operation is always enabled.

In particular, it may be able to configure resources having a plurality of time configurations different from each other. For example, it may be able to configure an aperiodic ZP-CSI-RS resource and a semi-persistent ZP-CSI-RS resource.

2. Case that Rate Matching Setting is Included in Measurement Setting

A ZP-CSI-RS link is configured under a framework of a measurement setting for the CSI acquisition and/or beam management framework.
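The three time configurations described above can be sketched behaviourally: aperiodic applies only in the slot(s) designated by L1 signaling, semi-persistent follows the configured period/offset while enabled, and periodic behaves as if always enabled. This is an illustrative model (field names and slot arithmetic are assumptions), not specification pseudocode.

```python
# Behavioural sketch of aperiodic/semi-persistent/periodic ZP-CSI-RS rate
# matching in a given slot. The config dict layout is an assumed example.

def rate_match_in_slot(slot, cfg):
    kind = cfg["type"]
    if kind == "aperiodic":
        return slot in cfg["designated_slots"]   # only L1-designated slots
    if kind == "semi-persistent" and not cfg.get("enabled", False):
        return False                             # disabled: no rate matching
    # semi-persistent (enabled) and periodic follow the period/offset
    return slot % cfg["period"] == cfg["offset"]

sp = {"type": "semi-persistent", "period": 5, "offset": 2, "enabled": True}
print(rate_match_in_slot(7, sp))   # True  (7 % 5 == 2)
print(rate_match_in_slot(8, sp))   # False
```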
A resource setting corresponds to a set of RS RE pattern candidates for the entire CSI-RSs (NZP and/or ZP CSI-RS). A different type of RS pattern (e.g., DMRS, SRS, etc.) can be included in the resource setting. When a link is configured for a ZP-CSI-RS, it may reuse an RS RE pattern for an NZP-CSI-RS, which is defined for the CSI acquisition and/or beam management framework. In this case, although the NZP-CSI-RS is used, if a resource is linked in the rate matching setting, a base station and a UE can automatically interpret the resource as a ZP-CSI-RS. FIG.7 illustrates a ZP-CSI-RS configuration for performing rate matching included in a measurement setting. Similar to link 4 or link 5 of FIG.7, if a reporting setting is not linked with a specific resource setting, or a separately configured ‘rate matching setting’ rather than the reporting setting is linked with the specific resource setting in a measurement setting, a CSI-RS resource (or resource group) designated by the link can be comprehended as a ZP-CSI-RS pattern dedicated for rate matching. In this case, a link for performing rate matching can share an index of the link with a link for performing CSI measurement/beam management within the measurement setting.

3. Case that Rate Matching Setting is Included in Measurement Setting Having Individual Resource Setting

Although the present case is similar to the aforementioned case that a rate matching setting is included in a measurement setting, according to the present case, it is able to configure a separate resource setting for a ZP-CSI-RS for performing rate matching. In this case, the entire PDSCH region as well as a set of RSs may become a target.

4.
Case that Rate Matching Setting is Included in Resource Setting

A 1-bit indicator is allocated to each resource (set) included in a resource setting to configure whether or not the resource is used for rate matching. FIG.8 illustrates an example of allocating a rate matching setting (i.e., an indicator indicating whether or not a resource setting is used for rate matching) to a resource setting. A UE assumes that rate matching is performed on all resources (or resource sets) for which the 1-bit indicator set in the resource setting is configured as ‘RM on’. The indicator can be commonly set to a ZP-CSI-RS and an NZP-CSI-RS. Both the ZP-CSI-RS and the NZP-CSI-RS can be used as a rate matching pattern. In order to avoid transmitting data in an RE in which an NZP-CSI-RS is transmitted, the indicator can be configured to be used for the ZP-CSI-RS only. A UE or a base station can be configured to perform rate matching on the NZP-CSI-RS by default. In particular, the NZP-CSI-RS on which the rate matching is performed by default can be restricted to an NZP-CSI-RS resource included in a link configured to perform beam management/CSI acquisition among NZP-CSI-RS resources. More specifically, a UE or a base station can perform rate matching on both an NZP-CSI-RS configured to measure a channel and an NZP-CSI-RS configured to measure interference by default. For the flexibility of interference measurement, when an NZP-CSI-RS is configured to measure interference, if there is no additional signaling/configuration, a UE or a base station may not perform rate matching. In this case, a time/frequency-related configuration may follow the corresponding NZP-CSI-RS configuration. In order to use an NZP-CSI-RS for performing rate matching, it may use a separate time/frequency configuration. In this case, it may use a higher unit (e.g., aperiodic->semi-persistent->periodic, partial band->wideband). To this end, it may use a separate indicator.
For example, if a 1-bit indicator is included in an NZP-CSI-RS, the indicator can be comprehended as a ‘cell-specific CSI-RS resource’. Since all UEs belonging to a cell are able to use the indicator for the purpose of channel measurement and the like, a UE may operate under the assumption that the NZP-CSI-RS is always transmitted in the corresponding resource. In particular, if the indicator indicates ‘on’, the corresponding resource can be comprehended as a semi-persistent/periodic ZP-CSI-RS irrespective of a time configuration for the NZP-CSI-RS. A UE or a base station can perform rate matching on the NZP-CSI-RS.

Method of Configuring Resource Pattern for ZP-CSI-RS to Perform Rate Matching

1. A Resource Pattern is Configured According to a Resource Configuration Method of a Different RS.

One of the objects of performing RM is to protect an NZP-CSI-RS. Hence, a method of configuring an RM resource may basically use a resource configuration method of a CSI-RS. To this end, it may define component RM resources and may be able to configure a resource on which RM is to be performed by aggregating the component RM resources. For example, since it is able to define an RMR to protect a CSI-RS, it is able to define a component RM resource having an RE configuration identical to an RE configuration of a component CSI-RS resource(s) for a CSI-RS. In this case, a component RM resource for an RMR can support only a partial form or a specific form among all component CSI-RS resources. For example, since a component RM resource is considered to be used as an IMR, the component RM resource may have a resource unit (e.g., 2 REs) identical to the minimum unit of the IMR. As a different example, among component CSI-RS resource patterns consisting of 2 REs, it may use such a pattern as 2*1. In other words, it may use a pattern in which two subcarriers adjacent to each other are used in one symbol.
This pattern is appropriate for performing an operation of protecting such an RS as a CSI-RS for managing a beam (i.e., a CSI-RS defined in an arbitrary number of symbols). When two or more component RM resources are defined, it may be able to configure a resource to be used (e.g., a resource length and/or an RE position (e.g., direction)) among the component RM resources. For example, as shown in FIG.9, it may be able to configure a component RM resource to be used among a 1-RE component RM resource and a 2-RE component RM resource. When an RM operation is supported for an RS such as a PT (phase tracking)-RS, since the PT-RS is transmitted in every symbol or every other symbol using 1 subcarrier in the frequency direction, it may be able to use the component RM resource. If a CSI-RS does not use a 1-RE component resource, it may use a principle identical to a principle of aggregating 2-RE component resources. If component RM resources are aggregated, an RMR can be extended in units of a component resource in an order determined according to a configured port number. In this case, the order can include a position to which a component resource is assigned for a single RMR. Unlike a different RS, since an RMR does not need a port concept, a port number parameter set to the RMR may correspond to a size (e.g., number of REs) of a resource at which the RMR is defined. Or, it may define a different parameter having the same meaning. If it is able to define a CSI-RS at a plurality of symbol locations, it may be able to configure a ‘symbol location’ capable of configuring a location of an RMR in a slot (e.g., a location of a first symbol of the RMR). The symbol location can be configured at a location within a region at which an RS is defined. If an RMR according to the present invention is used as an IMR, similar to a DMRS, it is necessary to be able to set the RMR to a location of a different RS rather than a CSI-RS.
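The component-resource aggregation described above can be sketched as follows, with the configured "port number" interpreted, as the text states, as the size in REs of the RMR. The component positions and 2*1 component shape (two adjacent subcarriers in one symbol) follow the example in the text; everything else is an illustrative assumption.

```python
# Hedged sketch of aggregating 2*1 component RM resources (two adjacent
# subcarriers in one symbol) into an RMR, where the "port number" parameter
# is taken as the RMR size in REs. Component positions are illustrative.

def build_rmr(component_positions, rmr_size_in_res, res_per_component=2):
    """Aggregate components in the configured order until the RMR size is met."""
    n_components = rmr_size_in_res // res_per_component
    rmr = []
    for sym, sc in component_positions[:n_components]:
        # each 2*1 component occupies two adjacent subcarriers in one symbol
        rmr += [(sym, sc), (sym, sc + 1)]
    return rmr

print(build_rmr([(4, 0), (4, 6), (8, 0)], rmr_size_in_res=4))
# [(4, 0), (4, 1), (4, 6), (4, 7)]
```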
In particular, when an RM resource is set to a symbol in consideration of configuration flexibility of an RMR (e.g., an RM object for a different RS), the symbol is not restricted to a symbol at which a CSI-RS is defined. It may be preferable to cover the entire slot. FIG.10 illustrates an example of a symbol location for an RM pattern. If a symbol location configuration is performed on the entire slot, the symbol location configuration can be further simplified. For example, since RM is mainly performed to protect a CSI-RS, the RM can be divided into a case of performing RM resource configuration on a CSI-RS and a case of performing RM resource configuration on a resource other than the CSI-RS. In particular, a symbol location can be configured by either a ‘CSI-RS region’ or a ‘non-CSI-RS region’. By doing so, a resource region designated by each symbol location may be different and may not be overlapped. The abovementioned method includes a method for covering characteristics of RSs different from each other. In this case, it may be able to bind several configuration parameters according to a characteristic of an RS to further simplify the configuration. In particular, it may set a configuration scheme of a resource to a UE. For example, a ‘resource configuration type indication’ parameter indicating a scheme to be used among the CSI-RS/DMRS/PT-RS types can be set to a UE. Each of the resource configuration types can be defined by a set of specific parameters. For example, if the resource configuration type parameter indicates a ‘CSI-RS scheme’, as mentioned in the foregoing description, it may configure a resource pattern by aggregating 2*1 component RM resources. Since a PT-RS is transmitted using one subcarrier within an RB in every symbol or every other symbol, if the resource configuration type parameter indicates a PT-RS type, 1 RE is used as a component RM resource and it may set a limit on a region within an RB in which an RMR is configurable.
If the resource configuration type parameter indicates a DMRS type, an RE pattern according to a port number can be defined as a component RM resource (e.g., a port number, a front/back loaded DMRS, an additional DMRS configuration). The resource configuration type indication can include the parameters described in the following.

Component RM Resource(s)

Region (e.g., symbol/subcarrier) within RB at which RMR is defined: An RMR region may vary depending on the RS for which the RMR is defined. Hence, a symbol/subcarrier location can be differently configured according to an RS type such as a CSI-RS candidate position(s), a DMRS candidate position(s), a PT-RS position(s), and the like.

RB-Level Density: A base density and a configurable density value(s) can be differently configured according to an RS type.

Region at which RMR Pattern is Defined (e.g., 1 RB or 2 RBs)

When a parameter is configured according to the ‘resource configuration type indication’, the parameter can be defined in advance or can be configured via higher layer signaling such as RRC. More specifically, when an RMR is defined for a PT-RS, it may additionally configure a parameter described in the following.

Symbol spacing: determines the symbols in which an RMR for a PT-RS is transmitted within a subcarrier. For example, it may consider 1-symbol spacing (i.e., a ZP resource for the PT-RS is configured in all symbols within a PDSCH region), 2-symbol spacing or 4-symbol spacing.

In order to correct inter-symbol phase drift in a slot, it may be able to configure a TRS (tracking RS). The TRS can also be set to a plurality of UEs. In order to precisely measure the phase drift, a method of configuring/signaling rate matching to be performed on the TRS as well is being considered. The rate matching of the TRS can also be performed using the aforementioned method. Rate matching can also be configured for a CORESET when transmitting PDSCH. The CORESET corresponds to a resource region in which a PDCCH is transmittable.
A plurality of UE-specifically configured CORESETs can be set to a UE. A base station informs the UE of a CORESET (in which to search for the PDCCH) to be actually used among a plurality of the CORESETs. An REG (12 REs per symbol) and a search space, which are defined in the wideband/PDCCH region of legacy LTE, are defined in the CORESET. Since the CORESET is defined for a band equal to or narrower than a bandwidth part, the remaining resources in a symbol at which the CORESET is defined can be used for transmitting PDSCH. Since it is able to transmit a PDCCH of a different UE using the resource, it may consider a method of configuring/signaling rate matching to be performed on the resource as well. The rate matching of the CORESET can also be performed using the aforementioned method. It may also be able to perform rate matching on a synchronization signal (SS) block (SSB) using the aforementioned method. The SS block corresponds to a resource block including a PSS/SSS/PBCH. One or more SS blocks are set to a UE, and a base station can designate an SS block to be actually used. When a resource region is occupied by a specific SS block, if the resource region is not actually used by the SS block, the resource region can be used for transmitting PDSCH. It may consider a method of configuring/signaling rate matching to be performed on the resource as well, in consideration of the case that the resource is occupied by a different resource. The rate matching of the SS block can also be performed using the aforementioned method.
In particular, the ‘resource configuration type indication’ can additionally include a target resource and the following parameters.
- TRS
  - Symbol-level transmission interval within a slot
  - TRS subcarrier position
- CORESET
  - PDCCH duration (number of PDCCH symbols)
  - RB-level allocation
    - RB starting/ending index or RB length
    - RB-level bitmap
  - CORESET index
- SS block
  - SS block index
If such a configuration as ‘PDSCH’ or ‘unrestricted’ is defined as a resource configuration type indication, one may consider a method of defining an RMR that cannot be covered by the aforementioned RS configuration methods, or a method of defining an RMR irrespective of any RS configuration. In the case of an RMR for a PT-RS, the operation of activating the RMR, i.e., performing rate matching on the RMR, may be tied to DCI signaling instead of aperiodic RMR signaling. For example, when signaling indicating that a UE is performing MU (multi-user) operation, or explicit DCI signaling for the MU operation, is received, the UE can perform the rate matching operation on the RMR. When rate matching is performed on a resource that covers a plurality of RS patterns, if a plurality of RMRs having the RS patterns on which the rate matching is to be performed are configured, and the aforementioned RM configuration, or a link corresponding to the RM configuration, is tied to the plurality of RMRs, the rate matching can be configured to be performed on all RMRs that constitute the resource pattern on which rate matching is to be performed. In this case, the resource on which rate matching is performed is regarded as the union of the configured RMRs. In particular, an overlapped resource is rate-matched only once.
2. A Resource Pattern is Configured by a Bitmap.
Similar to LTE, a resource on which RM is performed can be defined via a bitmap.
A predefined grid is defined within an RB (group), and a bitmap is defined with as many bits as the number of grid elements defined in the RB (group). Each bit of the bitmap is tied to one grid element to configure an RMR. FIG. 11 illustrates a resource grid indicating an RMR or an RMR pattern within 1 RB and the number of bits necessary for expressing the resource grid. Two or more resource grid element configurations (e.g., a resource grid element size, an RE configuration shape within a resource grid element) can be defined. For example, as shown in FIG. 12, the resource grid element size may be selected from among a 1-RE resource grid element and a 2-RE resource grid element. In the case of supporting an RM operation for an RS such as a PT-RS, since an RS using 1 subcarrier is transmitted in every symbol or in alternating symbols in the frequency direction, the abovementioned resource grid element size may be used to support the RM operation. In particular, a different resource grid element configuration may be defined according to the region within a slot. For example, since an RM operation in a CSI-RS region is performed to protect a CSI-RS, it is efficient for the RMR to follow a component configuration such as a component resource of a CSI-RS. On the contrary, a DMRS can be transmitted in a different region, and the position in which the DMRS is transmitted may vary according to the use of a comb structure or an MU port. In this case, the resource element size for the DMRS may be different from the resource element size for a CSI-RS resource. FIG. 13 illustrates a resource grid element configuration configured in two regions within a slot. As with CSI-RS/DMRS/PDSCH, the resource grid element size can be configured differently according to the target of the RM operation and the location to which the target is set.
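As a rough illustration of the bitmap sizing above, the number of bitmap bits equals the number of grid elements, so a larger element size shrinks the bitmap (cf. FIG. 11/FIG. 12). The sketch below is illustrative only; ceiling division for element sizes that do not divide the grid evenly is an assumption, not stated in the text.

```python
import math

def bitmap_bits(n_subcarriers=12, n_symbols=14, elem_f=1, elem_t=1):
    """Number of bitmap bits for a grid of the given element size.
    Ceiling division (an assumption here) covers non-dividing sizes."""
    return math.ceil(n_subcarriers / elem_f) * math.ceil(n_symbols / elem_t)

one_re = bitmap_bits()            # 1-RE elements over 12x14: one bit per RE
two_re = bitmap_bits(elem_f=2)    # 2-RE (frequency-paired) elements: half as many bits
```

With 1-RE elements the full slot needs 12*14 = 168 bits, while 2-RE elements halve this to 84, which is the overhead trade-off the text describes.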
Similarly, since RM can be performed on a specific RE group (e.g., the even-numbered REs of a comb DMRS structure within a symbol), such a specific RE group may correspond to 1 bit irrespective of the resource grid element size. The hybrid resource grid can be defined in advance or configured via higher layer signaling such as RRC. The resource grid for RM can be defined differently according to the configuration of the resource grid element size. This aims to divide resource configurations into a high-granularity resource configuration covering a limited area and a low-granularity resource configuration covering a larger area (e.g., a slot). In particular, it is possible to cover high resolution/low resolution and a larger area/limited area while maintaining a configuration bit size of (almost) the same length. For example, if a 1-RE RM component resource is defined, each bit of the bitmap is defined over the resource grid shown in FIG. 14(a). If a 2-RE RM component resource is defined, each bit of the bitmap is defined over the resource grid shown in FIG. 14(b). In FIG. 14, the hatched region corresponds to a candidate region capable of being configured as an RMR. In order to configure the bitmap efficiently, a bitmap for indicating a frequency resource region (hereinafter, a frequency bitmap) can be configured with x bits and a bitmap for indicating a time resource region (hereinafter, a time bitmap) can be configured with y bits. In this case, an RE pattern in the frequency direction is defined within a symbol using the frequency bitmap, and the RM pattern applies that RE pattern in the symbols designated by the time bitmap. For example, as shown in FIG. 15, in order to support the whole of one slot at the RE level, a bitmap of x=12 and y=14 may be supported.
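A minimal sketch of the x=12/y=14 expansion above, under the intersection interpretation (the RE pattern given by the frequency bitmap repeats in every symbol enabled by the time bitmap). The bit-to-index convention (bit i marks subcarrier/symbol i) is an assumption for illustration.

```python
def expand_rm_pattern(freq_bitmap, time_bitmap):
    """Return the set of (symbol, subcarrier) REs of the RM pattern:
    enabled subcarriers within enabled symbols (intersection)."""
    return {(sym, sc)
            for sym, t in enumerate(time_bitmap) if t
            for sc, f in enumerate(freq_bitmap) if f}

# x=12 subcarriers, y=14 symbols: subcarriers 0 and 6 enabled in symbols 2 and 3
freq = [1 if sc in (0, 6) else 0 for sc in range(12)]
time = [1 if sym in (2, 3) else 0 for sym in range(14)]
pattern = expand_rm_pattern(freq, time)
```

The same RE pattern within a symbol thus repeats across all time-bitmap-enabled symbols, which is the assumption the following paragraph makes explicit.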
If the frequency bitmap and the time bitmap are transmitted bundled as a single bitmap of x+y bits in total, the x (or y) bit(s) located at the front of the bitmap are interpreted as the frequency (or time) bitmap and the y (or x) bit(s) located at the rear are interpreted as the time (or frequency) bitmap, respectively. The method above can be applied generally to the current RS structures in which the RE pattern within a symbol is transmitted identically over a plurality of symbols. In this case, at least one bit of the time/frequency bitmap corresponds to 1. In other words, at least one symbol/subcarrier should be included, i.e., enabled, in the RM pattern. If the RMR is restricted to a prescribed region and/or the resource grid element configuration is changed, the x bits and/or the y bits can be reduced in accordance with the condition. For example, if an RMR does not cover the 2 symbols of a control channel region, y may become 12. In particular, the frequency resource grid element size or the time resource grid element size may become the resource unit (e.g., an RB or a slot) at which the RM pattern is defined (i.e., x=1 or y=1). In this case, a bitmap for the corresponding direction may not be defined (i.e., x=1 is redefined as x->0). The abovementioned configuration scheme can be used for an RMR configuration at the symbol level or the subcarrier level. In particular, information on whether the configuration scheme corresponds to the time direction or the frequency direction can be set to a UE. When an RM pattern is configured via a time bitmap and a frequency bitmap, the RM pattern is used under the assumption that the RE pattern within a symbol is used identically in every symbol designated by the time bitmap. Hence, in order to use an RM pattern that uses only a part of the corresponding REs while having the same time bitmap and the same frequency bitmap, an RM subset pattern may be configured.
In particular, since the pattern of an RMR for protecting an RS is in general determined according to a prescribed rule, the patterns described in the following may be configured as the actual RM subset pattern. FIG. 16 illustrates the RM subset patterns described in the following.
(a) Base pattern: the RM pattern is defined by including all REs corresponding to the given time bitmap and frequency bitmap.
(b) Checkerboard pattern: the RM pattern is defined by alternating REs with a prescribed period among the REs corresponding to the given time bitmap and frequency bitmap.
- The period of alternately selecting REs can be fixed to a specific number (e.g., 2).
- In this case, an offset of the checkerboard pattern may additionally be configured.
(c) Diagonal pattern: the RM pattern is defined along a diagonal line among the REs corresponding to the given time bitmap and frequency bitmap.
- In this case, the direction (e.g., upper-right/bottom-right) of the diagonal line may additionally be configured.
In particular, although the same time-frequency bitmap is configured, if the RM subset pattern parameter is configured differently, as mentioned in the foregoing description, the actually used pattern can be restricted to the RE group indicated by the RM subset pattern parameter among the RE groups designated by the time-frequency bitmap. When an RE pattern is configured using the method above, a sort of repeating pattern may be configured. To this end, a repeating resource block can be defined, and the repeating resource block can be configured to be repeated over a prescribed extent (e.g., 1 RB). To this end, the size of the repeating resource block can be configured. In particular, the size (a) of the time resource and the size (b) of the frequency resource (e.g., the number of symbols and the number of subcarriers) can be configured respectively. The signaling unit may vary according to the aforementioned resource granularity (e.g., the resource grid size).
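The three subset patterns of FIG. 16 above can be sketched as follows. The period-2 checkerboard, the offset handling, and the diagonal indexing convention are assumptions for illustration; the text only fixes the general shape of each pattern.

```python
def subset_pattern(symbols, subcarriers, kind, offset=0):
    """symbols/subcarriers: sorted lists of indices enabled by the
    time/frequency bitmaps. Returns the restricted RE set."""
    if kind == "base":        # (a) all REs of the time-frequency bitmap
        return {(s, f) for s in symbols for f in subcarriers}
    if kind == "checker":     # (b) alternate REs with period 2 plus an offset
        return {(s, f) for i, s in enumerate(symbols)
                       for j, f in enumerate(subcarriers)
                       if (i + j + offset) % 2 == 0}
    if kind == "diagonal":    # (c) one RE per enabled symbol along a diagonal
        return {(s, subcarriers[i % len(subcarriers)])
                for i, s in enumerate(symbols)}
    raise ValueError(kind)

syms, scs = [2, 3], [0, 6]
base = subset_pattern(syms, scs, "base")
checker = subset_pattern(syms, scs, "checker")
diag = subset_pattern(syms, scs, "diagonal")
```

As the text notes, the same time-frequency bitmap yields different actual RM resources depending solely on the subset pattern parameter.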
A resource pattern can be defined using the bitmap described in the present invention (and/or a method of making the resource pattern configuration follow the resource configuration scheme of a different RS) within a given time/frequency resource. For example, when 2 symbols are used as a control channel within 1 RB, in other words, in a PDSCH region of 12 symbols/12 subcarriers, if a repeating resource block defined by a=6 symbols/b=6 subcarriers is defined, the repeating resource block is applied repeatedly in the time/frequency direction, as shown in FIG. 17. When the size of the available PDSCH region, i.e., the size of the resource region in which an RMR is configurable, corresponds to R, the time/frequency size of the repeating resource block can be defined as └R/n┘ (n is a natural number), and n can be configured for the time/frequency direction respectively using a′ and b′. For example, the method of FIG. 17 can be defined by the parameters a′=2, b′=2 instead of a=6, b=6. In this case, since a′/b′ indicates the repetition count within the corresponding resource, the size of the resource is indicated more clearly. In this case, if the boundary of the RB/slot is not matched with the boundary of a repeating resource block at the time of repetition, that repeating resource block is not allocated. In order to prevent this ambiguity, a′ and b′ can be configured as the repetition count of the configured repeating resource block irrespective of the configuration of the repeating resource block. Similarly, the position of the repeating resource block can be allocated via the aforementioned method by handling the position of the repeating resource block similarly to the resource element of the resource configuration described in the present specification.
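The a′/b′ sizing of FIG. 17 described above (a′=2, b′=2 in a 12x12 PDSCH region instead of a=6, b=6) can be sketched as follows; the origin convention for placing each repetition is an assumption for illustration.

```python
def block_size(region, reps):
    """Time/frequency size of the repeating block: floor(R / n)."""
    return region // reps

def tile_origins(region_t, region_f, a_rep, b_rep):
    """Top-left (symbol, subcarrier) origin of each repetition of the
    repeating resource block, tiled over the PDSCH region."""
    a = block_size(region_t, a_rep)
    b = block_size(region_f, b_rep)
    return [(t * a, f * b) for t in range(a_rep) for f in range(b_rep)], (a, b)

# 12-symbol x 12-subcarrier PDSCH region, a'=2 and b'=2 repetitions
origins, (a, b) = tile_origins(12, 12, 2, 2)
```

Because a′/b′ are repetition counts, the block size is derived rather than signaled, and no partial block can fall across the region boundary, which is the ambiguity the text addresses.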
For example, the position of the repeating resource block can be allocated by treating the repeating resource block as aggregated component RM resources, defining a resource grid element size identical to the size of the repeating resource block, and configuring the position of the repeating resource block with a bitmap according to that resource grid element size. In particular, the repetition of the resource can be defined excluding regions (e.g., symbols/subcarriers) in which a PDSCH is not transmitted, such as a CSI-RS for managing a beam, a DMRS, and the like. In the time axis, the repeating resource blocks may be filled from the rear part of a slot. Since the PDSCH region at the front part of a slot varies due to the control channel, a front-loaded DMRS, and the like, it is preferable to fill the repeating resource blocks from the rear part of a slot. As mentioned in the foregoing description, performing RM on the whole of a certain symbol/subcarrier may be considered rather than using a prescribed repetition pattern. In other words, the whole of the symbols/subcarriers configured via the time bitmap/frequency bitmap can be defined as the RMR. This can be configured by interpreting the time bitmap and the frequency bitmap as a union instead of an intersection. To this end, a UE may be informed, via a 1-bit indicator, whether the bitmaps are to be interpreted bit-wise or symbol/subcarrier-wise (i.e., whether the bitmaps correspond to an intersection or a union). Or, this configuration can be included in the aforementioned ‘RM subset pattern’ parameter. The bitmap configuration method can be defined for multiple RBs and multiple slots, respectively. For example, if an RMR pattern is defined for N RBs, the frequency bitmap size may become x (e.g., 12 for the number of subcarriers in an RB)*N bits.
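The 1-bit intersection/union indicator described above can be sketched as follows; the small 4x4 grid is illustrative only. Under the union interpretation, the whole of every enabled symbol and the whole of every enabled subcarrier become the RMR.

```python
def combine(freq_bitmap, time_bitmap, union):
    """Combine a frequency bitmap and a time bitmap into an RE set.
    union=False: enabled subcarriers within enabled symbols (bit-wise).
    union=True:  whole enabled symbols plus whole enabled subcarriers."""
    res = set()
    for sym, t in enumerate(time_bitmap):
        for sc, f in enumerate(freq_bitmap):
            if (t or f) if union else (t and f):
                res.add((sym, sc))
    return res

freq = [1, 0, 0, 0]   # subcarrier 0 enabled
time = [0, 1, 0, 0]   # symbol 1 enabled
inter = combine(freq, time, union=False)  # single RE at (1, 0)
uni = combine(freq, time, union=True)     # whole symbol 1 + whole subcarrier 0
```

The intersection yields 1 RE while the union yields 4+4−1 = 7 REs on this grid, which shows why the 1-bit indicator materially changes the rate-matched resource.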
If an RMR pattern is defined for M slots, the time bitmap size may become y (e.g., 14 for the number of symbols in a slot)*M bits. The number of RBs/slots can be configured by being included in the RMR configuration together. Among the aforementioned methods, a hybrid configuration method configures the whole of the RMR using both a high-granularity indication (i.e., with a large resource grid element size) and a low-granularity indication (i.e., with a small resource grid element size), where the low-granularity indication is configured within the resource region designated by the high-granularity configuration, to reduce signaling overhead. To this end, for example, a bitmap for a symbol-level RMR configuration is included in the RMR configuration, and a bitmap for an RE-level RMR configuration can additionally be included in the RMR configuration. On the contrary, another hybrid configuration method configures the whole of the RMR using both a high granularity and a low granularity, where the low-granularity indication is configured in a resource region other than the resource region designated by the high-granularity configuration. In both cases, for the bitmap of the low-granularity indication, the region over which the RE-level bitmap is defined varies according to the bitmap of the high-granularity indication. Hence, the length of the bitmap varies according to the high-granularity indication of the preceding RMR configuration. Two or more indication methods (e.g., granularity size, direction, etc.) can be used for each of the high-granularity indication method and the low-granularity indication method. In this case, the indication method to be used among the two or more indication methods can be included in the RMR configuration.
For example, although a hybrid configuration method in the symbol direction has been described in the foregoing description as an example, since the method can be used similarly in the frequency direction as well, a configuration indicating whether a bitmap corresponds to the time direction or the frequency direction can be included in the method. More specifically, two or more bitmaps can be included in the resource configuration method, and each bitmap can be configured to indicate not only the resource granularity of the bitmap but also the purpose of the bitmap.
- Resource granularity
  - High granularity (e.g., symbol)
  - Low granularity (e.g., RE)
- Purpose
  - Rate matching is performed on the whole of the resource designated by the bitmap.
  - The resource on which rate matching is to be actually performed is designated by an additional bitmap within the resource designated by the bitmap. In this configuration, the additional bitmap is provided for the designated resource only.
For example, when a symbol-level RM configuration bitmap is given, if ‘rate matching is performed on the whole of the resource’ is configured, the symbol-level RM configuration bitmap is used for the purpose of performing the rate matching operation on the whole of the designated symbols. In this case, an additional bitmap is not provided. On the contrary, if ‘the resource on which rate matching is to be actually performed is designated via an additional bitmap’ is configured, the actual RM resource can be configured in more detail via the aforementioned method within the resource consisting of the symbol group designated by the corresponding bitmap. In the hybrid configuration method, the ‘purpose’ of a bitmap can be configured only for the bitmap indicating high granularity. For the bitmap indicating low granularity, the operation of performing rate matching on the whole of the designated resource can be fixed.
If ‘rate matching is performed on the whole of the resource’ is configured as the ‘purpose’, as mentioned in the foregoing description, the bitmap of the low-granularity indication can be configured in a region other than the resource designated by the high-granularity indication. As a method of configuring a bitmap per symbol, a bitmap in the frequency direction is configured, and the symbol index to which the bitmap is applied and/or the number of symbols can be configured. If the number of symbols is configured together, the symbol index can be interpreted as a starting symbol index. The number of symbols may correspond to symbols adjacent to each other within the region (e.g., the PDSCH region) in which an RMR is configurable. The same method can be applied to the other dimension: for example, a bitmap per subcarrier can be configured together with a subcarrier index and/or the number of subcarriers. If the number of symbols (or symbol groups) to which an RMR is to be set using a bitmap is small, the method above can be used efficiently for configuring the RMR. An RMR designated by a plurality of bitmaps/resources operates as the union of the plurality of bitmaps/resources (unless there is an additional indication). In other words, an overlapped resource is rate-matched for the RMR only once. The resource region over which a high-granularity and/or low-granularity indication method configuration is to be defined may be determined according to the PDSCH starting and/or ending symbol. To this end, the PDSCH starting and/or ending symbol can be configured/signaled for an RMR. In this case, even if separate PDSCH starting and/or ending symbol signaling exists, a UE may ignore that signaling and use the RMR by assuming the PDSCH starting and/or ending symbol defined in the RMR.
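The union behavior of multiple configured RMRs described above (an overlapped resource is rate-matched only once) can be sketched as follows; representing REs as (symbol, subcarrier) tuples is an illustration choice.

```python
def effective_rm_resource(rmrs):
    """The resource actually rate-matched is the union of all configured
    RMRs, so an RE appearing in several RMRs is counted only once."""
    union = set()
    for rmr in rmrs:
        union |= set(rmr)
    return union

rmr_a = [(2, 0), (2, 6)]
rmr_b = [(2, 6), (3, 0)]   # overlaps rmr_a at RE (2, 6)
eff = effective_rm_resource([rmr_a, rmr_b])
```

Using a set makes the "rate-match once" rule automatic: the overlapping RE (2, 6) appears a single time in the effective resource.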
An RMR defined in the present invention is applied only to a region designated as a PDSCH region and is not applied to other regions (e.g., a PDCCH region, a UL region in a self-contained slot, etc.). For example, when a PDSCH starting and/or ending symbol is signaled, an RMR defined outside the designated region is not used. Or, the resource region over which an RMR is to be defined may additionally be configured via more dynamic signaling such as MAC/DCI. In particular, the region over which an RMR is to be defined within the PDSCH region can be configured via a high-granularity indication method such as the symbol and/or subcarrier level. As mentioned in the foregoing description, similar to the method of using the resource configuration method of a different RS as the resource pattern configuration for rate matching, a different resource configuration method may be used according to the type (e.g., CSI-RS, DMRS, PT-RS) of the RS. To this end, a ‘resource configuration type indication’ may be indicated. Each resource configuration type can define the aforementioned characteristics differently. In this case, such a configuration as ‘PDSCH’ or ‘unrestricted’ may be defined to freely define an RMR for a region incapable of being covered by the aforementioned RS configuration methods, or irrespective of any RS configuration. However, if an RMR is freely defined, a limit may be set on some parameters (e.g., resource grid element size >1).
3. Hybrid Configuration
If the aforementioned two methods (i.e., the method of using the resource configuration method of a different RS as the resource pattern configuration for rate matching and the method of using a bitmap) are supported at the same time, the method to be used among the two methods can be configured in the corresponding resource configuration.
In particular, the resource configuration is integrated with the resource configuration type indication of the method of using the resource configuration of a different RS, and can be used as a parameter such as a sort of ‘RMR configuration method’. If the method above is used for a periodic/semi-persistent RMR, a time-wise comb configuration can be configured. The time-wise comb configuration defines, with a prescribed pattern, the slots in which RM is performed within a given period. For example, in the case of a semi-persistent RMR with a period of p, the slots in which the RM operation is used by the RMR within a single period can be designated using a bitmap of p bits. In this case, each bit corresponds one-to-one to a slot within the period. For example, if the RM operation is used by the corresponding RMR in the first and second slots using a configuration with a period of 5 slots and an offset of 0 (slots), the bitmap configuration is defined as ‘11000’. In this case, the RM operation for the RMR is used at slot indexes 0, 1, 5, 6, 10, 11, . . . . An RB-level density of an RMR may also be defined. For example, if density=½ is defined, the RMR can be used only in the even-numbered RBs or only in the odd-numbered RBs. In this case, in order to indicate the resources to which the RMR pattern is applied (e.g., even-numbered RBs or odd-numbered RBs), an RB offset value can be configured. In particular, when an RMR for a PT-RS is configured, the presence/non-presence and the configuration of the PT-RS may vary according to the scheduling of a UE. Hence, a parameter for determining the pattern of the PT-RS (e.g., a density parameter) can be dynamically transmitted to a UE via DCI. The abovementioned RM operation can be used for implementing IFDMA (interleaved FDMA) of a beam management CSI-RS. This can be configured in a manner that an NZP CSI-RS designated by an RPF (repetition factor)/offset is transmitted on an n-symbol RMR defined by the aforementioned configuration method.
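The time-wise comb above can be sketched as follows, reproducing the example from the text: period 5, offset 0, bitmap ‘11000’ selects slots 0, 1, 5, 6, 10, 11, and so on. The slot-counting convention is the only illustration assumption.

```python
def rm_slots(bitmap, period, offset, num_slots):
    """Slots in which the RM operation applies: bit (slot - offset) mod
    period of the p-bit comb bitmap selects the slot within each period."""
    return [s for s in range(num_slots)
            if bitmap[(s - offset) % period] == '1']

slots = rm_slots('11000', period=5, offset=0, num_slots=12)
```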
Method of Signaling Rate Matching
L1/L2 Indication for ZP-CSI-RS
1. ‘Rate Matching Setting’ Case (Related to FIG. 5)
- A rate matching setting including a plurality of links can be set to a UE via higher layer signaling such as RRC. A set of ZP-CSI-RS patterns to be used is included in each of the plurality of links. A separate resource setting can be configured via higher layer signaling such as RRC.
- In order to have flexibility on the order of dozens of ms, a ZP-CSI-RS link (group) can be defined. This scheme is understood as identical to a semi-persistent ZP-CSI-RS configuration. A semi-persistent ZP-CSI-RS performs rate matching on the ZP-CSI-RS RE pattern corresponding to the indicated links from the subframe in which enable signaling including the actually used ZP-CSI-RS link (group) is received to the subframe in which disable signaling is received.
- For flexibility at the subframe (or slot) unit, the ZP-CSI-RS link (group) to be used can be set to a UE as dynamic signaling via L1 signaling such as DCI. This can be performed by designating the link to be actually used from among the link group (or the link group sorted via MAC signaling) included in the defined rate matching setting. In the case of an aperiodic ZP-CSI-RS, this may indicate that rate matching is performed on the ZP-CSI-RS RE pattern corresponding to the indicated link in the subframe in which the DCI is transmitted. In the case of a semi-persistent ZP-CSI-RS, the signaling transmitted via DCI is understood as enable/disable signaling. It indicates that rate matching is performed on the ZP-CSI-RS RE pattern corresponding to the indicated links from the subframe in which enable signaling is received via DCI to the subframe immediately before the subframe in which disable signaling is received.
2.
‘Measurement Setting’ Case (Related to FIG. 7)
- A ‘measurement setting’ including a ZP-CSI-RS link can be set to a UE via higher layer signaling such as RRC.
- For flexibility on the order of dozens of ms, the ZP-CSI-RS link (group) to be actually used can be defined via MAC signaling. The link can be selected using a scheme identical to the scheme of selecting the link to be actually used for CSI measurement/beam management from a measurement setting. In this case, the ZP-CSI-RS link (group) can include a link for measuring CSI (e.g., a link including a resource setting and a reporting setting). In this case, the ZP-CSI-RS link is understood as a ZP-CSI-RS link according to a resource setting which is designated irrespective of a reporting setting. This scheme is understood as identical to a semi-persistent ZP-CSI-RS configuration. The semi-persistent ZP-CSI-RS indicates that rate matching is performed on the ZP-CSI-RS RE pattern corresponding to the indicated links from the subframe in which enable signaling is received to the subframe immediately before the subframe in which disable signaling is received.
- For flexibility at the subframe (or slot) unit, the ZP-CSI-RS link (group) to be used can be set to a UE as dynamic signaling via L1 signaling such as DCI. This can be performed by designating the link to be actually used from among the link group (or the link group sorted via MAC signaling) included in the defined rate matching setting. In the case of an aperiodic ZP-CSI-RS, this may indicate that rate matching is performed on the ZP-CSI-RS RE pattern corresponding to the indicated link in the subframe in which the DCI is transmitted. In the case of a semi-persistent ZP-CSI-RS, the signaling transmitted via DCI is understood as enable/disable signaling.
It indicates that rate matching is performed on the ZP-CSI-RS RE pattern corresponding to the indicated links from the subframe in which enable signaling is received via DCI to the subframe immediately before the subframe in which disable signaling is received.
3. ‘Resource Setting’ Case (Related to FIG. 8)
- The aforementioned 1-bit indicator may be included in each resource configuration included in a resource setting.
- For flexibility on the order of dozens of ms, an L′-bit ZP-CSI-RS indicator can be included via MAC signaling. Each bit of the L′-bit ZP-CSI-RS indicator is matched one-to-one with a resource configuration of the resource setting (or with a resource whose 1-bit indicator indicates ‘rate matching on’ among the resource configurations). Whether or not rate matching is performed on the RE pattern corresponding to a resource can be signaled to a UE by toggling the corresponding bit on/off. The scheme above can be understood as identical to a semi-persistent ZP-CSI-RS configuration. The semi-persistent ZP-CSI-RS indicates that rate matching is performed on the ZP-CSI-RS RE pattern corresponding to the indicated links from the subframe in which enable signaling is received to the subframe immediately before the subframe in which disable signaling is received.
- For flexibility at the subframe (or slot) unit, a ‘ZP-CSI-RS indicator’ may be transmitted to a UE via L1 signaling such as DCI. This means that the UE is informed of whether or not rate matching is performed using the RE pattern corresponding to a ZP-CSI-RS resource (group) configured via higher layer signaling. In the case of an aperiodic ZP-CSI-RS, this may indicate that rate matching is performed on the ZP-CSI-RS RE pattern corresponding to the indicated resource (or resource group) in the subframe in which the DCI is transmitted. In the case of a semi-persistent ZP-CSI-RS, the signaling transmitted via DCI is understood as enable/disable signaling.
It indicates that rate matching is performed on the ZP-CSI-RS RE pattern corresponding to the indicated resources (or resource group) from the subframe in which enable signaling is received via DCI to the subframe immediately before the subframe in which disable signaling is received.
4. Other Configurations
Frequency-Related Configuration
For configuration flexibility, the frequency granularity can be set to a UE via L2 signaling such as MAC or L1 signaling such as DCI instead of higher layer signaling. In this case, the configured frequency granularity is applied identically to the whole of the ZP-CSI-RS pattern. In particular, the frequency granularity is configured as one of a partial band and a wideband using a 1-bit indicator. In this case, the partial band may correspond to a band (or a band set) having a different numerology and/or a different operation scheme (e.g., eMBB, eMTC), similar to an eMBB (enhanced mobile broadband) band. Or, the partial band may correspond to a configured band group, and the band group can be configured via separate higher layer signaling. If a separate frequency granularity-related configuration is not provided, the base station and the UE may follow the frequency granularity included in higher layer signaling. Or, in order to reduce signaling overhead, the UE can perform data transmission and reception under the assumption that rate matching is performed on all scheduled bands.
Time-Related Configuration
For configuration flexibility, the timing characteristic and/or the period/offset (semi-persistent or periodic) can be set to a UE via L2 signaling such as MAC or L1 signaling such as DCI instead of higher layer signaling. Since the L1 signaling corresponds to signaling related to allocation/demodulation of a PDSCH, it is preferable to transmit the L1 signaling via DL-related UE-specific DCI together with a DL grant (DL scheduling).
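The L′-bit ZP-CSI-RS indicator of the ‘resource setting’ case above can be sketched as follows: each bit maps one-to-one to a resource configuration whose 1-bit indicator says ‘rate matching on’, and toggling a bit enables/disables rate matching on that resource's RE pattern. The resource names and tuple layout are illustrative assumptions.

```python
def active_rm_resources(resource_setting, zp_indicator):
    """resource_setting: list of (resource_id, rm_capable) pairs, where
    rm_capable reflects the per-resource 1-bit 'rate matching on' flag.
    zp_indicator: L'-bit string over the rm-capable resources, in order."""
    capable = [rid for rid, rm_on in resource_setting if rm_on]
    assert len(zp_indicator) == len(capable)  # one bit per capable resource
    return [rid for rid, bit in zip(capable, zp_indicator) if bit == '1']

setting = [("CSI-RS-0", True), ("CSI-RS-1", False),
           ("CSI-RS-2", True), ("CSI-RS-3", True)]
active = active_rm_resources(setting, "101")
```

Here L′=3 because only three resources are rate-matching-capable; flipping a bit in later MAC signaling toggles the corresponding RE pattern on or off without reconfiguring RRC.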
In particular, similar to LTE, if a PQI (PDSCH RE mapping and quasi co-location indicator) or DCI signaling similar to the PQI is defined, the L1 signaling can be transmitted via that signaling. In this case, if a periodic/semi-persistent RMR is configured and the RMR(s) is configured as an RMR of a PQI, rate matching can be performed on the RMR only when the RMR timing designated by the PQI is matched with the timing to which the periodic/semi-persistent RMR is applied. In this case, for a semi-persistent RMR, the corresponding rate matching operation can be performed only when the semi-persistent RMR is enabled via a separate signaling/configuration. For an aperiodic RMR, rate matching can be performed on the aperiodic RMR at the timing designated by the PQI. In order to transmit rate matching signaling for the entire cell or a specific UE group, a sort of cell-specific DCI and/or UE group-specific DCI may be used. In particular, the rate matching signaling may be transmitted by being included in that DCI. FIG. 18 illustrates the payload of the cell-specific DCI and/or the UE group-specific DCI. In particular, it may have a structure in which a certain number of payloads, each of which has a specific length, are adjacent to each other. The position of each payload (or a payload index) may have a meaning described in the following.
1. UE
The position of a payload (or a payload index) may correspond to information for a specific UE. In this case, the contents transmitted in a payload may correspond to signaling related to a UE operation configured in advance or configured via RRC/MAC signaling. FIG. 19 illustrates DCI in which a payload is set for each UE. For example, when payload 1 is tied to UE 1, signaling transmitted at the position of payload 1 can signal an operation (e.g., channel measurement, interference measurement, etc.) to be performed by UE 1 and/or a target resource on which the operation is performed.
In particular, if contents indicating ‘no RS’ are included in the signaled information, the cell/UE group can be informed that the corresponding UE does not designate a resource to be used and that there is no resource on which rate matching is to be performed by a different UE. The signaling can specify a cell-specific group or a UE-specific group. In particular, the indication of a UE can be replaced with an indication of a DMRS port and/or an indication of a sequence scrambling parameter (e.g., a specific parameter ID such as a virtual cell ID and the like, and/or a sequence seed ID such as nSCID and the like). For example, assume that a UE is indicated using a DMRS port. In this case, the operation indicated in payload 1 can indicate that the UE currently uses DMRS port 7. To this end, a separate payload may be designated for a UE to which a DMRS port is not allocated, i.e., a non-scheduled UE. In particular, a plurality of DMRS ports and/or a plurality of sequence scrambling parameters may use a single payload in consideration of the frequency of use of a DMRS port and/or a sequence scrambling parameter. In this case, the state of the payload can be jointly coded by combining a port (and/or sequence scrambling parameter) (index) with an operation in a port (and/or sequence scrambling parameter) group. Or, each payload can indicate a resource and an operation to be performed by a UE. FIG. 20 illustrates an example in which a payload indicates a resource and a UE operation in the resource. For example, when there is a payload of N bits, the payload indicates a resource to a UE using (N−1) bits and indicates the operation (e.g., channel measurement, interference measurement, etc.) to be performed in the indicated resource using the remaining 1 bit.
In this case, a UE set to the payload performs a designated operation in a designated resource and the remaining UEs can perform rate matching on all resources not designated as ‘no RS’.Or, each payload can designate a resource.FIG.21illustrates an example in which a payload for a UE indicates a resource. A UE performs rate matching on all resources (i.e., a union of resources designated by all payloads) not designated as ‘no RS’. In particular, the UE performs an operation designated by signaling transmitted to a payload corresponding to the UE on a corresponding resource. The operation for the signaling can be configured via higher layer signaling in advance. 2. Resource In this case, each position corresponds to a position of a time-frequency (code division multiplexed) resource configured in advance or configured via RRC/MAC signaling.FIG.22illustrates an example of DCI including a payload for each resource. In this case, signaling transmitted to each payload may correspond to a UE operation for each resource and/or a UE performing the operation. For example, when a payload 1 is tied with a CSI-RS resource 1, signaling transmitted to a position of the payload 1 may correspond to signaling for an operation (e.g., channel measurement, interference measurement, etc.) to be performed in a configured resource 1. The signaling can specify a cell-specific group or a UE-specific group. Or, a payload can indicate an operation for a resource tied with the payload and a UE performing the operation.FIG.23illustrates a payload for a resource including a UE indication and an operation for the indicated UE. For example, each payload consists of 2 bits and each state of the 2 bits includes ‘no measurement’, ‘channel measurement’, ‘interference measurement’, and ‘channel and interference measurement’. In this case, a UE for each state and a higher layer configuration for the operation are provided to each UE. 
In this case, each UE can perform rate matching on all resources indicating a state rather than ‘no measurement’. And, ‘rate matching only’ is added to signaled states to make UEs receiving DCI perform rate matching only on a corresponding resource without a separate operation. UE signaling can be replaced with DMRS port signaling and/or a sequence scrambling parameter (e.g., a specific parameter ID such as a virtual cell ID and the like and/or a sequence seed ID such as nSCID and the like). For example, when a UE is indicated by a DMRS port, it may indicate ‘DMRS port 7’ instead of a UE index to indicate that an operation indicated by a corresponding payload corresponds to an operation for a UE currently using ‘DMRS port 7’. In this case, if a state indicated by a payload for a non-scheduled UE includes a state indicating ‘non-scheduled UE’, it may be able to signal an operation for a UE to which a DMRS port is not provided. 3. Operation Or, each payload can indicate an operation to be performed by a UE only.FIG.24illustrates DCI indicating an operation to be performed by a UE as a payload for a resource. For example, each payload consists of 2 bits and each state includes ‘no measurement’, ‘channel measurement’, ‘interference measurement’, and ‘channel and interference measurement’. When a payload 1 is configured to be tied with an aperiodic CSI-RS resource 1 and the aperiodic CSI-RS resource 1 is allocated to a UE 1 and a UE 2 for channel measurement, if the payload 1 signals ‘channel measurement’, the UE 1 and the UE 2 perform a channel measurement operation on the CSI-RS resource 1 at the same time. In this case, a higher layer configuration for connecting an operation with a resource is provided to each of the UEs. And, ‘rate matching only’ is added to signaled states to make UEs receiving DCI perform rate matching only on a corresponding resource without a separate operation. 
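A minimal sketch of decoding such per-resource 2-bit payloads from a DCI field, assuming resource 0 occupies the least significant bits; the bit ordering is an illustrative assumption:

```python
# 2-bit states per resource, as listed above.
STATES = {0b00: "no measurement",
          0b01: "channel measurement",
          0b10: "interference measurement",
          0b11: "channel and interference measurement"}

def decode_dci(dci_bits: int, num_resources: int):
    """Decode per-resource 2-bit payloads and return (state per resource,
    set of resources on which rate matching is performed). Rate matching is
    performed on every resource whose state is not 'no measurement'."""
    states, rm_resources = {}, set()
    for r in range(num_resources):
        s = (dci_bits >> (2 * r)) & 0b11
        states[r] = STATES[s]
        if s != 0b00:
            rm_resources.add(r)
    return states, rm_resources
```

For example, decoding 0b100001 over three resources yields ‘channel measurement’ for resource 1, ‘no measurement’ for resource 2, and ‘interference measurement’ for resource 3, with rate matching on the first and third.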
In particular, each payload can inform a UE of transmission/non-transmission of an RS only. In other words, each payload can trigger a resource preset to a UE and an operation in the resource. For example, when a payload 1 is configured to be tied with an aperiodic CSI-RS resource 1, a UE 1 is configured to perform channel measurement on the aperiodic CSI-RS resource 1, and a UE 2 is configured to perform interference measurement on the aperiodic CSI-RS resource 1, if signaling indicating ‘measurement’ is transmitted to the payload 1, the UE 1 performs channel measurement in the resource and the UE 2 performs interference measurement in the resource. In this case, a connection between a resource and an operation in the resource can be indicated to a UE via higher layer signaling.FIG.25illustrates the payload. 4. No Meaning A payload position (or index) has no meaning. A payload can include three pieces of content: a resource indication, a target UE, and an operation.FIG.26illustrates the contents included in a payload. A UE performs rate matching on all resources. If a payload designating the UE exists, the UE performs an operation indicated by the payload in a resource indicated by the payload. In this case, in order to reduce a DCI blind decoding count of the UE, the number of payloads can be determined in advance or can be designated to the UE via higher layer signaling such as RRC and MAC signaling. In this case, if the resource indication indicates ‘no RS’, the UE may not read the UE indication and the operation for the UE. In this case, it may configure and/or signal resource/UE/operation except RS configuration using the aforementioned scheme. In this case, a UE can operate according to the configured resource/operation based on an RS additionally set to the UE. The aforementioned RM operation may correspond to an aperiodic RM and a semi-persistent RM (enable/disable). 
More specifically, in case of the semi-persistent RM, if specific DCI is received, an RM operation is persistently applied to each of instances according to a predetermined period at the corresponding timing and thereafter (until different disable or updating DCI is received) using at least one of the aforementioned methods. RNTI (e.g., SI-RNTI or a separate UE-group-RNTI) for decoding DCI is provided to a UE in advance and the UE may attempt to perform blind decoding on cell-specific DCI or UE-group-specific DCI using the RNTI. Or, semi-persistent RM can be set to a UE via MAC signaling and the RM operation can be restricted to an aperiodic RM only. UE operation-related signaling can be included in separate UE-specific DCI. In other words, if RM signaling is transmitted or received via cell-specific DCI/UE group-specific DCI, 1-bit signaling of the separate UE-specific DCI recognizes a resource designated by the cell-specific DCI/UE group-specific DCI as an aperiodic NZP CSI-RS resource and can indicate an operation of performing measurement or an operation of not performing measurement. Moreover, the 1-bit signaling can be combined (jointly encoded) or integrated with aperiodic NZP CSI-RS indication. Similarly, it may designate an RM operation by setting a limit on a size of an aperiodic RM signaling field by 1 bit. And, it may be able to configure to perform RM on a resource designated by the cell-specific DCI/UE group-specific DCI. This operation has a meaning that signaling indicating an RM target resource and signaling indicating whether or not RM is actually performed are separated from each other using cell-specific DCI/UE group-specific DCI and UE-specific DCI, respectively. If an RM target resource designated by the cell-specific DCI/UE group-specific DCI does not exist or is not received, a UE can perform aperiodic reporting on a resource designated for a different reporting (e.g., periodic/semi-persistent reporting). 
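The separation between the group-level resource indication and the UE-specific 1-bit enable can be sketched as follows; the function shape and resource representation are illustrative assumptions:

```python
def rm_resources_to_apply(group_dci_resource, ue_dci_rm_bit: int) -> set:
    """Cell-specific/UE group-specific DCI designates the RM target resource;
    a separate 1-bit field in UE-specific DCI indicates whether RM is
    actually performed on it. RM is applied only when both are present."""
    if group_dci_resource is None or not ue_dci_rm_bit:
        return set()  # no target resource signaled, or RM disabled for this UE
    return set(group_dci_resource)
```

The point of the split is that losing either piece of signaling fails safe: without a designated target resource, or with the enable bit cleared, no rate matching is performed.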
When multiple slots are scheduled using single DCI, RM designated by the DCI can be identically performed using a scheme designated for a slot scheduled by the DCI. In this case, it is able to indicate RM to the multiple slots without additional signaling overhead. On the contrary, it may perform RM on too many resources. In order to solve the problem, it may be able to designate slot timing at which RM is actually performed via separate signaling. The DCI can designate a slot offset on the basis of the timing at which the DCI is signaled. When RM is designated to multiple timings within a slot group scheduled by single DCI, a base station can designate an RM slot pattern designated via higher layer signaling such as RRC/MAC as DCI. The RM slot pattern corresponds to a set of slots performing RM within the slot group scheduled by the single DCI. The RM slot pattern can be designated by a bit map or a period and/or an offset for full flexibility. The RM slot can be signaled in a manner of being combined with the RM signaling to reduce signaling overhead. In case of a DMRS, it may consider applying a ZP CSI-RS for the RM to an additional DMRS pattern. It may use an additional DMRS according to UE environment (e.g., Doppler spread according to a speed of a UE, etc.) irrespective of a DMRS pattern shared by all UEs. The additional DMRS is used in a manner of being transmitted in addition to a legacy DMRS. RM for a DMRS pattern can be used to measure interference. And, when multiple users using a different additional DMRS pattern (e.g., a UE using an additional DMRS and a UE not using the additional DMRS) are scheduled, the additional DMRS can be used for cancelling interference in detecting a DMRS. In this case, when the ZP CSI-RS for the RM is used for a DMRS, the DMRS can be restricted to an additional DMRS to reduce signaling overhead. In FDR (full duplex radio) case, UEs different from each other can perform DL reception/UL transmission in the same slot. 
In this case, in order to protect an SRS transmitted by a UE performing UL transmission, it may perform RM on an SRS position. The SRS can be transmitted by concentrating power on a partial band for channel measurement performance. In order to perform channel measurement on the whole band (or a configured band) using the SRS transmission scheme, it may consider SRS hopping. It may additionally configure a hopping pattern of an SRS on which RM is to be performed or a parameter determining the hopping pattern in consideration of the SRS hopping. For an (NZP) CSI-RS enabled/disabled by RRC/MAC signaling, in order to mute transmission/measurement of the CSI-RS at the specific timing, it may transmit CSI-RS muting signaling. In particular, similar to RM performed on PDSCH, a UE does not measure an NZP CSI-RS for a signaled resource (time/frequency) to provide additional flexibility to a periodic/semi-persistent NZP CSI-RS via the MAC/RRC configuration and/or an IMR (interference measurement resource) via DCI. In particular, when NZP CSI-RS resources are configured in a manner of being overlapped, a plurality of UEs are able to share the resources using the abovementioned method. And, a UE measures a channel of the UE from a CSI-RS resource, excludes the channel from the resource, and uses the remaining channels as interference using the method. In other words, when a resource for measuring a channel and an IMR are configured in a manner of being overlapped, a base station can transmit a different interference hypothesis to a UE at the timing at which the CSI-RS/IMR is transmitted using the method. To this end, it may provide RM signaling corresponding to the overlapped NZP CSI-RS resource position. In this case, although a legacy RM signaling means that a PDSCH symbol is not transmitted in an NZP CSI-RS resource, if the RM signaling indicates or configures a part of the overlapped NZP CSI-RS, it means that a UE does not measure the NZP CSI-RS resource. 
To this end, it may generate a field for RM signaling indicating an NZP CSI-RS resource in DCI. Or, it may be able to configure a ZP CSI-RS using one of the states of aperiodic CSI-RS indication to reduce signaling overhead. In particular, an aperiodic NZP CSI-RS, a periodic/semi-persistent NZP CSI-RS, a ZP-CSI-RS having the same resource, or a resource configuration indicating that a corresponding resource is not measured can be set to one of the states of the aperiodic CSI-RS indication. Hence, if a UE receives the aperiodic CSI-RS indication state, the UE does not use a CMR (channel measurement resource)/IMR transmitted from a corresponding slot for periodically measured and reported CSI. In addition, the UE may not report the CSI or may report non-updated CSI. Or, if the UE receives aperiodic CSI-RS indication, the UE can be configured not to perform measurement on a different CMR/IMR transmitted from a corresponding slot or a CMR/IMR to which a higher layer configuration is transmitted in advance. In particular, it may perform RM on a part of the NZP CSI-RS only. This is because it is not necessary for transmission of the NZP CSI-RS to have high density to measure interference only and it is necessary to enhance channel estimation performance of a UE performing channel estimation in a collision resource. Moreover, in the description mentioned earlier inFIGS.5to8, if a single reporting setting is restricted to be tied with a single link, it is apparent that a scheme of indicating a reporting setting is identical to the signaling scheme of the ‘link’. And, the aforementioned ZP CSI-RS corresponds to a resource for performing RM. As mentioned in the foregoing description, the resource may include a different type of an RS (or an RS resource) such as a DMRS in addition to an NZP CSI-RS and a ZP CSI-RS. Hence, it may consider a different name (e.g., RM resource (RMR)) instead of the ZP CSI-RS. 
In this case, it is apparent that the aforementioned operation is identically applied. A corresponding RMR may not be applied to a PDSCH of a different type (e.g., a broadcast PDSCH). Since the broadcast PDSCH includes information essential for a system operation, it is preferable to guarantee a coding rate of a corresponding resource. And, since it is able to configure/indicate transmission of the broadcast PDSCH using DCI different from DCI for allocating a PDSCH resource, if RMR signaling transmitted via RMR UE-specific DCI is lost, it may be difficult to decode the broadcast PDSCH. Consequently, latency may increase. Hence, it may not perform rate matching on an RMR configured for the broadcast PDSCH. Or, although a partial coding rate is lost, it may perform puncturing on the RMR to prevent a different resource from being measured. When a ZP CSI-RS is signaled using DCI (i.e., aperiodic ZP CSI-RS) and the RM scheme is used, if it fails to receive the DCI, it is unable to decode the whole of a subframe. Hence, a UE and a base station can promise that the information is used not for RM but for indicating an RE puncturing pattern of data. In particular, when the base station maps an RE of data, the base station performs RE mapping under the assumption that data is transmitted in a ZP CSI-RS RE as well and does not transmit the data mapped in the RE at the final transmission timing. And, a UE performs decoding on the data by assuming the transmission operation of the base station. Consequently, the UE assumes that noise and a dummy value are included in a muting RE instead of data. When channel decoding is performed in the muting RE, the UE does not perform LLR (log-likelihood ratio) calculation in the muting RE. Or, the UE can perform the LLR calculation under the assumption that a data bit 0 and a data bit 1 have the same probability. In this case, additional signaling is not necessary in a system. 
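The muted-RE handling at the decoder can be sketched as follows; the BPSK mapping and the AWGN LLR formula are standard textbook assumptions, not taken from this description:

```python
def bit_llr(received: float, noise_var: float, muted: bool) -> float:
    """LLR of one BPSK bit over AWGN (bit 0 -> +1, bit 1 -> -1 assumed).
    For a muting (ZP CSI-RS) RE the UE assumes bit 0 and bit 1 are equally
    probable, i.e. LLR = 0, so the RE contributes nothing to channel
    decoding -- equivalent to skipping the LLR calculation there."""
    if muted:
        return 0.0
    return 2.0 * received / noise_var  # standard AWGN BPSK LLR
```

Feeding a zero LLR into the channel decoder is what makes the two options in the text ("do not perform LLR calculation" and "assume equal bit probabilities") behave identically.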
Although a UE fails to receive DCI, the UE may have a transmission success probability of a certain level with the help of channel coding. In particular, when data is transmitted and received without receiving DCI (e.g., semi-persistent scheduling (SPS)), if blind decoding is performed on DCI in every subframe for an RM operation using a ZP CSI-RS (i.e., aperiodic ZP CSI-RS), it is not preferable in terms of battery consumption of a UE. In particular, when data is transmitted and received using an SPS, RM signaling provided by DCI among the ZP CSI-RS pattern can be comprehended as a puncturing pattern instead of the RM pattern by a base station and a UE. For example, when SPS data is transmitted to a specific UE, if a base station intends to transmit data using an aperiodic ZP CSI-RS by an aperiodic CSI-RS and the like, the base station performs data allocation under the assumption that a ZP CSI-RS RE pattern corresponds to an RE muting pattern for the UE receiving the SPS data. In this case, the base station does not transmit additional ZP CSI-RS indication-related DCI. In this case, a ZP CSI-RS configured via higher layer signaling such as RRC or MAC can perform an RM operation. In other words, a base station transmitting SPS data and a UE receiving the SPS data may operate under the assumption that RM is performed on a predetermined periodic (and/or a semi-persistent) ZP CSI-RS in the middle of transmitting and receiving the SPS data and an aperiodic ZP CSI-RS is not indicated by the base station. An RMR described in the present specification can be differently configured according to an (analog and/or digital) transmission beam of a base station. For example, as shown inFIG.18, when an RMR configuration is provided for a UE 2 using a beam 1 to transmit PDSCH, it is necessary to configure an RMR in the beam 1 to protect an NZP CSI-RS transmitted using a beam 3. 
However, it is not necessary to set the RMR to the UE 1 that uses a beam 2 not affecting transmission of the beam 3. If the UE 2 moves to a position of the UE 1 and uses the beam 2 instead of the beam 1 to transmit PDSCH, it is not preferable to use the same RMR for PDSCH RM of the UE 2. In legacy LTE environment, since switching of a transmission (or reception) beam is semi-statically performed, a ZP CSI-RS, which is configured via RRC configuration, is sufficient for the switching. However, a legacy scheme may not be appropriate for New RAT considering a more dynamic beam change. Hence, a plurality of RMRs are set to a UE in a manner of being associated with a transmission beam of a base station. If a specific transmission beam is used to transmit PDSCH, the base station/UE can be configured to perform RM on an RMR associated with the transmission beam to transmit and receive data. A transmission beam can be associated with an RMR using methods described in the following. 1. RMR is Associated with Transmission Beam IndexIf a transmission beam and a transmission beam index according to the transmission beam are commonly defined/configured between a base station and a UE, a transmission beam index is set to each RMR. If a transmission beam having a specific index is used to transmit PDSCH, it may be able to perform PDSCH RM using an RMR corresponding to the transmission beam index. On the contrary, it may be able to configure a different RMR according to a transmission beam index. In this case, it may be able to inform a UE of a beam index to be currently used via L1/L2 signaling.Similarly, if a link of a pair of beams of a transmission beam and a reception beam is defined, a transmission beam index can be replaced with an index of the link of the pair of beams. 2. RMR is Associated with CRI (CSI-RS Resource Indicator)If a transmission beam is associated with an NZP CSI-RS via a parameter such as QCL (quasi-co-located), a transmission beam of the “1. 
Association with transmission beam index” case can be replaced with the NZP CSI-RS. In particular, not a ‘transmission beam’ but an NZP CSI-RS is associated with each RMR. The NZP CSI-RS corresponds to an RS to which a transmission beam is reflected in beam management and the like. In particular, the NZP CSI-RS can be used in a manner of being associated with a CRI reported in the beam management stage. The method above can operate more UE-transparently compared to a method of explicitly associating with a transmission beam.In case of using a QCL parameter, the QCL parameter can be restricted to a spatial QCL part (i.e., an arrival angle and/or angle spread). In the methods above, it is not necessary to map a transmission beam (or a parameter corresponding to the beam) and an RMR one-to-one. In particular, one RMR can be associated with beams different from each other at the same time and one transmission beam can be associated with a plurality of RMRs. And, instead of a transmission beam, a transmission beam group (e.g., cell-center beam group/cell-edge beam group) for an RMR can be defined. In particular, an RMR can be configured according to a transmission beam group for the RMR. The abovementioned configuration can be included in a resource setting of an RMR. In an RM setting and/or a measurement setting, it may be able to define links different from each other (i.e., a plurality of RMR groups) according to a beam index or a parameter related to the beam index in consideration of additional MAC/DCI signaling. And, association between a transmission beam and an RMR can be included in RRC/MAC signaling. If multiple transmission beams are used for transmitting data in a slot, a UE may apply a different RMR in a unit (e.g., symbol) of changing a transmission beam in the slot. 
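The beam-to-RMR association of methods 1 and 2 above can be sketched as follows; keying by an integer beam index (a CRI would key the same way), the set-of-REs resource representation, and the union across beams in a multi-beam slot are illustrative assumptions:

```python
def rmr_for_beam(rmr_by_beam: dict, tx_beam_index: int) -> set:
    """Return the RMR (set of REs) associated with the transmission beam
    index (or CRI) used for PDSCH; the UE performs PDSCH rate matching
    on that RMR."""
    return set(rmr_by_beam.get(tx_beam_index, set()))

def rmr_for_slot(rmr_by_beam: dict, beams_in_slot) -> set:
    """Low-complexity option for a slot served by multiple beams: rate
    match on the union of the RMRs of every beam used in the slot."""
    rmr = set()
    for beam in beams_in_slot:
        rmr |= rmr_for_beam(rmr_by_beam, beam)
    return rmr
```

A beam with no configured RMR simply contributes nothing, matching the case of a beam (such as the beam 2 above) that does not affect the protected NZP CSI-RS.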
In other words, if a transmission beam for transmitting data is changed in every 7 symbols in a slot, an RMR pattern used in the first 7 symbols may be different from an RMR pattern used in the second 7 symbols. Or, in order to reduce complexity, an RMR corresponding to a union of all RMRs corresponding to multiple transmission beams for transmitting data can be used as an RMR of a corresponding slot. According to the method above, since additional dynamic signaling is not used, the method can be usefully utilized for such a configuration as a periodic/semi-persistent RMR having latency longer than DCI. In case of an aperiodic/semi-persistent RMR, an RMR candidate capable of being designated via signaling can be determined according to a transmission beam (or transmission beam group). In this case, transmission beam-related information can be included in signaling of the RMR candidate. If the number of RMR candidates is considerably small, the method of associating the RMR candidate with a transmission beam may be omitted. Instead, a base station may select/transmit an appropriate RMR via MAC and/or DCI signaling. FIG.27is a block diagram illustrating a transmitting device10and a receiving device20configured to implement embodiments of the present invention. Each of the transmitting device10and receiving device20includes a transmitter/receiver13,23capable of transmitting or receiving a radio signal that carries information and/or data, a signal, a message, etc., a memory12,22configured to store various kinds of information related to communication with a wireless communication system, and a processor11,21operatively connected to elements such as the transmitter/receiver13,23and the memory12,22to control the memory12,22and/or the transmitter/receiver13,23to allow the device to implement at least one of the embodiments of the present invention described above. 
The memory12,22may store a program for processing and controlling the processor11,21, and temporarily store input/output information. The memory12,22may also be utilized as a buffer. The processor11,21controls overall operations of various modules in the transmitting device or the receiving device. Particularly, the processor11,21may perform various control functions for implementation of the present invention. The processors11and21may be referred to as controllers, microcontrollers, microprocessors, microcomputers, or the like. The processors11and21may be achieved by hardware, firmware, software, or a combination thereof. In a hardware configuration for an embodiment of the present invention, the processor11,21may be provided with application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), and field programmable gate arrays (FPGAs) that are configured to implement the present invention. In the case in which the present invention is implemented using firmware or software, the firmware or software may be provided with a module, a procedure, a function, or the like which performs the functions or operations of the present invention. The firmware or software configured to implement the present invention may be provided in the processor11,21or stored in the memory12,22to be driven by the processor11,21. The processor11of the transmitter10performs predetermined coding and modulation of a signal and/or data scheduled by the processor11or a scheduler connected to the processor11, and then transmits the signal and/or data to the transmitter/receiver13. For example, the processor11converts a data sequence to be transmitted into K layers through demultiplexing, channel coding, scrambling, and modulation. The coded data sequence is referred to as a codeword, and is equivalent to a transport block which is a data block provided by the MAC layer. 
One transport block is coded as one codeword, and each codeword is transmitted to the receiving device in the form of one or more layers. To perform frequency-up transformation, the transmitter/receiver13may include an oscillator. The transmitter/receiver13may include Nt transmit antennas (wherein Nt is a positive integer greater than or equal to 1). The signal processing procedure in the receiving device20is configured as a reverse procedure of the signal processing procedure in the transmitting device10. The transmitter/receiver23of the receiving device20receives a radio signal transmitted from the transmitting device10under control of the processor21. The transmitter/receiver23may include Nr receive antennas, and retrieves baseband signals by frequency down-converting the signals received through the receive antennas. The transmitter/receiver23may include an oscillator to perform the frequency down-converting. The processor21may perform decoding and demodulation on the radio signal received through the receive antennas, thereby retrieving data that the transmitting device10has originally intended to transmit. The transmitter/receiver13,23includes one or more antennas. According to an embodiment of the present invention, the antennas function to transmit signals processed by the transmitter/receiver13,23and to receive radio signals and deliver the same to the transmitter/receiver13,23. The antennas are also called antenna ports. Each antenna may correspond to one physical antenna or be configured by a combination of two or more physical antenna elements. A signal transmitted through each antenna cannot be further decomposed by the receiving device20. 
A reference signal (RS) transmitted in accordance with a corresponding antenna defines an antenna from the perspective of the receiving device20, and enables the receiving device20to perform channel estimation on the antenna irrespective of whether the channel is a single radio channel from one physical antenna or a composite channel from a plurality of physical antenna elements including the antenna. That is, an antenna is defined such that a channel for delivering a symbol on the antenna can be derived from a channel for delivering another symbol on the same antenna. A transmitter/receiver supporting the Multiple-Input Multiple-Output (MIMO) scheme for transmitting and receiving data using a plurality of antennas may be connected to two or more antennas. In embodiments of the present invention, the UE or the terminal operates as the transmitting device10on uplink, and operates as the receiving device20on downlink. In embodiments of the present invention, the eNB or the base station operates as the receiving device20on uplink, and operates as the transmitting device10on downlink. The transmitting device and/or receiving device may be implemented by one or more embodiments of the present invention among the embodiments described above. As an embodiment, a terminal for decoding a downlink signal in a wireless communication system is proposed. The terminal includes a transmitter and a receiver, and a processor that controls the transmitter and the receiver, the processor controlling the receiver to receive rate matching pattern information indicating a rate matching resource having a repetition period from a base station, and decoding a downlink shared channel using the rate matching pattern information. 
Further, the rate matching pattern information may include a first bitmap indicating a frequency resource region, a second bitmap indicating a time resource region, and a third bitmap indicating one or more time units, among a plurality of time units within the repetition period, to which a rate matching pattern indicated by the first bitmap and the second bitmap is applied. Further, the terminal may perform rate matching on the rate matching resource. Further, the second bitmap may indicate each of consecutive symbols within a predetermined resource unit. The rate matching pattern information may be used only in a region for the downlink shared channel. Further, the region for the downlink shared channel may be configured by a signal indicating a starting symbol of the downlink shared channel or an ending symbol of the downlink shared channel. Detailed descriptions of preferred embodiments of the present invention have been given to allow those skilled in the art to implement and practice the present invention. Although descriptions have been given of the preferred embodiments of the present invention, it will be apparent to those skilled in the art that various modifications and variations can be made in the present invention defined in the appended claims. Thus, the present invention is not intended to be limited to the embodiments described herein, but is intended to have the widest scope consistent with the principles and novel features disclosed herein. The present invention can be used for such a wireless communication device as a terminal, a relay, a base station, and the like.
DETAILED DESCRIPTION The detailed description, set forth below, in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well known structures and components are shown in block diagram form in order to avoid obscuring such concepts. Several aspects of telecommunication systems will now be presented with reference to various apparatus and methods. These apparatus and methods will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, components, circuits, processes, algorithms, etc. (collectively referred to as “elements”). These elements may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. By way of example, an element, or any portion of an element, or any combination of elements may be implemented as a “processing system” that includes one or more processors. Examples of processors include microprocessors, microcontrollers, graphics processing units (GPUs), central processing units (CPUs), application processors, digital signal processors (DSPs), reduced instruction set computing (RISC) processors, systems on a chip (SoC), baseband processors, field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. 
One or more processors in the processing system may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software components, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Accordingly, in one or more example embodiments, the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise a random-access memory (RAM), a read-only memory (ROM), an electrically erasable programmable ROM (EEPROM), optical disk storage, magnetic disk storage, other magnetic storage devices, combinations of the aforementioned types of computer-readable media, or any other medium that can be used to store computer executable code in the form of instructions or data structures that can be accessed by a computer. Conventional mechanisms for downlink (DL) scheduling include semi-persistent scheduling (SPS). As previously described, base stations typically configure SPS resources through the radio resource control (RRC) messages according to a certain periodicity and activate, reactivate, and deactivate SPS transmissions via DL control indicators (DCIs). 
While some types of services, such as enhanced mobile broadband (eMBB), may implicitly determine acknowledge (ACK) or negative-ACK (NACK) for DCIs based on ACK/NACKs from SPS messages, other services, such as ultra reliable low latency communications (URLLC), are unable to implicitly determine ACK/NACKs from SPS messages due to the added time delays introduced through implicit determinations. Aspects of the present application provide techniques for explicit hybrid automatic repeat request acknowledge (HARQ-ACK) feedback for DCI messages in an SPS scheme. As described herein, a user equipment (UE) may use a dynamically indicated resource (e.g., first resource) to provide ACK/NACK for a DCI message, while using a semi-static resource (e.g., second resource) to provide ACK/NACK for an SPS message. Use of explicit HARQ-ACK feedback allows all services, including URLLC, on 5G NR technologies to receive feedback for resource allocation provided by DCI messages without implicitly determining ACK/NACK, such that some time delays may be avoided. Turning now to the figures, examples of techniques for HARQ-ACK resource indications for DL SPS are depicted. It is to be understood that aspects of the figures may not be drawn to scale and are instead drawn for illustrative purposes. Referring toFIG.1, a diagram illustrating an example of a wireless communications system and an access network100is provided. The wireless communications system (also referred to as a wireless wide area network (WWAN)) includes base stations102, UEs104, an Evolved Packet Core (EPC)160, and a 5G Core (5GC)190. The base station102may include a modem140having an SPS component142configured to indicate to the UE104activation, re-activation, and deactivation of SPS messages along with configuration of the SPS messages. The UE104may include a modem144having an ACK/NACK component146configured to organize and transmit ACK/NACKs for DCI messages and SPS messages according to messages from the base station102. 
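As an illustrative sketch only (the function and message-type names below are invented for this example and are not part of the disclosure), the core idea of explicit HARQ-ACK feedback described above may be expressed as a UE-side selection rule: a DCI (activation or re-activation) message is acknowledged on the dynamically indicated resource, while an SPS message is acknowledged on the semi-static resource.

```python
# Hypothetical sketch, not from the patent: pick the PUCCH resource for an
# ACK/NACK based on the type of message being acknowledged.

def pucch_resource_for_ack(msg_type, dynamic_resource, semi_static_resource):
    """Return the PUCCH resource on which the ACK/NACK should be sent."""
    if msg_type in ("sps_activation_dci", "sps_reactivation_dci"):
        # Explicit HARQ-ACK for the DCI itself, on the dynamically
        # indicated (e.g., PRI-selected) first resource.
        return dynamic_resource
    if msg_type == "sps_pdsch":
        # ACK/NACK for an SPS transmission uses the semi-statically
        # configured (e.g., RRC-configured) second resource.
        return semi_static_resource
    raise ValueError(f"unknown message type: {msg_type}")
```

In this sketch, the separation of the two resources is what makes the DCI feedback explicit rather than implicitly inferred from a later SPS ACK/NACK.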
In an aspect, the base stations102may include macro cells (high power cellular base station) and/or small cells (low power cellular base station). The macro cells include base stations. The small cells include femtocells, picocells, and microcells. The base stations102configured for 4G LTE (collectively referred to as Evolved Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access Network (E-UTRAN)) may interface with the EPC160through backhaul links132(e.g., S1 interface). The base stations102configured for 5G NR (collectively referred to as Next Generation RAN (NG-RAN)) may interface with 5GC190through backhaul links184. In addition to other functions, the base stations102may perform one or more of the following functions: transfer of user data, radio channel ciphering and deciphering, integrity protection, header compression, mobility control functions (e.g., handover, dual connectivity), inter-cell interference coordination, connection setup and release, load balancing, distribution for non-access stratum (NAS) messages, NAS node selection, synchronization, radio access network (RAN) sharing, multimedia broadcast multicast service (MBMS), subscriber and equipment trace, RAN information management (RIM), paging, positioning, and delivery of warning messages. The base stations102may communicate directly or indirectly (e.g., through the EPC160or 5GC190) with each other over backhaul links134(e.g., X2 interface). Each of the backhaul links132,134, and184may be wired or wireless. The base stations102may wirelessly communicate with the UEs104. Each of the base stations102may provide communication coverage for a respective geographic coverage area110. There may be overlapping geographic coverage areas110. For example, the small cell102′ may have a coverage area110′ that overlaps the coverage area110of one or more macro base stations102. A network that includes both small cell and macro cells may be known as a heterogeneous network. 
A heterogeneous network may also include Home Evolved Node Bs (eNBs) (HeNBs), which may provide service to a restricted group known as a closed subscriber group (CSG). The communication links120between the base stations102and the UEs104may include uplink (UL) (also referred to as reverse link) transmissions from a UE104to a base station102and/or DL (also referred to as forward link) transmissions from a base station102to a UE104. The communication links120may use multiple-input and multiple-output (MIMO) antenna technology, including spatial multiplexing, beamforming, and/or transmit diversity. The communication links may be through one or more carriers. The base stations102/UEs104may use spectrum up to Y MHz (e.g., 5, 10, 15, 20, 100, 400, etc. MHz) bandwidth per carrier allocated in a carrier aggregation of up to a total of Yx MHz (x component carriers) used for transmission in each direction. The carriers may or may not be adjacent to each other. Allocation of carriers may be asymmetric with respect to DL and UL (e.g., more or less carriers may be allocated for DL than for UL). The component carriers may include a primary component carrier and one or more secondary component carriers. A primary component carrier may be referred to as a primary cell (PCell) and a secondary component carrier may be referred to as a secondary cell (SCell). Certain UEs104may communicate with each other using device-to-device (D2D) communication link158. The D2D communication link158may use the DL/UL WWAN spectrum. The D2D communication link158may use one or more sidelink channels, such as a physical sidelink broadcast channel (PSBCH), a physical sidelink discovery channel (PSDCH), a physical sidelink shared channel (PSSCH), and a physical sidelink control channel (PSCCH). D2D communication may be through a variety of wireless D2D communications systems, such as for example, FlashLinQ, WiMedia, Bluetooth, ZigBee, Wi-Fi based on the IEEE 802.11 standard, LTE, or NR. 
The wireless communications system may further include a Wi-Fi access point (AP)150in communication with Wi-Fi stations (STAs)152via communication links154in a 5 GHz unlicensed frequency spectrum. When communicating in an unlicensed frequency spectrum, the STAs152/AP150may perform a clear channel assessment (CCA) prior to communicating in order to determine whether the channel is available. The small cell102′ may operate in a licensed and/or an unlicensed frequency spectrum. When operating in an unlicensed frequency spectrum, the small cell102′ may employ NR and use the same 5 GHz unlicensed frequency spectrum as used by the Wi-Fi AP150. The small cell102′, employing NR in an unlicensed frequency spectrum, may boost coverage to and/or increase capacity of the access network. A base station102, whether a small cell102′ or a large cell (e.g., macro base station), may include an eNB, gNodeB (gNB), or other type of base station. Some base stations, such as gNB180, may operate in a traditional sub 6 GHz spectrum, in millimeter wave (mmW) frequencies, and/or near mmW frequencies in communication with the UE104. When the gNB180operates in mmW or near mmW frequencies, the gNB180may be referred to as an mmW base station. Extremely high frequency (EHF) is part of the RF in the electromagnetic spectrum. EHF has a range of 30 GHz to 300 GHz and a wavelength between 1 millimeter and 10 millimeters. Radio waves in the band may be referred to as a millimeter wave. Near mmW may extend down to a frequency of 3 GHz with a wavelength of 100 millimeters. The super high frequency (SHF) band extends between 3 GHz and 30 GHz, also referred to as centimeter wave. Communications using the mmW/near mmW radio frequency band have extremely high path loss and a short range. The mmW base station180may utilize beamforming182with the UE104to compensate for the extremely high path loss and short range. 
The EPC160may include a Mobility Management Entity (MME)162, other MMEs164, a Serving Gateway166, a Multimedia Broadcast Multicast Service (MBMS) Gateway168, a Broadcast Multicast Service Center (BM-SC)170, and a Packet Data Network (PDN) Gateway172. The MME162may be in communication with a Home Subscriber Server (HSS)174. The MME162is the control node that processes the signaling between the UEs104and the EPC160. Generally, the MME162provides bearer and connection management. All user Internet protocol (IP) packets are transferred through the Serving Gateway166, which itself is connected to the PDN Gateway172. The PDN Gateway172provides UE IP address allocation as well as other functions. The PDN Gateway172and the BM-SC170are connected to the IP Services176. The IP Services176may include the Internet, an intranet, an IP Multimedia Subsystem (IMS), a PS Streaming Service, and/or other IP services. The BM-SC170may provide functions for MBMS user service provisioning and delivery. The BM-SC170may serve as an entry point for content provider MBMS transmission, may be used to authorize and initiate MBMS Bearer Services within a public land mobile network (PLMN), and may be used to schedule MBMS transmissions. The MBMS Gateway168may be used to distribute MBMS traffic to the base stations102belonging to a Multicast Broadcast Single Frequency Network (MBSFN) area broadcasting a particular service, and may be responsible for session management (start/stop) and for collecting eMBMS related charging information. The 5GC190may include an Access and Mobility Management Function (AMF)192, other AMFs193, a Session Management Function (SMF)194, and a User Plane Function (UPF)195. The AMF192may be in communication with a Unified Data Management (UDM)196. The AMF192is the control node that processes the signaling between the UEs104and the 5GC190. Generally, the AMF192provides QoS flow and session management. All user Internet protocol (IP) packets are transferred through the UPF195. 
The UPF195provides UE IP address allocation as well as other functions. The UPF195is connected to the IP Services197. The IP Services197may include the Internet, an intranet, an IP Multimedia Subsystem (IMS), a PS Streaming Service, and/or other IP services. The base station102may also be referred to as a gNB, Node B, evolved Node B (eNB), an access point, a base transceiver station, a radio base station, a radio transceiver, a transceiver function, a basic service set (BSS), an extended service set (ESS), a transmit reception point (TRP), or some other suitable terminology. The base station102provides an access point to the EPC160or 5GC190for a UE104. Examples of UEs104include a cellular phone, a smart phone, a session initiation protocol (SIP) phone, a laptop, a personal digital assistant (PDA), a satellite radio, a global positioning system, a multimedia device, a video device, a digital audio player (e.g., MP3 player), a camera, a game console, a tablet, a smart device, a wearable device, a vehicle, an electric meter, a gas pump, a large or small kitchen appliance, a healthcare device, an implant, a sensor/actuator, a display, or any other similar functioning device. Some of the UEs104may be referred to as IoT devices (e.g., parking meter, gas pump, toaster, vehicles, heart monitor, etc.). The UE104may also be referred to as a station, a mobile station, a subscriber station, a mobile unit, a subscriber unit, a wireless unit, a remote unit, a mobile device, a wireless device, a wireless communications device, a remote device, a mobile subscriber station, an access terminal, a mobile terminal, a wireless terminal, a remote terminal, a handset, a user agent, a mobile client, a client, or some other suitable terminology. As described herein, conventional techniques of SPS do not use explicit ACK/NACK for activation, re-activation, and de-activation of SPS through DCI messages. Instead, these techniques rely on implicit indications from the ACK/NACK of SPS messages. 
In particular, conventional techniques require a DCI message (e.g., SPS activation DCI) to dynamically indicate a resource (e.g., Resource A or first resource) for the first SPS message subsequent to the DCI message. The dynamic allocation of the ACK/NACK resource allows the base station102to receive an implicit indication from the UE104that the SPS message and the DCI message were received by the UE104. Accordingly, the UE104does not send the ACK/NACK on the dynamically indicated ACK/NACK resource until the SPS message is received from the base station102. The base station102may receive ACK/NACKs for subsequent SPS messages on a scheduled semi-static resource (e.g., Resource B or second resource). In conventional SPS techniques, a similar process may be followed for re-activation of SPS messages, in which the base station102may transmit a re-activation DCI message to the UE104(e.g., subsequent to the activation DCI message) to reconfigure, update, or change certain parameters for SPS transmissions. Further, in conventional SPS techniques, the base station102may transmit a deactivation DCI message (e.g., subsequent to an activation or re-activation DCI message) to deactivate SPS transmissions to the UE104. As disclosed herein, for some services, such as URLLC, the delayed timing caused by the implicit indication of an ACK/NACK for a DCI message, may result in decoding failure for these services. Accordingly, the techniques disclosed herein overcome one or more of these deficiencies. Referring toFIG.2, a SPS scheme200for SPS transmissions using HARQ-ACK is illustrated. Based on the SPS scheme200, the base station102may transmit and the UE104may receive an SPS configuration message202for configuring the UE104for SPS physical DL shared channel (PDSCH) resources. The SPS configuration message202may include information on SPS transmission timing and/or periodicity, indication of resources for the SPS transmissions, etc. 
In an example, the SPS configuration message202may be a radio resource control (RRC) message. Subsequently, the base station102may transmit an activation message210to activate SPS PDSCH transmissions to the UE104. Examples of the activation message210include a DCI message. The activation message210may include one or more parameters for the UE104to respond to the activation message210and to receive SPS transmissions220. For example, the parameters may include one or more of a modulation and coding scheme (MCS), a DL transmission configuration indicator (TCI) state, a DL SPS time/frequency allocation, and/or offset values (e.g., K1 values). In an example, the offset values indicate a first offset time for an activation ACK/NACK212to be sent from a time of reception of the activation message210and a second offset time for an SPS ACK/NACK222to be sent from a time of reception of an SPS transmission220. In an example, the first and second offset times may be the same or different. In an example, one or more of the parameters may be indicated by a physical UL control channel (PUCCH) resource indicator (PRI) of the activation message210. In response to the activation message210, the UE104may transmit the activation ACK/NACK212based on the parameters indicated in the activation message210. Accordingly, the activation ACK/NACK212may be dynamically indicated in the activation message210. In particular, a PUCCH resource may be dynamically allocated for the activation ACK/NACK212based on the parameters indicated in the activation message210. Subsequent to the activation message210, the base station102may transmit and the UE104may receive the SPS transmission220according to the information from the SPS configuration message202and the parameters indicated in the activation message210. In response to the SPS transmission220, the UE104may transmit and the base station102may receive the SPS ACK/NACK222based on semi-static configurations from the SPS configuration message202. 
In this manner, the base station102may know what resources to monitor and when to monitor these resources for reception of the SPS ACK/NACK222. Subsequent to the SPS ACK/NACK222, an SPS communication cycle224may continue between the base station102and the UE104. The SPS communication cycle224may include any number of SPS transmissions220(e.g., updated DL scheduling information communicated between the base station102and the UE104) and SPS ACK/NACKs222(e.g., ACK/NACK the updated DL scheduling transmissions). The SPS communication cycle224may continue until a deactivation message240to deactivate subsequent SPS transmissions208is transmitted by the base station102and received by the UE104or a re-activation message230is transmitted by the base station102and received by the UE104. The re-activation message230may include indications for updates to the one or more parameters received in the activation message210. For example, the indications in the re-activation message230may include updated information for the MCS, the DL TCI state, the DL SPS time/frequency allocation, and/or one or more of the offset values (e.g., K1 values), described herein. In response to the re-activation message230, the UE104may transmit a re-activation ACK/NACK232based on the parameters indicated in the re-activation message230. Accordingly, the re-activation ACK/NACK232may be dynamically indicated in the re-activation message230, as described herein. Subsequent to the re-activation message230, the base station102may transmit and the UE104may receive SPS communications224according to the information from the SPS configuration message202and the updated parameters indicated in the re-activation message230. The SPS communication cycle224may continue until a deactivation message240is transmitted by the base station102and received by the UE104or a subsequent re-activation message is transmitted by the base station102and received by the UE104. 
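As an illustrative sketch only (function names, slot numbers, and offset values below are invented for this example and are not taken from any 3GPP specification), the offset-based (e.g., K1) ACK/NACK timing described above may be expressed as:

```python
# Hypothetical sketch of the offset timing: an ACK/NACK is transmitted a
# configured number of slots after the message it acknowledges is received.

def ack_slot(rx_slot, k1_offset):
    """Slot in which the UE transmits an ACK/NACK, k1_offset slots after
    the slot in which the acknowledged message was received."""
    return rx_slot + k1_offset

# Example: activation DCI received in slot 10, with a first offset of 2
# slots for the activation ACK/NACK and a second offset of 3 slots for
# the SPS ACK/NACK (the two offsets may be the same or different).
activation_ack = ack_slot(10, 2)      # activation ACK/NACK in slot 12
first_sps_rx = 14                     # first SPS transmission after activation
sps_ack = ack_slot(first_sps_rx, 3)   # SPS ACK/NACK in slot 17
```

Because both offsets are indicated to the UE, the base station knows in advance which slots to monitor for each ACK/NACK.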
As described herein, conventional SPS techniques do not provide an ACK/NACK for the activation/reactivation messages. Conventional SPS techniques also use the PUCCH resource (e.g., dynamic resource, resource A, or first resource) after a first SPS transmission to ACK/NACK for an activation message (or re-activation message), which is different from the PUCCH resource (semi-static resource, resource B, or second resource) after subsequent SPS transmissions to ACK/NACK for remaining SPS transmissions. Use of the different resources and implicit indications of ACK/NACK may add time delay into the services used by 5G NR technologies. In comparison to conventional SPS techniques, implementations of the present disclosure allow services (e.g., URLLC) used by 5G NR technologies to receive explicit HARQ-ACK feedback for SPS. In particular, use of the HARQ-ACKs (e.g., activation ACK/NACK212or re-activation ACK/NACK232) for ACK/NACK of activation/reactivation of SPS and the HARQ-ACKs (e.g., SPS ACK/NACK222) for ACK/NACK of the SPS transmissions provides two separate HARQ-ACKs. Use of the implementations may provide clear indications of the ACK/NACKs of activation/reactivation messages210/230and SPS transmissions220and avoid the time delay that conventional SPS techniques add through implicit indications of SPS transmissions. Referring toFIG.3, a conceptual block diagram of an SPS scheme300for SPS transmissions using HARQ-ACK is illustrated. As described, the base station102may transmit the SPS transmissions (or SPS Tx)220a-220f(cumulatively SPS transmissions220) according to a certain periodicity302. Each of the SPS transmissions220may include DL scheduling information for UEs104. As described herein, the base station102may configure the SPS transmissions220on the UE104through the SPS configuration message202. The base station102may also transmit the activation message210to activate SPS on the UE104. 
The activation message210may include parameters, including the resource information and the offset values310and320. In this example, the offset value310indicates a time from the reception of the activation message210to transmission of an activation ACK/NACK (e.g., ACK/NACK212), and the offset value320indicates a time from the reception of a first SPS transmission (e.g., SPS Tx220b) that is subsequent to the activation message210to transmission of an SPS ACK/NACK (e.g., ACK/NACK222b). The UE104may transmit the ACK/NACK212according to the parameters indicated by the activation message210. Once the SPS Tx220b(i.e., first SPS transmission after the UE104receives activation message210) is received, the UE104may also transmit the ACK/NACK222bto acknowledge reception of the SPS Tx220b. The ACK/NACK222bis transmitted according to one or more parameters (e.g., offset value320) indicated by the activation message210and one or more parameters (e.g., resource information) indicated by the SPS configuration message202. SPS communications (e.g., SPS Tx220cand ACK/NACK222c) may continue between the base station102and the UE104according to the parameters from the SPS configuration message202and the activation message210. At some point in time, the base station102may update any of the SPS transmission parameters by transmitting the re-activation message230. Aside from the SPS transmission parameters, the re-activation message230may include new offset values330and322. One or more of these values may be the same or different from the previous offset values310and320. In response to the re-activation message230, the UE104may transmit the ACK/NACK232according to the parameters indicated by the re-activation message230. Once the SPS Tx220d(i.e., first SPS transmission after the UE104receives re-activation message230) is received, the UE104may also transmit the ACK/NACK222dto acknowledge reception of the SPS Tx220d. 
The ACK/NACK222dis transmitted according to one or more parameters (e.g., offset value322) indicated by the re-activation message230and one or more parameters (e.g., resource information) indicated by the SPS configuration message202. SPS communications (e.g., SPS Tx220eand ACK/NACK222e) may continue between the base station102and the UE104according to the updated parameters. At some point in time, the base station102may deactivate SPS on the UE104by transmitting the deactivation message240, as described herein. In an example, the UE104may respond to the deactivation message240with an ACK/NACK (not shown) or may merely stop ACK/NACKs with the base station102. In some aspects, the UE104may only transmit one HARQ-ACK codebook in a single slot/sub-slot. Therefore, in the case where an ACK/NACK for an activation/re-activation message and an ACK/NACK for the first SPS transmission after the activation/re-activation message fall into a same slot/sub-slot, the UE104may merge (or concatenate) the ACK/NACK for the activation/re-activation message with the ACK/NACK for the first SPS transmission after the activation/reactivation message into a single codebook (e.g., codebook A) and transmit the codebook based on a PUCCH resource indicated by the activation/re-activation message. For example, the PUCCH resource may be indicated in a PRI of the activation/re-activation message. In an example, the merging of the two ACK/NACKs may include the ACK/NACK for activation/re-activation message on, for example, a most significant bit (MSB) or a least significant bit, followed by the ACK/NACK for the first SPS transmission after the activation/reactivation message, or vice versa. In another example, if the codebook A (e.g., merged codebook) overlaps with another HARQ-ACK codebook (e.g., codebook B) for dynamic scheduled PDSCH, the codebook A may be merged into the codebook B. 
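As an illustrative sketch only (the function names and bit conventions below are invented for this example and do not reproduce any standardized codebook construction), the merging of the two ACK/NACKs into a single HARQ-ACK codebook, and of that codebook into a codebook for dynamically scheduled PDSCH, may be expressed as:

```python
# Hypothetical sketch of HARQ-ACK codebook merging. Bits use 1=ACK, 0=NACK.

def merge_ack_nack(activation_ack, sps_ack, msb_first=True):
    """Concatenate the activation/re-activation ACK/NACK and the ACK/NACK for
    the first SPS transmission into one codebook (codebook A). The
    activation/re-activation bit may lead (MSB) or trail (LSB)."""
    if msb_first:
        return [activation_ack, sps_ack]
    return [sps_ack, activation_ack]

def merge_codebooks(codebook_a, codebook_b):
    """If codebook A overlaps a codebook B for dynamically scheduled PDSCH,
    fold A's bits into B and transmit the combined codebook."""
    return codebook_a + codebook_b

# ACK for the activation message, NACK for the first SPS transmission:
cb_a = merge_ack_nack(1, 0)
# Codebook A's 2 bits merged into an overlapping codebook B:
cb = merge_codebooks(cb_a, [1, 1])
```

The single combined codebook would then be transmitted on the PUCCH resource indicated by the activation/re-activation message (e.g., via its PRI).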
Further, a counter for a DL assignment index (DAI) in the activation/reactivation message may interpret or map the ACK/NACKs to merge a total number of bits of the codebook A (e.g., 2 bits) into the codebook B. Referring toFIG.4, an example method of wireless communications is disclosed. The method400may be performed by the UE104along with any of the components (see e.g.,FIG.5) of the UE104. For example, the method400, may be performed by one or more of a processor512, a transceiver502, the modem144, the ACK/NACK component146, and/or one or more additional components/subcomponents of the UE104. Turning toFIG.4, at402, the method400may optionally include receiving, from a base station, a configuration message indicating one or more parameters of an SPS message for the UE to receive the SPS message. For example, one or more of the processor512, the transceiver502, the modem144, the ACK/NACK component146, and/or one or more components/subcomponents of the UE104may be configured to receive, from the base station102, the SPS configuration message202(e.g., configuration message) indicating one or more parameters of an SPS transmission220(e.g., SPS message) for the UE104to receive the SPS transmission. Thus, the processor512, the transceiver502, the modem144, the ACK/NACK component146, and/or one or more components/subcomponents of the UE104may define the means for receiving, from the base station102, the SPS configuration message202(e.g., configuration message) indicating one or more parameters of an SPS transmission220(e.g., SPS message) for the UE104to receive the SPS transmission. In an example, the configuration message is an RRC message. 
In an example, one or more of the processor512, the transceiver502, the modem144, the ACK/NACK component146, and/or one or more components/subcomponents of the UE104may monitor, in response to the configuration message, communications from the base station102to receive the SPS message based on the one or more parameters of the SPS message. At404, the method400may include receiving, from a base station, a message indicating activation or reactivation of SPS between the UE and the base station. For example, one or more of the processor512, the transceiver502, the modem144, the ACK/NACK component146, and/or one or more components/subcomponents of the UE104may be configured to receive, from the base station102, the activation message210indicating activation or reactivation of SPS between the UE104and the base station102. Thus, the processor512, the transceiver502, the modem144, the ACK/NACK component146, and/or one or more components/subcomponents of the UE104may define the means for receiving, from the base station102, the activation message210indicating activation or reactivation of SPS between the UE104and the base station102. In an example, the message may be a DCI message. In an example, the message may include one or more parameters for the activation or the reactivation of the SPS. In an example, the one or more parameters are indicated by the message in a PRI. In an example, the one or more parameters include a modulation and coding scheme for SPS transmissions, a DL SPS time and frequency allocation, a first offset value for the first ACK/NACK, or a second offset value for the second ACK/NACK. In an example, the second offset value may be a different value or a same value as the first offset value. At406, the method400may also include determining, based on the message, a first resource for transmitting a first ACK/NACK to acknowledge reception of the message. 
For example, one or more of the processor512, the transceiver502, the modem144, the ACK/NACK component146, and/or one or more components/subcomponents of the UE104may be configured to determine, based on the activation message210, a first resource for transmitting the activation ACK/NACK212(e.g., first ACK/NACK) to acknowledge reception of the activation message210. Thus, the processor512, the transceiver502, the modem144, the ACK/NACK component146, and/or one or more components/subcomponents of the UE104may define the means for determining, based on the activation message210, a first resource for transmitting the activation ACK/NACK212(e.g., first ACK/NACK) to acknowledge reception of the activation message210. In an example, the first resource may be determined based on one or more parameters (e.g., PRI) indicated by the message (e.g., DCI message). At408, the method400may include receiving, from the base station and subsequent to the receiving of the message, an SPS message. For example, one or more of the processor512, the transceiver502, the modem144, the ACK/NACK component146, and/or one or more components/subcomponents of the UE104may be configured to receive, from the base station102and subsequent to the receiving of the activation message210, the SPS transmission220b(e.g., SPS message). Thus, the processor512, the transceiver502, the modem144, the ACK/NACK component146, and/or one or more components/subcomponents of the UE104may define the means for receiving, from the base station102and subsequent to the receiving of the activation message210, the SPS transmission220b. In an example, the UE104may monitor communications with the base station102to receive the SPS message. In particular, the UE104may receive an indication (e.g., via RRC message) indicating the resource to monitor for the SPS message. At410, the method400may include selecting a second resource for transmitting a second ACK/NACK to acknowledge reception of the SPS message. 
For example, one or more of the processor512, the transceiver502, the modem144, the ACK/NACK component146, and/or one or more components/subcomponents of the UE104may be configured to select a second resource for transmitting the SPS ACK/NACK222b(e.g., second ACK/NACK) to acknowledge reception of the SPS transmission220b(e.g., SPS message). Thus, the processor512, the transceiver502, the modem144, the ACK/NACK component146, and/or one or more components/subcomponents of the UE104may define the means for selecting a second resource for transmitting the SPS ACK/NACK222b(e.g., second ACK/NACK) to acknowledge reception of the SPS transmission220b(e.g., SPS message). In an example, the second resource may be selected based on one or more parameters (e.g., PRI) indicated by the SPS configuration message (e.g., RRC message). At412, the method400may include transmitting, to the base station, the first ACK/NACK on the first resource and the second ACK/NACK on the second resource. For example, one or more of the processor512, the transceiver502, the modem144, the ACK/NACK component146, and/or one or more components/subcomponents of the UE104may be configured to transmit, to the base station102, the activation ACK/NACK212(e.g., first ACK/NACK) on the first resource and the SPS ACK/NACK222b(e.g., second ACK/NACK) on the second resource. Thus, the processor512, the transceiver502, the modem144, the ACK/NACK component146, and/or one or more components/subcomponents of the UE104may define the means for transmitting, to the base station102, the activation ACK/NACK212(e.g., first ACK/NACK) on the first resource and the SPS ACK/NACK222b(e.g., second ACK/NACK) on the second resource. 
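As an illustrative sketch only (the message objects, field names, and function name below are invented for this example and are not part of the disclosure), the UE-side flow of steps 402-412 may be expressed end to end as:

```python
# Hypothetical sketch of the UE-side flow of FIG. 4: receive the RRC
# configuration, the activation DCI, and the first SPS message, then
# transmit the two explicit ACK/NACKs on their respective resources.

def ue_sps_flow(config_msg, activation_dci, sps_msg):
    """Return the (resource, ack) pairs the UE transmits, in order."""
    out = []
    # Steps 404/406: receive the activation DCI and determine the first
    # (dynamically indicated) resource from it, e.g., via its PRI field.
    first_resource = activation_dci["pri_resource"]
    out.append((first_resource, "ACK"))   # step 412: explicit ACK for the DCI
    # Steps 408/410: receive the SPS message and select the second
    # (semi-static) resource from the RRC configuration message.
    second_resource = config_msg["semi_static_resource"]
    ack = "ACK" if sps_msg["decoded_ok"] else "NACK"
    out.append((second_resource, ack))    # step 412: ACK/NACK for the SPS msg
    return out
```

A usage example under these invented names: `ue_sps_flow({"semi_static_resource": "B"}, {"pri_resource": "A"}, {"decoded_ok": True})` would yield an ACK on resource A for the DCI followed by an ACK on resource B for the SPS message.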
In an example, the method400may also include determining the first resource and the second resource are within a same slot or a same sub-slot, and merging the first ACK/NACK and the second ACK/NACK into a single codebook in response to the determining of the first resource and the second resource are within the same slot or the same sub-slot, wherein the second resource is a same resource as the first resource. For example, one or more of the processor512, the modem144, the ACK/NACK component146, and/or one or more components/subcomponents of the UE104may be configured to determine the first resource and the second resource are within a same slot or a same sub-slot, and merge the first ACK/NACK and the second ACK/NACK into a single codebook in response to the determining of the first resource and the second resource are within the same slot or the same sub-slot, wherein the second resource is a same resource as the first resource. Thus, the processor512, the transceiver502, the modem144, the ACK/NACK component146, and/or one or more components/subcomponents of the UE104may define the means for determining the first resource and the second resource are within a same slot or a same sub-slot, and the means for merging the first ACK/NACK and the second ACK/NACK into a single codebook in response to the determining of the first resource and the second resource are within the same slot or the same sub-slot, wherein the second resource is a same resource as the first resource. In an example, the method400may also include determining an overlap exists between the single codebook and a second codebook for dynamic scheduled DL messages and merging the single codebook into the second codebook in response to the determining the overlap exists. 
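The two merge rules just described, same-slot merging of the activation and SPS ACK/NACKs into one codebook and folding of that codebook into an overlapping dynamic-scheduling codebook, can be illustrated with the following sketch. The codebook representation here is an assumption for illustration, not the codebook format of any standard.

```python
# Illustrative sketch of the merge rules. Each "codebook" is modeled as a dict
# giving the slot/sub-slot of its feedback resource and its ACK/NACK bits;
# this layout is hypothetical.

def merge_same_slot(first, second):
    """If both ACK/NACKs fall in the same slot and sub-slot, merge them into
    a single codebook on the shared resource; otherwise keep them separate."""
    if (first["slot"], first["sub_slot"]) == (second["slot"], second["sub_slot"]):
        return [{"slot": first["slot"], "sub_slot": first["sub_slot"],
                 "bits": first["bits"] + second["bits"]}]
    return [first, second]

def merge_into_dynamic(sps_codebook, dynamic_codebook):
    """If the merged SPS codebook overlaps the codebook for dynamically
    scheduled DL messages, fold its bits into that dynamic codebook."""
    if (sps_codebook["slot"], sps_codebook["sub_slot"]) == (
            dynamic_codebook["slot"], dynamic_codebook["sub_slot"]):
        merged = dict(dynamic_codebook)
        merged["bits"] = dynamic_codebook["bits"] + sps_codebook["bits"]
        return [merged]
    return [dynamic_codebook, sps_codebook]

activation_ack = {"slot": 3, "sub_slot": 0, "bits": [1]}
sps_ack = {"slot": 3, "sub_slot": 0, "bits": [1]}
single, = merge_same_slot(activation_ack, sps_ack)   # one codebook, bits [1, 1]

dynamic = {"slot": 3, "sub_slot": 0, "bits": [0, 1]}
final, = merge_into_dynamic(single, dynamic)
print(final["bits"])  # [0, 1, 1, 1]
```

When the resources do not coincide, both helpers simply return the codebooks unmerged, matching the separate-transmission case of the method.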
For example, one or more of the processor512, the modem144, the ACK/NACK component146, and/or one or more components/subcomponents of the UE104may be configured to determine an overlap exists between the single codebook and a second codebook for dynamic scheduled DL messages and merge the single codebook into the second codebook in response to the determining the overlap exists. Thus, the processor512, the transceiver502, the modem144, the ACK/NACK component146, and/or one or more components/subcomponents of the UE104may define the means for determining an overlap exists between the single codebook and a second codebook for dynamic scheduled DL messages and the means for merging the single codebook into the second codebook in response to the determining the overlap exists. It should be noted that while the method400is described using the activation message210, the SPS transmission220b, the activation ACK/NACK212, and the SPS ACK/NACK222b, one skilled in the art would recognize that these same operations may be applied to other messages and ACK/NACK operations such as the re-activation message230, the SPS transmission220d, the re-activation ACK/NACK232, and the SPS ACK/NACK222d. Referring toFIG.5, one example of an implementation of the UE104may include a variety of components, some of which have already been described above, but including components such as one or more processors512, memory516, and transceiver502in communication via one or more buses544, which may operate in conjunction with the modem144to enable one or more of the functions of the method400described herein. The one or more processors512, modem144, memory516, the transceiver502, RF front end588and one or more antennas565, may be configured to support voice and/or data calls (simultaneously or non-simultaneously) in one or more radio access technologies. In an aspect, the one or more processors512may include the modem144that uses one or more modem processors. 
The various functions related to the ACK/NACK component146may be included in the modem144and/or the processors512and, in an aspect, may be executed by a single processor, while in other aspects, different ones of the functions may be executed by a combination of two or more different processors. For example, in an aspect, the one or more processors512may include any one or any combination of a modem processor, or a baseband processor, or a digital signal processor, or a transmit processor, or a receiver processor, or a transceiver processor associated with the transceiver502. In other aspects, some of the features of the one or more processors512and/or the modem144may be performed by the transceiver502. Also, the memory516may be configured to store data used herein and/or local versions of applications575or the ACK/NACK component146and/or one or more of its subcomponents being executed by the at least one processor512. The memory516may include any type of computer-readable medium usable by a computer or the at least one processor512, such as random access memory (RAM), read only memory (ROM), tapes, magnetic discs, optical discs, volatile memory, non-volatile memory, and any combination thereof. In an aspect, for example, the memory516may be a non-transitory computer-readable storage medium that stores one or more computer-executable codes defining the ACK/NACK component146and/or one or more of its subcomponents, and/or data associated therewith, when the UE104is operating the at least one processor512to execute the ACK/NACK component146and/or one or more of its subcomponents. The transceiver502may include at least one receiver506and at least one transmitter508. The receiver506may include hardware, firmware, and/or software code executable by a processor for receiving data, the code comprising instructions and being stored in a memory (e.g., computer-readable medium). The receiver506may be, for example, a radio frequency (RF) receiver. 
In an aspect, the receiver506may receive signals transmitted by at least one of the base station102or another UE104. Additionally, the receiver506may process such received signals, and also may obtain measurements of the signals, such as, but not limited to, Ec/Io, SNR, RSRP, RSSI, etc. The transmitter508may include hardware, firmware, and/or software code executable by a processor for transmitting data, the code comprising instructions and being stored in a memory (e.g., computer-readable medium). A suitable example of the transmitter508may include, but is not limited to, an RF transmitter. The transceiver502, receiver506, and/or transmitter508may be configured to operate in mmW frequencies and/or near mmW frequencies. Moreover, in an aspect, the UE104may include the RF front end588, which may operate in communication with one or more antennas565and the transceiver502for receiving and transmitting radio transmissions, for example, wireless communications received by the UE104or wireless transmissions transmitted by the UE104. The RF front end588may be connected to the one or more antennas565and may include one or more low-noise amplifiers (LNAs)590, one or more switches592, one or more power amplifiers (PAs)598, and one or more filters596for transmitting and receiving RF signals. In an aspect, the LNA590may amplify a received signal at a desired output level. In an aspect, each of the LNAs590may have specified minimum and maximum gain values. In an aspect, the RF front end588may use the one or more switches592to select a particular LNA590and its specified gain value based on a desired gain value for a particular application. The one or more PA(s)598may be used by the RF front end588to amplify a signal for an RF output at a desired output power level. In an aspect, each of the PAs598may have specified minimum and maximum gain values. 
In an aspect, the RF front end588may use the one or more switches592to select a particular PA598and its specified gain value based on a desired gain value for a particular application. Also, for example, the one or more filters596may be used by the RF front end588to filter a received signal to obtain an input RF signal. Similarly, in an aspect, for example, a respective filter596may be used to filter an output from a respective PA598to produce an output signal for transmission. In an aspect, each of the filters596may be connected to a specific LNA590and/or PA598. In an aspect, the RF front end588may use the one or more switches592to select a transmit or receive path using a specified filter596, LNA590, and/or PA598, based on a configuration as specified by the transceiver502and/or the processor512. As such, the transceiver502may be configured to transmit and receive wireless signals through the one or more antennas565via the RF front end588. In an aspect, the transceiver502may be tuned to operate at specified frequencies such that the UE104may communicate with, for example, base station102, one or more cells associated with one or more of the base stations102, or one or more other UEs104. In an aspect, for example, the modem144may configure the transceiver502to operate at a specified frequency and power level based on the configuration of the UE104and the communication protocol used by the modem144. In an aspect, the modem144may be a multiband-multimode modem, which may process digital data and communicate with the transceiver502such that the digital data is sent and received using the transceiver502. In an aspect, the modem144may be multiband and be configured to support multiple frequency bands for a specific communications protocol. In an aspect, the modem144may be multimode and be configured to support multiple operating networks and communications protocols. 
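The switch-based selection of an LNA or PA whose specified gain range matches the desired gain for a particular application, as described above, can be sketched as follows. The component names and gain values are invented for illustration; a real front end would select among hardware paths, not dictionaries.

```python
# Sketch of gain-based amplifier selection in an RF front end: the switches
# route the signal through the first amplifier whose specified minimum and
# maximum gain values cover the desired gain. Values are illustrative only.

def select_amplifier(amplifiers, desired_gain_db):
    """Return the first amplifier whose gain range covers desired_gain_db."""
    for amp in amplifiers:
        if amp["min_gain_db"] <= desired_gain_db <= amp["max_gain_db"]:
            return amp
    raise LookupError("no amplifier covers %s dB" % desired_gain_db)

lnas = [
    {"name": "LNA-low", "min_gain_db": 0, "max_gain_db": 12},
    {"name": "LNA-high", "min_gain_db": 10, "max_gain_db": 30},
]
print(select_amplifier(lnas, 20)["name"])  # LNA-high
print(select_amplifier(lnas, 6)["name"])   # LNA-low
```

The same rule applies to PA selection on the transmit path and, by extension, to choosing the filter associated with the selected LNA or PA.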
In an aspect, the modem144may control one or more components of the UE104(e.g., RF front end588, transceiver502) to enable transmission and/or reception of signals from the network based on a specified modem configuration. In an aspect, the modem configuration may be based on the mode of the modem144and the frequency band in use. In another aspect, the modem configuration may be based on UE configuration information associated with the UE104as provided by the network during cell selection and/or cell reselection. Referring toFIG.6, another example method of wireless communications is disclosed. The method600may be performed by the base station102along with any of the components (see e.g.,FIG.7) of the base station102. For example, the method600may be performed by one or more of a processor712, a transceiver702, the modem140, the SPS component142, and/or one or more additional components/subcomponents of the base station102. Turning toFIG.6, at602, the method600may include transmitting, to a UE, a message indicating activation or reactivation of SPS between the base station and the UE. For example, one or more of the processor712, the transceiver702, the modem140, the SPS component142and/or one or more components/subcomponents of the base station102may be configured to transmit, to the UE104, the activation message210(e.g., message) indicating activation or reactivation of SPS between the base station102and the UE104. Thus, the processor712, the transceiver702, the modem140, the SPS component142, and/or one or more components/subcomponents of the base station102may define the means for transmitting, to the UE104, the activation message210(e.g., message) indicating activation or reactivation of SPS between the base station102and the UE104. At604, the method600may also include transmitting, to the UE, an SPS message subsequent to the transmitting of the message. 
For example, one or more of the processor712, the transceiver702, the modem140, the SPS component142and/or one or more components/subcomponents of the base station102may be configured to transmit, to the UE104, the SPS transmission220(e.g., SPS message) subsequent to the transmitting of the activation message210(e.g., message). Thus, the processor712, the transceiver702, the modem140, the SPS component142, and/or one or more components/subcomponents of the base station102may define the means for transmitting, to the UE104, the SPS transmission220(e.g., SPS message) subsequent to the transmitting of the activation message210(e.g., message). At606, the method600may include receiving, from the UE, a first ACK/NACK to acknowledge reception of the message on a first resource and a second ACK/NACK to acknowledge reception of the SPS message on a second resource. For example, one or more of the processor712, the transceiver702, the modem140, the SPS component142and/or one or more components/subcomponents of the base station102may be configured to receive, from the UE104, the activation ACK/NACK212(e.g., first ACK/NACK) to acknowledge reception of the activation message210(e.g., message) on a first resource and an SPS ACK/NACK222(e.g., second ACK/NACK) to acknowledge reception of the SPS transmission220(e.g., SPS message) on a second resource. Thus, the processor712, the transceiver702, the modem140, the SPS component142, and/or one or more components/subcomponents of the base station102may define the means for receiving, from the UE104, the activation ACK/NACK212(e.g., first ACK/NACK) to acknowledge reception of the activation message210(e.g., message) on a first resource and an SPS ACK/NACK222(e.g., second ACK/NACK) to acknowledge reception of the SPS transmission220(e.g., SPS message) on a second resource. 
It should be noted that while the method600is described using the activation message210, the SPS transmission220b, the activation ACK/NACK212, and the SPS ACK/NACK222b, one skilled in the art would recognize that these same operations may be applied to other messages and ACK/NACK operations such as the re-activation message230, the SPS transmission220d, the re-activation ACK/NACK232, and the SPS ACK/NACK222d. Referring toFIG.7, one example of an implementation of the base station102may include a variety of components, some of which have already been described above, but including components such as one or more processors712, memory716and transceiver702in communication via one or more buses744, which may operate in conjunction with the modem140and the SPS component142to enable one or more of the functions of the method600described herein. The transceiver702, receiver706, transmitter708, one or more processors712, memory716, applications775, buses744, RF front end788, LNAs790, switches792, filters796, PAs798, and one or more antennas765may be the same as or similar to the corresponding components of the UE104, as described above, but configured or otherwise programmed for base station operations as opposed to UE operations. 
SOME FURTHER EXAMPLES An example method of wireless communications by a user equipment (UE), comprising: receiving, from a base station, a message indicating activation or reactivation of semi-persistent scheduling (SPS) between the UE and the base station; selecting, based on the message, a first resource for transmitting a first acknowledgement/negative-acknowledgement (ACK/NACK) to acknowledge reception of the message; receiving, from the base station and subsequent to the receiving of the message, an SPS message; selecting a second resource for transmitting a second ACK/NACK to acknowledge reception of the SPS message; and transmitting, to the base station, the first ACK/NACK on the first resource and the second ACK/NACK on the second resource. The above example method, wherein the message comprises one or more parameters for the activation or the reactivation of the SPS. One or more of the above-example methods, wherein the one or more parameters are indicated by the message in a PRI. One or more of the above-example methods, wherein the one or more parameters includes a modulation and coding scheme for SPS transmissions, a DL SPS time and frequency allocation, a first offset value for the first ACK/NACK, or a second offset value for the second ACK/NACK. One or more of the above-example methods, wherein the second offset value is different from the first offset value. One or more of the above-example methods, wherein the message is a DCI message. One or more of the above-example methods, further comprising: receiving, from the base station, a configuration message indicating one or more parameters of the SPS message for the UE to receive the SPS message; and monitoring, in response to the configuration message, communications from the base station to receive the SPS message based on the one or more parameters of the SPS message. One or more of the above-example methods, wherein the configuration message is a radio resource control (RRC) message. 
One or more of the above-example methods, further comprising: determining the first resource and the second resource are within a same slot or a same sub-slot; and merging the first ACK/NACK and the second ACK/NACK into a single codebook in response to the determining of the first resource and the second resource are within the same slot or the same sub-slot, wherein the second resource is a same resource as the first resource. One or more of the above-example methods, further comprising: determining an overlap exists between the single codebook and a second codebook for dynamic scheduled DL messages; and merging the single codebook into the second codebook in response to the determining the overlap exists. An example apparatus (e.g., UE) comprising means for wireless communication, means for storing instructions and data, and means for performing all or part of one or more of the above example methods. An example computer-readable medium storing computer executable code, comprising code to: perform all or part of one or more of the above example methods. An example UE, comprising: a memory storing instructions; and one or more processors coupled with the memory and configured to: receive, from a base station, a message indicating activation or reactivation of SPS between the UE and the base station; select, based on the message, a first resource for transmitting a first ACK/NACK to acknowledge reception of the message; receive, from the base station and subsequent to the receiving of the message, an SPS message; select a second resource for transmitting a second ACK/NACK to acknowledge reception of the SPS message; and transmit, to the base station, the first ACK/NACK on the first resource and the second ACK/NACK on the second resource. The above example UE wherein the message comprises one or more parameters for the activation or the reactivation of the SPS. 
One or more of the above-example UEs, wherein the one or more parameters are indicated by the message in a PRI. One or more of the above-example UEs, wherein the one or more parameters includes a modulation and coding scheme for SPS transmissions, a DL SPS time and frequency allocation, a first offset value for the first ACK/NACK, or a second offset value for the second ACK/NACK. One or more of the above-example UEs, wherein the second offset value is different from the first offset value. One or more of the above-example UEs, wherein the message is a DCI message. One or more of the above-example UEs, wherein the one or more processors is further configured to: receive, from the base station, a configuration message indicating one or more parameters of the SPS message for the UE to receive the SPS message; and monitor, in response to the configuration message, communications from the base station to receive the SPS message based on the one or more parameters of the SPS message. One or more of the above-example UEs, wherein the configuration message is an RRC message. One or more of the above-example UEs, wherein the one or more processors is further configured to: determine the first resource and the second resource are within a same slot or a same sub-slot; and merge the first ACK/NACK and the second ACK/NACK into a single codebook in response to the determining of the first resource and the second resource are within the same slot or the same sub-slot, wherein the second resource is a same resource as the first resource. One or more of the above-example UEs, wherein the one or more processors is further configured to: determine an overlap exists between the single codebook and a second codebook for dynamic scheduled DL messages; and merge the single codebook into the second codebook in response to the determining the overlap exists. 
A second example method of wireless communications by a base station, comprising: transmitting, to a user equipment (UE), a message indicating activation or reactivation of semi-persistent scheduling (SPS) between the base station and the UE; transmitting, to the UE, an SPS message subsequent to the transmitting of the message; and receiving, from the UE, a first acknowledgement/negative-acknowledgement (ACK/NACK) to acknowledge reception of the message on a first resource and a second ACK/NACK to acknowledge reception of the SPS message on a second resource. The above second example method, wherein the message comprises one or more parameters for the activation or the reactivation of the SPS. One or more of the above-second example methods, wherein the one or more parameters are indicated by the message in a PRI. One or more of the above-second example methods, wherein the one or more parameters includes a modulation and coding scheme for SPS transmissions, a DL SPS time and frequency allocation, a first offset value for the first ACK/NACK, or a second offset value for the second ACK/NACK. One or more of the above-second example methods, wherein the second offset value is different from the first offset value. One or more of the above-second example methods, wherein the message is a DCI message. One or more of the above-second example methods, further comprising: transmitting, to the UE, a configuration message indicating one or more parameters of the SPS message for the UE to monitor. One or more of the above-second example methods, wherein the configuration message is a radio resource control (RRC) message. An example apparatus (e.g., base station) comprising means for wireless communication, means for storing instructions and data, and means for performing all or part of one or more of the above-second example methods. 
An example computer-readable medium storing computer executable code, comprising code to: perform all or part of one or more of the above-second example methods. An example base station, comprising: a memory storing instructions; and one or more processors coupled with the memory and configured to: transmit, to a UE, a message indicating activation or reactivation of SPS between the base station and the UE; transmit, to the UE, an SPS message subsequent to the transmitting of the message; and receive, from the UE, a first ACK/NACK to acknowledge reception of the message on a first resource and a second ACK/NACK to acknowledge reception of the SPS message on a second resource. The above example base station, wherein the message comprises one or more parameters for the activation or the reactivation of the SPS. One or more of the above example base stations, wherein the one or more parameters are indicated by the message in a PRI. One or more of the above example base stations, wherein the one or more parameters includes a modulation and coding scheme for SPS transmissions, a DL SPS time and frequency allocation, a first offset value for the first ACK/NACK, or a second offset value for the second ACK/NACK. One or more of the above example base stations, wherein the second offset value is different from the first offset value. One or more of the above example base stations, wherein the message is a DCI message. One or more of the above example base stations, wherein the one or more processors is further configured to: transmit, to the UE, a configuration message indicating one or more parameters of the SPS message for the UE to monitor. One or more of the above example base stations, wherein the configuration message is an RRC message. The above detailed description set forth above in connection with the appended drawings describes examples and does not represent the only examples that may be implemented or that are within the scope of the claims. 
The term “example,” when used in this description, means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details for the purpose of providing an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and apparatuses are shown in block diagram form in order to avoid obscuring the concepts of the described examples. Information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, computer-executable code or instructions stored on a computer-readable medium, or any combination thereof. The various illustrative blocks and components described in connection with the disclosure herein may be implemented or performed with a specially-programmed device, such as but not limited to a processor, a digital signal processor (DSP), an ASIC, a FPGA or other programmable logic device, a discrete gate or transistor logic, a discrete hardware component, or any combination thereof designed to perform the functions described herein. A specially-programmed processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A specially-programmed processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. 
The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a non-transitory computer-readable medium. Other examples and implementations are within the scope and spirit of the disclosure and appended claims. For example, due to the nature of software, functions described above may be implemented using software executed by a specially programmed processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, “or” as used in a list of items prefaced by “at least one of” indicates a disjunctive list such that, for example, a list of “at least one of A, B, or C” means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that may be accessed by a general purpose or special purpose computer. By way of example, and not limitation, computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store desired program code means in the form of instructions or data structures and that may be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. 
For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media. The previous description of the disclosure is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the common principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Furthermore, although elements of the described aspects and/or embodiments may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated. Additionally, all or a portion of any aspect and/or embodiment may be utilized with all or a portion of any other aspect and/or embodiment, unless stated otherwise. Thus, the disclosure is not to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
DETAILED DESCRIPTION In the following description, references are made to various embodiments in accordance with which the disclosed subject matter can be practiced. Some embodiments may be described using the expressions one/an/another embodiment or the like, multiple instances of which do not necessarily refer to the same embodiment. Particular features, structures or characteristics associated with such instances can be combined in any suitable manner in various embodiments unless otherwise noted. By way of example, this disclosure may set out a set or list of a number of options or possibilities for an embodiment, and in such case, this disclosure specifically contemplates all clearly feasible combinations and/or permutations of items in the set or list. In certain implementations, mesh network communications devices may be or may include battery-powered sensor devices S, which typically operate in sleep mode and only wake up to communication modes when they need to communicate. In the routing and data collection methods described in the background above, however, communication must be scheduled in advance. A technical limitation affecting the energy efficiency of the sensor device in this case is the need to wake it up precisely just before receiving an initiation message from the control communication device or controller C so that the sensing device can receive the message and execute the appropriate sequences. Earlier wake-up results in an efficiency reduction due to spending more time in the energy-intensive communication mode, which leads to higher power consumption and consequently to limited battery life, while later wake-up results in missing the initialization message and in routing failure. Precise timing then usually means higher economic costs due to the need to use precise timers with very low power consumption. 
Moreover, even if accurate timing is achieved on sensor devices S, their energy efficiency may be further reduced, for example, if the control network device C is busy with execution of priority commands from the parent system and therefore sends the initialization packet later than scheduled. Similarly, routing fails if the initialization message does not reach the sensor device S, for example due to ambient interference. In these cases, the sensor devices S have to wait for a longer time in receive mode, in which the power consumption is a few orders of magnitude higher than in sleep mode. FIG.1illustrates an example network topology including a controller C, three slave routing communication devices N1-N3, and six sensor communication devices N4-N9, also labeled S1 to S6. The three slave routing communication devices N are referenced by N1-N3 and have been assigned virtual routing numbers R1-R3 (more precisely 1-3) in accordance with the routing scheme as described in U.S. Pat. No. 8,681,656. The six sensor communication devices S are referenced by N4-N9 to indicate that they are additional devices communicating on the network, but they are also referenced as S1-S6 to differentiate them from the slave routing communication devices N. Solid lines show direct links within this mesh network topological arrangement that link the controller C to the three routing slave communication devices N in bi-directional communication using initiation messages and acknowledgement messages as described in U.S. Pat. No. 9,615,308, referenced above. Referring again toFIG.1, the dashed lines for each sensor device S1 to S6 show which routing devices N can receive their transmissions/data in this topological arrangement. The sensing devices can be configured to asynchronously transmit or broadcast sensor data periodically, which is then received by one or more of the routing devices N. 
The asynchronous transmissions can be configured using a different frequency or protocol so as not to interfere with the bidirectional mesh network communication between the controller and the routing communication devices. Any sensor can be configured to send various data in one broadcast message, since there can be multiple sensors collecting multiple elements of data in one sensor device. The sensor may also send other diagnostic or operating data, such as the state of its battery, during its broadcast. Each routing slave communication device N receiving asynchronous transmissions from sensors within range can in turn store the received sensor data in a local memory of the device. The data for each sensor can be stored separately in association with a unique identifier of that sensor. In order to propagate the received sensor data back to the controller C, each routing slave communication device N awaits an initiation message transmitted directly by the controller C, or indirectly from the controller and through one or more other routing slave communication devices N. The initiation message is transmitted through the network during an initiation message frame that defines the timing of transmissions by the controller and each of the slave communication devices N. The initiation message from the controller can be configured to specify what information is being requested (e.g. what type of sensor data and/or which sensors from which it is being requested). The initialization message, for example, can include a list of sensors and/or routing devices, such as in the form of a bit field, whose data are to be selected, or it can include a definition of the requested type of data, for example temperature data or bit information indicating that the limit value of some physical quantity has been exceeded. Depending on the data requested in the initiation message, only certain data may be selected by the slave communication devices and subsequently aggregated for return to the controller. 
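The bit-field selection just described can be sketched in a few lines of Python. This is a minimal illustration only: the function name, the dict-based sensor memory, and the 0-based bit indexing are assumptions for the sketch, not the patented message format.

```python
# Sketch: a routing device selecting locally stored sensor data that an
# initiation message requests via a bit field. Bit i of the request
# corresponds to sensor index i (an illustrative convention).

def select_requested(stored: dict, request_bitfield: int) -> dict:
    """Return only the sensor readings whose bit is set in the request.

    stored maps a 0-based sensor index to its latest reading.
    """
    return {i: v for i, v in stored.items() if request_bitfield >> i & 1}

# Routing device N1 has stored readings from sensors S1-S4 (indices 0-3).
n1_memory = {0: 21.5, 1: 19.0, 2: 23.1, 3: 18.7}

# The controller requests only S2 and S4 (bits 1 and 3 set).
requested = select_requested(n1_memory, 0b1010)
```

Only the selected readings are then inserted into the acknowledgement message, which keeps the returned payload small when the controller needs only a subset of the telemetry.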
This can be advantageous, for example, in the case of optimization of heterogeneous telemetry networks with different types of sensor devices, where data from sensors designed for monitoring slow events do not need to be available, and therefore need not be collected, as often as critical data. The initiation message frame initiated by the controller establishes a timing with respect to which the various routing devices then transmit responses in an acknowledgement frame that is time synchronized with respect to the initiation message frame. Each routing communication device, in response to receiving the initiation message, selects the requested data to be collected, which may be one or more bits, from the received sensing data stored in its memory. The routing slave communication device then inserts the selected data or bits into an acknowledgement message, possibly aggregating the data with other data received from other sensors and/or slave communication devices, and the acknowledgement message is transmitted back through the network to the controller C in an acknowledgement frame. In one embodiment, data transmissions during the acknowledgement message frame are performed sequentially, starting with routing devices with higher virtual routing numbers and proceeding to routing devices with lower virtual routing numbers until the data is finally returned to the controller. Each subsequent routing device can aggregate, merge, or add any additional data (received and stored from local sensors or received from other routing devices) into the acknowledgement message as the message progresses back to the controller. As illustrated in FIG. 1, multiple routing slave devices N may receive asynchronous communications from any single sensor. For example, the transmissions of sensor S4 are shown as being received by all of slave communication devices N1-N3. 
Once a first routing device inserts or aggregates data from a specific sensor into an acknowledgement message, other subsequent routing devices that may have also received the same data from the same sensor no longer need to insert it into the acknowledgement message. During an acknowledgement frame, each routing device can incorporate the sensor data stored in its memory and requested by the controller C into an acknowledgement message that accumulates data from various sensor and routing devices for return to the controller in a similar manner to the methods disclosed in U.S. Pat. No. 9,615,308, referenced above. Since data is collected asynchronously from sensors by a smaller number of routing devices, the sensors themselves need not be given time slots or assigned virtual routing numbers in the communication frames. This, in turn, can result in a massive reduction in the duration of the communication frame compared to U.S. Pat. No. 9,615,308. The initiation message can also include additional information, such as, for example, a requirement for a delay before the start of the acknowledgement frame. Such a delay may be advantageous in networks with routing devices of different computational power, in order to time-synchronize acknowledgement transmissions by the individual routing devices. For example, when more computationally intensive selection and aggregation operations must be performed, a routing communication device may be delayed by the longer execution time of the aggregation operations and would otherwise miss its respective time slot during the acknowledgement frame. By inserting additional information into the initialization message and then interpreting it at the routing devices, the system can be further optimized for a specific application and bring additional benefits such as increasing the data collection rate, reducing the load on the transmission channel, or ensuring higher reliability of the entire data collection method. 
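The sequential aggregation during the acknowledgement frame can be sketched as follows. This is a simplified model under stated assumptions: each device's stored data is merged into a single accumulating message in transmission order, duplicate sensor entries are dropped, and the device names and dict-based message are illustrative only (they follow the FIG. 1 topology, not a wire format).

```python
# Sketch of acknowledgement-frame aggregation for the FIG. 1 topology.

def run_ack_frame(stored_by_device: dict, order: list) -> dict:
    """Simulate sequential aggregation: devices transmit in 'order',
    each merging its stored sensor data into the accumulating message.
    A sensor already present in the message is not inserted again."""
    message = {}
    for device in order:
        for sensor, value in stored_by_device[device].items():
            message.setdefault(sensor, value)  # first insertion wins
    return message

# Stored data per FIG. 1: dashed lines show which sensors each device hears.
stored = {
    "N1": {"S1": 1, "S2": 2, "S3": 3, "S4": 4},
    "N2": {"S2": 2, "S4": 4, "S5": 5, "S6": 6},
    "N3": {"S3": 3, "S4": 4, "S6": 6},
}

# Transmission order of the FIG. 2 walkthrough: N2, then N3, then N1.
message = run_ack_frame(stored, ["N2", "N3", "N1"])
# The accumulated message ends up containing data from all of S1-S6.
```

Note that none of the sensors S1-S6 needed a time slot of its own; only the three routing devices transmit during the frame, which is the source of the frame-duration reduction described above.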
FIG. 2 is a table that shows an example data aggregation during an acknowledgement frame in the topological network arrangement shown in FIG. 1. The second line of the table shows, for each of the routing devices N1-N3 and the controller C, what data has been received and stored in the device's memory. The device N1 has received sensor data from sensors S1-S4; N2 has received sensor data from S2 and S4-S6; N3 has received sensor data from S3, S4 and S6 (all as shown by the dashed lines in FIG. 1). Referring again to FIG. 2, the fifth through seventh lines of the table show, for each of three time slots 0-2 in the acknowledgement message frame, the sensor data transmitted by the active/transmitting device during the time slot as well as the data aggregated or received by each of the devices N1-N3 and the controller at the end of the time slot. For time slot 0, the device N2 transmits sensor data for S2 and S4-S6, which is received by N1 and N3. At the end of slot 0, device N1 has now received sensor data for S5 and S6 (as well as S2) from device N2, and in combination with the stored values from S1-S4, N1 has data from all of S1-S6. Device N2 has completed its transmission, which is indicated by the word “end” in the table. Device N3 has now received sensor data for S2 and S5 from device N2, and in combination with its stored values, N3 has data from S2-S6. The controller has not received any data yet. For time slot 1, the device N3 transmits sensor data for S2-S6, which is received by the controller. At the end of slot 1, device N1 still has data from all of S1-S6, as it did not transmit or receive during slot 1. The controller, having received a transmission from N3, now has data from S2-S6. Device N3 has also now completed its transmission as indicated. For time slot 2, the device N1 transmits sensor data for S1-S6, which is also received by the controller. 
With the value for S1 now having been received from N1, the controller now also has data for S1-S6, device N1 has also completed its transmission, and the end of the acknowledgement frame has been reached. FIG. 3 illustrates two example physical implementations of a routing slave communication device N. FIG. 4 illustrates a block diagram of a routing slave communication device N in accordance with one embodiment. While the disclosure above describes the sensor devices as battery operated, the methods and devices described herein can also be used for non-battery operated sensor devices that receive power from a constant source. Although the subject matter has been described in terms of certain embodiments, other embodiments that may or may not provide various features and aspects set forth herein shall be understood to be contemplated by this disclosure. The specific embodiments set forth herein are disclosed as examples only, and the scope of the patented subject matter is defined by the claims that follow. In the claims, the terms “based upon” and “based on” shall include situations in which a factor is taken into account directly and/or indirectly, and possibly in conjunction with other factors, in producing a result or effect. In the claims, a portion shall include greater than none and up to the whole of a thing; encryption of a thing shall include encryption of a portion of the thing. In method claims, any reference characters are used for convenience of description only, and do not indicate a particular order for performing a method.
11943168
DETAILED DESCRIPTION In an NR access technology, various services have been designed so that they can be freely multiplexed on time and frequency resources. Accordingly, waveform/numerology and reference signals can be dynamically or freely allocated as needed for the corresponding services. In order to provide optimum services to a terminal in wireless communications, it is important to optimize the data transmission through measurement of the channel quality and the interference amount. Thus, it is essential to measure an accurate channel state. However, in contrast with 4G communication, in which the channel and interference characteristics do not change greatly across frequency resources, in the 5G channel the channel and interference characteristics change greatly depending on the services. Thus, it is necessary to support a subset of frequency resource group (FRG) dimensions that makes it possible to measure the channel and interference characteristics separately per subset. In the NR system, supported services may be divided into categories of enhanced mobile broadband (eMBB), massive machine type communications (mMTC), and ultra-reliable and low-latency communications (URLLC). The eMBB may be a service aimed at high-speed transmission of high-capacity data, the mMTC may be a service aimed at minimization of terminal power and access by a plurality of terminals, and the URLLC may be a service aimed at high reliability and low latency. Different requirements may be applied in accordance with the kinds of services applied to the terminal. Hereinafter, embodiments of the disclosure will be described in detail with reference to the accompanying drawings. In describing the embodiments, explanation of technical contents that are well known in the art to which the disclosure pertains and are not directly related to the disclosure will be omitted. 
This is to describe the subject matter of the disclosure more clearly without obscuring the same through omission of unnecessary explanations. For the same reason, in the accompanying drawings, sizes and relative sizes of some constituent elements may be exaggerated, omitted, or briefly illustrated. Further, sizes of the respective constituent elements do not completely reflect the actual sizes thereof. In the drawings, the same drawing reference numerals are used for the same or corresponding elements across various figures. The aspects and features of the disclosure and methods for achieving the aspects and features will be apparent by referring to the embodiments to be described in detail with reference to the accompanying drawings. However, the disclosure is not limited to the embodiments disclosed hereinafter, and it can be implemented in diverse forms. The matters defined in the description, such as the detailed construction and elements, are only specific details provided to assist those of ordinary skill in the art in a comprehensive understanding of the disclosure, and the disclosure is only defined within the scope of the appended claims. In the entire description of the disclosure, the same drawing reference numerals are used for the same elements across various figures. In this case, it will be understood that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart block or blocks. 
These computer program instructions may also be stored in a computer usable or computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer usable or computer-readable memory produce an article of manufacture including instruction means that implement the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks. Also, each block of the flowchart illustrations may represent a module, segment, or portion of code, which includes one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of order. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. The term “unit”, as used herein, means, but is not limited to, a software or hardware component, such as a field programmable gate array (FPGA) or application specific integrated circuit (ASIC), which performs certain tasks. However, the term “unit” is not meant to be limited to software or hardware. The term “unit” may advantageously be configured to reside on the addressable storage medium and configured to execute on one or more processors. 
Thus, the term “unit” may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functionality provided for in the components and units may be combined into fewer components and units or further separated into additional components and units. Further, the components and units may be implemented to operate one or more CPUs in a device or a security multimedia card. Further, in an embodiment, a unit may include one or more processors. A wireless communication system was initially developed for the purpose of providing a voice-oriented service, but it has been expanded to, for example, a broadband wireless communication system that provides a high-speed and high-quality packet data service together with the communication standards, such as 3GPP high speed packet access (HSPA), long term evolution (LTE) or evolved universal terrestrial radio access (E-UTRA), LTE-Advanced (LTE-A), 3GPP2 high rate packet data (HRPD), ultra-mobile broadband (UMB), and IEEE 802.16e. Also, for the 5th generation wireless communication system, 5G or NR communication standards have been developed. In the NR system, which is a representative example of broadband wireless communication systems, the downlink (DL) and uplink (UL) adopt orthogonal frequency division multiplexing (OFDM) schemes. More specifically, the DL adopts a cyclic-prefix OFDM (CP-OFDM) scheme, and the UL adopts a discrete Fourier transform spreading OFDM (DFT-S-OFDM) scheme in addition to the CP-OFDM. 
The UL means a radio link in which a terminal (or user equipment (UE) or mobile station (MS)) transmits data or a control signal to a base station (gNodeB or BS), and the DL means a radio link in which the base station transmits data or a control signal to the terminal. Such a multi-access scheme may discriminate data or control information of respective users from each other by allocating and operating the time-frequency resources on which the data or control information of the respective users is to be carried so that the time-frequency resources do not overlap each other, that is, so as to establish orthogonality. The NR system adopts a hybrid automatic repeat request (HARQ) scheme in which the physical layer retransmits the corresponding data if decoding failure occurs during an initial transmission. According to the HARQ scheme, a receiver may transmit information (negative acknowledgement (NACK)) notifying the transmitter of the decoding failure if the receiver has not accurately decoded the data, and the transmitter may make the physical layer retransmit the corresponding data. The receiver may combine the data that is retransmitted by the transmitter with the previous data whose decoding has failed, to heighten the data reception performance. Further, if the receiver has accurately decoded the data, it may transmit information (acknowledgement (ACK)) notifying the transmitter of the decoding success, so that the transmitter can transmit new data. FIG. 1 is a diagram illustrating the basic structure of a time-frequency domain that is a radio resource region in which data or a control channel is transmitted on a downlink or an uplink in an NR system. With reference to FIG. 1, a horizontal axis represents a time domain, and a vertical axis represents a frequency domain. In the time domain, the minimum transmission unit is an OFDM symbol, and Nsymb OFDM symbols 102 constitute one slot 106. 
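The HARQ procedure described above (retransmit on NACK, advance to new data on ACK) can be sketched as a toy model. The attempt-counting decode model below is purely illustrative; a real HARQ receiver soft-combines the retransmitted physical-layer data with the earlier failed reception rather than simply counting attempts.

```python
# Toy HARQ sketch: the transmitter repeats a transport block until the
# receiver reports ACK, then moves on to the next block.

def harq_run(blocks, decode_attempts_needed):
    """Return the transmission log as (block, feedback) pairs.

    decode_attempts_needed[i] models how many (re)transmissions block i
    takes before the receiver decodes it successfully."""
    log = []
    for block, needed in zip(blocks, decode_attempts_needed):
        attempts = 0
        while True:
            attempts += 1
            decoded = attempts >= needed
            log.append((block, "ACK" if decoded else "NACK"))
            if decoded:
                break  # receiver decoded; transmitter sends new data next
    return log

log = harq_run(["TB0", "TB1"], [2, 1])
# TB0 is NACKed once and retransmitted; TB1 decodes on the first attempt.
```

The log makes the protocol's state machine explicit: a NACK keeps the same transport block in flight, while an ACK releases the HARQ process for new data.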
The length of the subframe is defined as 1.0 ms, and a radio frame is defined as 10 ms. In the frequency domain, the minimum transmission unit is a subcarrier, and the transmission bandwidth of the whole system is composed of NBW subcarriers 104 in total. In the time-frequency domain, the basic unit of resources is a resource element (RE) 112, which may be expressed by an OFDM symbol index and a subcarrier index. A resource block (RB) 108 or a physical resource block (PRB) is defined by NRB contiguous subcarriers 110 in the frequency domain. In general, the minimum transmission unit of data is the RB as described above. In the NR system, Nsymb = 14 and NRB = 12, and NBW is in proportion to the bandwidth of the system transmission band. The data rate may be increased in proportion to the number of RBs that are scheduled to the terminal. For an FDD system that operates to discriminate a DL and an UL by means of frequencies in the NR system, the DL transmission bandwidth and the UL transmission bandwidth may differ from each other. A channel bandwidth indicates an RF bandwidth corresponding to the system transmission bandwidth. Tables 1 and 2 present a part of the corresponding relationship among the system transmission bandwidth that is defined by the NR system in a frequency band that is lower than 6 GHz and in a frequency band that is higher than 6 GHz, subcarrier spacing, and channel bandwidth. For example, the NR system having 100 MHz channel bandwidth with 30 kHz subcarrier spacing has a transmission bandwidth composed of 273 RBs. Hereinafter, N/A may be a bandwidth-subcarrier combination that is not supported by the NR system. 
TABLE 1 (transmission bandwidth configuration NRB per channel bandwidth and subcarrier spacing)

Channel bandwidth BW [MHz]:   5     10    20    50    80    100
Subcarrier spacing 15 kHz:    25    52    106   270   N/A   N/A
Subcarrier spacing 30 kHz:    11    24    51    133   217   273
Subcarrier spacing 60 kHz:    N/A   11    24    65    107   135

TABLE 2 (transmission bandwidth configuration NRB per channel bandwidth and subcarrier spacing)

Channel bandwidth BW [MHz]:   5     10    20    50    80    100
Subcarrier spacing 15 kHz:    25    52    106   270   N/A   N/A
Subcarrier spacing 30 kHz:    11    24    51    133   217   273
Subcarrier spacing 60 kHz:    N/A   11    24    65    107   135

In the NR system, the frequency range may be dividedly defined by FR1 and FR2 as shown in Table 3 below.

TABLE 3

Frequency range designation   Corresponding frequency range
FR1                           450 MHz - 7125 MHz
FR2                           24250 MHz - 52600 MHz

As described above, the ranges of FR1 and FR2 may be applied differently. For example, the frequency range of FR1 may be changed and applied from 450 MHz to 6000 MHz. In the NR system, scheduling information on DL data or UL data is transferred from the base station to the terminal through downlink control information (DCI). The DCI may be defined in accordance with various formats, and each format may indicate whether the DCI is scheduling information on UL data (UL grant) or scheduling information on DL data (DL grant), whether the DCI is compact DCI having a small size of control information, whether spatial multiplexing using multiple antennas is applied, and whether the DCI is DCI for power control. For example, DCI format 1-1, which is the scheduling control information on the DL data (DL grant), may include at least one piece of the following control information.
- Carrier indicator: indicating on which frequency carrier the corresponding DCI is transmitted.
- DCI format indicator: an indicator discriminating whether the corresponding DCI is for a downlink or an uplink.
- Bandwidth part (BWP) indicator: indicating from which BWP the corresponding DCI is transmitted.
- Frequency domain resource assignment: indicating the RB of the frequency domain allocated to the data transmission. 
An expressed resource is determined in accordance with the system bandwidth and resource allocation scheme.
- Time domain resource assignment: indicating from what OFDM symbol of what slot a data related channel is to be transmitted.
- VRB-to-PRB mapping: indicating in which scheme a virtual RB (VRB) index and a physical RB (PRB) index are mapped onto each other.
- Modulation and coding scheme (MCS): indicating a modulation scheme and the size of a transport block that is data intended to be transmitted.
- HARQ process number: indicating a process number of HARQ.
- New data indicator: indicating whether HARQ is initially transmitted or retransmitted.
- Redundancy version: indicating a redundancy version of HARQ.
- Transmit power control (TPC) command for a physical uplink control channel (PUCCH): indicating a transmission power control command for PUCCH that is an uplink control channel.

For data transmission through a physical uplink shared channel (PUSCH), as described above, the time domain resource assignment may be transferred by information on a slot on which the PUSCH is transmitted, a start OFDM symbol location S on the corresponding slot, and the number L of symbols onto which the PUSCH is mapped. As described above, the location S may be a relative location from the start of the slot, L may be the number of contiguous symbols, and S and L may be determined by a start and length indicator value (SLIV) defined as follows:

if (L − 1) ≤ 7 then
    SLIV = 14 · (L − 1) + S
else
    SLIV = 14 · (14 − L + 1) + (14 − 1 − S)
where 0 < L ≤ 14 − S

In the NR system, the terminal can be configured with information on the SLIV value, the PUSCH mapping type, and the PUSCH transmission slot in one row through radio resource control (RRC) configuration (e.g., the above-described information may be configured in the form of a table). 
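The SLIV computation above maps directly to code; a minimal sketch (the function name is an assumption; the formula is the one stated above):

```python
# SLIV (start and length indicator value):
# S = start OFDM symbol (relative to the slot start), L = number of
# contiguous symbols, with 0 < L <= 14 - S.

def sliv(S: int, L: int) -> int:
    """Encode a (start, length) pair into a single SLIV per the formula."""
    assert 0 < L <= 14 - S, "invalid (S, L) combination"
    if (L - 1) <= 7:
        return 14 * (L - 1) + S
    return 14 * (14 - L + 1) + (14 - 1 - S)

# Example: a PUSCH starting at symbol 2 and spanning 4 symbols.
v = sliv(2, 4)  # 14 * 3 + 2 = 44
```

The two branches keep every valid (S, L) pair inside a compact index range, which is what lets the base station signal the time-domain allocation with a single table index in the DCI.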
Thereafter, in the time domain resource assignment of the DCI, the base station can transfer the information on the SLIV value, the PUSCH mapping type, and the PUSCH transmission slot to the terminal by indicating index values in the configured table. In the NR system, type A and type B have been defined as the PUSCH mapping type. According to the PUSCH mapping type A, the first symbol of the DMRS symbols is located on the second or third OFDM symbol of the slot. According to the PUSCH mapping type B, the first symbol of the DMRS symbols is located on the first OFDM symbol in the time domain resource allocated through the PUSCH transmission. The PUSCH resource mapping method, as described above, may also be applied to the downlink data transmission through the physical downlink shared channel (PDSCH) in a similar manner. In the NR system, the PDSCH mapping type may be defined as type A and type B, and particularly in the mapping type B, the first symbol of the DMRS symbols may be located on the first symbol of the PDSCH. The DCI may pass through a channel coding and modulation process, and may be transmitted on a PDCCH that is a downlink physical control channel. Herein, the control information being transmitted on the PDCCH or PUCCH may be expressed as a case in which the PDCCH or PUCCH is transmitted. In the same manner, the data being transmitted on the PUSCH or PDSCH may be expressed as a case in which the PUSCH or PDSCH is transmitted. In general, the DCI is scrambled with a specific radio network temporary identifier (RNTI) (or terminal identifier) independently for respective terminals, is added with a CRC, is channel-coded, and then is configured as independent PDCCHs to be transmitted. The PDCCH is mapped onto a control resource set (CORESET) configured to the terminal to be transmitted. The downlink data may be transmitted on a PDSCH that is a physical channel for downlink data transmission. 
The PDSCH may be transmitted after a control channel transmission interval, and scheduling information, such as a detailed mapping location in the frequency domain and a modulation scheme, is determined based on the DCI being transmitted on the PDCCH. Through the MCS among the control information constituting the DCI, the base station notifies the terminal of a modulation scheme applied to the PDSCH intended to be transmitted to the terminal and the size of data (transport block size (TBS)) intended to be transmitted. The MCS may be composed of 5 bits, or of more or fewer bits. The TBS corresponds to the size of the data (transport block (TB)) that the base station intends to transmit before the channel coding for error correction is applied thereto. Herein, the TB may include a medium access control (MAC) header, a MAC control element (CE), one or more MAC service data units (SDUs), and padding bits. Further, the TB may indicate a data unit being delivered from the MAC layer to the physical layer or a MAC protocol data unit (PDU). The modulation schemes that are supported in the NR system are quadrature phase shift keying (QPSK), 16 quadrature amplitude modulation (16 QAM), 64 QAM, and 256 QAM, and the respective modulation orders Qm correspond to 2, 4, 6, and 8. That is, for the QPSK modulation, 2 bits per symbol may be transmitted, and for the 16 QAM, 4 bits per symbol may be transmitted. Further, for the 64 QAM, 6 bits per symbol may be transmitted, and for the 256 QAM, 8 bits per symbol may be transmitted. FIGS. 2 and 3 are diagrams illustrating a state where data for eMBB, URLLC, and mMTC, which are services being considered in a 5G or NR system, are allocated with frequency-time resources. 
With reference to FIGS. 2 and 3, it can be identified that frequency and time resources are allocated for information transmission in the respective systems. FIG. 2 is a diagram illustrating frequency and time resources allocated for information transmission in the NR system, according to an embodiment. FIG. 2 illustrates that data for eMBB, URLLC, and mMTC are allocated in a whole system frequency band 200. If URLLC data 203, 205, and 207 is generated while the eMBB 201 and the mMTC 209 are allocated and transmitted in a specific frequency band, and transmission of the generated URLLC data is necessary, the URLLC data 203, 205, and 207 may be transmitted by emptying a portion in which the eMBB 201 and the mMTC 209 have already been allocated, that is, without transmitting the data already allocated in that portion. Because it is necessary to reduce the latency of the URLLC among the above-described services, the URLLC data 203, 205, and 207 is allocated to a portion of the resource 201 allocated to the eMBB to be transmitted. Of course, if the URLLC is additionally allocated and transmitted on the eMBB-allocated resource, the eMBB data may not be transmitted on the overlapping frequency-time resources, and thus, the transmission performance of the eMBB data may be lowered. An eMBB data transmission failure due to the URLLC allocation may occur. FIG. 3 is a diagram illustrating frequency and time resources allocated for information transmission in the NR system, according to another embodiment. In FIG. 3, respective subbands 302, 304, and 306 that are obtained through division of a whole system frequency band 300 may be used for the purpose of transmitting services and data. Information related to subband configuration may be predetermined, and this information may be transmitted from a base station to a terminal through higher signaling. 
Further, the division into subbands may optionally be determined by the base station or a network node, and services may be provided to the terminal without transmission of separate subband configuration information to the terminal. FIG. 3 illustrates a state where the subband 302 is used to transmit eMBB data, the subband 304 is used to transmit URLLC data, and the subband 306 is used to transmit mMTC data. The length of a transmission time interval (TTI) that is used for URLLC transmission may be shorter than the length of the TTI that is used to transmit the eMBB or mMTC. Further, a response to the information related to the URLLC may be transmitted earlier than that of the eMBB or mMTC, and thus, the information can be transmitted and received with a low latency. Physical layer channels used to transmit the three kinds of services or data described above may have different structures for each type. For example, at least one of the TTI length, frequency resource allocation unit, control channel structure, and data mapping method may differ. Although three kinds of services and three kinds of data have been described, more than three kinds of services and corresponding data may exist, and even in such a case, the contents of the disclosure can be applied. In order to explain a method and an apparatus of the embodiments herein, the terms "physical channel" and "signal" in an NR system may be used. However, the contents of the disclosure may also be applied to a wireless communication system that is not the NR system. A sidelink (SL) refers to a signal transmission/reception path between terminals, and may be used interchangeably with a PC5 interface. Hereinafter, the base station is the subject that performs resource allocation to the terminal, and may be a base station supporting both vehicle-to-everything (V2X) communication and general cellular communication, or a base station supporting only V2X communication. 
That is, the base station may mean an NR base station (gNB), an LTE base station (eNB), or a road site unit (RSU) (or fixed station). The terminal may include a user equipment, a mobile station, a vehicle supporting vehicle-to-vehicle communication (V2V), a vehicle supporting vehicle-to-pedestrian communication (V2P), a pedestrian's handset (e.g., smart phone), a vehicle supporting vehicle-to-network communication (V2N), a vehicle supporting vehicle-to-infrastructure communication (V2I), an RSU mounted with a terminal function, an RSU mounted with a base station function, or an RSU mounted with a part of a base station function and a part of a terminal function. In the disclosure, a DL is a radio transmission path of a signal that is transmitted from the base station to the terminal, and a UL means a radio transmission path of a signal that is transmitted from the terminal to the base station. Hereinafter, although the NR system is exemplified in embodiments of the disclosure, the embodiments of the disclosure can be applied to other various communication systems having similar technical backgrounds or channel types. Further, the embodiments of the disclosure may also be applied to other communication systems through partial modifications thereof in a range that does not greatly deviate from the scope of the disclosure by the judgment of those skilled in the art. In the disclosure, the terms "physical channel" and "signal" in the related art may be interchangeably used with data or a control signal. For example, although the PDSCH is a physical channel on which data is transmitted, it may be called data in the disclosure. Hereinafter, in the disclosure, higher signaling is a signal transfer method in which the base station transfers a signal to the terminal using a downlink data channel of a physical layer, or the terminal transfers a signal to the base station using an uplink data channel of the physical layer, and it may also be referred to as RRC signaling or a MAC CE. 
In the following embodiments, a method and an apparatus for performing data transmission/reception between the base station and the terminal or between terminals are provided. Data may be transmitted from one terminal to a plurality of terminals, or from one terminal to one terminal. Further, data may be transmitted from a base station to a plurality of terminals. However, the data transfer is not limited thereto, and the disclosure can be applied to various cases. FIG. 4 is a diagram illustrating a process in which one transport block is divided into several code blocks and CRCs are added thereto, according to an embodiment. With reference to FIG. 4, a CRC 403 is added to the last or head portion of one TB 401 intended to be transmitted on an uplink or a downlink. The CRC 403 may be composed of 16 bits, 24 bits, or a prefixed bit number, or may be composed of a variable bit number in accordance with channel situations. The CRC 403 may be used to determine whether channel coding has succeeded. A block including the TB 401 and the CRC 403 added thereto is divided into several code blocks (CBs) 407, 409, 411, and 413, in operation 405. The divided code blocks may have predetermined maximum sizes, and the last code block 413 may have a size that is smaller than the size of the other code blocks 407, 409, and 411. However, this is merely exemplary, and according to another example, the last code block 413 may be set to have the same length as the other code blocks 407, 409, and 411 through insertion of 0, a random value, or 1 into the last code block 413. CRCs 417, 419, 421, and 423 are respectively added to the code blocks 407, 409, 411, and 413, at operation 415. Each of these CRCs may be composed of 16 bits, 24 bits, or a prefixed bit number, and may be used to determine whether channel coding has succeeded. In order to create the CRC 403, the TB 401 and a cyclic generator polynomial may be used, and the cyclic generator polynomial may be defined in various methods.
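The segmentation step of FIG. 4 may, for example, be sketched as follows; the maximum code block size and the filler value are illustrative choices, not values fixed by the description.

```python
# Sketch of the TB-to-CB segmentation described above: the TB (with its
# CRC attached) is divided into code blocks of at most max_cb_size bits,
# and the last block may optionally be padded to the common length with
# filler bits (0 here; the text also allows 1 or a random value).

def segment(tb_with_crc, max_cb_size, pad_last=False, filler=0):
    cbs = [tb_with_crc[i:i + max_cb_size]
           for i in range(0, len(tb_with_crc), max_cb_size)]
    if pad_last and cbs and len(cbs[-1]) < max_cb_size:
        # Insert filler bits so the last CB matches the other CBs' length.
        cbs[-1] = cbs[-1] + [filler] * (max_cb_size - len(cbs[-1]))
    return cbs
```

For example, segmenting a 10-bit block with a maximum size of 4 yields three code blocks, the last of which is shorter unless padding is requested.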
For example, assume that the cyclic generator polynomial for the 24-bit CRC is gCRC24A(D) = D^24 + D^23 + D^18 + D^17 + D^14 + D^11 + D^10 + D^7 + D^6 + D^5 + D^4 + D^3 + D + 1, and that L = 24. With respect to TB data a_0, a_1, a_2, a_3, . . . , a_{A−1}, the CRC bits p_0, p_1, p_2, p_3, . . . , p_{L−1} may be determined as the values for which a_0·D^{A+23} + a_1·D^{A+22} + . . . + a_{A−1}·D^24 + p_0·D^23 + p_1·D^22 + . . . + p_22·D + p_23, divided by gCRC24A(D), leaves a remainder of 0. In the above-described example, although it is assumed that the CRC length L is 24, the CRC length L may be determined to have various lengths, such as 12, 16, 24, 32, 40, 48, 64, and the like. After the CRC is added to the TB in the process as described above, the TB is divided into N CBs 407, 409, 411, and 413. CRCs 417, 419, 421, and 423 are added to the divided CBs 407, 409, 411, and 413, respectively, at operation 415. The CRC added to a CB may have a length that is different from the length of the CRC added to the TB, or another cyclic generator polynomial may be used. However, the CRC 403 added to the TB and the CRCs 417, 419, 421, and 423 added to the code blocks may be omitted depending on the kind of channel code that is to be applied to the code blocks. For example, if an LDPC code, rather than a turbo code, is to be applied to the code blocks, the CRCs 417, 419, 421, and 423 to be inserted into the respective code blocks may be omitted. However, even when the LDPC code is applied, the CRCs 417, 419, 421, and 423 may be added to the code blocks as they are. Further, even when a polar code is used, the CRCs may be added or omitted. As described above with reference to FIG. 4, the maximum length of one code block in the TB intended to be transmitted is determined in accordance with the kind of channel coding applied, and the TB and the CRC added to the TB are divided into code blocks in accordance with the maximum code block length.
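The CRC attachment described above can be sketched as polynomial long division over GF(2) with the gCRC24A generator; the bit-list representation is an illustrative choice.

```python
# Sketch of the CRC attachment described above: polynomial long division
# over GF(2) using gCRC24A(D) = D^24 + D^23 + D^18 + D^17 + D^14 + D^11
# + D^10 + D^7 + D^6 + D^5 + D^4 + D^3 + D + 1.

G_CRC24A_EXPONENTS = [24, 23, 18, 17, 14, 11, 10, 7, 6, 5, 4, 3, 1, 0]
L = 24

def crc24a(bits):
    """Return the L parity bits p_0..p_{L-1} for TB bits a_0..a_{A-1}."""
    # Append L zero bits, then reduce modulo the generator polynomial.
    reg = list(bits) + [0] * L
    g = [1 if (24 - i) in G_CRC24A_EXPONENTS else 0 for i in range(25)]
    for i in range(len(bits)):
        if reg[i]:
            for j in range(25):
                reg[i + j] ^= g[j]   # XOR = subtraction in GF(2)
    return reg[-L:]

def check(bits_with_crc):
    """True if the whole block divides by gCRC24A with remainder 0."""
    data, crc = bits_with_crc[:-L], bits_with_crc[-L:]
    return crc24a(data) == crc
```

A receiver can thus re-run the division over the concatenation of data and parity bits; a nonzero remainder indicates a corrupted block.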
In an LTE system, a CRC for a CB is added to each divided CB, the data bits of the CB and the CRC are encoded with a channel code to determine coded bits, and the number of rate-matched bits is determined for the coded bits as pre-engaged. In an NR system, the size of the TB may be calculated through the following steps.

Step 1: N'_RE, the number of REs allocated to PDSCH mapping in one PRB within the allocated resource, is calculated. Here, N'_RE may be calculated as N_sc^RB · N_symb^sh − N_DMRS^PRB − N_oh^PRB, where N_sc^RB is 12 and N_symb^sh indicates the number of OFDM symbols allocated to the PDSCH. N_DMRS^PRB is the number of REs in one PRB occupied by the DMRS of a CDM group. N_oh^PRB is the number of REs occupied by the overhead in one PRB configured through higher signaling, and may be configured as one of 0, 6, 12, and 18. Thereafter, the total number N_RE of REs allocated to the PDSCH may be calculated. Here, N_RE is calculated as min(156, N'_RE) · n_PRB, where n_PRB indicates the number of PRBs allocated to the terminal.

Step 2: The number N_info of temporary information bits may be calculated as N_RE · R · Q_m · v. Here, R is the code rate, Q_m is the modulation order, and information on these values may be transferred using a table pre-engaged with the MCS bit field in the control information. Further, v is the number of allocated layers. If N_info ≤ 3824, the TBS may be calculated through step 3 below. Otherwise, the TBS may be calculated through step 4.

Step 3: N'_info may be calculated through the formulas N'_info = max(24, 2^n · ⌊N_info/2^n⌋) and n = max(3, ⌊log2(N_info)⌋ − 6). The TBS may be determined as the value that is closest to N'_info among the values that are not smaller than N'_info in Table 4 below.
TABLE 4
Index  TBS    Index  TBS    Index  TBS    Index  TBS
  1     24     25    240     49    808     73   2024
  2     32     26    256     50    848     74   2088
  3     40     27    272     51    888     75   2152
  4     48     28    288     52    928     76   2216
  5     56     29    304     53    984     77   2280
  6     64     30    320     54   1032     78   2408
  7     72     31    336     55   1064     79   2472
  8     80     32    352     56   1128     80   2536
  9     88     33    368     57   1160     81   2600
 10     96     34    384     58   1192     82   2664
 11    104     35    408     59   1224     83   2728
 12    112     36    432     60   1256     84   2792
 13    120     37    456     61   1288     85   2856
 14    128     38    480     62   1320     86   2976
 15    136     39    504     63   1352     87   3104
 16    144     40    528     64   1416     88   3240
 17    152     41    552     65   1480     89   3368
 18    160     42    576     66   1544     90   3496
 19    168     43    608     67   1608     91   3624
 20    176     44    640     68   1672     92   3752
 21    184     45    672     69   1736     93   3824
 22    192     46    704     70   1800
 23    208     47    736     71   1864
 24    224     48    768     72   1928

Step 4: N'_info may be calculated through the formulas N'_info = max(3840, 2^n × round((N_info − 24)/2^n)) and n = ⌊log2(N_info − 24)⌋ − 5. The TBS may then be determined from the N'_info value through [pseudo-code 1] below.

Pseudo-code 1
if R ≤ 1/4
    TBS = 8·C·⌈(N'_info + 24)/(8·C)⌉ − 24, where C = ⌈(N'_info + 24)/3816⌉
else
    if N'_info > 8424
        TBS = 8·C·⌈(N'_info + 24)/(8·C)⌉ − 24, where C = ⌈(N'_info + 24)/8424⌉
    else
        TBS = 8·⌈(N'_info + 24)/8⌉ − 24
    end if
end if

If one CB is input to an LDPC encoder in an NR system, parity bits may be added to it and output. The quantity of parity bits may differ in accordance with the LDPC base graph. A method for sending all parity bits created by LDPC coding with respect to a specific input may be called full buffer rate matching (FBRM). A method for limiting the number of transmittable parity bits may be called limited buffer rate matching (LBRM). If resources are allocated for data transmission, the LDPC encoder output is made into a circular buffer, and the bits of this buffer are repeatedly transmitted to the extent of the allocated resources, the length of the circular buffer being N_cb. If the number of all parity bits created by the LDPC coding is N, then in the FBRM method the length of the circular buffer is N_cb = N. In the LBRM method, N_cb becomes min(N, N_ref), where N_ref is given as ⌊TBS_LBRM/(C · R_LBRM)⌋ and R_LBRM may be determined as 2/3.
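The TBS derivation of steps 1, 2, and 4 above can be sketched as follows for the large-TBS branch (N_info > 3824); step 3 would instead require the lookup in Table 4. Parameter names mirror the symbols in the text.

```python
import math

# Sketch of the NR TBS calculation (steps 1, 2, and 4 above) for the
# branch where N_info > 3824, so pseudo-code 1 applies.

def tbs_large(n_symb_sh, n_dmrs_prb, n_oh_prb, n_prb, R, Qm, v):
    # Step 1: REs per PRB (12 subcarriers per RB), capped at 156,
    # then the total number of REs over the allocated PRBs.
    n_re_prime = 12 * n_symb_sh - n_dmrs_prb - n_oh_prb
    n_re = min(156, n_re_prime) * n_prb
    # Step 2: temporary number of information bits.
    n_info = n_re * R * Qm * v
    assert n_info > 3824, "step 3 (Table 4 lookup) would apply instead"
    # Step 4: quantize N_info, then apply pseudo-code 1.
    n = math.floor(math.log2(n_info - 24)) - 5
    n_info_p = max(3840, (2 ** n) * round((n_info - 24) / (2 ** n)))
    if R <= 1 / 4:
        C = math.ceil((n_info_p + 24) / 3816)
    elif n_info_p > 8424:
        C = math.ceil((n_info_p + 24) / 8424)
    else:
        return 8 * math.ceil((n_info_p + 24) / 8) - 24
    return 8 * C * math.ceil((n_info_p + 24) / (8 * C)) - 24
```

With 3 PDSCH symbols, 8 DMRS REs, no configured overhead, 273 PRBs, R = 948/1024, 256QAM (Q_m = 8), and 4 layers, this reproduces the TBS of 225,480 in the first row of Table 8 below.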
In order to obtain TBS_LBRM, the above-described method for obtaining the TBS may be used, assuming the maximum number of layers supported by the terminal in the corresponding cell and the maximum modulation order configured for the terminal in the corresponding cell; 64QAM may be assumed in case the maximum modulation order is not configured. Further, it may be assumed that the code rate is the maximum code rate of 948/1024, that N_RE is 156 · n_PRB, and that n_PRB is n_PRB,LBRM. Here, n_PRB,LBRM may be given as in Table 5 below.

TABLE 5
Maximum number of PRBs across all
configured BWPs of a carrier         n_PRB,LBRM
Less than 33                          32
33 to 66                              66
67 to 107                            107
108 to 135                           135
136 to 162                           162
163 to 217                           217
Larger than 217                      273

In the NR system, the maximum data rate supported by the terminal may be determined through Equation (1) below.

data rate (in Mbps) = 10^−6 · Σ_{j=1}^{J} (v_Layers^(j) · Q_m^(j) · f^(j) · R_max · (N_PRB^{BW(j),μ} · 12 / T_s^μ) · (1 − OH^(j)))    (1)

In Equation (1), J is the number of carriers aggregated through carrier aggregation, R_max = 948/1024, v_Layers^(j) is the maximum number of layers, Q_m^(j) is the maximum modulation order, f^(j) is a scaling index, and μ is the subcarrier spacing index. Here, f^(j) is one of the values 1, 0.8, 0.75, and 0.4, which can be reported by the terminal, and μ may be given as in Table 6 below.

TABLE 6
μ    Δf = 2^μ · 15 [kHz]    Cyclic prefix
0     15                    Normal
1     30                    Normal
2     60                    Normal, Extended
3    120                    Normal
4    240                    Normal

Further, T_s^μ is an average OFDM symbol length, which may be calculated as T_s^μ = 10^−3/(14 · 2^μ), and N_PRB^{BW(j),μ} is the maximum number of RBs in BW(j). Further, OH^(j) is an overhead value, which may be given as 0.14 in a downlink of FR1 (the band not higher than 6 GHz) and as 0.18 in an uplink, and which may be given as 0.08 in a downlink of FR2 (the band higher than 6 GHz) and as 0.10 in an uplink. Through Equation (1), the maximum data rate in the downlink in a cell having a 100 MHz frequency bandwidth with a 30 kHz subcarrier spacing may be calculated as in Table 7 below.
TABLE 7
f^(j)  v_Layers^(j)  Q_m^(j)  R_max        N_PRB^{BW(j),μ}  T_s^μ         OH^(j)  data rate
1      4             8        0.92578125   273              3.57143E−05   0.14    2337.0
0.8    4             8        0.92578125   273              3.57143E−05   0.14    1869.6
0.75   4             8        0.92578125   273              3.57143E−05   0.14    1752.8
0.4    4             8        0.92578125   273              3.57143E−05   0.14     934.8

In contrast, the actual data rate that can be measured by the terminal in actual data transmission may be a value obtained by dividing the data amount by the data transmission time. The data amount may be the TBS in 1-TB transmission, or the sum of the TBSs in 2-TB transmission, divided by the TTI length. As an example, under the same assumptions used to obtain Table 7 above, the maximum actual data rate in the downlink in the cell having the 100 MHz frequency bandwidth with the 30 kHz subcarrier spacing may be determined as in Table 8 below, in accordance with the number of allocated PDSCH symbols.

TABLE 8
N_symb^sh  N_DMRS^PRB  N'_RE  N_RE    N_info       n   N'_info    C    TBS        TTI length (ms)  data rate (Mbps)
 3         8            28     7,644    226,453.5  12    225,280   27    225,480  0.107143         2,104.48
 4         8            40    10,920    323,505.0  13    319,488   38    319,784  0.142857         2,238.49
 5         8            52    14,196    420,556.5  13    417,792   50    417,976  0.178571         2,340.67
 6         8            64    17,472    517,608.0  13    516,096   62    516,312  0.214286         2,409.46
 7         8            76    20,748    614,659.5  14    622,592   74    622,760  0.250000         2,491.04
 8         8            88    24,024    711,711.0  14    704,512   84    704,904  0.285714         2,467.16
 9         8           100    27,300    808,762.5  14    802,816   96    803,304  0.321429         2,499.17
10         8           112    30,576    905,814.0  14    901,120  107    901,344  0.357143         2,523.76
11         8           124    33,852  1,002,865.5  14    999,424  119    999,576  0.392857         2,544.38
12         8           136    37,128  1,099,917.0  15  1,114,112  133  1,115,048  0.428571         2,601.78
13         8           148    40,404  1,196,968.5  15  1,212,416  144  1,213,032  0.464286         2,612.68
14         8           160    43,680  1,294,020.0  15  1,277,952  152  1,277,992  0.500000         2,555.98

Through Table 7, it is possible to identify the maximum data rate supported by the terminal, and through Table 8, it is possible to identify the actual data rate following the allocated TBS. The actual data rate may be higher than the maximum data rate in accordance with the scheduling information.
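The maximum-data-rate calculation of Equation (1) can be sketched as follows for a single carrier (J = 1); the input values follow the FR1 downlink example behind the first row of Table 7.

```python
# Sketch of the maximum-data-rate calculation of Equation (1) for a
# single carrier (J = 1). Inputs follow the symbols in the text.

def max_data_rate_mbps(v_layers, q_m, f, n_prb, mu, oh, r_max=948 / 1024):
    t_s = 1e-3 / (14 * 2 ** mu)     # average OFDM symbol length T_s^mu (s)
    rate = v_layers * q_m * f * r_max * (n_prb * 12 / t_s) * (1 - oh)
    return 1e-6 * rate              # bits/s -> Mbps

# 4 layers, 256QAM (Q_m = 8), f = 1, 273 PRBs, 30 kHz SCS (mu = 1),
# FR1 downlink overhead of 0.14 -> approximately 2337.0 Mbps (Table 7).
rate = max_data_rate_mbps(4, 8, 1, 273, 1, 0.14)
```

Scaling f from 1 down to 0.4 reproduces the remaining rows of Table 7.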
In a wireless communication system, and particularly in an NR system, the data rate that can be supported by the terminal may be pre-engaged between the base station and the terminal. This may be calculated using the maximum frequency band supported by the terminal, the maximum modulation order, and the maximum number of layers. However, the calculated data rate may be different from the value calculated from the TBS used for the actual data transmission and the length of the TTI. Accordingly, the terminal may be allocated a TBS that is larger than the value corresponding to the data rate supported by the terminal itself, and to prevent this, there may be limitations on the schedulable TBS in accordance with the data rate supported by the terminal. FIG. 5A is a diagram illustrating one-to-one communication, that is, unicast communication, performed between two terminals through a sidelink, according to an embodiment. FIG. 5A illustrates a signal 503 transmitted from a first terminal 501 to a second terminal 505, and the direction of the signal transmission may be opposite to the above-described direction. That is, the signal may be transmitted from the second terminal 505 to the first terminal 501. Other terminals 507 and 509 are unable to receive the signal being exchanged through the unicast communication between the first terminal 501 and the second terminal 505. The signal exchange through the unicast between the first terminal 501 and the second terminal 505 may include processes of mapping resources engaged between the first terminal 501 and the second terminal 505, scrambling using an engaged value, control information mapping, data transmission using a configured value, and identifying inherent ID values. Each terminal may be a terminal that moves together with a vehicle. For the unicast, transmission of separate control information, a physical control channel, and data may be performed.
FIG. 5B is a diagram illustrating groupcast communication in which one terminal transmits common data to a plurality of terminals through a sidelink, according to an embodiment. FIG. 5B illustrates a first terminal 551 transmitting common data to other terminals 553, 555, 557, and 559 in a groupcast 561 through a sidelink, and other terminals 561 and 563 which are not included in the group are unable to receive the signals being transmitted for the groupcast 561. The terminal that transmits the signal for the groupcast may be another terminal in the group, and the resource allocation for the signal transmission may be provided by the base station, may be provided by the terminal that serves as a leader in the group, or may be selected by the terminal that transmits the signal. Each terminal may be a terminal that moves together with a vehicle. For the groupcasting, transmission of separate control information, a physical control channel, and data may be performed. FIG. 6 is a diagram illustrating a process in which terminals having received common data through groupcasting transmit information related to data reception success or failure to the terminal having transmitted the data, according to an embodiment. With reference to FIG. 6, terminals 603, 605, 607, and 609, having received the common data through the groupcasting, transmit the information related to the data reception success or failure to a terminal 601 having transmitted the data. The information may be HARQ-ACK feedback 611. Further, the terminals may be terminals having an LTE-based or NR-based sidelink function. A terminal having only the LTE-based sidelink function is unable to transmit/receive an NR-based sidelink signal and physical channel. The term sidelink may be interchangeably used with PC5, V2X, or D2D. In FIGS. 5B and 6, the transmission/reception in accordance with the groupcasting is exemplified, but the description may also be applied to unicast signal transmission/reception between the terminals.
FIG. 7 is a diagram illustrating a state in which a synchronization signal and the PBCH of an NR system are mapped in the frequency and time domain, according to an embodiment. A primary synchronization signal (PSS) 701, a secondary synchronization signal (SSS) 703, and a PBCH 705 are mapped over 4 OFDM symbols. The PSS 701 and the SSS 703 are mapped onto 12 RBs, and the PBCH 705 is mapped onto 20 RBs. The table in FIG. 7 illustrates how the frequency band occupied by the 20 RBs varies in accordance with the subcarrier spacing (SCS). The resource region on which the PSS 701, SSS 703, and PBCH 705 are transmitted may be referred to as an SS/PBCH block. Further, the SS/PBCH block may be referred to as an SSB block. FIG. 8 is a diagram illustrating which symbols one SS/PBCH block is mapped onto in a slot, according to an embodiment. FIG. 8 illustrates an LTE system using a subcarrier spacing of 15 kHz and an NR system using a subcarrier spacing of 30 kHz. SS/PBCH blocks 811, 813, 815, and 817 of the NR system are transmitted in locations 801, 803, 805, and 807 in which the cell-specific reference signals (CRS) that are always transmitted in the LTE system can be avoided. This allows the LTE system and the NR system to coexist in one frequency band. FIG. 9 is a diagram illustrating symbols on which SS/PBCH blocks can be transmitted in accordance with the subcarrier spacing, according to an embodiment. With reference to FIG. 9, the subcarrier spacing may be configured as 15 kHz, 30 kHz, 120 kHz, or 240 kHz, and in accordance with the subcarrier spacing, the locations of the symbols in which an SS/PBCH block (or SSB block) can be located may be determined. FIG. 9 illustrates the symbol locations in which the SSB can be transmitted within 1 ms in accordance with the subcarrier spacing, and it is not necessary for the SSB to always be transmitted in the region indicated in FIG. 9.
Accordingly, the location in which the SSB block is transmitted may be configured in the terminal through system information or dedicated signaling. FIG. 10 is a diagram illustrating symbols on which SS/PBCH blocks can be transmitted in accordance with the subcarrier spacing, according to another embodiment. With reference to FIG. 10, the subcarrier spacing may be configured as 15 kHz, 30 kHz, 120 kHz, or 240 kHz, and in accordance with the subcarrier spacing, the locations of the symbols in which an SS/PBCH block (or SSB block) can be located may be determined. FIG. 10 illustrates a symbol location 1009 in which the SSB block can be transmitted within 5 ms in accordance with the subcarrier spacing, and the location in which the SSB block is transmitted may be configured in the terminal through system information or dedicated signaling. It is not necessary for the SS/PBCH block to always be transmitted in the region in which it can be transmitted, and the SS/PBCH block may or may not be transmitted depending on the selection of the base station. Accordingly, the location in which the SSB block is transmitted may be configured in the terminal through the system information or the dedicated signaling. Herein, a sidelink control channel may be referred to as a physical sidelink control channel (PSCCH), and a sidelink shared channel or data channel may be referred to as a physical sidelink shared channel (PSSCH). Further, a broadcast channel that is broadcasted together with a synchronization signal may be referred to as a physical sidelink broadcast channel (PSBCH), and a channel for feedback transmission may be referred to as a physical sidelink feedback channel (PSFCH). However, the feedback transmission may also be performed using the PSCCH or PSSCH. In accordance with the transmitting communication system, the channel may be referred to as LTE-PSCCH, LTE-PSSCH, NR-PSCCH, or NR-PSSCH.
Herein, a sidelink means a link between terminals, and a Uu link means a link between a base station and a terminal. FIG. 11 is a diagram illustrating a resource pool that is defined as the set of time and frequency resources used for sidelink transmission and reception, according to an embodiment. A resource pool 1110 is non-contiguously allocated on time and frequency. Herein, although explanation is made focusing on a case in which the resource pool is non-contiguously allocated on frequency, the resource pool can also be contiguously allocated on frequency. A non-contiguous resource allocation 1120 is performed on frequency, and the granularity of resource allocation on frequency may be a PRB. A resource allocation 1121 on frequency is performed based on a sub-channel. The sub-channel may be defined as a unit on frequency composed of a plurality of RBs; that is, the sub-channel may be defined as an integer multiple of the RB. The resource allocation 1121 denotes a sub-channel composed of four contiguous PRBs. The size of the sub-channel may be configured differently, and although one sub-channel is generally composed of contiguous PRBs, it is not necessary that the sub-channel be composed of contiguous PRBs. The sub-channel may become the basic unit of resource allocation on the PSSCH or PSCCH, and thus the size of the sub-channel may be configured differently depending on whether the corresponding channel is the PSSCH or the PSCCH. Further, the term "sub-channel" may be replaced by another term, such as an RBG. The start location of a sub-channel on frequency in the resource pool is startRBSubchannel 1122. The resource blocks that are the frequency resources belonging to the resource pool for the PSSCH in an LTE V2X system may be determined in the following method. The resource block pool consists of N_subCH sub-channels, where N_subCH is given by the higher layer parameter numSubchannel. The sub-channel m for m = 0, 1, . . .
, N_subCH − 1 consists of a set of n_subCH^size contiguous resource blocks with the physical resource block number n_PRB = n_subCH^RBstart + m · n_subCH^size + j for j = 0, 1, . . . , n_subCH^size − 1, where n_subCH^RBstart and n_subCH^size are given by the higher layer parameters startRBSubchannel and sizeSubchannel, respectively.

A non-contiguous resource allocation 1130 is performed on time, and the granularity of resource allocation on time may be a slot. Herein, although the resource pool is non-contiguously allocated on time, the resource pool can also be contiguously allocated on time. The start location of a slot on time is startSlot 1131. The subframes (t_0^SL, t_1^SL, . . . , t_Tmax^SL) that are the time resources belonging to the resource pool for the PSSCH in the LTE V2X system may be determined in the following method.

0 ≤ t_i^SL < 10240, where the subframe index is relative to subframe #0 of the radio frame corresponding to SFN 0 of the serving cell or DFN 0. The set includes all the subframes except the following subframes:
subframes in which an SLSS resource is configured;
downlink subframes and special subframes, if the sidelink transmission occurs in a TDD cell;
reserved subframes, which are determined by the following steps:
1) The remaining subframes, excluding the N_slss and N_dssf subframes from the set of all subframes, are denoted by (l_0, l_1, . . . , l_(10240 − N_slss − N_dssf − 1)), arranged in increasing order of subframe index, where N_slss is the number of subframes in which an SLSS resource is configured within 10240 subframes, and N_dssf is the number of downlink subframes and special subframes within 10240 subframes if the sidelink transmission occurs in a TDD cell.
2) A subframe l_r (0 ≤ r < (10240 − N_slss − N_dssf)) belongs to the reserved subframes if r = ⌊m · (10240 − N_slss − N_dssf)/N_reserved⌋, where m = 0, . . . , N_reserved − 1 and N_reserved = (10240 − N_slss − N_dssf) mod L_bitmap. Here, L_bitmap, the length of the bitmap, is configured by higher layers.
The subframes are arranged in increasing order of subframe index. A bitmap (b_0, b_1, . . .
, b_{Lbitmap−1}) associated with the resource pool is used, where L_bitmap, the length of the bitmap, is configured by higher layers. A subframe t_k^SL (0 ≤ k < (10240 − N_slss − N_dssf − N_reserved)) belongs to the subframe pool if b_k′ = 1, where k′ = k mod L_bitmap.

FIG. 12 is a diagram illustrating a scheduled resource allocation (mode 1) method in a sidelink, according to an embodiment. The scheduled resource allocation (mode 1) is a method in which the base station allocates the resources used for sidelink transmission to RRC-connected terminals in a dedicated scheduling method. Because the base station can manage the resources of the sidelink, this method may be effective for interference management and resource pool management. In FIG. 12, a terminal 1201 camps on a cell, at 1205, and receives a sidelink system information block (SL SIB) from a base station 1203, at 1210. The system information may include resource pool information for transmission/reception, configuration information for a sensing operation, information for synchronization configuration, and information for inter-frequency transmission/reception. If data traffic for V2X is created, the terminal 1201 performs an RRC connection with the base station, at 1220. The RRC connection between the terminal and the base station may be called a Uu-RRC connection. The above-described Uu-RRC connection process may also be performed before the data traffic is created. The terminal 1201 requests the base station to provide a transmission resource for performing V2X communication, at 1230. The terminal 1201 may request the transmission resource from the base station using an RRC message or a MAC CE. A SidelinkUEInformation or UEAssistanceInformation message may be used as the RRC message. The MAC CE may be, for example, a buffer status report MAC CE of a new format (including at least an indicator notifying of a buffer status report for V2X communication, or information on the size of the data buffered for D2D communication).
With respect to the detailed format and contents of the buffer status report used in 3GPP, refer to the 3GPP standard TS 36.321, "E-UTRA MAC Protocol Specification". The base station 1203 allocates the V2X transmission resource to the terminal 1201 through a dedicated Uu-RRC message. This message may be included in an RRCConnectionReconfiguration message. The allocated resource may be a V2X resource through the Uu or a resource for PC5, depending on the kind of traffic requested by the terminal or the congestion degree of the corresponding link. For the above-described determination, the terminal may additionally send ProSe per packet priority (PPPP) or logical channel ID information of the V2X traffic through UEAssistanceInformation or a MAC CE. Because the base station is also aware of information on the resources being used by other terminals, the base station allocates a remaining resource pool among the resources requested by the terminal 1201, at 1235. The base station may indicate the final scheduling to the terminal 1201 by means of DCI transmission through the PDCCH, at 1240. For broadcast transmission, the terminal 1201 broadcasts sidelink control information (SCI) to other terminals 1202 on the PSCCH without additional RRC configuration of the sidelink, at 1260. Further, the terminal 1201 may broadcast data to the other terminals 1202 on the PSSCH, at 1270. In contrast, for unicast and groupcast transmission, the terminal 1201 may perform an RRC connection with other terminals in a one-to-one manner. Here, for discrimination from the Uu-RRC, the RRC connection between the terminals may be referred to as a PC5-RRC connection. Even in the case of the groupcast, the PC5-RRC connection is individually made between the terminals in the group, at 1215. Although FIG. 12 illustrates that the connection of the PC5-RRC, at 1215, is performed after 1210, it may be performed at any time before 1210 or 1260.
If the RRC connection is necessary between the terminals, the terminal 1201 performs the PC5-RRC connection of the sidelink, at 1215, and transmits the SCI to the other terminals 1202 on the PSCCH through the unicast and groupcast, at 1260. The groupcast transmission of the SCI may be construed as a group SCI. Further, the terminal 1201 transmits data to the other terminals 1202 on the PSSCH through the unicast and groupcast, at 1270. FIG. 13 is a diagram illustrating a UE autonomous resource allocation (mode 2) method in a sidelink, according to an embodiment. In the UE autonomous resource allocation (mode 2), the base station provides the sidelink transmission/reception resource pool for V2X as system information, and the terminal selects the transmission resource in accordance with a determined rule. The resource selection method may be zone mapping, sensing-based resource selection, or random selection. In contrast with the scheduled resource allocation (mode 1) method, in which the base station directly participates in the resource allocation, in the UE autonomous resource allocation (mode 2) method of FIG. 13 the terminal 1301 autonomously selects a resource based on the resource pool pre-received through the system information, and transmits the data. In the V2X communication, a base station 1303 may allocate various kinds of resource pools (V2X resource pool and V2P resource pool) for a terminal 1301. The resource pools may include a resource pool on which the terminal can autonomously select an available resource after sensing the resources being used by other neighboring terminals, and a resource pool on which the terminal randomly selects a resource from a predetermined resource pool. The terminal 1301 camps on a cell, at 1305, and receives an SL SIB from the base station 1303, at 1310.
The system information may include resource pool information for transmission/reception, configuration information for a sensing operation, information for synchronization configuration, and information for inter-frequency transmission/reception. The operation illustrated in FIG. 13 differs from the operation illustrated in FIG. 12 in that, in FIG. 12, the base station 1203 and the terminal 1201 operate in an RRC-connected state, whereas in FIG. 13 they may operate even in an idle mode in which the RRC is not connected. Further, even in the RRC-connected state, the base station 1303 does not directly participate in the resource allocation, and may operate so that the terminal autonomously selects the transmission resource. The RRC connection between the terminal and the base station may be referred to as Uu-RRC, at 1320. If data traffic for V2X is created, at 1330, the terminal 1301 selects a resource pool of the time and/or frequency region in accordance with the transmission operation configured among the resource pools transferred from the base station 1303 through the system information. For broadcast transmission, the terminal 1301 broadcasts the SCI to other terminals 1302 on the PSCCH without additional RRC configuration of the sidelink, at 1350. Further, the terminal 1301 may broadcast data to the other terminals 1302 on the PSSCH, at 1360. In contrast, for unicast and groupcast transmission, the terminal 1301 may perform an RRC connection with other terminals in a one-to-one manner, at 1315. Here, for discrimination from the Uu-RRC, the RRC connection between the terminals may be called a PC5-RRC connection. Even in the case of the groupcast, the PC5-RRC connection is individually made between the terminals in the group. This connection of the RRC layer in the sidelink may be called the PC5-RRC.
Through the PC5-RRC connection, UE capability information for the sidelink may be exchanged between the terminals, or the exchange of configuration information required for the signal transmission/reception may be performed. Although FIG. 13 illustrates that the connection of the PC5-RRC, at 1315, is performed after 1310, it may be performed at any time before 1310 or 1350. If the RRC connection is necessary between the terminals, the terminal 1301 performs the PC5-RRC connection of the sidelink, and transmits the SCI to other terminals 1302 on the PSCCH through the unicast and groupcast, at 1350. The groupcast transmission of the SCI may be construed as a group SCI. Further, the terminal 1301 transmits data to the other terminals 1302 on the PSSCH through the unicast and groupcast, at 1360. In order to effectively perform the sensing in a situation where periodic and aperiodic traffic coexist, sensing window A and sensing window B are defined. FIG. 14A is a diagram illustrating a method for configuring sensing window A for UE autonomous resource allocation (mode 2) of a sidelink, according to an embodiment. As illustrated in 1400 of FIG. 14A, when triggering for selecting a transmission resource occurs in slot n 1401, a sensing window A 1402 may be defined as follows. Sensing window A 1402 may be defined as the slot section [n − T0, n − 1]. Here, T0 may be determined as a fixed value, or may be determined to be configurable. As an example of the case in which T0 is determined as a fixed value, it may be given as T0 = 1000 · 2^μ with respect to periodic traffic. In contrast, T0 may be configured as a fixed value of T0 = 100 · 2^μ with respect to aperiodic traffic. As described above, the fixed T0 value may be changed to another value in accordance with the traffic characteristic being considered, and may be fixed to the same value with respect to the periodic and aperiodic traffic.
Here, μ is an index corresponding to numerology, and is configured as the following values in accordance with the subcarrier spacing:
- SCS=15 kHz, μ=0
- SCS=30 kHz, μ=1
- SCS=60 kHz, μ=2
- SCS=120 kHz, μ=3
When T0is determined to be configurable, the configuration for this may be indicated through the SL SIB or UE-specific higher signaling. When indicated through the SL SIB, the corresponding value may be configured within resource pool information among the corresponding system information. If T0is configured within the resource pool information, a constant T0is always used within the resource pool. In the sensing window A1402, SCI decoding and sidelink measurement for another terminal may be performed. The terminal that performs the sensing may acquire resource allocation information for another terminal and QoS information for a packet from the received SCI within the sensing window A1402. The resource allocation information may include a reservation interval for the resource. Further, the QoS information may be latency, reliability, and priority information in accordance with the minimum required communication range for the transmitted traffic and data rate requirements. Further, the terminal may acquire location information of another terminal from the received SCI. The terminal may calculate a TX-RX distance from the location information of another terminal and its own location information. The terminal may measure a sidelink reference signal received power (SL RSRP) from the received SCI within the sensing window A1402. The terminal may measure a sidelink received signal strength indicator (SL RSSI) within the sensing window A1402. The sensing window A1402may be used for the main purpose of determining resources for the UE autonomous resource allocation (mode 2) through sensing of the periodic traffic. 
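The fixed-T0 configuration of sensing window A described above can be illustrated with a short sketch. This is illustrative code, not part of the specification; the constants 1000 and 100 are the example values given for periodic and aperiodic traffic, and the function name is a hypothetical helper.

```python
# Map subcarrier spacing (kHz) to the numerology index mu.
MU_BY_SCS_KHZ = {15: 0, 30: 1, 60: 2, 120: 3}

def sensing_window_a(n, scs_khz, periodic=True):
    """Return the slot span [n - T0, n - 1] of sensing window A,
    where T0 = 1000 * 2^mu (periodic) or 100 * 2^mu (aperiodic)."""
    mu = MU_BY_SCS_KHZ[scs_khz]
    t0 = (1000 if periodic else 100) * (2 ** mu)
    return (n - t0, n - 1)
```

For example, with SCS=30 kHz (μ=1) and periodic traffic, T0=2000, so triggering in slot 5000 yields the window [3000, 4999].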
The terminal may grasp the periodic resource allocation information of another terminal through the SCI decoding, and if the terminal determines that allocation of the transmission resource to the resource to be used by another terminal is not effective using the result of measuring the sidelink, such as the SL RSRP or SL RSSI, the corresponding resource may be excluded from a resource selection window1403. As illustrated inFIG.14A, when the triggering for selecting the transmission resource occurs in slot n, at1401, the resource selection window1403may be defined as follows. The resource selection window1403may be defined as a slot section of [n+T1, n+T2]. Here, T1and T2may be determined as fixed values or may be determined to be configurable. In contrast, T1and T2may be determined in a fixed range, and in consideration of the implementation thereof, the terminal may configure proper values within the fixed range, for example, in the range of T1≤4 and 20≤T2≤100. A final transmission resource1406may be selected within the resource selection window1403using the result of the sensing performed in the sensing window A1402. When sensing is performed using only the sensing window A1402as illustrated inFIG.14A, and the transmission resource selection is performed through this, the following transmission resource selection method may be used. 
Transmission Resource Selection Method-1
Step-1: The number Mtotalof resource candidates capable of performing resource allocation is determined based on the resource pool information within the resource selection window1403.
Step-2: The terminal excludes resources of which the usage is determined to be ineffective due to occupation by another terminal within the resource selection window1403using the sensing result in the sensing window A1402, and retains X (≤Mtotal) resource candidates capable of performing resource allocation. For this, a method for excluding resources through SCI decoding for another terminal and sidelink measurement may be used.
Step-3: A resource candidate list X is reported to a higher layer of the terminal, and the final transmission resource among X candidates is randomly selected on the higher layer of the terminal.
FIG.14Bis a diagram illustrating a method for configuring sensing window B for UE autonomous resource allocation (mode 2) of a sidelink, according to an embodiment. As illustrated in1430ofFIG.14B, when triggering for selecting a transmission resource occurs in slot n1401, sensing window B1404may be defined as follows. Sensing window B1404may be defined as a slot section of [n+T1′, n+T2′]. T1′ and T2′ may be determined as fixed values, or may be determined to be configurable. In contrast, T1′ and T2′ may be determined in a fixed range, and the terminal may configure proper values within the fixed range in consideration of the implementation thereof. 
Further, when k indicates the slot in which the resource is finally selected, the sensing window B is interrupted in slot k, and in this case, the sensing window B becomes [n+T1′, k]. T1′ and T2′ may be configured to have the same values as the values of T1and T2of the resource selection window1403ofFIG.14A, respectively, or may be configured to have different values. For example, if T1′ is configured as T1′=0, it means that sensing is performed from a triggering slot n for selecting the transmission resource. By the configured T1′ and T2′ values, the sensing window B may be configured as one slot or more slots. In the sensing window B1404, the SCI decoding for another terminal and sidelink measurement may be performed. The sensing window B1404may be used for the purpose of determining resources for UE autonomous resource allocation (mode 2) through additional sensing of periodic and aperiodic traffics with respect to the sensing window A. In the sensing window B1404configured hereinafter based on a triggering slot n for selecting the transmission resource, it is possible to sense aperiodic traffic that is unable to be predicted in the sensing window A1402using the sidelink measurement for the slot to which an actual transmission resource can be allocated. The sensing through the sensing window B1404may be understood as an operation of performing the sensing with respect to the traffic sensed for each slot regardless of whether the traffic is periodic or aperiodic. When sensing is performed using the sensing window B1404as illustrated inFIG.14B, and the transmission resource selection is performed through this, the following transmission resource selection method may be used. 
Transmission Resource Selection Method-2
Step-1: It is determined whether a corresponding resource is idle by performing sensing in the corresponding slot within the sensing window B1404. The resource allocation unit on frequency may be defined as A(≥1) sub-channels or all sub-channels. The number Ntotalof resource candidates capable of performing resource allocation within the corresponding slot is determined in accordance with the resource allocation unit on the frequency. The sensing may be performed through SCI decoding and sidelink measurement.
Step-2-1: If it is determined that the corresponding resource is idle through the sensing in Step-1 as described above, the final transmission resource1406among the number Ntotalof resource candidates capable of performing resource allocation within the corresponding slot is determined.
Step-2-2: If it is determined that all the corresponding resources are busy through the sensing in Step-1 as described above, the following operation may be selected. If the next slot is also configured as the sensing window B1404, the operation skips to the next slot, and Step-1 as described above is performed. If the next slot is not configured as the sensing window B1404, the following operation may be considered. In the current slot, the final transmission resource1406is determined using QoS information or the result of energy detection. The QoS information may be priority information in accordance with at least one of priority, latency, reliability, ProSe per-packet priority (PPPP), ProSe per-packet reliability (PPPR), minimum required communication range for traffic being transmitted, or data rate requirements. The priority may mean to include the PPPP and the PPPR, and may be a value selected within a range of predetermined values, and data that is necessary to be transmitted in the sidelink may have one priority value. Alternatively, the transmission in the current slot may be canceled, and a backoff operation may be performed. 
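The slot-by-slot logic of transmission resource selection method-2 can be sketched as follows. This is an illustrative simplification, not part of the specification: each slot of sensing window B is assumed to be represented as a list of candidate dictionaries whose is_idle flag stands in for the result of SCI decoding and sidelink measurement, and the choice among idle candidates (here simply the first one) is left open by the method itself.

```python
def select_resource_method2(slots):
    """Walk sensing window B slot by slot (Step-1); return a
    (slot_index, resource) pair for the first idle candidate found
    (Step-2-1), or None if every slot is busy, in which case the caller
    falls back to QoS/energy-based selection or backoff (Step-2-2)."""
    for slot_index, candidates in enumerate(slots):
        idle = [c for c in candidates if c["is_idle"]]
        if idle:
            return slot_index, idle[0]["resource"]
        # All candidates in this slot are busy: skip to the next slot
        # of sensing window B, if one is configured.
    return None
```

For example, if slot 0 is fully busy and slot 1 has an idle candidate, the sketch returns that candidate in slot 1.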
As defined throughFIGS.14A and14B, the sensing window A and the sensing window B may be divided based on the time point at which the triggering for selecting the transmission resource occurs. Specifically, based on the triggering slot n for selecting the transmission resource, the previously configured sensing section may be defined as the sensing window A1402, and the sensing section configured thereafter may be defined as the sensing window B1404. FIG.14Cis a diagram illustrating a method for configuring sensing window A and sensing window B for UE autonomous resource allocation (mode 2) of a sidelink, according to an embodiment. In1460ofFIG.14C, sensing window A and sensing window B are simultaneously configured. When the triggering for selecting the transmission resource occurs in slot n1401, the sensing window A1402and the sensing window B1404may refer to the above-described definition. When the sensing is performed using both the sensing window A1402and sensing window B1404, as illustrated inFIG.14C, and the transmission resource selection is performed, the following transmission resource selection method may be used.
Transmission Resource Selection Method-3
Step-1: The number Mtotalof resource candidates capable of performing resource allocation is determined based on the resource pool information within the resource selection window1403.
Step-2: The terminal performing the sensing excludes resources of which the usage is determined to be ineffective due to occupation by another terminal within the resource selection window1403using the sensing result in the sensing window A1402, and retains X (≤Mtotal) resource candidates capable of performing resource allocation. 
SCI decoding for another terminal and sidelink measurement may be used to exclude the resources.
Step-3: A resource candidate list X is reported to a higher layer of the terminal, and Y candidates among X candidates are randomly down-selected on the higher layer of the terminal.
Step-4-1: If the sensing window B1404is included in the resource selection window1403, the terminal selects the final transmission resource1406among Y candidates determined on the higher layer by the transmission resource selection method-2 using the sensing result of the sensing window B1404on the physical layer. The case where the sensing window B1404is included in the resource selection window1403corresponds to a section of [n+T1, k] inFIG.14C. Such a condition may be determined by the configuration of T1and T2, and T1′ and T2′.
Step-4-2: If the sensing window B1404is not included in the resource selection window1403, the final transmission resource1406is selected by the transmission resource selection method-2 using the sensing result in the sensing window B on the physical layer. The case where the sensing window B1404is not included in the resource selection window1403corresponds to a section of [n+T1′, n+T1−1] inFIG.14C. Such a condition may be determined by the configuration of T1and T2, and T1′ and T2′.
In the transmission resource selection method-3, the selection of Y candidates on the higher layer may be omitted, and the following method may be used.
Transmission Resource Selection Method-4
Step-1: The number Mtotalof resource candidates capable of performing resource allocation is determined based on the resource pool information within the resource selection window1403.
Step-2: The terminal performing the sensing excludes resources of which the usage is determined to be ineffective due to occupation by another terminal within the resource selection window1403using the sensing result in the sensing window A1402, and retains X (≤Mtotal) resource candidates capable of performing resource allocation. 
SCI decoding for another terminal and sidelink measurement may be used to exclude the resources.
Step-3-1: If the sensing window B1404is included in the resource selection window1403, the terminal selects the final transmission resource1406among X candidates by the transmission resource selection method-2 using the sensing result of the sensing window B1404on the physical layer. The case where the sensing window B1404is included in the resource selection window1403corresponds to a section of [n+T1, k] inFIG.14C. Such a condition may be determined by the configuration of T1and T2, and T1′ and T2′.
Step-3-2: If the sensing window B1404is not included in the resource selection window1403, the final transmission resource1406is selected by the transmission resource selection method-2 using the sensing result in the sensing window B on the physical layer. The case where the sensing window B1404is not included in the resource selection window1403corresponds to a section of [n+T1′, n+T1−1] inFIG.14C. Such a condition may be determined by the configuration of T1and T2, and T1′ and T2′.
If the sensing window A1402and the sensing window B1404are simultaneously configured, the final resource selection may be determined by the resource selection window1403and the sensing window B1404. The transmission resource selection method-3 and the transmission resource selection method-4 are methods for performing the sensing in a situation where the periodic and aperiodic traffics coexist by simultaneously configuring the sensing window A1402and the sensing window B1404and optimizing the selection of the transmission resource through the sensing. The sensing and the transmission resource selection in the UE autonomous resource allocation (mode 2) of the sidelink as described above may be implemented in various methods. 
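The combined flow of transmission resource selection method-3 (window-A exclusion, higher-layer down-selection of Y candidates, then window-B selection) can be sketched as follows. This is an illustrative simplification, not part of the specification: the window-A exclusion set and the window-B idle set are assumed to be precomputed from SCI decoding and sidelink measurement, and the window-B step is reduced to a membership test.

```python
import random

def select_resource_method3(candidates, excluded_by_window_a, y, idle_in_window_b):
    """Return a final transmission resource, or None if no down-selected
    candidate is found idle by window-B sensing."""
    # Step-2: drop candidates found occupied during sensing window A.
    x_list = [c for c in candidates if c not in excluded_by_window_a]
    # Step-3: the higher layer randomly down-selects Y of the X survivors.
    y_list = random.sample(x_list, min(y, len(x_list)))
    # Step-4: window-B sensing picks the final resource among Y candidates
    # (here: the first down-selected candidate that is idle).
    final = [c for c in y_list if c in idle_in_window_b]
    return final[0] if final else None
```

Method-4 corresponds to calling the same routine with y equal to the number of surviving candidates, so that the higher-layer down-selection is effectively skipped.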
For example, when simultaneously configuring the sensing window A1402and the sensing window B1404, if the triggering for selecting the transmission resource occurs in slot n in a state where the terminal is always performing the sensing for the sensing window A1402, the terminal may be implemented to select the final transmission resource by sensing the sensing window B1404. However, the terminal, which always performs the sensing for the sensing window A1402, can immediately use the sensing result of the sensing window A1402anytime, and thus it has an advantage in terms of the latency in selecting the transmission resource, but a disadvantage in terms of energy consumption. Accordingly, as another method, the terminal may be implemented to immediately perform the sensing for the sensing window A1402if a traffic to be transmitted occurs, and to select the final transmission resource by performing the sensing for the sensing window B1404after performing triggering for selecting the transmission resource. The latter method has the advantage that it can minimize the energy consumption of the terminal, but has a disadvantage in terms of the latency in selecting the transmission resource. In the foregoing, an example has been described in which an empty frequency-time resource is searched for the communication between the terminals in the sidelink, and the signal is transmitted on the searched resources. However, the method and the apparatus provided in the disclosure are not limited thereto, and can be applied to various channel occupation and channel reservation methods. FIG.15is a diagram illustrating a mode 1 method for performing sidelink data transmission through reception of scheduling information from a base station as illustrated inFIG.12, according to an embodiment. A method for receiving scheduling information from a base station and performing sidelink communication based on the scheduling information is referred to as mode 1. 
A terminal1501intended to perform transmission in a sidelink receives scheduling information1509for sidelink communication from a base station1511. The terminal1501intended to perform transmission in the sidelink may be referred to as a transmitting terminal, and a terminal1503performing data reception in the sidelink may be referred to as a receiving terminal. However, the transmitting terminal1501and the receiving terminal1503may be able to perform both data transmission and reception in the sidelink. The scheduling information1509for the sidelink communication may be obtained through reception of DCI transmitted by the base station1511, and the DCI may include the following information.
- Carrier indicator: used for the purpose of scheduling the sidelink of another carrier in a situation where carrier aggregation (CA) is applied.
- Lowest index of sub-channel allocation for initial transmission: used for frequency resource allocation of the initial transmission.
- Information to be included in sidelink control information: This may include frequency resource allocation information, the frequency resource allocation information for initial transmission and retransmission, and resource allocation for subsequent N-times transmission or resource reservation information.
- Time interval information between initial transmission and retransmission: This may include information on the sidelink slot structure, and information on what slot and what symbols can be used for the sidelink. This may also include HARQ-ACK/CSI feedback timing information, and timing information for transmitting HARQ-ACK or CSI feedback in the sidelink to the base station.
- Addressee ID: ID information on what terminals are to receive information.
- Quality-of-Service (QoS) information such as priority: Information on the priority with which data is to be transmitted.
The scheduling may be used for one sidelink transmission, or may be used for periodic transmission, semi-persistent scheduling (SPS), or configured grant. 
The scheduling methods may be discriminated by an indicator included in the DCI, RNTI scrambled in a CRC added to the DCI, or ID value. Zero (0) bits may be additionally added to the DCI to make the size of the DCI equal to the size of other DCI formats, such as DCI for downlink scheduling or uplink scheduling. The transmitting terminal1501receives the DCI for sidelink scheduling from the base station1511, transmits a PSCCH including the sidelink scheduling information1507, and transmits a PSSCH that is the corresponding data1505. The sidelink scheduling information1507may be SCI, and the SCI may include the following information.
- HARQ process number: HARQ process ID for HARQ-related operation of data being transmitted.
- New data indicator (NDI): Information on whether the currently transmitted data is new data.
- Redundancy version: Information on what parity bit is to be sent when mapping is performed through channel coding of data.
- Layer-1 source ID: ID information on a physical layer of a sending terminal.
- Layer-1 destination ID: ID information on a physical layer of a receiving terminal.
- Frequency-domain resource assignment for scheduling PSSCH: Frequency-domain resource configuration information of data being transmitted.
- MCS: Modulation order and coding rate information.
- QoS indication: Indicates a priority, target latency/delay, target distance, and target error rate.
- Antenna port(s): Antenna port information for data transmission.
- DMRS sequence initialization: Information on an ID value for initialization of a DMRS sequence.
- PTRS-DMRS association: Information on PTRS mapping.
- CBGTI: Utilized as an indicator for code block group (CBG) unit retransmission.
- Resource reservation: Information for resource reservation.
- Time gap between initial transmission and retransmission: Time interval information between initial transmission and retransmission.
- Retransmission index: Indicator for discriminating retransmission.
- Transmission format/cast type indicator: Discrimination indicator of a transmission format or unicast/groupcast/broadcast.
- Zone ID: Location information of a transmitting terminal.
- NACK distance: Reference indicator determining whether a receiving terminal transmits HARQ-ACK/NACK.
- HARQ feedback indication: Indicates whether a feedback is to be transmitted or whether the feedback is being transmitted.
- Time-domain resource assignment for scheduling PSSCH: Time-domain resource information of sidelink data being transmitted.
- Second SCI indication: Indicator including mapping information of the second SCI for 2-stage control information.
- DMRS pattern: DMRS pattern (e.g., DMRS-mapped symbol location) information.
The control information may be included in one SCI to be transmitted to the receiving terminal, or may be included in two SCIs to be transmitted. The transmission of the control information through two SCIs may be called a 2-stage SCI method. The disclosure provides a method in which a terminal having received data in a sidelink transmits a feedback including an HARQ-ACK feedback and a method and an apparatus in which a terminal having transmitted data receives a feedback including an HARQ-ACK feedback.
First Embodiment
In a first embodiment, a method and an apparatus are provided, in which a terminal having received data configures an HARQ-ACK feedback. A method is provided for determining the size of an HARQ-ACK codebook regardless of whether actual sidelink data is transmitted and in accordance with pre-configuration, and a method is provided for determining the size of an HARQ-ACK codebook based on whether actual sidelink data is transmitted. FIG.16is a diagram illustrating a mapping structure of physical channels mapped onto one slot on a sidelink, according to an embodiment. A transmitting terminal transmits a preamble signal1602on one or more symbols before transmitting a corresponding slot1601. 
The preamble signal1602may be used by a receiving terminal to properly perform an automatic gain control (AGC) for adjusting the level of amplification when amplifying a power of a received signal. Further, whether to transmit the preamble signal1602may be determined depending on whether the transmitting terminal has transmitted the previous slot of the corresponding slot1601. That is, when the corresponding transmitting terminal transmits a signal to the same terminal as in the previous slot, the preamble transmission may be omitted. A PSCCH1603including control information may be transmitted on initial symbols of a slot. A PSSCH1604being scheduled by the control information of the PSCCH may be transmitted on the subsequent symbols. A part of SCI that is the control information may be mapped onto the PSSCH to be transmitted. Further,FIG.16illustrates an example in which a physical sidelink feedback channel (PSFCH)1605that is a physical channel on which feedback information is transmitted is located at the last part of the slot. A predetermined empty time is secured between the PSSCH1604and the PSFCH1605so that the terminal having transmitted/received the PSSCH can prepare for transmission or reception of the PSFCH. After the transmission/reception of the PSFCH1605, the terminal may secure a section emptied for a predetermined time. The terminal may be preconfigured with the location of the slot in which the PSFCH can be transmitted. The term “preconfigured” may mean “predetermined” in the terminal preparation process, “transferred” when a sidelink-related system is accessed, “transferred” from the base station when the base station is accessed, or “transferred” from another terminal. FIG.17is a diagram illustrating a resource capable of transmitting and receiving a PSFCH for each slot, according to an embodiment. 
A period of resources capable of transmitting and receiving the PSFCH may be configured by a parameter, such as periodicity_PSFCH_resource, andFIG.17illustrates a case where periodicity_PSFCH_resource=1 slot. Further, it may be possible for the period to be configured in units of milliseconds (ms), and a PSFCH resource to be configured in each slot in accordance with the SCS. FIG.18is a diagram illustrating a resource capable of transmitting and receiving a PSFCH every four slots, according to an embodiment. As shown inFIG.18, only a last slot1804among four slots1801,1802,1803, and1804is configured to be able to transmit and receive a PSFCH1811. In a similar manner, only a last slot1808among four slots1805,1806,1807, and1808is configured to be able to transmit and receive a PSFCH1813. The index of the slot may be a slot that is determined in a resource pool. That is, the four slots1801,1802,1803, and1804may not be physically contiguous slots, but may be contiguously appearing slots among slots belonging to the resource pool (or slot pool) that is used by a transceiver. Arrows inFIG.18indicate slots on the PSFCH on which HARQ-ACK feedback information of the PSSCH is transmitted. The HARQ-ACK information of the PSSCH being transmitted in slots1801,1802, and1803may be included in the PSFCH capable of being transmitted in slot1804. In a similar manner, the HARQ-ACK information of the PSSCH being transmitted in slots1804,1805,1806, and1807may be included in the PSFCH capable of being transmitted in slot1808. The reason why the HARQ-ACK feedback information of the PSSCH transmitted in slot1804is unable to be transmitted in the same slot1804is that there is not enough time for the terminal to transmit the PSFCH for the PSSCH in the slot1804after completing decoding of the PSSCH transmitted in the same slot1804. That is, the minimum processing time required to prepare the PSFCH after processing the PSSCH is not short enough. 
In order to properly perform the transmission/reception of the PSFCH, it is necessary to know the number of HARQ-ACK feedback bits included in the PSFCH. Determination of the number of HARQ-ACK feedback bits included in the PSFCH and of which PSSCH the HARQ-ACK bits are included may be performed based on at least one or more combinations of the following parameters.
- Slot period in which the PSFCH can be transmitted and received by parameters such as periodicity_PSFCH_resource
- Whether to bundle HARQ-ACK. HARQ-ACK bits of the PSSCH transmitted in a predetermined number of slots before the PSFCH transmission and reception may have values determined through an AND operation (i.e., when bundling is applied, even if one is NACK, NACK is determined as a whole).
- The number of transport blocks (TBs) included in PSSCH
- Whether to use and configure retransmission in the unit of a code block group (CBG)
- Whether to activate HARQ-ACK feedback
- The number of PSSCHs actually transmitted and received
- Minimum processing time of a terminal for PSSCH processing and PSFCH transmission preparation
Slots in which the terminal should transmit the HARQ-ACK may be calculated using the following method. If a PSFCH transmission/reception resource is configured to be located every N slots, the HARQ-ACK for the PSSCH transmitted in slot n is transmitted in the slot having the index of ┌(n+1)/N┐×N. Herein, ┌x┐ means the smallest integer among integers that are equal to or larger than x. The above-described formula may be a formula determined to perform an embodiment ofFIG.18. The above-described formula can be replaced by the formula n+N−mod(n,N). Herein, mod(a,b) means a remainder obtained by dividing a by b. For example, if N is N=4, HARQ-ACK bits of the PSSCH transmitted in slots n=4, 5, 6, and 7 may be transmitted using the PSFCH in slot8. The above-described equation can be normalized and applied as follows. 
If the PSFCH transmission/reception resource is located every N slots, the HARQ-ACK for the PSSCH transmitted in slot n is transmitted in a slot having an index of ┌(n+Δ)/N┐×N. The above-described formula can be replaced by the formula n+N+Δ−1−mod(n+Δ−1,N). Here, Δ is a parameter that means a gap between the PSSCH received by the terminal and the transmission slot of the HARQ-ACK, and may be configured for each resource pool, may be a pre-configured value, or may be understood as the same value between the terminals through the PC5-RRC configuration. As described above, the Δ value may be differently defined in accordance with the SCS, and for example, may be a value that is determined or configured as in Table 9 below.
TABLE 9
SCS: 15 kHz, Δ: 1
SCS: 30 kHz, Δ: 1
SCS: 60 kHz, Δ: 2
SCS: 120 kHz, Δ: 2
The proposed Δ value may be variously modified and applied. The HARQ-ACK timing method may be provided as follows. If the PSFCH transmission/reception resource is configured to be located every N slots, the PSFCH being transmitted in slot n includes the HARQ-ACK feedback information of the PSSCH transmitted in slots n−N, n−N+1, . . . , n−2, n−1. That is, the PSFCH may include the HARQ-ACK feedback information of the PSSCH transmitted in N slots before the slot n. InFIG.18, in the PSFCH being transmitted in slot n, the HARQ-ACK feedback information for the PSSCH being transmitted in slots n−1, n−2, n−3, and n−4 may be included. The above-described method can be normalized and applied in the following method. That is, it may mean that the PSFCH including the HARQ-ACK feedback for the PSSCH transmitted in slots n−N−Δ+1, n−N−Δ+2, . . . , n−Δ−1, n−Δ is transmitted in slot n. The above-described method may be applied to secure a time corresponding to about at least Δ slots as the time for the terminal to process the PSSCH and to prepare the PSFCH. If the PSCCH or PSSCH is not received in the corresponding slot, the receiving terminal may determine the HARQ-ACK value as a fixed value, such as “0”. 
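The two forms of the PSFCH slot-timing formula above, the ceiling form ┌(n+Δ)/N┐×N and the modulo form n+N+Δ−1−mod(n+Δ−1,N), can be checked to agree with a short sketch. This is illustrative code, not part of the specification; the function names are hypothetical.

```python
import math

def psfch_slot(n, period, delta=1):
    """PSFCH slot for a PSSCH in slot n: ceil((n + delta)/N) * N.
    With delta = 1 this reduces to the first formula, ceil((n + 1)/N) * N."""
    return math.ceil((n + delta) / period) * period

def psfch_slot_mod(n, period, delta=1):
    """Equivalent modulo form: n + N + delta - 1 - mod(n + delta - 1, N)."""
    return n + period + delta - 1 - ((n + delta - 1) % period)
```

For the example in the text, N=4 and Δ=1 map the PSSCHs of slots n=4, 5, 6, and 7 all to the PSFCH in slot 8, and both forms give the same slot index for every n.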
As described above, the Δ is the parameter that means the gap between the PSSCH received by the terminal and the transmission slot of the HARQ-ACK, and may be configured for each resource pool, may be a pre-configured value, or may be understood as the same value between the terminals through the PC5-RRC configuration. Further, the Δ may be differently defined in accordance with the transmission type, such as the unicast or groupcast. Second Embodiment The second embodiment provides a method and an apparatus for determining a HARQ-ACK codebook based on SCI being transmitted from a transmitting terminal. The transmitting terminal may transmit a PSCCH for scheduling a PSSCH to transmit the PSSCH, and SCI including scheduling information may be mapped onto the PSCCH. The SCI may include the following information.Sidelink assignment index (SAI): an indicator indicating which PSSCH for configuring the HARQ-ACK codebook is transmitted in a sidelink and indicating how many PSSCHs are configured in one HARQ-ACK codebook to be transmitted on one PSFCH. As described above, an SAI bit field may or may not exist in the SCI depending on the HARQ-ACK codebook configuration information. Further, a part of the SAI bit field may mean the order of PSSCHs in one codebook, such as a counter, in accordance with the configuration information, and the remaining bits may be used as information indicating the size of the codebook. Further, the whole SAI bit field information may mean the order of HARQ-ACK bit values of the corresponding scheduled PSSCHs in accordance with the configuration information. Third Embodiment The third embodiment provides a method and an apparatus for applying different timing parameters in accordance with the processing capability of a terminal when applying the first embodiment. 
The terminal may process the SL-SCH included in the PSSCH or data during reception of the PSSCH, and the processing may include channel estimation, modulation or demodulation, and channel code decoding of data. In processing the data, a terminal having a high processing capability may require a short time to perform the decoding of the data. In contrast, a terminal having a low processing capability may require a long time to perform the decoding of the data. Accordingly, the timing to transmit the feedback of the data may differ depending on the processing capability of the terminal. As described above in the first embodiment, if the PSFCH transmission/reception resource is configured to be located every N slots, the HARQ-ACK for the PSSCH transmitted in slot n may be transmitted in a slot having an index of ┌(n+Δ)/N┐×N. The above-described formula can be replaced by the formula n+N+Δ−1−mod(n+Δ−1,N). Here, Δ is a parameter that means a gap between the PSSCH received by the terminal and the transmission slot of the HARQ-ACK, and may be a value that is determined or configured in accordance with the processing time capability of the terminal, that is, the capability of how fast the terminal can process the PSSCH. In the above-described formula, the Δ value may be determined as 2 with respect to the terminal capable of performing normal processing (in the disclosure, it may be referred to as “capability type 1”), whereas in the above-described formula, the Δ value may be determined as 1 with respect to the terminal capable of performing fast processing (in the disclosure, it may be referred to as “capability type 2”). For example, the Δ value may be provided as in Table 10 below. The information on the processing capability of the terminal may be exchanged between the terminals using the PC5-RRC. 
TABLE 10
           Δ for Processing      Δ for Processing
SCS        Capability Type 1     Capability Type 2
 15 kHz            2                     1
 30 kHz            2                     1
 60 kHz            3                     2
120 kHz            3                     2

Embodiment 3-1

Embodiment 3-1 provides a method for determining the number of HARQ-ACK bits or the size of the HARQ-ACK codebook and the time to transmit the feedback when the terminal intends to transmit the HARQ-ACK on the PSFCH in a state where the PSFCH resource does not exist in every slot and N (N is an integer larger than "1") is configured. If a resource capable of transmitting the PSFCH is configured or given in slot n+x when the terminal receives the PSSCH in slot n, the terminal having received the PSSCH maps the HARQ-ACK feedback information of the PSSCH onto the PSFCH in slot n+x, using the smallest x among the integers that are equal to or larger than Δ, and transmits it. As described above, Δ may be a value preconfigured by the transmitting terminal or may be a value configured in the resource pool from which the corresponding PSSCH or PSFCH is transmitted. For the configuration, each terminal may pre-exchange its own capability with the transmitting terminal. As provided in Table 10, Δ may be a value determined in accordance with at least one of the SCS, the terminal capability, a configuration value with the transmitting terminal, or the resource pool configuration. According to the above-described method, when N=2 and Δ is "1", that is, when the PSFCH transmission resource is configured every N slots in the resource pool and the HARQ-ACK of the PSSCH can be transmitted in the slot after the minimum Δ=1 from the reception of the PSSCH (i.e., in this case, the just next slot), the slot in which the HARQ-ACK feedback is transmitted may be determined as in FIG. 19. FIG. 19 is a diagram illustrating the terminal transmitting the HARQ-ACK feedback, according to an embodiment. Regarding FIG. 19, the number of HARQ-ACK feedback bits that should be transmitted by the terminal may be 2 bits on all PSFCHs.
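The slot-selection rule described above, together with the Δ values of Table 10, can be sketched as follows. This is an illustrative sketch under the assumption that PSFCH occasions fall on slot indexes that are multiples of N; the table and function names are chosen for illustration and are not defined in the disclosure.

```python
import math

# Delta values from Table 10, indexed by SCS (kHz) and processing-capability type.
DELTA_TABLE = {
    15: {1: 2, 2: 1},
    30: {1: 2, 2: 1},
    60: {1: 3, 2: 2},
    120: {1: 3, 2: 2},
}

def harq_ack_slot(n: int, N: int, delta: int) -> int:
    """Slot carrying the HARQ-ACK for a PSSCH received in slot n,
    assuming PSFCH resources every N slots and a processing gap delta.
    Implements n + N + delta - 1 - mod(n + delta - 1, N)."""
    return n + N + delta - 1 - ((n + delta - 1) % N)

def harq_ack_slot_ceil(n: int, N: int, delta: int) -> int:
    """Equivalent ceiling form of the same rule: ceil((n + delta) / N) * N."""
    return math.ceil((n + delta) / N) * N
```

With N=2 and Δ=1 (the FIG. 19 case), a PSSCH received in slot 4 or slot 5 is acknowledged in slot 6; the mod form and the ceiling form of the formula agree for all slot indexes.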
For example, when the receiving terminal is unable to receive the transmitted PSSCH, or is unable to receive the PSCCH for scheduling the PSSCH, both in slot n and in slot n+2, it is not necessary to transmit the PSFCH including the HARQ-ACK feedback information in slot n+3. Further, when the receiving terminal has received the transmitted PSSCH in slot n+3, but is unable to receive the transmitted PSSCH or the transmitted PSCCH for scheduling the PSSCH in slot n+4, the receiving terminal will be able to transmit one bit of HARQ-ACK information for slot n+3 in slot n+8. Alternatively, when the receiving terminal has received the transmitted PSSCH in slot n+3, but is unable to receive the transmitted PSSCH or the transmitted PSCCH for scheduling the PSSCH in slot n+4, the receiving terminal will be able to transmit the HARQ-ACK information for slot n+3 and the HARQ-ACK information for slot n+4 in slot n+8. In this case, because the receiving terminal is unable to receive the PSSCH in slot n+4, the receiving terminal will be able to transmit the feedback through configuration of the HARQ-ACK feedback for slot n+4 to NACK. That is, in consideration of the slots included in the resource pool, the slots in which the PSFCH resource is configured, the period N with which the PSFCH resource is configured, and the Δ that is configured or determined in accordance with the processing time of the terminal, the receiving terminal may determine the number of HARQ-ACK feedback bits to be included in the PSFCH when transmitting the PSFCH in a specific slot. The number of HARQ-ACK feedback bits may be determined by Equation (2) below.
The number of HARQ-ACK bits to be included in the PSFCH transmitted in slot n=the number of slots included in the corresponding resource pool among slot (k−Δ+1) to slot (n−Δ)  (2)

In Equation (2), slot k is the slot including the PSFCH resource configured to be transmittable just before the PSFCH that can be transmitted in slot n. Accordingly, if N and Δ are given, the maximum number of HARQ-ACK feedback bits to be transmitted by the terminal on one PSFCH may be determined as illustrated in FIG. 20. FIG. 20 is a diagram illustrating the maximum number of HARQ-ACK feedback bits that a terminal should transmit on one PSFCH, according to an embodiment. That is, in FIG. 20, the terminal may transmit, on the PSFCH in slot n, as many HARQ-ACK feedback bits as the number of corresponding slots, in consideration of slot (n−Δ−N+1−Δ+1) to slot (n−Δ). Of course, if the terminal is unable to receive even one transmitted PSSCH, or is unable to receive the transmitted PSCCH for scheduling the PSSCH, in slot (n−Δ−N+1−Δ+1) to slot (n−Δ), it may not be necessary for the terminal to transmit the PSFCH in slot n. If N and Δ are given as described above, the maximum number of HARQ-ACK feedback bits that should be transmitted by the terminal on one PSFCH may be given as in Equation (3) below.

The maximum number of HARQ-ACK feedback bits that should be transmitted by the terminal on one PSFCH: N+Δ−1  (3)

As an example, when N=2 and Δ is "2", that is, when the PSFCH transmission resource is configured every N slots in the resource pool, and the terminal can transmit the HARQ-ACK of the PSSCH in the slot after the minimum Δ=2 from the reception of the PSSCH (i.e., after 2 slots, the slot after next), the slot in which the HARQ-ACK feedback is transmitted may be determined as in FIG. 21. FIG. 21 is a diagram illustrating the terminal transmitting the HARQ-ACK feedback, according to another embodiment.
That is, with reference to FIG. 21, the number of HARQ-ACK feedback bits that should be transmitted by the terminal may be 1 bit, 2 bits, or 3 bits in accordance with the slot. For example, the terminal will be able to transmit, in slot n+8, the PSFCH including the HARQ-ACK feedback information for slot n+2, slot n+3, and slot n+4. When receiving control information for scheduling at least one PSSCH in slot n+2, slot n+3, and slot n+4, the terminal includes 3-bit HARQ-ACK feedback information in the PSFCH to be transmitted, and for a slot in which the PSSCH is unable to be received, the terminal may configure the feedback information to NACK and transmit it. Accordingly, when transmitting the feedback through sidelink unicast or groupcast communication, the number of feedback bits may be determined as N+Δ−1, as given in Equation (3). That is, in this method, the example proposed in FIG. 21 gives N+Δ−1=2+2−1=3, and thus, it may be determined to always transmit 3 bits. As another example, when transmitting the feedback through sidelink unicast or groupcast communication, the number of feedback bits may be determined as the maximum number of bits that should be transmitted in all cases, in consideration of the slots belonging to the resource pool, N, and Δ. That is, in this method, in consideration of FIG. 21, the maximum number of bits that can be transmitted in all cases is 3, and thus, it may be determined to always transmit 3 bits. As another example, when transmitting the feedback through sidelink unicast or groupcast communication, the number of feedback bits may be determined by calculating, in consideration of the slots belonging to the resource pool, N, and Δ, the number of slots in which a PSSCH related to the HARQ-ACK feedback to be transmitted on the PSFCH can be transmitted, for the slot in which the PSFCH is to be transmitted.
That is, in this method, in consideration of FIG. 21, it may be determined that the terminal transmits, on the PSFCH, 1 bit in slot n, 2 bits in slot n+3, 3 bits in slot n+8, 1 bit in slot n+12, 2 bits in slot n+14, and 2 bits in slot n+16, respectively. Of course, in the above-described example, if no PSSCH, or no control signal for scheduling the PSSCH, is received in the slots related to the HARQ-ACK bits that the terminal has determined to transmit, it may be considered that the transmitting terminal has not transmitted even one PSSCH, and thus, it may not be necessary for the terminal to transmit the PSFCH including the HARQ-ACK. As another example, when N=2 and Δ is "3", that is, when the PSFCH transmission resource is configured every N slots in the resource pool, and the HARQ-ACK of the PSSCH can be transmitted in the slot after the minimum Δ=3 from the reception of the PSSCH (i.e., after 3 slots), the slot in which the HARQ-ACK feedback is transmitted may be determined as in FIG. 22. FIG. 22 is a diagram illustrating the terminal transmitting the HARQ-ACK feedback, according to another embodiment. That is, with reference to FIG. 22, the number of HARQ-ACK feedback bits that should be transmitted by the terminal may be 0 bits, 1 bit, 2 bits, 3 bits, or 4 bits in accordance with the slot. For example, the terminal will be able to transmit, in slot n+8, the PSFCH including the HARQ-ACK feedback information for slot n+2, slot n+3, slot n+4, and slot n+5. When receiving control information for scheduling at least one PSSCH in slot n+2, slot n+3, slot n+4, and slot n+5, the terminal includes 4-bit HARQ-ACK feedback information in the PSFCH to be transmitted, and for a slot in which the PSSCH is unable to be received, the terminal may configure the feedback information to NACK and transmit it.
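The per-slot codebook construction described above, with one bit per candidate slot and NACK for any slot in which the scheduled PSSCH could not be received, can be sketched as follows. The function name and the dictionary-based representation of decoding outcomes are illustrative assumptions, not defined in the disclosure.

```python
def build_harq_ack_codebook(candidate_slots, decode_results):
    """Build the HARQ-ACK bit list for one PSFCH occasion.

    candidate_slots: slots whose PSSCH feedback maps to this PSFCH.
    decode_results: dict slot -> decoding outcome for a detected
    transmission (True = PSSCH decoded, False = scheduled/detected but
    not decoded). A slot absent from the dict means no PSCCH/PSSCH was
    detected there at all.
    Returns a list of 1 (ACK) / 0 (NACK), one bit per candidate slot,
    or an empty list when nothing was detected in any candidate slot
    (in which case no PSFCH needs to be transmitted, as described above).
    """
    if not any(slot in decode_results for slot in candidate_slots):
        return []  # not even one PSSCH/PSCCH received: skip the PSFCH
    return [1 if decode_results.get(slot, False) else 0
            for slot in candidate_slots]
```

For the FIG. 22 example (feedback for slots n+2 to n+5 carried on the PSFCH in slot n+8), a PSSCH decoded only in slots n+2 and n+4 yields the 4-bit codebook [1, 0, 1, 0].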
As another example, when N=4 and Δ is "3", that is, in case that the PSFCH transmission resource is configured every 4 slots in the resource pool, and the HARQ-ACK of the PSSCH can be transmitted in the slot after the minimum Δ=3 from the reception of the PSSCH (i.e., after 3 slots), the slot in which the HARQ-ACK feedback is transmitted may be determined as in FIG. 23. FIG. 23 is a diagram illustrating the terminal transmitting the HARQ-ACK feedback, according to another embodiment. That is, with reference to FIG. 23, the number of HARQ-ACK feedback bits that should be transmitted by the terminal may be 2 bits, 3 bits, 4 bits, 5 bits, or 6 bits in accordance with the slot. For example, the terminal will be able to transmit, in slot n+12, the PSFCH including the HARQ-ACK feedback information for slot n+1, slot n+2, slot n+3, slot n+4, slot n+5, and slot n+6. When receiving control information for scheduling at least one PSSCH in slot n+1, slot n+2, slot n+3, slot n+4, slot n+5, and slot n+6, the terminal includes 6-bit HARQ-ACK feedback information in the PSFCH to be transmitted, and for a slot in which the PSSCH is unable to be received, the terminal may configure the feedback information to NACK and transmit it. As illustrated in FIG. 23, the number of HARQ-ACK feedback bits that should be transmitted by the terminal may be increased beyond N bits in accordance with N and Δ. In this case, information corresponding to a large number of bits needs to be transmitted on the PSFCH, and this may increase the decoding error probability of the PSFCH. Accordingly, the terminal may send only the last K bits of the feedback that should be sent, and may not transmit the remaining bits. As described above, K may be equal to N, the PSFCH resource configuration period, but is not limited thereto.
As another example, when N=2 and Δ is "3", that is, when the PSFCH transmission resource is configured every N slots in the resource pool, and the HARQ-ACK of the PSSCH can be transmitted in the slot after the minimum Δ=3 from the reception of the PSSCH (i.e., after 3 slots), the slot in which the HARQ-ACK feedback is transmitted may be determined as in FIG. 24. FIG. 24 is a diagram illustrating the terminal transmitting the HARQ-ACK feedback, according to another embodiment. That is, with reference to FIG. 24, the number of HARQ-ACK feedback bits that should be transmitted by the terminal may be 0 bits, 1 bit, 2 bits, 3 bits, or 4 bits in accordance with the slot. For example, in slot n+12, there may be no sidelink slot in which a corresponding PSSCH, on which the HARQ-ACK feedback should be sent, is to be received. That is, a case may exist in which there is no feedback bit to be transmitted on the PSFCH resource of a specific slot, in accordance with N, Δ, and the resource pool configuration, and the minimum number of bits to transmit the HARQ-ACK feedback may be given as in Equation (4) below.

The minimum number of HARQ-ACK feedback bits that should be transmitted by the terminal on one PSFCH: max(N−Δ+1, 0)  (4)

As described above, max(a, b) is the larger value between a and b. That is, in the example provided in FIG. 24, the HARQ-ACK to be transmitted does not always exist in slot n+12, and thus, the terminal may consider that the PSFCH resource does not exist in the corresponding slot. That is, although the PSFCH resource exists, the PSSCH transmission/reception will be able to be performed through disregarding of the corresponding PSFCH resource. As an example, N may be configured among values including at least one of 1, 2, and 4, but is not limited to such an example. Further, the configuration may differ for each resource pool.
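Equations (3) and (4) bound the codebook size for any PSFCH occasion; a minimal sketch of both bounds follows (function names are illustrative, not from the disclosure).

```python
def max_harq_ack_bits(N: int, delta: int) -> int:
    """Maximum HARQ-ACK feedback bits on one PSFCH, per Equation (3)."""
    return N + delta - 1

def min_harq_ack_bits(N: int, delta: int) -> int:
    """Minimum HARQ-ACK feedback bits on one PSFCH, per Equation (4).
    Zero means a PSFCH occasion may carry no feedback at all, and the
    terminal may disregard that PSFCH resource as described above."""
    return max(N - delta + 1, 0)
```

For N=2 and Δ=1 (FIG. 19), every PSFCH carries exactly 2 bits (maximum = minimum = 2); for N=2 and Δ=3 (FIG. 24), the range is 0 to 4 bits.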
With respect to the HARQ-ACK, the corresponding PSSCH may be a PSSCH for unicast or groupcast that is configured or indicated to transmit the HARQ-ACK and is transmitted from the same terminal. That is, for a PSSCH on which it is not necessary to send the HARQ-ACK, it may not be necessary to apply the proposed technique. Further, the PSCCH for scheduling the PSSCH may carry control information for scheduling the PSSCH, and it is not always necessary to transmit the control information on the PSCCH. Further, the control information may be one piece of control information, or a plurality of pieces of control information may schedule one PSSCH. The above-described contents may be modified and applied as follows. When receiving the PSSCH in slot n, the terminal having received the PSSCH transmits the HARQ-ACK feedback information of the PSSCH on the earliest PSFCH among the PSFCHs having a gap between the PSSCH and the PSFCH that is equal to or larger than y symbols. Here, y may be a value preconfigured by the transmitting terminal or a value configured in the resource pool from which the corresponding PSSCH or PSFCH is transmitted. For the above-described configuration, each terminal may pre-exchange its own capability with the transmitting terminal, or y may be determined in accordance with the subcarrier spacing.

Embodiment 3-2

Embodiment 3-2 provides a method and an apparatus for configuring a PSFCH resource in a resource pool. In accordance with the resource pool configuration, physically contiguous or non-contiguous slots may belong to one resource pool. With respect to the slots belonging to the resource pool, logical slot numbers (indexes) may be contiguously given.
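The contiguous logical indexing just described can be sketched as follows, assuming the resource pool is given as a set of physical slot indexes; the helper name is illustrative and not defined in the disclosure.

```python
def logical_slot_indexes(pool_physical_slots):
    """Assign contiguous logical slot indexes (0, 1, 2, ...) to the
    physically possibly non-contiguous slots of a resource pool.
    Returns a dict: physical slot index -> logical slot index."""
    return {phys: logical
            for logical, phys in enumerate(sorted(pool_physical_slots))}
```

For a pool of physical slots [0, 1, 2, 3, 7, 8] (physical slots 4 to 6 excluded, as in the FIG. 25 example with n=0), physical slot 7 receives logical index 4, directly after physical slot 3 (logical index 3).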
The slots belonging to a specific resource pool may have logically contiguous slot indexes, but may not be physically contiguous slots. FIG. 25 is a diagram illustrating physical slot indexes and logical slot indexes of slots included in a resource pool configured in accordance with the resource pool configuration in physical slots, according to an embodiment. As illustrated in FIG. 25, when physical slots n+4, n+5, and n+6 are not included in the resource pool, slot n+7 may be included in the resource pool after slot n+3, and logical slot indexes may be contiguously given to slot k+3 and slot k+4. As in the method provided in the previous embodiments, the PSFCH resource may be periodically configured every N slots in the resource pool (i.e., the PSFCH resource is periodically configured every N slots using the logical slot indexes); in contrast, the PSFCH resource may instead be periodically configured every N slots using the physical slot indexes. According to the latter method, the PSFCH resource may not be periodically configured in terms of the logical slot indexes. The number of HARQ-ACK feedback bits that should be transmitted by the terminal may be increased beyond N bits in accordance with N and Δ. In this case, information corresponding to a large number of bits needs to be transmitted on the PSFCH, and this may increase the decoding error probability of the PSFCH. Accordingly, the terminal may send only the last K bits of the feedback that should be sent, and may not transmit the remaining bits. As described above, K may be equal to N, the PSFCH resource configuration period, but is not limited thereto.

Fourth Embodiment

The fourth embodiment provides a method and an apparatus capable of solving or mitigating the problem that one terminal should perform both transmission and reception of the PSFCH in a state where the PSFCH resource does not exist in every slot and N (N is an integer that is larger than "1") is configured.
This problem may occur due to a half-duplex restriction in that the terminal is unable to simultaneously perform transmission and reception of signals.

FIG. 26 is a diagram illustrating terminal 1 (UE1) and terminal 2 (UE2) transmitting PSFCHs in the same slot for HARQ-ACK feedback transmission for the respectively transmitted PSSCHs when terminal 1 and terminal 2 perform signal transmission and reception by a connection through unicast or groupcast communication in a sidelink, according to an embodiment. Terminal 1 has transmitted a PSSCH 2602 to terminal 2 in slot 1 2600, and terminal 2 has transmitted a PSSCH 2612 to terminal 1 in slot 2 2610. In accordance with the feedback transmission timing, the feedback timing may be determined so that terminal 2 transmits, in slot 4 2620, the HARQ-ACK feedback for the PSSCH transmitted from terminal 1 to terminal 2 in slot 1, and terminal 1 transmits, in slot 4 2620, the HARQ-ACK feedback for the PSSCH transmitted from terminal 2 to terminal 1 in slot 2. Terminal 2 may transmit the PSFCH including the feedback information for the PSSCH transmitted in slot 1 to terminal 1, and terminal 1 may transmit the PSFCH including the feedback information for the PSSCH transmitted in slot 2 to terminal 2. Terminal 1 and terminal 2 should then each perform both the transmission and reception of a PSFCH 2622, and the transmission and reception of the PSFCH 2622 may be performed in the same symbol. If a terminal is unable to simultaneously perform transmission and reception of certain signals, the above-described operation may not be performed. A terminal being unable to simultaneously perform transmission and reception of a certain signal corresponds to a case in which the terminal has a half-duplex restriction; in contrast, a terminal being able to simultaneously perform transmission and reception of a certain signal corresponds to a case in which the terminal has a full-duplex function.
When a terminal that is considered as a half-duplex terminal is configured to perform either transmission or reception of the PSFCH without simultaneously performing both, the terminal selects and performs the transmission or reception of the sidelink, and it may be possible to combine and apply one or more of the following methods.

Method 1: Whether to perform transmission/reception of the PSFCH may be determined based on the SCI for scheduling the PSSCH. In this method, it is possible to determine which terminal receives the PSFCH and which terminal transmits the PSFCH based on a bit field value included in the SCI transmitted or received by the terminal. For example, terminal 1 may compare the QoS value (a value of priority, latency, delay, PQI, or 5QI) included in the SCI included in the PSCCH when terminal 1 transmits the PSSCH 2602 in slot 1 with the QoS value included in the PSCCH for scheduling the PSSCH 2612 received from terminal 2 in slot 2, and may determine to transmit and receive the HARQ-ACK feedback of the PSSCH corresponding to the higher QoS (or having a value corresponding to a higher priority). If the priority value included in the SCI included in the PSCCH transmitted together when terminal 1 transmits the PSSCH in slot 1 is "1", and the priority value included in the PSCCH for scheduling the PSSCH 2612 received from terminal 2 in slot 2 is "4" (it is assumed that priority 1 is higher than priority 4), terminal 1 receives the PSFCH in slot 4 2620, terminal 2 transmits the PSFCH in slot 4 2620, and the PSFCH may include the HARQ-ACK feedback information for the PSSCH transmitted by terminal 1 in slot 1 2600.
If the QoS values corresponding to the PSSCHs transmitted by the terminals are equal to each other, whether to transmit the PSFCH may be determined so that the PSFCH including the HARQ-ACK information for the earlier transmitted PSSCH is transmitted, based on the earlier transmitted PSSCH.

Method 2: Whether to perform transmission/reception of the PSFCH may be determined in accordance with the order of PSSCH transmission. In this method, it may be determined which terminal receives the PSFCH and which terminal transmits the PSFCH based on the transmission/reception slot index order of the PSSCHs transmitted or received by the terminal, that is, based on which PSSCH is transmitted first. This may be determined by the timing in which the PSFCH including the HARQ-ACK for the transmitted PSSCH is transmitted. If the PSFCH transmission/reception resource is located every N slots, the HARQ-ACK for the PSSCH transmitted in slot n is transmitted in slot ┌(n+Δ)/N┐×N. Further, the HARQ-ACK may be transmitted in slot n+N+Δ−1−mod(n+Δ−1, N). Here, Δ is a parameter defining the gap between the slot in which the terminal receives the PSSCH and the slot in which the HARQ-ACK is transmitted, and may be configured for each resource pool, may be a pre-configured value, or may be made known to the terminals through the PC5-RRC configuration. As described above, the Δ value may be differently defined in accordance with the SCS. That is, this means that the PSFCH including the HARQ-ACK feedback for the PSSCHs transmitted in slots n−N−Δ+1, n−N−Δ+2, . . . , n−Δ−1, n−Δ is transmitted in slot n. If terminal 1 has transmitted the PSSCH to terminal 2 in slot n−N−Δ+1, and terminal 2 has transmitted the PSSCH to terminal 1 in slot n−N−Δ+2, all the HARQ-ACK feedback information for the two PSSCHs should be transmitted in slot n.
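The association described above — which PSSCH slots map to the PSFCH transmitted in slot n — can be sketched as follows (an illustrative helper; the name is not from the disclosure):

```python
def pssch_slots_for_psfch(n: int, N: int, delta: int):
    """Slots whose PSSCH HARQ-ACK is carried on the PSFCH in slot n:
    slot n-N-delta+1 through slot n-delta, as described above."""
    return list(range(n - N - delta + 1, n - delta + 1))
```

For example, with N=2 and Δ=2, the PSFCH in slot 10 carries the HARQ-ACK for the PSSCHs of slots 7 and 8; the window always spans N slots.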
According to this method, the PSFCH including the HARQ-ACK for the first transmitted PSSCH, that is, for the PSSCH that terminal 1 has transmitted to terminal 2 in slot n−N−Δ+1, may be transmitted/received in slot n. As described above, terminal 1 may receive the PSFCH in slot n, terminal 2 may transmit the PSFCH in slot n, and the PSFCH may include the HARQ-ACK for the PSSCH that terminal 1 has transmitted to terminal 2 in slot n−N−Δ+1.

Fifth Embodiment

The fifth embodiment provides a method and an apparatus capable of solving or mitigating the problem that one terminal should transmit the PSFCH to several terminals in a state where the PSFCH resource does not exist in every slot and N (N is an integer that is larger than "1") is configured. This problem may occur due to a restriction in that the terminal is able to transmit only one physical channel at a time. A PSFCH transmission method has been described for the case when one terminal should transmit the PSFCH to several terminals, but it will also be possible to apply the contents of this embodiment even when one terminal transmits a plurality of PSFCHs according to a plurality of transmission types to one or two or more terminals. That is, this embodiment can also be applied when one terminal should transmit the PSFCH for unicast and the PSFCH for groupcast to other terminals in the same symbol in the same slot.

FIG. 27 is a diagram illustrating a time when terminal 1 (UE1) should transmit two PSFCHs in the same slot for HARQ-ACK feedback transmission for the PSSCHs that terminal 2 (UE2) and terminal 3 (UE3) have transmitted to terminal 1 (UE1) when UE1 performs signal transmission and reception with UE2 and UE3 by a connection through unicast or groupcast communication, according to an embodiment. As described above, terminal 2 and terminal 3 may be different from each other, or may be the same terminal.
The PSFCH transmission method has been described for the case when one terminal should transmit the PSFCH to several terminals, but if this embodiment is applied when one terminal transmits a plurality of PSFCHs according to a plurality of transmission types to one terminal, terminal 2 and terminal 3 may be the same terminal. UE2 has transmitted a PSSCH 2702 to UE1 in slot 1 2700, and UE3 has transmitted a PSSCH 2712 to UE1 in slot 2 2710. The PSFCH transmission/reception timing may be determined so that UE1 transmits the HARQ-ACK feedback information for the two PSSCHs to UE2 and UE3 in slot 4 2720 by encoding the HARQ-ACK feedback information on PSFCHs 2722. If UE1 can transmit both the PSFCH to UE2 (hereinafter, PSFCH 2) and the PSFCH to UE3 (hereinafter, PSFCH 3) in slot 4 2720, UE2 and UE3 can simultaneously receive the PSFCHs from UE1, and can each identify the HARQ-ACK feedback information for the PSSCH that UE2 or UE3 itself sent. However, if UE1 is unable to simultaneously transmit PSFCH 2 and PSFCH 3 in slot 4 2720, it is necessary for UE1 to determine which PSFCH UE1 should transmit. The reason why UE1 is unable to simultaneously transmit PSFCH 2 and PSFCH 3 may be that insufficient transmission power is available, and if the power is divided for each of the PSFCHs, the coverage of the respective PSFCHs may be reduced. It may be possible to combine and apply one or more of the following methods in which the terminal selects and transmits (or receives) one PSFCH to prevent the terminal from simultaneously transmitting or receiving two or more PSFCHs.

Method 1: The PSFCH that the terminal is to transmit/receive may be determined based on the SCI for scheduling the PSSCH. In this method, it is possible to determine, based on bit field values included in the received SCI, the PSSCH to which the PSFCH that the terminal is to transmit corresponds.
For example, as illustrated in FIG. 27, terminal 1 may compare the QoS value (a value of priority, latency, delay, PQI, or 5QI) included in the SCI included in the PSCCH transmitted together for the corresponding PSSCH scheduling when terminal 2 transmits the PSSCH 2702 to terminal 1 in slot 1 2700 with the QoS value included in the SCI included in the PSCCH transmitted together for the corresponding PSSCH scheduling when terminal 3 transmits the PSSCH 2712 to terminal 1 in slot 2 2710, and terminal 1 may determine to transmit and receive the HARQ-ACK feedback of the PSSCH corresponding to the higher QoS (or having a value corresponding to a higher priority). For example, if the priority value included in the SCI included in the PSCCH transmitted for scheduling of the PSSCH 2702 received from terminal 2 in slot 1 2700 is "1", and the priority value included in the SCI included in the PSCCH transmitted for scheduling of the PSSCH 2712 received from terminal 3 in slot 2 2710 is "4" (it is assumed that priority 1 is higher than priority 4), terminal 1 transmits the PSFCH to terminal 2 in slot 4 2720, and the PSFCH may include the HARQ-ACK feedback information for the PSSCH transmitted by terminal 2 in slot 1 2700. If the QoS values corresponding to the PSSCHs transmitted by the terminals are equal to each other, whether to transmit the PSFCH may be determined so that the PSFCH including the HARQ-ACK information for the earlier transmitted PSSCH is transmitted, based on the earlier transmitted PSSCH.

Method 2: The PSFCH that the terminal is to transmit may be determined in accordance with the order of PSSCH transmission. In this method, it may be determined for which PSSCH the PSFCH including the HARQ-ACK is to be transmitted, based on the transmission/reception slot index order of the PSSCHs transmitted or received by the terminal, that is, based on which PSSCH is transmitted first.
This may be determined by the timing in which the PSFCH including the HARQ-ACK for the transmitted PSSCH is transmitted. For example, if the PSFCH transmission/reception resource is located every N slots, the HARQ-ACK for the PSSCH transmitted in slot n is transmitted in slot ┌(n+Δ)/N┐×N. Further, the HARQ-ACK may be transmitted in slot n+N+Δ−1−mod(n+Δ−1, N). Here, Δ is a parameter defining the gap between the slot in which the terminal receives the PSSCH and the slot in which the HARQ-ACK is transmitted, and may be configured for each resource pool, may be a pre-configured value, or may be made known to the terminals through the PC5-RRC configuration. As described above, the Δ value may be differently defined in accordance with the SCS. That is, the PSFCH including the HARQ-ACK feedback for the PSSCHs transmitted in slots n−N−Δ+1, n−N−Δ+2, . . . , n−Δ−1, n−Δ is transmitted in slot n. If terminal 1 has received the PSSCH from terminal 2 in slot n−N−Δ+1, and terminal 1 has also received the PSSCH from terminal 3 in slot n−N−Δ+2, all the HARQ-ACK feedback information for the two PSSCHs should be transmitted in slot n. According to this method, the PSFCH including the HARQ-ACK for the first transmitted PSSCH, that is, for the PSSCH that terminal 1 has received from terminal 2 in slot n−N−Δ+1, may be transmitted/received in slot n. As described above, both terminal 2 and terminal 3 attempt to perform decoding of the PSFCH in slot n, and because terminal 1 has transmitted the PSFCH to terminal 2, only terminal 2 may be successful in receiving the PSFCH.

Method 3: The terminal can simultaneously transmit feedback information to two or more terminals using one PSFCH. This may be determined by the timing in which the PSFCH including the HARQ-ACK for the transmitted PSSCH is transmitted. If the PSFCH transmission/reception resource is located every N slots, the HARQ-ACK for the PSSCH transmitted in slot n is transmitted in slot ┌(n+Δ)/N┐×N. Further, the HARQ-ACK may be transmitted in slot n+N+Δ−1−mod(n+Δ−1, N).
Here, Δ is a parameter defining the gap between the slot in which the terminal receives the PSSCH and the slot in which the HARQ-ACK is transmitted, and may be configured for each resource pool, may be a pre-configured value, or may be made known to the terminals through the PC5-RRC configuration. As described above, the Δ value may be differently defined in accordance with the SCS. That is, this may mean that the PSFCH including the HARQ-ACK feedback for the PSSCHs transmitted in slots n−N−Δ+1, n−N−Δ+2, . . . , n−Δ−1, n−Δ is transmitted in slot n. If terminal 1 has received the PSSCH from terminal 2 in slot n−N−Δ+1, and terminal 1 has also received the PSSCH from terminal 3 in slot n−N−Δ+2, all the HARQ-ACK feedback information for the two PSSCHs should be transmitted in slot n. According to this method, when transmitting N pieces of HARQ-ACK information in slot n, terminal 1 may deploy the HARQ-ACK information for the PSSCH received from terminal 2 in slot n−N−Δ+1 at the first location of the N pieces of HARQ-ACK information, and may deploy the HARQ-ACK information for the PSSCH received from terminal 3 in slot n−N−Δ+2 at the second location of the N pieces of HARQ-ACK information to be transmitted. If terminal 1 is unable to receive the PSSCH in a certain slot in a predetermined section, the terminal may configure the HARQ-ACK feedback codebook by configuring the HARQ-ACK feedback corresponding to the slot to a predetermined value. For example, terminal 1 may determine the HARQ-ACK feedback value for the PSSCH that has not been received as a value that means NACK.

Method 4: Whether to transmit the PSFCH may be determined in accordance with the number of feedback bits that should be transmitted by the terminal. If it is required for terminal 1 to transmit the feedback to terminal 2 and terminal 3, the feedback is transmitted on the PSFCH to the terminal that requires the larger amount of feedback.
If the same amount of feedback is to be transmitted to terminal 2 and terminal 3, the terminal may optionally determine to which terminal the feedback is to be transmitted, or parts or a combination of the above-described methods 1, 2, and 3 may be applied.

Method 5: The terminal may transmit a plurality of PSFCHs including the feedback to be transmitted. However, when transmitting a plurality of PSFCHs, if the sum of the powers being used for the PSFCH transmissions is larger than the maximum power Pc,max that can be used by the terminal for the sidelink transmission, the terminal reduces the power of each PSFCH in proportion to the original PSFCH power ratio so that the sum of all the PSFCH powers becomes Pc,max, and transmits with the reduced power. For example, when the terminal should transmit PSFCH 1 and PSFCH 2, the respective calculated powers are P1 and P2, and P1+P2>Pc,max, the terminal may determine the power of PSFCH 1 as Pc,max·P1/(P1+P2), and may determine the power of PSFCH 2 as Pc,max·P2/(P1+P2). The sum of the powers of PSFCH 1 and PSFCH 2 then becomes Pc,max.

Method 6: Whether to transmit the PSFCH may be determined in accordance with the number of pieces of scheduling control information received by the terminal. That is, whether to transmit the PSFCH is determined in accordance with the number of PSSCHs scheduled to be received by the terminal. If it is required for terminal 1 to transmit the feedback to terminal 2 and terminal 3, the feedback is transmitted on the PSFCH to the terminal from which the larger number of PSSCHs is scheduled to be received. If the same number of PSSCHs are scheduled from terminal 2 and terminal 3, the terminal may optionally determine to which terminal the feedback is to be transmitted, or parts or a combination of the above-described methods 1, 2, 3, 4, and 5 may be applied. As described above, for convenience in explanation, the first to fifth embodiments have been described individually.
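The proportional power reduction of Method 5 can be sketched as follows (an illustrative helper operating on powers in linear units; the name is not from the disclosure):

```python
def scale_psfch_powers(powers, p_cmax):
    """Scale per-PSFCH transmission powers so that their sum does not
    exceed p_cmax, preserving the original power ratios (Method 5).
    powers: list of calculated powers in linear units (e.g., mW)."""
    total = sum(powers)
    if total <= p_cmax:
        return list(powers)  # no reduction needed
    # Each PSFCH i gets p_cmax * P_i / (P_1 + ... + P_k).
    return [p_cmax * p / total for p in powers]
```

With P1=150 and P2=50 (linear units) and Pc,max=100, the scaled powers are 75 and 25, summing exactly to Pc,max.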
However, the respective embodiments include related operations, and thus, it is possible to combine at least two embodiments. In order to perform the above-described embodiments of the disclosure, transmitters, receivers, and processors of a terminal and a base station are illustrated in FIGS. 28 and 29. In the first and second embodiments, in order to perform operations of configuring the HARQ-ACK feedback information, determining whether to transmit the HARQ-ACK feedback, and transmitting the feedback, the transmission/reception method between the base station and the terminal or between the transmitting end and the receiving end has been provided, and in order to perform the method, the receivers, processors, and transmitters of the base station and the terminal should operate according to the respective embodiments. Specifically, FIG. 28 is a block diagram illustrating the internal structure of a terminal, according to an embodiment. As illustrated in FIG. 28, the terminal includes a terminal receiver 2800, a terminal transmitter 2804, and a terminal processor 2802. The terminal receiver 2800 and the terminal transmitter 2804 may be commonly called a transceiver. The transceiver may transmit/receive a signal with a base station. The signal may include control information and data. For this, the transceiver may be composed of an RF transmitter for up-converting and amplifying the frequency of a transmitted signal, and an RF receiver for low-noise-amplifying and down-converting the frequency of a received signal. Further, the transceiver may receive a signal through a radio channel, and may output the received signal to the terminal processor 2802. The transceiver may also transmit the signal that is output from the terminal processor 2802 on the radio channel. The terminal processor 2802 may control a series of processes so that the terminal operates according to the above-described embodiments of the disclosure.
For example, the terminal receiver 2800 receives control information from the base station, and the terminal processor 2802 determines whether to transmit the HARQ-ACK feedback and the feedback information in accordance with the control information and preconfigured configuration information. Thereafter, the terminal transmitter 2804 may transfer the scheduled feedback to the base station. FIG. 29 is a diagram illustrating the internal structure of a base station, according to an embodiment. As illustrated in FIG. 29, a base station includes a base station receiver 2901, a base station transmitter 2905, and a base station processor 2903. The base station receiver 2901 and the base station transmitter 2905 may be commonly called a transceiver. The transceiver may transmit/receive a signal with a terminal. The signal may include control information and data. For this, the transceiver may be composed of an RF transmitter for up-converting and amplifying the frequency of a transmitted signal, and an RF receiver for low-noise-amplifying and down-converting the frequency of a received signal. Further, the transceiver may receive a signal through a radio channel, and may output the received signal to the base station processor 2903. The transceiver may also transmit the signal that is output from the base station processor 2903 through the radio channel. The base station processor 2903 may control a series of processes so that the base station operates according to the above-described embodiments of the disclosure. For example, the base station processor 2903 may control to configure control information in accordance with HARQ-ACK feedback information of the terminal and to receive the feedback in accordance with the control information. Thereafter, the base station transmitter 2905 transmits related scheduling control information, and the base station receiver 2901 receives the feedback information together with the scheduling information.
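The proportional PSFCH power reduction described in Method 5 above can be sketched in a few lines of Python. This is a minimal illustration of the stated rule (scale each power by Pc,max/(P1+...+Pn) when the sum exceeds Pc,max); the function name and the list-based interface are assumptions for illustration, not part of the embodiments:

```python
def scale_psfch_powers(powers, p_cmax):
    """Scale PSFCH transmit powers so that their sum does not exceed p_cmax.

    Per Method 5: if the sum of the calculated PSFCH powers exceeds the
    maximum sidelink power Pc,max, each power is reduced in proportion to
    its original share, so the scaled powers sum exactly to Pc,max.
    """
    total = sum(powers)
    if total <= p_cmax:
        return list(powers)  # within budget, no scaling needed
    return [p_cmax * p / total for p in powers]
```

For example, with calculated powers P1 = P2 = 0.6 and Pc,max = 1.0, each PSFCH is scaled to 1.0 * 0.6 / 1.2 = 0.5, matching the Pc,max*Pi/(P1+P2) rule in the text.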
Embodiments of the disclosure described herein are merely for easy explanation of the technical contents of the disclosure and proposal of specific examples to help understanding of the disclosure, but are not intended to limit the scope of the disclosure. That is, it will be apparent to those of ordinary skill in the art to which the disclosure pertains that other modified examples that are based on the technical idea of the disclosure can be embodied. Further, according to circumstances, the respective embodiments may be operated in combination. Further, other modified examples based on the technical idea of the above-described embodiments may be embodied in an LTE system and 5G system. While the disclosure has been shown and described with reference to certain embodiments thereof, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims.
11943169
DESCRIPTION OF EMBODIMENTS Embodiments of the present invention will be described below. FIG.1 is a conceptual diagram of a radio communication system according to the present embodiment. In FIG.1, the radio communication system includes terminal apparatuses 1A to 1C and a base station apparatus 3. Each of the terminal apparatuses 1A to 1C is hereinafter also referred to as a terminal apparatus 1. The terminal apparatus 1 is also referred to as a user terminal, a mobile station apparatus, a communication terminal, a mobile device, a terminal, User Equipment (UE), or a Mobile Station (MS). The base station apparatus 3 is also referred to as a radio base station apparatus, a base station, a radio base station, a fixed station, a NodeB (NB), an evolved NodeB (eNB), a Base Transceiver Station (BTS), a Base Station (BS), an NR NodeB (NR NB), an NNB, a Transmission and Reception Point (TRP), or a gNB. In FIG.1, in a radio communication between the terminal apparatus 1 and the base station apparatus 3, Orthogonal Frequency Division Multiplexing (OFDM) including a Cyclic Prefix (CP), Single-Carrier Frequency Division Multiplexing (SC-FDM), Discrete Fourier Transform Spread OFDM (DFT-S-OFDM), or Multi-Carrier Code Division Multiplexing (MC-CDM) may be used. In FIG.1, in the radio communication between the terminal apparatus 1 and the base station apparatus 3, Universal-Filtered Multi-Carrier (UFMC), Filtered OFDM (F-OFDM), Windowed OFDM, or Filter-Bank Multi-Carrier (FBMC) may be used. Note that the present embodiment will be described by using OFDM symbols with the assumption that the transmission scheme is OFDM, but use of any of the other transmission schemes described above is also included in the present invention. In FIG.1, in the radio communication between the terminal apparatus 1 and the base station apparatus 3, the CP may not be used, or the above-described transmission scheme with zero padding may be used instead of the CP. The CP or zero padding may be added both forward and backward.
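The cyclic prefix and zero-padding guard intervals mentioned above can be illustrated with a minimal sketch. The function names and the list-of-samples representation are assumptions for illustration; a real transmitter would operate on complex-valued IFFT output:

```python
def add_cyclic_prefix(symbol, cp_len):
    """Prepend the last cp_len samples of an OFDM symbol as a cyclic prefix."""
    return symbol[-cp_len:] + symbol

def add_zero_padding(symbol, pad_len):
    """Alternative guard interval: append pad_len zero samples instead of a CP."""
    return symbol + [0.0] * pad_len
```

With an 8-sample symbol and cp_len = 2, the CP copies the final two samples to the front, so the guarded symbol is cyclic over the FFT window; zero padding simply extends the symbol with silence instead.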
In FIG.1, the following physical channels are used for the radio communication between the terminal apparatus 1 and the base station apparatus 3.
Physical Broadcast CHannel (PBCH)
Physical Downlink Control CHannel (PDCCH)
Physical Downlink Shared CHannel (PDSCH)
Physical Uplink Control CHannel (PUCCH)
Physical Uplink Shared CHannel (PUSCH)
Physical Random Access CHannel (PRACH)
The PBCH is used to broadcast an essential information block (Master Information Block (MIB), Essential Information Block (EIB), and Broadcast Channel (BCH)) which includes essential information needed by the terminal apparatus 1. The PBCH may be used to broadcast a time index within a period of a block of a synchronization signal (also referred to as an SS/PBCH block). Here, the time index is information for indicating the index of the synchronization signal and the PBCH within the cell. For example, in a case that the SS/PBCH block is transmitted by using three transmit beams (transmission filter configurations, Quasi-CoLocation (QCL) for reception spatial parameters), the order of time within a predetermined period or within a configured period may be indicated. The terminal apparatus may recognize a difference in the time index as a difference in the transmit beam. The PDCCH is used to transmit (or carry) Downlink Control Information (DCI) in downlink radio communication (radio communication from the base station apparatus 3 to the terminal apparatus 1). Here, one or more pieces of DCI (which may be referred to as DCI formats) are defined for transmission of the downlink control information.
In other words, a field for the downlink control information is defined as DCI and is mapped to information bits. For example, the following DCI formats may be defined.
DCI format 0_0
DCI format 0_1
DCI format 1_0
DCI format 1_1
DCI format 2_0
DCI format 2_1
DCI format 2_2
DCI format 2_3
DCI format 0_0 may include information for indicating scheduling information of the PUSCH (frequency domain resource allocation and time domain resource allocation). DCI format 0_1 may include information for indicating scheduling information of the PUSCH (frequency domain resource allocation and time domain resource allocation), information for indicating a BandWidth Part (BWP), a Channel State Information (CSI) request, a Sounding Reference Signal (SRS) request, and information related to an antenna port. DCI format 1_0 may include information for indicating scheduling information of the PDSCH (frequency domain resource allocation and time domain resource allocation). DCI format 1_1 may include information for indicating scheduling information of the PDSCH (frequency domain resource allocation and time domain resource allocation), information for indicating a BandWidth Part (BWP), a Transmission Configuration Indication (TCI), and information related to an antenna port. DCI format 2_0 is used to notify the slot format of one or multiple slots. The slot format is defined as one in which each OFDM symbol in the slot is classified as downlink, flexible, or uplink. For example, in a case that the slot format is 28, DDDDDDDDDDDDXU is applied to the 14 OFDM symbols in the slot in which slot format 28 is indicated. Here, D is a downlink symbol, X is a flexible symbol, and U is an uplink symbol. Note that the slot will be described below. DCI format 2_1 is used to notify the terminal apparatus 1 of physical resource blocks and OFDM symbols which may be assumed to be without transmission.
Note that this information may be referred to as a preemption indication (intermittent transmission indication). DCI format 2_2 is used for transmission of Transmit Power Control (TPC) commands for the PUCCH and the PUSCH. DCI format 2_3 is used to transmit a group of TPC commands for Sounding Reference Signal (SRS) transmission by one or multiple terminal apparatuses 1. An SRS request may be transmitted together with the TPC commands. An SRS request and TPC commands may be defined in DCI format 2_3 for uplink with no PUSCH and PUCCH or for uplink in which the transmit power control of the SRS is not associated with the transmit power control of the PUSCH. The DCI for the downlink is also referred to as a downlink grant or downlink assignment. Here, the DCI for the uplink is also referred to as an uplink grant or uplink assignment. The PUCCH is used to transmit Uplink Control Information (UCI) in uplink radio communication (radio communication from the terminal apparatus 1 to the base station apparatus 3). Here, the uplink control information may include Channel State Information (CSI) used to indicate a downlink channel state. The uplink control information may include a Scheduling Request (SR) used to request a UL-SCH resource. The uplink control information may include a Hybrid Automatic Repeat request ACKnowledgement (HARQ-ACK). The HARQ-ACK may indicate a HARQ-ACK for downlink data (Transport block, Medium Access Control Protocol Data Unit (MAC PDU), or Downlink-Shared CHannel (DL-SCH)). The PDSCH is used to transmit downlink data (Downlink Shared CHannel (DL-SCH)) from the Medium Access Control (MAC) layer. In a case of the downlink, the PDSCH is used to transmit System Information (SI), a Random Access Response (RAR), and the like. The PUSCH may be used to transmit the uplink data (Uplink Shared CHannel (UL-SCH)) from the MAC layer or the HARQ-ACK and/or CSI with the uplink data. The PUSCH may be used to transmit the CSI only or the HARQ-ACK and CSI only.
In other words, the PUSCH may be used to transmit the UCI only. Here, the base station apparatus 3 and the terminal apparatus 1 exchange (transmit and/or receive) signals with each other in higher layers. For example, the base station apparatus 3 and the terminal apparatus 1 may transmit and/or receive Radio Resource Control (RRC) signaling (also referred to as an RRC message or RRC information) in an RRC layer. The base station apparatus 3 and the terminal apparatus 1 may transmit and/or receive a Medium Access Control (MAC) control element in a Medium Access Control (MAC) layer. Here, the RRC signaling and/or the MAC control element is also referred to as higher layer signaling. Since the higher layer here refers to a higher layer viewed from the physical layer, the higher layer may include one or multiple of a MAC layer, an RRC layer, an RLC layer, a PDCP layer, a Non Access Stratum (NAS) layer, or the like. For example, in processing of the MAC layer, the higher layer may include one or multiple of an RRC layer, an RLC layer, a PDCP layer, a NAS layer, or the like. The PDSCH and the PUSCH may be used to transmit the RRC signaling and the MAC control element. Here, in the PDSCH, the RRC signaling transmitted from the base station apparatus 3 may be signaling common to multiple terminal apparatuses 1 in a cell. The RRC signaling transmitted from the base station apparatus 3 may be signaling dedicated to a certain terminal apparatus 1 (also referred to as dedicated signaling). In other words, terminal apparatus-specific (UE-specific) information may be transmitted through signaling dedicated to the certain terminal apparatus 1. The PUSCH may be used to transmit UE Capabilities in the uplink. In FIG.1, the following downlink physical signals are used for downlink radio communication.
Here, the downlink physical signals are not used to transmit information output from the higher layers but are used by the physical layer.
Synchronization Signal (SS)
Reference Signal (RS)
The synchronization signal may include a Primary Synchronization Signal (PSS) and a Secondary Synchronization Signal (SSS). A cell ID may be detected by using the PSS and SSS. The synchronization signal is used for the terminal apparatus 1 to establish synchronization in a frequency domain and a time domain in the downlink. Here, the synchronization signal may be used for the terminal apparatus 1 to select precoding or a beam in precoding or beamforming performed by the base station apparatus 3. Note that the beam may be referred to as a transmission or reception filter configuration. A reference signal is used for the terminal apparatus 1 to perform channel compensation on a physical channel. Here, the reference signal is used for the terminal apparatus 1 to calculate the downlink CSI. The reference signal may be used for a numerology such as a radio parameter or subcarrier spacing, or used for fine synchronization that allows FFT window synchronization to be achieved. According to the present embodiment, at least one of the following downlink reference signals is used.
Demodulation Reference Signal (DMRS)
Channel State Information Reference Signal (CSI-RS)
Phase Tracking Reference Signal (PTRS)
Tracking Reference Signal (TRS)
The DMRS is used to demodulate a modulated signal. Note that two types of reference signals may be defined as the DMRS: a reference signal for demodulating the PBCH and a reference signal for demodulating the PDSCH, or both reference signals may be referred to as the DMRS. The CSI-RS is used for measurement of Channel State Information (CSI) and beam management. The PTRS is used to track the phase in the time axis to compensate for frequency offset due to phase noise. The TRS is used to compensate for Doppler shift during fast travel.
Note that the TRS may be used as one configuration of the CSI-RS. For example, a radio resource may be configured with one port of a CSI-RS as a TRS. According to the present embodiment, any one or multiple of the following uplink reference signals are used.
Demodulation Reference Signal (DMRS)
Phase Tracking Reference Signal (PTRS)
Sounding Reference Signal (SRS)
The DMRS is used to demodulate a modulated signal. Note that two types of reference signals may be defined as the DMRS: a reference signal for demodulating the PUCCH and a reference signal for demodulating the PUSCH, or both reference signals may be referred to as the DMRS. The SRS is used for measurement of uplink channel state information (CSI), channel sounding, and beam management. The PTRS is used to track the phase in the time axis to compensate for frequency offset due to phase noise. The downlink physical channels and/or the downlink physical signals are collectively referred to as a downlink signal. The uplink physical channels and/or the uplink physical signals are collectively referred to as an uplink signal. The downlink physical channels and/or the uplink physical channels are collectively referred to as a physical channel. The downlink physical signals and/or the uplink physical signals are collectively referred to as a physical signal. The BCH, the UL-SCH, and the DL-SCH are transport channels. A channel used in the Medium Access Control (MAC) layer is referred to as a transport channel. A unit of the transport channel used in the MAC layer is also referred to as a Transport Block (TB) and/or a MAC Protocol Data Unit (PDU). A Hybrid Automatic Repeat reQuest (HARQ) is controlled for each transport block in the MAC layer. The transport block is a unit of data that the MAC layer delivers to the physical layer. In the physical layer, the transport block is mapped to a codeword, and coding processing is performed for each codeword.
The reference signal may also be used for Radio Resource Measurement (RRM). The reference signal may also be used for beam management. Beam management may be a procedure of the base station apparatus 3 and/or the terminal apparatus 1 for matching directivity of an analog and/or digital beam in a transmission apparatus (the base station apparatus 3 in the downlink and the terminal apparatus 1 in the uplink) with directivity of an analog and/or digital beam in a reception apparatus (the terminal apparatus 1 in the downlink and the base station apparatus 3 in the uplink) to acquire a beam gain. Note that the following procedures may be included as procedures for constituting, configuring, or establishing a beam pair link.
Beam selection
Beam refinement
Beam recovery
For example, the beam selection may be a procedure for selecting a beam in communication between the base station apparatus 3 and the terminal apparatus 1. The beam refinement may be a procedure for selecting a beam having a higher gain or changing a beam to an optimum beam between the base station apparatus 3 and the terminal apparatus 1 according to the movement of the terminal apparatus 1. The beam recovery may be a procedure for re-selecting the beam in a case that the quality of a communication link is degraded due to blockage caused by a blocking object, a passing human being, or the like in communication between the base station apparatus 3 and the terminal apparatus 1. The beam management may include the beam selection or the beam refinement. The beam recovery may include the following procedures.
Detection of beam failure
Discovery of a new beam
Transmission of a beam recovery request
Monitoring of a response to the beam recovery request
For example, the Reference Signal Received Power (RSRP) of the CSI-RS or of the SSS included in the SS/PBCH block may be used, or the CSI may be used, in selecting the transmit beam of the base station apparatus 3 in the terminal apparatus 1.
The CSI-RS Resource Index (CRI) may be used, or an index indicated in the sequence of the demodulation reference signal (DMRS) used for demodulation of the PBCH included in the SS/PBCH block may be used, as a report to the base station apparatus 3. The base station apparatus 3 indicates the CRI or the time index of the SS/PBCH in indicating the beam to the terminal apparatus 1, and the terminal apparatus 1 performs reception based on the indicated CRI or the time index of the SS/PBCH. At this time, the terminal apparatus 1 may configure a spatial filter, based on the indicated CRI or the time index of the SS/PBCH, for reception. The terminal apparatus 1 may receive by using the assumption of Quasi-Co-Location (QCL). One signal (antenna port, synchronization signal, reference signal, or the like) being in QCL, or being assumed to be in QCL, with another signal (antenna port, synchronization signal, reference signal, or the like) can be interpreted as the one signal being associated with the other signal. In a case that a Long Term Property of a channel on which one symbol in one antenna port is carried may be estimated from a channel on which one symbol in the other antenna port is carried, the two antenna ports are said to be in QCL. The Long Term Property of the channel includes at least one of a delay spread, a Doppler spread, a Doppler shift, an average gain, or an average delay. For example, in a case that an antenna port 1 and an antenna port 2 are in QCL with respect to the average delay, this means that a reception timing for the antenna port 2 may be estimated from a reception timing for the antenna port 1. The QCL may also be expanded to beam management. For this purpose, spatially expanded QCL may be newly defined.
For example, the Long Term Property of a channel in the spatial QCL assumption may be an Angle of Arrival (AoA, a Zenith angle of Arrival (ZoA), or the like) and/or an Angle Spread (for example, an Angle Spread of Arrival (ASA) or a Zenith angle Spread of Arrival (ZSA)), a transmission angle (AoD, ZoD, or the like) or an Angle Spread of the transmission angle (for example, an Angle Spread of Departure (ASD) or a Zenith angle Spread of Departure (ZSD)), Spatial Correlation, or a reception spatial parameter, in a radio link or channel. For example, in a case that an antenna port 1 and an antenna port 2 are considered to be in QCL with respect to the reception spatial parameters, it means that the receive beam (reception spatial filter) for receiving a signal from the antenna port 2 can be inferred from the receive beam for receiving a signal from the antenna port 1. Combinations of long term properties which may be considered to be in QCL may be defined as QCL types. For example, the following types may be defined.
Type A: Doppler shift, Doppler spread, average delay, delay spread
Type B: Doppler shift, Doppler spread
Type C: Average delay, Doppler shift
Type D: Reception spatial parameters
The above-described QCL types may configure and/or indicate the QCL assumption of one or two reference signals and the PDCCH or the PDSCH DMRS in the RRC and/or MAC layer and/or DCI as a Transmission Configuration Indication (TCI). For example, in a case that index #2 of the PBCH/SS block and the QCL type A + the QCL type B are configured and/or indicated as one state of the TCI at a time when the terminal apparatus 1 receives the PDCCH, the terminal apparatus 1 may consider, at a time when receiving the PDCCH DMRS, the Doppler shift, the Doppler spread, the average delay, the delay spread, and the reception spatial parameters in the reception of the PBCH/SS block index #2 as the long term properties of the channel, to receive the DMRS of the PDCCH and to perform synchronization or channel estimation.
At this time, the reference signal indicated by the TCI (the PBCH/SS block in the example described above) may be referred to as a source reference signal, and a reference signal (the PDCCH DMRS in the example described above) whose reception is affected by the long term properties inferred from the long term properties of the channel at a time when the source reference signal is received may be referred to as a target reference signal. The TCI may be configured with multiple TCI states, and a combination of a source reference signal and a QCL type for each state, in the RRC, and indicated to the terminal apparatus 1 by the MAC layer or the DCI. According to this method, operation of the base station apparatus 3 and the terminal apparatus 1 equivalent to beam management may be defined as beam management and beam indication/report, based on the spatial QCL assumption and radio resources (time and/or frequency). The subframe will now be described. The subframe in the present embodiment may also be referred to as a resource unit, a radio frame, a time period, or a time interval. FIG.2 is a diagram illustrating a schematic configuration of a downlink slot according to a first embodiment of the present invention. Each of the radio frames is 10 ms in length. Each of the radio frames includes 10 subframes and W slots. One slot includes X OFDM symbols. In other words, the length of one subframe is 1 ms. For each of the slots, a time length is defined based on the subcarrier spacing. For example, in a case that the subcarrier spacing of an OFDM symbol is 15 kHz and Normal Cyclic Prefixes (NCPs) are used, X=7 or X=14, and X=7 and X=14 correspond to 0.5 ms and 1 ms, respectively. In a case that the subcarrier spacing is 60 kHz, X=7 or X=14, and X=7 and X=14 correspond to 0.125 ms and 0.25 ms, respectively. For example, in a case of X=14, W=10 is used in a case that the subcarrier spacing is 15 kHz, and W=40 is used in a case that the subcarrier spacing is 60 kHz. FIG.2 illustrates a case of X=7 as an example.
Note that a case of X=14 can be similarly configured by expanding the case of X=7. The uplink slot is defined similarly, and the downlink slot and the uplink slot may be defined separately. The bandwidth of the cell of FIG.2 may also be defined as a BandWidth Part (BWP). The slot may also be defined as a Transmission Time Interval (TTI). A slot may not be defined as a TTI. A TTI may be a transmission period of a transport block. The signal or the physical channel transmitted in each of the slots may be represented by a resource grid. The resource grid is defined by multiple subcarriers and multiple OFDM symbols. The number of subcarriers constituting one slot depends on each of the downlink and uplink bandwidths of a cell. Each element in the resource grid is referred to as a resource element. The resource element may be identified by using a subcarrier number and an OFDM symbol number. A resource grid is used to represent mapping of a certain physical downlink channel (such as the PDSCH) or a certain physical uplink channel (such as the PUSCH) to resource elements. For example, in a case that the subcarrier spacing is 15 kHz, and in a case that the number X of OFDM symbols included in the subframe is 14 and NCPs are used, one physical resource block is defined by 14 continuous OFDM symbols in the time domain and by 12*Nmax,μ continuous subcarriers in the frequency domain. Nmax,μ is the maximum number of resource blocks determined by the subcarrier spacing configuration μ described below. In other words, the resource grid includes (14*12*Nmax,μ) resource elements. In a case of Extended CPs (ECPs), since the ECP is supported only with the subcarrier spacing of 60 kHz, one physical resource block is defined by, for example, 12 (the number of OFDM symbols included in one slot)*4 (the number of slots included in one subframe)=48 continuous OFDM symbols in the time domain and by 12*Nmax,μ continuous subcarriers in the frequency domain.
In other words, the resource grid includes (48*12*Nmax,μ) resource elements. As resource blocks, a reference resource block, a common resource block, a physical resource block, and a virtual resource block are defined. One resource block is defined as 12 subcarriers that are continuous in the frequency domain. Reference resource blocks are common in all subcarriers, and, for example, may be configured with resource blocks at a subcarrier spacing of 15 kHz, and may be numbered in ascending order. Subcarrier index 0 at reference resource block index 0 may be referred to as a reference point A (which may simply be referred to as a "reference point"). The common resource blocks are resource blocks numbered from 0 in ascending order in each subcarrier spacing configuration μ from the reference point A. The resource grid described above is defined by these common resource blocks. Physical resource blocks are resource blocks numbered in ascending order from 0 included in a bandwidth part (BWP) described below. A certain physical uplink channel is first mapped to a virtual resource block. Thereafter, the virtual resource block is mapped to a physical resource block. (from TS38.211) Next, the subcarrier spacing configuration μ will be described. As described above, multiple OFDM numerologies are supported in NR. In the BWP, the subcarrier spacing configuration μ (μ=0, 1, . . . , 5) and the cyclic prefix length are given by a higher layer for the downlink BWP, and are given by a higher layer for the uplink BWP. Here, in a case that μ is given, the subcarrier spacing Δf is given by Δf = 2^μ * 15 (kHz). In the subcarrier spacing configuration μ, the slots are counted in ascending order from 0 to N^{subframe,μ}_{slot}−1 in the subframe, and counted in ascending order from 0 to N^{frame,μ}_{slot}−1 in the frame. N^{slot}_{symb} continuous OFDM symbols are in the slot, based on the slot configuration and the cyclic prefix. N^{slot}_{symb} is 14. The start of slot n^{μ}_{s} in the subframe is aligned in time with the start of the (n^{μ}_{s} N^{slot}_{symb})-th OFDM symbol in the same subframe. The subframe, the slot, and a mini-slot will now be described. FIG.3 is a diagram illustrating the relationship between the subframe, the slot, and the mini-slot in the time domain. As illustrated in FIG.3, three types of time units are defined. The subframe is 1 ms regardless of the subcarrier spacing. The number of OFDM symbols included in the slot is 7 or 14, and the slot length depends on the subcarrier spacing. Here, in a case that the subcarrier spacing is 15 kHz, 14 OFDM symbols are included in one subframe. A downlink slot may be referred to as PDSCH mapping type A. An uplink slot may be referred to as PUSCH mapping type A. The mini-slot (which may be referred to as a sub-slot) is a time unit including OFDM symbols that are fewer in number than the OFDM symbols included in the slot. FIG.3 illustrates, by way of example, a case that the mini-slot includes 2 OFDM symbols. The OFDM symbols in the mini-slot may match the timing of the OFDM symbols constituting the slot. Note that the smallest unit of scheduling may be a slot or a mini-slot. Allocating a mini-slot may be referred to as non-slot based scheduling. Scheduling the mini-slots may be expressed as scheduling resources in which the relative time positions of the starting positions of the reference signal and the data are fixed. A downlink mini-slot may be referred to as PDSCH mapping type B. An uplink mini-slot may be referred to as PUSCH mapping type B.
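The numerology relations above (Δf = 2^μ * 15 kHz, 14 OFDM symbols per slot with NCP, and a 1 ms subframe that holds more, shorter slots as the spacing grows) can be summarized in a short sketch; the function name and the tuple return are assumptions for illustration:

```python
def numerology(mu):
    """Return (subcarrier spacing in kHz, slot length in ms, slots per subframe)
    for subcarrier spacing configuration mu, assuming 14 OFDM symbols per slot (NCP)."""
    scs_khz = 15 * (2 ** mu)         # Δf = 2^μ * 15 kHz
    slots_per_subframe = 2 ** mu     # the 1 ms subframe holds 2^μ slots
    slot_ms = 1.0 / slots_per_subframe
    return scs_khz, slot_ms, slots_per_subframe
```

This reproduces the figures in the text: μ=0 gives 15 kHz spacing and a 1 ms slot, while μ=2 gives 60 kHz spacing and a 0.25 ms slot, i.e. 4 slots per subframe and W=40 slots per 10 ms frame.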
FIG.4 is a diagram illustrating an example of a slot format. Here, a case that the slot length is 1 ms at a subcarrier spacing of 15 kHz is illustrated as an example. In FIG.4, D represents the downlink, and U represents the uplink. As illustrated in FIG.4, during a certain time interval (for example, the minimum time interval to be allocated to one UE in the system), the subframe may include at least one of the following:
downlink symbol,
flexible symbol, or
uplink symbol.
Note that these ratios may be predetermined as slot formats. These ratios may also be defined by the number of downlink OFDM symbols included in the slot or the start position and the end position within the slot. These ratios may also be defined by the number of uplink OFDM symbols or DFT-S-OFDM symbols included in the slot or the start position and the end position within the slot. Note that scheduling the slots may be expressed as scheduling resources in which the relative time positions of the reference signal and the slot boundary are fixed. The terminal apparatus 1 may receive a downlink signal or a downlink channel in a downlink symbol or a flexible symbol. The terminal apparatus 1 may transmit an uplink signal or an uplink channel in an uplink symbol or a flexible symbol. FIG.4(a) is an example in which the entire subframe is used for downlink transmission during a certain time interval (which may be referred to as, for example, a minimum unit of a time resource that can be allocated to one UE, or a time unit; multiple minimum units of time resources may be collectively referred to as a time unit). In FIG.4(b), an uplink is scheduled via the PDCCH, for example by using the first time resource, and an uplink signal is transmitted after a gap for a processing delay of the PDCCH, a time for switching from the downlink to the uplink, and a flexible symbol including generation of a transmit signal.
In FIG. 4(c), the PDCCH and/or the downlink PDSCH are transmitted by using the first time resource, and the PUSCH or the PUCCH is transmitted after a gap for a processing delay, a time for switching from the downlink to the uplink, and generation of a transmit signal. Here, for example, the uplink signal may be used to transmit the HARQ-ACK and/or the CSI, namely, the UCI. In FIG. 4(d), the PDCCH and/or the PDSCH are transmitted by using the first time resource, and the uplink PUSCH and/or PUCCH are transmitted after a gap for a processing delay, a time for switching from the downlink to the uplink, and generation of a transmit signal. Here, for example, the uplink signal may be used to transmit the uplink data, namely, the UL-SCH. FIG. 4(e) is an example in which the entire subframe is used for uplink transmission (PUSCH or PUCCH). The above-described downlink part and uplink part may include multiple OFDM symbols as is the case with LTE. FIG. 5 is a diagram illustrating an example of beamforming. Multiple antenna elements are connected to one Transceiver unit (TXRU) 10. The phase is controlled by using a phase shifter 11 for each antenna element, and a transmission is performed from an antenna element 12, thus allowing a beam for a transmit signal to be directed in any direction. Typically, the TXRU may be defined as an antenna port, and only the antenna port may be defined for the terminal apparatus 1. Controlling the phase shifter 11 allows setting of directivity in any direction, and thus the base station apparatus 3 can communicate with the terminal apparatus 1 by using a high gain beam. Hereinafter, the bandwidth part (BWP) will be described. The BWP is also referred to as a carrier BWP. The BWP may be configured for each of the downlink and the uplink. The BWP is defined as a set of continuous physical resources selected from continuous subsets of common resource blocks. The terminal apparatus 1 may be configured with up to four downlink BWPs in which one downlink carrier BWP is activated at a certain time. 
The terminal apparatus 1 may be configured with up to four uplink BWPs in which one uplink carrier BWP is activated at a certain time. In the case of carrier aggregation, the BWP may be configured for each serving cell. At this time, configuring one BWP in a certain serving cell may be expressed as configuring no BWP, and configuring two or more BWPs may be expressed as configuring BWPs. In an activated serving cell, there is always one active (activated) BWP. BWP switching for a certain serving cell is used to activate an inactive (deactivated) BWP and deactivate an active (activated) BWP. BWP switching for a certain serving cell is controlled by the PDCCH indicating a downlink assignment or an uplink grant. Upon addition of the SpCell (PCell or PSCell) or activation of the SCell, one BWP is initially active without receiving the PDCCH indicating a downlink assignment or an uplink grant. The initially active BWP may be specified by an RRC message transmitted from the base station apparatus 3 to the terminal apparatus 1. The active BWP for a certain serving cell is specified by the RRC or the PDCCH transmitted from the base station apparatus 3 to the terminal apparatus 1. In unpaired spectrum (such as TDD bands), the DL BWP and the UL BWP are paired, and BWP switching is common to the UL and the DL. In the active BWP for each of the activated serving cells for which the BWP is configured, the MAC entity of the terminal apparatus 1 applies normal processing. The normal processing includes transmitting the UL-SCH, transmitting the RACH, monitoring the PDCCH, transmitting the PUCCH, and receiving the DL-SCH. In an inactive BWP for each of the activated serving cells for which the BWP is configured, the MAC entity of the terminal apparatus 1 does not transmit the UL-SCH, does not transmit the RACH, does not monitor the PDCCH, does not transmit the PUCCH, and does not receive the DL-SCH. 
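The BWP switching rule above (each activated serving cell has exactly one active BWP, and a switch activates the indicated BWP while deactivating the previously active one) can be sketched as a minimal state holder. All class and method names here are hypothetical, and the sketch ignores the RRC/PDCCH signaling that carries the switch indication.

```python
# Minimal sketch (hypothetical names) of the one-active-BWP-per-cell rule.

class ServingCellBwpState:
    def __init__(self, configured_bwps, initial_bwp):
        if initial_bwp not in configured_bwps:
            raise ValueError("initially active BWP must be configured")
        self.configured = set(configured_bwps)
        self.active = initial_bwp  # exactly one active BWP at any time

    def switch(self, target_bwp):
        """BWP switch: deactivate the old active BWP, activate the new one.

        Returns the previously active (now deactivated) BWP index.
        """
        if target_bwp not in self.configured:
            raise ValueError("BWP not configured for this serving cell")
        previous, self.active = self.active, target_bwp
        return previous
```

A cell configured with BWPs {0, 1, 2, 3} starts with, say, BWP 0 active; `switch(2)` then leaves only BWP 2 active, mirroring the activate/deactivate pair described above.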
In a case that a certain serving cell is deactivated, an active BWP may not be present (for example, the active BWP is deactivated). The terminal apparatus 1 may be configured with one primary cell and up to 15 secondary cells. The time and frequency resources for reporting the CSI used by the terminal apparatus 1 are controlled by the base station apparatus 3. The CSI includes a Channel Quality Indicator (CQI), a Precoding Matrix Indicator (PMI), a CSI-RS Resource Indicator (CRI), a Strongest Layer Indication (SLI), a Rank Indication (RI), and/or a Layer-1 Reference Signal Received Power (L1-RSRP). For the CQI, PMI, CRI, SLI, RI, and L1-RSRP, the terminal apparatus 1 is configured by a higher layer with a configuration related to N (N is equal to or greater than 1) CSI reports, a configuration related to resources of M (M is equal to or greater than 1) CSI reference signals (CSI-RSs), and a configuration related to one CSI measurement including L (L is equal to or greater than 1) links. The configuration related to the CSI measurement includes a list of configurations related to the CSI reports, a list of configurations related to the CSI resources, a list of configurations of the links, and a list of trigger states. Each will be described below. 
Each of the configurations related to the CSI reports is associated with one downlink BWP (the BWP identity of a higher layer), and each of the configurations related to the CSI reports includes the following parameters to be reported:
- One identity for identifying the configuration related to the CSI report
- Operation in the time domain (for example, periodic, semi-persistent, or aperiodic)
- CSI parameters to be reported (for example, CRI, RI, PMI, CQI, or the like)
- Configuration in the frequency domain (including information to configure the wideband CQI or the subband CQI, and information to configure the wideband PMI or the subband PMI)
- Configuration of restriction on CSI measurement (measurement restriction configuration, which may be configured for each of the channel measurement and the interference measurement)
- Codebook configuration (information of the CSI type (information for indicating type 1 or type 2) and codebook subset restriction)
- Maximum number of CQIs per report (which may be information for indicating either one codeword or two codewords)
- Assumption of the CQI table (a CQI table including up to 64 QAM, a CQI table including up to 256 QAM, URLLC, or the like)
Each of the configurations related to the CSI resources includes information related to S (S is equal to or greater than 1) CSI-RS resource sets, each CSI-RS resource set including multiple CSI-RS resources (a NZP CSI-RS for channel measurement or interference measurement, and a CSI-Interference Measurement (IM) resource for interference measurement) and a configuration related to resources of the SS/PBCH block used for L1-RSRP calculation. Here, the NZP CSI-RS resource is a CSI-RS in which the sequence is generated in accordance with a generation method defined in advance in the specification, and the CSI-RS is mapped to resource elements. 
Each of the configurations related to the CSI resources is placed in a BWP identified in a higher layer, and the configurations related to all the CSI resources linked to a configuration related to one CSI report are in the same BWP. Next, the channel measurement and the interference measurement described above will be described. The channel measurement is to measure the amount related to the quality of each layer or each codeword in a case that the downlink desired signal or channel or spatial multiplexing is assumed for the CSI measurement, and the interference measurement is to measure the amount of interference in each layer or codeword in a case that the downlink interference signal or channel or spatial multiplexing is assumed for the CSI measurement. Here, "layer" refers to the number of PDSCHs to be spatially multiplexed. Note that the configuration (ssb-Resources) related to the resources of the SS/PBCH block used for the L1-RSRP calculation may be included in each of the configurations related to the CSI resources. The operation in the time domain of the CSI-RS resources may be included in each of the configurations related to the CSI resources, or may be included in each of the configurations related to the CSI-RS resource sets. The configuration of each link includes an indication of the configuration related to the CSI report, an indication of the CSI resource configuration, and an indication of whether to perform the channel measurement or the interference measurement. The configuration of each link may include multiple trigger states for dynamically selecting a configuration related to the CSI report for one or multiple aperiodic CSI reports. Each trigger state is associated with a configuration related to one or multiple CSI reports, and the configuration related to each CSI report is linked to a configuration related to one or multiple periodic, semi-persistent, or aperiodic CSI reference signals. 
Here, the terminal apparatus may assume the following, depending on the number of configurations associated with the linked CSI resources:
- In a case that one configuration related to CSI resources is configured, the resource configuration is for channel measurement for L1-RSRP calculation.
- In a case that two configurations related to CSI resources are configured, the first configuration related to CSI resources is for channel measurement, and the second configuration related to CSI resources is for interference measurement on the CSI-IM or the NZP CSI-RS resource.
- In a case that three configurations related to CSI resources are configured, the first configuration related to CSI resources is for channel measurement, the second configuration related to CSI resources is for interference measurement on the CSI-IM resource, and the third configuration related to CSI resources is for interference measurement on the NZP CSI-RS resource.
For the CSI measurement, the terminal apparatus 1 may assume the following:
- Each NZP CSI-RS port configured for the interference measurement corresponds to a transmission layer of interference;
- All transmission layers of interference on the NZP CSI-RS port are considered for the associated Energy Per Resource Element (EPRE); and
- There are other interference signals on the NZP CSI-RS resource for the channel measurement, the CSI-RS resource for the interference measurement, or the CSI-IM resource for the interference measurement.
Here, the EPRE represents the energy of the NZP CSI-RS per resource element. Specifically, the base station apparatus 3 configures each of the ratio (Pc) of the PDSCH EPRE to the EPRE of the NZP CSI-RS, the ratio (Pc-PDCCH) of the PDCCH EPRE to the EPRE of the NZP CSI-RS, and the ratio (Pc_SS) of the EPRE of the SS/PBCH block to the EPRE of the NZP CSI-RS. In this way, the EPRE can be considered for the CSI measurement from the configured ratios to the CSI-RS EPRE. 
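The rule above, which assigns a role to each linked CSI resource configuration based on how many are configured, is a simple case analysis. The following sketch is purely illustrative (the function name and role strings are hypothetical labels, not signaling values).

```python
# Illustrative sketch (hypothetical names): role of each linked CSI resource
# configuration, per the 1-to-3 configuration rule described above.

def resource_config_roles(num_configs: int):
    """Return the role of each CSI resource configuration, in order."""
    if num_configs == 1:
        return ["channel"]  # channel measurement (L1-RSRP calculation)
    if num_configs == 2:
        return ["channel", "interference (CSI-IM or NZP CSI-RS)"]
    if num_configs == 3:
        return ["channel", "interference (CSI-IM)", "interference (NZP CSI-RS)"]
    raise ValueError("the text defines roles for 1 to 3 configurations")
```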
A configuration related to one or multiple CSI reports for the channel measurement and/or the interference measurement on one or multiple component carriers, and/or trigger states for one or multiple CSI-RS resource sets, are configured by a higher layer for a CSI-RS resource set in which the operation in the time domain of the CSI-RS resource set is aperiodic. For a trigger of an aperiodic CSI report, one set of CSI trigger states is configured with higher layer parameters, and the CSI trigger states are associated with any one candidate of the DL BWP. The terminal apparatus 1 does not expect that the CSI report for a downlink BWP that is not activated is triggered. Each trigger state is initiated by using a CSI request field included in the DCI (for example, DCI format 0_1). In this case, the terminal apparatus performs the following operations:
- In a case that the value of the CSI request field is 0, no CSI is requested.
- In a case that the number of CSI trigger states is greater than 2^N_TS − 1, the terminal apparatus 1 receives, from the MAC layer, a selection command used for mapping 2^N_TS − 1 trigger states to the code points of the CSI request field. Here, N_TS is the bit size of the CSI request field configured in a higher layer. N_TS may be configured with any value from {0, 1, 2, 3, 5, 6}.
- In a case that the number of CSI trigger states is smaller than or equal to 2^N_TS − 1, the CSI request field directly indicates the trigger state and the QCL assumption of the terminal apparatus 1.
- For the aperiodic CSI-RS resource associated with each CSI trigger state, the terminal apparatus 1 is indicated a source reference signal of QCL and the QCL configuration of the QCL type from the higher layer (for example, the TCI may be used).
Here, N_TS is the number of bits of the CSI request field of the DCI, and the number of trigger states (N_CSI) for the aperiodic CSI report configured by the RRC may be greater than, smaller than, or equal to 2^N_TS − 1. 
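The code-point handling above (value 0 means no CSI request; with at most 2^N_TS − 1 configured trigger states the field indexes them directly; otherwise a MAC-layer selection narrows them first) can be sketched as follows. The function name is hypothetical, and the MAC sub-selection branch is deliberately left unmodeled here.

```python
# Illustrative sketch (hypothetical names) of CSI request field handling.

def trigger_for_code_point(code_point: int, trigger_states, n_ts: int):
    """Map a CSI request field value to a trigger state.

    code_point 0 requests no CSI. With N_CSI <= 2**n_ts - 1 the non-zero
    code points index the configured trigger states directly; with more
    trigger states, a MAC-layer selection (not modeled) applies first.
    """
    if code_point == 0:
        return None  # no CSI requested
    if len(trigger_states) <= 2 ** n_ts - 1:
        return trigger_states[code_point - 1]
    raise NotImplementedError("MAC sub-selection applies when N_CSI > 2**n_ts - 1")
```

With N_TS = 2 and three trigger states (the FIG. 6 case), the code points "01", "10", and "11" map directly to trigger states #0, #1, and #2.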
In a case that a configuration related to a CSI resource linked to a configuration related to one CSI report has a configuration related to multiple aperiodic CSI-RS resource sets, and a portion of the aperiodic CSI-RS resource sets is associated with a trigger state, a bitmap is configured by a higher layer for selecting a CSI-IM resource or a NZP CSI-RS resource set per trigger state for each CSI-RS resource set. The configuration described above will now be described. Information related to the CSI measurement may be included in information (for example, ServingCellConfigDedicated) configured for the terminal apparatus 1 for each cell. In other words, the information related to the CSI measurement is configured for each cell, and the information related to the CSI measurement of each cell includes the following information:
- List of configurations related to N CSI reports
- List of configurations related to M CSI reference signals
- List of links between configurations related to L CSI reports and configurations related to CSI reference signals (csi-MeasIdToAddModList)
- Information related to triggers of CSI reports (the number of bits N_TS of the CSI request field and/or the trigger state corresponding to the value of the CSI request field)
The configuration related to each CSI report includes the index or the identity of the configuration related to the CSI report, information for configuring the operation in the time domain, and information for configuring which CSI to report. The configuration related to each CSI report may also include a BWP index for identifying one BWP. In a case that the information for configuring the operation in the time domain indicates periodic or semi-persistent, a period (number of slots) for reporting the CSI may be included. 
In a case that the information for configuring the operation in the time domain indicates aperiodic, information related to the offset, in number of slots, from the slot where the aperiodic CSI report is triggered to the transmission of the CSI report may be included. In a case that the information for configuring the operation in the time domain indicates aperiodic, the configuration related to each CSI report may include the index of the trigger state of the CSI report. The configuration related to each link may include the index or the identity of the configuration related to the CSI report to be linked, the index or the identity of the configuration related to the CSI reference signal, and the identity of the link. FIG. 6 illustrates an example of an RRC configuration related to the CSI measurement and a CSI request field in a certain serving cell c. Here, the description assumes that the number of trigger states configured by the RRC is N_CSI = 3 and that the number of bits of the CSI request field is N_TS = 2. Note that here, it is assumed that the number of BWPs configured for the serving cell is 2. As illustrated in FIG. 6, a list of configurations for CSI reports is configured for the information related to the CSI measurement of the serving cell c, and four configurations related to the CSI reports are configured in the list. Among them, the configurations of aperiodic CSI reports are the configurations #1 to #3 related to the CSI report. The configuration #1 related to the CSI report is associated with the trigger state #0, the configuration #2 related to the CSI report is associated with the trigger state #1, and the configuration #3 related to the CSI report is associated with the trigger state #2. In this case, because N_CSI = 2^N_TS − 1, the trigger states are mapped directly to the 2-bit CSI request field included in the DCI format. As illustrated in FIG. 6, the value "00" of the CSI request field indicates that no CSI report is transmitted. 
The trigger state #0 is associated with "01", the trigger state #1 is associated with "10", and the trigger state #2 is associated with "11". The terminal apparatus 1 reports the CSI in the PUSCH, based on the configuration related to the CSI report configured by the RRC and the configuration related to the CSI report associated with the value of the CSI request field included in the DCI. At this time, the terminal apparatus 1 measures the CSI, based on the CSI parameters to be reported included in the configuration related to the CSI report, from the CSI-RS resource set or the CSI-RS resource for measuring the CSI from the configuration related to the CSI resource associated with the configuration related to the CSI report. The configuration related to each CSI report is associated with the BWP in the serving cell. In FIG. 6, the trigger state #1 and the trigger state #2 are associated with the BWP index #0 and the BWP index #1, respectively. In other words, in a case that "10" is indicated as the value of the CSI request field, the terminal apparatus 1 reports the CSI in the BWP #1. In other words, the value (information) of the CSI request field indicates one of multiple trigger states, and each of the multiple trigger states is configured for each serving cell and is associated with a configuration related to one or multiple CSI reports and a configuration related to one or multiple reference signals for the CSI measurement. Note that the value of the CSI request field may be stated as information included in the CSI request field. Here, "active" is configured as the BWP index of the trigger state #0 rather than the actual index of a configured BWP. This means being associated with the activated BWP. For example, in a case that the BWP indicated by the BWP index #0 is activated in a certain slot for the terminal apparatus 1, the CSI request field "01" causes the terminal apparatus 1 to measure the CSI in the BWP corresponding to the activated BWP index #0 and report the CSI. 
On the other hand, in a case that the BWP corresponding to the BWP index #1 is activated in a certain slot, the CSI request field "01" causes the terminal apparatus 1 to measure the CSI in the BWP corresponding to the activated BWP index #1 and report the CSI. In other words, the CSI request field included in the DCI of the PDCCH includes a trigger state, each trigger state is associated with a configuration related to one or multiple CSI reports and a configuration related to one or multiple CSI resources, and one of the multiple trigger states is configured to be associated with the activated BWP of the serving cell c. FIG. 7 illustrates an example of a configuration and a CSI request field in a case that the number of trigger states configured by the RRC is N_CSI = 4 and the number of bits of the CSI request field is N_TS = 2 in a certain serving cell. In the example of FIG. 7, as in FIG. 6, each of the configurations related to the CSI report in which the time operation is aperiodic is associated with a BWP index and a trigger state. In the example of FIG. 7, because N_CSI > 2^N_TS − 1, the trigger states for the code points "01", "10", and "11" in the CSI request field are selected via the MAC layer. As illustrated in FIG. 7, one or multiple trigger states may be selected from the four trigger states configured by the RRC by using a bitmap for indicating each trigger state for each code point. In the example of FIG. 7, a 4-bit bitmap "1000" for selection of a trigger state for the code point "01" is indicated to the terminal apparatus 1 and mapped to correspond to the CSI request field "01". This means that the trigger state #0 is configured to the code point "01" of the CSI request field. In this manner, each code point is selected by a bitmap corresponding to each trigger state in the MAC layer. 
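The bitmap selection described above (for example, the 4-bit bitmap "1000" over four RRC-configured trigger states picking trigger state #0 for code point "01") amounts to keeping the states whose bit is set. The following is an illustrative sketch with a hypothetical function name; the bitmap is written as a string for readability, with the first character corresponding to trigger state #0.

```python
# Illustrative sketch (hypothetical names) of MAC-layer bitmap selection of
# trigger states for one code point of the CSI request field.

def select_states(bitmap: str, trigger_states):
    """Return the trigger states whose bitmap bit is '1', in order."""
    if len(bitmap) != len(trigger_states):
        raise ValueError("bitmap length must match the number of trigger states")
    return [state for bit, state in zip(bitmap, trigger_states) if bit == "1"]
```

With the FIG. 7 example, `select_states("1000", [0, 1, 2, 3])` yields `[0]`, i.e. trigger state #0 is configured to the code point "01".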
In other words, the CSI request field included in the DCI of the PDCCH includes a trigger state, and each trigger state is associated with a configuration related to one or multiple CSI reports and a configuration related to one or multiple CSI resources. One or multiple trigger states associated with the CSI request field are selected from the multiple trigger states by using a bitmap in the MAC layer, and are mapped to each of the code points in the CSI request field. FIG. 8 illustrates an example in a case that two serving cells are configured. Here, an example is illustrated in which two serving cells are configured, and trigger states are allocated to the configurations related to the aperiodic CSI report in each cell. As illustrated in the drawing, multiple configurations related to CSI reports are associated in the CSI request field. For example, the trigger state #1 of the serving cell #1 and the trigger state #1 of the serving cell #2 are configured to the code point "01". Here, in a case that "10" is indicated as the value of the CSI request field for the terminal apparatus 1 in a certain slot, the terminal apparatus 1 reports the CSI of the BWP #0 of the serving cell #1 and the CSI of the BWP #0 of the serving cell #2. In this case, in a case that the BWP #0 of the serving cell #1 and the BWP #0 of the serving cell #2 are both activated, the terminal apparatus 1 reports the CSI of the BWP #0 of the serving cell #1 and the BWP #0 of the serving cell #2. In a case that the BWP #0 of the serving cell #1 is activated but the BWP #0 of the serving cell #2 is not activated, the terminal apparatus 1 reports only the CSI of the BWP #0 of the serving cell #1. In this way, multiple serving cells are configured, and the CSI report of each serving cell indicated by the value of the CSI request field is made. 
In other words, the terminal apparatus 1 receives the PDCCH carrying the DCI including the CSI request field, and in a case that the CSI reports of BWPs in multiple serving cells are triggered based on the CSI request field, the terminal apparatus 1 transmits the CSI report of only the BWP indicated by the activated BWP index. At this time, the CSI request field indicates a trigger state, and the trigger state indicates one of multiple states. Each state of the multiple states is configured for each serving cell, and is associated with a configuration related to one or multiple CSI reports, a configuration related to one or multiple CSI resources, and a BWP index for each serving cell. Here, in the example described above, a configuration related to one CSI report is configured for one value of the CSI request field, but multiple CSI reports may be associated. In the example described above, a case is illustrated in which the configuration related to the CSI report of each serving cell is always associated with the configuration related to the BWP index, but in a case that there is one BWP, the associated information may not be configured. In this case, the CSI measurement and the CSI reporting may be performed based on the bandwidth of the serving cell. Although an example is described above in which the CSI report is transmitted based on whether or not the BWP is activated, in a case that the serving cell is deactivated, the CSI report in the serving cell may not be transmitted, and the CSI of only the activated serving cell and/or the activated BWP may be reported. Although, in the example described above, the information for indicating the index of the trigger state is included in the configuration related to the CSI report, the configuration related to the CSI measurement may include the list of trigger states, and which configuration related to the CSI report is included in each trigger state may be configured. 
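The filtering behavior above, where a triggered multi-cell CSI report is transmitted only for the (serving cell, BWP) pairs that are currently activated, can be sketched as follows. All names are illustrative; `trigger_entries` stands for the (cell, BWP index) pairs tied to one trigger state, and `active_bwp` maps each activated serving cell to its active BWP index (a deactivated cell is simply absent).

```python
# Illustrative sketch (hypothetical names): transmit CSI reports only for
# activated serving cells whose indicated BWP is the active one.

def reports_to_transmit(trigger_entries, active_bwp):
    """Keep the (cell, bwp) pairs whose cell is activated with that BWP."""
    return [(cell, bwp) for cell, bwp in trigger_entries
            if active_bwp.get(cell) == bwp]
```

In the FIG. 8 example, if the trigger state covers BWP #0 of both serving cells but only serving cell #1 has BWP #0 active, the sketch keeps only the serving cell #1 entry, matching the behavior described above.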
In other words, in the present invention, the terminal apparatus 1 may report only the CSI of the activated serving cell. The terminal apparatus 1 may report only the CSI of the activated BWP in a case of a cell in which the BWP is configured and activated. The terminal apparatus 1 may not report the CSI of a deactivated serving cell, and the CSI of a deactivated BWP may not be reported. The BWP index included in the configuration related to the CSI report may be configured for each serving cell, and the serving cell index and the BWP index may be used for identification of the BWP. An aspect of the present embodiment may be operated in carrier aggregation or dual connectivity with Radio Access Technologies (RATs) such as LTE and LTE-A/LTE-A Pro. In this case, the aspect may be used for some or all of the cells or cell groups, or the carriers or carrier groups (for example, Primary Cells (PCells), Secondary Cells (SCells), Primary Secondary Cells (PSCells), Master Cell Groups (MCGs), or Secondary Cell Groups (SCGs)). The aspect may also be independently operated and used in a stand-alone manner. In the dual connectivity operation, the Special Cell (SpCell) is referred to as a PCell of the MCG or a PSCell of the SCG, depending on whether the MAC entity is associated with the MCG or with the SCG. Outside the dual connectivity operation, the Special Cell (SpCell) is referred to as a PCell. The Special Cell (SpCell) supports PUCCH transmission and contention based random access. Configurations of apparatuses according to the present embodiment will be described below. Here, an example of a case is illustrated in which CP-OFDM is applied as a downlink radio transmission scheme, and CP-OFDM or DFT-S-OFDM (SC-FDM) is applied as an uplink radio transmission scheme. FIG. 9 is a schematic block diagram illustrating a configuration of the terminal apparatus 1 according to the present embodiment. 
As illustrated in FIG. 9, the terminal apparatus 1 is configured to include a higher layer processing unit 101, a controller 103, a receiver 105, a transmitter 107, and a transmit and/or receive antenna 109. The higher layer processing unit 101 includes a radio resource control unit 1011, a scheduling information interpretation unit 1013, and a Channel State Information (CSI) report control unit 1015. The receiver 105 includes a decoding unit 1051, a demodulation unit 1053, a demultiplexing unit 1055, a radio receiving unit 1057, and a measurement unit 1059. The transmitter 107 includes a coding unit 1071, a modulation unit 1073, a multiplexing unit 1075, a radio transmitting unit 1077, and an uplink reference signal generation unit 1079. The higher layer processing unit 101 outputs the uplink data (the transport block) generated by a user operation or the like to the transmitter 107. The higher layer processing unit 101 performs processing of the Medium Access Control (MAC) layer, the Packet Data Convergence Protocol (PDCP) layer, the Radio Link Control (RLC) layer, and the Radio Resource Control (RRC) layer. The radio resource control unit 1011 included in the higher layer processing unit 101 manages various pieces of configuration information of the terminal apparatus 1. The radio resource control unit 1011 generates information allocated in each channel for the uplink, and outputs the generated information to the transmitter 107. The scheduling information interpretation unit 1013 included in the higher layer processing unit 101 interprets the DCI (scheduling information) received through the receiver 105, generates control information for control of the receiver 105 and the transmitter 107 in accordance with a result of interpreting the DCI, and outputs the generated control information to the controller 103. The CSI report control unit 1015 instructs the measurement unit 1059 to derive the Channel State Information (RI/PMI/CQI/CRI) relating to the CSI reference resource. 
The CSI report control unit 1015 instructs the transmitter 107 to transmit the RI/PMI/CQI/CRI. The CSI report control unit 1015 sets a configuration that is used in a case that the measurement unit 1059 calculates the CQI. In accordance with the control information from the higher layer processing unit 101, the controller 103 generates a control signal for control of the receiver 105 and the transmitter 107. The controller 103 outputs the generated control signal to the receiver 105 and the transmitter 107 to control the receiver 105 and the transmitter 107. In accordance with the control signal input from the controller 103, the receiver 105 demultiplexes, demodulates, and decodes a reception signal received from the base station apparatus 3 through the transmit and/or receive antenna 109, and outputs information resulting from the decoding to the higher layer processing unit 101. The radio receiving unit 1057 converts (down-converts) a downlink signal received through the transmit and/or receive antenna 109 into a signal of an intermediate frequency, removes unnecessary frequency components, controls an amplification level in such a manner as to suitably maintain a signal level, performs orthogonal demodulation based on an in-phase component and an orthogonal component of the received signal, and converts the resulting orthogonally-demodulated analog signal into a digital signal. The radio receiving unit 1057 removes a portion corresponding to a Guard Interval (GI) from the digital signal resulting from the conversion, performs a Fast Fourier Transform (FFT) on the signal from which the Guard Interval has been removed, and extracts a signal in the frequency domain. The demultiplexing unit 1055 demultiplexes the extracted signal into the downlink PDCCH, the downlink PDSCH, and the downlink reference signal. The demultiplexing unit 1055 performs channel compensation for the PDCCH and the PDSCH, based on the channel estimate value input from the measurement unit 1059. 
The demultiplexing unit 1055 outputs the downlink reference signal resulting from the demultiplexing to the measurement unit 1059. The demodulation unit 1053 demodulates the downlink PDCCH and outputs a signal resulting from the demodulation to the decoding unit 1051. The decoding unit 1051 attempts to decode the PDCCH. In a case of succeeding in the decoding, the decoding unit 1051 outputs the downlink control information resulting from the decoding and the RNTI to which the downlink control information corresponds, to the higher layer processing unit 101. The demodulation unit 1053 demodulates the PDSCH in compliance with a modulation scheme notified with the downlink grant, such as Quadrature Phase Shift Keying (QPSK), 16 Quadrature Amplitude Modulation (QAM), 64 QAM, or 256 QAM, and outputs a signal resulting from the demodulation to the decoding unit 1051. The decoding unit 1051 performs decoding in accordance with information on the transmission scheme or the coding rate notified with the downlink control information, and outputs, to the higher layer processing unit 101, the downlink data (the transport block) resulting from the decoding. The measurement unit 1059 performs downlink path loss measurement, channel measurement, and/or interference measurement from the downlink reference signal input from the demultiplexing unit 1055. The measurement unit 1059 outputs, to the higher layer processing unit 101, the measurement result and the CSI calculated based on the measurement result. The measurement unit 1059 calculates a downlink channel estimate value from the downlink reference signal and outputs the calculated downlink channel estimate value to the demultiplexing unit 1055. 
The transmitter 107 generates the uplink reference signal in accordance with the control signal input from the controller 103, codes and modulates the uplink data (the transport block) input from the higher layer processing unit 101, multiplexes the PUCCH, the PUSCH, and the generated uplink reference signal, and transmits a signal resulting from the multiplexing to the base station apparatus 3 through the transmit and/or receive antenna 109. The coding unit 1071 codes the Uplink Control Information and the uplink data input from the higher layer processing unit 101. The modulation unit 1073 modulates the coded bits input from the coding unit 1071, in compliance with a modulation scheme such as BPSK, QPSK, 16 QAM, 64 QAM, or 256 QAM. The uplink reference signal generation unit 1079 generates a sequence determined according to a prescribed rule (formula), based on a physical cell identity (also referred to as a Physical Cell Identity (PCI), a cell ID, or the like) for identifying the base station apparatus 3, a bandwidth in which the uplink reference signal is mapped, a cyclic shift notified with the uplink grant, a parameter value for generation of a DMRS sequence, and the like. Based on the information used for the scheduling of the PUSCH, the multiplexing unit 1075 determines the number of PUSCH layers to be spatially multiplexed, maps multiple pieces of uplink data to be transmitted on the same PUSCH to multiple layers through Multiple Input Multiple Output Spatial Multiplexing (MIMO SM), and performs precoding on the layers. In accordance with the control signal input from the controller 103, the multiplexing unit 1075 performs a Discrete Fourier Transform (DFT) on the modulation symbols of the PUSCH. The multiplexing unit 1075 multiplexes the PUCCH and/or PUSCH signals and the generated uplink reference signal for each transmit antenna port. 
To be more specific, the multiplexing unit1075maps the PUCCH and/or PUSCH signals and the generated uplink reference signal to the resource elements for each transmit antenna port. The radio transmitting unit1077performs Inverse Fast Fourier Transform (IFFT) on a signal resulting from the multiplexing to perform modulation in compliance with an SC-FDMA scheme, adds the Guard Interval to the SC-FDMA-modulated SC-FDMA symbol to generate a baseband digital signal, converts the baseband digital signal into an analog signal, generates an in-phase component and an orthogonal component of an intermediate frequency from the analog signal, removes frequency components unnecessary for the intermediate frequency band, converts (up converts) the signal of the intermediate frequency into a signal of a high frequency, removes unnecessary frequency components, performs power amplification, and outputs a final result to the transmit and/or receive antenna109for transmission. FIG.10is a schematic block diagram illustrating a configuration of the base station apparatus3according to the present embodiment. As is illustrated, the base station apparatus3is configured to include a higher layer processing unit301, a controller303, a receiver305, a transmitter307, and a transmit and/or receive antenna309. The higher layer processing unit301includes a radio resource control unit3011, a scheduling unit3013, and a CSI report control unit3015. The receiver305includes a decoding unit3051, a demodulation unit3053, a demultiplexing unit3055, a radio receiving unit3057, and a measurement unit3059. The transmitter307includes a coding unit3071, a modulation unit3073, a multiplexing unit3075, a radio transmitting unit3077, and a downlink reference signal generation unit3079. The higher layer processing unit301performs processing of the Medium Access Control (MAC) layer, the Packet Data Convergence Protocol (PDCP) layer, the Radio Link Control (RLC) layer, and the Radio Resource Control (RRC) layer. 
The higher layer processing unit301generates control information for control of the receiver305and the transmitter307, and outputs the generated control information to the controller303. The radio resource control unit3011included in the higher layer processing unit301generates, or acquires from a higher node, the downlink data (the transport block) allocated in the downlink PDSCH, system information, the RRC message, the MAC Control Element (CE), and the like, and outputs the result of the generation or the acquisition to the transmitter307. The radio resource control unit3011manages various configuration information for each of the terminal apparatuses1. The scheduling unit3013included in the higher layer processing unit301determines a frequency and a subframe to which the physical channels (PDSCH or PUSCH) are allocated, the transmission coding rate and modulation scheme for the physical channels (PDSCH or PUSCH), the transmit power, and the like, from the received CSI and from the channel estimate value, channel quality, or the like input from the measurement unit3059. The scheduling unit3013generates the control information for control of the receiver305and the transmitter307in accordance with a result of the scheduling, and outputs the generated information to the controller303. The scheduling unit3013generates the information (for example, the DCI format) to be used for the scheduling of the physical channels (PDSCH or PUSCH), based on the result of the scheduling. The CSI report control unit3015included in the higher layer processing unit301controls a CSI report to be performed by the terminal apparatus1. The CSI report control unit3015transmits information, assumed in order for the terminal apparatus1to derive RI/PMI/CQI in the CSI reference resource, for indicating various configurations, to the terminal apparatus1through the transmitter307. 
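As a toy illustration of the kind of mapping the scheduling unit3013 performs when it chooses a modulation scheme and coding rate from reported channel quality, a CQI-to-MCS lookup might be sketched as below. The index values and rates in this table are invented for illustration and are not taken from any 3GPP table:

```python
# Hypothetical CQI-to-(modulation, code rate) table; all values illustrative.
CQI_TABLE = {
    1: ("QPSK", 0.08),
    4: ("QPSK", 0.30),
    7: ("16QAM", 0.45),
    10: ("64QAM", 0.55),
    13: ("64QAM", 0.75),
    15: ("256QAM", 0.93),
}

def select_mcs(cqi):
    """Pick the most aggressive table entry whose index does not exceed the report."""
    eligible = [k for k in CQI_TABLE if k <= cqi]
    if not eligible:
        raise ValueError("CQI below the lowest supported entry")
    return CQI_TABLE[max(eligible)]
```

A higher reported CQI thus selects a denser constellation and higher coding rate, which is the essence of link adaptation performed from the CSI and channel estimates.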
Based on the control information from the higher layer processing unit301, the controller303generates a control signal for controlling the receiver305and the transmitter307. The controller303outputs the generated control signal to the receiver305and the transmitter307to control the receiver305and the transmitter307. In accordance with the control signal input from the controller303, the receiver305demultiplexes, demodulates, and decodes a reception signal received from the terminal apparatus1through the transmit and/or receive antenna309, and outputs information resulting from the decoding to the higher layer processing unit301. The radio receiving unit3057converts (down-converts) an uplink signal received through the transmit and/or receive antenna309into a signal of an intermediate frequency, removes unnecessary frequency components, controls the amplification level in such a manner as to suitably maintain a signal level, performs orthogonal demodulation based on an in-phase component and an orthogonal component of the received signal, and converts the resulting orthogonally-demodulated analog signal into a digital signal. The radio receiving unit3057removes a portion corresponding to the Guard Interval (GI) from the digital signal resulting from the conversion. The radio receiving unit3057performs Fast Fourier Transform (FFT) on the signal from which the Guard Interval has been removed, extracts a signal in the frequency domain, and outputs the resulting signal to the demultiplexing unit3055. The demultiplexing unit3055demultiplexes the signal input from the radio receiving unit3057into PUCCH, PUSCH, and the signal such as the uplink reference signal. The demultiplexing is performed based on radio resource allocation information, predetermined by the base station apparatus3using the radio resource control unit3011, that is included in the uplink grant notified to each of the terminal apparatuses1. 
The demultiplexing unit3055performs channel compensation of the PUCCH and the PUSCH, based on the channel estimate value input from the measurement unit3059. The demultiplexing unit3055outputs an uplink reference signal resulting from the demultiplexing, to the measurement unit3059. The demodulation unit3053performs Inverse Discrete Fourier Transform (IDFT) on the PUSCH, obtains modulation symbols, and performs reception signal demodulation, that is, demodulates each of the modulation symbols on the PUCCH and the PUSCH, in compliance with the modulation scheme determined in advance, such as Binary Phase Shift Keying (BPSK), QPSK, 16 QAM, 64 QAM, 256 QAM, or in compliance with the modulation scheme that the base station apparatus3itself notified in advance with the uplink grant to each of the terminal apparatuses1. The demodulation unit3053demultiplexes the modulation symbols of multiple pieces of uplink data transmitted on the same PUSCH by using the MIMO SM, based on the number of spatially multiplexed sequences notified in advance with the uplink grant to each of the terminal apparatuses1and information for indicating the precoding to be performed on the sequences. The decoding unit3051decodes the coded bits of the PUCCH and the PUSCH, which have been demodulated, in compliance with a predetermined coding scheme by using the transmission or original coding rate that is predetermined or notified in advance with the uplink grant to the terminal apparatus1by the base station apparatus3, and outputs the decoded uplink data and uplink control information to the higher layer processing unit301. In a case that the PUSCH is retransmitted, the decoding unit3051performs the decoding with the coded bits input from the higher layer processing unit301and retained in a HARQ buffer, and the demodulated coded bits. 
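The uplink chain described above — DFT precoding and IFFT with guard-interval insertion at the terminal (units1075 and1077), then guard-interval removal, FFT, and IDFT at the base station (units3057 and3053) — can be sketched end to end for a toy ideal channel. The transform sizes and guard-interval length here are made-up illustrative values:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (for illustration, not performance)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * i * k / n) for k in range(n))
            for i in range(n)]

def idft(x):
    """Inverse DFT matching dft() above."""
    n = len(x)
    return [sum(x[k] * cmath.exp(2j * cmath.pi * i * k / n) for k in range(n)) / n
            for i in range(n)]

M, N, CP = 4, 8, 2  # data symbols, IFFT size, guard-interval length (illustrative)

def sc_fdma_tx(symbols):
    spread = dft(symbols)              # DFT precoding (multiplexing unit)
    grid = spread + [0j] * (N - M)     # map onto N subcarriers
    time = idft(grid)                  # IFFT (radio transmitting unit)
    return time[-CP:] + time           # prepend cyclic guard interval

def sc_fdma_rx(samples):
    time = samples[CP:]                # remove guard interval (radio receiving unit)
    grid = dft(time)                   # FFT back to the frequency domain
    spread = grid[:M]                  # demap the occupied subcarriers
    return idft(spread)                # IDFT (demodulation unit) recovers symbols

tx_syms = [1 + 0j, -1 + 0j, 0 + 1j, 0 - 1j]
rx_syms = sc_fdma_rx(sc_fdma_tx(tx_syms))
```

With an ideal channel the receiver recovers the transmitted modulation symbols exactly (up to floating-point error); channel compensation in the demultiplexing unit3055 would sit between the FFT and the demapping in a real receiver.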
The measurement unit3059measures the channel estimate value, the channel quality, and the like, based on the uplink reference signal input from the demultiplexing unit3055, and outputs the measurement result to the demultiplexing unit3055and the higher layer processing unit301. The transmitter307generates the downlink reference signal in accordance with the control signal input from the controller303, codes and modulates the downlink control information and the downlink data that are input from the higher layer processing unit301, multiplexes the PDCCH, the PDSCH, and the downlink reference signal and transmits a signal resulting from the multiplexing to the terminal apparatus1through the transmit and/or receive antenna309or transmits the PDCCH, the PDSCH, and the downlink reference signal to the terminal apparatus1through the transmit and/or receive antenna309by using separate radio resources. The coding unit3071codes the downlink control information and the downlink data input from the higher layer processing unit301. The modulation unit3073modulates the coded bits input from the coding unit3071, in compliance with a modulation scheme such as BPSK, QPSK, 16 QAM, 64 QAM, and 256 QAM. The downlink reference signal generation unit3079generates, as the downlink reference signal, a sequence known to the terminal apparatus1, the sequence being determined in accordance with a predetermined rule based on the physical cell identity (PCI) for identifying the base station apparatus3, or the like. The multiplexing unit3075, in accordance with the number of PDSCH layers to be spatially multiplexed, maps one or multiple pieces of downlink data to be transmitted in one PDSCH to one or multiple layers, and performs precoding on the one or multiple layers. The multiplexing unit3075multiplexes the downlink physical channel signal and the downlink reference signal for each transmit antenna port. 
The multiplexing unit3075maps the downlink physical channel signal and the downlink reference signal to the resource elements for each transmit antenna port. The radio transmitting unit3077performs Inverse Fast Fourier Transform (IFFT) on the modulation symbol resulting from the multiplexing or the like, performs the modulation in compliance with an OFDM scheme, adds the guard interval to the OFDM-modulated OFDM symbol, generates a digital signal in a baseband, converts the digital signal in the baseband into an analog signal, generates an in-phase component and an orthogonal component of an intermediate frequency from the analog signal, removes frequency components unnecessary for the intermediate frequency band, converts (up converts) the signal of the intermediate frequency into a signal of a high frequency, removes unnecessary frequency components, performs power amplification, and outputs a final result to the transmit and/or receive antenna309for transmission. (1) More specifically, a terminal apparatus1according to a first aspect of the present invention includes: a receiver configured to receive a physical downlink control channel for conveying downlink control information including a first information field; and a transmitter configured to report channel state information (CSI), wherein the first information field indicates first information, the first information indicates one of multiple states, each of the multiple states is associated with: a configuration related to one or multiple CSI reports; and a configuration related to one or multiple CSI resources, and the one of the multiple states is configured to be associated with a first serving cell and a bandwidth part (BWP) of the first serving cell. 
(2) A base station apparatus3according to a second aspect of the present invention includes: a transmitter configured to transmit a physical downlink control channel for conveying downlink control information including a first information field; and a receiver configured to receive a channel state information (CSI) report, wherein the first information field indicates first information, the first information indicates one of multiple states, each of the multiple states is associated with: a configuration related to one or multiple CSI reports; and a configuration related to one or multiple CSI resources, and the one of the multiple states is configured to be associated with a first serving cell and a bandwidth part (BWP) of the first serving cell. (3) A communication method according to a third aspect of the present invention is a communication method for a terminal apparatus, the communication method including the steps of: receiving a physical downlink control channel for conveying downlink control information including a first information field; and reporting channel state information (CSI), wherein the first information field indicates first information, the first information indicates one of multiple states, each of the multiple states is associated with: a configuration related to one or multiple CSI reports; and a configuration related to one or multiple CSI resources, and the one of the multiple states is configured to be associated with a first serving cell and a bandwidth part (BWP) of the first serving cell. 
(4) A communication method according to a fourth aspect of the present invention is a communication method for a base station apparatus, the communication method including the steps of: transmitting a physical downlink control channel for conveying downlink control information including a first information field; and receiving a channel state information (CSI) report, wherein the first information field indicates first information, the first information indicates one of multiple states, each of the multiple states is associated with: a configuration related to one or multiple CSI reports; and a configuration related to one or multiple CSI resources, and the one of the multiple states is configured to be associated with a first serving cell and a bandwidth part (BWP) of the first serving cell. (5) An integrated circuit according to a fifth aspect of the present invention is an integrated circuit mounted on a terminal apparatus, the integrated circuit including: a receiving unit configured to receive a physical downlink control channel (PDCCH) for conveying downlink control information (DCI) including a first information field (CSI request field); and a transmitting unit configured to report channel state information (CSI), wherein the first information field indicates first information, the first information indicates one of multiple states, each of the multiple states is associated with: a configuration related to one or multiple CSI reports; and a configuration related to one or multiple CSI resources, and the one of the multiple states is configured to be associated with a first serving cell and a bandwidth part (BWP) of the first serving cell. 
(6) An integrated circuit according to a sixth aspect of the present invention is an integrated circuit mounted on a base station apparatus, the integrated circuit including: a transmitting unit configured to transmit a physical downlink control channel (PDCCH) for conveying downlink control information (DCI) including a first information field (CSI request field); and a receiving unit configured to receive a channel state information (CSI) report, wherein the first information field indicates first information, the first information indicates one of multiple states, each of the multiple states is associated with: a configuration related to one or multiple CSI reports; and a configuration related to one or multiple CSI resources, and the one of the multiple states is configured to be associated with a first serving cell and a bandwidth part (BWP) of the first serving cell. A program running on an apparatus according to the present invention may serve as a program that controls a Central Processing Unit (CPU) and the like to cause a computer to operate in such a manner as to realize the functions of the above-described embodiment according to the present invention. Programs or the information handled by the programs are temporarily stored in a volatile memory such as a Random Access Memory (RAM), a non-volatile memory such as a flash memory, a Hard Disk Drive (HDD), or any other storage device system. Note that a program for realizing the functions of the embodiment according to the present invention may be recorded in a computer-readable recording medium. This configuration may be realized by causing a computer system to read the program recorded on the recording medium for execution. It is assumed that the “computer system” refers to a computer system built into the apparatuses, and the computer system includes an operating system and hardware components such as a peripheral device. 
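The association common to aspects (1) through (6) above — a DCI field value selecting one of multiple states, each tying CSI report and resource configurations to a serving cell and a BWP — can be modeled as a small lookup. The class and function names, field width, and table contents below are illustrative assumptions, not part of the claimed apparatus:

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative data model for DCI-triggered CSI states; all names are assumptions.
@dataclass
class TriggerState:
    report_configs: List[int]    # IDs of associated CSI report configurations
    resource_configs: List[int]  # IDs of associated CSI resource configurations
    serving_cell: int            # first serving cell the state applies to
    bwp: int                     # bandwidth part of that serving cell

# Table indexed by the value of the first information field (CSI request field).
TRIGGER_STATES = {
    1: TriggerState(report_configs=[0], resource_configs=[0], serving_cell=0, bwp=0),
    2: TriggerState(report_configs=[1, 2], resource_configs=[1], serving_cell=0, bwp=1),
}

def resolve_csi_request(field_value: int) -> Optional[TriggerState]:
    """Value 0 is taken here to trigger no report; other values select a state."""
    if field_value == 0:
        return None
    return TRIGGER_STATES[field_value]
```

The point of the model is that one short DCI field can indirectly select a rich, pre-configured combination of report settings, resource settings, serving cell, and BWP.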
The “computer-readable recording medium” may be any of a semiconductor recording medium, an optical recording medium, a magnetic recording medium, a medium dynamically retaining the program for a short time, or any other computer readable recording medium. Each functional block or various characteristics of the apparatuses used in the above-described embodiment may be implemented or performed on an electric circuit, for example, an integrated circuit or multiple integrated circuits. An electric circuit designed to perform the functions described in the present specification may include a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), or other programmable logic devices, discrete gates or transistor logic, discrete hardware components, or a combination thereof. The general-purpose processor may be a microprocessor or may be a processor of known type, a controller, a micro-controller, or a state machine instead. The above-mentioned electric circuit may include a digital circuit, or may include an analog circuit. In a case that, with advances in semiconductor technology, a circuit integration technology appears that replaces the present integrated circuits, it is also possible to use a new integrated circuit based on the technology according to one or more aspects of the present invention. Note that, in the embodiments according to the present invention, examples have been described in which the present invention is applied to a communication system constituted by a base station apparatus and a terminal apparatus, but the present invention can also be applied in a system in which terminals communicate with each other such as Device to Device (D2D). Note that the invention of the present patent application is not limited to the above-described embodiments. 
In the embodiment, apparatuses have been described as an example, but the invention of the present application is not limited to these apparatuses, and is applicable to a terminal apparatus or a communication apparatus of a fixed-type or a stationary-type electronic apparatus installed indoors or outdoors, for example, an AV apparatus, a kitchen apparatus, a cleaning or washing machine, an air-conditioning apparatus, office equipment, a vending machine, and other household apparatuses. The embodiments of the present invention have been described in detail above referring to the drawings, but the specific configuration is not limited to the embodiments and includes, for example, an amendment to a design that falls within the scope that does not depart from the gist of the present invention. Various modifications are possible within the scope of the present invention defined by claims, and embodiments that are made by suitably combining technical means disclosed according to the different embodiments are also included in the technical scope of the present invention. A configuration in which constituent elements, described in the respective embodiments and having mutually the same effects, are substituted for one another is also included in the technical scope of the present invention.
DESCRIPTION OF EMBODIMENTS The following describes technical solutions in this application with reference to the accompanying drawings. An embodiment of this application provides an NDPA frame transmission method that is applied to a wireless communication system. The wireless communication system may be a wireless local area network (WLAN) or a cellular network. The method may be implemented by a communication device in the wireless communication system or by a chip or a processor in the communication device. In the wireless local area network, the communication device supports communication performed by using protocols of the IEEE 802.11 series. The protocols of the IEEE 802.11 series include 802.11be, 802.11ax, and 802.11a/b/g/n/ac. FIG.1is used as an example to describe a network structure to which the NDPA frame transmission method in this application is applicable.FIG.1is a schematic diagram of a network structure according to an embodiment of this application. The network structure may be a wireless local area network. The network structure may include one or more access point (AP) stations and one or more non-access point stations (non-AP STA). For ease of description, an access point station is referred to as an access point (AP), and a non-access point station is referred to as a station (STA) in this specification. The APs are, for example, an AP1and an AP2inFIG.1, and the STAs are, for example, a STA1and a STA2inFIG.1. The access point may be an access point through which a terminal device (for example, a mobile phone) accesses a wired (or wireless) network, and is mainly deployed in a home, a building, and a campus, with a typical coverage radius ranging from dozens of meters to hundreds of meters. Certainly, the access point may be alternatively deployed outdoors. The access point is equivalent to a bridge that connects a wired network and a wireless network. 
A main function of the access point is to connect various wireless network clients together and then connect the wireless network to an Ethernet. Specifically, the access point may be a terminal device (for example, a mobile phone) or a network device (for example, a router) with a wireless fidelity (WiFi) chip. The access point may be a device supporting the 802.11be standard. Alternatively, the access point may be a device supporting a plurality of wireless local area network (WLAN) standards of the 802.11 family, such as 802.11be, 802.11ax, 802.11ac, 802.11n, 802.11g, 802.11b, and 802.11a. The access point in this application may be an extremely high throughput (EHT) AP, or may be an access point to which a specific generation of a Wi-Fi standard in the future is applicable. The access point may include a processor and a transceiver. The processor is configured to control and manage an action of the access point. The transceiver is configured to receive or transmit information. The station may be a wireless communication chip, a wireless sensor, a wireless communication terminal, or the like, and may also be referred to as a user. For example, the station may be a mobile phone supporting a Wi-Fi communication function, a tablet computer supporting a Wi-Fi communication function, a set-top box supporting a Wi-Fi communication function, a smart TV supporting a Wi-Fi communication function, a smart wearable device supporting a Wi-Fi communication function, a vehicle-mounted communication device supporting a Wi-Fi communication function, a computer supporting a Wi-Fi communication function, or the like. Optionally, the station may support the 802.11be standard. The station may also support a plurality of wireless local area network (WLAN) standards of the 802.11 family, such as 802.11be, 802.11ax, 802.11ac, 802.11n, 802.11g, 802.11b, and 802.11a. The station may include a processor and a transceiver. 
The processor is configured to control and manage an action of the station. The transceiver is configured to receive or transmit information. The station in this application may be an extremely high throughput (EHT) STA, or may be a STA to which a specific generation of the Wi-Fi standard in the future is applicable. For example, the access point and the station may be devices applied to an internet of vehicles; internet of things nodes, sensors, or the like in an internet of things (IoT); smart cameras, smart remote controls, smart water meters, or smart power meters in a smart household; sensors in a smart city; or the like. The access point and the station in embodiments of this application may also be collectively referred to as communication apparatuses. The communication apparatus may include a hardware structure and a software module, and implement the foregoing functions in a form of the hardware structure, the software module, or a combination of the hardware structure and the software module. A specific function of the foregoing functions may be implemented in a manner of the hardware structure, the software module, or the combination of the hardware structure and the software module. FIG.2is a schematic diagram of a structure of a communication apparatus according to an embodiment of this application. As shown inFIG.2, the communication apparatus200may include a processor201and a transceiver205, and optionally, further includes a memory202. The transceiver205may be referred to as a transceiver unit, a transceiver, a transceiver circuit, or the like, and is configured to implement a transceiver function. The transceiver205may include a receiver and a transmitter. The receiver may be referred to as a receiving machine, a receiver circuit, or the like, and is configured to implement a receiving function. The transmitter may be referred to as a transmitting machine, a transmitter circuit, or the like, and is configured to implement a sending function. 
The memory202may store a computer program or software code or instructions204. The computer program or software code or instructions204may also be referred to as firmware. The processor201may control a MAC layer and a PHY layer by running a computer program or software code or instructions203in the processor201or by invoking the computer program or software code or instructions204stored in the memory202, to implement an NDPA frame transmission method provided in the following embodiments of this application. The processor201may be a central processing unit (CPU). The memory202may be, for example, a read-only memory (ROM) or a random access memory (RAM). The processor201and the transceiver205that are described in this application may be implemented on an integrated circuit (IC), an analog IC, a radio frequency integrated circuit (RFIC), a hybrid signal IC, an application-specific integrated circuit (ASIC), a printed circuit board (PCB), an electronic device, or the like. The communication apparatus200may further include an antenna206. The modules included in the communication apparatus200are merely examples for description. This is not limited in this application. As described above, the communication apparatus200described in the foregoing embodiment may be an access point or a station. However, a scope of the communication apparatus described in this application is not limited thereto, and a structure of the communication apparatus may not be limited to that shown inFIG.2. The communication apparatus may be an independent device or a part of a large device. 
For example, an implementation form of the communication apparatus may be as follows: (1) an independent integrated circuit IC, chip, chip system, or chip subsystem; (2) a set with one or more ICs, where optionally, the IC set may also include a storage component configured to store data or instructions; (3) a module that can be embedded in another device; (4) a receiver, an intelligent terminal, a wireless device, a handset, a mobile unit, a vehicle-mounted device, a cloud device, an artificial intelligence device, or the like; or (5) others or the like. For a case in which the implementation form of the communication apparatus is a chip or a chip system, refer to a schematic diagram of a structure of a chip shown inFIG.3. The chip shown inFIG.3includes a processor301and an interface302. There may be one or more processors301and a plurality of interfaces302. The interface302is configured to receive and transmit signals. Optionally, the chip or the chip system may include a memory303. The memory303is configured to store program instructions and data that are necessary for the chip or the chip system. In addition, embodiments of this application do not limit the protection scope and applicability of the claims. A person skilled in the art may make adaptive changes to functions and deployments of elements in this application, or may omit, replace, or add various processes or components as appropriate without departing from the scope of embodiments of this application. Since the 802.11a/g standard, WLANs have gone through a plurality of generations of standards, for example, 802.11n, 802.11ac, and 802.11ax that is currently under discussion. An NDPA frame has different variants in different standards. 802.11ac is a previous-generation standard of 802.11ax. In the 802.11ac standard, an NDPA frame variant may be referred to as a very high throughput (VHT) NDPA frame.FIG.4Ais a schematic diagram of a structure of a VHT NDPA frame. 
As shown inFIG.4A, the VHT NDPA frame includes a frame control field, a duration field, a receiving address (RA) field, a transmitting address (TA) field, a sounding dialog token field, and one or more station information (STA Info) fields. The frame control field includes a frame type subfield and a frame subtype subfield, indicating that the frame is an NDPA frame. The sounding dialog token field is used to index a channel sounding sequence number. The RA field and the TA field are used to identify a receive end and a transmit end of a MAC frame. The station information field includes an association identifier (AID) subfield indicating an AID, a feedback type subfield, and a number of columns (Nc) subfield. One station information field is two octets. The AID subfield indicates an AID of a station corresponding to the station information field. The feedback type subfield indicates whether feedback is single-user feedback or multi-user feedback. The number of columns (Nc) subfield indicates a quantity of columns for which the station needs to feed back channel state information, or may be understood as indicating a quantity of spatial streams for which the station needs to feed back channel state information. In 802.11ax, a corresponding NDPA frame variant is a high efficiency (HE) NDPA frame.FIG.4Bis a schematic diagram of a structure of an HE NDPA frame. The HE NDPA frame includes a frame control field, a duration field, an RA field, a TA field, a sounding dialog token field, and one or more station information fields. The sounding dialog token may use one octet, that is, the sounding dialog token may use eight bits, namely, B0to B7. The sounding dialog token field includes a 1-bit frame type subfield. The frame type subfield is located at B1of the sounding dialog token, and used to indicate whether the NDPA frame is an HE NDPA frame. 
0 indicates that the NDPA frame is not an HE NDPA frame, and 1 indicates that the NDPA frame is an HE NDPA frame. The station information field includes an AID subfield, a partial bandwidth information (partial BW Info) subfield, a feedback type and Ng subfield, a disambiguation subfield, a codebook size subfield, and a number of columns (Nc) subfield. One station information field is four octets. The AID subfield and the number of columns subfield serve the same purposes as the AID subfield and the number of columns subfield in the VHT NDPA frame. The partial bandwidth information subfield indicates a frequency domain range for which a station corresponding to the AID needs to feed back channel state information. A bandwidth used for transmitting data may be divided into one or more resource units (RU). An RU may be a 26-tone RU, a 52-tone RU, a 106-tone RU, a 242-tone RU, a 484-tone RU, a 996-tone RU, or the like. A tone represents a subcarrier. For example, the 26-tone RU represents an RU including 26 subcarriers. A frequency domain resource of 20 MHz may include one entire 242-tone resource unit (242-tone RU), or may include nine 26-tone RUs. The partial bandwidth information subfield indicates, by indicating a segment of contiguous 26-tone RUs, a frequency domain range for which a station needs to feed back channel state information. In other words, the partial bandwidth information subfield indicates RUs for which a STA corresponding to the AID needs to feed back channel state information, to indicate a frequency domain range of a partial bandwidth for which the station needs to feed back channel state information. It should be understood that the frequency domain range of the partial bandwidth belongs to a bandwidth corresponding to the NDPA frame. 
The partial bandwidth information subfield includes a resource unit start index and a resource unit end index, and indicates a segment of contiguous RUs by using the resource unit start index and the resource unit end index. In 802.11ax, a maximum bandwidth is 160 MHz and includes seventy-four 26-tone RUs. Therefore, the RU start index needs [log2 74] = 7 bits to indicate a first RU for which the station corresponding to the AID needs to feed back channel state information, and the RU end index needs seven bits to indicate a last RU for which the station needs to feed back channel state information. "[ ]" indicates rounding up. The feedback type and Ng subfield indicates whether feedback is single-user feedback or multi-user feedback, and indicates that Ng subcarriers are allocated into one group. In this way, channel state information of subcarriers in a same group is fed back together, helping reduce overheads. The disambiguation subfield is used to prevent a VHT STA from misreading an NDPA frame as a VHT NDPA frame. The VHT STA in this application is an earlier-version VHT STA that supports the 802.11ac protocol but does not support an NDPA frame variant occurring after the VHT NDPA frame. The codebook size subfield indicates precision of quantization. Different precisions correspond to different overheads. In 802.11ax, a station has a maximum of eight antennas and supports a maximum of eight columns. Therefore, the number of columns subfield needs [log2 8] = 3 bits to indicate a specific value of a column quantity, that is, one of 1 to 8. In 802.11az, that is, in a phase of a ranging standard, the corresponding NDPA frame variant is a ranging NDPA frame. A structure of the ranging NDPA frame is basically the same as the structure of the HE NDPA frame. For the structure of the ranging NDPA frame, refer to FIG. 4B. Different from the HE NDPA frame, a frame type subfield in a sounding dialog token field in the ranging NDPA frame is two bits.
The two bits are B0 and B1 of B0 to B7. One bit (B1) is used to indicate whether the NDPA frame is an HE NDPA frame, and the other bit (B0) is used to indicate whether the NDPA frame is a ranging NDPA frame. A ranging STA determines an NDPA frame variant based on the 2-bit frame type subfield. Specific indication relationships of the frame type subfield are shown in Table 1 below.

TABLE 1
Frame type subfield
Ranging    HE    Variant
0          0     VHT NDPA frame
0          1     HE NDPA frame
1          0     Ranging NDPA frame
1          1     Undefined

It should be understood that an HE STA does not read the bit that is of the frame type subfield and that is used to indicate whether an NDPA frame is a ranging NDPA frame. The HE STA in this application is an earlier-version HE STA that supports the 802.11ax protocol but does not support an NDPA frame variant occurring after the HE NDPA frame. The ranging STA in this application is an earlier-version ranging STA that supports the 802.11az protocol but does not support an NDPA frame variant occurring after the ranging NDPA frame. The following describes indication manners of the partial bandwidth information subfield in different generations of the standard. In 802.11ax, a maximum bandwidth is 160 MHz, and the partial bandwidth information subfield may indicate a bandwidth by using seventy-four 26-tone RUs. Ordinal values corresponding to the seventy-four 26-tone RUs are 0, 1, 2, 3, . . . , and 73 sequentially in ascending order of a corresponding frequency. In an HE NDPA frame, an indication manner of the partial bandwidth information subfield is as follows: A resource unit start index in the partial bandwidth information subfield indicates one of the seventy-four 26-tone RUs, and a resource unit end index in the partial bandwidth information subfield is used to indicate one of the seventy-four 26-tone RUs.
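The mapping in Table 1 can be expressed as a small decoder over the two frame type bits, where B0 carries the ranging bit and B1 the HE bit as described above. This is a sketch; the function name is illustrative:

```python
def ndpa_variant(sounding_dialog_token: int) -> str:
    """Decode the NDPA frame variant from the 2-bit frame type subfield
    carried in B0 (ranging bit) and B1 (HE bit) of the token."""
    ranging = sounding_dialog_token & 0x01       # B0
    he = (sounding_dialog_token >> 1) & 0x01     # B1
    return {
        (0, 0): "VHT NDPA frame",
        (0, 1): "HE NDPA frame",
        (1, 0): "Ranging NDPA frame",
        (1, 1): "Undefined",
    }[(ranging, he)]

assert ndpa_variant(0b00) == "VHT NDPA frame"
assert ndpa_variant(0b10) == "HE NDPA frame"       # B1 set, B0 clear
assert ndpa_variant(0b01) == "Ranging NDPA frame"  # B0 set, B1 clear
```

An HE STA that reads only B1 cannot tell rows 01 and 11 apart, which is the root of the misreading problem discussed later.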
Specifically, the resource unit start index indicates one of the seventy-four 26-tone RUs by indicating an ordinal value of the 26-tone RU, and the resource unit end index indicates one of the seventy-four 26-tone RUs by indicating an ordinal value of the 26-tone RU. It can be learned that in the foregoing solution, the RU start index in the partial bandwidth information subfield supports only one of the seventy-four 26-tone RUs, and the RU end index in the partial bandwidth information subfield also supports only one of the seventy-four 26-tone RUs. However, in the 802.11be standard, a maximum bandwidth that can be supported is 320 MHz. For example, when a 26-tone RU is used as a granularity for indication, 320 MHz may correspond to a maximum of one hundred and forty-eight 26-tone RUs, and 148 cases need to be indicated. Clearly, in this case, the partial bandwidth information subfield in the HE NDPA frame cannot meet a requirement for indicating an RU that is in a larger bandwidth and for which channel state information needs to be fed back in the 802.11be standard. The following describes manners in which the number of columns subfield indicates a column quantity in different generations of the standard. In 802.11ax, a largest column quantity is 8. The 3-bit number of columns subfield in the HE NDPA frame indicates a column quantity by indicating a value of 1 to 8. However, in the 802.11be standard that is under discussion, a maximum of 16 columns need to be indicated. Clearly, the number of columns subfield in the HE NDPA frame cannot meet a requirement for indicating a larger column quantity in the 802.11be standard. The following describes technical solutions in this application with reference to the NDPA frame transmission method provided in embodiments of this application. In embodiments of this application, the NDPA frame includes a plurality of fields and subfields. 
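The bit-width arithmetic above can be checked directly: [log2 74] = 7 bits suffice for the seventy-four 26-tone RUs of 160 MHz, but the one hundred and forty-eight 26-tone RUs of 320 MHz need 8 bits. A minimal sketch:

```python
import math

def index_bits(num_values: int) -> int:
    """Bits needed to index one of num_values items (rounding up log2)."""
    return math.ceil(math.log2(num_values))

assert index_bits(74) == 7    # 160 MHz: seventy-four 26-tone RUs
assert index_bits(8) == 3     # number of columns: a value of 1 to 8
assert index_bits(148) == 8   # 320 MHz: one hundred and forty-eight RUs
assert (1 << 7) - 1 == 127    # a 7-bit index cannot reach ordinal 147
```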
It should be understood that names of the fields and subfields in the NDPA frame are not limited in embodiments of this application, and in another embodiment, may be alternatively replaced with other names. FIG.5is a schematic flowchart of an NDPA frame transmission method according to an embodiment of this application. The method includes the following steps. 501. A beamformer (Bfer) generates an NDPA frame. The NDPA frame includes one or more station information fields. The station information field includes an AID subfield indicating an AID, a partial bandwidth information subfield, and/or a number of columns subfield. The partial bandwidth information subfield indicates an RU that is of RUs included in a bandwidth corresponding to the NDPA frame and for which a beamformee (Bfee) corresponding to the AID needs to feed back channel state information. It may be alternatively understood that the partial bandwidth information subfield indicates an RU for which the Bfer requests feedback. The RU for which the station needs to feed back channel state information or the RU for which the Bfer requests feedback may be one RU or a combination of a plurality of RUs. It should be understood that RUs indicated by the partial bandwidth information subfield are a segment of contiguous RUs, and the partial bandwidth information subfield is not limited to indicating a segment of contiguous 26-tone RUs. For example, the partial bandwidth information subfield may indicate a segment of contiguous 52-tone RUs, a segment of contiguous 242-tone RUs, or a combination of a plurality of contiguous RUs of different sizes. In this application, the RU indicated by the partial bandwidth information subfield is not necessarily an actual RU. RUs correspond to subcarriers. The partial bandwidth information subfield indicates a range of subcarriers by indicating RUs in a bandwidth, to indicate a frequency domain range of a partial bandwidth for which the Bfer requests feedback. 
For example, indicating 320 MHz as one hundred and forty-eight 26-tone RUs does not mean that a bandwidth of 320 MHz includes one hundred and forty-eight 26-tone RUs. The RU indicated by the partial bandwidth information subfield is only used to indicate a corresponding frequency domain range. In this embodiment, the bandwidth corresponding to the NDPA frame is greater than 160 MHz, the column quantity indicated by the number of columns subfield is greater than 8, or both. The bandwidth corresponding to the NDPA frame may be understood as a channel sounding bandwidth. Alternatively, the bandwidth corresponding to the NDPA frame is a bandwidth of an NDP transmitted by the AP after the AP transmits the NDPA frame. It may be understood that the station information field may include either one of the partial bandwidth information subfield and the number of columns subfield, or may include both. In other words, the NDPA frame in this embodiment of this application meets at least one of the following: the bandwidth corresponding to the NDPA frame is greater than 160 MHz, or the column quantity indicated by the number of columns subfield is greater than 8. 502. The Bfer transmits the NDPA frame. Correspondingly, the Bfee receives the NDPA frame and obtains a related channel sounding parameter from the NDPA frame, for example, partial bandwidth information and/or a quantity of columns in a compressed beamforming feedback matrix. The Bfer may be an AP or a STA. The Bfee may be a STA or an AP. Optionally, after step 502, the beamformee may start a channel sounding procedure.
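The bandwidth/column condition stated above can be written as a one-line predicate. The function name is illustrative, not from the standard:

```python
def exceeds_he_ndpa_limits(bandwidth_mhz: int, num_columns: int) -> bool:
    """True when an HE NDPA frame cannot carry the request: bandwidth
    greater than 160 MHz, column quantity greater than 8, or both."""
    return bandwidth_mhz > 160 or num_columns > 8

assert exceeds_he_ndpa_limits(320, 8)       # bandwidth alone triggers it
assert exceeds_he_ndpa_limits(80, 16)       # column quantity alone
assert exceeds_he_ndpa_limits(320, 16)      # both conditions hold
assert not exceeds_he_ndpa_limits(160, 8)   # within HE NDPA limits
```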
The channel sounding procedure may include the following steps. 503. The Bfer transmits a null data packet (NDP). It should be understood that the Bfer transmits the NDP after a short inter-frame space (SIFS) elapses. Correspondingly, after the SIFS elapses following receipt of the NDPA frame, the Bfee receives the NDP based on the related channel sounding parameter obtained from the NDPA frame. 504. According to indication information in the NDPA frame, the Bfee performs channel estimation based on the NDP to obtain channel state information, and forms a beamforming report based on the channel state information. Specifically, the Bfee determines, based on the AID subfield in the station information field in the NDPA frame, that the AID subfield matches an AID of the Bfee, and determines that the Bfee itself needs to perform channel sounding. The Bfee may determine, based on the partial bandwidth information subfield in the station information field, a frequency range for which channel state information needs to be fed back, and then perform channel estimation based on the NDP, to obtain channel state information of the frequency range for which the station needs to feed back channel state information. Alternatively, the Bfee may determine the quantity of columns in the compressed beamforming feedback matrix based on Nc. The compressed beamforming feedback matrix is a part of the beamforming report and carries at least a part of channel state information. 505. The Bfee transmits the beamforming report. It may be understood that the beamforming report includes the channel state information. The Bfer may be an AP or a STA. The Bfee may be a STA or an AP. In this embodiment of this application, an example in which the Bfer is an AP and the Bfee is a STA is used for description.
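On the Bfee side, steps 504 and 505 amount to matching the AID and then scoping channel estimation and feedback by the indicated parameters. A minimal sketch, with illustrative dictionary keys standing in for the subfields:

```python
def bfee_handle_ndpa(my_aid: int, station_infos: list[dict]):
    """Find the station information field whose AID matches this Bfee,
    and return the channel sounding parameters it carries (sketch;
    key names are assumptions, not subfield names from the standard)."""
    for info in station_infos:
        if info["aid"] == my_aid:
            return {"feedback_range": info["partial_bw"],  # RU span
                    "num_columns": info["nc"]}             # Nc
    return None  # no match: this Bfee is not asked to sound the channel

params = bfee_handle_ndpa(7, [{"aid": 5, "partial_bw": (0, 36), "nc": 4},
                              {"aid": 7, "partial_bw": (0, 73), "nc": 8}])
assert params == {"feedback_range": (0, 73), "num_columns": 8}
```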
It should be understood that the technical solutions in this embodiment of this application are also applicable to a case in which the Bfer is a STA or the Bfee is an AP. Certainly, the channel sounding procedure is not limited to a solution of steps 503 to 505 proposed in this embodiment of this application. Optionally, the channel sounding procedure may be alternatively as follows: Another Bfer transmits an NDP frame, and the beamformee performs sounding based on the NDP frame. In the technical solutions in this embodiment of this application, the partial bandwidth information subfield in the NDPA frame indicates an RU that is in a bandwidth greater than 160 MHz and for which channel state information needs to be fed back. In this case, the station information field can indicate the station to sound a channel with a bandwidth greater than 160 MHz, and feed back a beamforming report based on a channel sounding result, thereby implementing data transmission in a larger bandwidth and improving transmission efficiency. Similarly, when the column quantity indicated by the number of columns subfield is greater than 8, the station information field can indicate the station to sound a channel with a column quantity greater than 8, and feed back a beamforming report based on a channel sounding result, thereby implementing data transmission in more streams and improving transmission efficiency. Specifically, the NDPA frame includes type information. The type information is used to indicate an NDPA frame variant. For example, the type information indicates that the NDPA frame is an EHT NDPA frame. The type information is carried in a field preceding one or more station information fields. In this way, when receiving an NDPA frame, a station first obtains type information from the NDPA frame, determines an NDPA frame variant based on the type information, and then determines, based on the NDPA frame variant, a policy for reading a station information field.
For example, the station is an EHT STA, and the type information indicates that the NDPA frame variant is an EHT NDPA frame. The EHT STA reads the station information field according to a structure of a station information field in an EHT NDPA frame. The EHT STA obtains the station information field that includes an AID of the EHT STA, and obtains a related channel sounding parameter (for example, an RU that is in a bandwidth greater than 160 MHz and for which channel state information needs to be fed back and/or indication of a column quantity greater than 8) from the station information field. Then, the EHT STA receives an NDP based on the related channel sounding parameter, then obtains channel state information based on the NDP, and feeds back the channel state information to an access point by using a beamforming report. It can be learned that the type information can indicate that a new NDPA frame variant is an EHT NDPA frame. The EHT NDPA frame can indicate an RU that is in a bandwidth greater than 160 MHz and for which channel state information needs to be fed back, to indicate a station to sound a channel with a bandwidth greater than 160 MHz, and feed back a beamforming report based on a channel sounding result, thereby implementing data transmission in a larger bandwidth and improving transmission efficiency. The EHT NDPA frame can further indicate a column quantity greater than 8, to indicate a station to sound a channel with a column quantity greater than 8, and feed back a beamforming report based on a channel sounding result, thereby implementing data transmission in more streams and improving transmission efficiency. In addition, in this case, a new frame does not need to be defined, and an available type remaining in a MAC frame is fully utilized, thereby saving resources.
It should be understood that if the type information indicates that the NDPA frame is a VHT NDPA frame, an HE NDPA frame, or a ranging NDPA frame, the EHT STA can also read the NDPA frame separately according to a format corresponding to the VHT NDPA frame, the HE NDPA frame, or the ranging NDPA frame. The following describes in detail type information indication solutions provided in an embodiment of this application. The type information indication solutions in this embodiment of this application are not limited to a scenario of indicating that an NDPA frame variant is an EHT NDPA frame, and are also applicable to a scenario of indicating that an NDPA frame variant is a new NDPA frame variant corresponding to a standard that occurs after 802.11be. An NDPA frame further includes a sounding dialog token. The sounding dialog token includes a frame type indication field. As shown in Table 2, in a first type information indication solution provided in this application, a sounding dialog token field includes a 2-bit frame type indication field. When values of two bits of the frame type subfield are both 1, it indicates a new NDPA frame variant. The new variant may be, for example, an EHT NDPA frame corresponding to 802.11be, or an NDPA frame corresponding to a standard that occurs after 802.11be. In this embodiment of this application, an example is used for description in which an NDPA frame variant transmitted by an AP is an EHT NDPA frame, and a frame type subfield in a sounding dialog token field indicates that an NDPA frame variant is an EHT NDPA frame.

TABLE 2
Frame type subfield
Ranging    HE    Variant
0          0     VHT NDPA frame
0          1     HE NDPA frame
1          0     Ranging NDPA frame
1          1     EHT NDPA frame

It may be understood that in this solution, the type information is carried in the frame type subfield.
In this way, defining a new NDPA frame variant as an EHT NDPA frame can be implemented with no need to change an existing correspondence between a value of the frame type subfield and an indicated NDPA frame variant. An EHT STA receives an NDPA frame from an AP, and determines the NDPA frame variant by reading a 2-bit frame type subfield in the NDPA frame. For example, the frame type subfield indicates that the NDPA frame variant is an EHT NDPA frame. The EHT STA determines, based on the frame type subfield, that the NDPA frame is an EHT NDPA frame. The EHT STA may read a station information field in the EHT NDPA frame according to a format of an EHT NDPA frame. In this way, the EHT NDPA frame can indicate an RU that is in a bandwidth greater than 160 MHz and for which channel state information needs to be fed back, so that the station can sound a channel with a bandwidth greater than 160 MHz, and feed back a beamforming report based on a channel sounding result, thereby implementing data transmission in a larger bandwidth and improving transmission efficiency. The EHT NDPA frame can further indicate a column quantity greater than 8, so that the station can sound a channel with a column quantity greater than 8, and feed back a beamforming report based on a channel sounding result, thereby implementing data transmission in more streams and improving transmission efficiency. It should be understood that in this embodiment of this application, the frame type subfield in a sounding dialog token field indicates that the NDPA frame variant is an EHT NDPA frame, and in another embodiment, the frame type subfield in the sounding dialog token field may alternatively indicate that the NDPA frame variant is a VHT NDPA frame, an HE NDPA frame, or a ranging NDPA frame.
If the frame type subfield indicates that the NDPA frame is a VHT NDPA frame, an HE NDPA frame, or a ranging NDPA frame, the EHT STA can also read the NDPA frame separately according to a format corresponding to the VHT NDPA frame, the HE NDPA frame, or the ranging NDPA frame. As shown in Table 3, in a second type information indication solution provided in this application, the foregoing entry used to indicate the ranging NDPA frame is exchanged with the foregoing entry used to indicate the EHT NDPA frame. In a frame type subfield, when a value of a bit indicating whether an NDPA frame is a ranging NDPA frame is 1, and a value of a bit indicating whether an NDPA frame is an HE NDPA frame is 0, an indicated NDPA frame variant is an EHT NDPA frame; and when the value of the bit indicating whether an NDPA frame is a ranging NDPA frame is 1, and the value of the bit indicating whether an NDPA frame is an HE NDPA frame is 1, an indicated NDPA frame variant is a ranging NDPA frame. In this way, defining a new NDPA frame variant as an EHT NDPA frame can also be implemented. A frame type subfield in a sounding dialog token field can indicate that a type of a frame variant is EHT NDPA. The EHT NDPA frame can indicate an RU that is in a bandwidth greater than 160 MHz and for which channel state information needs to be fed back, to indicate a station to sound a channel with a bandwidth greater than 160 MHz, and feed back a beamforming report based on a channel sounding result, thereby implementing data transmission in a larger bandwidth and improving transmission efficiency. The EHT NDPA frame can further indicate a column quantity greater than 8, to indicate a station to sound a channel with a column quantity greater than 8, and feed back a beamforming report based on a channel sounding result, thereby implementing data transmission in more streams and improving transmission efficiency.
TABLE 3
Frame type subfield
Ranging    HE    Variant
0          0     VHT NDPA frame
0          1     HE NDPA frame
1          0     EHT NDPA frame
1          1     Ranging NDPA frame

A (16n+12)th bit (for example, a 28th bit or a 44th bit) of each station information field in the EHT NDPA frame is a disambiguation subfield, where n is a positive integer. A first bit of a station information field corresponds to B0 of the station information field. In this way, compared with the solution corresponding to Table 2, the solution corresponding to Table 3 can prevent an HE STA from misreading. This is because the HE STA does not read the bit that is of the frame type subfield and that indicates whether an NDPA frame is a ranging NDPA frame. In the solution corresponding to Table 2, when the value of the bit that is of the frame type subfield and that indicates whether an NDPA frame is an HE NDPA frame is 1, the HE STA reads the NDPA frame as an HE NDPA frame. However, when the value of the bit that is of the NDPA frame and that indicates whether an NDPA frame is an HE NDPA frame is 1, the NDPA frame may indeed be an HE NDPA frame, or may be an EHT NDPA frame. However, the EHT NDPA frame may use more than four octets, for example, six octets. In this case, the HE STA reads the first 11 bits of the third two octets as an AID. If the first 11 bits of the third two octets of the station information field happen to match an AID of the HE STA, the HE STA reads the two octets indicating the AID and two adjacent octets following the two octets as a station information field of the HE STA, causing the HE STA to misread. According to the solution corresponding to Table 3, when the value of the bit that is of the NDPA frame and that indicates whether an NDPA frame is an HE NDPA frame is 0, the HE STA can identify the NDPA frame as a VHT NDPA frame and read the NDPA frame according to a format of a VHT NDPA frame. If the NDPA frame is indeed a VHT NDPA frame, the HE STA can correctly read the VHT NDPA frame.
If the NDPA frame is an EHT NDPA frame, the first 11 or 12 bits of each station information field in the EHT NDPA frame indicate an AID. The AID is not the AID of the HE STA, and the (16n+12)th bit of the station information field is the disambiguation subfield. In this case, the first 12 bits of each two octets in the EHT NDPA frame do not match the AID of the HE STA. Even if the HE STA considers the EHT NDPA frame as a VHT NDPA frame, reads the NDPA frame according to the format of the VHT NDPA frame, and reads the first 12 bits of each two octets of each station information field as an AID, the HE STA can recognize that the first 12 bits of each two octets do not match the AID of the HE STA, thereby preventing the HE STA from mistaking a station information field in the EHT NDPA frame for the station information field of the HE STA and consequently misreading the NDPA frame. When the value of the bit that is of the NDPA frame and that indicates whether an NDPA frame is an HE NDPA frame is 1, if the NDPA frame is indeed an HE NDPA frame, the HE STA can correctly read the HE NDPA frame. If the NDPA frame is actually a ranging NDPA frame, the HE STA can read an AID in the NDPA frame at a correct position, because a station information field in a ranging NDPA frame has a same structure as a station information field in an HE NDPA frame. Station information fields in the ranging NDPA frame do not include the AID of the HE STA, and the HE STA can also find, by reading the station information fields, that an AID in each station information field does not match the AID of the HE STA. In this way, the HE STA does not consider one of the station information fields in the NDPA frame as the station information field of the HE STA, and therefore, does not misread the NDPA frame. However, as network technologies develop, there will be a next-generation standard following 802.11be. The next-generation standard may support a larger bandwidth.
In this case, if a new NDPA frame variant occurring after the EHT NDPA frame needs to be defined, it cannot be indicated by the foregoing 2-bit frame type subfield. FIG. 6 is a schematic diagram of a structure of an NDPA frame according to an embodiment of this application. In a third type information indication solution provided in this embodiment of this application, a special station information field is newly added to the NDPA frame. The special station information field includes a special AID field and a frame subtype field. The special AID indicates the special station information field. For example, a special AID may be 2047, indicating that the station information field is a special station information field. An AID of 2047 is not defined in an existing standard. Therefore, a STA can identify the special station information field based on the special AID. It should be understood that in another embodiment, the special AID is not limited to 2047, and may be, for example, another AID that is not defined. A frame type subfield in a sounding dialog token field and the frame subtype subfield in the newly added special station information field jointly indicate an NDPA frame variant. For example, the frame type subfield indicates that the NDPA frame is not an HE NDPA frame or a ranging NDPA frame, and the frame subtype subfield in the special station information field indicates a specific NDPA frame variant. For example, the frame subtype subfield in the special station information field may indicate that the NDPA frame is an EHT NDPA frame. It should be understood that the frame subtype subfield may alternatively indicate that an NDPA frame variant is another NDPA frame variant occurring after the EHT NDPA frame.
Specifically, as shown in Table 4, in the indication solution, when values of two bits of the frame type subfield are both 0, it indicates that the NDPA frame is not an HE NDPA frame or a ranging NDPA frame.

TABLE 4
Frame type subfield    Variant
00                     (not an HE NDPA frame or a ranging NDPA frame)
01                     HE NDPA frame
10                     Ranging NDPA frame
11                     Undefined

It may be understood that in this solution, the type information is carried in the frame type subfield and the frame subtype subfield. In this way, indicating one or more new NDPA frame variants that occur after the HE NDPA frame and the ranging NDPA frame can be supported. The new NDPA frame variant may be, for example, the EHT NDPA frame or another NDPA frame occurring after the EHT NDPA frame. A STA supporting a new NDPA frame variant can determine, based on the frame subtype subfield in the special station information field, which variant the NDPA frame is specifically, so as to read the NDPA frame according to a structure of the NDPA frame variant. For example, the frame type subfield and the frame subtype subfield indicate that the NDPA frame variant is an EHT NDPA frame. The EHT STA may determine, based on the frame subtype subfield, that the NDPA frame is an EHT NDPA frame. Then, the EHT STA reads a partial bandwidth information subfield and a number of columns subfield in the NDPA frame according to a structure of an EHT NDPA frame, thereby accurately obtaining information about a partial bandwidth for which channel state information needs to be fed back and a quantity of columns for which feedback needs to be provided. If the frame type subfield indicates that the NDPA frame is a VHT NDPA frame, an HE NDPA frame, or a ranging NDPA frame, the EHT STA can also read the NDPA frame separately according to a format corresponding to the VHT NDPA frame, the HE NDPA frame, or the ranging NDPA frame. It should be understood that an octet quantity of a station information field in this embodiment of this application is an integer multiple of 2.
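The third indication solution can be sketched as a two-stage decode: the 2-bit frame type subfield (values as in Table 4) is read first, and a value of 00 defers to the frame subtype subfield in the special station information field, located by its special AID. The field names, and the fallback to a VHT NDPA frame when no special field is present, are assumptions of this sketch:

```python
SPECIAL_AID = 2047  # example special AID from the description above

def decode_variant(ranging_bit: int, he_bit: int,
                   station_infos: list[dict]) -> str:
    """Joint decode of frame type subfield plus frame subtype subfield."""
    if (ranging_bit, he_bit) == (0, 1):
        return "HE NDPA frame"
    if (ranging_bit, he_bit) == (1, 0):
        return "Ranging NDPA frame"
    if (ranging_bit, he_bit) == (1, 1):
        return "Undefined"
    # 00: not an HE NDPA frame or a ranging NDPA frame; the special
    # station information field names the specific variant.
    for info in station_infos:
        if info["aid"] == SPECIAL_AID:
            return info["frame_subtype"]    # e.g. "EHT NDPA frame"
    return "VHT NDPA frame"  # assumption: no special field means VHT

infos = [{"aid": SPECIAL_AID, "frame_subtype": "EHT NDPA frame"},
         {"aid": 5}]
assert decode_variant(0, 0, infos) == "EHT NDPA frame"
assert decode_variant(0, 0, [{"aid": 5}]) == "VHT NDPA frame"
```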
The station information field may use four octets, or may use more than four octets, for example, six octets. In this solution, the frame type subfield in the sounding dialog token field indicates NDPA frames other than the HE NDPA frame and the ranging NDPA frame as a category of NDPA frames, and then the frame subtype subfield in the special station information field indicates which variant the NDPA frame is specifically. This can prevent an HE STA or a ranging STA from misreading a new NDPA frame variant. For example, the frame subtype subfield indicates that the NDPA frame is an EHT NDPA frame. The HE STA or the ranging STA can consider the EHT NDPA frame as a VHT NDPA frame based on the frame type subfield in the sounding dialog token field, and read the EHT NDPA frame according to a structure of a VHT NDPA frame. The station information field in the EHT NDPA frame does not include an AID of the HE STA or the ranging STA. The earlier-version HE STA or ranging STA receiving the EHT NDPA frame does not provide feedback based on the EHT NDPA frame, because an AID in the station information field does not match the AID of the earlier-version HE STA or ranging STA. This can prevent the earlier-version HE STA or ranging STA from misreading the EHT NDPA frame. Further, in an NDPA frame transmitted by an AP, a (16n+12)th bit of a special station information field is a disambiguation subfield, and a (16n+12)th bit (for example, a 28th bit) of each station information field is a disambiguation subfield, where n is a positive integer. A first bit of the special station information field corresponds to B0 of the special station information field. A first bit of each station information field corresponds to B0 of the station information field. In this case, when the frame type subfield indicates that the NDPA frame is not an HE NDPA frame or a ranging NDPA frame, a VHT STA does not read the frame type subfield in the sounding dialog token field, and cannot identify the NDPA frame variant.
In this embodiment of this application, on a basis of the conventional technology, an NDPA frame variant indicated when the values of the two bits of the frame type subfield are both 0 is changed, which does not cause the VHT STA to misread another NDPA frame variant. For example, an NDPA frame transmitted by an AP is an EHT NDPA frame, the first 11 bits of a special station information field and the first 11 bits of any station information field in the EHT NDPA frame are set as an 11-bit AID, the AID is not an AID of the VHT STA, and a (16n+12)th bit of the special station information field and a (16n+12)th bit of any station information field are the disambiguation subfield. In this case, the first 12 bits of each two octets of the station information field and the special station information field in the EHT NDPA frame do not match the AID of the VHT STA. Even if the VHT STA reads the EHT NDPA frame according to the structure of the VHT NDPA frame, and reads the first 12 bits of each two octets of the special station information field or the station information field as an AID, the VHT STA can recognize that the first 12 bits of each two octets do not match the AID of the VHT STA, thereby preventing the VHT STA from mistaking the special station information field or a station information field in the EHT NDPA frame for a station information field of the VHT STA and consequently misreading the NDPA frame. In addition, when an indication manner in Table 4 is compared with an indication manner in Table 3, the original entry that indicates the ranging NDPA frame and that is of the frame type subfield does not need to be changed, either. In this case, the ranging STA can still normally identify a ranging NDPA frame based on the frame type subfield.
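The disambiguation mechanism described above, for both the HE STA and VHT STA cases, can be demonstrated numerically. Assuming the usual 802.11 convention that field bits are read least-significant-bit first, setting the (16n+12)th bit (index 16n+11 counting from B0) forces every aligned two-octet window after the first to decode to a 12-bit value of at least 2048, outside the valid AID range, so an earlier-version STA cannot match it. A sketch with illustrative helpers:

```python
def window_aid12(bits: list[int], pair_index: int) -> int:
    """Read the first 12 bits of the pair_index-th two-octet window as
    a 12-bit AID, LSB first (an assumption about bit significance)."""
    start = 16 * pair_index
    return sum(b << i for i, b in enumerate(bits[start:start + 12]))

# Toy station information field of four two-octet pairs; the
# (16n+12)th bit, i.e. index 16n+11 counting from B0, is set for n >= 1.
field = [0] * 64
for n in range(1, 4):
    field[16 * n + 11] = 1

# Every aligned window after the first decodes to >= 2048, which
# cannot match any real AID an earlier-version STA holds.
for pair in range(1, 4):
    assert window_aid12(field, pair) >= 2048
```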
It can be learned from the foregoing description that by using the third type information indication solution, a new NDPA frame variant (for example, an EHT NDPA frame) can be indicated, and the VHT STA, the HE STA, and the ranging STA can be prevented from misreading the new NDPA frame variant. In some embodiments, the special station information field may further include a disallowed subchannel bitmap. The disallowed subchannel bitmap is used to indicate preamble puncturing information. For example, each bit of the disallowed subchannel bitmap corresponds to frequency domain resources of one granularity, and each bit indicates whether the corresponding frequency domain resources are punctured. The granularity may be 2^n*10 MHz, where n is a positive integer. For example, the granularity may be 20 MHz, 40 MHz, 80 MHz, or the like. A bit quantity of the disallowed subchannel bitmap may be fixed. For example, a bit quantity of a disallowed subchannel bitmap in an EHT NDPA frame may be set to 16. Sixteen bits can indicate preamble puncturing information of a bandwidth of 320 MHz. When a bandwidth is less than 320 MHz, each 20 MHz corresponds to one bit of the disallowed subchannel bitmap, and a remaining bit in the disallowed subchannel bitmap indicates that corresponding frequency domain resources are punctured. For example, when a bandwidth is 240 MHz, first 12 bits of the disallowed subchannel bitmap indicate preamble puncturing information of 240 MHz, and last four bits indicate that corresponding frequency domain resources are punctured. The bit quantity of the disallowed subchannel bitmap may be alternatively variable. The bit quantity of the disallowed subchannel bitmap may be determined based on a bandwidth. For example, when the granularity is 20 MHz, if a bandwidth is 160 MHz, the bit quantity of the disallowed subchannel bitmap may be 8; or if a bandwidth is 320 MHz, the bit quantity of the disallowed subchannel bitmap may be 16. 
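As a sketch of the fixed-length variant described above (16 bits, 20 MHz per bit, spare bits marked punctured), the following Python helper builds such a bitmap. The choice that 1 means "punctured" and the LSB-first subchannel ordering are assumptions made for illustration, since the text fixes neither.

```python
def disallowed_subchannel_bitmap(punctured: list, bandwidth_mhz: int) -> int:
    """Build a fixed 16-bit disallowed subchannel bitmap.

    punctured[i] says whether the i-th 20 MHz subchannel (ascending
    frequency, carried in bit i of the bitmap) is punctured; 1 = punctured
    is an assumed polarity. Bits beyond bandwidth/20 are set to the
    'punctured' value, as the text specifies for bandwidths below 320 MHz.
    """
    n_used = bandwidth_mhz // 20
    assert len(punctured) == n_used and n_used <= 16
    bitmap = 0
    for i, is_punctured in enumerate(punctured):
        if is_punctured:
            bitmap |= 1 << i
    for i in range(n_used, 16):  # remaining bits indicate punctured resources
        bitmap |= 1 << i
    return bitmap
```

For the 240 MHz example in the text, the first 12 bits carry real puncturing information and the last four bits are forced to the punctured value.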
When the granularity is 40 MHz, if a bandwidth is 160 MHz, the bit quantity of the disallowed subchannel bitmap may be 4; or if a bandwidth is 320 MHz, the bit quantity of the disallowed subchannel bitmap may be 8. When a bandwidth is less than 80 MHz, the bit quantity of the disallowed subchannel bitmap may also be 4, each 20 MHz corresponds to one bit in the disallowed subchannel bitmap, and a remaining bit in the disallowed subchannel bitmap indicates that corresponding frequency domain resources are punctured. For example, when a bandwidth is 40 MHz, first two bits of the disallowed subchannel bitmap are used to indicate a puncturing status of a first 20 MHz and a puncturing status of a second 20 MHz of the 40 MHz, and last two bits of the disallowed subchannel bitmap indicate that corresponding frequency domain resources are punctured. In some embodiments, the disallowed subchannel bitmap may be alternatively carried in an NDP that is transmitted by an AP after the AP transmits an NDPA frame. Alternatively, the disallowed subchannel bitmap is carried only in an NDP. Optionally, the special station information field further includes a bandwidth indication subfield for indicating a bandwidth. A partial bandwidth indicated by the partial bandwidth information subfield is in a frequency domain range of the bandwidth indicated by the bandwidth indication subfield. To indicate an RU that is in a bandwidth greater than 160 MHz and for which channel state information needs to be fed back or to indicate more spatial streams, an embodiment of this application provides some station information field design solutions. In this embodiment of this application, an example is used for description in which an NDPA frame variant transmitted by an AP is an EHT NDPA frame. The station information field design solutions in this application are not limited to the EHT NDPA frame, and are also applicable to an NDPA frame variant corresponding to a standard that occurs after 802.11be. 
The following describes the station information field design solutions that are provided in a case in which a bandwidth corresponding to an NDPA frame is greater than 160 MHz. In a first station information field design solution provided in this embodiment of this application, a quantity of octets of a station information field is increased, and a quantity of bits of a partial bandwidth information subfield is also increased, so that the partial bandwidth information subfield can indicate an RU that is in a bandwidth greater than 160 MHz and for which channel state information needs to be fed back. For example, a station information field in an NDPA frame may use six octets, and a partial bandwidth information subfield uses more than seven bits. In this way, an RU that is in a larger bandwidth and for which channel state information needs to be fed back can be indicated. When the station information field is six octets, a quantity of bits of a number of columns subfield can also be increased. For example, the number of columns subfield is more than three bits. In this case, a larger column quantity can also be indicated. It should be understood that an octet quantity of the station information field in the NDPA frame may be 2*N, where N is greater than or equal to 3. For example, the octet quantity of the station information field in the NDPA frame may be alternatively 8, 10, 12, or the like. Further, the six octets may be divided sequentially into three two-octet units. A 12th bit of the second two octets and a 12th bit of the third two octets are set as a disambiguation field. 
In this case, even if a VHT STA reads the NDPA frame according to a format of a VHT NDPA frame and reads first 12 bits of each two octets as an AID, the VHT STA can recognize that the first 12 bits of each two octets do not match an AID of the VHT STA, thereby preventing the VHT STA from mistaking a station information field in the new NDPA frame variant for a station information field of the VHT STA and consequently misreading the new NDPA frame variant. In this solution, the partial bandwidth information subfield in the station information field includes an RU start index and an RU end index. The RU start index uses more than seven bits, and the RU end index also uses more than seven bits. The number of columns subfield in the station information field is more than three bits. In a specific embodiment, a bandwidth is 320 MHz, and a length of a station information field is six octets. A structure of a station information field in an EHT NDPA frame is shown in FIG. 7. In this embodiment, a partial bandwidth information subfield in the station information field includes an RU start index and an RU end index. The RU start index uses eight bits and is used to indicate one 26-tone RU in 320 MHz. The RU end index also uses eight bits and is used to indicate one 26-tone RU in 320 MHz. A 12th bit of the second two octets (a 28th bit of the station information field) and a 12th bit of the third two octets (a 44th bit of the station information field) are set as a disambiguation field. As shown in FIG. 7, a length of a station information field in a VHT NDPA frame is two octets, that is, 16 bits in total. First 12 bits of the station information field in the VHT NDPA frame are an AID. A VHT STA does not read a frame type subfield or identify an NDPA frame variant. The VHT STA reads the EHT NDPA frame as a VHT NDPA frame. This causes the VHT STA to read first 12 bits of each two octets of a station information field in the EHT NDPA frame as an AID. 
If first 12 bits of the second two octets of a station information field happen to match an AID of the VHT STA receiving the EHT NDPA frame, misreading is caused. Of the station information field in the EHT NDPA frame provided in this embodiment, first 11 bits of the first two octets are an AID of an EHT STA. In this case, first 12 bits of the first two octets do not match the AID of the VHT STA. A 12th bit (B11) of the second two octets of the station information field is a disambiguation field. The first 12 bits of the second two octets of the station information field do not match the AID of the VHT STA, either. In this way, the VHT STA can be prevented from mistaking a station information field for a station information field of the VHT STA and consequently misreading the EHT NDPA frame. It may be understood that an 8-bit index field can indicate a maximum of 2^8=256 cases. 320 MHz may include a maximum of one hundred and forty-eight 26-tone RUs. The 8-bit RU start index may indicate one of the one hundred and forty-eight 26-tone RUs. The 8-bit RU end index may also indicate one of the one hundred and forty-eight 26-tone RUs. In the embodiment in which the first station information field design solution is used, the bandwidth corresponding to the NDPA frame is not limited to 320 MHz in the foregoing example, and may be alternatively another bandwidth greater than 160 MHz, for example, 240 MHz or 480 MHz. Accordingly, a bit quantity of the partial bandwidth information subfield may be adaptively adjusted based on a size of the bandwidth. It should be understood that the first station information field design solution may be implemented in combination with the foregoing second and third type information indication solutions, or may be implemented separately. For the EHT NDPA frame using the foregoing first station information field design solution, setting of the disambiguation field can prevent the VHT STA from misreading. 
However, if the EHT NDPA frame uses the foregoing first type information indication solution, an HE STA does not read a bit that is of a frame type subfield and that indicates whether an NDPA frame is a ranging NDPA frame. As a result, the HE STA cannot distinguish an HE NDPA frame from an EHT NDPA frame. In this case, the HE STA reads the EHT NDPA frame as an HE NDPA frame. When reading third two octets of a station information field in the EHT NDPA frame, the HE STA reads first 11 bits of the third two octets as an AID indicated by first 11 bits of a station information field in an HE NDPA frame. If the first 11 bits of the third two octets of the station information field in the EHT NDPA frame match an AID of the HE STA receiving the EHT NDPA frame, it causes the HE STA to misread the EHT NDPA frame. If the EHT NDPA frame using the foregoing first station information field design solution uses the second type information indication solution, the HE STA reads the EHT NDPA frame as a VHT NDPA frame. The station information field in the EHT NDPA frame does not include an AID of an HE STA, an AID of a ranging STA, or an AID of a VHT STA, and a disambiguation subfield is set at another position that may be considered by the HE STA, the ranging STA, or the VHT STA to have an AID specified. In this way, the HE STA can be prevented from misreading the EHT NDPA frame. If the EHT NDPA frame using the foregoing first station information field design solution uses the third type information indication solution, when an indication of a bit that is of the frame type subfield and that indicates whether an NDPA frame is an HE NDPA frame is 1, it only indicates that the NDPA frame is an HE NDPA frame, and there is no other indication case. 
The HE STA can accurately determine, based on the indication of the bit that is of the frame type subfield and that indicates whether an NDPA frame is an HE NDPA frame, whether the NDPA frame is an HE NDPA frame, and does not read another NDPA frame variant as an HE NDPA frame. FIG. 8 is a schematic diagram of a structure of a station information field in an EHT NDPA frame according to an embodiment of this application. In a second station information field design solution provided in this embodiment of this application, a station information field is four octets. When the EHT NDPA frame is compared with an HE NDPA frame, an octet quantity of a station information field is not changed. In this solution, a quantity of bits of a partial bandwidth information subfield is compressed, and an indication manner of the partial bandwidth information subfield is improved, to indicate an RU that is in a larger bandwidth and for which channel state information needs to be fed back and/or to indicate a column quantity greater than 8. For example, the quantity of the bits of the partial bandwidth information subfield may be compressed, so that the partial bandwidth information subfield uses 13 or fewer bits. In this way, a quantity of bits of a number of columns subfield can be increased, so that the number of columns subfield uses four or more bits, and the number of columns subfield can indicate a larger column quantity. Alternatively, when the quantity of the bits of the partial bandwidth information subfield is compressed, a resource unit start index and a resource unit end index may support indication of an RU that is in a larger bandwidth and for which channel state information needs to be fed back. Certainly, when this solution is used, the bit quantity of the number of columns subfield does not necessarily need to be increased to be greater than or equal to 4. The number of columns subfield may alternatively use three bits. 
In a possible implementation, the partial bandwidth information subfield in the station information field includes a resource unit start index and a resource unit offset index. The resource unit start index is used to indicate a first RU of RUs for which a station corresponding to an AID needs to feed back channel state information. The resource unit offset index indicates an offset, relative to the first RU, of a last RU for which channel state information needs to be fed back. Alternatively, the resource unit start index indicates a first RU for which a Bfer (beamformer) requests feedback, and the resource unit offset index indicates an offset, relative to the first RU, of a last RU for which the Bfer requests feedback. The offset may be 0. It may be understood that the RU offset indicated by the resource unit offset index does not indicate an RU actually included in a bandwidth. Actually, the resource unit offset index indicates a subcarrier offset in frequency domain by indicating the RU offset, to cooperate with the resource unit start index to indicate a contiguous frequency domain range. As shown in FIG. 8, the station information field is not limited to being used by the EHT NDPA frame, and may also be used by another NDPA frame variant occurring after the EHT NDPA frame. Refer to FIG. 8. The resource unit start index uses eight bits. In this case, when a bandwidth is 320 MHz, the resource unit start index can indicate one of one hundred and forty-eight 26-tone RUs corresponding to 320 MHz. The resource unit offset index may use five or fewer bits. In this case, the partial bandwidth information subfield uses 13 or fewer bits. Therefore, the number of columns subfield can use four or more bits, thereby indicating a larger column quantity. It should be understood that a 28th bit (B27) of the station information field is set as a disambiguation subfield, which is used to prevent an earlier-version STA from misreading. 
For a principle of how the disambiguation subfield prevents misreading, refer to a related description in the foregoing embodiment. Details are not described herein again. In a specific embodiment, offsets of different sizes may be indicated by different resource unit offset indexes. For a scenario in which a bandwidth is 320 MHz, the following uses an example for description. In an example, a correspondence between a resource unit offset index and a corresponding offset is shown in Table 5 below.

TABLE 5
  Resource unit offset index    Offset
  0000                          0
  0001                          26-tone RU
  0010                          52-tone RU
  0011                          106-tone RU
  0100                          242-tone RU
  0101                          484-tone RU
  0110                          996-tone RU
  0111                          2*996-tone RU
  1000                          3*996-tone RU
  1001                          4*996-tone RU

For example, when the resource unit offset index is 0010, it indicates that an offset, relative to a first RU, of a last RU for which channel state information needs to be fed back is a 52-tone RU. Further, if the resource unit start index indicates that a first RU for which a station needs to feed back channel state information is a first 26-tone RU, an RU for which the station needs to feed back channel state information is the first 26-tone RU and a 52-tone RU adjacent to the first 26-tone RU. In this case, a frequency domain range for which the station needs to feed back channel state information is a frequency domain range corresponding to the first 26-tone RU and the 52-tone RU adjacent to the first 26-tone RU, that is, first 78 subcarriers with lowest frequencies in 320 MHz. In another example, a correspondence between a resource unit offset index and a corresponding offset is shown in Table 6 below.

TABLE 6
  Resource unit offset index    Offset
  0000                          0
  0001                          26-tone RU
  0010                          2*26-tone RU
  0011                          4*26-tone RU
  0100                          9*26-tone RU
  0101                          18*26-tone RU
  0110                          37*26-tone RU
  0111                          74*26-tone RU
  1000                          111*26-tone RU
  1001                          148*26-tone RU

For example, when the resource unit offset index is 0011, it indicates that an offset, relative to a first RU, of a last RU for which channel state information needs to be fed back is a 4*26-tone RU. 
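The 78-subcarrier example above can be reproduced with a small Python sketch of the Table 5 decoding (offset indexes mapped to tone counts, as reconstructed here); the subcarrier count ignores guard and DC tones, which is a simplification for illustration only.

```python
# Offset sizes from Table 5, expressed in tones (index -> offset width).
OFFSET_TONES = {
    0b0000: 0, 0b0001: 26, 0b0010: 52, 0b0011: 106, 0b0100: 242,
    0b0101: 484, 0b0110: 996, 0b0111: 2 * 996, 0b1000: 3 * 996,
    0b1001: 4 * 996,
}

def feedback_span_tones(ru_offset_index: int) -> int:
    """Width in subcarriers of the contiguous feedback range: the 26-tone
    start RU plus the offset selected from Table 5."""
    return 26 + OFFSET_TONES[ru_offset_index]
```

With offset index 0010 the span is 26 + 52 = 78 subcarriers, matching the worked example in the text.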
When the bandwidth is 320 MHz, a 26-tone RU is used as a granularity, a first RU is indicated by the resource unit start index, and a last RU is indicated by the resource unit end index, the resource unit start index and the resource unit end index each require eight bits. In other words, the partial bandwidth information subfield requires at least 16 bits. In contrast, in the foregoing solution in which indication is performed by using the resource unit start index and the resource unit offset index, 10 resource unit offset indexes can indicate 10 resource offsets of different sizes, and the resource unit offset index may use four bits. In this way, a bit quantity of the partial bandwidth information subfield can be reduced. In addition, a saved bit may be used to increase bits of the number of columns subfield, thereby indicating a larger quantity of space-time streams. Certainly, in another implementation, the correspondence between a resource unit offset index and an offset is not limited to the foregoing example shown in Table 5 or Table 6, and another correspondence may be alternatively set. A bit quantity of the resource unit offset index is also not limited to 4, provided that a sum of a bit quantity of the resource unit start index and the bit quantity of the resource unit offset index is less than or equal to 13. In another specific embodiment, the resource unit offset index indicates an offset by indicating a multiple that an RU for which channel state information needs to be fed back is of a basic granularity. The basic granularity is greater than or equal to a 26-tone RU. In other words, the resource unit offset index indicates the offset by indicating a value obtained by dividing the offset by the basic granularity. When being expressed by a mathematical expression, the resource unit offset index may be expressed as N=offset/basic granularity. The resource unit offset index may indicate the offset by indicating N, where N is a positive integer. 
For example, a bandwidth is 320 MHz, including one hundred and forty-eight 26-tone RUs, and a basic granularity is an 8*26-tone RU. In this case, an offset has 20 values. The resource unit offset index may use five bits to indicate 20 values. For a correspondence between a resource unit offset index and a corresponding offset, refer to Table 7.

TABLE 7
  Resource unit offset index    Offset
  00000                         0
  00001                         8*26-tone RU
  00010                         16*26-tone RU
  00011                         24*26-tone RU
  00100                         32*26-tone RU
  00101                         40*26-tone RU
  00110                         48*26-tone RU
  00111                         56*26-tone RU
  01000                         64*26-tone RU
  01001                         72*26-tone RU
  01010                         80*26-tone RU
  01011                         88*26-tone RU
  01100                         96*26-tone RU
  01101                         104*26-tone RU
  01110                         112*26-tone RU
  01111                         120*26-tone RU
  10000                         128*26-tone RU
  10001                         136*26-tone RU
  10010                         144*26-tone RU
  10011                         150*26-tone RU

If the offset is eight 26-tone RUs, the resource unit offset index may be 00001. If the offset is sixteen 26-tone RUs, the resource unit offset index may be 00010. The resource unit start index may use eight bits. The resource unit start index may indicate one of the one hundred and forty-eight 26-tone RUs. An STA may determine, based on the resource unit start index and the resource unit offset index, a first RU and a last RU for which channel state information needs to be fed back. Alternatively, the STA may determine, based on the resource unit start index and the resource unit offset index, a range of RUs for which channel state information needs to be fed back. For example, if the resource unit start index indicates that a first RU for which a STA corresponding to the station information field needs to feed back channel state information is a ninth 26-tone RU of the one hundred and forty-eight 26-tone RUs, and the resource unit offset index is 00010, it can be determined that a last RU for which the STA needs to feed back channel state information is a 25th 26-tone RU. RUs for which the STA needs to feed back channel state information are the ninth 26-tone RU to the 25th 26-tone RU. 
For another example, if an ordinal value of the last RU that is determined based on the resource unit start index and the resource unit offset index and for which the STA needs to feed back channel state information is greater than 148, it can be determined that the last RU for which the STA needs to feed back channel state information is a 148th RU. For example, if the first RU that is indicated by the resource unit start index and for which channel state information needs to be fed back is a 142nd RU of the one hundred and forty-eight 26-tone RUs, and the resource unit offset index indicates that the offset is eight 26-tone RUs, it can be determined that the last RU for which the STA needs to feed back channel state information is the 148th RU. It may be understood that a degree of compression of the resource unit offset index may be adjusted by adjusting the basic granularity. A larger basic granularity indicates a greater degree of compression. A greater degree of compression indicates a smaller quantity of bits required to indicate a same offset. In this way, the partial bandwidth information subfield can support indication of an RU that is in a larger bandwidth and for which channel state information needs to be fed back. In another possible implementation, the partial bandwidth information subfield includes an RU indication index. The RU indication index includes a frequency domain indication part and an RU indication part. The frequency domain indication part is used to indicate a frequency domain range in which an RU for which a station corresponding to an AID needs to feed back channel state information is located. The RU indication part is used to indicate the RU for which channel state information needs to be fed back. In a specific embodiment, a bandwidth is 320 MHz, and the 320 MHz is divided into four frequency domain ranges, which may also be referred to as four frequency segments. 
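The basic-granularity decoding with the capping rule just described can be sketched as follows (Python, 1-based RU ordinals, using the 320 MHz and 8*26-tone parameters from the example):

```python
TOTAL_26TONE_RUS = 148   # 320 MHz contains 148 26-tone RUs
BASIC_GRANULARITY = 8    # offset unit: an 8*26-tone RU, per the example

def feedback_ru_range(start_index: int, offset_index: int):
    """Return (first, last) 1-based 26-tone RU ordinals for feedback.

    offset_index carries N = offset / basic granularity; when the computed
    last RU would exceed RU 148, it is capped at RU 148 per the text.
    """
    first = start_index + 1                     # index 0 -> first 26-tone RU
    offset_rus = offset_index * BASIC_GRANULARITY
    last = min(first + offset_rus, TOTAL_26TONE_RUS)
    return first, last
```

With a start at the ninth RU and offset index 00010 (N=2, sixteen RUs) the range is RUs 9 to 25; a start at RU 142 with N=1 caps at RU 148, both matching the worked examples.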
The four frequency domain ranges correspond to a first 80 MHz, a second 80 MHz, a third 80 MHz, and a fourth 80 MHz in the 320 MHz sequentially in ascending order of frequency. The frequency domain indication part may use two bits, and indicates that a frequency domain range in which an RU for which a STA needs to feed back channel state information is located is one of the four frequency domain ranges. For example, for a correspondence between a frequency domain indication part and an indicated frequency domain range, refer to Table 8.

TABLE 8
  Frequency domain indication part    Frequency domain range
  00                                  First 80 MHz
  01                                  Second 80 MHz
  10                                  Third 80 MHz
  11                                  Fourth 80 MHz

The RU indication part uses seven bits, and is used to indicate that the RU for which channel state information needs to be fed back is one RU or a combination of a plurality of RUs in the frequency domain range indicated by the frequency domain indication part. For a relationship between an RU indication part and an indicated RU, refer to Table 9. 
TABLE 9
  RU indication part (seven bits), description of an indicated RU, and number of entries:
  0-36: May indicate thirty-seven 26-tone RUs in 80 MHz, respectively (37 entries; possible 26-tone RU cases in 80 MHz).
  37-52: May indicate sixteen 52-tone RUs in 80 MHz, respectively (16 entries; possible 52-tone RU cases in 80 MHz).
  53-60: May indicate eight 106-tone RUs in 80 MHz, respectively (8 entries; possible 106-tone RU cases in 80 MHz).
  61-64: May indicate four 242-tone RUs in 80 MHz, respectively (4 entries; possible 242-tone RU cases in 80 MHz).
  65-66: May indicate two 484-tone RUs in 80 MHz, respectively (2 entries; possible 484-tone RU cases in 80 MHz).
  67: Indicates one 996-tone RU in 80 MHz (1 entry; 996-tone RU case in 80 MHz).
  68-70: Two 996-tone RUs (3 entries).
  71: Four 996-tone RUs (1 entry).
  72-75: A 106-tone RU at a low frequency in one 20 MHz of 80 MHz is combined with a center 26-tone RU in the 20 MHz (4 entries; each entry corresponds to one 20 MHz).
  76-79: A 106-tone RU at a high frequency in one 20 MHz of 80 MHz is combined with a center 26-tone RU in the 20 MHz (4 entries; each entry corresponds to one 20 MHz).
  80-83: A 52-tone RU at a second lowest frequency in one 20 MHz of 80 MHz is combined with an adjacent 26-tone RU on a same side in the 20 MHz (4 entries; each entry corresponds to one 20 MHz).
  84-87: A 52-tone RU at a second lowest frequency in one 20 MHz of 80 MHz is combined with a center 26-tone RU in the 20 MHz (4 entries; each entry corresponds to one 20 MHz).
  88-91: A 52-tone RU at a second highest frequency in one 20 MHz of 80 MHz is combined with a contiguous 26-tone RU on a same side in the 20 MHz (4 entries; each entry corresponds to one 20 MHz).
  92-95: A 52-tone RU at a second highest frequency in one 20 MHz of 80 MHz is combined with a center 26-tone RU in the 20 MHz (4 entries; each entry corresponds to one 20 MHz).
  96-97: A 484-tone RU in 80 MHz is combined with an adjacent 242-tone RU (2 entries; the two entries correspond to a 484-tone RU at a low frequency and a 484-tone RU at a high frequency, respectively).
  98-99: A 484-tone RU in 80 MHz is combined with a non-adjacent 242-tone RU (2 entries; the two entries correspond to a 484-tone RU at a low frequency and a 484-tone RU at a high frequency, respectively).
  100: Two 242-tone RUs on two sides in 80 MHz are combined (1 entry).
  101-102: A currently indicated 996-tone RU is combined with a 484-tone RU that is in an adjacent 80 MHz and is not adjacent to the 996-tone RU (2 entries; the two entries correspond to a 484-tone RU located in an adjacent 80 MHz at a low frequency and a 484-tone RU located in an adjacent 80 MHz at a high frequency, respectively).
  103-104: A currently indicated 996-tone RU is combined with a 484-tone RU and a 242-tone RU that are in an adjacent 80 MHz and are not adjacent to the 996-tone RU (2 entries; the two entries correspond to a 484-tone RU and a 242-tone RU located in an adjacent 80 MHz at a low frequency and a 484-tone RU and a 242-tone RU located in an adjacent 80 MHz at a high frequency, respectively).
  105-106: A currently indicated 996-tone RU is combined with two 242-tone RUs in an adjacent 80 MHz (2 entries; the two entries correspond to two 242-tone RUs located in an adjacent 80 MHz at a low frequency and two 242-tone RUs located in an adjacent 80 MHz at a high frequency, respectively).
  107: Two 996-tone RUs at a lowest frequency in 320 MHz are combined with a 996-tone RU at a highest frequency (1 entry).
  108: A 996-tone RU at a lowest frequency in 320 MHz is combined with two 996-tone RUs at a highest frequency (1 entry).
  109: Three 996-tone RUs at a lowest frequency in 320 MHz are combined (1 entry).
  110: Three 996-tone RUs at a highest frequency in 320 MHz are combined (1 entry).
  111-126: A 484-tone RU and a 242-tone RU that are in a currently indicated 80 MHz are combined with a 484-tone RU and a 242-tone RU that are in an adjacent 80 MHz (16 entries).
  127: Reserved (1 entry).

The RU indication part of each entry in Table 9 is a decimal value. In the partial bandwidth information subfield, the RU indication part of the RU indication index is a binary value corresponding to the decimal value in Table 9. 
It should be understood that a correspondence between the RU indication part and the indicated RU that are of each entry in Table 9 is merely an optional embodiment. In this application, the correspondence between an RU indication part and an indicated RU is not limited to the correspondence in Table 9. In another embodiment, the correspondence between an RU indication part and an indicated RU may be alternatively different from the correspondence in Table 9. With reference to Table 8 and Table 9, in a specific example, first two bits of an RU indication index are a frequency domain indication part, and last seven bits of the RU indication index are an RU indication part. If the RU indication index is 000000001, 00 indicates a first 80 MHz, 0000001 indicates a second 26-tone RU in the 80 MHz, and therefore, 000000001 indicates that an RU for which a station needs to feed back channel state information is the second 26-tone RU in the first 80 MHz. If the RU indication index is 111000010, 11 indicates a fourth 80 MHz, 1000010 indicates a second 484-tone RU in the 80 MHz, and therefore, 111000010 indicates that an RU for which a station needs to feed back channel state information is the second 484-tone RU in the fourth 80 MHz. In still another possible implementation, the partial bandwidth information subfield includes a resource unit start index and a resource unit end index. The resource unit start index is used to indicate a first RU for which a station corresponding to an AID needs to feed back channel state information. The resource unit end index is used to indicate a last RU for which the station corresponding to the AID needs to feed back channel state information. In this implementation, the resource unit start index and the resource unit end index are compressed, and granularities of RUs indicated by the resource unit start index and the resource unit end index are increased. 
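Splitting a 9-bit RU indication index into its two parts can be sketched as below (Python). Treating the first two bits as the most significant bits of the index matches the worked examples (000000001 and 111000010).

```python
def split_ru_indication_index(index9: int):
    """Split a 9-bit RU indication index: the first two bits (MSBs) select
    one of the four 80 MHz segments per Table 8, and the last seven bits
    give the decimal entry of Table 9 within that segment."""
    freq_part = (index9 >> 7) & 0b11   # 0..3 -> first..fourth 80 MHz
    ru_part = index9 & 0x7F            # 0..127 -> Table 9 entry
    return freq_part, ru_part
```

Here 000000001 yields (0, 1), the second 26-tone RU in the first 80 MHz, and 111000010 yields (3, 66), entry 66 (the second 484-tone RU) in the fourth 80 MHz.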
For example, the granularity is increased from a 26-tone RU to a 2*26-tone RU or a 4*26-tone RU. Specifically, the first RU indicated by the resource unit start index is a (k1*n+c1)th 26-tone RU, where c1 and k1 are positive integers, and n is a natural number. The last RU indicated by the resource unit end index is a (k2*m+c2)th 26-tone RU, where c2 and k2 are positive integers, and m is a natural number. The resource unit start index indicates the first RU by indicating n. The resource unit end index indicates the last RU by indicating m. k1≥2 and/or k2≥2. For example, in a specific embodiment, a bandwidth is 320 MHz, k1 is 2, and c1 is 1. A resource unit start index indicates a first, third, fifth, . . . , (2n+1)th, or 147th 26-tone RU, where n≤73. In this case, a granularity of a first RU indicated by the resource unit start index is a 2*26-tone RU. The resource unit start index indicates a total of 74 cases and requires seven bits. k2 is 4, and c2 is 2. A resource unit end index indicates a second, sixth, . . . , (4m+2)th, or 150th 26-tone RU. In this case, a granularity of a last RU indicated by the resource unit end index is a 4*26-tone RU. The resource unit end index indicates a total of 37 cases and requires six bits, and m≤37. For a correspondence between a resource unit start index and an RU, refer to Table 10. For a correspondence between a resource unit end index and an RU, refer to Table 11.

TABLE 10
  Resource unit start index    RU
  0000000                      First 26-tone RU
  0000001                      Third 26-tone RU
  0000010                      Fifth 26-tone RU
  . . .                        . . .
  1001001                      147th 26-tone RU

TABLE 11
  Resource unit end index    RU
  000000                     Second 26-tone RU
  000001                     Sixth 26-tone RU
  000010                     10th 26-tone RU
  . . .                      . . .
  100101                     150th 26-tone RU

It may be understood that when an ordinal value of an RU indicated by the resource unit end index is greater than an ordinal value of a last RU corresponding to a bandwidth, the RU indicated by the resource unit end index is the last RU corresponding to the bandwidth. 
For example, in the foregoing example, when a bandwidth is 320 MHz, and the resource unit end index indicates the 150th 26-tone RU, an indicated RU is actually a 148th 26-tone RU. A degree of compression of the partial bandwidth information subfield can be adjusted by adjusting a value of k1 or k2 or adjusting values of both k1 and k2. A larger k1 indicates a greater degree of compression of the partial bandwidth information subfield, and a larger k2 also indicates a greater degree of compression of the partial bandwidth information subfield. A greater degree of compression indicates a smaller quantity of bits required to indicate same partial bandwidth information. In this way, the partial bandwidth information subfield can indicate an RU that is in a larger bandwidth and for which channel state information needs to be fed back. Optionally, when a bandwidth is less than or equal to 160 MHz, the partial bandwidth information subfield may indicate partial bandwidth information in a manner used in an HE NDPA frame. It should be understood that the foregoing second station information field design solution may be implemented separately, or may be implemented in combination with any one of the foregoing type information indication solutions. In an EHT NDPA frame specified by using the second station information field design solution, an octet quantity of a station information field is 4 and is consistent with an octet quantity of a station information field in an HE NDPA frame. The station information field in the EHT NDPA frame does not include an AID of an HE STA, an AID of a ranging STA, or an AID of a VHT STA, and a disambiguation subfield is set at another position that may be considered by the HE STA, the ranging STA, or the VHT STA to have an AID specified. In this way, the HE STA, the ranging STA, or the VHT STA can be prevented from misreading the EHT NDPA frame. 
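The compressed start/end decoding with its capping rule can be sketched for the k1=2, c1=1, k2=4, c2=2, 320 MHz example (Python; RU ordinals are 1-based):

```python
K1, C1 = 2, 1    # start: the (2n+1)-th 26-tone RU
K2, C2 = 4, 2    # end:   the (4m+2)-th 26-tone RU
LAST_RU = 148    # last 26-tone RU of a 320 MHz bandwidth

def decode_start(n: int) -> int:
    """Ordinal of the first 26-tone RU for which feedback is requested."""
    return K1 * n + C1

def decode_end(m: int) -> int:
    """Ordinal of the last 26-tone RU; a value beyond the bandwidth's
    last RU is read as that last RU, per the capping rule."""
    return min(K2 * m + C2, LAST_RU)
```

For instance, decode_end(37) nominally points at the 150th 26-tone RU, which is read as the 148th, as in the example above.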
The following describes a design solution of a number of columns subfield in a station information field, where the design solution is provided for a case in which a column quantity indicated by a number of columns subfield in an NDPA frame is greater than 8. The number of columns subfield design solution is applicable to a case in which the station information field uses six octets, and is also applicable to a case in which the station information field uses four octets. Specifically, a quantity of bits of the number of columns subfield may be increased to indicate a larger column quantity. For example, when a quantity of columns for which feedback needs to be provided is in a range from 1 to 16, the number of columns subfield may use four bits. A 4-bit field can indicate 2^4 = 16 cases. The number of columns subfield may indicate a column quantity by using a column quantity index. For a correspondence between a column quantity index and a column quantity, refer to Table 12.

TABLE 12
Column quantity index    Quantity of columns
0000                     1
0001                     2
0010                     3
. . .                    . . .
1110                     15
1111                     16

It should be understood that the correspondence between a column quantity index and a column quantity is not limited to the correspondence shown in Table 12. In another embodiment, the correspondence between a column quantity index and a column quantity may be flexibly set. The foregoing number of columns subfield design solution may be implemented in combination with any one of the foregoing station information field design solutions, or may be implemented separately.
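Under the Table 12 assumption that the column quantity equals the 4-bit index value plus one, encoding and decoding can be sketched as follows (function names are illustrative):

```python
def encode_num_columns(columns: int) -> int:
    """Column quantity (1..16) to 4-bit column quantity index (Table 12 mapping)."""
    assert 1 <= columns <= 16
    return columns - 1

def decode_num_columns(index: int) -> int:
    """4-bit column quantity index back to a column quantity."""
    assert 0 <= index <= 0b1111
    return index + 1

print(format(encode_num_columns(16), '04b'))  # 1111
print(decode_num_columns(0b0010))             # 3
```

As the text notes, other index-to-quantity correspondences are possible; this sketch only mirrors Table 12.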
When a bandwidth corresponding to an NDPA frame is greater than 160 MHz and a column quantity indicated by a number of columns subfield is greater than 8, a design solution of a partial bandwidth information subfield in a station information field may be any one of the foregoing station information field design solutions provided for the case in which a bandwidth corresponding to an NDPA frame is greater than 160 MHz. For a number of columns subfield design solution, reference may be made to the foregoing number of columns subfield design solution provided for the case in which a column quantity indicated by a number of columns subfield is greater than 8. In an optional embodiment, when a bandwidth is less than or equal to 160 MHz, a partial bandwidth information subfield indicates partial bandwidth information in a manner used in an HE NDPA frame, whereas when a bandwidth is greater than 160 MHz, a partial bandwidth information subfield indicates partial bandwidth information according to the foregoing indication solutions of the partial bandwidth information subfield. In another optional embodiment, when a bandwidth is less than or equal to 160 MHz, or a maximum quantity of columns for which feedback needs to be provided is less than or equal to 8, a station information field in an EHT NDPA frame may alternatively use the station information field design solution in any one of the foregoing embodiments of this application. Further, regardless of whether a bandwidth is greater than 160 MHz or less than or equal to 160 MHz, and whether a quantity of columns for which feedback needs to be provided is less than or equal to 8 or greater than 8, the station information field in the EHT NDPA frame uses a uniform format. In this way, an EHT STA can read station information fields in all EHT NDPA frames by using a uniform read policy.
Therefore, it is more convenient for the EHT STA to read the EHT NDPA frames. An embodiment of this application further provides another station information field design solution in which when a bandwidth is greater than 160 MHz, one station corresponds to two station information fields. In other words, an NDPA frame includes two station information fields corresponding to a same station. The two station information fields include an AID of the same station. The solution may be implemented in combination with any one of the foregoing type information indication solutions, or may be implemented separately. A manner of indicating partial bandwidth information of the station is as follows: Partial bandwidth information subfields in the two station information fields jointly indicate a first RU and a last RU for which the STA corresponding to the AID needs to feed back channel state information. It may be understood that in this solution, a partial bandwidth information subfield corresponding to the STA that corresponds to the AID is divided into two parts, which are respectively transmitted in the two station information fields including the AID. In this way, a station information field corresponding to a station can be newly added without changing a station information field that is originally included in an NDPA frame and that corresponds to the station. The two station information fields cooperate to indicate partial bandwidth information, thereby implementing indication of an RU that is in a larger bandwidth and for which channel state information needs to be fed back. Further, in ascending order of frequency, when an indicated partial bandwidth belongs to a first 160 MHz of a bandwidth, a partial bandwidth information subfield in an EHT NDPA frame may use a setting manner of a partial bandwidth information subfield in an HE NDPA frame. 
When a partial bandwidth of a station does not belong to a first 160 MHz of a bandwidth, an EHT NDPA frame includes two station information fields including an AID of the station. The two station information fields cooperate to indicate partial bandwidth information. Therefore, a quantity of bits occupied by a station information field corresponding to each station can be increased on demand, instead of blindly increasing a quantity of bits of station information fields corresponding to all stations, thereby reducing overheads. FIG. 9 is a schematic diagram of a structure of another NDPA frame according to an embodiment of this application. In an embodiment, a resource unit start index in one of two station information fields and a resource unit start index in the other station information field indicate a first RU for which a station corresponding to a same AID needs to feed back channel state information, and a resource unit end index in the one station information field and a resource unit end index in the other station information field indicate a last RU for which the station corresponding to the same AID needs to feed back channel state information. For example, of the two station information fields, the resource unit start index in the one station information field is seven bits, and the resource unit start index in the other station information field is one bit. In this case, the 7-bit resource unit start index in the one station information field and the 1-bit resource unit start index in the other station information field cooperate to indicate the first RU for which the STA corresponding to the same AID needs to feed back channel state information. It may be understood that the 7-bit resource unit start index in the one station information field and the 1-bit resource unit start index in the other station information field jointly form a complete resource unit start index.
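Assuming the separately carried 1-bit part is the most significant bit of the complete index, the reassembly can be sketched as follows (`combine_start_index` is a hypothetical helper, not a name from the standard):

```python
def combine_start_index(low7: int, msb: int) -> int:
    """Join the 7-bit resource unit start index from one station information
    field with its 1-bit MSB from the other field into a complete 8-bit index."""
    return (msb << 7) | (low7 & 0x7F)

print(combine_start_index(0b0000101, 1))  # 133 (0b10000101)
print(combine_start_index(0b0000101, 0))  # 5
```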
The 1-bit resource unit start index in the other station information field may be the most significant bit (MSB), which may also be referred to as the highest bit, of the complete resource unit start index. Similarly, the 7-bit resource unit end index in the one station information field and the 1-bit resource unit end index in the other station information field jointly form a complete resource unit end index. The complete resource unit end index can indicate the last RU for which the station needs to feed back channel state information. The 1-bit resource unit end index in the other station information field may be the most significant bit, which may also be referred to as the highest bit, of the complete resource unit end index. Certainly, bit quantities of the resource unit start indexes and the resource unit end indexes in the two station information fields are not limited to the foregoing examples, and may alternatively be other values. FIG. 10 is a schematic diagram of a structure of still another NDPA frame according to an embodiment of this application. As shown in FIG. 10, in another embodiment, one of two station information fields includes a resource unit start index but not a resource unit end index, and the other station information field includes a resource unit end index but not a resource unit start index. In this case, the resource unit start index in the one station information field indicates a first RU for which a STA corresponding to the two station information fields needs to feed back channel state information, and the resource unit end index in the other station information field indicates a last RU for which the STA corresponding to the two station information fields needs to feed back channel state information.
In this way, in a scenario with a bandwidth of 320 MHz, both partial bandwidth information subfields in the two station information fields use only eight bits, thereby controlling an octet quantity of the station information field to be 4 or less, and supporting indication of an RU that is in 320 MHz and for which channel state information needs to be fed back. In this way, a quantity of bits of a partial bandwidth information subfield in each station information field is reduced, thereby allowing a quantity of bits of a number of columns subfield to be increased and a larger column quantity to be indicated. When a column quantity that needs to be indicated is greater than 8, an indication manner of the column quantity may be as follows: A number of columns subfield in one of two station information fields and a number of columns subfield in the other station information field indicate a quantity of columns for which a station corresponding to a same AID needs to feed back channel state information. In a possible embodiment, as shown in FIG. 9, a number of columns subfield in one station information field uses three bits, and a number of columns subfield in the other station information field uses one bit. The number of columns subfield in the one station information field and the number of columns subfield in the other station information field form a complete number of columns subfield. The complete number of columns subfield can accurately indicate a column quantity. The 1-bit number of columns subfield in the other station information field may be a most significant bit (MSB) or a highest bit of the complete number of columns subfield. It may be understood that the foregoing partial bandwidth information indication manner may be implemented in combination with the foregoing column quantity indication manner, or may be implemented separately. This is not limited in this application.
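The split number of columns subfield can be sketched the same way (assuming, as in Table 12, that the complete 4-bit index value plus one gives the column quantity; the helper name is illustrative):

```python
def combine_num_columns(low3: int, msb: int) -> int:
    """Join the 3-bit number of columns subfield with its 1-bit MSB from the
    other station information field, then map the complete 4-bit index to a
    column quantity (index + 1, per the Table 12 convention)."""
    index = (msb << 3) | (low3 & 0b111)
    return index + 1

print(combine_num_columns(0b111, 0))  # 8
print(combine_num_columns(0b111, 1))  # 16
```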
FIG. 11 is a schematic diagram of a structure of still another NDPA frame according to an embodiment of this application. As shown in FIG. 11, in still another embodiment, one of two station information fields includes a partial bandwidth information subfield but not a number of columns subfield, and the other station information field includes a number of columns subfield but not a partial bandwidth information subfield. Optionally, a station corresponds to two station information fields only when frequencies of a partial bandwidth for which the station needs to feed back channel state information are not in a first 160 MHz of an entire bandwidth in ascending order of frequency. The two station information fields include an AID of the station. That is, AIDs included in the two station information fields are the same. When the frequencies of the partial bandwidth for which the station needs to feed back channel state information are in the first 160 MHz of the entire bandwidth, the STA corresponds to only one station information field, thereby avoiding an increase in overheads caused by addition of an excess station information field. The following describes in detail related content in a beamforming report fed back by a station. As shown in Table 13, a beamforming report includes category information, EHT action information, EHT multiple input multiple output (MIMO) control information, a compressed beamforming report, and a multiuser exclusive beamforming report (MU exclusive beamforming report).

TABLE 13
Sequence    Information
1           Category
2           EHT action
3           EHT multiple input multiple output (MIMO) control
4           Compressed beamforming report
5           Multiuser exclusive beamforming report (MU exclusive beamforming report)

In some embodiments, a MIMO control field in a beamforming report is the EHT MIMO control field that carries EHT MIMO control information. FIG. 12 is a schematic diagram of a structure of an EHT MIMO control field according to an embodiment of this application.
The EHT MIMO control field includes a number of columns subfield, a number of rows subfield, a bandwidth (BW) subfield, a number of groups subfield, a codebook information subfield, a feedback type subfield, a remaining feedback segment subfield, a first feedback segment subfield, a resource unit start index, a resource unit end index, a sounding dialog token subfield, a disallowed subchannel bitmap present/length subfield, and a disallowed subchannel bitmap. The number of columns subfield is used to indicate a quantity of columns for which feedback needs to be provided. The number of rows subfield is used to indicate a quantity of rows for which feedback needs to be provided. The bandwidth (BW) subfield is used to indicate a bandwidth of a channel. The number of groups subfield is a number of groups bitmap, and is used to indicate how subcarriers are grouped. Channel state information of subcarriers in a same group is fed back together. The feedback type subfield is used to indicate a type of feedback. The remaining feedback segment subfield is used to indicate a segment that is not yet fed back. The first feedback segment subfield is used to indicate whether the segment is a first feedback segment. The resource unit start index is used to indicate a first RU for which a station needs to feed back channel state information. The resource unit end index is used to indicate a last RU for which a station needs to feed back channel state information. The disallowed subchannel bitmap present/length subfield indicates whether a disallowed subchannel bitmap is present; and when a disallowed subchannel bitmap is present, indicates a length of the disallowed subchannel bitmap. For example, when an indication of the disallowed subchannel bitmap present/length subfield is 0, it may be understood as indicating that no disallowed subchannel bitmap is present.
In an example, each bit in the disallowed subchannel bitmap is used to indicate preamble puncturing information of 20 MHz. For a description of the disallowed subchannel bitmap in the EHT MIMO control field, refer to a related description of the disallowed subchannel bitmap in the NDPA frame in the foregoing embodiment. In this way, the EHT MIMO control field in the beamforming report also includes the disallowed subchannel bitmap. This can prevent the EHT MIMO control field from being misread. The resource unit start index and the resource unit end index in the EHT MIMO control field may be compressed by using any one of the foregoing solutions of compressing a quantity of bits of a partial bandwidth information subfield. The disallowed subchannel bitmap present/length subfield may indicate a length of a disallowed subchannel bitmap by indicating a bandwidth or a bit quantity. For example, the disallowed subchannel bitmap present/length subfield may indicate that a bandwidth is 320 MHz, to indicate that the length of the disallowed subchannel bitmap is 16 bits; or the disallowed subchannel bitmap present/length subfield indicates that a bandwidth is 240 MHz, to indicate that the length of the disallowed subchannel bitmap is 12 bits; or the disallowed subchannel bitmap present/length subfield indicates that a bandwidth is 160 MHz, to indicate that the length of the disallowed subchannel bitmap is eight bits. Certainly, an indication manner of the disallowed subchannel bitmap present/length subfield and an indicated length are not limited to the foregoing examples. In another embodiment, the disallowed subchannel bitmap present/length subfield may perform indication in another manner, or may indicate another length. In this way, a network device receiving the EHT MIMO control field can accurately receive the EHT MIMO control field based on the disallowed subchannel bitmap present/length subfield. 
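Because each bitmap bit covers 20 MHz in the examples above, the bitmap length follows directly from the indicated bandwidth. A sketch of that relationship only (the subfield's bit-level encoding is not reproduced here):

```python
def disallowed_bitmap_bits(bandwidth_mhz: int) -> int:
    """Length of the disallowed subchannel bitmap, one bit per 20 MHz subchannel."""
    return bandwidth_mhz // 20

print(disallowed_bitmap_bits(320))  # 16
print(disallowed_bitmap_bits(240))  # 12
print(disallowed_bitmap_bits(160))  # 8
```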
In an optional embodiment, the foregoing NDPA frame may be implemented by using a trigger frame. In other words, the NDPA frame in step 502 may be a trigger frame. The trigger frame includes content in the NDPA frame in any one of the foregoing embodiments provided in this application. Specifically, refer to FIG. 13. FIG. 13 is a schematic diagram of a structure of a trigger frame. The trigger frame includes a trigger frame type field. The field indicates that the trigger frame is an NDPA trigger frame. The trigger frame further includes a bandwidth field, a sounding dialog token field, and an NDPA frame type field. Type information is carried in the NDPA frame type field and used to indicate an NDPA frame variant. The design solutions of related content in the station information field in any one of the foregoing embodiments are all applicable to a station information field in the trigger frame. Details are not described herein again. The station information field in the trigger frame may not include a disambiguation field. This is because in 802.11ax and earlier standard protocols, an NDPA frame is not transmitted by using a trigger frame. Accordingly, an HE STA and a STA that adapts to a standard occurring before 802.11ax (for example, a VHT STA) do not receive an NDPA frame by using a trigger frame, either. In the solution of transmitting an NDPA frame by using a trigger frame, there is no need to consider a problem that a device adapting to 802.11ax and earlier standard protocols misreads the NDPA frame. Therefore, there is no need to set the disambiguation field, either. Further, refer to FIG. 14. FIG. 14 is a schematic diagram of a feedback process of a beamforming report. An NDPA frame is transmitted by using a trigger frame. Beamformees (Bfees) include a plurality of stations (for example, a STA 1, a STA 2, and a STA 3 in FIG. 14). An AP first transmits the NDPA frame in a form of a trigger frame, and then transmits an NDP after an SIFS elapses.
The trigger frame can trigger a plurality of stations to simultaneously perform uplink transmission. For example, in FIG. 14, the STA 1, the STA 2, and the STA 3 can simultaneously feed back a beamforming report, thereby improving efficiency of channel sounding. An embodiment of this application further provides another partial bandwidth information indication solution. Station information includes an RU indication index. The RU indication index indicates information about a bandwidth for which channel state information needs to be fed back by indicating an RU corresponding to a partial or full bandwidth for which channel state information needs to be fed back. The RU indication index may be understood as a partial bandwidth information field or a partial bandwidth information subfield. A minimum RU granularity indicated by the RU indication index is one 242-tone RU. In this way, it is unnecessary to indicate a small RU, thereby helping reduce a quantity of bits of the RU indication index and reducing indication overheads. The RU indication index indicates a size of an RU corresponding to a partial or full bandwidth for which channel state information needs to be fed back, and indicates a frequency position of the RU in a complete bandwidth. Specifically, for a 242-tone RU, a corresponding bandwidth is 20 MHz, and 16 different RU indication indexes may be used to indicate different positions of the 242-tone RU in 320 MHz. For example, the 16 different RU indication indexes are used to indicate corresponding sixteen 20 MHz segments in 320 MHz in ascending order of frequency, respectively. For a 484-tone RU, a corresponding bandwidth is 40 MHz, and eight different RU indication indexes may be used to indicate different positions of the 484-tone RU in 320 MHz. For example, the eight different RU indication indexes are used to indicate corresponding eight 40 MHz segments in 320 MHz in ascending order of frequency, respectively.
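The position encoding for the 242-tone and 484-tone cases follows from assigning consecutive indexes to equal-width segments in ascending order of frequency. A sketch (option numbering from 0 and the helper name are illustrative):

```python
def segment_range_mhz(option: int, ru_width_mhz: int, total_mhz: int = 320):
    """Frequency range, as an offset from the lowest frequency, of the
    option-th segment of width ru_width_mhz within total_mhz."""
    assert 0 <= option < total_mhz // ru_width_mhz
    low = option * ru_width_mhz
    return (low, low + ru_width_mhz)

print(segment_range_mhz(0, 20))   # (0, 20):    lowest 242-tone RU option
print(segment_range_mhz(15, 20))  # (300, 320): 16th 242-tone RU option
print(segment_range_mhz(7, 40))   # (280, 320): 8th 484-tone RU option
```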
For a 242+484-tone RU, a corresponding bandwidth is 60 MHz, and 16 different RU indication indexes may be used to indicate different positions of the 242+484-tone RU in 320 MHz. For example, 320 MHz is understood as a first 80 MHz, a second 80 MHz, a third 80 MHz, and a fourth 80 MHz in ascending order of frequency. Four different RU indication indexes may be used to indicate positions of the 242+484-tone RU in the first 80 MHz, respectively; another four different RU indication indexes may be used to indicate positions of the 242+484-tone RU in the second 80 MHz, respectively; still another four different RU indication indexes may be used to indicate positions of the 242+484-tone RU in the third 80 MHz, respectively; and yet another four different RU indication indexes may be used to indicate positions of the 242+484-tone RU in the fourth 80 MHz, respectively. For a 996-tone RU, a corresponding bandwidth is 80 MHz, and four different RU indication indexes may be used to indicate different positions of the 996-tone RU in 320 MHz. For example, the four different RU indication indexes are used to indicate corresponding four 80 MHz segments in 320 MHz in ascending order of frequency, respectively. For a 484+996-tone RU, a corresponding bandwidth is 120 MHz; four different RU indication indexes may be used to indicate different positions of the 484+996-tone RU in 160 MHz that is of 320 MHz and that is at a lowest frequency, respectively; and another four different RU indication indexes may be used to indicate different positions of the 484+996-tone RU in 160 MHz that is of 320 MHz and that is at a highest frequency, respectively. For a 2*996-tone RU, a corresponding bandwidth is 160 MHz, and six different RU indication indexes may be used to indicate different positions of the 2*996-tone RU in 320 MHz. Alternatively, a position of the 2*996-tone RU may be further specified.
In a bandwidth of 320 MHz, only the 2*996-tone RU in 160 MHz at a lowest frequency and the 2*996-tone RU in 160 MHz at a highest frequency are supported, and correspond to only two different RU indication indexes. For a 3*996-tone RU, a corresponding bandwidth is 240 MHz, and four different RU indication indexes may be used to indicate different positions of the 3*996-tone RU in 320 MHz. For a 4*996-tone RU, a corresponding bandwidth is 320 MHz. In this case, it may be understood as indicating a full bandwidth, and one RU indication index may be used for indication. The RU indication index may use, for example, but not limited to, six or seven bits. It should be understood that a type of an RU indicated by the RU indication index is not limited to the foregoing example types, and the RU indication index may alternatively indicate an RU of another type and a position of the RU in a bandwidth. For example, the RU indication index may use six bits. In an embodiment, the RU indication index may perform indication according to Table 14. 
TABLE 14
RU indication index (six bits)    Indicated bandwidth/RU (indicated BW/RU)     Comments
000000 to 001111                  242-tone RUs (20 MHz)                        16 options, corresponding to sixteen 20 MHz segments in ascending order of frequency, respectively
010000 to 010011                  242+484-tone RU in a first 80 MHz segment    Four options
010100 to 010111                  242+484-tone RU in a second 80 MHz segment   Four options
011000 to 011011                  242+484-tone RU in a third 80 MHz segment    Four options
011100 to 011111                  242+484-tone RU in a fourth 80 MHz segment   Four options
100000 to 100111                  484-tone RUs (40 MHz)                        Eight options, corresponding to eight 40 MHz segments in ascending order of frequency, respectively
101000 to 101011                  484+996-tone RU in a first 160 MHz           Four options
101100 to 101111                  484+996-tone RU in a second 160 MHz          Four options
110000 to 110011                  996-tone RU                                  Four options
110100 to 111001                  2*996-tone RU                                Six options
111010 to 111101                  3*996-tone RU                                Four options
111110                            Reserved                                     One option
111111                            Full bandwidth (full BW) or 4*996-tone RU    One option

It should be understood that in Table 14, correspondences between the RU indication index and the indicated bandwidth/RU are arranged in ascending order of RU size. In another embodiment, correspondences between the RU indication index and the indicated bandwidth/RU are not limited to the correspondences in Table 14. The correspondences between the RU indication index and the indicated bandwidth/RU may be adjusted and converted, provided that a corresponding bandwidth/RU can be obtained based on an RU indication index.
Optionally, for an N*996-tone RU, a corresponding bandwidth is N*80 MHz, and fourteen different RU indication indexes may be used to indicate different positions of the N*996-tone RU in 320 MHz, where N=1, 2, or 3. Optionally, the RU indication index indicating the N*996-tone RU may include four bits, which are used as a bitmap. Each of the four bits corresponds to one 80 MHz of 320 MHz. For example, if the bitmap is 1100, it indicates that the 2*996-tone RU is located in a first 80 MHz and a second 80 MHz of 320 MHz; or if the bitmap is 0010, it indicates that the 996-tone RU is located in a third 80 MHz of 320 MHz. Optionally, the 4-bit bitmap is the last four bits of the RU indication index. For a 4*996-tone RU, a corresponding bandwidth is 320 MHz. In this case, it may be understood as indicating a full bandwidth, and one RU indication index may be used for indication. For example, according to the foregoing indication manner of the RU indication indexes corresponding to the N*996-tone RU, an RU indication index indicating the 4*996-tone RU also includes a 4-bit bitmap. The bitmap is 1111, indicating that the 4*996-tone RU is located in the first 80 MHz to a fourth 80 MHz of 320 MHz. As shown in Table 14 to Table 21, an RU indication index indicating a 4*996-tone RU may be used to indicate a 4*996-tone RU, or may be used to indicate a full bandwidth (a complete bandwidth). For example, the RU indication index may use six bits. In an embodiment, the RU indication index may perform indication according to Table 15-1.
TABLE 15-1
RU indication index (six bits)    Indicated bandwidth/RU (indicated BW/RU)     Comments
000000 to 001111                  242-tone RUs (20 MHz)                        16 options, corresponding to sixteen 20 MHz segments in ascending order of frequency, respectively
010000 to 010111                  484-tone RUs (40 MHz)                        Eight options, corresponding to eight 40 MHz segments in ascending order of frequency, respectively
011000 to 011011                  484+996-tone RU in a first 160 MHz           Four options
011100 to 011111                  484+996-tone RU in a second 160 MHz          Four options
100000                            Reserved                                     One option
100001 to 101110                  N*996-tone RU (N = 1 to 3)                   14 options, last four least significant bits indicating four 80 MHz segments, respectively
101111                            Full bandwidth (full BW) or 4*996-tone RU    One option
110000 to 110011                  242+484-tone RU in a first 80 MHz segment    Four options
110100 to 110111                  242+484-tone RU in a second 80 MHz segment   Four options
111000 to 111011                  242+484-tone RU in a third 80 MHz segment    Four options
111100 to 111111                  242+484-tone RU in a fourth 80 MHz segment   Four options

Specific correspondences between 100001 to 101110 and indicated N*996-tone RUs may be shown in Table 16. Last four bits of 100001 to 101110 specifically indicate the N*996-tone RUs, and the last four bits implement a function of a bitmap.
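The Table 16 correspondence can be sketched by treating the last four bits of the index as an MSB-first bitmap over the four 80 MHz segments (the decoder name is illustrative):

```python
def decode_n996_index(index: int):
    """Decode a 6-bit RU indication index in the range 100001 to 101111
    into the 1-based 80 MHz segments flagged by its last four bits."""
    assert 0b100001 <= index <= 0b101111
    bitmap = index & 0b1111
    return [i + 1 for i in range(4) if bitmap & (1 << (3 - i))]

print(decode_n996_index(0b100011))  # [3, 4]:       third and fourth 80 MHz
print(decode_n996_index(0b101000))  # [1]:          first 80 MHz
print(decode_n996_index(0b101111))  # [1, 2, 3, 4]: full bandwidth / 4*996-tone RU
```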
TABLE 16
RU indication index    Indicated RU
100001                 4th 80 MHz (fourth 80 MHz)
100010                 3rd 80 MHz (third 80 MHz)
100011                 3rd 80 MHz + 4th 80 MHz
100100                 2nd 80 MHz (second 80 MHz)
100101                 2nd 80 MHz + 4th 80 MHz
100110                 2nd 80 MHz + 3rd 80 MHz
100111                 2nd 80 MHz + 3rd 80 MHz + 4th 80 MHz
101000                 1st 80 MHz (first 80 MHz)
101001                 1st 80 MHz + 4th 80 MHz
101010                 1st 80 MHz + 3rd 80 MHz
101011                 1st 80 MHz + 3rd 80 MHz + 4th 80 MHz
101100                 1st 80 MHz + 2nd 80 MHz
101101                 1st 80 MHz + 2nd 80 MHz + 4th 80 MHz
101110                 1st 80 MHz + 2nd 80 MHz + 3rd 80 MHz
101111                 Full bandwidth or 1st 80 MHz + 2nd 80 MHz + 3rd 80 MHz + 4th 80 MHz

When a bandwidth for which channel state information needs to be fed back is a full bandwidth, there are two indication solutions. The following separately describes the two indication solutions. One of the two indication solutions is as follows: When a bandwidth for which channel state information needs to be fed back is a full bandwidth, an RU indication index indicates the full bandwidth or an RU corresponding to the full bandwidth, regardless of a size of a bandwidth. For example, when the bandwidth is 20 MHz, 40 MHz, 80 MHz, 160 MHz, or 320 MHz, the RU indication index indicates the full bandwidth or the RU corresponding to the full bandwidth (for example, in Table 15-1, an RU indication index (for example, 101111) is used to indicate the full bandwidth or the 4*996-tone RU).
The other indication solution is as follows: An RU indication index indicating the 4*996-tone RU (for example, 101111 in Table 15-1) is used for indication only when the bandwidth is equal to the full bandwidth, that is, only when a bandwidth corresponding to the RU is 320 MHz and is equal to the full bandwidth of 320 MHz for which channel state information needs to be fed back. When the bandwidth for which channel state information needs to be fed back is the full bandwidth but that bandwidth is less than 320 MHz, an RU indication index of an RU corresponding to the bandwidth is used for indication. Examples are as follows: When the bandwidth is 20 MHz, if the bandwidth for which channel state information needs to be fed back is the full bandwidth, an RU indication index indicating a 242-tone RU in which the 20 MHz is located may be used for indication. For example, based on Table 15-1, one of 000000 to 001111 may be used for indication. When the bandwidth is 40 MHz, if the bandwidth for which channel state information needs to be fed back is the full bandwidth, an RU indication index indicating a 484-tone RU in which the 40 MHz is located may be used for indication. For example, based on Table 15-1, one of 010000 to 010111 may be used for indication. When the bandwidth is 80 MHz, if the bandwidth for which channel state information needs to be fed back is the full bandwidth, an RU indication index indicating a 996-tone RU in which the 80 MHz is located may be used for indication. For example, based on Table 15-1, one of 100001 to 101110 may be used for indication. When the bandwidth is 160 MHz, if the bandwidth for which channel state information needs to be fed back is the full bandwidth, an RU indication index indicating a 2*996-tone RU in which the 160 MHz is located may be used for indication.
For example, based on Table 15-1, one of 100001 to 101110 may be used for indication.

Optionally, the rows in which the RU indication indexes 100001 to 101110 and 101111 are located in Table 15-1 may be replaced with Table 15-2 below.

TABLE 15-2

RU indication index    Indicated RU                  Comments
100001 to 101111       N*996-tone RU (N = 1 to 4)    15 options; the last four least significant bits indicate the four 80 MHz segments, respectively

Correspondences between the RU indication index and the indicated bandwidth/RU in Table 15-1 may be adjusted and converted.

Optionally, the RU indication index may alternatively use seven bits, thereby supporting indication of more RU types. Specifically, for a 2*996+484-tone RU, the corresponding bandwidth is 200 MHz, and 12 different RU indication indexes may be used to indicate different positions of the 2*996+484-tone RU in 320 MHz. Specifically, the 240 MHz at the lowest frequency of the 320 MHz corresponds to six different RU indication indexes, which indicate 2*996+484-tone RUs in different positions within that lowest-frequency 240 MHz; and the 240 MHz at the highest frequency of the 320 MHz corresponds to another six different RU indication indexes. For a 3*996+484-tone RU, the corresponding bandwidth is 280 MHz, and eight different RU indication indexes may be used to indicate different positions of the 3*996+484-tone RU in 320 MHz. For example, the RU indication index may perform indication according to Table 17.
TABLE 17

RU indication index (seven bits)    Indicated bandwidth/RU                        Comments
0000000 to 0001111                  242-tone RUs (20 MHz)                         16 options, corresponding to sixteen 20 MHz segments in ascending order of frequency
0010000 to 0010011                  242+484-tone RU in a first 80 MHz segment     4 options
0010100 to 0010111                  242+484-tone RU in a second 80 MHz segment    4 options
0011000 to 0011011                  242+484-tone RU in a third 80 MHz segment     4 options
0011100 to 0011111                  242+484-tone RU in a fourth 80 MHz segment    4 options
0100000 to 0100111                  484-tone RUs (40 MHz)                         8 options, corresponding to eight 40 MHz segments in ascending order of frequency
0101000 to 0101011                  484+996-tone RU in a first 160 MHz            4 options
0101100 to 0101111                  484+996-tone RU in a second 160 MHz           4 options
0110000 to 0110011                  996-tone RU                                   4 options
0110100 to 0111001                  2*996-tone RU                                 6 options
0111010 to 1001101                  2*996+484-tone RU                             12 options
1001110 to 1010001                  3*996-tone RU                                 4 options
1010010 to 1011001                  3*996+484-tone RU                             8 options
1011010 to 1111110                  Reserved                                      45 options
1111111                             Full bandwidth                                1 option

For example, in another possible implementation, the RU indication index may perform indication according to Table 18. In Table 18, RU indication indexes other than the RU indication index indicating an N*996-tone RU are arranged in ascending order of RU size. The index indicating the N*996-tone RU includes a 4-bit bitmap.
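As a quick sanity check on Table 17, the option counts in its comments column should exactly fill the 7-bit index space of 128 values, which they do:

```python
# Option counts per row of Table 17, top to bottom, as stated in the
# comments column of the table.
TABLE_17_OPTIONS = [16, 4, 4, 4, 4, 8, 4, 4, 4, 6, 12, 4, 8, 45, 1]

# 128 seven-bit indexes, fully allocated across RU rows plus the
# reserved and full-bandwidth entries.
assert sum(TABLE_17_OPTIONS) == 2 ** 7
```

The same kind of accounting holds for the 6-bit tables (64 indexes) and the 5-bit table (32 indexes) later in this description.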
TABLE 18

RU indication index (six bits)    Indicated RU                                  Comments
000000 to 001111                  242-tone RUs (20 MHz)                         16 options, corresponding to sixteen 20 MHz segments in ascending order of frequency
010000 to 010011                  242+484-tone RU in a first 80 MHz segment     4 options
010100 to 010111                  242+484-tone RU in a second 80 MHz segment    4 options
011000 to 011011                  242+484-tone RU in a third 80 MHz segment     4 options
011100 to 011111                  242+484-tone RU in a fourth 80 MHz segment    4 options
100000 to 100111                  484-tone RUs (40 MHz)                         8 options, corresponding to eight 40 MHz segments in ascending order of frequency
101000 to 101011                  484+996-tone RU in a first 160 MHz            4 options
101100 to 101111                  484+996-tone RU in a second 160 MHz           4 options
110000                            Reserved                                      1 option
110001 to 111110                  N*996-tone RU (N = 1 to 3)                    14 options; the last four least significant bits indicate the four 80 MHz segments, respectively
111111                            Full bandwidth or 4*996-tone RU               1 option

Optionally, the RU indication index may alternatively use seven bits, thereby supporting indication of more RU types. For example, for a 2*996+484-tone RU, the corresponding bandwidth is 200 MHz, and 12 different RU indication indexes may be used to indicate different positions of the 2*996+484-tone RU in 320 MHz. For a 3*996+484-tone RU, the corresponding bandwidth is 280 MHz, and eight different RU indication indexes may be used to indicate different positions of the 3*996+484-tone RU in 320 MHz. For example, the RU indication index may perform indication according to Table 19. An index indicating an N*996-tone RU includes a 4-bit bitmap.
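In Table 18 the N*996-tone rows (110001 to 111110) are again a prefix (11) plus a 4-bit segment bitmap, with the all-ones case (111111) doubling as the full-bandwidth/4*996-tone entry and the all-zeros case (110000) reserved. A sketch of an encoder follows, under the assumption that the bit-to-segment order matches Table 16 (most significant of the four bits = 1st 80 MHz segment); that ordering is not restated for Table 18 itself:

```python
def encode_n996_index(segments):
    """Encode a collection of 80 MHz segment numbers (1..4) as a 6-bit
    Table 18 index of the form 11xxxx. An empty collection is invalid
    because bitmap 0000 (index 110000) is reserved."""
    bitmap = 0
    for s in set(segments):
        if s not in (1, 2, 3, 4):
            raise ValueError("segment numbers must be 1..4")
        bitmap |= 1 << (4 - s)  # assumed order: MSB of the four bits = 1st segment
    if bitmap == 0:
        raise ValueError("bitmap 0000 is the reserved index 110000")
    return 0b110000 | bitmap

# format(encode_n996_index({4}), "06b")          -> '110001'
# format(encode_n996_index({1, 2, 3}), "06b")    -> '111110'
# format(encode_n996_index({1, 2, 3, 4}), "06b") -> '111111'  (full bandwidth)
```

The 14-option count in the table falls out directly: 16 bitmap values minus the reserved all-zeros bitmap and the all-ones bitmap that is listed as its own full-bandwidth row.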
TABLE 19

RU indication index (seven bits)    Indicated bandwidth/RU                        Comments
0000000 to 0001111                  242-tone RUs (20 MHz)                         16 options, corresponding to sixteen 20 MHz segments in ascending order of frequency
0010000 to 0010111                  484-tone RUs (40 MHz)                         8 options, corresponding to eight 40 MHz segments in ascending order of frequency
0011000 to 0011011                  484+996-tone RU in a first 160 MHz            4 options
0011100 to 0011111                  484+996-tone RU in a second 160 MHz           4 options
0100000                             Reserved                                      1 option
0100001 to 0101110                  N*996-tone RU (N = 1 to 3)                    14 options; the last four least significant bits indicate the four 80 MHz segments, '1' meaning the segment is requested
0101111                             Full bandwidth or 4*996-tone RU               1 option
0110000 to 0110011                  242+484-tone RU in a first 80 MHz segment     4 options
0110100 to 0110111                  242+484-tone RU in a second 80 MHz segment    4 options
0111000 to 0111011                  242+484-tone RU in a third 80 MHz segment     4 options
0111100 to 0111111                  242+484-tone RU in a fourth 80 MHz segment    4 options
1000000 to 1001011                  2*996+484-tone RU                             12 options
1001100 to 1010011                  3*996+484-tone RU                             8 options
1010100 to 1111111                  Reserved                                      44 options

Correspondences between the RU indication index and the indicated bandwidth/RU in Table 19 may be adjusted and converted. For example, in another possible implementation, the RU indication index may perform indication according to Table 20. In Table 20, RU indication indexes other than the RU indication index indicating an N*996-tone RU are arranged in ascending order of RU size. The index indicating the N*996-tone RU includes a 4-bit bitmap.
TABLE 20

RU indication index (seven bits)    Indicated bandwidth/RU                        Comments
0000000 to 0001111                  242-tone RUs (20 MHz)                         16 options, corresponding to sixteen 20 MHz segments in ascending order of frequency
0010000 to 0010011                  242+484-tone RU in a first 80 MHz segment     4 options
0010100 to 0010111                  242+484-tone RU in a second 80 MHz segment    4 options
0011000 to 0011011                  242+484-tone RU in a third 80 MHz segment     4 options
0011100 to 0011111                  242+484-tone RU in a fourth 80 MHz segment    4 options
0100000 to 0100111                  484-tone RUs (40 MHz)                         8 options, corresponding to eight 40 MHz segments in ascending order of frequency
0101000 to 0101011                  484+996-tone RU in a first 160 MHz            4 options
0101100 to 0101111                  484+996-tone RU in a second 160 MHz           4 options
0110000                             Reserved                                      1 option
0110001 to 0111110                  N*996-tone RU (N = 1 to 3)                    14 options; the last four least significant bits indicate the four 80 MHz segments, respectively
0111111                             Full bandwidth or 4*996-tone RU               1 option
1000000 to 1001011                  2*996+484-tone RU                             12 options
1001100 to 1010011                  3*996+484-tone RU                             8 options
1010100 to 1111111                  Reserved                                      44 options

In an optional embodiment, the RU indication index uses five bits, indicating a single RU. In this way, indication overheads can be further reduced. For example, the RU indication index may perform indication according to Table 21.
TABLE 21

RU indication index (five bits)    Indicated bandwidth/RU      Comments
00000 to 01111                     242-tone RUs (20 MHz)       16 options, corresponding to sixteen 20 MHz segments in ascending order of frequency
10000 to 10111                     484-tone RUs (40 MHz)       8 options, corresponding to eight 40 MHz segments in ascending order of frequency
11000 to 11011                     996-tone RU (80 MHz)        4 options
11100 to 11101                     2*996-tone RU (160 MHz)     2 options
11110                              4*996-tone RU (320 MHz)     1 option
11111                              Reserved                    1 option

In the foregoing embodiments provided in this application, the methods provided in embodiments of this application are separately described from the perspectives of an access point and a station. To implement the functions in the methods provided in the foregoing embodiments of this application, the access point and the station may include a hardware structure and a software module, and implement the foregoing functions in a form of the hardware structure, the software module, or a combination of the hardware structure and the software module. A function in the foregoing functions may be performed in a manner of the hardware structure, the software module, or a combination of the hardware structure and the software module.

FIG.15is a schematic diagram of a structure of a transmission apparatus according to an embodiment of this application. The transmission apparatus includes a processing unit1501and a transmitting unit1502. The processing unit1501is configured to generate an NDPA frame. The NDPA frame includes a station information field. The station information field includes an AID subfield indicating an association identifier AID of a station. The station information field further includes a partial bandwidth information subfield and/or a number of columns subfield.
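Because each Table 21 index indicates exactly one RU, decoding reduces to a range lookup. The following sketch illustrates this; the function name and the convention that position 0 is the lowest-frequency segment of the indicated size are illustrative assumptions:

```python
def decode_single_ru(index):
    """Decode a 5-bit Table 21 index into (RU type, position), where
    position counts segments of that size from the lowest frequency.
    Returns ("reserved", None) for index 11111."""
    if not 0 <= index <= 0b11111:
        raise ValueError("index must fit in five bits")
    if index <= 0b01111:
        return ("242-tone RU (20 MHz)", index)
    if index <= 0b10111:
        return ("484-tone RU (40 MHz)", index - 0b10000)
    if index <= 0b11011:
        return ("996-tone RU (80 MHz)", index - 0b11000)
    if index <= 0b11101:
        return ("2*996-tone RU (160 MHz)", index - 0b11100)
    if index == 0b11110:
        return ("4*996-tone RU (320 MHz)", 0)
    return ("reserved", None)

# decode_single_ru(0b10011) -> ('484-tone RU (40 MHz)', 3)
```

The trade-off relative to the 6-bit and 7-bit schemes is visible here: five bits halve the indication overhead but can no longer express combinations of 80 MHz segments or the mixed 242+484 and 484+996 RU types.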
The partial bandwidth information subfield indicates an RU that is in a bandwidth corresponding to the NDPA frame and for which the station needs to feed back channel state information. The number of columns subfield indicates a quantity of columns in a compressed beamforming feedback matrix. The bandwidth corresponding to the NDPA frame is greater than 160 MHz. A column quantity indicated by the number of columns subfield is greater than 8. The transmitting unit1502is configured to transmit the NDPA frame. In this way, the partial bandwidth information subfield in the NDPA frame indicates an RU that is in a bandwidth greater than 160 MHz and for which channel state information needs to be fed back. In this case, the station information field can indicate the station to sound a channel with a bandwidth greater than 160 MHz, and feed back a beamforming report based on a channel sounding result, thereby implementing data transmission in a larger bandwidth and improving transmission efficiency. In addition, the column quantity indicated by the number of columns subfield is greater than 8. The station information field can indicate the station to sound a channel and feed back, based on a channel sounding result, a beamforming report whose compressed beamforming feedback matrix has a column quantity greater than 8, thereby implementing data transmission in more streams and improving transmission efficiency. The transmission apparatus1500may be a communication apparatus or an access point, or the transmission apparatus1500may be deployed in a communication apparatus or in an access point. The processing unit1501of the transmission apparatus1500may be a processor, and the transmitting unit1502of the transmission apparatus1500may be a transceiver. It should be understood that a related description of the foregoing NDPA frame transmission method is also applicable to the transmission apparatus1500. Details are not described herein again.
FIG.16is a schematic diagram of a structure of a transmission apparatus according to an embodiment of this application. The transmission apparatus includes a processing unit1601and a transmitting unit1602. The processing unit1601is configured to generate an NDPA frame. The NDPA frame includes two station information fields. The two station information fields include an AID subfield indicating an association identifier AID of a same station. The transmitting unit1602is configured to transmit the NDPA frame. The two station information fields meet at least one of the following: partial bandwidth information subfields in the two station information fields jointly indicate an RU for which the station needs to feed back channel state information, and a bandwidth corresponding to the NDPA frame is greater than 160 MHz; or a number of columns subfield in one station information field and a number of columns subfield in the other station information field indicate a quantity of columns in a compressed beamforming feedback matrix, and a column quantity indicated by the number of columns subfields is greater than 8. In this way, a station information field corresponding to a station can be newly added without changing a station information field that is originally included in an NDPA frame and that corresponds to the station. Partial bandwidth information subfields in the two station information fields cooperate to indicate an RU that is in a bandwidth greater than 160 MHz and for which channel state information needs to be fed back. The station can be indicated to sound a channel with a bandwidth greater than 160 MHz, and feed back a beamforming report based on a channel sounding result, thereby implementing data transmission in a larger bandwidth and improving transmission efficiency. Number of columns subfields in the two station information fields cooperate to indicate a column quantity greater than 8.
The station can be indicated to sound a channel and feed back, based on a channel sounding result, a beamforming report whose compressed beamforming feedback matrix has a column quantity greater than 8, thereby implementing data transmission in more streams and improving transmission efficiency. The transmission apparatus1600may be a communication apparatus or an access point, or the transmission apparatus1600may be deployed in a communication apparatus or in an access point. The processing unit1601of the transmission apparatus1600may be a processor, and the transmitting unit1602of the transmission apparatus1600may be a transceiver. It should be understood that a related description of the foregoing NDPA frame transmission method is also applicable to the transmission apparatus1600. Details are not described herein again.

FIG.17is a schematic diagram of a structure of a transmission apparatus according to an embodiment of this application. The transmission apparatus includes a processing unit1701and a transmitting unit1702. The processing unit1701is configured to generate an NDPA frame. The NDPA frame includes a sounding dialog token field, a special station information field, and a station information field. The sounding dialog token field includes a frame type subfield. The special station information field includes a frame subtype subfield. Type information of the NDPA frame is carried in the frame type subfield and the frame subtype subfield. The frame type subfield indicates that the NDPA frame is not an HE NDPA frame or a ranging NDPA frame. The frame subtype subfield indicates that the NDPA frame is an EHT NDPA frame. The transmitting unit1702is configured to transmit the NDPA frame. In this way, the frame type subfield and the frame subtype subfield jointly indicate that the NDPA frame is an EHT NDPA frame.
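The two-level typing described above (a frame type subfield that rules out HE and ranging NDPA frames, plus a frame subtype subfield that selects EHT) can be sketched as follows. The enumerated values are purely illustrative placeholders, not bit encodings from this description or from any standard:

```python
# Illustrative placeholder values; the real subfield encodings are not
# specified in this passage.
HE, RANGING, OTHER = "he", "ranging", "other"
EHT = "eht"

def classify_ndpa(frame_type, frame_subtype):
    """Classify an NDPA frame variant from the sounding dialog token
    field's frame type subfield and the special station information
    field's frame subtype subfield."""
    if frame_type == HE:
        return "HE NDPA"
    if frame_type == RANGING:
        return "ranging NDPA"
    # The frame type says "neither HE nor ranging"; the subtype
    # then disambiguates among the remaining variants.
    if frame_subtype == EHT:
        return "EHT NDPA"
    return "unknown/reserved"

# classify_ndpa(OTHER, EHT) -> 'EHT NDPA'
```

The second-level subtype is what leaves room for future variants without defining a new frame, which is the resource saving the text refers to.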
The EHT NDPA frame can indicate an RU that is in a bandwidth greater than 160 MHz and for which channel state information needs to be fed back, to indicate a station to sound a channel with a bandwidth greater than 160 MHz, and feed back a beamforming report based on a channel sounding result, thereby implementing data transmission in a larger bandwidth and improving transmission efficiency. The EHT NDPA frame can further indicate a column quantity greater than 8, to indicate a station to sound a channel and feed back, based on a channel sounding result, a beamforming report whose compressed beamforming feedback matrix has a column quantity greater than 8, thereby implementing data transmission in more streams and improving transmission efficiency. In addition, in this case, a new frame does not need to be defined, and an available type remaining in a MAC frame is fully utilized, thereby saving resources. The transmission apparatus1700may be a communication apparatus or an access point, or the transmission apparatus1700may be deployed in a communication apparatus or in an access point. The processing unit1701of the transmission apparatus1700may be a processor, and the transmitting unit1702of the transmission apparatus1700may be a transceiver. It should be understood that a related description of the foregoing NDPA frame transmission method is also applicable to the transmission apparatus1700. Details are not described herein again.

FIG.18is a schematic diagram of a structure of a transmission apparatus according to an embodiment of this application. The transmission apparatus includes a processing unit1801and a receiving unit1802. The receiving unit1802is configured to receive an NDPA frame. The NDPA frame includes a station information field. The station information field includes an AID subfield indicating an association identifier AID of a station. The station information field further includes a partial bandwidth information subfield and/or a number of columns subfield.
The partial bandwidth information subfield indicates an RU that is in a bandwidth corresponding to the NDPA frame and for which the station needs to feed back channel state information. The number of columns subfield indicates a quantity of columns in a compressed beamforming feedback matrix. The bandwidth corresponding to the NDPA frame is greater than 160 MHz. A column quantity indicated by the number of columns subfield is greater than 8. The processing unit1801is configured to obtain, from the NDPA frame, the RU for which channel state information needs to be fed back. In this way, the partial bandwidth information subfield in the NDPA frame indicates an RU that is in a bandwidth greater than 160 MHz and for which channel state information needs to be fed back. In this case, the station information field can indicate the station to sound a channel with a bandwidth greater than 160 MHz, and feed back a beamforming report based on a channel sounding result, thereby implementing data transmission in a larger bandwidth and improving transmission efficiency. In addition, the column quantity indicated by the number of columns subfield is greater than 8. In this case, the station information field can indicate the station to sound a channel and feed back, based on a channel sounding result, a beamforming report whose compressed beamforming feedback matrix has a column quantity greater than 8, thereby implementing data transmission in more streams and improving transmission efficiency. The transmission apparatus may be a communication apparatus or a station, or the transmission apparatus may be deployed in a communication apparatus or in a station. The processing unit1801of the transmission apparatus1800may be a processor, and the receiving unit1802of the transmission apparatus1800may be a transceiver. It should be understood that a related description of the foregoing NDPA frame transmission method is also applicable to the transmission apparatus1800.
Details are not described herein again.

FIG.19is a schematic diagram of a structure of a transmission apparatus according to an embodiment of this application. The transmission apparatus includes a processing unit1901and a receiving unit1902. The receiving unit1902is configured to receive an NDPA frame. The NDPA frame includes two station information fields. The two station information fields include an AID subfield indicating an association identifier AID of a same station. The two station information fields meet at least one of the following: partial bandwidth information subfields in the two station information fields jointly indicate an RU for which the station needs to feed back channel state information, and a bandwidth corresponding to the NDPA frame is greater than 160 MHz; or a number of columns subfield in one of the two station information fields and a number of columns subfield in the other station information field indicate a quantity of columns in a compressed beamforming feedback matrix, and a column quantity indicated by the number of columns subfields is greater than 8. The processing unit1901is configured to obtain, from the NDPA frame, the RU for which channel state information needs to be fed back and/or the quantity of the columns in the compressed beamforming feedback matrix. In this way, a station information field corresponding to a station can be newly added without changing a station information field that is originally included in an NDPA frame and that corresponds to the station. Partial bandwidth information subfields in the two station information fields cooperate to indicate an RU that is in a bandwidth greater than 160 MHz and for which channel state information needs to be fed back. In this case, the station can be indicated to sound a channel with a bandwidth greater than 160 MHz, and feed back a beamforming report based on a channel sounding result, thereby implementing data transmission in a larger bandwidth and improving transmission efficiency.
Number of columns subfields in the two station information fields cooperate to indicate a column quantity greater than 8. The station can be indicated to sound a channel and feed back, based on a channel sounding result, a beamforming report whose compressed beamforming feedback matrix has a column quantity greater than 8, thereby implementing data transmission in more streams and improving transmission efficiency. The transmission apparatus may be a communication apparatus or a station, or the transmission apparatus may be deployed in a communication apparatus or in a station. The processing unit1901of the transmission apparatus1900may be a processor, and the receiving unit1902of the transmission apparatus1900may be a transceiver. It should be understood that a related description of the foregoing NDPA frame transmission method is also applicable to the transmission apparatus1900. Details are not described herein again.

FIG.20is a schematic diagram of a structure of a transmission apparatus according to an embodiment of this application. The transmission apparatus includes a processing unit2001and a receiving unit2002. The receiving unit2002is configured to receive an NDPA frame. The NDPA frame includes a sounding dialog token field, a special station information field, and a station information field. The sounding dialog token field includes a frame type subfield. The special station information field includes a frame subtype subfield. The frame type subfield indicates that the NDPA frame is not an HE NDPA frame or a ranging NDPA frame. The frame subtype subfield indicates that the NDPA frame is an EHT NDPA frame. The processing unit2001is configured to obtain the frame type subfield and the frame subtype subfield from the NDPA frame, to determine that the NDPA frame is an EHT NDPA frame. In this way, the frame type subfield and the frame subtype subfield jointly indicate that the NDPA frame is an EHT NDPA frame.
The EHT NDPA frame can indicate an RU that is in a bandwidth greater than 160 MHz and for which channel state information needs to be fed back, to indicate a station to sound a channel with a bandwidth greater than 160 MHz, and feed back a beamforming report based on a channel sounding result, thereby implementing data transmission in a larger bandwidth and improving transmission efficiency. The EHT NDPA frame can further indicate a column quantity greater than 8, to indicate a station to sound a channel and feed back, based on a channel sounding result, a beamforming report whose compressed beamforming feedback matrix has a column quantity greater than 8, thereby implementing data transmission in more streams and improving transmission efficiency. In addition, in this case, a new frame does not need to be defined, and an available type remaining in a MAC frame is fully utilized, thereby saving resources. The transmission apparatus may be a communication apparatus or a station, or the transmission apparatus may be deployed in a communication apparatus or in a station. The processing unit2001of the transmission apparatus2000may be a processor, and the receiving unit2002of the transmission apparatus2000may be a transceiver. It should be understood that a related description of the foregoing NDPA frame transmission method is also applicable to the transmission apparatus2000. Details are not described herein again.

An embodiment of this application further provides a computer-readable storage medium. The computer-readable storage medium stores a computer program; and when the computer program is executed by a computer, functions of any one of the foregoing method embodiments are implemented. An embodiment of this application further provides a computer program product. When the computer program product is executed by a computer, functions of any one of the foregoing method embodiments are implemented. An embodiment of this application further provides a processor.
The processor is configured to perform steps that can be performed by an access point in any one of the foregoing method embodiments, or is configured to perform steps that can be performed by a station in any one of the foregoing method embodiments. In a process of performing these methods, a process of sending the foregoing information and a process of receiving the foregoing information in the foregoing methods may be understood as a process of outputting the foregoing information by the processor and a process of receiving the foregoing input information by the processor. Specifically, when outputting the information, the processor outputs the information to a transceiver, so that the transceiver transmits the information. Still further, after the information is output by the processor, other processing may further need to be performed on the information before the information arrives at the transceiver. Similarly, when the processor receives the input information, the transceiver receives the information and inputs the information into the processor. Still further, after the transceiver receives the information, other processing may need to be performed on the information before the information is input into the processor. In this case, for operations such as transmission, sending, and receiving related to the processor, if there is no particular statement, or if the operations do not contradict an actual function or internal logic of the operations in related descriptions, the operations may be more generally understood as operations such as output, receiving, and input of the processor, instead of operations such as transmission, sending, and receiving directly performed by a radio frequency circuit and an antenna. 
In a specific implementation process, the processor may be a processor specially configured to perform these methods, or a processor, for example, a general-purpose processor, configured to execute computer instructions in a memory to perform these methods. The memory may be a non-transitory memory such as a read-only memory (ROM). The memory and the processor may be integrated on a same chip, or may be separately disposed on different chips. A type of the memory and a manner of disposing the memory and the processor are not limited in embodiments of the present invention. An embodiment of this application further provides a chip system. The chip system includes a processor and an interface, and is configured to support a communication transmission device in implementing a function related to an access point or a station in any one of the foregoing method embodiments, for example, determining or processing at least one of data and information in the foregoing method. In a possible design, the chip system further includes a memory. The memory is configured to store information and data that are necessary for the communication apparatus. The chip system may include a chip, or may include a chip and another discrete component. According to an eighteenth aspect, an embodiment of this application provides a functional entity. The functional entity is configured to implement the foregoing NDPA frame transmission method. It should be further understood that “first”, “second”, “third”, “fourth”, and various numbers in this specification are merely used for differentiation for ease of description, and are not construed as a limitation on the scope of this application. It should be understood that the term “and/or” in this specification describes only an association relationship for describing associated objects and represents that three relationships may exist. 
For example, A and/or B may represent the following three cases: only A exists, both A and B exist, and only B exists. In addition, the character "/" in this specification generally indicates an "or" relationship between the associated objects. It should be understood that sequence numbers of the foregoing processes do not mean execution orders in various embodiments of this application. The execution orders of the processes should be determined based on functions and internal logic of the processes, and should not be construed as any limitation on the implementation processes of embodiments of this application. A person of ordinary skill in the art may be aware that units and algorithm steps in the examples described with reference to embodiments disclosed in this application can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on particular applications and design constraints of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of this application. It may be clearly understood by a person skilled in the art that, for the purpose of convenient and brief description, for a detailed working process of the foregoing system, apparatus, and unit, refer to a corresponding process in the foregoing method embodiments, and details are not described herein again. In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiments are merely examples. For example, division into the units is merely logical function division and may be other division during actual implementation.
For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electrical, mechanical, or other forms. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, and may be located in one location, or may be distributed on a plurality of network units. Some or all of the units may be selected depending on actual requirements to achieve the objectives of the solutions in the embodiments. In addition, functional units in embodiments of this application may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units are integrated into one unit. When the functions are implemented in a form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the conventional technology, or some of the technical solutions may be implemented in a form of a computer software product. The computer software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or some of the steps of the methods described in embodiments of this application. The foregoing storage medium includes: any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc. 
A sequence of the steps of the method in embodiments of this application may be adjusted, combined, and deleted based on an actual requirement. The modules in the apparatus in embodiments of this application may be combined, divided, and deleted based on an actual requirement. In conclusion, the foregoing embodiments are merely intended for describing the technical solutions of this application, but not for limiting this application. Although this application is described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications to the technical solutions described in the foregoing embodiments or equivalent replacements to some technical features thereof may still be made, without departing from the scope of the technical solutions of embodiments of this application.